https://en.wikipedia.org/wiki/Formal%20power%20series
In mathematics, a formal series is an infinite sum that is considered independently from any notion of convergence, and can be manipulated with the usual algebraic operations on series (addition, subtraction, multiplication, division, partial sums, etc.). A formal power series is a special kind of formal series, of the form a_0 + a_1 X + a_2 X^2 + ..., where the a_n, called coefficients, are numbers or, more generally, elements of some ring, and the X^n are formal powers of the symbol X, which is called an indeterminate or, commonly, a variable. Hence, formal power series can be viewed as a generalization of polynomials where the number of terms is allowed to be infinite; they differ from usual power series by the absence of convergence requirements, which implies that a formal power series may not represent a function of its variable. Formal power series are in one-to-one correspondence with their sequences of coefficients, but the two concepts must not be confused, since the operations that can be applied are different. A formal power series with coefficients in a ring R is called a formal power series over R. The formal power series over a ring R form a ring, commonly denoted R[[X]]. (It can be seen as the (X)-adic completion of the polynomial ring R[X], in the same way as the p-adic integers are the p-adic completion of the ring of the integers.) Formal power series in several indeterminates are defined similarly by replacing the powers of a single indeterminate by monomials in several indeterminates. Formal power series are widely used in combinatorics for representing sequences of integers as generating functions. In this context, a recurrence relation between the elements of a sequence may often be interpreted as a differential equation that the generating function satisfies. This allows using methods of complex analysis for combinatorial problems (see analytic combinatorics). Introduction A formal power series can be loosely thought of as an object that is like a polynomial, but with infinitely many terms. 
Alternatively, for those familiar with power series (or Taylor series), one may think of a formal power series as a power series in which we ignore questions of convergence by not assuming that the variable X denotes any numerical value (not even an unknown value). For example, consider the series 1 − 3X + 5X^2 − 7X^3 + 9X^4 − 11X^5 + .... If we studied this as a power series, its properties would include, for example, that its radius of convergence is 1 by the Cauchy–Hadamard theorem. However, as a formal power series, we may ignore this completely; all that is relevant is the sequence of coefficients [1, −3, 5, −7, 9, −11, ...]. In other words, a formal power series is an object that just records a sequence of coefficients. It is perfectly acceptable to consider a formal power series with the factorials [1, 1, 2, 6, 24, 120, 720, 5040, ...] as coefficients, even though the corresponding power series diverges for any nonzero value of X. Algebra on formal power series is carried out by simply pretending that the series are polynomials. For example, if A = a_0 + a_1 X + a_2 X^2 + ... and B = b_0 + b_1 X + b_2 X^2 + ..., then we add A and B term by term: A + B = (a_0 + b_0) + (a_1 + b_1)X + (a_2 + b_2)X^2 + .... We can multiply formal power series, again just by treating them as polynomials (see in particular Cauchy product): AB = a_0 b_0 + (a_0 b_1 + a_1 b_0)X + (a_0 b_2 + a_1 b_1 + a_2 b_0)X^2 + .... Notice that each coefficient in the product AB only depends on a finite number of coefficients of A and B. For example, the X^5 term is given by a_0 b_5 + a_1 b_4 + a_2 b_3 + a_3 b_2 + a_4 b_1 + a_5 b_0. For this reason, one may multiply formal power series without worrying about the usual questions of absolute, conditional and uniform convergence which arise in dealing with power series in the setting of analysis. Once we have defined multiplication for formal power series, we can define multiplicative inverses as follows. The multiplicative inverse of a formal power series A is a formal power series C such that AC = 1, provided that such a formal power series exists. It turns out that if A has a multiplicative inverse, it is unique, and we denote it by A^(−1). 
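The term-by-term addition and the Cauchy product described above are easy to experiment with on a computer. Below is a minimal sketch in Python, representing a formal power series by a list of its initial coefficients; the helper names ps_add and ps_mul are ours, not from any library.

```python
from itertools import zip_longest

def ps_add(a, b):
    """Term-by-term sum of two series given as coefficient lists."""
    return [x + y for x, y in zip_longest(a, b, fillvalue=0)]

def ps_mul(a, b, n):
    """First n coefficients of the Cauchy product: c_k = sum_{i+j=k} a_i * b_j."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)
                if i < len(a) and k - i < len(b))
            for k in range(n)]

# A = 1 - 3X + 5X^2 - 7X^3 + 9X^4 - 11X^5 (the coefficients from the text)
# and B with the factorial coefficients.
A = [1, -3, 5, -7, 9, -11]
B = [1, 1, 2, 6, 24, 120]
# X^5 coefficient: 1*120 - 3*24 + 5*6 - 7*2 + 9*1 - 11*1 = 62
print(ps_mul(A, B, 6)[5])
```

Note that ps_mul reads only the first k + 1 coefficients of each factor to produce c_k, mirroring the observation that each coefficient of AB depends on finitely many coefficients of A and B.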
Now we can define division of formal power series by defining B/A to be the product BA−1, provided that the inverse of A exists. For example, one can use the definition of multiplication above to verify the familiar formula An important operation on formal power series is coefficient extraction. In its most basic form, the coefficient extraction operator applied to a formal power series in one variable extracts the coefficient of the th power of the variable, so that and . Other examples include Similarly, many other operations that are carried out on polynomials can be extended to the formal power series setting, as explained below. The ring of formal power series If one considers the set of all formal power series in X with coefficients in a commutative ring R, the elements of this set collectively constitute another ring which is written and called the ring of formal power series in the variable X over R. Definition of the formal power series ring One can characterize abstractly as the completion of the polynomial ring equipped with a particular metric. This automatically gives the structure of a topological ring (and even of a complete metric space). But the general construction of a completion of a metric space is more involved than what is needed here, and would make formal power series seem more complicated than they are. It is possible to describe more explicitly, and define the ring structure and topological structure separately, as follows. Ring structure As a set, can be constructed as the set of all infinite sequences of elements of , indexed by the natural numbers (taken to include 0). Designating a sequence whose term at index is by , one defines addition of two such sequences by and multiplication by This type of product is called the Cauchy product of the two sequences of coefficients, and is a sort of discrete convolution. With these operations, becomes a commutative ring with zero element and multiplicative identity . 
The product is in fact the same one used to define the product of polynomials in one indeterminate, which suggests using a similar notation. One embeds into by sending any (constant) to the sequence and designates the sequence by ; then using the above definitions every sequence with only finitely many nonzero terms can be expressed in terms of these special elements as these are precisely the polynomials in . Given this, it is quite natural and convenient to designate a general sequence by the formal expression , even though the latter is not an expression formed by the operations of addition and multiplication defined above (from which only finite sums can be constructed). This notational convention allows reformulation of the above definitions as and which is quite convenient, but one must be aware of the distinction between formal summation (a mere convention) and actual addition. Topological structure Having stipulated conventionally that one would like to interpret the right hand side as a well-defined infinite summation. To that end, a notion of convergence in is defined and a topology on is constructed. There are several equivalent ways to define the desired topology. We may give the product topology, where each copy of is given the discrete topology. We may give the I-adic topology, where is the ideal generated by , which consists of all sequences whose first term is zero. The desired topology could also be derived from the following metric. The distance between distinct sequences is defined to be where is the smallest natural number such that ; the distance between two equal sequences is of course zero. Informally, two sequences and become closer and closer if and only if more and more of their terms agree exactly. Formally, the sequence of partial sums of some infinite summation converges if for every fixed power of the coefficient stabilizes: there is a point beyond which all further partial sums have the same coefficient. 
This is clearly the case for a summation of the form a_0 + a_1 X + a_2 X^2 + ..., regardless of the values a_n, since inclusion of the term for i = n gives the last (and in fact only) change to the coefficient of X^n. It is also obvious that the limit of the sequence of partial sums is equal to the formal power series so designated. This topological structure, together with the ring operations described above, forms a topological ring. This is called the ring of formal power series over R and is denoted by R[[X]]. The topology has the useful property that an infinite summation converges if and only if the sequence of its terms converges to 0, which just means that any fixed power of X occurs in only finitely many terms. The topological structure allows much more flexible usage of infinite summations. For instance the rule for multiplication can be restated simply as AB = Σ_{i,j} a_i b_j X^(i+j), since only finitely many terms on the right affect any fixed X^n. Infinite products are also defined by the topological structure; it can be seen that an infinite product converges if and only if the sequence of its factors converges to 1 (in which case the product is nonzero) or infinitely many factors have no constant term (in which case the product is zero). Alternative topologies The above topology is the finest topology for which Σ a_n X^n always converges as a summation to the formal power series designated by the same expression, and it often suffices to give a meaning to infinite sums and products, or other kinds of limits that one wishes to use to designate particular formal power series. It can however happen occasionally that one wishes to use a coarser topology, so that certain expressions become convergent that would otherwise diverge. This applies in particular when the base ring R already comes with a topology other than the discrete one, for instance if it is also a ring of formal power series. 
In the ring of formal power series , the topology of above construction only relates to the indeterminate , since the topology that was put on has been replaced by the discrete topology when defining the topology of the whole ring. So converges (and its sum can be written as ); however would be considered to be divergent, since every term affects the coefficient of . This asymmetry disappears if the power series ring in is given the product topology where each copy of is given its topology as a ring of formal power series rather than the discrete topology. With this topology, a sequence of elements of converges if the coefficient of each power of converges to a formal power series in , a weaker condition than stabilizing entirely. For instance, with this topology, in the second example given above, the coefficient of converges to , so the whole summation converges to . This way of defining the topology is in fact the standard one for repeated constructions of rings of formal power series, and gives the same topology as one would get by taking formal power series in all indeterminates at once. In the above example that would mean constructing and here a sequence converges if and only if the coefficient of every monomial stabilizes. This topology, which is also the -adic topology, where is the ideal generated by and , still enjoys the property that a summation converges if and only if its terms tend to 0. The same principle could be used to make other divergent limits converge. For instance in the limit does not exist, so in particular it does not converge to This is because for the coefficient of does not stabilize as . It does however converge in the usual topology of , and in fact to the coefficient of . Therefore, if one would give the product topology of where the topology of is the usual topology rather than the discrete one, then the above limit would converge to . 
This more permissive approach is not however the standard when considering formal power series, as it would lead to convergence considerations that are as subtle as they are in analysis, while the philosophy of formal power series is on the contrary to make convergence questions as trivial as they can possibly be. With this topology it would not be the case that a summation converges if and only if its terms tend to 0. Universal property The ring may be characterized by the following universal property. If is a commutative associative algebra over , if is an ideal of such that the -adic topology on is complete, and if is an element of , then there is a unique with the following properties: is an -algebra homomorphism is continuous . Operations on formal power series One can perform algebraic operations on power series to generate new power series. Besides the ring structure operations defined above, we have the following. Power series raised to powers For any natural number , the th power of a formal power series is defined recursively by If and are invertible in the ring of coefficients, one can prove where In the case of formal power series with complex coefficients, the complex powers are well defined for series with constant term equal to . In this case, can be defined either by composition with the binomial series , or by composition with the exponential and the logarithmic series, or as the solution of the differential equation (in terms of series) with constant term 1; the three definitions are equivalent. The rules of calculus and easily follow. Multiplicative inverse The series is invertible in if and only if its constant coefficient is invertible in . This condition is necessary, for the following reason: if we suppose that has an inverse then the constant term of is the constant term of the identity series, i.e. it is 1. 
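The binomial series mentioned above can be generated directly from the recurrence C(α, k+1) = C(α, k)·(α − k)/(k + 1), and raising a series with constant term 1 to a rational power can then be checked by multiplying the result back together. A small sketch with exact arithmetic (the helper names are ours):

```python
from fractions import Fraction

def binomial_series(alpha, n):
    """First n coefficients of (1+X)^alpha = sum_k C(alpha, k) X^k."""
    c, coeffs = Fraction(1), []
    for k in range(n):
        coeffs.append(c)
        c = c * (alpha - k) / (k + 1)   # C(alpha, k+1) = C(alpha, k)(alpha - k)/(k + 1)
    return coeffs

def ps_mul(a, b, n):
    """Truncated Cauchy product of two coefficient lists."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)
                if i < len(a) and k - i < len(b)) for k in range(n)]

s = binomial_series(Fraction(1, 2), 6)   # (1+X)^(1/2) = 1 + X/2 - X^2/8 + ...
print(ps_mul(s, s, 6))  # squaring recovers 1 + X: [1, 1, 0, 0, 0, 0]
```

The check works because the identity ((1+X)^(1/2))^2 = 1 + X holds coefficient by coefficient as formal series, so truncation introduces no error in the displayed range.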
This condition is also sufficient; we may compute the coefficients of the inverse series via the explicit recursive formula An important special case is that the geometric series formula is valid in : If is a field, then a series is invertible if and only if the constant term is non-zero, i.e. if and only if the series is not divisible by . This means that is a discrete valuation ring with uniformizing parameter . Division The computation of a quotient assuming the denominator is invertible (that is, is invertible in the ring of scalars), can be performed as a product and the inverse of , or directly equating the coefficients in : Extracting coefficients The coefficient extraction operator applied to a formal power series in X is written and extracts the coefficient of Xm, so that Composition Given two formal power series such that one may form the composition where the coefficients cn are determined by "expanding out" the powers of f(X): Here the sum is extended over all (k, j) with and with Since one must have and for every This implies that the above sum is finite and that the coefficient is the coefficient of in the polynomial , where and are the polynomials obtained by truncating the series at that is, by removing all terms involving a power of higher than A more explicit description of these coefficients is provided by Faà di Bruno's formula, at least in the case where the coefficient ring is a field of characteristic 0. Composition is only valid when has no constant term, so that each depends on only a finite number of coefficients of and . In other words, the series for converges in the topology of . Example Assume that the ring has characteristic 0 and the nonzero integers are invertible in . If one denotes by the formal power series then the equality makes perfect sense as a formal power series, since the constant coefficient of is zero. 
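The recursive formula for the inverse follows from equating coefficients in AC = 1: the constant term forces c_0 = 1/a_0, and for k ≥ 1 the X^k coefficient forces c_k = −(1/a_0) Σ_{i=1..k} a_i c_{k−i}. A minimal sketch, assuming the coefficients lie in a field (modeled here with Python Fractions; the helper name is ours):

```python
from fractions import Fraction

def ps_inverse(a, n):
    """First n coefficients of 1/A for A = sum a_k X^k with a_0 invertible:
    c_0 = 1/a_0, and c_k = -(1/a_0) * sum_{i=1..k} a_i * c_{k-i}."""
    c = [Fraction(1) / a[0]]
    for k in range(1, n):
        s = sum(a[i] * c[k - i] for i in range(1, k + 1) if i < len(a))
        c.append(-c[0] * s)
    return c

# Geometric series: the inverse of 1 - X is 1 + X + X^2 + ... (all coefficients 1)
print(ps_inverse([1, -1], 6))
```

Division B/A can then be computed as the product of B with ps_inverse(A, n), in line with the definition in the text.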
Composition inverse Whenever a formal series has f0 = 0 and f1 being an invertible element of R, there exists a series that is the composition inverse of , meaning that composing with gives the series representing the identity function . The coefficients of may be found recursively by using the above formula for the coefficients of a composition, equating them with those of the composition identity X (that is 1 at degree 1 and 0 at every degree greater than 1). In the case when the coefficient ring is a field of characteristic 0, the Lagrange inversion formula (discussed below) provides a powerful tool to compute the coefficients of g, as well as the coefficients of the (multiplicative) powers of g. Formal differentiation Given a formal power series we define its formal derivative, denoted Df or f ′, by The symbol D is called the formal differentiation operator. This definition simply mimics term-by-term differentiation of a polynomial. This operation is R-linear: for any a, b in R and any f, g in Additionally, the formal derivative has many of the properties of the usual derivative of calculus. For example, the product rule is valid: and the chain rule works as well: whenever the appropriate compositions of series are defined (see above under composition of series). Thus, in these respects formal power series behave like Taylor series. Indeed, for the f defined above, we find that where Dk denotes the kth formal derivative (that is, the result of formally differentiating k times). Formal antidifferentiation If is a ring with characteristic zero and the nonzero integers are invertible in , then given a formal power series we define its formal antiderivative or formal indefinite integral by for any constant . This operation is R-linear: for any a, b in R and any f, g in Additionally, the formal antiderivative has many of the properties of the usual antiderivative of calculus. 
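The formal derivative and the product rule stated above can be checked mechanically on coefficient lists; differentiation sends the coefficient a_n to n·a_n at index n − 1. A short sketch (helper names ours):

```python
def ps_derivative(f):
    """Formal derivative: D(sum a_n X^n) = sum n*a_n X^(n-1), term by term."""
    return [n * f[n] for n in range(1, len(f))]

def ps_mul(a, b, n):
    """Truncated Cauchy product of two coefficient lists."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)
                if i < len(a) and k - i < len(b)) for k in range(n)]

# Check the product rule D(fg) = f'g + fg' on two sample polynomials.
f = [1, 2, 3]        # 1 + 2X + 3X^2
g = [4, 0, 5]        # 4 + 5X^2
lhs = ps_derivative(ps_mul(f, g, 5))
rhs = [x + y for x, y in zip(ps_mul(ps_derivative(f), g, 4),
                             ps_mul(f, ps_derivative(g), 4))]
print(lhs == rhs)    # True
```

Since both sides are computed purely symbolically, the agreement is exact, not approximate, which is exactly the sense in which formal power series behave like Taylor series.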
For example, the formal antiderivative is the right inverse of the formal derivative: for any . Properties Algebraic properties of the formal power series ring is an associative algebra over which contains the ring of polynomials over ; the polynomials correspond to the sequences which end in zeros. The Jacobson radical of is the ideal generated by and the Jacobson radical of ; this is implied by the element invertibility criterion discussed above. The maximal ideals of all arise from those in in the following manner: an ideal of is maximal if and only if is a maximal ideal of and is generated as an ideal by and . Several algebraic properties of are inherited by : if is a local ring, then so is (with the set of non units the unique maximal ideal), if is Noetherian, then so is (a version of the Hilbert basis theorem), if is an integral domain, then so is , and if is a field, then is a discrete valuation ring. Topological properties of the formal power series ring The metric space is complete. The ring is compact if and only if R is finite. This follows from Tychonoff's theorem and the characterisation of the topology on as a product topology. Weierstrass preparation The ring of formal power series with coefficients in a complete local ring satisfies the Weierstrass preparation theorem. Applications Formal power series can be used to solve recurrences occurring in number theory and combinatorics. For an example involving finding a closed form expression for the Fibonacci numbers, see the article on Examples of generating functions. One can use formal power series to prove several relations familiar from analysis in a purely algebraic setting. Consider for instance the following elements of : Then one can show that The last one being valid in the ring For K a field, the ring is often used as the "standard, most general" complete local ring over K in algebra. 
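As an illustration of the generating-function applications mentioned above, the Fibonacci numbers can be read off from the series X/(1 − X − X^2) using nothing more than the formal inverse. A sketch with exact arithmetic (helper names ours):

```python
from fractions import Fraction

def ps_inverse(a, n):
    """Coefficients of 1/A via c_0 = 1/a_0, c_k = -(1/a_0) sum a_i c_{k-i}."""
    c = [Fraction(1) / a[0]]
    for k in range(1, n):
        c.append(-c[0] * sum(a[i] * c[k - i]
                             for i in range(1, k + 1) if i < len(a)))
    return c

# Generating function of the Fibonacci numbers: X / (1 - X - X^2).
inv = ps_inverse([1, -1, -1], 9)       # 1/(1 - X - X^2)
fib = [0] + [int(c) for c in inv]      # multiplying by X shifts coefficients up
print(fib)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

The inversion recursion applied to 1 − X − X^2 is literally the Fibonacci recurrence c_k = c_{k−1} + c_{k−2}, which is why the closed-form generating function reproduces the sequence.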
Interpreting formal power series as functions In mathematical analysis, every convergent power series defines a function with values in the real or complex numbers. Formal power series over certain special rings can also be interpreted as functions, but one has to be careful with the domain and codomain. Let and suppose is a commutative associative algebra over , is an ideal in such that the I-adic topology on is complete, and is an element of . Define: This series is guaranteed to converge in given the above assumptions on . Furthermore, we have and Unlike in the case of bona fide functions, these formulas are not definitions but have to be proved. Since the topology on is the -adic topology and is complete, we can in particular apply power series to other power series, provided that the arguments don't have constant coefficients (so that they belong to the ideal ): , and are all well defined for any formal power series With this formalism, we can give an explicit formula for the multiplicative inverse of a power series whose constant coefficient is invertible in : If the formal power series with is given implicitly by the equation where is a known power series with , then the coefficients of can be explicitly computed using the Lagrange inversion formula. Generalizations Formal Laurent series The formal Laurent series over a ring are defined in a similar way to a formal power series, except that we also allow finitely many terms of negative degree. That is, they are the series that can be written as for some integer , so that there are only finitely many negative with . (This is different from the classical Laurent series of complex analysis.) For a non-zero formal Laurent series, the minimal integer such that is called the order of and is denoted (The order ord(0) of the zero series is .) Multiplication of such series can be defined. 
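The claim that power series may be applied to other power series whose arguments have no constant term can be tested concretely: composing the exponential series with the series for log(1 + X) must return 1 + X exactly, coefficient by coefficient. A sketch (helper names ours; composition is computed by accumulating truncated powers of the inner series):

```python
from fractions import Fraction
from math import factorial

def ps_mul(a, b, n):
    """Truncated Cauchy product of two coefficient lists."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)
                if i < len(a) and k - i < len(b)) for k in range(n)]

def ps_compose(f, g, n):
    """f(g(X)) truncated to n coefficients; requires g[0] == 0 so that
    each coefficient of the result is a finite sum."""
    assert g[0] == 0
    result = [Fraction(0)] * n
    power = [Fraction(1)] + [Fraction(0)] * (n - 1)   # g^0 = 1
    for k in range(min(len(f), n)):
        result = [r + f[k] * p for r, p in zip(result, power)]
        power = ps_mul(power, g, n)
    return result

N = 8
exp_series = [Fraction(1, factorial(k)) for k in range(N)]                    # exp(X)
log1p = [Fraction(0)] + [Fraction((-1) ** (k + 1), k) for k in range(1, N)]   # log(1+X)
print(ps_compose(exp_series, log1p, N))   # 1 + X: [1, 1, 0, 0, ...]
```

Because g = log(1 + X) has zero constant term, the power g^k contributes nothing below X^k, so each output coefficient is determined after finitely many loop iterations, exactly as the text requires.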
Indeed, similarly to the definition for formal power series, the coefficient of of two series with respective sequences of coefficients and is This sum has only finitely many nonzero terms because of the assumed vanishing of coefficients at sufficiently negative indices. The formal Laurent series form the ring of formal Laurent series over , denoted by . It is equal to the localization of the ring of formal power series with respect to the set of positive powers of . If is a field, then is in fact a field, which may alternatively be obtained as the field of fractions of the integral domain . As with , the ring of formal Laurent series may be endowed with the structure of a topological ring by introducing the metric (In particular, implies that One may define formal differentiation for formal Laurent series in the natural (term-by-term) way. Precisely, the formal derivative of the formal Laurent series above is which is again a formal Laurent series. If is a non-constant formal Laurent series and with coefficients in a field of characteristic 0, then one has However, in general this is not the case since the factor for the lowest order term could be equal to 0 in . Formal residue Assume that is a field of characteristic 0. Then the map defined above is a -derivation that satisfies The latter shows that the coefficient of in is of particular interest; it is called formal residue of and denoted . The map is -linear, and by the above observation one has an exact sequence Some rules of calculus. As a quite direct consequence of the above definition, and of the rules of formal derivation, one has, for any if Property (i) is part of the exact sequence above. Property (ii) follows from (i) as applied to . Property (iii): any can be written in the form , with and : then implies is invertible in whence Property (iv): Since we can write with . Consequently, and (iv) follows from (i) and (iii). Property (v) is clear from the definition. 
The Lagrange inversion formula As mentioned above, any formal series f with f0 = 0 and f1 ≠ 0 has a composition inverse g. The following relation between the coefficients of g^n and f^(−k) holds (the Lagrange inversion formula): In particular, for n = 1 and all k ≥ 1, Since the proof of the Lagrange inversion formula is a very short computation, it is worth reporting one residue-based proof here (a number of different proofs exist, using, e.g., Cauchy's coefficient formula for holomorphic functions, tree-counting arguments, or induction). Applying the rules of calculus above, crucially Rule (iv), yields the result. Generalizations. One may observe that the above computation can be repeated plainly in more general settings than K((X)): a generalization of the Lagrange inversion formula is already available working in modules of the form X^α K((X)), where α is a complex exponent. As a consequence, if f and g are as above, we can relate the complex powers of f / X and g / X: precisely, if α and β are non-zero complex numbers with negative integer sum, then the analogous coefficient relation holds. For instance, this way one finds the power series for complex powers of the Lambert function. Power series in several variables Formal power series in any number of indeterminates (even infinitely many) can be defined. If I is an index set and XI is the set of indeterminates Xi for i ∈ I, then a monomial Xα is any finite product of elements of XI (repetitions allowed); a formal power series in XI with coefficients in a ring R is determined by any mapping from the set of monomials Xα to a corresponding coefficient cα, and is denoted Σα cα Xα. The set of all such formal power series is denoted R[[XI]], and it is given a ring structure by defining addition coefficientwise and multiplication by the Cauchy product. Topology The topology on R[[XI]] is such that a sequence of its elements converges only if for each monomial Xα the corresponding coefficient stabilizes. If I is finite, then this is the J-adic topology, where J is the ideal of R[[XI]] generated by all the indeterminates in XI. This does not hold if I is infinite. 
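The Lagrange inversion formula is easy to verify on a concrete example. For f(X) = X − X^2, the composition inverse g satisfies g − g^2 = X and its coefficients are the Catalan numbers; since X/f(X) = 1/(1 − X), the relation n·[X^n]g = [X^(n−1)](X/f)^n reduces to the binomial identity [X^(n−1)](1 − X)^(−n) = C(2n − 2, n − 1). A sketch with exact arithmetic (helper names ours; g is computed by the fixed-point iteration g ← X + g^2):

```python
from fractions import Fraction
from math import comb

def ps_mul(a, b, n):
    """Truncated Cauchy product of two coefficient lists."""
    return [sum(a[i] * b[k - i] for i in range(k + 1)
                if i < len(a) and k - i < len(b)) for k in range(n)]

# f(X) = X - X^2 has f_0 = 0 and f_1 = 1 invertible, so a composition
# inverse g exists; f(g) = X means g - g^2 = X, so iterate g <- X + g^2
# (each pass fixes at least one more coefficient).
N = 8
g = [Fraction(0)] * N
for _ in range(N):
    g2 = ps_mul(g, g, N)
    g = [g2[0], 1 + g2[1]] + g2[2:]

# Lagrange inversion check: n * [X^n] g = [X^(n-1)] (X/f)^n = C(2n-2, n-1).
for n in range(1, N):
    assert g[n] == Fraction(comb(2 * n - 2, n - 1), n)

print([int(c) for c in g])  # [0, 1, 1, 2, 5, 14, 42, 132] -- the Catalan numbers
```

This is the standard tree-counting instance of the formula: [X^n]g = C(2n−2, n−1)/n is the (n−1)-st Catalan number.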
For example, if then the sequence with does not converge with respect to any J-adic topology on R, but clearly for each monomial the corresponding coefficient stabilizes. As remarked above, the topology on a repeated formal power series ring like is usually chosen in such a way that it becomes isomorphic as a topological ring to Operations All of the operations defined for series in one variable may be extended to the several variables case. A series is invertible if and only if its constant term is invertible in R. The composition f(g(X)) of two series f and g is defined if f is a series in a single indeterminate, and the constant term of g is zero. For a series f in several indeterminates a form of "composition" can similarly be defined, with as many separate series in the place of g as there are indeterminates. In the case of the formal derivative, there are now separate partial derivative operators, which differentiate with respect to each of the indeterminates. They all commute with each other. Universal property In the several variables case, the universal property characterizing becomes the following. If S is a commutative associative algebra over R, if I is an ideal of S such that the I-adic topology on S is complete, and if x1, ..., xr are elements of I, then there is a unique map with the following properties: Φ is an R-algebra homomorphism Φ is continuous Φ(Xi) = xi for i = 1, ..., r. Non-commuting variables The several variable case can be further generalised by taking non-commuting variables Xi for i ∈ I, where I is an index set and then a monomial Xα is any word in the XI; a formal power series in XI with coefficients in a ring R is determined by any mapping from the set of monomials Xα to a corresponding coefficient cα, and is denoted . The set of all such formal power series is denoted R«XI», and it is given a ring structure by defining addition pointwise and multiplication by where · denotes concatenation of words. 
These formal power series over R form the Magnus ring over R. On a semiring Given an alphabet Σ and a semiring S, the formal power series over S supported on the language Σ* are denoted by S⟨⟨Σ*⟩⟩. This set consists of all mappings r : Σ* → S, where Σ* is the free monoid generated by the non-empty set Σ. The elements of S⟨⟨Σ*⟩⟩ can be written as formal sums r = Σ_w (r, w) w, where (r, w) denotes the value of r at the word w ∈ Σ*. The elements (r, w) are called the coefficients of r. The support of r is the set supp(r) = {w : (r, w) ≠ 0}. A series where every coefficient is either 0 or 1 is called the characteristic series of its support. The subset of S⟨⟨Σ*⟩⟩ consisting of all series with a finite support is denoted by S⟨Σ*⟩; its elements are called polynomials. For r, s ∈ S⟨⟨Σ*⟩⟩ and a ∈ S, the sum r + s is defined by (r + s, w) = (r, w) + (s, w). The (Cauchy) product is defined by (r · s, w) = Σ_{w = uv} (r, u)(s, v). The Hadamard product is defined by (r ⊙ s, w) = (r, w)(s, w). The products by a scalar are defined by (ar, w) = a(r, w) and (ra, w) = (r, w)a, respectively. With these operations, S⟨⟨Σ*⟩⟩ and S⟨Σ*⟩ are semirings, where ε is the empty word in Σ*. These formal power series are used to model the behavior of weighted automata in theoretical computer science, when the coefficients of a series are taken to be the weight of a path with label w in the automaton. Replacing the index set by an ordered abelian group Suppose G is an ordered abelian group, meaning an abelian group with a total ordering respecting the group's addition, so that a < b if and only if a + c < b + c for all c. Let I be a well-ordered subset of G, meaning I contains no infinite descending chain. Consider the set consisting of all sums Σ_{i ∈ I} a_i X^i for all such I, with the a_i in a commutative ring R, where we assume that for any index set, if all of the a_i are zero then the sum is zero. Then R((G)) is the ring of formal power series on G; because of the condition that the indexing set be well-ordered the product is well-defined, and we of course assume that two elements which differ by zero are the same. Sometimes the notation [[R^G]] is used to denote R((G)). Various properties of R transfer to R((G)). If R is a field, then so is R((G)). 
If R is an ordered field, we can order R((G)) by setting any element to have the same sign as its leading coefficient, defined as the coefficient at the least element of the index set I associated to a non-zero coefficient. Finally, if G is a divisible group and R is a real closed field, then R((G)) is a real closed field, and if R is algebraically closed, then so is R((G)). This theory is due to Hans Hahn, who also showed that one obtains subfields when the number of (non-zero) terms is bounded by some fixed infinite cardinality. Examples and related topics Bell series are used to study the properties of multiplicative arithmetic functions. Formal groups are used to define an abstract group law using formal power series. Puiseux series are an extension of formal Laurent series, allowing fractional exponents. Rational series. See also Ring of restricted power series.
https://en.wikipedia.org/wiki/International%20Year%20of%20Biodiversity
The International Year of Biodiversity (IYB) was a year-long celebration of biological diversity and its importance, taking place internationally in 2010. Coinciding with the date of the 2010 Biodiversity Target, the year was declared by the 61st session of the United Nations General Assembly in 2006. It was meant to help raise awareness of the importance of biodiversity through activities and events, to influence decision makers, and "to elevate biological diversity nearer to the top of the political agenda". Background The United Nations General Assembly declared 2010 as the International Year of Biodiversity (Resolution 61/203). This year coincided with the 2010 Biodiversity Target adopted by the Parties to the Convention on Biological Diversity and by Heads of State and government at the World Summit on Sustainable Development in Johannesburg in 2002. The Secretariat of the Convention on Biological Diversity (CBD), based in Montreal, Canada, coordinated the International Year of Biodiversity campaign. Established at the Earth Summit in Rio de Janeiro in 1992, the Convention on Biological Diversity is an international treaty for the conservation and sustainable use of biodiversity and the equitable sharing of the benefits of biodiversity. The CBD has near-universal participation, with 193 Parties. Main goals in UN view The main goals of the International Year of Biodiversity were to: Enhance public awareness of the importance of conserving biodiversity and of the underlying threats to biodiversity; Raise awareness of the accomplishments in saving biodiversity that had already been realized by communities and governments; Promote innovative solutions to reduce the threats to biodiversity; Encourage individuals, organizations and governments to take immediate action to halt biodiversity loss; Start a dialogue among stakeholders on what to do in the post-2010 period. Slogan Biodiversity is life. Biodiversity is our life. 
See also United Nations Decade on Biodiversity (2011–20) International Year of Forests (2011) Convention on Biological Diversity COP10 Nagoya Protocol (2010) 2010 in science References The information above, for the most part, is based on the official websites of the Convention on Biological Diversity and of the International Year of Biodiversity. External links International Year of Biodiversity Convention on Biological Diversity Internet sources for the International Year of Biodiversity (by vifabio) United Nations observances Biodiversity Environmental science 2010 in international relations 2010 in science 2010 in the environment Convention on Biological Diversity
International Year of Biodiversity
Biology,Environmental_science
448
22,385,895
https://en.wikipedia.org/wiki/Co-pay%20card
The co-pay card appeared in 2005 as a means by which pharmaceutical marketers could, by offering an instantaneous rebate to patients, counter challenges to prescription pharmaceuticals, including generic competition, lack of patient compliance and persistence, and limited access to the physician population. As of January 2017, in the United States, coupon cards for more than 600 prescription medications are available. Based on the National Council for Prescription Drug Programs standard, all pharmacy software systems contain information fields for both a primary and a secondary insurer to pay for a patient's prescription. Process Typically, a patient receives a co-pay card from their physician along with a prescription for the medicine. The patient takes the card and prescription to a pharmacy, where the pharmacist enters processing information into the pharmacy management system to submit a claim. If the patient has insurance, the pharmacist keys in the patient's insurance number in the primary field and an identifier from the co-pay card in the secondary insurer field. The pharmacy benefit manager instantly returns coverage data, relaying the patient's out-of-pocket cost, or co-pay, to the secondary insurer's benefit manager, who then provides a discount accordingly. An example: A brand offers a co-pay card giving patients the opportunity to save up to $20 off each prescription fill. A patient receives the co-pay card and visits their pharmacy. The patient provides their insurance card and co-pay card to the pharmacist. The pharmacist enters information from both cards into the pharmacy management system. The insurance benefit manager recognizes the drug as a TIER 3 brand for the patient and relays a patient co-pay of $30.00. The co-pay card benefit manager recognizes the $30.00 and covers $20.00 of the co-pay, leaving $10.00 for the patient to pay out of pocket. 
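The adjudication arithmetic in the worked example can be sketched in a few lines. This is a simplified model; the function name and the flat maximum-benefit rule are illustrative assumptions, not part of any NCPDP specification:

```python
def patient_out_of_pocket(copay, card_max_benefit):
    """Return what the patient pays after the co-pay card is applied.

    Simplified model: the card (secondary insurer) covers the co-pay
    relayed by the primary benefit manager, up to its maximum benefit.
    """
    card_pays = min(copay, card_max_benefit)
    return round(copay - card_pays, 2)

# The worked example: a $30.00 TIER 3 co-pay and a "save up to $20" card
# leave $10.00 for the patient to pay out of pocket.
assert patient_out_of_pocket(30.00, 20.00) == 10.00
```

A cash-paying patient with a $15.00 bill would pay nothing, since the card's $20.00 benefit covers the whole amount.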
Another patient without prescription insurance coverage follows the same process. The co-pay card takes the primary insurer position, where it recognizes the claim as that of a cash-paying patient and applies a $20.00 discount to the patient's out-of-pocket costs. Variations In most cases the service provider of the co-pay card program holds a reimbursement account for the pharmaceutical marketing client, which is used to remit to pharmacies the cost reductions from co-pay card programs. The co-pay service provider remits to pharmacies every 14 to 28 days and deducts these remittances from this account. Some providers have attempted a variation on the original co-pay card by going to a magnetic-strip swipe process, by which the card runs through both the pharmacy software and financial software (e.g. Visa/MasterCard and debit networks). Debit cards are another reimbursement method for co-pays in pharmacies because they offer real-time reimbursement of co-pay claims. However, with new prompt-pay regulations for adjudicators, required for Medicare Part D and implemented by most PBMs, few pharmacies wait more than one week for reimbursement. Pharmacies used to prefer real-time debit payments because they did not require the pharmacies to carry the "float" of the 14-to-28-day payment cycles. This is no longer true. References Customer relationship management software Pharmaceutical industry
Co-pay card
Chemistry,Biology
706
438,602
https://en.wikipedia.org/wiki/Rossby%20wave
Rossby waves, also known as planetary waves, are a type of inertial wave naturally occurring in rotating fluids. They were first identified by Sweden-born American meteorologist Carl-Gustaf Arvid Rossby in the Earth's atmosphere in 1939. They are observed in the atmospheres and oceans of Earth and other planets, owing to the rotation of Earth or of the planet involved. Atmospheric Rossby waves on Earth are giant meanders in high-altitude winds that have a major influence on weather. These waves are associated with pressure systems and the jet stream (especially around the polar vortices). Oceanic Rossby waves move along the thermocline: the boundary between the warm upper layer and the cold deeper part of the ocean. Rossby wave types Atmospheric waves Atmospheric Rossby waves result from the conservation of potential vorticity and are influenced by the Coriolis force and pressure gradient. The image on the left sketches fundamental principles of the wave, e.g., its restoring force and westward phase velocity. The rotation causes fluids to turn to the right as they move in the northern hemisphere and to the left in the southern hemisphere. For example, a fluid that moves from the equator toward the north pole will deviate toward the east; a fluid moving toward the equator from the north will deviate toward the west. These deviations are caused by the Coriolis force and conservation of potential vorticity, which leads to changes of relative vorticity. This is analogous to conservation of angular momentum in mechanics. In planetary atmospheres, including Earth, Rossby waves are due to the variation in the Coriolis effect with latitude. A terrestrial Rossby wave can be identified by its phase velocity, marked by its wave crest, which always has a westward component. However, the collected set of Rossby waves may appear to move in either direction with what is known as its group velocity. 
In general, shorter waves have an eastward group velocity and long waves a westward group velocity. The terms "barotropic" and "baroclinic" are used to distinguish the vertical structure of Rossby waves. Barotropic Rossby waves do not vary in the vertical, and have the fastest propagation speeds. The baroclinic wave modes, on the other hand, do vary in the vertical. They are also slower, with speeds of only a few centimeters per second or less. Most investigations of Rossby waves have been done on those in Earth's atmosphere. Rossby waves in the Earth's atmosphere are easy to observe as (usually 4–6) large-scale meanders of the jet stream. When these deviations become very pronounced, masses of cold or warm air detach, and become low-strength cyclones and anticyclones, respectively, and are responsible for day-to-day weather patterns at mid-latitudes. The action of Rossby waves partially explains why eastern continental edges in the Northern Hemisphere, such as the Northeast United States and Eastern Canada, are colder than Western Europe at the same latitudes, and why the Mediterranean is dry during summer (Rodwell–Hoskins mechanism). Poleward-propagating atmospheric waves Deep convection (heat transfer) to the troposphere is enhanced over very warm sea surfaces in the tropics, such as during El Niño events. This tropical forcing generates atmospheric Rossby waves that have a poleward and eastward migration. Poleward-propagating Rossby waves explain many of the observed statistical connections between low- and high-latitude climates. One such phenomenon is sudden stratospheric warming. Poleward-propagating Rossby waves are an important and unambiguous part of the variability in the Northern Hemisphere, as expressed in the Pacific North America pattern. Similar mechanisms apply in the Southern Hemisphere and partly explain the strong variability in the Amundsen Sea region of Antarctica. 
In 2011, a Nature Geoscience study using general circulation models linked Pacific Rossby waves generated by increasing central tropical Pacific temperatures to warming of the Amundsen Sea region, leading to winter and spring continental warming of Ellsworth Land and Marie Byrd Land in West Antarctica via an increase in advection. Rossby waves on other planets Atmospheric Rossby waves, like Kelvin waves, can occur on any rotating planet with an atmosphere. The Y-shaped cloud feature on Venus is attributed to Kelvin and Rossby waves. Oceanic waves Oceanic Rossby waves are large-scale waves within an ocean basin. They have a low amplitude, in the order of centimetres (at the surface) to metres (at the thermocline), compared with atmospheric Rossby waves which are in the order of hundreds of kilometres. They may take months to cross an ocean basin. They gain momentum from wind stress at the ocean surface layer and are thought to communicate climatic changes due to variability in forcing, due to both the wind and buoyancy. Off-equatorial Rossby waves are believed to propagate through eastward-propagating Kelvin waves that upwell against Eastern Boundary Currents, while equatorial Kelvin waves are believed to derive some of their energy from the reflection of Rossby waves against Western Boundary Currents. Both barotropic and baroclinic waves cause variations of the sea surface height, although the length of the waves made them difficult to detect until the advent of satellite altimetry. Satellite observations have confirmed the existence of oceanic Rossby waves. Baroclinic waves also generate significant displacements of the oceanic thermocline, often of tens of meters. Satellite observations have revealed the stately progression of Rossby waves across all the ocean basins, particularly at low- and mid-latitudes. Due to the beta effect, transit times of Rossby waves increase with latitude. 
In a basin like the Pacific, waves travelling at the equator may take months, while closer to the poles transit may take decades. Rossby waves have been suggested as an important mechanism to account for the heating of the ocean on Europa, a moon of Jupiter. Waves in astrophysical discs Rossby wave instabilities are also thought to be found in astrophysical discs, for example, around newly forming stars. Amplification of Rossby waves It has been proposed that a number of regional weather extremes in the Northern Hemisphere associated with blocked atmospheric circulation patterns may have been caused by quasiresonant amplification of Rossby waves. Examples include the 2013 European floods, the 2012 China floods, the 2010 Russian heat wave, the 2010 Pakistan floods and the 2003 European heat wave. Even taking global warming into account, the 2003 heat wave would have been highly unlikely without such a mechanism. Normally freely travelling synoptic-scale Rossby waves and quasistationary planetary-scale Rossby waves exist in the mid-latitudes with only weak interactions. The hypothesis, proposed by Vladimir Petoukhov, Stefan Rahmstorf, Stefan Petri, and Hans Joachim Schellnhuber, is that under some circumstances these waves interact to produce the static pattern. For this to happen, they suggest, the zonal (east-west) wave number of both types of wave should be in the range 6–8, the synoptic waves should be arrested within the troposphere (so that energy does not escape to the stratosphere) and mid-latitude waveguides should trap the quasistationary components of the synoptic waves. In this case the planetary-scale waves may respond unusually strongly to orography and thermal sources and sinks because of "quasiresonance". A 2017 study by Mann, Rahmstorf, et al. connected the phenomenon of anthropogenic Arctic amplification to planetary wave resonance and extreme weather events. 
Mathematical definitions Free barotropic Rossby waves under a zonal flow with linearized vorticity equation To start with, a zonal mean flow U can be considered to be perturbed, where U is constant in time and space. Let (u, v) be the total horizontal wind field, where u and v are the components of the wind in the x- and y-directions, respectively. The total wind field can be written as the mean flow U with small superimposed perturbations u′ and v′: u = U + u′, v = v′. The perturbation is assumed to be much smaller than the mean zonal flow (U ≫ |u′|). The relative vorticity ζ and the perturbations u′ and v′ can be written in terms of the stream function ψ (assuming non-divergent flow, for which the stream function completely describes the flow): u′ = −∂ψ/∂y, v′ = ∂ψ/∂x, ζ = ∇²ψ. Considering a parcel of air that has no relative vorticity before perturbation (uniform U has no vorticity) but with planetary vorticity f as a function of the latitude, perturbation will lead to a slight change of latitude, so the perturbed relative vorticity must change in order to conserve potential vorticity. Also the above approximation U ≫ |u′| ensures that the perturbation flow does not advect relative vorticity. Linearizing the barotropic vorticity equation about the mean flow gives (∂/∂t + U ∂/∂x) ζ + β v′ = 0, with β = df/dy. Plug in the definition of stream function to obtain (∂/∂t + U ∂/∂x) ∇²ψ + β ∂ψ/∂x = 0. Using the method of undetermined coefficients one can consider a traveling wave solution with zonal and meridional wavenumbers k and ℓ, respectively, and frequency ω: ψ = ψ₀ e^{i(kx + ℓy − ωt)}. This yields the dispersion relation: ω = Uk − βk/(k² + ℓ²). The zonal (x-direction) phase speed and group velocity of the Rossby wave are then given by c = ω/k = U − β/(k² + ℓ²) and c_g = ∂ω/∂k = U + β(k² − ℓ²)/(k² + ℓ²)², where c is the phase speed, c_g is the group speed, U is the mean westerly flow, β is the Rossby parameter, k is the zonal wavenumber, and ℓ is the meridional wavenumber. It is noted that the zonal phase speed of Rossby waves is always westward (traveling east to west) relative to mean flow U, but the zonal group speed of Rossby waves can be eastward or westward depending on wavenumber. 
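The barotropic dispersion relation ω = Uk − βk/(k² + ℓ²) can be evaluated numerically. A minimal sketch in pure Python; the mean flow U = 10 m/s, latitude 45° N, and ~6000 km wavelength are illustrative assumptions:

```python
import math

def rossby_speeds(k, l, U=10.0, lat_deg=45.0):
    """Zonal phase and group speeds (m/s) of a barotropic Rossby wave,
    from c = U - beta/(k^2 + l^2) and
    cg = U + beta*(k^2 - l^2)/(k^2 + l^2)^2."""
    Omega = 7.2921e-5                 # Earth's rotation rate (rad/s)
    a = 6.371e6                       # mean Earth radius (m)
    beta = 2 * Omega * math.cos(math.radians(lat_deg)) / a
    K2 = k ** 2 + l ** 2
    c = U - beta / K2                 # phase speed: westward relative to U
    cg = U + beta * (k ** 2 - l ** 2) / K2 ** 2
    return c, cg

# A wave with ~6000 km wavelength in both directions at 45 degrees N:
k = l = 2 * math.pi / 6.0e6
c, cg = rossby_speeds(k, l)
assert c < 10.0   # phase speed is always westward relative to the mean flow
```

For k = ℓ the group speed reduces to the mean flow U; making k larger (shorter zonal waves) pushes the group speed eastward and making it smaller pushes it westward, consistent with the statement above about short and long waves.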
Rossby parameter The Rossby parameter is defined as the rate of change of the Coriolis frequency f = 2ω sin φ along the meridional direction: β = ∂f/∂y = (2ω cos φ)/a, where φ is the latitude, ω is the angular speed of the Earth's rotation, and a is the mean radius of the Earth. If β = 0, there will be no Rossby waves; Rossby waves owe their origin to the gradient of the tangential speed of the planetary rotation (planetary vorticity). A "cylinder" planet has no Rossby waves. It also means that at the equator of any rotating, sphere-like planet, including Earth, one will still have Rossby waves, despite the fact that f = 0 there, because β reaches its maximum at the equator. These are known as Equatorial Rossby waves. See also Atmospheric wave Equatorial wave Equatorial Rossby wave – mathematical treatment Harmonic Kelvin wave Polar vortex Rossby whistle References Bibliography External links Description of Rossby Waves from the American Meteorological Society An introduction to oceanic Rossby waves and their study with satellite data Rossby waves and extreme weather (Video) Physical oceanography Atmospheric dynamics Fluid mechanics Waves
Rossby wave
Physics,Chemistry,Engineering
2,213
33,099,208
https://en.wikipedia.org/wiki/Plane-wave%20expansion
In physics, the plane-wave expansion expresses a plane wave as a linear combination of spherical waves: e^{i k·r} = Σ_{ℓ=0}^{∞} (2ℓ + 1) i^ℓ j_ℓ(kr) P_ℓ(k̂·r̂), where i is the imaginary unit, k is a wave vector of length k, r is a position vector of length r, j_ℓ are spherical Bessel functions, P_ℓ are Legendre polynomials, and the hat ^ denotes the unit vector. In the special case where k is aligned with the z axis, e^{ikz} = Σ_{ℓ=0}^{∞} (2ℓ + 1) i^ℓ j_ℓ(kr) P_ℓ(cos θ), where θ is the spherical polar angle of r. Expansion in spherical harmonics With the spherical-harmonic addition theorem the equation can be rewritten as e^{i k·r} = 4π Σ_{ℓ=0}^{∞} Σ_{m=−ℓ}^{+ℓ} i^ℓ j_ℓ(kr) Y_ℓ^m(k̂) Y_ℓ^m*(r̂), where Y_ℓ^m are the spherical harmonics and the superscript * denotes complex conjugation. Note that the complex conjugation can be interchanged between the two spherical harmonics due to symmetry. Applications The plane wave expansion is applied in Acoustics Optics S-matrix Quantum mechanics See also Helmholtz equation Plane wave expansion method in computational electromagnetism Weyl expansion References Scattering Mathematical physics
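The expansion Σ_ℓ (2ℓ + 1) i^ℓ j_ℓ(kr) P_ℓ(cos θ) can be checked numerically against e^{ikr cos θ}. The sketch below is self-contained pure Python (spherical Bessel functions via stable downward recurrence, Legendre polynomials via Bonnet's recursion); the truncation order lmax = 30 is an illustrative choice:

```python
import cmath
import math

def spherical_jn_all(lmax, x):
    """Spherical Bessel functions j_0..j_lmax at x > 0, computed by
    downward recurrence and normalized with j_0(x) = sin(x)/x."""
    n = lmax + 20                      # start recurrence well above lmax
    j = [0.0] * (n + 2)
    j[n + 1], j[n] = 0.0, 1e-30        # arbitrary small seed values
    for l in range(n, 0, -1):
        j[l - 1] = (2 * l + 1) / x * j[l] - j[l + 1]
    scale = (math.sin(x) / x) / j[0]
    return [v * scale for v in j[:lmax + 1]]

def legendre_all(lmax, u):
    """Legendre polynomials P_0..P_lmax at u via Bonnet's recursion."""
    p = [1.0, u]
    for l in range(1, lmax):
        p.append(((2 * l + 1) * u * p[l] - l * p[l - 1]) / (l + 1))
    return p[:lmax + 1]

def plane_wave_partial_sum(k, r, cos_theta, lmax=30):
    """Partial sum of sum_l (2l+1) i^l j_l(kr) P_l(cos(theta))."""
    j = spherical_jn_all(lmax, k * r)
    P = legendre_all(lmax, cos_theta)
    return sum((2 * l + 1) * 1j ** l * j[l] * P[l] for l in range(lmax + 1))

# Compare against exp(i k.r) = exp(i k r cos(theta)) for k r = 5:
approx = plane_wave_partial_sum(1.0, 5.0, 0.25)
exact = cmath.exp(1j * 5.0 * 0.25)
assert abs(approx - exact) < 1e-6
```

Because j_ℓ(kr) falls off rapidly once ℓ exceeds kr, the series converges after roughly kr terms plus a small buffer, which is why a modest lmax suffices here.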
Plane-wave expansion
Physics,Chemistry,Materials_science,Mathematics
176
58,800,461
https://en.wikipedia.org/wiki/Hyperion%20proto-supercluster
The Hyperion proto-supercluster is the largest and earliest known proto-supercluster, 5,000 times the mass of the Milky Way and seen at 20% of the current age of the universe. It was discovered in 2018 by analysing the redshifts of 10,000 objects observed with the Very Large Telescope in Chile. Discovery The discovery was announced in late 2018. The discovery team led by Olga Cucciati used computational astrophysics methods and astroinformatics; statistical techniques were applied to large datasets of galaxy redshifts, using a two-dimensional Voronoi tessellation to correlate gravitational interaction (virialization) of visible structures. The existence of non-visible (dark matter) structures was inferred. Correlation was based on redshift data captured in a sky survey called VIMOS-VLT Deep Survey, using the Visible Multi Object Spectrograph (VIMOS) instrument of the Very Large Telescope in Chile, and other surveys to a lesser extent. Spectroscopic redshift data for 3,822 objects (galaxies) was selected. The discovery was published in Astronomy & Astrophysics in September 2018. Physical description The structure is estimated to weigh 4.8 × 10^15 solar masses (about 5,000 times the mass of the Milky Way) and to extend . It lies within the two square degree Cosmic Evolution Survey (COSMOS) field of the constellation Sextans. Hyperion's redshift is z = 2.45, putting it 11 billion light-years from Earth; it existed at less than 20% of the present age of the Universe. Eventually it is "expected to evolve into something similar to the immense structures in the local universe such as the superclusters making up the Sloan Great Wall or the Virgo Supercluster". Use in cosmology The supercluster contains dark matter, evidenced by a mismatch between the visible objects in it and their computed gravitational binding. As a relic from the early Universe, the dark matter data could be used to test cosmological theories. 
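The quoted "20% of the present age" at z = 2.45 can be reproduced with a simple flat ΛCDM age integral. This is a sketch in pure Python; Ωm = 0.3, ΩΛ = 0.7 and the trapezoidal integration are illustrative assumptions, not parameters taken from the discovery paper:

```python
import math

def age_fraction(z, om=0.3, ol=0.7, zmax=3000.0, n=20000):
    """Fraction of the universe's current age elapsed at redshift z,
    t(z)/t(0), in a flat LambdaCDM model:
    t(z) is proportional to the integral from z upward of
    dz' / ((1+z') * E(z')), with E(z) = sqrt(om*(1+z)^3 + ol).
    The Hubble constant cancels in the ratio."""
    def integrand(zp):
        return 1.0 / ((1.0 + zp) * math.sqrt(om * (1.0 + zp) ** 3 + ol))
    def age(zlow):
        h = (zmax - zlow) / n               # trapezoidal rule
        s = 0.5 * (integrand(zlow) + integrand(zmax))
        s += sum(integrand(zlow + i * h) for i in range(1, n))
        return s * h
    return age(z) / age(0.0)

# Hyperion at z = 2.45 existed when the universe was roughly 20% of its age:
frac = age_fraction(2.45)
assert 0.15 < frac < 0.25
```

The scale factor alone, a = 1/(1 + z) ≈ 0.29, is not the age fraction; the integral above is what maps redshift to cosmic time.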
As the 2018 paper authors note, "the identification of massive/complex proto-clusters at high redshift could be useful to give constraints on dark matter simulations" of the Lambda-CDM model. See also Lynx Supercluster, former record-holder supercluster for red shift z=1.26–1.27 (distance or time of formation) CL J1001+0220, record-holder galaxy cluster since 2016 at z=2.5 References Sources Further reading Galaxy superclusters Astronomical objects discovered in 2018 Sextans
Hyperion proto-supercluster
Astronomy
545
72,891,636
https://en.wikipedia.org/wiki/Data%20version%20control
Data version control is a method of working with data sets. It is similar to the version control systems used in traditional software development, but is optimized to allow better processing of data and collaboration in the context of data analytics, research, and any other form of data analysis. Data version control may also include specific features and configurations designed to facilitate work with large data sets and data lakes. History Background As early as 1985, researchers recognized the need for defining timing attributes in database tables, which would be necessary for tracking changes to databases. This research continued into the 1990s, and the theory was formalized into practical methods for managing data in relational databases, providing some of the foundational concepts for what would later become data version control. In the early 2010s the size of data sets was rapidly expanding, and relational databases were no longer sufficient to manage the amounts of data organizations were accumulating. The Apache Hadoop ecosystem, with HDFS as a storage layer, and later object storage, became dominant in big data operations. Research into data management tools and data version control systems increased sharply, along with demand for such tools from both academia and the private and public sectors. Version controlled databases The first versioned database was proposed in 2012 for the SciDB database; the proposal demonstrated that it was possible to create chains and trees of different versions of the database while decreasing both the overall storage size and the access times associated with previous methods. In 2014, a proposal was made to generalize these principles into a platform that could be used for any application. In 2016, a prototype for a data version control system was developed during a Kaggle competition. This software was later used internally at an AI firm, and eventually spun off as a startup. 
Since then, a number of data version control systems, both open and closed source, have been developed and offered commercially, with a subset dedicated specifically to machine learning. Use cases Reproducibility A wide range of scientific disciplines have adopted automated analysis of large quantities of data, including astrophysics, seismology, biology and medicine, social sciences and economics, and many other fields. The principle of reproducibility is an important aspect of formalizing findings in scientific disciplines, and in the context of data science presents a number of challenges. Most datasets are constantly changing, whether due to the addition of more data or changes in the structure and format of the data, and small changes can have significant effects on the outcome of experiments. Data version control allows for recording the exact state of data sets at a particular moment in time, making it easier to reproduce and understand experimental outcomes. If data practitioners know only the present state of the data, they may run into a number of challenges, such as difficulty debugging problems or complying with data audits. Development and testing Data version control is sometimes used in testing and development of applications that interact with large quantities of data. Some data version control tools allow users to create replicas of their production environment for testing purposes. This approach allows them to test data integration processes such as extract, transform and load (ETL) and understand the changes made to data without having a negative impact on the consumers of the production data. Machine learning and artificial intelligence In the context of machine learning, data version control can be used for optimizing the performance of models. It can allow automating the process of analyzing outcomes with different versions of a data set to continuously improve performance. 
It is possible that open source data version control software could eliminate the need for proprietary AI platforms by extending tools like Git and CI/CD for use by machine learning engineers. Many open-source solutions build on Git-like semantics to provide these capabilities, as Git itself was designed for small text files and does not support typical machine learning datasets, which are very large. CI/CD for data CI/CD methodologies can be applied to datasets using data version control. Version control enables users to integrate with automation servers that allow establishing a CI/CD process for data. By adding testing platforms to the process, they can guarantee high quality of the data product. In this scenario, teams execute Continuous Integration (CI) tests on data and set checks in place to ensure the data is promoted to production only if all the set data quality and data governance criteria are met. Experimentation in isolated environments To experiment on a dataset without impacting production data, one can use data version control to create replicas of the production environment where tests can be carried out. Such replicas allow testing and understanding of changes safely applied to data. Data version control tools allow for replicating environments without time- and resource-consuming maintenance; instead, such tools allow objects to be shared using metadata. Rollback Continuous changes in data sets can sometimes cause functionality issues or lead to undesired outcomes, especially when applications are using the data. Data version control tools allow for the possibility to roll back a data set to an earlier state. This can be used to restore or improve functionality of an application or to correct errors or bad data which has been mistakenly included. 
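The core ideas above — content-addressed snapshots, reproducible checkouts, and rollback — can be illustrated with a toy in-memory store. This is a hypothetical sketch in the spirit of Git-like data versioning, not the API of any real tool such as DVC or lakeFS:

```python
import hashlib
import json

class ToyDataVersionStore:
    """Toy content-addressed version store for small JSON datasets."""

    def __init__(self):
        self.objects = {}    # content hash -> serialized snapshot
        self.history = []    # ordered commit hashes (a single "branch")

    def commit(self, dataset):
        """Snapshot the dataset; identical content deduplicates by hash."""
        blob = json.dumps(dataset, sort_keys=True).encode()
        h = hashlib.sha256(blob).hexdigest()
        self.objects[h] = blob
        self.history.append(h)
        return h

    def checkout(self, h):
        """Reproduce the exact dataset recorded under a commit hash."""
        return json.loads(self.objects[h])

    def rollback(self):
        """Discard the latest version and return the previous one."""
        self.history.pop()
        return self.checkout(self.history[-1])

store = ToyDataVersionStore()
v1 = store.commit({"rows": [1, 2, 3]})
store.commit({"rows": [1, 2, 3, 4]})     # bad data mistakenly included
assert store.rollback() == {"rows": [1, 2, 3]}
```

Because every snapshot is addressed by a hash of its content, checking out `v1` always reproduces the same bytes — the property that underpins reproducibility and audits; real systems add metadata sharing, branching, and storage that scales to data lakes.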
Examples Version controlled data sources: Kaggle Quilt Dolt Kamu Data version control for data lakes: LakeFS Project Nessie Git-LFS ML-Ops systems that implement data version control: DVC Pachyderm Neptune activeloop graviti dagshub alectio Galileo Voxel51 dstack dvid See also References Version control systems Technical communication Big data Data management Data analysis
Data version control
Technology
1,110
536,313
https://en.wikipedia.org/wiki/Polycarbonate
Polycarbonates (PC) are a group of thermoplastic polymers containing carbonate groups in their chemical structures. Polycarbonates used in engineering are strong, tough materials, and some grades are optically transparent. They are easily worked, molded, and thermoformed. Because of these properties, polycarbonates find many applications. Polycarbonates do not have a unique resin identification code (RIC) and are identified as "Other", 7 on the RIC list. Products made from polycarbonate can contain the precursor monomer bisphenol A (BPA). Structure Carbonate esters have planar OC(OC)2 cores, which confer rigidity. The unique O=C bond is short (1.173 Å in the depicted example), while the C-O bonds are more ether-like (bond distances of 1.326 Å in the depicted example). Polycarbonates received their name because they are polymers containing carbonate groups (−O−(C=O)−O−). A balance of useful features, including temperature resistance, impact resistance and optical properties, positions polycarbonates between commodity plastics and engineering plastics. Production Phosgene route The main polycarbonate material is produced by the reaction of bisphenol A (BPA) and phosgene, COCl2. The overall reaction can be written as follows: (HOC6H4)2CMe2 + COCl2 → 1/n [OC(OC6H4)2CMe2]n + 2 HCl The first step of the synthesis involves treatment of bisphenol A with sodium hydroxide, which deprotonates the hydroxyl groups of the bisphenol A. (HOC6H4)2CMe2 + 2 NaOH → Na2(OC6H4)2CMe2 + 2 H2O The diphenoxide (Na2(OC6H4)2CMe2) reacts with phosgene to give a chloroformate, which subsequently is attacked by another phenoxide. The net reaction from the diphenoxide is: Na2(OC6H4)2CMe2 + COCl2 → 1/n [OC(OC6H4)2CMe2]n + 2 NaCl In this way, approximately one billion kilograms of polycarbonate is produced annually. Many other diols have been tested in place of bisphenol A, e.g. 1,1-bis(4-hydroxyphenyl)cyclohexane and dihydroxybenzophenone. The cyclohexane is used as a comonomer to suppress crystallisation tendency of the BPA-derived product. 
Tetrabromobisphenol A is used to enhance fire resistance. Tetramethylcyclobutanediol has been developed as a replacement for BPA. Transesterification route An alternative route to polycarbonates entails transesterification from BPA and diphenyl carbonate: (HOC6H4)2CMe2 + (C6H5O)2CO → 1/n [OC(OC6H4)2CMe2]n + 2 C6H5OH Properties and processing Polycarbonate is a durable material. Although it has high impact-resistance, it has low scratch-resistance. Therefore, a hard coating is applied to polycarbonate eyewear lenses and polycarbonate exterior automotive components. The characteristics of polycarbonate compare to those of polymethyl methacrylate (PMMA, acrylic), but polycarbonate is stronger and will hold up longer to extreme temperature. Thermally processed material is usually totally amorphous, and as a result is highly transparent to visible light, with better light transmission than many kinds of glass. Polycarbonate has a glass transition temperature of about , so it softens gradually above this point and flows above about . Tools must be held at high temperatures, generally above to make strain-free and stress-free products. Low molecular mass grades are easier to mold than higher grades, but their strength is lower as a result. The toughest grades have the highest molecular mass, but are more difficult to process. Unlike most thermoplastics, polycarbonate can undergo large plastic deformations without cracking or breaking. As a result, it can be processed and formed at room temperature using sheet metal techniques, such as bending on a brake. Even for sharp angle bends with a tight radius, heating may not be necessary. This makes it valuable in prototyping applications where transparent or electrically non-conductive parts are needed, which cannot be made from sheet metal. PMMA/Acrylic, which is similar in appearance to polycarbonate, is brittle and cannot be bent at room temperature. 
Main transformation techniques for polycarbonate resins: extrusion into tubes, rods and other profiles including multiwall extrusion with cylinders (calenders) into sheets () and films (below ), which can be used directly or manufactured into other shapes using thermoforming or secondary fabrication techniques, such as bending, drilling, or routing. Due to its chemical properties it is not conducive to laser-cutting. injection molding into ready articles Polycarbonate may become brittle when exposed to ionizing radiation above Applications Electronic components Polycarbonate is mainly used for electronic applications that capitalize on its collective safety features. A good electrical insulator with heat-resistant and flame-retardant properties, it is used in products associated with power systems and telecommunications hardware. It can serve as a dielectric in high-stability capacitors. Commercial manufacture of polycarbonate capacitors mostly stopped after sole manufacturer Bayer AG stopped making capacitor-grade polycarbonate film at the end of 2000. Construction materials The second largest consumer of polycarbonates is the construction industry, e.g. for domelights, flat or curved glazing, roofing sheets and sound walls. Polycarbonates are used to create materials used in buildings that must be durable but light. 3D printing Polycarbonates are used extensively in 3D FDM printing, producing durable strong plastic products with a high melting point. Polycarbonate is relatively difficult for casual hobbyists to print compared to thermoplastics such as Polylactic acid (PLA) or Acrylonitrile butadiene styrene (ABS) because of the high melting point, difficulty with print bed adhesion, tendency to warp during printing, and tendency to absorb moisture in humid environments. Despite these issues, 3D printing using polycarbonates is common in the professional community. 
Data storage A major polycarbonate market is the production of compact discs, DVDs, and Blu-ray discs. These discs are produced by injection-molding polycarbonate into a mold cavity that has on one side a metal stamper containing a negative image of the disc data, while the other mold side is a mirrored surface. Typical products of sheet/film production include applications in advertisement (signs, displays, poster protection). Automotive, aircraft, and security components In the automotive industry, injection-molded polycarbonate can produce very smooth surfaces that make it well-suited for sputter deposition or evaporation deposition of aluminium without the need for a base-coat. Decorative bezels and optical reflectors are commonly made of polycarbonate. Its low weight and high impact resistance have made polycarbonate the dominant material for automotive headlamp lenses. However, automotive headlamps require outer surface coatings because of its low scratch resistance and susceptibility to ultraviolet degradation (yellowing). The use of polycarbonate in automotive applications is limited to low stress applications. Stress from fasteners, plastic welding and molding render polycarbonate susceptible to stress corrosion cracking when it comes in contact with certain accelerants such as salt water and plastisol. It can be laminated to make bullet-proof "glass", although "bullet-resistant" is more accurate for the thinner windows, such as are used in bullet-resistant windows in automobiles. The thicker barriers of transparent plastic used in teller's windows and barriers in banks are also polycarbonate. So-called "theft-proof" large plastic packaging for smaller items, which cannot be opened by hand, is typically made from polycarbonate. The cockpit canopy of the Lockheed Martin F-22 Raptor jet fighter is fabricated from high optical quality polycarbonate. It is the largest item of its type. 
Niche applications

Polycarbonate, being a versatile material with attractive processing and physical properties, has attracted myriad smaller applications. The use of injection-molded drinking bottles, glasses and food containers is common, but the use of BPA in the manufacture of polycarbonate has stirred concerns (see Potential hazards in food contact applications), leading to development and use of "BPA-free" plastics in various formulations.

Polycarbonate is commonly used in eye protection, as well as in other projectile-resistant viewing and lighting applications that would normally indicate the use of glass, but require much higher impact-resistance. Polycarbonate lenses also protect the eye from UV light. Many kinds of lenses are manufactured from polycarbonate, including automotive headlamp lenses, lighting lenses, sunglass/eyeglass lenses, camera lenses, swimming goggles and SCUBA masks, and safety glasses/goggles/visors, including visors in sporting helmets/masks and police riot gear (helmet visors, riot shields, etc.). Windscreens in small motorized vehicles are commonly made of polycarbonate, such as for motorcycles, ATVs, golf carts, and small airplanes and helicopters.

The light weight of polycarbonate as opposed to glass has led to development of electronic display screens that replace glass with polycarbonate, for use in mobile and portable devices. Such displays include newer e-ink and some LCD screens, though CRT, plasma screen and other LCD technologies generally still require glass for its higher melting temperature and its ability to be etched in finer detail.

As more and more governments restrict the use of glass in pubs and clubs due to the increased incidence of glassings, polycarbonate glasses are becoming popular for serving alcohol because of their strength, durability, and glass-like feel.
Other miscellaneous items include durable, lightweight luggage, MP3/digital audio player cases, ocarinas, computer cases, riot shields, instrument panels, tealight candle containers and food blender jars. Many toys and hobby items are made from polycarbonate parts, like fins, gyro mounts, and flybar locks in radio-controlled helicopters, and transparent LEGO (ABS is used for opaque pieces).

Standard polycarbonate resins are not suitable for long-term exposure to UV radiation. To overcome this, the primary resin can have UV stabilisers added. These grades are sold as UV-stabilized polycarbonate to injection moulding and extrusion companies. Other applications, including polycarbonate sheets, may have the anti-UV layer added as a special coating or a coextrusion for enhanced weathering resistance.

Polycarbonate is also used as a printing substrate for nameplates and other forms of industrial-grade under-printed products. The polycarbonate provides a barrier to wear, the elements, and fading.

Medical applications

Many polycarbonate grades are used in medical applications and comply with both ISO 10993-1 and USP Class VI standards (occasionally referred to as PC-ISO). Class VI is the most stringent of the six USP ratings. These grades can be sterilized using steam at 120 °C, gamma radiation, or the ethylene oxide (EtO) method. Trinseo strictly limits all its plastics with regard to medical applications. Aliphatic polycarbonates have been developed with improved biocompatibility and degradability for nanomedicine applications.

Mobile phones

Some smartphone manufacturers use polycarbonate. Nokia used polycarbonate in their phones starting with the N9's unibody case in 2011; this practice continued with various phones in the Lumia series. Samsung started using polycarbonate with the Galaxy S III's hyperglaze-branded removable battery cover in 2012; this practice continues with various phones in the Galaxy series.
Apple started using polycarbonate with the iPhone 5C's unibody case in 2013. Benefits over glass and metal back covers include durability against shattering (advantage over glass), bending and scratching (advantage over metal), shock absorption, low manufacturing costs, and no interference with radio signals and wireless charging (advantage over metal). Polycarbonate back covers are available in glossy or matte surface textures.

History

Polycarbonates were first discovered in 1898 by Alfred Einhorn, a German scientist working at the University of Munich. However, after 30 years of laboratory research, this class of materials was abandoned without commercialization. Research resumed in 1953, when Hermann Schnell at Bayer in Uerdingen, Germany patented the first linear polycarbonate. The brand name "Makrolon" was registered in 1955. Also in 1953, and one week after the invention at Bayer, Daniel Fox at General Electric (GE) in Pittsfield, Massachusetts, independently synthesized a branched polycarbonate. Both companies filed for U.S. patents in 1955, and agreed that the company lacking priority would be granted a license to the technology. Patent priority was resolved in Bayer's favor, and Bayer began commercial production under the trade name Makrolon in 1958. GE began production under the name Lexan in 1960, creating the GE Plastics division in 1973. After 1970, the original brownish polycarbonate tint was improved to "glass-clear".

Potential hazards in food contact applications

The use of polycarbonate containers for the purpose of food storage is controversial. The basis of this controversy is their hydrolysis (degradation by water, often referred to as leaching), which occurs at high temperature and releases bisphenol A:

1/n [OC(OC6H4)2CMe2]n + H2O → (HOC6H4)2CMe2 + CO2

More than 100 studies have explored the bioactivity of bisphenol A derived from polycarbonates.
Bisphenol A appeared to be released from polycarbonate animal cages into water at room temperature, and it may have been responsible for enlargement of the reproductive organs of female mice. However, the animal cages used in the research were fabricated from industrial-grade polycarbonate, rather than FDA food-grade polycarbonate.

An analysis of the literature on low-dose effects of bisphenol A leachate by vom Saal and Hughes published in August 2005 seems to have found a suggestive correlation between the source of funding and the conclusion drawn: industry-funded studies tend to find no significant effects, whereas government-funded studies tend to find significant effects.

Sodium hypochlorite bleach and other alkali cleaners catalyze the release of bisphenol A from polycarbonate containers. Polycarbonate is incompatible with ammonia and acetone. Alcohol is a recommended organic solvent for cleaning grease and oils from polycarbonate.

Environmental impact

Disposal

Studies have shown that at temperatures above 70 °C and high humidity, polycarbonate will hydrolyze to bisphenol A (BPA). After about 30 days at 85 °C/96% RH, surface crystals are formed which consist of about 70% BPA. BPA is a compound that is currently on the list of potentially environmentally hazardous chemicals. It is on the watch list of many countries, such as the United States and Germany.

1/n [OC(OC6H4)2C(CH3)2]n + H2O → (HOC6H4)2C(CH3)2 + CO2

The leaching of BPA from polycarbonate can also occur at environmental temperature and normal pH (in landfills). The amount of leaching increases as the polycarbonate parts get older. A study found that the decomposition of BPA in landfills (under anaerobic conditions) will not occur; it will therefore be persistent in landfills. Eventually, it will find its way into water bodies and contribute to aquatic pollution.
Photo-oxidation of polycarbonate

In the presence of UV light, oxidation of this polymer yields compounds such as ketones, phenols, o-phenoxybenzoic acid, benzyl alcohol and other unsaturated compounds. This has been suggested through kinetic and spectral studies. The yellow color formed after long exposure to sun can also be related to further oxidation of the phenolic end group:

((OC6H4)2C(CH3)2CO)n + O2, R* → ((OC6H4)2C(CH3CH2)CO)n

This product can be further oxidized to form smaller unsaturated compounds. This can proceed via two different pathways; the products formed depend on which mechanism takes place.

Pathway A:
(OC6H4)2C(CH3CH2)CO + O2, H* → HO(OC6H4)OCO + CH3COCH2(OC6H4)OCO

Pathway B:
((OC6H4)2C(CH3CH2)CO)n + O2, H* → OCO(OC6H4)CH2OH + OCO(OC6H4)COCH3

Photo-aging reaction

Photo-aging is another degradation route for polycarbonates. Polycarbonate molecules (such as the aromatic ring) absorb UV radiation. This absorbed energy causes cleavage of covalent bonds, which initiates the photo-aging process. The reaction can be propagated via side-chain oxidation, ring oxidation or photo-Fries rearrangement. Products formed include phenyl salicylate, dihydroxybenzophenone groups, and hydroxydiphenyl ether groups.

(C16H14O3)n → C16H17O3 + C13H10O3

Thermal degradation

Waste polycarbonate will degrade at high temperatures to form solid, liquid and gaseous pollutants. A study showed that the products were about 40–50 wt.% liquid and 14–16 wt.% gases, while 34–43 wt.% remained as solid residue. Liquid products contained mainly phenol derivatives (~75 wt.%), with bisphenol (~10 wt.%) also present. Polycarbonate, however, can be safely used as a carbon source in the steel-making industry.

Phenol derivatives are environmental pollutants, classified as volatile organic compounds (VOCs). Studies show they are likely to facilitate ground-level ozone formation and increase photochemical smog. In aquatic bodies, they can potentially accumulate in organisms.
They are persistent in landfills, do not readily evaporate and would remain in the atmosphere.

Effect of fungi

In 2001 a species of fungus in Belize, Geotrichum candidum, was found to consume the polycarbonate found in compact discs (CDs). This has prospects for bioremediation. However, this effect has not been reproduced.

See also
CR-39, allyl diglycol carbonate (ADC) used for eyeglasses
Mobile phone accessories
Organic electronics
Thermoplastic polyurethane
Vapor polishing
Polycarbonate
Physics,Chemistry
4,018
46,880,178
https://en.wikipedia.org/wiki/Penicillium%20neoechinulatum
Penicillium neoechinulatum is a species of fungus in the genus Penicillium which produces patulin.
Penicillium neoechinulatum
Biology
40
1,995,761
https://en.wikipedia.org/wiki/Integral%20ecology
Integral ecology is a holistic approach to ecology, emphasizing human and social dimensions, and the interconnectedness of life on Earth. It studies the relationships between living organisms and the ecosystem in which they develop. The concept has been adopted by Pope Francis in his encyclical Laudato si' from 2015. The approach has influenced many fields of research and the development of practices and case studies around the world, such as the Parco della Piana of Assisi.

Etymology

The use of the term 'integral ecology' probably first appeared in Hillary B. Moore's Marine Ecology in 1958. Since then, multiple authors have used the term to convey unique but overlapping concepts in the intellectual atmosphere of ecology. In the two decades leading up to the encyclical's release, the concept evolved into a formal term, largely due to the contributions of Leonardo Boff and Thomas Berry. According to Ryszard F. Sadowski, parts of Pope Francis's encyclical on integral ecology seem to have been influenced by Boff and Berry. Some similar themes include the holistic approach, the common good, and sustainability.

Orismology

Papal integral ecology

Integral ecology, as described by Pope Francis in chapter four of his encyclical Laudato si', is a holistic approach to understanding the interconnectedness of humans, society, and the environment. It posits that the current pace of consumption, waste accumulation, and environmental change is unsustainable and threatens to precipitate global catastrophes. The encyclical emphasizes the interdependence between humans and nature, insisting that "[a]lthough we are often not aware of it, we depend on these larger systems for our own existence." Vital processes such as carbon dioxide regulation, water purification, waste decomposition, soil formation, and many other processes which facilitate life on Earth are too often taken for granted.
Pope Francis calls for a shift from an individualistic, consumer-driven culture to one that prioritizes the common good. This includes combating poverty, restoring dignity to marginalized communities, and protecting the environment. He asserts that "[t]he global economic crises have made painfully obvious the detrimental effects of disregarding our common destiny, which cannot exclude those who come after us." Thus, intergenerational solidarity is crucial for sustainable development.

Integral ecology extends beyond environmental protection to encompass themes such as the health of societal institutions, cultural preservation, and urban planning. The encyclical stresses the importance of creating inclusive cities that foster a sense of belonging and shared responsibility. It also highlights the ethical dimensions of environmental care, the intrinsic dignity of the human person, and the need to respect the moral law. By framing environmental challenges as interconnected with social and economic issues, Pope Francis offers a comprehensive vision for addressing the complex crises facing humanity. His concept of integral ecology provides a foundation for building a more just, equitable, and sustainable world.

Berry's integral ecology

The concept of integral ecology has been significantly influenced by cultural historian Thomas Berry. According to Berry, humanity has entered a period of ecological crisis due to excessive anthropocentrism and consumerism, leading to the exploitation and devastation of the planet. Berry criticized the destructive impact of modern technologies, such as chemical fertilizers and deforestation, which have depleted natural resources and harmed the environment. He argued that while humans have traditionally held a spiritual connection to nature, this reverence has diminished in recent centuries, leading to a loss of ecological wisdom.
To address this crisis, Berry envisioned an "Ecozoic Era", characterized by a harmonious relationship between humans and the earth. Berry added that "[t]his new geobiological period is the condition for the integral functioning of the planet in all phases of its activities, whether these be biological, ecological, economic, cultural, or religious." Being part of the Ecozoic Era would require a fundamental shift in human consciousness and the recognition that everything in this universe is sacred and interconnected.

Berry introduced the "integral ecologist" as the personification of the Ecozoic Era. This individual would serve as a spokesperson for the planet, advocating for its protection and restoration. The integral ecologist would be able to bridge the gap between scientific knowledge and spiritual wisdom. In recognizing the complex nature of the universe as a dynamic and evolving system, integral ecologists would be able to regain their spiritual understanding of the cosmos and their ability to cultivate planetary well-being.

Boff's integral ecology

In "Liberation Theology and Ecology: Alternative, Confrontation, or Complementarity?", a chapter in Ecology and Poverty: Cry of the Earth, Cry of the Poor (1995), Leonardo Boff explores the intersection between liberation theology and ecological discourse, emphasizing the shared concern for addressing poverty and environmental degradation. According to Boff, both disciplines originate from cries of oppression: liberation theology from the cry of the poor for dignity and freedom, and ecology from the cry of the earth under systematic exploitation. Boff cites Exodus 3:7 and Romans 8:22-23 as scriptural foundations for these cries, hereby pairing the struggle of the poor with the suffering of the earth. He advocates for an integrated approach that unites social and ecological liberation in the pursuit of a sustainable and just future.
Boff introduces the concept of "integral ecology" as a way to integrate all dimensions of ecology – economic, social, cultural, political, and spiritual – into a new alliance between humanity and nature. Liberation theology, traditionally focused on the plight of the poor, is presented as needing to adopt this new ecological cosmology: in order to ensure our well-being, it must recognize Earth as a conscious entity and see humanity as its mode of expression. Boff emphasizes that "it is the earth itself that, through one of its expressions – the human species – takes on a conscious direction in this new phase of the process of evolution."

In light of this evolution, the chapter highlights the importance of the landmark document "The Limits to Growth", released by the Club of Rome in 1972, which drew attention to Earth's finite resources and the serious risks associated with industrialization. Boff echoes these concerns, noting the alarming rate at which species are disappearing, and criticizing the anthropocentrism and consumerism that underpin contemporary society. He advocates for a shift towards recognizing the earth as a "superorganism", called Gaia, in which all elements – both living and non-living – are interconnected in a dynamic equilibrium.

Finally, Boff pleads for sustainability that respects the rhythms of ecosystems and promotes an economy of sufficiency for all, hence ensuring the common good extends beyond humans to all creation. According to Boff, the holistic approach, which combines liberation theology with ecological discourse, is essential in addressing the enduring hostility towards Earth and its inhabitants. In a way, Earth urges us to reconnect with all things, and thus with "the thread that binds everything upwards, God."
Integral ecology
Biology
1,446
4,678,739
https://en.wikipedia.org/wiki/Structure%20mapping%20engine
In artificial intelligence and cognitive science, the structure mapping engine (SME) is an implementation in software of an algorithm for analogical matching based on the psychological theory of Dedre Gentner. The basis of Gentner's structure-mapping idea is that an analogy is a mapping of knowledge from one domain (the base) into another (the target). The structure-mapping engine is a computer simulation of analogy and similarity comparisons. The theory is useful because it ignores surface features and finds matches between potentially very different things if they have the same representational structure. For example, SME could determine that a pen is like a sponge because both are involved in dispensing liquid, even though they do this very differently.

Structure mapping theory

Structure mapping theory is based on the systematicity principle, which states that connected knowledge is preferred over independent facts. Therefore, the structure mapping engine should ignore isolated source-target mappings unless they are part of a bigger structure. The SME, the theory goes, should map objects that are related to knowledge that has already been mapped. The theory also requires that mappings be one-to-one, which means that no part of the source description can map to more than one item in the target and no part of the target description can be mapped to more than one part of the source. The theory also requires that if a match maps a subject to a target, the arguments of the subject and target must also be mapped. If both these conditions are met, the mapping is said to be "structurally consistent."

Concepts in SME

SME maps knowledge from a source into a target. SME calls each description a dgroup. Dgroups contain a list of entities and predicates. Entities represent the objects or concepts in a description — such as an input gear or a switch. Predicates are one of three types and are a general way to express knowledge for SME.
Relation predicates contain multiple arguments, which can be other predicates or entities. An example relation is (transmit (what from to)). This relation has the functor transmit and takes three arguments: what, from, and to.

Attribute predicates are the properties of an entity. An example of an attribute is (red gear), which means that gear has the attribute red.

Function predicates map an entity into another entity or constant. An example of a function is (joules power source), which maps the entity power source onto the numerical quantity joules.

Functions and attributes have different meanings, and consequently SME processes them differently. For example, in SME's true-analogy rule set, attributes differ from functions because they cannot match unless there is a higher-order match between them. The difference between attributes and functions will be explained further in this section's examples.

All predicates have four parameters. They have (1) a functor, which identifies it, and (2) a type, which is either relation, attribute, or function. The other two parameters (3 and 4) determine how the arguments are processed in the SME algorithm. If the arguments have to be matched in order, commutative is false. If the predicate can take any number of arguments, n-ary is true. An example of a predicate definition is:

(sme:defPredicate behavior-set (predicate) relation :n-ary? t :commutative? t)

The predicate's functor is "behavior-set", its type is "relation", and its n-ary and commutative parameters are both set to true. The "(predicate)" part of the definition specifies that there will be one or more predicates inside an instantiation of behavior-set.

Algorithm details

The algorithm has several steps. The first step of the algorithm is to create a set of match hypotheses between source and target dgroups. A match hypothesis represents a possible mapping between any part of the source and the target. This mapping is controlled by a set of match rules.
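The predicate machinery just described can be sketched in a few lines of Python (a hypothetical rendering for illustration only; SME itself is written in Lisp, and every class and field name here is invented):

```python
from dataclasses import dataclass
from typing import Tuple

# A minimal, hypothetical sketch of SME's predicate representation.
# The field names mirror the four parameters described in the text.

@dataclass(frozen=True)
class Predicate:
    functor: str                  # (1) identifies the predicate, e.g. "transmit"
    ptype: str                    # (2) "relation", "attribute", or "function"
    commutative: bool = False     # (3) True if argument order does not matter
    n_ary: bool = False           # (4) True if any number of arguments is allowed
    args: Tuple = ()              # entities (plain strings here) or nested predicates

# The behavior-set predicate from the text: a commutative, n-ary relation
# whose arguments are one or more predicates.
behavior_set = Predicate("behavior-set", "relation",
                         commutative=True, n_ary=True)
```

Making the class frozen keeps predicate instances hashable, which is convenient when match hypotheses are later stored in sets or used as dictionary keys.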
By changing the match rules, one can change the type of reasoning SME does. For example, one set of match rules may perform a kind of analogy called literal similarity, and another performs a kind of analogy called true analogy. These rules are not the place where domain-dependent information is added, but rather where the analogy process is tweaked, depending on the type of cognitive function the user is trying to emulate.

For a given match rule, there are two types of rules that further define how it will be applied: filter rules and intern rules. Intern rules use only the arguments of the expressions in the match hypotheses that the filter rules identify. This limitation makes the processing more efficient by constraining the number of match hypotheses that are generated. At the same time, it also helps to build the structural consistencies that are needed later on in the algorithm.

An example of a filter rule from the true-analogy rule set creates match hypotheses between predicates that have the same functor. The true-analogy rule set has an intern rule that iterates over the arguments of any match hypothesis, creating more match hypotheses if the arguments are entities or functions, or if the arguments are attributes and have the same functor.

In order to illustrate how the match rules produce match hypotheses, consider these two predicates:

transmit torque inputgear secondgear (p1)
transmit signal switch div10 (p2)

Here we use true analogy for the type of reasoning. The filter match rule generates a match between p1 and p2 because they share the same functor, transmit. The intern rules then produce three more match hypotheses: torque to signal, inputgear to switch, and secondgear to div10. The intern rules created these match hypotheses because all the arguments were entities.
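The way the filter and intern rules generate these four match hypotheses can be sketched as follows (predicates are modeled as (functor, args) tuples and entities as plain strings; the helper names are invented for illustration, not SME's actual code):

```python
# Predicates as (functor, args) tuples; entities as plain strings.
p1 = ("transmit", ("torque", "inputgear", "secondgear"))
p2 = ("transmit", ("signal", "switch", "div10"))

def filter_rule(src, tgt):
    # True-analogy filter rule: hypothesize a match between two
    # predicates only if they share the same functor.
    return src[0] == tgt[0]

def intern_rule(src, tgt):
    # Intern rule: iterate over the arguments of an accepted match
    # hypothesis and pair them up positionally; here every argument
    # is an entity, so each pair becomes a new match hypothesis.
    return [(a, b) for a, b in zip(src[1], tgt[1])
            if isinstance(a, str) and isinstance(b, str)]

hypotheses = []
if filter_rule(p1, p2):
    hypotheses.append((p1, p2))            # transmit <-> transmit
    hypotheses.extend(intern_rule(p1, p2)) # torque<->signal, inputgear<->switch,
                                           # secondgear<->div10
```

Note that the intern rule fires only on arguments of a hypothesis the filter rule already accepted, which is exactly the limitation the text describes: it keeps the hypothesis set small and structurally grounded.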
If the arguments were functions or attributes instead of entities, the predicates would be expressed as:

transmit torque (inputgear gear) (secondgear gear) (p3)
transmit signal (switch circuit) (div10 circuit) (p4)

These additional predicates make inputgear, secondgear, switch, and div10 functions or attributes, depending on the value defined in the language input file. The representation also contains additional entities for gear and circuit. Depending on what type inputgear, secondgear, switch, and div10 are, their meanings change. As attributes, each one is a property of the gear or circuit. For example, the gear has two attributes, inputgear and secondgear. The circuit has two attributes, switch and div10. As functions, inputgear, secondgear, switch, and div10 become quantities of the gear and circuit. In this example, the functions inputgear and secondgear now map to the numerical quantities "torque from inputgear" and "torque from secondgear." For the circuit, the quantities map to the logical quantity "switch engaged" and the numerical quantity "current count on the divide-by-10 counter."

SME processes these differently. It does not allow attributes to match unless they are part of a higher-order relation, but it does allow functions to match even if they are not part of such a relation. It allows functions to match because they indirectly refer to entities and thus should be treated like relations that involve no entities. However, as the next section shows, the intern rules assign lower weights to matches between functions than to matches between relations.

The reason SME does not match attributes is that it is trying to create connected knowledge based on relationships and thus satisfy the systematicity principle. For example, if both a clock and a car have inputgear attributes, SME will not mark them as similar. If it did, it would be making a match between the clock and car based on their appearance — not on the relationships between them.
When the additional predicates in p3 and p4 are functions, the results from matching p3 and p4 are similar to the results from p1 and p2, except that there is an additional match between gear and circuit, and the values for the match hypotheses between (inputgear gear) and (switch circuit), and between (secondgear gear) and (div10 circuit), are lower. The next section describes the reason for this in more detail.

If inputgear, secondgear, switch, and div10 are attributes instead of entities, SME does not find matches between any of the attributes. It finds matches only between the transmit predicates and between torque and signal. Additionally, the structural-evaluation scores for the remaining two matches decrease. In order to get the two predicates to match, p3 would need to be replaced by p5, shown below:

transmit torque (inputgear gear) (div10 gear) (p5)

Since the true-analogy rule set identifies that the div10 attributes are the same between p5 and p4, and because the div10 attributes are both part of the higher-order match between torque and signal, SME makes a match between (div10 gear) and (div10 circuit) — which leads to a match between gear and circuit. Being part of a higher-order match is a requirement only for attributes. For example, if (div10 gear) and (div10 circuit) are not part of a higher-order match, SME does not create a match hypothesis between them. However, if div10 is a function or relation, SME does create a match.

Structural evaluation score

Once the match hypotheses are generated, SME needs to compute an evaluation score for each hypothesis. SME does so by using a set of intern match rules to calculate positive and negative evidence for each match. Multiple amounts of evidence are combined using Dempster's rule [Shafer, 1978], resulting in positive and negative belief values between 0 and 1. The match rules assign different values for matches involving functions and relations.
These values are programmable, however, and some default values that can be used to enforce the systematicity principle are described in [Falkenhainer et al., 1989]. These rules are:

1. If the source and target are not functions and have the same order, the match gets +0.3 evidence. If the orders are within 1 of each other, the match gets +0.2 evidence and -0.05 evidence.
2. If the source and target have the same functor, the match gets 0.2 evidence if the source is a function and 0.5 evidence if the source is a relation.
3. If the arguments match, the match gets +0.4 evidence. The arguments might match if all the pairs of arguments between the source and target are entities, if the arguments have the same functors, or if it is never the case that the target is an entity but the source is not.
4. If the predicate type matches, but the elements in the predicate do not match, then the match gets -0.8 evidence.
5. If the source and target expressions are part of a matching higher-order match, add 0.8 of the evidence for the higher-order match.

In the example match between p1 and p2, SME gives the match between the transmit relations a positive evidence value of 0.7900, and the others get values of 0.6320. The transmit relation receives the evidence value of 0.7900 because it gains evidence from rules 1, 2, and 3. The other matches get a value of 0.6320 because 0.8 of the evidence for the transmit match is propagated to them by rule 5.

For predicates p3 and p4, SME assigns less evidence because the arguments of the transmit relations are functions. The transmit relation gets positive evidence of 0.65 because rule 3 no longer adds evidence. The match between (inputgear gear) and (switch circuit) becomes 0.7120. This match gets 0.4 evidence because of rule 3, and 0.52 evidence propagated from the transmit relation because of rule 5.
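The quoted figures can be reproduced by pooling the individual evidence values with the Dempster-style combination 1 - (1 - e1)(1 - e2)...(1 - en). This is a simplified reading of the evidence combination; the sketch below is an illustration, not SME's actual code:

```python
def combine(*evidence):
    # Pool independent pieces of positive evidence: 1 - prod(1 - e_i).
    belief = 1.0
    for e in evidence:
        belief *= (1.0 - e)
    return 1.0 - belief

# p1/p2: the transmit match gets +0.3 (same order, rule 1),
# +0.5 (same functor, relation, rule 2), +0.4 (arguments match, rule 3).
transmit_12 = combine(0.3, 0.5, 0.4)           # 0.79

# Entity matches receive 0.8 of the transmit evidence (rule 5).
entity = 0.8 * transmit_12                     # 0.632

# p3/p4: the arguments are functions, so rule 3 does not fire.
transmit_34 = combine(0.3, 0.5)                # 0.65

# (inputgear gear) vs (switch circuit): +0.4 from rule 3, combined with
# 0.8 * 0.65 = 0.52 propagated from the transmit match (rule 5).
func_match = combine(0.4, 0.8 * transmit_34)   # 0.712
```

Under this pooling, 1 - (0.7)(0.5)(0.6) = 0.79 and 1 - (0.6)(0.48) = 0.712, matching the 0.7900 and 0.7120 values in the text.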
When the predicates in p3 and p4 are attributes, rule 4 adds -0.8 evidence to the transmit match because — though the functors of the transmit relations match — the arguments do not have the potential to match and the arguments are not functions.

To summarize, the intern match rules compute a structural evaluation score for each match hypothesis. These rules enforce the systematicity principle. Rule 5 provides trickle-down evidence in order to strengthen matches that are involved in higher-order relations. Rules 1, 3, and 4 add or subtract support for relations that could have matching arguments. Rule 2 adds support for the cases when the functors match, thereby adding support for matches that emphasize relationships. The rules also enforce the difference between attributes, functions, and relations. For example, they have checks which give less evidence for functions than for relations. Attributes are not specifically dealt with by the intern match rules, but SME's filter rules ensure that they will only be considered for these rules if they are part of a higher-order relation, and rule 2 ensures that attributes will only match if they have identical functors.

Gmap creation

The rest of the SME algorithm is involved in creating maximally consistent sets of match hypotheses. These sets are called gmaps. SME must ensure that any gmaps it creates are structurally consistent; in other words, that they are one-to-one — such that no source maps to multiple targets and no target is mapped to multiple sources. The gmaps must also have support, which means that if a match hypothesis is in the gmap, then so are the match hypotheses that involve the source and target items. The gmap creation process follows two steps. First, SME computes information about each match hypothesis — including entity mappings, any conflicts with other hypotheses, and the other match hypotheses with which it might be structurally inconsistent.
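The one-to-one bookkeeping behind this conflict detection can be sketched as follows (hypothetical helpers for illustration; SME's real merge is a greedy algorithm driven by the structural evaluation scores, described next):

```python
def can_add(mapping, src, tgt):
    # A candidate entity correspondence is structurally consistent with
    # the mapping accepted so far only if neither side is already mapped
    # to something different (one-to-one in both directions).
    if src in mapping and mapping[src] != tgt:
        return False
    if tgt in mapping.values() and mapping.get(src) != tgt:
        return False
    return True

def merge(hypotheses):
    # Greedily accept entity correspondences that keep the mapping
    # one-to-one, skipping any hypothesis that conflicts.
    mapping = {}
    for src, tgt in hypotheses:
        if can_add(mapping, src, tgt):
            mapping[src] = tgt
    return mapping

# secondgear cannot map to both div10 and switch, so the third
# (conflicting) hypothesis is rejected:
merge([("inputgear", "switch"), ("secondgear", "div10"),
       ("secondgear", "switch")])
```

In this sketch the last hypothesis is discarded because secondgear is already mapped to div10, which is the kind of structural inconsistency SME records for each match hypothesis before merging.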
SME then uses this information to merge match hypotheses, using a greedy algorithm and the structural evaluation score. It merges the match hypotheses into maximally structurally consistent connected graphs of match hypotheses. Then it combines gmaps that have overlapping structure if they are structurally consistent. Finally, it combines independent gmaps together while maintaining structural consistency.

Comparing a source to a target dgroup may produce one or more gmaps. The weight of each gmap is the sum of all the positive evidence values for all the match hypotheses involved in the gmap. For example, if a source containing p1 and p6 below is compared to a target containing p2, SME will generate two gmaps. Both gmaps have a weight of 2.9186.

Source:
transmit torque inputgear secondgear (p1)
transmit torque secondgear thirdgear (p6)

Target:
transmit signal switch div10 (p2)

These are the gmaps which result from comparing a source containing p1 and p6 to a target containing p2:

Gmap No. 1:
(TORQUE SIGNAL)
(INPUTGEAR SWITCH)
(SECONDGEAR DIV10)
(*TRANSMIT-TORQUE-INPUTGEAR-SECONDGEAR *TRANSMIT-SIGNAL-SWITCH-DIV10)

Gmap No. 2:
(TORQUE SIGNAL)
(SECONDGEAR SWITCH)
(THIRDGEAR DIV10)
(*TRANSMIT-TORQUE-SECONDGEAR-THIRDGEAR *TRANSMIT-SIGNAL-SWITCH-DIV10)

The gmaps show pairs of predicates or entities that match. For example, in Gmap No. 1, the entities torque and signal match, and the behaviors transmit torque inputgear secondgear and transmit signal switch div10 match. Gmap No. 1 represents combining p1 and p2. Gmap No. 2 represents combining p6 and p2. Although p2 is compatible with both p1 and p6, the one-to-one mapping constraint enforces that both mappings cannot be in the same gmap. Therefore, SME produces two independent gmaps. In addition, combining the two gmaps would make the entity mapping between thirdgear and div10 conflict with the entity mapping between secondgear and div10.
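One way to reproduce the quoted weight of 2.9186 is to assume (as an illustration, not SME's actual code) that multiple pieces of positive evidence are pooled Dempster-style as 1 - (1 - e1)(1 - e2), and that the torque-to-signal match receives trickle-down evidence from both transmit match hypotheses, since torque appears in both p1 and p6:

```python
def combine(*evidence):
    # Pool independent pieces of positive evidence: 1 - prod(1 - e_i).
    b = 1.0
    for e in evidence:
        b *= (1.0 - e)
    return 1.0 - b

transmit = combine(0.3, 0.5, 0.4)   # 0.79 for each transmit match
trickle = 0.8 * transmit            # 0.632 passed down to entity matches

# torque appears in both p1 and p6, so torque<->signal pools the
# trickle-down evidence from both transmit match hypotheses:
torque_signal = combine(trickle, trickle)

# Gmap No. 1 = transmit match + torque<->signal + two entity matches:
weight = transmit + torque_signal + 2 * trickle
```

Under these assumptions, torque_signal comes to about 0.8646 and the total to about 2.9186, matching the figure in the text; the same arithmetic applies symmetrically to Gmap No. 2.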
Criticisms Chalmers, French, and Hofstadter [1992] criticize SME for its reliance on manually constructed LISP representations as input. They argue that too much human creativity is required to construct these representations; the intelligence comes from the design of the input, not from SME. Forbus et al. [1998] attempted to rebut this criticism. Morrison and Dietrich [1995] tried to reconcile the two points of view. Turney [2008] presents an algorithm that does not require LISP input, yet follows the principles of Structure Mapping Theory. Turney [2008] states that this work, too, is not immune to the criticism of Chalmers, French, and Hofstadter [1992]. In her article How Creative Ideas Take Shape, Liane Gabora writes "According to the honing theory of creativity, creative thought works not on individually considered, discrete, predefined representations but on a contextually-elicited amalgam of items which exist in a state of potentiality and may not be readily separable. This leads to the prediction that analogy making proceeds not by mapping correspondences from candidate sources to target, as predicted by the structure mapping theory of analogy, but by weeding out non-correspondences, thereby whittling away at potentiality." References Further reading Papers by the Qualitative Reasoning Group at Northwestern University Chalmers, D. J., French, R. M., & Hofstadter, D. R.: 1992, High-level perception, representation, and analogy: A critique of artificial intelligence methodology. Journal of Experimental & Theoretical Artificial Intelligence, 4(3), 185–211. Falkenhainer, B: 2005, Structure Mapping Engine Implementation. Falkenhainer, B, Forbus, K and Gentner, D: 1989, "The structure-mapping engine: Algorithm and examples". Artificial Intelligence, 20(41): 1–63. Forbus, K.D., Gentner, D., Markman, A.B., and Ferguson, R.W.: 1998, Analogy Just Looks Like High Level Perception: Why a Domain-General Approach to Analogical Mapping is Right. 
Journal of Experimental and Theoretical Artificial Intelligence, 10(2), 231–257. French, RM: 2002. "The Computational Modeling of Analogy-Making". Trends in Cognitive Sciences, 6(5), 200–205. Gentner, D: 1983, "Structure-mapping: A Theoretical Framework for Analogy", Cognitive Science 7(2) Shafer, G: 1978, A Mathematical Theory of Evidence, Princeton University Press, Princeton, New Jersey. . Morrison, C.T., and Dietrich, E.: 1995, Structure-Mapping vs. High-level Perception: The Mistaken Fight Over The Explanation of Analogy. Proceedings of the Seventeenth Annual Conference of the Cognitive Science Society, 678–682. Turney, P.D.: 2008, The latent relation mapping engine: Algorithm and experiments, Journal of Artificial Intelligence Research (JAIR), 33, 615–655. Artificial intelligence engineering
Structure mapping engine
Engineering
https://en.wikipedia.org/wiki/Registax
RegiStax is image processing software for amateur astrophotographers, released as freeware, designed to run under Windows but also able to run on Linux under Wine. Its purpose is to produce enhanced images of astronomical observations by combining consecutive photographs (an image "stack") of the same scene that were taken over a short period of time. The process relies on the subject (e.g. a planet) being unchanged between photographs, so that any differences can be assumed to be random noise or atmospheric interference. The stack of images can be in the form of individual consecutive shots or frames from a movie camera trained on the scene. History Cor Berrevoets (Netherlands) began development of the program around 2001, and it was released on 19 May 2002. This initial release (version v1.0.0) had facilities for stack alignment, grading and selection of the images to be merged, and image enhancement using techniques such as wavelet processing. The program was regularly updated by its author, and on 6 June 2004 a multilingual version (v3) was begun; the program was later available in 15 different languages. As of September 2022, the latest release is v6.1.0.8 (6 May 2011), which was contributed to by a team of nine people. See also Shift and add image processing technique Speckle imaging Lucky imaging AutoStakkert References External links Official RegiStax Website. Basic RegiStax 6 tutorial Astronomy software Science education software Pascal (programming language) software
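The noise-reduction principle behind stacking can be illustrated numerically. This is a minimal NumPy sketch, not RegiStax's actual implementation; the alignment, frame-grading, and wavelet-sharpening steps are omitted, and the "scene" is a synthetic stand-in image.

```python
import numpy as np

# Simulate many noisy, already-aligned frames of an unchanging scene and
# average them: random noise falls off roughly as 1/sqrt(N) while the
# static subject is preserved.
rng = np.random.default_rng(0)
scene = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # stand-in "planet" image
frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(100)]

stacked = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - scene)   # roughly 0.1 (one frame)
noise_stacked = np.std(stacked - scene)    # roughly 10x lower (100 frames)
```

In practice the frames must first be registered (aligned) to sub-pixel accuracy, which is the harder part of what RegiStax automates; simple averaging of misaligned frames would blur the subject instead of sharpening it.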
Registax
Astronomy
https://en.wikipedia.org/wiki/Acorn%20Online%20Media%20Set%20Top%20Box
The Acorn Online Media Set Top Box was produced by the Online Media division of Acorn Computers Ltd for the Cambridge Cable and Online Media Video on Demand trial and launched in early 1996. Part of this trial involved a home-shopping system in partnership with Parcelforce. The hardware was trialled by NatWest bank, as exhibited at the 1995 Acorn World trade show.

Specification

STB1
The STB1 was a customised Risc PC based system, with a Wild Vision Movie Magic expansion card in a podule slot, and a network card based on Asynchronous Transfer Mode.
Memory: 4 MiB RAM
Processor: ARM 610 processor at 33 MHz; approx 28.7 MIPS
Operating system: RISC OS 3.50 held in 4 MiB ROM

STB20
The STB20 was a new PCB based around the ARM7500 System On Chip.
Memory:
Processor: ARM7500 processor
Operating system: RISC OS 3.61, a version specific for this STB, held in 4 MiB ROM.

STB22
By this time Online Media had been restructured back into Acorn Computers, so the STB22 is branded as 'Acorn'.
Memory:
Processor:
Operating system: a development of RISC OS held in 4 MiB ROM

References External links The Full Acorn Machine List: STB Computer-related introductions in 1996 Online Media Set Top Box Legacy systems Set-top box
Acorn Online Media Set Top Box
Technology
https://en.wikipedia.org/wiki/Heinrich%20Adolph%20Baumhauer
Heinrich Adolph Baumhauer (26 October 1848, Bonn, Kingdom of Prussia - 1 August 1926, Freiburg, Switzerland) was a German chemist and mineralogist. Baumhauer was the son of lithographer and merchant Mathias Baumhauer (1810–70) and Anna Margaretha Käuffer (variously Kaeuffer, Keuffer, Kaufmann) of Bonn. He studied in Bonn from 1866 to 1869 with Friedrich August Kekule von Stradonitz, Hans Heinrich Landolt and Gerhard vom Rath, receiving his doctorate for the dissertation “Die Reduction des Nitrobenzols durch Chlor-und Bromwasserstoff.” He spent an additional year studying at Göttingen in 1870. In 1871 Baumhauer became a teacher at the Technical University in Frankenberg, Saxony. After a short period of teaching at the Handelsschule in Hildesheim in 1872, he became a chemistry teacher from 1873 to 1896 at the agricultural school of Lüdinghausen, Westphalia. From 1895 to 1925 he was professor of mineralogy and after 1906/1907 also a professor of inorganic chemistry in Freiburg, Switzerland. He was appointed Director of the newly created Department of Mineralogy at the University of Freiburg in 1896, and led the Freiburger Institut für Mineralogie until 1925. In 1870 he wrote about the relationship between atomic weights and the properties of elements, and proposed his own periodic system on spirals based on increasing atomic weights. He also wrote textbooks on inorganic chemistry (1884), organic chemistry (1885), and mineralogy (1884). He was well-known for his book Das Reich der Kristalle ("The Kingdom of Crystals", 1889). He evaluated etching figures on crystals and made studies on minerals from dolomite and new minerals. The etching method he developed contributed to the understanding of crystalline structures. His book Die Resultate der Aetzmethode ("The results of the Aetz method", 1894) was the standard resource on this method until 1927. He was the first to introduce the idea of polytypism in minerals. 
Baumhauer was the first to describe the mineral Rathite which he named for Gerhard vom Rath. He also discovered Seligmannite, which was named in honor of Gustav Seligmann. A mineral is named in his honor as well: the rare dark gray lead-arsenic-sulphide Baumhauerite (Pb3As4S9), which is found in the Lengenbach Quarry in Binntal, Switzerland. Baumhauer's collection of minerals from the Binntal, containing more than 750 pieces as well as handwritten observation journals, correspondence, and other materials, is held by the Freiburger Institut für Mineralogie. His collection helped to establish the reputation of the Institut. Baumhauer became a member of the Mineralogical Society of St. Petersburg in 1878, an honorary member of the Mineralogical Society of Great Britain and Ireland in 1879, a member of the Mineralogical Society of London in 1905 and a member of the Leopoldina or German Academy of Natural Scientists in 1926. References 1848 births 1926 deaths Members of the German National Academy of Sciences Leopoldina People involved with the periodic table
Heinrich Adolph Baumhauer
Chemistry
https://en.wikipedia.org/wiki/3C%20223
3C 223 is a Seyfert galaxy located in the constellation of Leo Minor. It hosts a Type 2 quasar nucleus, found to be radio-loud with a rare, Compton-thick active galactic nucleus. 3C 223 is also a radio galaxy. With a projected size of ≥1 megaparsec, it is classified as a giant radio source by researchers who presented Very Large Array images. Based on spectral study results, the radio source of 3C 223 is found to be relatively young. References External links www.jb.man.ac.uk/atlas/ (J. P. Leahy) 223 Radio galaxies Seyfert galaxies 3C 223 Leo Minor
3C 223
Astronomy
https://en.wikipedia.org/wiki/Service-oriented%20development%20of%20applications
In the field of software application development, service-oriented development of applications (or SODA) is a way of producing service-oriented architecture applications. The term SODA was first used by the Gartner research firm. SODA represents one possible activity for a company to engage in when making the transition to service-oriented architecture (SOA). However, it has been argued that an overreliance on SODA can reduce overall system flexibility, reuse, and business agility. This danger is greater for sites that use an application server, which could diminish flexibility in the redeployment and composition of services. See also Enterprise service bus Service-oriented modeling References External links Gartner articles on the ROI aspects of SODA (Registration and fee required.) Pillars of Service-Oriented development What's the Big Deal About SOA Software architecture Service-oriented (business computing)
Service-oriented development of applications
Technology
https://en.wikipedia.org/wiki/Homolysis%20%28chemistry%29
In chemistry, homolysis or homolytic fission is the dissociation of a molecular bond by a process where each of the fragments (an atom or molecule) retains one of the originally bonded electrons. During homolytic fission of a neutral molecule with an even number of electrons, two radicals will be generated. That is, the two electrons involved in the original bond are distributed between the two fragment species. Bond cleavage is also possible by a process called heterolysis. The energy involved in this process is called bond dissociation energy (BDE). BDE is defined as the "enthalpy (per mole) required to break a given bond of some specific molecular entity by homolysis," symbolized as D. BDE is dependent on the strength of the bond, which is determined by factors relating to the stability of the resulting radical species. Because of the relatively high energy required to break bonds in this manner, homolysis occurs primarily under certain circumstances:
Light (i.e. ultraviolet radiation)
Heat
Certain intramolecular bonds, such as the O–O bond of a peroxide, are sufficiently weak to spontaneously homolytically dissociate near room temperature. Most bonds homolyse at temperatures above 200 °C. Adenosylcobalamin is the cofactor which creates the deoxyadenosyl radical by homolytic cleavage of a cobalt-carbon bond in reactions catalysed by methylmalonyl-CoA mutase, isobutyryl-CoA mutase and related enzymes. This triggers rearrangement reactions in the carbon framework of the substrates on which the enzymes act. Factors that drive homolysis Homolytic cleavage is driven by the ability of a molecule to absorb energy from light or heat, and the bond dissociation energy (enthalpy). If the radical species is better able to stabilize the radical, the energy of the SOMO will be lowered, as will the bond dissociation energy.
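The link between BDE and how readily a bond homolyses can be made concrete with a few representative values. This is a small Python sketch; the numbers are approximate literature values in kJ/mol and vary slightly between sources.

```python
# Approximate homolytic bond dissociation energies in kJ/mol (illustrative
# values from standard tables; sources differ by a few kJ/mol).
bde_kj_per_mol = {
    "HO-OH (peroxide O-O)": 213,
    "I-I": 151,
    "Cl-Cl": 243,
    "H3C-H (methane C-H)": 439,
}

# A lower BDE means the bond homolyses more readily, which is why peroxide
# O-O and I-I bonds dissociate under mild heat or light while C-H bonds
# require far harsher conditions.
easiest_first = sorted(bde_kj_per_mol, key=bde_kj_per_mol.get)
```

Sorting by BDE reproduces the qualitative ordering discussed in the text: weak O-O and halogen-halogen bonds homolyse under mild conditions, while strong C-H bonds do not.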
Bond dissociation energy is determined by multiple factors:
Electronegativity: Less electronegative atoms are better stabilizers of radicals, meaning that a bond between two electronegative atoms will have a higher BDE than a similar molecule with two less electronegative atoms.
Polarizability: The larger the electron cloud, the better an atom can stabilize the radical (e.g. iodine is very polarizable and a radical stabilizer).
Orbital hybridization: The s-character of an orbital relates to how close electrons are to the nucleus. In the case of a radical, s-character more specifically relates to how close the single electron is to the nucleus. Radicals decrease in stability as they are closer to the nucleus, because the electron affinity of the orbital increases. As a general rule, hybridizations minimizing s-character increase the stability of radicals and decrease the bond dissociation energy (e.g. sp3 hybridization is most stabilizing).
Resonance: Radicals can be stabilized by the donation of negative charge from resonance, or in other words, electron delocalization.
Hyperconjugation: Carbon radicals are stabilized by hyperconjugation, meaning that more substituted carbons are more stable, and hence have lower BDEs. In 2005, Gronert proposed an alternative hypothesis involving the relief of substituent group steric strain (as opposed to the previously accepted paradigm, which suggests that carbon radicals are stabilized via alkyl groups).
The captodative effect: Radicals can be stabilized by a synergistic effect of both electron-withdrawing group and electron-donating group substituents. Electron-withdrawing groups often contain empty π* orbitals that are low in energy and overlap with the SOMO, creating two new orbitals: one that is lower in energy and stabilizing to the radical, and an empty higher energy orbital.
Similarly, electron-donating orbitals combine with the radical SOMO, allowing a lone pair to lower in energy and the radical to enter the new higher energy orbital. This interaction is net stabilizing. See also Alpha cleavage References Chemical reactions
Homolysis (chemistry)
Chemistry
https://en.wikipedia.org/wiki/Triangle%20of%20death%20%28Italy%29
The triangle of death is an area approximately 25 km northeast of the city of Naples in the Province of Naples, Campania, Italy, that comprises the comuni of Acerra, Nola and Marigliano. This area contains the largest illegal waste dump in Europe due to a waste management crisis in the 1990s and 2000s. The region has experienced a rise in cancer-related mortality that is linked to exposure to pollution from the illegal waste disposal by the Camorra criminal organization after regional landfills had been filled to capacity. The phenomenon of widespread environmental crime perpetrated by criminal syndicates like the Camorra and 'Ndrangheta has given rise to the term "ecomafia". Etymology The term "triangle of death" was first used with regard to the region in a September 2004 scientific publication in the Lancet Oncology. Overview An estimated 550,000 people live in the triangle of death. The annual death rate per 100,000 inhabitants from liver cancer is approximately 38.4 for men and 20.8 for women in this area, as compared to the national average of 14. The death rate for bladder cancer and cancer of the central nervous system was also higher than the national average. The high death rates from cancers pointed towards the presence of illegal and improper hazardous waste disposal by various organized crime groups including the Camorra. The 2004 Lancet Oncology article noted, "Today, the difference between lawful management of waste and illegal manipulation with regard to their compliance with health regulations is very narrow, and the health risks are rising... The 5000 illegal or uncontrolled landfill sites in Italy drew particular criticism; Italy has already been warned twice for flouting the Hazardous Waste Directive and the Landfill Directive, and the EU has now referred Italy to the European Court of Justice for further action." The report was met with criticism by the National Research Council, which dismissed the methods used by Senior and Mazza as biased. 
Despite this, it sparked the first interest and concern into this matter, and has become the most cited source of evidence throughout the crisis. Though some media outlets report France and Germany as waste sources, the EU has remained silent as to the sources of the waste in its criticism and demands of Italy. Illegal toxic waste dumping By February 1994, several regional landfills in Campania had become overfilled, and Prime Minister Carlo Azeglio Ciampi declared a state of emergency and created the Committee for the Waste Emergency in Campania (Commissariato di Governo per l'emergenza rifiuti in Campania). By December 1999, all regional landfills had reached capacity. Reports in 2008 stated that the crisis was caused at least in part by the Camorra, the powerful Campania-based mafia, which created a lucrative business in the municipal waste disposal sector, mostly in the triangle of death. With the complicity of industrial companies, the illegal dumpers frequently mix heavy metals, industrial waste, and chemicals and household waste together, and then dump them near roads and burn them to avoid detection, leading to severe soil and air pollution. According to Giacomo D'Alisa et al., "the situation worsened during this period as the Camorra diversified their illegal waste disposal strategy: 1) transporting and dumping hazardous waste in the countryside by truck; 2) dumping waste in illegal caves or holes; 3) mixing toxic waste with textiles to avoid explosions and then burning it; and 4) mixing toxic with urban waste for disposal in landfills and incinerators." A Camorra member, Nunzio Perella was arrested in 1992, and began collaborating with authorities; he had stated "the rubbish is gold." The boss of the Casalesi clan, Gaetano Vassallo, admitted to systematically working for 20 years to bribe local politicians and officials to gain their acquiescence to dumping toxic waste. 
Giorgio Napolitano, then President of Italian Republic, said in June 2008: The triangle of death waste crisis The triangle of death and the waste management crisis are primarily a result of government failure to control illegal waste dumping. The government had attempted to mandate recycling and waste management programs, but were unable to, causing the expansion of opportunities for illegal activities, which caused further barriers to solve the waste crisis. Pollutants such as dioxins are found in the area, particularly around Acerra, as well as illegal waste disposal, even in the business district of Montefibre. As early as 1987, a decree of the Ministry of Environment marked Acerra "at high risk of environmental crisis". High levels of polychlorinated biphenyls (PCBs) were detected both in the soil and in the inhabitants of the region. It is hypothesized that industrial slurry originating from Porto Marghera (industrial docklands near Venice) was disguised as compost and spread on fields in the Acerra countryside by the Casalesi clan, often with help from the landowners. In one case, a company had its assets seized during a 2006 investigation in which it was alleged that the company had illegally disposed of waste from industries in the regions of Veneto and Tuscany in the territories of Bacoli, Giugliano and Qualiano. Approximately one million tonnes of toxic waste are said to have been disposed of, earning €27 million. The company was already the subject of a 2003 investigation. In another case, a tank full of toxic substances was found buried in an illegal dump, in Marigliano. The illegal burning of waste, for example to recover copper from wiring, is known to release dioxins into the atmosphere. Such fires are easily hidden among legitimate incineration resulting from the more general waste disposal problem, and the illegal burning of hazardous materials was particularly noted during 2007 and 2008. 
The presence of fires in the north area of Naples led author Roberto Saviano to use Terra dei fuochi ("Land of pyres") as a chapter title in his book Gomorrah. In 2000, a Parliamentary Commission inquiry about waste discovered some 800,000 tonnes of mud in Pianura landfill, coming from ACNA of Cengio in Naples, and the Italian Procura della Repubblica found (through telephone wiretappings) some irregularities in the waste disposal into the landfill of Villaricca, managed by FIBA (a company of the Impregilo group). Opposition to landfills Between 2007 and 2008, the waste commissioner Guido Bertolaso, (the head of the civil protection department), planned to open a landfill but this was opposed by residents of Chiaiano. There was similar resistance in Pianosa to reopening a closed landfill proposed by government commissioner Giovanni De Gennaro. Some of the protests turned violent, and in May 2008, it became a penal felony to protest in the vicinity of landfills, incinerators or any plant related to waste management. It is alleged that there was collusion between local political interests and organised crime over building interests. By July 17, 2008, Berlusconi declared that the emergency had ended. The incinerator of Acerra has also received backlash in the local area. In 2009, the Acerra incineration facility was completed at a cost of over €350 million. The incinerator burns 600,000 tons of waste per year to produce refuse-derived fuel. The energy produced from the facility is enough to power 200,000 households per year. Epidemiological research In 2007, research conducted by the World Health Organization, Italian Istituto Superiore di Sanità, Consiglio Nazionale delle Ricerche and Campania Region collected data on cancer and congenital abnormalities in 196 municipalities covering the period between 1994 and 2002 found abnormally high disease incidence. These abnormal patterns may correlate to areas where there are uncontrolled waste sites. 
However, this work also highlighted the difficulty in determining causality and in establishing a link between increased death and malformation rates and waste disposal. After the Senior and Mazza study, several other studies have been conducted to attempt to definitively link elevated cancer rates to waste exposure. The government created a waste-exposure index that classifies areas of the Campania region as high (5 on the index) or low (1 on the index) risk based on the type of wastes present in surrounding dumping sites, a total waste volume greater than 10,000 cubic metres, and the likelihood of releases to water, soil and air. Statistically significant excess relative risks were found for several cancer types in the triangle of death; however, the methods often struggle to account for lifestyle confounders, such as tobacco consumption and occupation, which could skew the results. A US Navy study denied any real ill effects to on-base personnel while nevertheless advising its off-base personnel to drink bottled water, citing polluted wells. The US Navy report denied any signs of nuclear waste dumping and instead related the traces of uranium to volcanic activity. Pollution and agriculture exports More than half of the regional land in Campania is used for agriculture, and therefore the economy of the region is adversely affected by the waste crisis. Between January and March 2007, 30,000 kilograms of waste were burned on agricultural land, with a revenue of more than €118,000. In the region, over 12,000 cattle, river buffaloes and sheep had been culled before 2006. High levels of mortality and abnormal foetuses were also recorded in farms in Acerra linked to elevated levels of dioxin. Local studies have shown higher than permissible levels of lead in vegetables grown in the area. The government blames the Mafia's illegal garbage disposal racket. In March 2008, dioxins were found in buffalo milk from farms in Caserta. 
While only 2.8% of farms in Campania were affected, the sale of dairy products from Campania collapsed in both domestic and global markets. A chain reaction followed, in which several countries including Japan, China, Russia and Germany took various measures ranging from the mere raising of the attention threshold to the suspension of imports. Italian institutions almost immediately activated a series of checks, partly in response to pressing requests from the European Union, and in some cases suspended the sale of dairy products from the incriminated provinces. Tests had shown levels of dioxins higher than normal in at least 14% of samples taken in the provinces of Naples, Caserta and Avellino. In the provinces of Salerno and Benevento, no test indicated the presence of dioxins. In any case, the contamination affected only to a limited extent the farms used to produce PDO buffalo mozzarella. On 19 April, China definitively removed the ban on mozzarella, originally imposed on 28 March 2008, and tests held in December 2013 in Germany on behalf of four Italian consumer associations showed dioxin and heavy metal levels at least five times lower than the legal limit. See also Cancer Alley Changzhou Foreign Languages School controversy Smokey Mountain and the Payatas dumpsite Valley of the Drums References External links La Terra dei fuochi, website showing claims on illegal waste disposal in Campania La Terra dei fuochi, website on "triangle of poisons" Giugliano-Qualiano-Villaricca Kathryn Senior and Alfredo Mazza, "Italian 'Triangle of death' linked to waste crisis", The Lancet Oncology, Volume 5, Issue 9, September 2004, pages 525–527. Translation by The Lancet Oncology from the site of Centro Nazionale di Epidemiologia, Sorveglianza e Promozione della Salute Fabrizio Bianchi et al., "Italian 'Triangle of death'", The Lancet Oncology, Volume 12, Issue 5, December 2004, page 710. 
Report by the World Health Organization, Italian Health Institute, Consiglio Nazionale delle Ricerche and Regione Campania. Repubblica Radio TV, 2007-12-28. La 7: Il cancro di Napoli, 2007-12-12; Exit, 2007-12-18; and a report of 2007-09-22. Tg1: interview of Roberto Saviano, 2008-03-01 (evening edition at 8:00 p.m.). Sat 2000: interview of Antonio Marfella, oncologist, on Formato famiglia, 2007-12-20. Chemical industry in Italy Pollution in Italy Soil contamination Waste management in Italy History of the Camorra in Italy Cancer clusters Scandals in Italy
Triangle of death (Italy)
Chemistry,Environmental_science
https://en.wikipedia.org/wiki/Mollifier
In mathematics, mollifiers (also known as approximations to the identity) are particular smooth functions, used for example in distribution theory to create sequences of smooth functions approximating nonsmooth (generalized) functions, via convolution. Intuitively, given a (generalized) function, convolving it with a mollifier "mollifies" it, that is, its sharp features are smoothed, while still remaining close to the original. They are also known as Friedrichs mollifiers after Kurt Otto Friedrichs, who introduced them. Historical notes Mollifiers were introduced by Kurt Otto Friedrichs in his paper, which is considered a watershed in the modern theory of partial differential equations. The name of this mathematical object has a curious genesis, and Peter Lax tells the story in his commentary on that paper published in Friedrichs' "Selecta". According to him, at that time, the mathematician Donald Alexander Flanders was a colleague of Friedrichs; since he liked to consult colleagues about English usage, he asked Flanders for advice on naming the smoothing operator he was using. Flanders was a modern-day puritan, nicknamed by his friends Moll after Moll Flanders in recognition of his moral qualities: he suggested calling the new mathematical concept a "mollifier" as a pun incorporating both Flanders' nickname and the verb 'to mollify', meaning 'to smooth over' in a figurative sense. Previously, Sergei Sobolev had used mollifiers in his epoch-making 1938 paper, which contains the proof of the Sobolev embedding theorem; Friedrichs himself acknowledged Sobolev's work on mollifiers, stating "These mollifiers were introduced by Sobolev and the author...". It must be pointed out that the term "mollifier" has undergone linguistic drift since the time of these foundational works: Friedrichs defined as "mollifier" the integral operator whose kernel is one of the functions nowadays called mollifiers. 
However, since the properties of a linear integral operator are completely determined by its kernel, the name mollifier was inherited by the kernel itself as a result of common usage.

Definition

Modern (distribution based) definition
Let φ be a smooth function on ℝⁿ, n ≥ 1, and put φ_ε(x) = ε⁻ⁿ φ(x/ε) for ε > 0. Then φ is a mollifier if it satisfies the following three requirements:
it is compactly supported,
∫_ℝⁿ φ(x) dx = 1,
lim_{ε→0} φ_ε = δ, where δ is the Dirac delta function, and the limit must be understood as taking place in the space of Schwartz distributions.
The function φ may also satisfy further conditions of interest; for example, if it satisfies φ(x) ≥ 0 for all x ∈ ℝⁿ, then it is called a positive mollifier, and if it satisfies φ(x) = μ(|x|) for some infinitely differentiable function μ : ℝ⁺ → ℝ, then it is called a symmetric mollifier.

Notes on Friedrichs' definition
Note 1. When the theory of distributions was still not widely known nor used, the third property above was formulated by saying that the convolution of the function φ_ε with a given function belonging to a proper Hilbert or Banach space converges as ε → 0 to that function: this is exactly what Friedrichs did. This also clarifies why mollifiers are related to approximate identities.
Note 2. As briefly pointed out in the "Historical notes" section of this entry, originally, the term "mollifier" identified the following convolution operator:
(J_ε u)(x) = ∫ φ_ε(x − y) u(y) dy,
where ε > 0 and φ is a smooth function satisfying the first three conditions stated above and one or more supplementary conditions as positivity and symmetry.

Concrete example
Consider the bump function φ(x) of a variable x in ℝⁿ defined by
φ(x) = C exp(1/(|x|² − 1)) if |x| < 1, and φ(x) = 0 if |x| ≥ 1,
where the numerical constant C ensures normalization. This function is infinitely differentiable, non analytic with vanishing derivative for |x| = 1. φ can be therefore used as mollifier as described above: one can see that φ(x) defines a positive and symmetric mollifier.
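Mollification can also be illustrated numerically. The following is a minimal NumPy sketch in one dimension using the standard bump function; the grid spacing and the choice ε = 0.1 are arbitrary, and the discrete convolution only approximates the continuous one.

```python
import numpy as np

def bump(x):
    # Standard bump function: exp(1/(x^2 - 1)) for |x| < 1, zero otherwise.
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(1.0 / (x[inside] ** 2 - 1.0))
    return out

dx = 1e-3
grid = np.arange(-2.0, 2.0, dx)
c = 1.0 / (bump(grid).sum() * dx)      # normalizing constant so the integral is 1

def mollifier(x, eps):
    # phi_eps(x) = (1/eps) * phi(x/eps) in one dimension
    return c * bump(x / eps) / eps

# Mollify a step function by discrete convolution: the jump is smoothed
# over a window of width about 2*eps, while values away from the jump
# are left essentially unchanged.
eps = 0.1
x = np.arange(-1.0, 1.0, dx)
step = (x >= 0).astype(float)
kernel = mollifier(np.arange(-eps, eps + dx, dx), eps)
smoothed = np.convolve(step, kernel * dx, mode="same")
```

The mollified step is a smooth function that equals 0 well to the left of the jump, 1 well to the right, and transitions through 1/2 near the origin, which is the one-dimensional picture of the smoothing property stated below.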
Properties

All properties of a mollifier are related to its behaviour under the operation of convolution: we list the following ones, whose proofs can be found in every text on distribution theory.

Smoothing property
For any distribution T, the following family of convolutions indexed by the real number ε > 0,
T_ε = T ∗ φ_ε,
where ∗ denotes convolution, is a family of smooth functions.

Approximation of identity
For any distribution T, the following family of convolutions indexed by the real number ε > 0 converges to T:
lim_{ε→0} T_ε = lim_{ε→0} T ∗ φ_ε = T.

Support of convolution
For any distribution T,
supp T_ε = supp(T ∗ φ_ε) ⊆ supp T + supp φ_ε,
where supp indicates the support in the sense of distributions, and + indicates their Minkowski addition.

Applications

The basic application of mollifiers is to prove that properties valid for smooth functions are also valid in nonsmooth situations.

Product of distributions
In some theories of generalized functions, mollifiers are used to define the multiplication of distributions. Given two distributions S and T, the limit of the product of the smooth function obtained from one operand via mollification, with the other operand defines, when it exists, their product in various theories of generalized functions:
S · T := lim_{ε→0} (S ∗ φ_ε) · T.

"Weak=Strong" theorems
Mollifiers are used to prove the identity of two different kinds of extensions of differential operators: the strong extension and the weak extension. The paper by Friedrichs which introduces mollifiers illustrates this approach.

Smooth cutoff functions
By convolution of the characteristic function of the unit ball B₁ = {x : |x| < 1} with the smooth function φ_{1/2} (defined as above with ε = 1/2), one obtains the function
χ_{B₁} ∗ φ_{1/2},
which is a smooth function equal to 1 on B_{1/2} = {x : |x| < 1/2}, with support contained in B_{3/2} = {x : |x| < 3/2}. This can be seen easily by observing that if |x| ≤ 1/2 and |y| ≤ 1/2 then |x − y| ≤ 1. Hence for |x| ≤ 1/2,
(χ_{B₁} ∗ φ_{1/2})(x) = ∫ χ_{B₁}(x − y) φ_{1/2}(y) dy = 1.
One can see how this construction can be generalized to obtain a smooth function identical to one on a neighbourhood of a given compact set, and equal to zero in every point whose distance from this set is greater than a given ε. 
Such a function is called a (smooth) cutoff function; these are used to eliminate singularities of a given (generalized) function via multiplication. They leave unchanged the value of the multiplicand on a given set, but modify its support. Cutoff functions are used to construct smooth partitions of unity.

See also

Approximate identity
Bump function
Convolution
Distribution (mathematics)
Generalized function
Kurt Otto Friedrichs
Non-analytic smooth function
Sergei Sobolev
Weierstrass transform

Notes

References

. The first paper where mollifiers were introduced.
. A paper where the differentiability of solutions of elliptic partial differential equations is investigated by using mollifiers.
. A selection from Friedrichs' works with a biography and commentaries by David Isaacson, Fritz John, Tosio Kato, Peter Lax, Louis Nirenberg, Wolfgang Wasow, Harold Weitzner.
.
.
. The paper where Sergei Sobolev proved his embedding theorem, introducing and using integral operators very similar to mollifiers, without naming them.

Functional analysis Smooth functions Schwartz distributions
Mollifier
Mathematics
1,396
51,514,659
https://en.wikipedia.org/wiki/Influenza%20and%20Other%20Respiratory%20Viruses
Influenza and Other Respiratory Viruses is a peer-reviewed scientific journal covering virology, published by John Wiley & Sons for the International Society for Influenza and other Respiratory Virus Diseases. As of 2018, the editor is Benjamin Cowling. According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.380. Influenza and Other Respiratory Viruses is the first journal to specialise exclusively in influenza and other respiratory viruses and strives to play a key role in the dissemination of information in this broad and challenging field. It is aimed at laboratory and clinical scientists, public health professionals, and others around the world involved in a broad range of activities in this field. Topics covered include:

surveillance
epidemiology
prevention by vaccines
prevention and treatment by antivirals
clinical studies
public health & pandemic preparedness
basic scientific research
transmission between animals and humans

References

Virology journals
Wiley (publisher) academic journals
Influenza and Other Respiratory Viruses
Biology
187
68,653,649
https://en.wikipedia.org/wiki/Mary%20Hudson%20%28scientist%29
Mary Hudson (born January 6, 1949) is the Eleanor and Kelvin Smith Distinguished Professor of Physics at Dartmouth College. She is known for her research on the weather patterns that occur due to solar eruptions. She was elected a fellow of the American Geophysical Union in 1984.

Education and career

While in college, Hudson worked for the McDonnell-Douglas Corporation as a mathematician and earned her B.S. from the University of California, Los Angeles (UCLA) in 1969. She then worked for the Aerospace Corporation while working on her M.S. degree, which she earned from UCLA in 1971. She earned her Ph.D. in 1974 from the University of California, Los Angeles. Following her Ph.D., Hudson joined the University of California, Berkeley, where she remained until 1985, when she moved to Dartmouth College. In 1990 she was promoted to professor. From 2010 until 2016, she retained an affiliate position at the National Center for Atmospheric Research in the High Altitude Observatory.

Research

Hudson's interest in space developed as a child raised during the space race who had her own childhood telescope. Starting with her Ph.D. research, Hudson worked on the spread F problem, a phenomenon known to impact the transmission of signals by satellites. During her time at the University of California, Berkeley, Hudson worked on the team led by Forrest Mozer that made the first electric field measurements in the ionosphere using the S3-3 satellite; the electrostatic shocks they measured accelerate electrons to make the auroras that can be seen at night in high latitudes. Hudson's research on geomagnetic storms, disruptions in the Earth's magnetosphere, establishes the conditions that cause radiation belts to form during these storms. From 2002 until 2013, Hudson co-led the National Science Foundation-funded Center for Integrated Space Weather Modeling.
Her research on this project centered on magnetosphere physics, especially the trapping of solar energetic particles, which has consequences for technology used on Earth. Hudson has also examined the movement of particles in radiation belts, the Van Allen radiation belts, that surround the Earth.

Selected publications

Awards and honors

In 1984, Hudson was elected a fellow of the American Geophysical Union and awarded the James B. Macelwane Medal, thereby becoming the first woman to receive the award. She gave the Van Allen Lecture for the American Geophysical Union in 2006, and received the James A. Van Allen Space Environments Award from the American Institute of Aeronautics and Astronautics in 2012. In 2017, she received the John Adam Fleming Medal from the American Geophysical Union.

References

External links

University of California, Los Angeles alumni Dartmouth College faculty National Center for Atmospheric Research faculty Fellows of the American Geophysical Union 20th-century women physicists Theoretical physicists Space scientists 1949 births Living people 21st-century women physicists
Mary Hudson (scientist)
Physics
560
35,466,829
https://en.wikipedia.org/wiki/Combined%20linear%20congruential%20generator
A combined linear congruential generator (CLCG) is a pseudo-random number generator algorithm based on combining two or more linear congruential generators (LCGs). A traditional LCG has a period which is inadequate for complex system simulation. By combining two or more LCGs, random numbers with a longer period and better statistical properties can be created. The algorithm is defined as:

Zi ≡ (Σ_{j=1..k} (−1)^(j−1) Xi,j) (mod m1 − 1)

where: m1 is the "modulus" of the first LCG, Xi,j is the ith input from the jth LCG, and Zi is the ith generated random integer, with:

Ri = Zi/m1 for Zi > 0, and Ri = (m1 − 1)/m1 for Zi = 0

where Ri is a uniformly distributed random number between 0 and 1.

Derivation

If Wi,1, Wi,2, ..., Wi,k are any independent, discrete random variables and one of them is uniformly distributed from 0 to m1 − 2, then Zi is uniformly distributed between 0 and m1 − 2, where:

Zi ≡ (Σ_{j=1..k} Wi,j) (mod m1 − 1)

Let Xi,1, Xi,2, ..., Xi,k be outputs from k LCGs. If Wi,j is defined as Xi,j − 1, then Wi,j will be approximately uniformly distributed from 0 to mj − 2. The coefficient "(−1)^(j−1)" implicitly performs the subtraction of one from Xi,j.

Properties

The CLCG provides an efficient way to calculate pseudo-random numbers. The LCG algorithm is computationally inexpensive to use. The results of multiple LCG algorithms are combined through the CLCG algorithm to create pseudo-random numbers with a longer period than is achievable with the LCG method by itself. The period of a CLCG is the least common multiple of the periods of the individual generators, which are one less than the moduli. Since all the moduli are odd primes, the periods are even and thus share at least a common divisor of 2, but if the moduli are chosen so that 2 is the greatest common divisor of each pair, this will result in a period of:

P = (m1 − 1)(m2 − 1)⋯(mk − 1)/2^(k−1)

Example

The following is an example algorithm designed for use in 32-bit computers: LCGs are used with the following properties: The CLCG algorithm is set up as follows: The maximum period of the two LCGs used is calculated using the formula P = m − 1. This equates to about 2.1×10^9 for the two LCGs used.
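The two-generator scheme can be sketched in Python. The first generator's constants (a = 40,014, m = 2,147,483,563) are the ones the article quotes for the TI-30X IIS; the second generator's constants (a = 40,692, m = 2,147,483,399) are an assumption here, taken from L'Ecuyer's well-known 1988 combined generator:

```python
# Combined LCG of two multiplicative LCGs (L'Ecuyer-style parameters).
M1, A1 = 2147483563, 40014   # first LCG: modulus and multiplier
M2, A2 = 2147483399, 40692   # second LCG (assumed constants, see lead-in)

def make_clcg(seed1=1, seed2=1):
    x1, x2 = seed1, seed2
    def rand():
        nonlocal x1, x2
        x1 = (A1 * x1) % M1              # step first LCG
        x2 = (A2 * x2) % M2              # step second LCG
        z = (x1 - x2) % (M1 - 1)         # Z_i = (X_i,1 - X_i,2) mod (m1 - 1)
        # Map to (0, 1): Z_i / m1 for Z_i > 0, (m1 - 1)/m1 for Z_i = 0.
        return z / M1 if z > 0 else (M1 - 1) / M1
    return rand

rand = make_clcg(12345, 67890)
sample = [rand() for _ in range(5)]      # uniform floats strictly inside (0, 1)
assert all(0.0 < u < 1.0 for u in sample)
```

Seeds must lie in [1, m − 1] for each generator, since these LCGs are purely multiplicative and a zero state would be absorbing.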
The CLCG shown in this example has a maximum period of (m1 − 1)(m2 − 1)/2 ≈ 2.3×10^18. This represents a tremendous improvement over the period of the individual LCGs: the combined method increases the period by 9 orders of magnitude. Surprisingly, the period of this CLCG may not be sufficient for all applications. Other algorithms using the CLCG method have been used to create pseudo-random number generators with periods as long as . The former of the two generators, using b = 40,014 and m = 2,147,483,563, is also used by the Texas Instruments TI-30X IIS scientific calculator.

See also

Linear congruential generator
Wichmann–Hill, a specific combined LCG proposed in 1982

References

External links

An overview of use and testing of pseudo-random number generators

Pseudorandom number generators Modular arithmetic
Combined linear congruential generator
Mathematics
651
61,594,707
https://en.wikipedia.org/wiki/Callitrichine%20gammaherpesvirus%203
Callitrichine gammaherpesvirus 3 (CalHV-3) is a species of virus that infects marmosets. It is in the genus Lymphocryptovirus, subfamily Gammaherpesvirinae, family Herpesviridae, and order Herpesvirales. References Gammaherpesvirinae
Callitrichine gammaherpesvirus 3
Biology
72
35,952,110
https://en.wikipedia.org/wiki/Dynix%20%28software%29
The Dynix Automated Library System was a popular integrated library system, with a heyday from the mid-1980s to the late 1990s. It was used by libraries to replace the paper-based card catalog and track lending of materials from the library to patrons. First developed in 1983, it eventually became the most popular library automation software ever released, and was once near-ubiquitous in libraries boasting an electronic card catalog, peaking at over 5,000 installations worldwide in the late 1990s (including the United States' Library of Congress), with a market share of nearly 80%. Typical of 1980s software technology, Dynix had a character-based user interface, involving no graphics except ASCII art/ANSI art boxes.

History

The first installation, in 1983, was at a public library in Kershaw County, South Carolina. The library actually contracted for the system before the software was written. In the words of Paul Sybrowsky, founder of Dynix: "There was no software, no product. Undaunted, we pitched our plan to create an automated library system to a public library in South Carolina. We didn't have a product, but we said 'You need a system and we'd like to bid on it,' and showed them our business plan." The original Dynix library system was based on software developed at CTI (Computer Translation Incorporated), which was a development project of Brigham Young University, presided over by Gary Carlson. The initial search engine tools, FSELECT and FSORT, were written for the PICK operating system under contract for CTI by Walter Nicholes as part of a bid for a research support system for AT&T laboratories. Paul Sybrowsky was an employee of CTI. (As was Bruce Park, founder of ALII library systems, later GEAC Library Systems.) Both library systems (Dynix and ALII) were based on these PICK-based search engine tools. In 1984, Eyring Research Institute acquired 80 percent of Dynix.
Then, in 1986, the executives and employees bought out Eyring Research's share and became independent again. In 1987, a New Jersey firm called the Ultimate Corporation purchased a minority share of Dynix. Dynix use grew quickly in the early and mid-1990s. In October 1989, Dynix had just 292 installations. Fifteen months later, in January 1991, it was up 71% to 500 installations. A year and a half later, in June 1993, Dynix had doubled its installed base, signing its 1,000th contract. At its peak in the late 1990s, Dynix had over 5,000 libraries using its system, amounting to an 80% market share. The company selling the Dynix software changed hands several more times. When mostly independent, it was called Dynix Systems, Inc. In January 1992, Dynix Systems was acquired by Ameritech. Dynix and NOTIS Systems (maker of NOTIS), which Ameritech purchased in October 1991, were consolidated into Ameritech Library Services (ALS) in 1994. In November 1999, Ameritech sold Ameritech Library Services to a pair of investment companies, the 21st Century Group and Green Leaf Ridge Company, which rebranded ALS as epixtech. In 2003, epixtech reverted to using the Dynix name. The customer base for Dynix did not begin decreasing until 2000, at which point it started being replaced by Internet-based interfaces (so-called "Web PACs"). In 2003, it was reported that Dynix was being phased out by its manufacturer, and approaching "end-of-life" status in terms of functionality and support. By 2004, its market share was down to 62%, still a comfortable majority. In June 2005, SirsiDynix was formed by the merger of the Dynix Corporation and the Sirsi Corporation. Phase-outs of Dynix were constant in the late 2000s, and by the second decade of the 21st century, it was obsolete and remained in very few libraries. By mid-2013, only 88 libraries were on record as having Dynix installed. The majority of phase-outs took place between 2002 and 2007.
Special versions

At one point, Dynix was benchmarked supporting 1,600 terminals on a single system. This stability would later come in handy; the largest installations ever were the King County Library System in the greater Seattle area, which was largest by collection size (tens of millions of cataloged items), and New York Public Library in New York City, which covered the largest geographical area with 87 branches (requiring dumb terminals numbering into the thousands). Several specialized versions were released, all nearly identical to the mainstream version. For academic libraries, primarily K-12, there was Dynix Scholar (an Intel 80xxx-based microcomputer version of regular Dynix). For very small libraries, with perhaps only one or two terminals, there was Dynix Elite. The original Dynix system, as used in regular public libraries, was renamed Dynix Classic later in its lifespan to distinguish it from other Dynix products.

Technical details

Based around a relational database, Dynix was originally written in Pick/BASIC and run on the PICK operating system. In 1990, it was ported to VMark's uniVerse BASIC programming language, and run on Unix-based servers, with uniVerse acting as a PICK emulation layer between the software and the operating system. In the late 1990s, Dynix was once again re-ported, this time for Windows NT-based servers; again, uniVerse acted as a Pick emulator between the software and the operating system. Pick/BASIC and uniVerse BASIC are the same programming language, so porting Dynix did not require re-writing the source code. In the words of one Dynix developer, "[Dynix] was programmed in Pick/BASIC ... however, as it matured, it was written in uniVerse BASIC ... It was never re-written. That type of BASIC isn't easy to move to any other language. None other handles data as well. It's a very fast-compiled and -interpreted language, and frankly nothing matches it, then or now.
It's too bad that it (uniVerse BASIC) was so good, because it didn't make the transition to object-oriented Web-based technology in time to stay afloat." The software was originally written on computers made by The Ultimate Corp. of East Hanover, New Jersey, which ran Ultimate's proprietary implementation of the PICK operating system. Later, Dynix moved to IBM RISC/6000-based computers running AIX throughout the company, except in Training, which used SCO Unix. While most libraries purchased the same type of servers as Dynix was using, there were installations done on platforms such as DEC and MIPS, Sequent, Sequoia (which used a very expensive native PICK), HP's Unix servers, etc. The Dynix corp. could do software-only installs to any compliant Unix because of uniVerse's scalability and adaptability. Dynix was originally developed around the ADDS Viewpoint A2 terminal's escape sequences, because ADDS terminals were the de facto standard on the PICK-based mainframes on which Dynix was created. Shortly after Dynix started being deployed to libraries around the country, requests started coming back that alternate terminals be provided for patron use; children would bang on the keyboards or throw books at the terminals, or use unauthorized key sequences to mess up the programming. In response, Dynix asked Wyse to develop such a terminal; Wyse created the WY-30, which was a stripped-down version of the best-selling terminal ever made, the WY-60. The swivel base was removed so that the terminal sat flat on whatever surface it was placed on; what the unit now lacked in viewing-angle adjustability, it made up for in physical stability (it could not be knocked over by the force of a child). A specially-designed keyboard reduced the number of keys from 101 to 83, mainly by removing all the function keys; this was designed to keep users out of the internal setup functions and other parts of the software they "weren't supposed to be going". 
To maintain compatibility with how Dynix was already written, the WY-30 supported the ADDS Viewpoint A2 emulation, which was actually one of the only emulations on the terminal; the WY-30 had very few emulations compared to most Wyse products, and notably did not support VT100 or any other ANSI emulations. Years later, when the Dynix company was moving from Ultimate computers running Pick/OS to IBM computers running AIX and uniVerse, compatibility for VT100/102/340 terminals was added to the software; then, other models of Wyse terminal started coming into favor, such as the WY-60 and WY-150, which were easier on the eyes and hands than the WY-30 was. The complete Dynix Classic approached 900,000 lines of source code, and compiled at around 120 MB. It was distributed via tape drive, first on 1/2" reel-to-reel tape, then later 1/4" cartridge tapes for Dynix Elite users, and 8mm cartridges for everyone else. One reason for Dynix's success was that an entire library consortium could be run off of just one server, in one location, with one copy of the software. This meant that a library system with multiple branches—whether a large single-city system such as the one in New York City, or a consortium made of several small cities/towns banded together—could pool their funds and only have to purchase one server and one copy of the software. Each branch had its own Circulation module, but the actual catalog database was a single copy on one server in a central location. Each record had a line in it stating which actual branch the item belonged to, allowing users to request holds/transfers from another branch to their branch, as well as see whether it was checked in or out at its home branch. This saved a significant sum of money—millions of dollars, in the case of the largest installations—versus Dynix's competitors, who required a separate server and copy of the software in each library branch.
With the single copy of the Dynix software installed on a central server, both patrons and librarians could access it by using dumb terminals. The technology for linking the terminals to the server within each building, and linking the separate buildings (branches) together to the central server location, changed over time as technology progressed. The earliest method was to have the entire system connected via RS-232; there would be many muxes (statistical multiplexers) and many miles of serial lines. Muxes were the phone company's solution for connecting serial lines between branches. Later, dumb terminals were connected via RS-232 to a terminal server, which in turn connected via Ethernet to the branch's LAN. The separate branches would be connected to the central Dynix server via IP-based methods (the Internet). The latest installations used PCs running terminal emulation software, connecting to the Dynix server via telnet over the Internet. Dynix was made up of several different modules, each of which was purchased independently to create a scaled system based on the library's size and needs. A library could buy as few as two modules. The two basic modules were Cataloging ($15,000 + $1,500 annual maintenance) and Circulation ($12,000 + $1,200 annual maintenance). Some of the other modules included Kids' Catalog, Bookmobile, Homebound, Media Scheduling, Reserve Bookroom, TeleCirc, DebtCollect, Electronic Notification System, and Self Check-Out. A Dialcat/DialPac module was offered, allowing patrons with a modem and terminal emulation software to dial in from home and search the card catalog or renew books. Programs with a text-based interface, such as Dynix, are described as being either "menu-driven" or "command-line-driven", referring to how users interact with the software.
Dynix was actually a hybrid of both; the patrons used a menu-driven interface, where they would be given a numbered list of options, and simply have to key in the number of the option they wanted in order to navigate through the system. Unknown to the patrons, the librarians had the ability to manipulate the system in the command-line-driven way, by keying in special codes at the same prompts where patrons would key in menu item numbers. These codes, referred to as "dot commands" due to their structure of being a period followed by one or two letters (such as '.c' to switch between checkout and checkin screens in the Circulation module), allowed librarians access to advanced/hidden features of the Dynix system, and—along with password protection—prevented patrons from gaining unauthorized levels of access.

Gallery

See also

Wyse
Monochrome monitor
NOTIS
OPAC

References

External links

SirsiDynix, the current successor of the company that created Dynix

Library automation Library and information science software
Dynix (software)
Engineering
2,764
12,835,493
https://en.wikipedia.org/wiki/Julian%20Goldsmith
Julian Royce Goldsmith (1918–1999) was a mineralogist and geochemist at the University of Chicago (Moore, 1971). Goldsmith, along with colleague Fritz Laves, first defined the crystallographic polymorphism of alkali feldspar (Newton, 1989). Goldsmith also experimented on the temperature dependence of the solid solution between calcite and dolomite (Newton, 1989). Goldsmith's research also led him to experiment with the determination of the stability of intermediate structural states of albite (Newton, 1989). For his outstanding contributions to the study of mineralogy and geochemistry, Goldsmith was awarded the prestigious Roebling Medal by the Mineralogical Society of America in 1988 (Newton, 1989). The mineral julgoldite was named for him. References Moore, P.B. (1971) Julgoldite, the Fe +2-Fe+3 dominant pumpellyite. A new mineral from Långban, Sweden. Lithos 4, 93–99. Newton, R. (1989) Presentation of the Roebling Medal of the Mineralogical Society of America for 1988 to Julian R. Goldsmith. American Mineralogist, 74, 715–716. Edward J. Olsen, Memorial for Julian Royce Goldsmith, 1918–1999, American Mineralogist, Volume 85, pages 382–383, 2000 1918 births 1999 deaths American geochemists American mineralogists University of Chicago faculty Presidents of the Geochemical Society 20th-century American chemists
Julian Goldsmith
Chemistry
301
68,445,647
https://en.wikipedia.org/wiki/Positive%20element
In mathematics, an element of a *-algebra is called positive if it is the sum of elements of the form a*a.

Definition

Let A be a *-algebra. An element a ∈ A is called positive if there are finitely many elements b_1, …, b_n ∈ A so that a = b_1*b_1 + ⋯ + b_n*b_n. This is also denoted by a ≥ 0. The set of positive elements is denoted by A_+.

A special case of particular importance is the case where A is a complete normed *-algebra that satisfies the C*-identity (‖a*a‖ = ‖a‖²), which is called a C*-algebra.

Examples

The unit element e of a unital *-algebra is positive.
For each element a ∈ A, the elements a*a and aa* are positive by definition.
In case A is a C*-algebra, the following holds: Let a be a normal element; then, for every positive function f which is continuous on the spectrum of a, the continuous functional calculus defines a positive element f(a). Every projection, i.e. every element p for which p = p* = p² holds, is positive. For the spectrum of such an idempotent element, σ(p) ⊆ {0, 1} holds, as can be seen from the continuous functional calculus.

Criteria

Let A be a C*-algebra and a ∈ A. Then the following are equivalent:
For the spectrum, σ(a) ⊆ [0, ∞) holds and a is a normal element.
There exists an element b ∈ A such that a = b*b.
There exists a (unique) self-adjoint element c ∈ A such that a = c².
If A is a unital *-algebra with unit element e, then in addition the following statements are equivalent:
‖te − a‖ ≤ t for every t ≥ ‖a‖, and a is a self-adjoint element.
‖te − a‖ ≤ t for some t ≥ ‖a‖, and a is a self-adjoint element.

Properties

In *-algebras

Let A be a *-algebra. Then:
If a is a positive element, then a is self-adjoint.
The set of positive elements A_+ is a convex cone in the real vector space of the self-adjoint elements. This means that αa + βb is positive for all positive a, b and all α, β ≥ 0.
If a is a positive element, then b*ab is also positive for every element b.
For the linear span of the following holds: and

In C*-algebras

Let A be a C*-algebra. Then:
Using the continuous functional calculus, for every positive a and every n ∈ ℕ there is a uniquely determined positive b that satisfies bⁿ = a, i.e. a unique n-th root. In particular, a square root exists for every positive element.
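One concrete model of these criteria is the C*-algebra of n × n complex matrices, where the positive elements are exactly the positive-semidefinite matrices. A numerical sketch (numpy assumed; not part of the article) checking that a = b*b is self-adjoint with nonnegative spectrum, and constructing the unique positive square root via the functional calculus (applying t ↦ √t to the eigenvalues):

```python
import numpy as np

# Random complex matrix b; then a = b*b is positive by construction.
rng = np.random.default_rng(0)
b = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
a = b.conj().T @ b

assert np.allclose(a, a.conj().T)                 # a is self-adjoint
assert np.linalg.eigvalsh(a).min() >= -1e-10      # spectrum lies in [0, inf)

# Unique positive square root via the continuous functional calculus:
# diagonalize and apply t -> sqrt(t) to the (nonnegative) eigenvalues.
w, v = np.linalg.eigh(a)
root = v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T
assert np.allclose(root @ root, a)                # root squares back to a
```

The same eigenvalue construction implements f(a) for any continuous f on the spectrum, which is all the functional calculus amounts to in the finite-dimensional case.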
Since for every a ∈ A the element a*a is positive, this allows the definition of a unique absolute value: |a| = (a*a)^(1/2). For every real number there is a positive element for which holds for all . The mapping is continuous. Negative values for are also possible for invertible elements. Products of commuting positive elements are also positive: so if ab = ba holds for positive a and b, then ab is positive. Each element can be uniquely represented as a linear combination of four positive elements. To do this, the element is first decomposed into its self-adjoint real and imaginary parts, and these are then decomposed into positive and negative parts using the continuous functional calculus. For it holds that , since . If both and are positive, . If is a C*-subalgebra of , then . If is another C*-algebra and is a *-homomorphism from to , then . If are positive elements for which , they commute and holds. Such elements are called orthogonal and one writes .

Partial order

Let A be a *-algebra. The property of being a positive element defines a translation-invariant partial order on the set of self-adjoint elements. If holds for , one writes or . This partial order fulfills the properties and for all with .

If A is a C*-algebra, the partial order also has the following properties for : If holds, then is true for every . For every that commutes with and even . If holds, then . If holds, then holds for all real numbers . If is invertible and holds, then is invertible and for the inverses .

See also

Nonnegative matrix
Positive operator (Hilbert space)

Citations

References

Bibliography

. English translation of

Abstract algebra C*-algebras
Positive element
Mathematics
751
7,072,506
https://en.wikipedia.org/wiki/Ignition%20timing
In a spark ignition internal combustion engine, ignition timing is the timing, relative to the current piston position and crankshaft angle, of the release of a spark in the combustion chamber near the end of the compression stroke. The timing of the spark must be advanced (or retarded) because fuel does not burn completely the instant the spark fires. The combustion gases take a period of time to expand, and the angular or rotational speed of the engine can lengthen or shorten the time frame in which the burning and expansion should occur. In the vast majority of cases, the angle will be described as a certain angle advanced before top dead center (BTDC). Advancing the spark BTDC means that the spark is energized prior to the point where the combustion chamber reaches its minimum size, since the purpose of the power stroke in the engine is to force the combustion chamber to expand. Sparks occurring after top dead center (ATDC) are usually counter-productive (producing wasted spark, back-fire, engine knock, etc.) unless there is need for a supplemental or continuing spark prior to the exhaust stroke. Setting the correct ignition timing is crucial in the performance of an engine. Sparks occurring too soon or too late in the engine cycle are often responsible for excessive vibrations and even engine damage. The ignition timing affects many variables including engine longevity, fuel economy, and engine power. Many variables also affect what the "best" timing is. Modern engines that are controlled in real time by an engine control unit use a computer to control the timing throughout the engine's RPM and load range. Older engines that use mechanical distributors rely on inertia (by using rotating weights and springs) and manifold vacuum in order to set the ignition timing throughout the engine's RPM and load range. Early cars required the driver to adjust timing via controls according to driving conditions, but this is now automated.
There are many factors that influence proper ignition timing for a given engine. These include the timing of the intake valve(s) or fuel injector(s), the type of ignition system used, the type and condition of the spark plugs, the contents and impurities of the fuel, fuel temperature and pressure, engine speed and load, air and engine temperature, turbo boost pressure or intake air pressure, the components used in the ignition system, and the settings of the ignition system components. Usually, any major engine changes or upgrades will require a change to the ignition timing settings of the engine. Background The spark ignition system of mechanically controlled gasoline internal combustion engines consists of a mechanical device, known as a distributor, that triggers and distributes ignition spark to each cylinder relative to piston position—in crankshaft degrees relative to top dead centre (TDC). Spark timing, relative to piston position, is based on static (initial or base) timing without mechanical advance. The distributor's centrifugal timing advance mechanism makes the spark occur sooner as engine speed increases. Many of these engines will also use a vacuum advance that advances timing during light loads and deceleration, independent of the centrifugal advance. This typically applies to automotive use; marine gasoline engines generally use a similar system but without vacuum advance. In mid-1963, Ford offered transistorized ignition on their new 427 FE V8. This system only passed a very low current through the ignition points, using a PNP transistor to perform high-voltage switching of the ignition current, allowing for a higher voltage ignition spark, as well as reducing variations in ignition timing due to arc-wear of the breaker points. 
Engines so equipped carried special stickers on their valve covers reading “427-T.” AC Delco’s Delcotron Transistor Control Magnetic Pulse Ignition System became optional on a number of General Motors vehicles beginning in 1964. The Delco system eliminated the mechanical points completely, using magnetic flux variation for current switching, virtually eliminating point wear concerns. In 1967, Ferrari and Fiat Dinos came equipped with Magneti Marelli Dinoplex electronic ignition, and all Porsche 911s had electronic ignition beginning with the B-Series 1969 models. In 1972, Chrysler introduced a magnetically-triggered pointless electronic ignition system as standard equipment on some production cars, and included it as standard across the board by 1973. Electronic control of ignition timing was introduced a few years later in 1975-'76 with the introduction of Chrysler's computer-controlled "Lean-Burn" electronic spark advance system. By 1979 with the Bosch Motronic engine management system, technology had advanced to include simultaneous control of both the ignition timing and fuel delivery. These systems form the basis of modern engine management systems. Setting the ignition timing "Timing advance" refers to the number of degrees before top dead center (BTDC) that the sparkplug will fire to ignite the air-fuel mixture in the combustion chamber before the end of the compression stroke. Retarded timing can be defined as changing the timing so that fuel ignition happens later than the manufacturer's specified time. For example, if the timing specified by the manufacturer was set at 12 degrees BTDC initially and adjusted to 11 degrees BTDC, it would be referred to as retarded. In a classic ignition system with breaker points, the basic timing can be set statically using a test light or dynamically using the timing marks and a timing light. Timing advance is required because it takes time to burn the air-fuel mixture. 
Igniting the mixture before the piston reaches TDC will allow the mixture to fully burn soon after the piston reaches TDC. If the mixture is ignited at the correct time, maximum pressure in the cylinder will occur sometime after the piston reaches TDC, allowing the ignited mixture to push the piston down the cylinder with the greatest force. Ideally, the time at which the mixture should be fully burnt is about 20 degrees ATDC. This will maximize the engine's power-producing potential. If the ignition spark occurs at a position that is too advanced relative to piston position, the rapidly combusting mixture can actually push against the piston still moving up on its compression stroke, causing knocking (pinking or pinging) and possible engine damage. This usually occurs at low RPM and is known as pre-ignition or, in severe cases, detonation. If the spark occurs too late relative to piston position, maximum cylinder pressure will occur after the piston has already travelled too far down the cylinder on its power stroke. This results in lost power, a tendency to overheat, high emissions, and unburned fuel. The ignition timing needs to become increasingly advanced (relative to TDC) as the engine speed increases so that the air-fuel mixture has the correct amount of time to fully burn. As the engine speed (RPM) increases, the time available to burn the mixture decreases while the burning itself proceeds at the same speed, so ignition must begin increasingly earlier to complete in time. Poor volumetric efficiency at higher engine speeds also requires increased advancement of ignition timing. The correct timing advance for a given engine speed will allow maximum cylinder pressure to be achieved at the correct crankshaft angular position. When setting the timing for an automobile engine, the factory timing setting can usually be found on a sticker in the engine bay.
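The relationship described above — a roughly constant burn time consuming more crank angle as engine speed rises — can be sketched numerically. The 2 ms burn duration and the 20° ATDC completion target are illustrative assumptions, not figures for any particular engine.

```python
def required_advance_btdc(rpm, burn_time_s, completion_atdc_deg=20.0):
    """Crank angle (degrees BTDC) at which the spark must fire so that a
    mixture taking burn_time_s seconds to burn finishes at the target
    angle after top dead centre.

    Crank degrees swept during the burn: 360 * (rpm / 60) * burn_time_s.
    """
    burn_degrees = 360.0 * (rpm / 60.0) * burn_time_s
    return burn_degrees - completion_atdc_deg

# A hypothetical 2 ms burn: the required advance grows with engine speed.
for rpm in (1500, 3000, 6000):
    print(rpm, required_advance_btdc(rpm, 0.002))
```

In practice the burn also speeds up at high RPM because of increased turbulence, so real advance curves flatten out rather than growing linearly as this sketch does.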
The ignition timing also depends on the load of the engine, with greater load (larger throttle opening and therefore a denser cylinder charge) requiring less advance because the mixture burns faster. It likewise depends on engine temperature, with a cooler engine allowing for more advance. The speed with which the mixture burns depends on the type of fuel, the amount of turbulence in the airflow (which is tied to the design of the cylinder head and valvetrain system) and on the air-fuel ratio. It is a common myth that burn speed is linked with octane rating. Dynamometer tuning Setting the ignition timing while monitoring engine power output with a dynamometer is one way to correctly set the ignition timing. After advancing or retarding the timing, a corresponding change in power output will usually occur. A load-type dynamometer is the best tool for this, as the engine can be held at a steady speed and load while the timing is adjusted for maximum output. Using a knock sensor to find the correct timing is another method used to tune an engine. In this method, the timing is advanced until knock occurs; the timing is then retarded one or two degrees and set there. This method is inferior to tuning with a dynamometer, since it often leads to ignition timing that is excessively advanced, particularly on modern engines which do not require as much advance to deliver peak torque. With excessive advance, the engine will be prone to pinging and detonation when conditions change (fuel quality, temperature, sensor issues, etc.). After achieving the desired power characteristics for a given engine load/RPM, the spark plugs should be inspected for signs of engine detonation. If there are any such signs, the ignition timing should be retarded until there are none. The best way to set ignition timing on a load-type dynamometer is to slowly advance the timing until peak torque output is reached.
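The dynamometer procedure just described — advance in small steps at steady speed and load until torque stops improving, backing off if knock appears first — can be sketched as a loop. `measure_torque` and `knock_detected` are hypothetical stand-ins for real dynamometer and knock-sensor readings, and the degree limits are illustrative.

```python
def find_best_timing(measure_torque, knock_detected, start_deg=10.0,
                     step_deg=1.0, max_deg=40.0, knock_margin_deg=2.0):
    """Advance the timing step by step until torque stops improving
    (peak torque) or knock appears (back off by a safety margin)."""
    best_deg = start_deg
    best_torque = measure_torque(start_deg)
    deg = start_deg
    while deg + step_deg <= max_deg:
        deg += step_deg
        if knock_detected(deg):
            # Knock limit reached before peak torque: retard below it.
            return deg - knock_margin_deg
        torque = measure_torque(deg)
        if torque <= best_torque:
            break  # torque peaked at the previous setting
        best_deg, best_torque = deg, torque
    return best_deg
```

With a simulated torque curve peaking at 28° and no knock, the loop settles at 28°; if knock begins at 25°, it instead returns 23° (two degrees below the knock limit).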
Some engines (particularly turbocharged or supercharged ones) will not reach peak torque at a given engine speed before they begin to knock (pinging or minor detonation). In this case, engine timing should be retarded slightly below this timing value (known as the "knock limit"). Engine combustion efficiency and volumetric efficiency change as ignition timing is varied, which means the fuel quantity must also be changed as the ignition is varied; after each change in ignition timing, the fuelling is adjusted to deliver peak torque. Mechanical ignition systems Mechanical ignition systems use a mechanical spark distributor to distribute a high-voltage current to the correct spark plug at the correct time. In order to set an initial timing advance or timing retard for an engine, the engine is allowed to idle and the distributor is adjusted to achieve the best ignition timing for the engine at idle speed. This process is called "setting the base advance". There are two methods of increasing timing advance past the base advance. The advances achieved by these methods are added to the base advance number in order to achieve a total timing advance number. Mechanical timing advance An increasing mechanical advancement of the timing takes place with increasing engine speed, exploiting the inertia of rotating weights. Weights and springs inside the distributor rotate and affect the timing advance according to engine speed by altering the angular position of the timing sensor shaft with respect to the actual engine position. This type of timing advance is also referred to as centrifugal timing advance. The amount of mechanical advance depends solely on the speed at which the distributor is rotating. In a 2-stroke engine, this is the same as engine RPM. In a 4-stroke engine, this is half the engine RPM. The relationship between advance in degrees and distributor RPM can be drawn as a simple 2-dimensional graph.
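That 2-dimensional curve can be represented as a piecewise-linear table with the travel-limit plateau at the top. The breakpoints below are made-up illustrative values, not the curve of any real distributor.

```python
# Hypothetical centrifugal curve: (distributor RPM, advance in degrees).
# Past the last point the flyweights sit against their travel limits.
ADVANCE_CURVE = [(0, 0.0), (500, 0.0), (1500, 10.0), (2500, 14.0)]

def centrifugal_advance(dist_rpm):
    """Linear interpolation along the curve, clamped at both ends."""
    if dist_rpm <= ADVANCE_CURVE[0][0]:
        return ADVANCE_CURVE[0][1]
    for (r0, a0), (r1, a1) in zip(ADVANCE_CURVE, ADVANCE_CURVE[1:]):
        if dist_rpm <= r1:
            return a0 + (a1 - a0) * (dist_rpm - r0) / (r1 - r0)
    return ADVANCE_CURVE[-1][1]  # travel limit: advance is fixed

def distributor_rpm(engine_rpm, four_stroke=True):
    """The distributor turns at half crank speed on a 4-stroke engine."""
    return engine_rpm / 2.0 if four_stroke else float(engine_rpm)
```

For example, a four-stroke engine at 2000 RPM spins the distributor at 1000 RPM, which on this made-up curve gives 5 degrees of centrifugal advance.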
Lighter weights or heavier springs can be used to reduce the timing advance at lower engine RPM. Heavier weights or lighter springs can be used to advance the timing at lower engine RPM. Usually, at some point in the engine's RPM range, these weights contact their travel limits, and the amount of centrifugal ignition advance is then fixed above that RPM. Vacuum timing advance The second method used to advance (or retard) the ignition timing is called vacuum timing advance. This method is almost always used in addition to mechanical timing advance. It generally increases fuel economy and driveability, particularly at lean mixtures. It also increases engine life through more complete combustion, leaving less unburned fuel to wash away the cylinder wall lubrication (reducing piston ring wear) and less lubricating oil dilution (extending bearing and camshaft life). Vacuum advance works by using a manifold vacuum source to advance the timing at low to mid engine load conditions by rotating the position sensor (contact points, Hall effect or optical sensor, reluctor stator, etc.) mounting plate in the distributor with respect to the distributor shaft. Vacuum advance is diminished at wide open throttle (WOT), causing the timing advance to return to the base advance plus the mechanical advance. One source for vacuum advance is a small opening located in the wall of the throttle body or carburetor adjacent to, but slightly upstream of, the edge of the throttle plate. This is called ported vacuum. The effect of having the opening here is that there is little or no vacuum at idle, hence little or no advance. Other vehicles use vacuum directly from the intake manifold. This provides full engine vacuum (and hence, full vacuum advance) at idle. Some vacuum advance units have two vacuum connections, one at each side of the actuator membrane, connected to both manifold vacuum and ported vacuum. These units can both advance and retard the ignition timing.
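Combining the mechanisms in this section: the total advance is the base setting plus the centrifugal contribution plus, except at wide open throttle, the vacuum contribution. The degree figures below are illustrative assumptions, not values for any particular engine.

```python
def total_advance(base_deg, centrifugal_deg, vacuum_deg,
                  wide_open_throttle=False):
    """Sum the three contributions. At WOT the vacuum signal collapses,
    so only base + centrifugal advance remain."""
    if wide_open_throttle:
        vacuum_deg = 0.0
    return base_deg + centrifugal_deg + vacuum_deg

# Illustrative: 8 deg base, 12 deg centrifugal, 10 deg vacuum advance.
print(total_advance(8.0, 12.0, 10.0))                           # cruise
print(total_advance(8.0, 12.0, 10.0, wide_open_throttle=True))  # WOT
```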
On some vehicles, a temperature-sensing switch applies manifold vacuum to the vacuum advance system when the engine is hot or cold, and ported vacuum at normal operating temperature. This is a form of emissions control; the ported vacuum allowed carburetor adjustments for a leaner idle mixture. At high engine temperature, the increased advance raised engine speed to allow the cooling system to operate more efficiently. At low temperature, the advance allowed the enriched warm-up mixture to burn more completely, providing better cold-engine running. Electrical or mechanical switches may be used to prevent or alter vacuum advance under certain conditions. Early emissions-control electronics engaged or disengaged vacuum advance in response to oxygen sensor signals or the activation of emissions-related equipment. It was also common to prevent some or all of the vacuum advance in certain gears to prevent detonation in lean-burning engines. Computer-controlled ignition systems Newer engines typically use computerized ignition systems. The computer has a timing map (lookup table) with spark advance values for all combinations of engine speed and engine load. The computer sends a signal to the ignition coil at the time indicated in the timing map in order to fire the spark plug. Most computers from original equipment manufacturers (OEMs) cannot be modified, so changing the timing advance curve is not possible. Overall timing changes are still possible, depending on the engine design. Aftermarket engine control units allow the tuner to make changes to the timing map. This allows the timing to be advanced or retarded based on various engine applications. A knock sensor may be used by the ignition system to allow for fuel quality variation. Bibliography Hartman, J. (2004). How to Tune and Modify Engine Management Systems.
Motorbooks See also Electronic fuel injection (EFI) Firing order Valve timing References External links Setting Ignition Timing Curves Getting the Ignition Timing Right Ignition systems Synchronization
Ignition timing
Engineering
2,979
15,224,993
https://en.wikipedia.org/wiki/60S%20ribosomal%20protein%20L17
Large ribosomal subunit protein uL22 is a protein that in humans is encoded by the RPL17 gene. Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 60S subunit. The protein belongs to the L22P family of ribosomal proteins. It is located in the cytoplasm. This gene has been referred to as RPL23 because the encoded protein shares amino acid identity with ribosomal protein L23 from Haloarcula marismortui; however, its official symbol is RPL17. Two alternative splice variants have been observed, each encoding the same protein. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome. See also Mitochondrial ribosomal protein L22, another human protein of uL22 (L22p or L17e) family References Further reading External links Ribosomal proteins
60S ribosomal protein L17
Chemistry
225
66,001,579
https://en.wikipedia.org/wiki/Even%E2%80%93even%20nucleus
In nuclear physics, even–even (EE) nuclei are nuclei with an even number of neutrons and an even number of protons. Even-mass-number nuclei, which comprise 151 of the 251 stable nuclei (about 60%), are bosons, i.e. they have integer spin. The vast majority of them, 146 out of 151, belong to the EE class; they have spin 0 because of pairing effects. See also Even and odd atomic nuclei Nuclear shell model References Bosons Atomic physics Subatomic particles with spin 0
Even–even nucleus
Physics,Chemistry
112
11,422,151
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20Z103
In molecular biology, Small nucleolar RNA Z103 is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. snoRNA Z103 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. Plant snoRNA Z103 was identified in a screen of Oryza sativa. References External links Small nuclear RNA
Small nucleolar RNA Z103
Chemistry
197
58,194,584
https://en.wikipedia.org/wiki/Akahogi%20Tile%20Kiln%20Site
The Akahogi Tile Kiln Site is the remains of a late Nara period, early Heian period roof tile and pottery production site located in the Akahogi neighborhood of the city of Takayama, Gifu Prefecture in the Chūbu region of Japan. It has been protected as a National Historic Site since 1976. Overview The Akahogi Tile Kiln site is located in the southeastern foothills of the Mihaka hills northwest of the central area of Takayama city. The site consists of the ruins of six kilns within an area of approximately 70 square meters. The kilns consist of four semi-underground anagama kilns with a length of approximately eight meters and two semi-underground noborigama-type climbing kilns. The former were used for the production of roof tiles, and the latter for Sue pottery. From the ruins of the tile kilns, shards of various types of roof tiles were discovered, including round tiles for use on eaves, flat tiles, corner tiles and Onigawara tiles, all of which are identical to shards found at the site of the Hida Kokubun-ji provincial temple established in the Nara period. The tile kilns are almost identical in structure: The No. 1 kiln has a total length of 8.34 meters, a width of 0.75 meters, and contains 16 steps. It appears to have been repaired twice, but most of the firing chamber has been lost due to a landslide. The No. 2 kiln has a total length of 9.38 meters and a width of 1.08 meters. Although the ceiling has completely survived, it appears to have been abandoned at an early date. It has been repaired four times, and the number of steps is currently unknown. The No. 3 kiln has a total length of 8.14 meters and a width of 0.70 meters. The firing chamber has six stages, and each step is wide, which distinguishes it from the others. It has been repaired three times. The No. 4 kiln is located on the lowest portion of the hill, and has a width of 1.0 meter. The ceiling is completely intact, and it appears to be the oldest of the group, but the kiln itself remains unexcavated.
The shards excavated at the site show that these four kilns produced the roof tiles used at the Hida Kokubun-ji, including the round eaves tiles and the Onigawara tiles. The Sue pottery kilns are labelled No. 5 and No. 6 and remained in continuous use into the Kamakura period. The site is about 15 minutes by car from Takayama Station on the JR Central Takayama Main Line. See also List of Historic Sites of Japan (Gifu) References External links Gifu Prefecture official site Takayama city official site Takayama, Gifu Historic Sites of Japan History of Gifu Prefecture Japanese pottery kiln sites Nara period Hida Province
Akahogi Tile Kiln Site
Chemistry,Engineering
608
78,892,966
https://en.wikipedia.org/wiki/Andotrope
An Andotrope is a device that allows viewing of a two-dimensional video from any direction, without limiting it to a specific perspective as with conventional screens. Name and idea The device is named after its inventor, Mike Ando, with the suffix -trope (Ancient Greek τρόπος, "turning"). The idea came to Ando from Gehn’s Holographic Imager in the computer game Riven – The Sequel to Myst. The Andotrope is a replica of this device, but differs from a technical perspective in that the image is not a three-dimensional hologram. Functionality According to Ando, the device updates "a 150-year-old children’s toy into the 21st century.” The functionality of an Andotrope is similar to that of a zoetrope, but differs technically on several key points. The device is built from a cylindrical housing containing at least one fixed screen. The cylinder has a slot for each screen that allows it to be viewed from outside. When at least two screens are used, their content must be synchronized to show the same image at the same time. This housing is attached to a rotating electric turntable to create the stroboscopic effect of a film projector, whereby a linear sequence of images becomes a fluid video. Ando states that his device can reach 1200 revolutions per minute and, with two screens, can show a video at approximately 40 frames per second. According to Ando, important requirements for the screens are: No flickering High brightness Strong resistance to mechanical stress (due to the high rotation speed) Andotrope in the media The invention was met with a generally positive reception. For example, the Australian Diyode Magazine published an interview with Ando covering the device and the process of its invention. The German magazine Coolsten also covered the Andotrope. The YouTube channel The Action Lab released a video in November 2024 covering the use of 3D printing and two smartphones to create its own Andotrope. References Animation technology Australian inventions Display technology
Andotrope
Engineering
420
1,613,357
https://en.wikipedia.org/wiki/List%20of%20inorganic%20compounds
Although most compounds are referred to by their IUPAC systematic names (following IUPAC nomenclature), traditional names have also been kept where they are in wide use or of significant historical interests. A Ac Actinium(III) chloride – Actinium(III) fluoride – Actinium(III) oxide – Al Aluminium antimonide – AlSb Aluminium arsenate – Aluminium arsenide – AlAs Aluminium diboride – Aluminium bromide – Aluminium carbide – Aluminium iodide – Aluminium nitride – AlN Aluminium oxide – Aluminium phosphide – AlP Aluminium chloride – Aluminium fluoride – Aluminium hydroxide – Aluminium nitrate – Aluminium sulfide – Aluminium sulfate – Aluminium potassium sulfate – Am Americium(II) bromide − Americium(III) bromide − Americium(II) chloride − Americium(III) chloride – Americium(III) fluoride − Americium(IV) fluoride − Americium(II) iodide − Americium(III) iodide − Americium dioxide – / Ammonia – Ammonium azide – Ammonium bicarbonate – Ammonium bisulfate – Ammonium bromide – Ammonium chromate – Ammonium cerium(IV) nitrate – Ammonium cerium(IV) sulfate – Ammonium chloride – Ammonium chlorate – Ammonium cyanide – Ammonium dichromate – Ammonium dihydrogen phosphate – Ammonium hexafluoroaluminate – AlF6H12 N3 Ammonium hexafluorophosphate – F6H4 NP Ammonium hexachloroplatinate – Ammonium hexafluorosilicate Ammonium hexafluorotitanate Ammonium hexafluorozirconate Ammonium hydroxide – Ammonium nitrate – Ammonium orthomolybdate – Ammonium sulfamate – Ammonium sulfide – Ammonium sulfite – Ammonium sulfate – Ammonium perchlorate – Ammonium permanganate – Ammonium persulfate – Ammonium diamminetetrathiocynatochromate(III) – Ammonium thiocyanate – Ammonium triiodide – Diammonium dioxido(dioxo)molybdenum – Diammonium phosphate – Tetramethylammonium perchlorate – Sb Antimony hydride (stybine) – Antimony pentachloride – Antimony pentafluoride – Antimony potassium tartrate – Antimony sulfate – Antimony trichloride – Antimony trifluoride – Antimony trioxide – Antimony trisulfide – Antimony 
pentasulfide – Ar Argon fluorohydride – HArF As Arsenic trifluoride – Arsenic triiodide –AsI3 Arsenic pentafluoride – Arsenic trioxide (Arsenic(III) oxide) – Arsenous acid – Arsenic acid – Arsine – B Ba Barium azide – Barium bromide – Barium carbonate – Barium chlorate – Barium chloride – Barium chromate – Barium ferrate – Barium ferrite – Barium fluoride – Barium hydroxide – Barium iodide – Barium manganate – Barium nitrate – Barium oxalate – Barium oxide – BaO Barium permanganate – Barium peroxide – Barium sulfate – Barium sulfide – BaS Barium titanate – Barium thiocyanate – Be Beryllium borohydride – Beryllium bromide – Beryllium carbonate – Beryllium chloride – Beryllium fluoride – Beryllium hydride – Beryllium hydroxide – Beryllium iodide – Beryllium nitrate – Beryllium nitride – Beryllium oxide – BeO Beryllium sulfate – Beryllium sulfide – BeS Beryllium telluride – BeTe Bi Bismuth chloride – BiCl3 Bismuth ferrite – Bismuth hydroxide–BiH3O3 Bismuth(III) iodide–BiI3 Bismuth(III) nitrate–BiN3O9 Bismuth(III) oxide – Bismuth oxychloride – BiOCl Bismuth pentafluoride – Bismuth(III) sulfide– Bi2S3 Bismuth(III) telluride – Bismuth(III) telluride – Bismuth tribromide – Bismuth tungstate – B Borane – Borax – Borazine – Borazocine ((3Z,5Z,7Z)-azaborocine) – Boric acid – Boron carbide – Boron nitride – BN Boron suboxide – Boron tribromide – Boron trichloride – Boron trifluoride – Boron triiodide –BI3 Boron oxide – Boroxine – Decaborane – Diborane – Diboron tetrafluoride – Pentaborane – Tetraborane – Br Bromine monochloride – BrCl Bromine pentafluoride – Perbromic acid – Aluminium Bromide – Ammonium bromide – Boron tribromide – Bromic acid – Bromine monoxide – Bromine pentafluoride – Bromine trifluoride – Bromine monofluoride – BrF Calcium bromide – Carbon tetrabromide – Copper(I) bromide – CuBr Copper(II) bromide – Hydrobromic acid – HBr(aq) Hydrogen bromide – HBr Hypobromous acid – HOBr Iodine monobromide – IBr Iron(II) bromide – Iron(III) bromide – Lead(II) bromide – 
Lithium bromide – LiBr Magnesium bromide – Mercury(I) bromide – Mercury(II) bromide – Nitrosyl bromide – NOBr Phosphorus pentabromide – Phosphorus tribromide – Phosphorus heptabromide – PBr7 Potassium bromide – KBr Potassium bromate – Potassium perbromate – Tribromosilane – Silicon tetrabromide – Silver bromide – AgBr Sodium bromide – NaBr Sodium bromate – Sodium perbromate – Thionyl bromide – Tin(II) bromide – Zinc bromide – C Cd Cadmium arsenide – Cadmium bromide – Cadmium chloride – Cadmium fluoride – Cadmium iodide – Cadmium nitrate – Cadmium oxide – CdO Cadmium phosphide – Cadmium selenide – CdSe Cadmium sulfate – Cadmium sulfide – CdS Cadmium telluride – CdTe Cs Caesium bicarbonate – Caesium carbonate – Caesium chloride – CsCl Caesium chromate – Caesium fluoride – CsF Caesium hydride – CsH Caesium hydrogen sulfate – Caesium iodide – CsI Caesium sulfate – Cf Californium(III) bromide – Californium(III) carbonate – Californium(III) chloride – Californium(III) fluoride – Californium(III) iodide – Californium(II) iodide – Californium(III) nitrate – Californium(III) oxide – Californium(III) phosphate – Californium(III) sulfate – Californium(III) sulfide – Californium oxyfluoride – CfOF Californium oxychloride – CfOCl Ca Calcium bromide – Calcium carbide – Calcium carbonate (Precipitated Chalk) – Calcium chlorate – Calcium chloride – Calcium chromate – Calcium cyanamide – Calcium fluoride – Calcium hydride – Calcium hydroxide – Calcium monosilicide – CaSi Calcium oxalate – Calcium hydroxychloride – Calcium perchlorate – Calcium permanganate – Calcium sulfate (gypsum) – C Carbon dioxide – Carbon disulfide – Carbon monoxide – CO Carbon tetrabromide – Carbon tetrachloride – Carbon tetrafluoride – Carbon tetraiodide – Carbonic acid – Carbonyl chloride – Carbonyl fluoride – Carbonyl sulfide – COS Carboplatin – Ce Cerium(III) bromide – Cerium(III) carbonate – Cerium(III) chloride – Cerium(III) fluoride – Cerium(III) hydroxide – Cerium(III) iodide – Cerium(III) nitrate – 
Cerium(III) oxide – Cerium(III) sulfate – Cerium(III) sulfide – Cerium(IV) hydroxide – Cerium(IV) nitrate – Cerium(IV) oxide – Cerium(IV) sulfate – Cerium(III,IV) oxide – Ceric ammonium nitrate – Cerium hexaboride – Cerium aluminium – CeAl Cerium cadmium – CeCd Cerium magnesium – CeMg Cerium mercury – CeHg Cerium silver – CeAg Cerium thallium – CeTl Cerium zinc – CeZn Cl Actinium(III) chloride – Aluminium chloride – Americium(III) chloride – Ammonium chloride – Antimony(III) chloride – Antimony(V) chloride – Arsenic(III) chloride – Barium chloride – Beryllium chloride – Bismuth(III) chloride – Boron trichloride – Bromine monochloride – BrCl Cadmium chloride – Caesium chloride – CsCl Calcium chloride – Calcium hypochlorite – Carbon tetrachloride – Cerium(III) chloride – Chloramine – Chloric acid – Chlorine azide – Chlorine dioxide – Chlorine dioxide – Chlorine monofluoride – ClF Chlorine monoxide – ClO Chlorine pentafluoride – Chlorine perchlorate – Chlorine tetroxide – Chlorine trifluoride – Chlorine trifluoride – Chlorine trioxide – Chlorine trioxide – Chloroplatinic acid – Chlorosulfonic acid – Chlorosulfonyl isocyanate – Chloryl fluoride – Chromium(II) chloride – Chromium(III) chloride – Chromyl chloride – Cisplatin (cis–platinum(II) chloride diamine) – Cobalt(II) chloride – Copper(I) chloride – CuCl Copper(II) chloride – Curium(III) chloride – Cyanogen chloride – ClCN Dichlorine dioxide – Dichlorine heptaoxide – Dichlorine heptoxide – Dichlorine hexoxide – Dichlorine monoxide – Dichlorine monoxide – Dichlorine tetroxide (chlorine perchlorate) – Dichlorine trioxide – Dichlorosilane – Disulfur dichloride – Dysprosium(III) chloride – Erbium(III) chloride – Europium(II) chloride – Europium(III) chloride – Gadolinium(III) chloride – Gallium trichloride – Germanium dichloride – Germanium tetrachloride – Gold(I) chloride – AuCl Gold(III) chloride – Hafnium(IV) chloride – Holmium(III) chloride – Hydrochloric acid – HCl(aq) Hydrogen chloride – HCl Hypochlorous acid – 
HOCl Indium(I) chloride – InCl Indium(III) chloride – Iodine monochloride – ICl Iridium(III) chloride – Iron(II) chloride – Iron(III) chloride – Lanthanum chloride – Lead(II) chloride – Lithium chloride – LiCl Lithium perchlorate – Lutetium chloride – Magnesium chloride – Magnesium perchlorate – Manganese(II) chloride – Mercury(I) chloride – Mercury(II) chloride – Mercury(II) perchlorate – Molybdenum(III) chloride – Molybdenum(V) chloride – Neodymium(III) chloride – Neptunium(IV) chloride – Nickel(II) chloride – Niobium oxide trichloride – Niobium(IV) chloride – Niobium(V) chloride – Nitrogen trichloride – Nitrosyl chloride – NOCl Nitryl chloride – Osmium(III) chloride – Palladium(II) chloride – Perchloric acid – Perchloryl fluoride – Phosgene – Phosphonitrilic chloride trimer – Phosphorus oxychloride – Phosphorus pentachloride – Phosphorus trichloride – Platinum(II) chloride – Platinum(IV) chloride – Plutonium(III) chloride – Potassium chlorate – Potassium chloride – KCl Potassium perchlorate – Praseodymium(III) chloride – Protactinium(V) chloride – Radium chloride – Rhenium(III) chloride – Rhenium(V) chloride – Rhodium(III) chloride – Rubidium chloride – RbCl Ruthenium(III) chloride – Samarium(III) chloride – Scandium chloride – Selenium dichloride – Selenium tetrachloride – Silicon tetrachloride – Silver chloride – AgCl Silver perchlorate – Sodium chlorate – Sodium chloride (table salt, rock salt) – NaCl Sodium chlorite – Sodium hypochlorite – NaOCl Sodium perchlorate – Strontium chloride – Sulfur dichloride – Sulfuryl chloride – Tantalum(III) chloride – Tantalum(IV) chloride – Tantalum(V) chloride – Tellurium tetrachloride – Terbium(III) chloride – Tetrachloroauric acid – Thallium(I) chloride – TlCl Thallium(III) chloride – Thionyl chloride – Thiophosgene – Thorium(IV) chloride – Thulium(III) chloride – Tin(II) chloride – Tin(IV) chloride – Titanium tetrachloride – Titanium(III) chloride – Trichlorosilane – Trigonal bipyramidal – Tungsten(IV) chloride – 
Tungsten(V) chloride – Tungsten(VI) chloride – Uranium hexachloride – Uranium(III) chloride – Uranium(IV) chloride – Uranium(V) chloride – Uranyl chloride – Vanadium oxytrichloride – Vanadium(II) chloride – Vanadium(III) chloride – Vanadium(IV) chloride – Ytterbium(III) chloride – Yttrium chloride – Zinc chloride – Zirconium(IV) chloride – Cr Chromic acid – Chromium trioxide (Chromic acid) – Chromium(II) chloride (chromous chloride) – Chromium(II) sulfate – Chromium(III) chloride – Chromium(III) nitrate – Chromium(III) oxide – Chromium(III) sulfate – Chromium(III) telluride – Chromium(IV) oxide – Chromium pentafluoride – Chromyl chloride – Chromyl fluoride – Co Cobalt(II) bromide – Cobalt(II) carbonate – Cobalt(II) chloride – Cobalt(II) nitrate – Cobalt(II) sulfate – Cobalt(III) fluoride – Cu Copper(I) acetylide – Copper(I) chloride – CuCl Copper(I) fluoride – CuF Copper(I) oxide – Copper(I) sulfate – Copper(I) sulfide – Copper(II) azide – Copper(II) borate – Cu3(BO3)2 Copper(II) carbonate – Copper(II) chloride – Copper(II) hydroxide – Copper(II) nitrate – Copper(II) oxide – CuO Copper(II) sulfate – Copper(II) sulfide – CuS Copper oxychloride – Tetramminecopper(II) sulfate – Cm Curium(III) chloride – Curium(III) oxide – Curium(IV) oxide – Curium hydroxide – CN Cyanogen bromide – BrCN Cyanogen chloride – ClCN Cyanogen iodide – ICN Cyanogen – Cyanuric chloride – Cyanogen thiocyanate – Cyanogen selenocyanate – Cyanogen azide – D Disilane – Disulfur dichloride – Dy Dysprosium(III) chloride – Dysprosium oxide – Dysprosium titanate – E Es Einsteinium(III) bromide – Einsteinium(III) carbonate – Einsteinium(III) chloride – Einsteinium(III) fluoride – Einsteinium(III) iodide – Einsteinium(III) nitrate – Einsteinium(III) oxide – Einsteinium(III) phosphate – Einsteinium(III) sulfate – Einsteinium(III) sulfide – Er Erbium(III) chloride – Erbium-copper – ErCu Erbium-gold – ErAu Erbium(III) oxide – Erbium-silver – ErAg Erbium-Iridium – ErIr Eu Europium(II) chloride – 
Europium(II) sulfate – Europium(III) bromide – Europium(III) chloride – Europium(III) iodate – Europium(III) iodide – Europium(III) nitrate – Europium(III) oxide – Europium(III) perchlorate – Europium(III) sulfate – Europium(III) vanadate – F F Fluoroantimonic acid – Tetrafluorohydrazine – Trifluoromethylisocyanide – Trifluoromethanesulfonic acid – Other fluorides: AlF3, AmF3, NH4F, NH4HF2, NH4BF4, SbF5, SbF3, AsF5, AsF3, BaF2, BeF2, BiF3, F5SOOSF5, BF3, BrF5, BrF3, BrF, CdF2, CsF, CaF2, CF4, COF2, CeF3, CeF4, ClF5, ClF3, ClF, CrF3, CrF5, CrO2F2, CoF2, CoF3, CuF, CuF2, CmF3, N2F2, N2F4, O2F2, P2F4, S2F2, DyF3, ErF3, EuF3, HBF4, FN3, FOSO2F, FNO3, FSO3H, GdF3, GaF3, GeF4, AuF3, HfF4, H2SbF6, HPF6, H2SiF6, H2TiF6, HF, HF(aq), HFO, InF3, IF7, IF, IF5, IrF3, IrF6, FeF2, FeF3, KrF2, LaF3, PbF2, PbF4, LiF, MgF2, MnF2, MnF3, MnF4, Hg2F2, HgF2, MoF3, MoF5, MoF6, NbF4, NbF5, NdF3, NiF2, NpF4, NpF5, NpF6, ONF3, NF3, NO2BF4, NOBF4, NOF, NO2F, OsF4, OsF6, OsF7, OF2, PdF2, PdF4, FSO2OOSO2F, POF3, PF5, PF3, PtF2, PtF4, PtF6, PuF3, PuF4, PuF6, KF, KPF6, KBF4, PrF3, PaF5, RaF2, RnF2, ReF4, ReF6, ReF7, RhF3, RbF, RuF3, RuF4, RuF6, SmF3, ScF3, SeF6, SeF4, SiF4, AgF, AgF2, AgBF4, NaF, NaFSO3, Na3AlF6, NaSbF6, NaPF6, Na2SiF6, Na2TiF6, NaBF4, SrF2, SF2, SF6, SF4, SO2F2, TaF5, TcF6, TeF6, TeF4, TlF, TlF3, SOF2, ThF4, SnF2, SnF4, TiF3, TiF4, HSiF3, WF6, UF4, UF5, UF6, UO2F2, VF3, VF4, VF5, XeF2, XeO2F2, XeF6, XePtF6, XeF4, YbF3, YF3, ZnF2, ZrF4 Fr Francium oxide – Francium chloride – FrCl Francium bromide – FrBr Francium iodide – FrI Francium carbonate – Francium hydroxide – FrOH Francium sulfate – G Gd Gadolinium(III) chloride – Gadolinium(III) oxide – Gadolinium(III) carbonate – Gadolinium(III) chloride – Gadolinium(III) fluoride – Gadolinium gallium garnet – Gadolinium(III) nitrate – Gadolinium(III) oxide – Gadolinium(III) phosphate – Gadolinium(III) sulfate – Ga Gallium antimonide – GaSb Gallium arsenide – GaAs Gallium(III) fluoride – Gallium trichloride – Gallium nitride – GaN 
Gallium phosphide – GaP Gallium(II) sulfide – GaS Gallium(III) sulfide – Ge Digermane – Germane – Germanium(II) bromide – Germanium(II) chloride – Germanium(II) fluoride – Germanium(II) iodide – Germanium(II) oxide – GeO Germanium(II) selenide – GeSe Germanium(II) sulfide – GeS Germanium(IV) bromide – Germanium(IV) chloride – Germanium(IV) fluoride – Germanium(IV) iodide – Germanium(IV) nitride – Germanium(IV) oxide – Germanium(IV) selenide – Germanium(IV) sulfide – Germanium difluoride – Germanium dioxide – Germanium tetrachloride – Germanium tetrafluoride – Germanium telluride – GeTe Au Gold(I) bromide – AuBr Gold(I) chloride – AuCl Gold(I) cyanide-AuCN Gold(I) hydride – AuH Gold(I) iodide – AuI Gold(I) selenide – Gold(I) sulfide – Gold(III) bromide – Gold(III) chloride – Gold(III) fluoride – Gold(III) iodide – Gold(III) oxide – Gold(III) selenide – Gold(III) sulfide – Gold(III) nitrate – Gold(V) fluoride – Gold(I,III) chloride – Gold ditelluride – Gold heptafluoride – () H Hf Hafnium(IV) bromide – Hafnium(IV) carbide – HfC Hafnium(IV) chloride – Hafnium(IV) fluoride – Hafnium(IV) iodide – Hafnium(IV) oxide – Hafnium(IV) silicate – Hafnium(IV) sulfide – Hexadecacarbonylhexarhodium – Hs Hassium tetroxide – Ho Holmium(III) carbonate – Holmium(III) chloride – Holmium(III) fluoride – Holmium(III) nitrate – Holmium(III) oxide – Holmium(III) phosphate – Holmium(III) sulfate – H Hexafluorosilicic acid – Hydrazine – Hydrazoic acid – Hydroiodic acid – HI Hydrogen bromide – HBr Hydrogen chloride – HCl Hydrogen cyanide – HCN Hydrogen fluoride – HF Hydrogen peroxide – Hydrogen selenide – Hydrogen sulfide – Hydrogen telluride – Hydroxylamine – Hypobromous acid – HBrO Hypochlorous acid – HClO Hypophosphorous acid – Metaphosphoric acid – Protonated molecular hydrogen – Trioxidane – Water - H2O He Sodium helide – I In Indium(I) bromide – InBr Indium(III) bromide – Indium(III) chloride – Indium(III) fluoride – Indium(III) oxide – Indium(III) sulfate – Indium antimonide – InSb 
Indium arsenide – InAs Indium nitride – InN Indium phosphide – InP Indium(I) iodide – InI Indium(III) nitrate – Indium(I) oxide – Indium(III) selenide – Indium(III) sulfide – Trimethylindium – I Iodic acid – Iodine heptafluoride – Iodine pentafluoride – Iodine monochloride – ICl Iodine trichloride – Periodic acid – Iodine pentachloride - Iodine tribromide - Ir Iridium(IV) chloride – Iridium(V) fluoride – Iridium hexafluoride – Iridium tetrafluoride – Fe Columbite – Iron(II) chloride – Iron(II) oxalate – Iron(II) oxide – FeO Iron(II) selenate – Iron(II) sulfate – Iron(III) chloride – Iron(III) fluoride – Iron(III) oxalate – Iron(III) oxide – Iron(III) nitrate – Iron(III) sulfate – Iron(III) thiocyanate – Iron(II,III) oxide – Iron ferrocyanide – Prussian blue (Iron(III) hexacyanoferrate(II)) – Ammonium iron(II) sulfate – Iron(II) bromide – Iron(III) bromide – Iron(II) chloride – Iron(III) chloride – Iron disulfide – Iron dodecacarbonyl – Iron(III) fluoride – Iron(II) iodide – Iron naphthenate – Iron(III) nitrate – Iron nonacarbonyl – Iron(II) oxalate – Iron(II,III) oxide – Iron(III) oxide – Iron pentacarbonyl – Iron(III) perchlorate – Iron(III) phosphate – Iron(II) sulfamate – Iron(II) sulfate – Iron(III) sulfate – Iron(II) sulfide – FeS K Kr Krypton difluoride – L La Lanthanum aluminium – LaAl Lanthanum cadmium – LaCd Lanthanum carbonate – Lanthanum magnesium – LaMg Lanthanum manganite – Lanthanum mercury – LaHg Lanthanum silver – LaAg Lanthanum thallium – LaTl Lanthanum zinc – LaZn Lanthanum boride – Lanthanum carbonate – Lanthanum(III) chloride – Lanthanum trifluoride – Lanthanum(III) oxide – Lanthanum(III) nitrate – Lanthanum(III) phosphate – Lanthanum(III) sulfate – Pb Lead(II) azide – Lead(II) bromide – Lead(II) carbonate – Lead(II) chloride – Lead(II) fluoride – Lead(II) hydroxide – Lead(II) iodide – Lead(II) nitrate – Lead(II) oxide – PbO Lead(II) phosphate – Lead(II) sulfate – Lead(II) selenide – PbSe Lead(II) sulfide – PbS Lead(II) telluride – PbTe Lead(II) 
thiocyanate – Lead(II,IV) oxide – Lead(IV) oxide – Lead(IV) sulfide – Lead hydrogen arsenate – Lead styphnate – Lead tetrachloride – Lead tetrafluoride – Lead tetroxide – Lead titanate – Lead zirconate titanate – (e.g., x = 0.52 is lead zirconium titanate) Plumbane – Li Lithium tetrachloroaluminate – Lithium aluminium hydride – Lithium bromide – LiBr Lithium borohydride – Lithium carbonate (Lithium salt) – Lithium chloride – LiCl Lithium hypochlorite – LiClO Lithium chlorate – Lithium perchlorate – Lithium cobalt oxide – Lithium oxide – Lithium peroxide – Lithium hydride – LiH Lithium hydroxide – LiOH Lithium iodide – LiI Lithium iron phosphate – Lithium nitrate – Lithium sulfide – Lithium sulfite – Lithium sulfate – Lithium superoxide – Lithium hexafluorophosphate – M Mg Magnesium antimonide – MgSb Magnesium bromide – Magnesium carbonate – Magnesium chloride – Magnesium citrate – Magnesium oxide – MgO Magnesium perchlorate – Magnesium phosphate – Magnesium sulfate – Magnesium bicarbonate – Magnesium boride – Magnesium bromide – Magnesium carbide – Magnesium carbonate – Magnesium chloride – Magnesium cyanamide – Magnesium fluoride – Magnesium fluorophosphate – Magnesium gluconate – Magnesium hydride – Dimagnesium phosphate – Magnesium hydroxide – Magnesium hypochlorite – Magnesium iodide – Magnesium molybdate – Magnesium nitrate – Magnesium oxalate – Magnesium peroxide – Magnesium phosphate – Magnesium silicate – Magnesium sulfate – Magnesium sulfide – MgS Magnesium titanate – Magnesium tungstate – Magnesium zirconate – Mn Manganese(II) bromide – Manganese(II) chloride – Manganese(II) hydroxide – Manganese(II) oxide – MnO Manganese(II) phosphate – Manganese(II) sulfate – Manganese(II) sulfate monohydrate – Manganese(III) chloride – Manganese(III) oxide – Manganese(IV) fluoride – Manganese(IV) oxide (manganese dioxide) – Manganese(II,III) oxide – Manganese dioxide – Manganese heptoxide – Hg Mercury(I) chloride – Mercury(I) sulfate – Mercury(II) chloride – 
Mercury(II) hydride – Mercury(II) selenide – HgSe Mercury(II) sulfate – Mercury(II) sulfide – HgS Mercury(II) telluride – HgTe Mercury(II) thiocyanate – Mercury(IV) fluoride – Mercury fulminate – Mo Molybdenum(II) bromide – Molybdenum(II) chloride – Molybdenum(III) bromide – Molybdenum(III) chloride – Molybdenum(IV) carbide – MoC Molybdenum(IV) chloride – Molybdenum(IV) fluoride – Molybdenum(V) chloride – Molybdenum(V) fluoride – Molybdenum disulfide – Molybdenum hexacarbonyl – Molybdenum hexafluoride – Molybdenum tetrachloride – Molybdenum trioxide – Molybdic acid – N Nd Neodymium acetate - Neodymium(III) arsenate – NdAsO4 Neodymium(II) chloride – Neodymium(III) chloride – Neodymium magnet – Neodymium(II) bromide - Neodymium(III) bromide – Neodymium(III) fluoride – Neodymium(III) hydride - Neodymium(II) iodide - Neodymium(III) iodide – Neodymium molybdate - Neodymium perrhenate - Neodymium(III) sulfide - Neodymium tantalate - Neodymium(III) vanadate - Np Neptunium(III) fluoride – Neptunium(IV) fluoride – Neptunium(IV) oxide – Neptunium(VI) fluoride – Ni Nickel(II) carbonate – Nickel(II) chloride – Nickel(II) fluoride – Nickel(II) hydroxide – Nickel(II) nitrate – Nickel(II) oxide – NiO Nickel(II) sulfamate – Nickel(II) sulfide – NiS Nb Niobium(IV) fluoride – Niobium(V) fluoride – Niobium oxychloride – Niobium pentachloride – N Dinitrogen pentoxide (nitronium nitrate) – Dinitrogen tetrafluoride – Dinitrogen tetroxide – Dinitrogen trioxide – Nitric acid – Nitrous acid – Nitrogen dioxide – Nitrogen monoxide – NO Nitrous oxide (dinitrogen monoxide, laughing gas, NOS) – Nitrogen pentafluoride – Nitrogen triiodide – NO Nitrosonium octafluoroxenate(VI) – Nitrosonium tetrafluoroborate – Nitrosylsulfuric acid – O Os Osmium hexafluoride – Osmium tetroxide (osmium(VIII) oxide) – Osmium trioxide (osmium(VI) oxide) – O Tributyltin – Oxygen difluoride – Ozone – Aluminium oxide – Americium(II) oxide – AmO Americium(IV) oxide – Antimony trioxide – Antimony(V) oxide – Arsenic 
trioxide – Arsenic(V) oxide – Barium oxide – BaO Beryllium oxide – BeO Bismuth(III) oxide – Bismuth oxychloride – BiOCl Boron trioxide – Bromine monoxide – Carbon dioxide – Carbon monoxide – CO Cerium(IV) oxide – Chlorine dioxide – Chlorine trioxide – Dichlorine heptaoxide – Dichlorine monoxide – Chromium(III) oxide – Chromium(IV) oxide – Chromium(VI) oxide – Cobalt(II) oxide – CoO Copper(I) oxide – Copper(II) oxide – CuO Curium(III) oxide – Curium(IV) oxide – Dysprosium(III) oxide – Erbium(III) oxide – Europium(III) oxide – Oxygen difluoride – Dioxygen difluoride – Francium oxide – Gadolinium(III) oxide – Gallium(III) oxide – Germanium dioxide – Gold(III) oxide – Hafnium dioxide – Holmium(III) oxide – Indium(I) oxide – Indium(III) oxide – Iodine pentoxide – Iridium(IV) oxide – Iron(II) oxide – FeO Iron(II,III) oxide – Iron(III) oxide – Lanthanum(III) oxide – Lead(II) oxide – PbO Lead dioxide – Lithium oxide – Magnesium oxide – MgO Potassium oxide – Rubidium oxide – Sodium oxide – Strontium oxide – SrO Tellurium dioxide – Uranium(IV) oxide – (only simple oxides, oxyhalides, and related compounds, not hydroxides, carbonates, acids, or other compounds listed elsewhere) P Pd Palladium(II) chloride – Palladium(II) nitrate – Palladium(II,IV) fluoride – Palladium sulfate – Palladium tetrafluoride – P Diphosphorus tetrachloride – Diphosphorus tetrafluoride – Diphosphorus tetraiodide – Hexachlorophosphazene – Phosphine – Phosphomolybdic acid – Phosphoric acid – Phosphorous acid (Phosphoric(III) acid) – Phosphoroyl nitride – NPO Phosphorus pentabromide – Phosphorus pentafluoride – Phosphorus pentasulfide – Phosphorus pentoxide – Phosphorus sesquisulfide – Phosphorus tribromide – Phosphorus trichloride – Phosphorus trifluoride – Phosphorus triiodide – Phosphotungstic acid – Poly(dichlorophosphazene) – Pt Platinum(II) chloride – Platinum(IV) chloride – Platinum hexafluoride – Platinum pentafluoride – Platinum tetrafluoride – Pu Plutonium(III) bromide – Plutonium(III) chloride 
– Plutonium(III) fluoride – Plutonium dioxide (Plutonium(IV) oxide) – Plutonium hexafluoride – Plutonium hydride – Plutonium tetrafluoride – Po Polonium hexafluoride – Polonium monoxide – PoO Polonium dioxide – Polonium trioxide – Ps Di-positronium – Positronium hydride – PsH K Potash Alum – Potassium alum – Potassium aluminium fluoride – Potassium amide – Potassium argentocyanide – Potassium arsenite – Potassium azide – Potassium borate – Potassium bromide – KBr Potassium bicarbonate – Potassium bifluoride – Potassium bisulfite – Potassium carbonate – Potassium calcium chloride – Potassium chlorate – Potassium chloride – KCl Potassium chlorite – Potassium chromate – Potassium cyanide – KCN Potassium dichromate – Potassium dithionite – Potassium ferrate – Potassium ferrioxalate – Potassium ferricyanide – Potassium ferrocyanide – Potassium heptafluorotantalate – Potassium hexafluorophosphate – Potassium hydrogen carbonate – Potassium hydrogen fluoride – Potassium hydroxide – KOH Potassium iodide – KI Potassium iodate – Potassium manganate – Potassium monopersulfate – Potassium nitrate – Potassium perbromate – Potassium perchlorate – Potassium periodate – Potassium permanganate – Potassium sodium tartrate – Potassium sulfate – Potassium sulfite – Potassium sulfide – Potassium tartrate – Potassium tetraiodomercurate(II) – Potassium thiocyanate – KSCN Potassium titanyl phosphate – Potassium vanadate – Tripotassium phosphate – Pr Praseodymium(III) chloride – Praseodymium(III) sulfate – Praseodymium(III) bromide – Praseodymium(III) carbonate – Praseodymium(III) chloride – Praseodymium(III) fluoride – Praseodymium(III) iodide – Praseodymium(III) nitrate – Praseodymium(III) oxide – Praseodymium(III) phosphate – Praseodymium(III) sulfate – Praseodymium(III) sulfide – Pm Promethium(III) chloride – Promethium(III) oxide – Promethium(III) bromide – Promethium(III) carbonate – Promethium(III) chloride – Promethium(III) fluoride – Promethium(III) iodide – Promethium(III) nitrate 
– Promethium(III) oxide – Promethium(III) phosphate – Promethium(III) sulfate – Promethium(III) sulfide – R Ra Radium bromide – Radium carbonate – Radium chloride – Radium fluoride – Rn Radon difluoride – Re Rhenium(IV) oxide – Rhenium(VII) oxide – Rhenium heptafluoride – Rhenium hexafluoride – Rh Rhodium hexafluoride – Rhodium pentafluoride – Rhodium(III) chloride – Rhodium(III) hydroxide – Rhodium(III) iodide – Rhodium(III) nitrate – Rhodium(III) oxide – Rhodium(III) sulfate – Rhodium(III) sulfide – Rhodium(IV) fluoride – Rhodium(IV) oxide – Rb Rubidium azide – Rubidium bromide – RbBr Rubidium chloride – RbCl Rubidium fluoride – RbF Rubidium hydrogen sulfate – Rubidium hydroxide – RbOH Rubidium iodide – RbI Rubidium nitrate – Rubidium oxide – Rubidium telluride – Rubidium titanyl phosphate — Ru Ruthenium hexafluoride – Ruthenium pentafluoride – Ruthenium(VIII) oxide – Ruthenium(III) chloride – Ruthenium(IV) oxide – S Sm Samarium(II) iodide – Samarium(III) chloride – Samarium(III) oxide – Samarium(III) bromide – Samarium(III) carbonate – Samarium(III) fluoride – Samarium(III) iodide – Samarium(III) nitrate – Samarium(III) oxide – Samarium(III) phosphate – Samarium(III) sulfate – Samarium(III) sulfide – Sc Scandium(III) fluoride – Scandium(III) nitrate – Scandium(III) oxide – Scandium(III) triflate – Sg Seaborgium hexacarbonyl – Se Selenic acid – Selenious acid – Selenium dibromide – Selenium dioxide – Selenium disulfide – Selenium hexafluoride – Selenium hexasulfide – Selenium oxybromide – Selenium oxydichloride – Selenium tetrachloride – Selenium tetrafluoride – Selenium trioxide – Selenoyl fluoride – Si Silane – Silica gel – Silicic acid – Silicochloroform, trichlorosilane – Silicofluoric acid – Silicon boride – Silicon carbide (carborundum) – SiC Silicon dioxide – Silicon monoxide – SiO Silicon nitride – Silicon tetrabromide – Silicon tetrachloride – Silicon tetrafluoride – Silicon tetraiodide – Thortveitite – Ag Silver(I) fluoride – AgF Silver(II) fluoride – 
Silver acetylide – Silver argentocyanide – Silver azide – Silver bromate – Silver bromide – AgBr Silver chlorate – Silver chloride – AgCl Silver chromate – Silver fluoroborate – Silver fulminate – AgCNO Silver hydroxide – AgOH Silver iodide – AgI Silver nitrate – Silver nitride – Silver oxide – Silver perchlorate – Silver permanganate – Silver phosphate (silver orthophosphate) – Silver subfluoride – Silver sulfate – Silver sulfide – Na Sodamide – Sodium aluminate – Sodium arsenate – Sodium azide – Sodium bicarbonate – Sodium biselenide – NaSeH Sodium bisulfate – Sodium bisulfite – Sodium borate – Sodium borohydride – Sodium bromate – Sodium bromide – NaBr Sodium bromite – Sodium carbide – Sodium carbonate – Sodium chlorate – Sodium chloride – NaCl Sodium chlorite – Sodium cobaltinitrite – Sodium copper tetrachloride – Sodium cyanate – NaCNO Sodium cyanide – NaCN Sodium dichromate – Sodium dioxide – Sodium dithionite – Sodium ferrocyanide – Sodium fluoride – NaF Sodium fluorosilicate – Sodium formate – HCOONa Sodium hydride – NaH Sodium hydrogen carbonate (Sodium bicarbonate) – Sodium hydrosulfide – NaSH Sodium hydroxide – NaOH Sodium hypobromite – NaOBr Sodium hypochlorite – NaOCl Sodium hypoiodite – NaOI Sodium hypophosphite – Sodium iodate – Sodium iodide – NaI Sodium manganate – Sodium molybdate – Sodium monofluorophosphate (MFP) – Sodium nitrate – Sodium nitrite – Sodium nitroprusside – Sodium oxide – Sodium perborate – Sodium perbromate – Sodium percarbonate – Sodium perchlorate – Sodium periodate – Sodium permanganate – Sodium peroxide – Sodium peroxycarbonate – Sodium perrhenate – Sodium persulfate – Sodium phosphate; see trisodium phosphate – Sodium selenate – Sodium selenide – Sodium selenite – Sodium silicate – Sodium sulfate – Sodium sulfide – Sodium sulfite – Sodium tartrate – Sodium tellurite – Sodium tetrachloroaluminate – Sodium tetrafluoroborate – Sodium thioantimoniate – Sodium thiocyanate – NaSCN Sodium thiosulfate – Sodium tungstate – Sodium 
uranate – Sodium zincate – Trisodium phosphate – Sr Strontium bromide – Strontium carbonate – Strontium chloride – Strontium fluoride – Strontium hydroxide – Strontium iodide – Strontium nitrate – Strontium oxide – SrO Strontium titanate – Strontium bicarbonate – Strontium boride – Strontium bromide – Strontium carbide – Strontium carbonate – Strontium chloride – Strontium cyanamide – Strontium fluoride – Strontium fluorophosphate – Strontium gluconate – Strontium hydride – Strontium hydrogen phosphate – Strontium hydroxide – Strontium hypochlorite – Strontium iodide – Strontium molybdate – Strontium nitrate – Strontium oxalate – Strontium oxide – SrO Strontium peroxide – Strontium phosphate – Strontium silicate – Strontium sulfate – Strontium sulfide – SrS Strontium titanate – Strontium tungstate – Strontium zirconate – S Disulfur decafluoride – Hydrogen sulfide (sulfane) – Pyrosulfuric acid – Sulfamic acid – Sulfur dibromide – Sulfur dioxide – Sulfur hexafluoride – Sulfur tetrafluoride – Sulfuric acid – Sulfurous acid – Sulfuryl chloride – Tetrasulfur tetranitride – Persulfuric acid (Caro's acid) – T Ta Tantalum arsenide – TaAs Tantalum carbide – TaC Tantalum pentafluoride – Tantalum(V) oxide – Tc Technetium hexafluoride – Ammonium pertechnetate – Sodium pertechnetate – Te Ditellurium bromide – Telluric acid – Tellurium dioxide – Tellurium hexafluoride – Tellurium tetrabromide – Tellurium tetrachloride – Tellurium tetrafluoride – Tellurium tetraiodide – Tellurous acid – Beryllium telluride – BeTe Bismuth telluride – Cadmium telluride – CdTe Cadmium zinc telluride – Dimethyltelluride – Mercury Cadmium Telluride – Lead telluride – PbTe Mercury telluride – HgTe Mercury zinc telluride – Silver telluride – Tin telluride – SnTe Zinc telluride – ZnTe Teflic acid – Telluric acid – Sodium tellurite – Tellurium dioxide – Tellurium hexafluoride – Tellurium tetrafluoride – Tellurium tetrachloride – Tb Terbium(III) chloride – Terbium(III) bromide – Terbium(III) carbonate – 
Terbium(III) chloride – Terbium(III) fluoride – Terbium(III) iodide – Terbium(III) nitrate – Terbium(III) oxide – Terbium(III) phosphate – Terbium(III) sulfate – Terbium(III) sulfide – Tl Thallium(I) bromide – TlBr Thallium(I) carbonate – Thallium(I) fluoride – TlF Thallium(I) sulfate – Thallium(III) oxide – Thallium(III) sulfate – Thallium triiodide – Thallium antimonide – TlSb Thallium arsenide – TlAs Thallium(III) bromide – Thallium(III) chloride – Thallium(III) fluoride – Thallium(I) iodide – TlI Thallium(III) nitrate – Thallium(I) oxide – Thallium(III) oxide – Thallium phosphide – TlP Thallium(III) selenide – Thallium(III) sulfate – Thallium(III) sulfide – Trimethylthallium – Thallium(I) hydroxide – TlOH Thionyl chloride – Thionyl tetrafluoride – Thiophosgene – Thiophosphoryl chloride – Th Thorium(IV) nitrate – Thorium(IV) sulfate – Thorium dioxide – Thorium tetrafluoride – Tm Thulium(III) bromide – Thulium(III) chloride – Thulium(III) oxide – Sn Stannane – Tin(II) bromide – Tin(II) chloride (stannous chloride) – Tin(II) fluoride – Tin(II) hydroxide – Tin(II) iodide – Tin(II) oxide – SnO Tin(II) sulfate – Tin(II) sulfide – SnS Tin(IV) bromide – Tin(IV) chloride – Tin(IV) fluoride – Tin(IV) iodide – Tin(IV) oxide – Tin(IV) sulfide – Tin(IV) cyanide – Tin selenide – Tin telluride – SnTe Ti Hexafluorotitanic acid – Titanium(II) chloride – Titanium(II) oxide – TiO Titanium(II) sulfide – TiS Titanium(III) bromide – Titanium(III) chloride – Titanium(III) fluoride – Titanium(III) iodide – Titanium(III) oxide – Titanium(III) phosphide – TiP Titanium(IV) bromide (titanium tetrabromide) – Titanium(IV) carbide – TiC Titanium(IV) chloride (titanium tetrachloride) – Titanium(IV) hydride – Titanium(IV) iodide (titanium tetraiodide) – Titanium carbide – TiC Titanium diboride – Titanium dioxide (titanium(IV) oxide) – Titanium diselenide – Titanium disilicide – Titanium disulfide – Titanium nitrate – Titanium nitride – TiN Titanium perchlorate – Titanium silicon carbide
– Titanium tetrabromide – Titanium tetrafluoride – Titanium tetraiodide – TiO Titanyl sulfate – W Tungsten(VI) chloride – Tungsten(VI) fluoride – Tungsten boride – Tungsten carbide – WC Tungstic acid – Tungsten hexacarbonyl – U U Triuranium octaoxide (pitchblende or yellowcake) – Uranium hexafluoride – Uranium pentafluoride – Uranium sulfate – Uranium tetrachloride – Uranium tetrafluoride – Uranium(III) chloride – Uranium(IV) chloride – Uranium(V) chloride – Uranium hexachloride – Uranium(IV) fluoride – Uranium pentafluoride – Uranium(VI) fluoride – Uranyl peroxide – Uranium dioxide – UO2 Uranyl carbonate – Uranyl chloride – Uranyl fluoride – Uranyl hydroxide – Uranyl hydroxide – Uranyl nitrate – Uranyl sulfate – V V Vanadium(II) chloride – Vanadium(II) oxide – VO Vanadium(III) bromide – Vanadium(III) chloride – Vanadium(III) fluoride – Vanadium(III) nitride – VN Vanadium(III) oxide – Vanadium(IV) chloride – Vanadium(IV) fluoride – Vanadium(IV) oxide – Vanadium(IV) sulfate – Vanadium(V) oxide – Vanadium carbide – VC Vanadium oxytrichloride (Vanadium(V) oxide trichloride) – Vanadium pentafluoride – Vanadium tetrachloride – Vanadium tetrafluoride – W Water – X Xe Perxenic acid – Xenon difluoride – Xenon hexafluoride – Xenon hexafluoroplatinate – Xenon tetrafluoride – Xenon tetroxide – Xenic acid – Y Yb Ytterbium(III) chloride – Ytterbium(III) oxide – Ytterbium(III) sulfate – Ytterbium(III) bromide – Ytterbium(III) carbonate – Ytterbium(III) chloride – Ytterbium(III) fluoride – Ytterbium(III) iodide – Ytterbium(III) nitrate – Ytterbium(III) oxide – Ytterbium(III) phosphate – Ytterbium(III) sulfate – Ytterbium(III) sulfide – Y Yttrium(III) antimonide – YSb Yttrium(III) arsenate – Yttrium(III) arsenide – YAs Yttrium(III) bromide – Yttrium(III) fluoride – Yttrium(III) oxide – Yttrium(III) nitrate – Yttrium(III) sulfide – Yttrium(III) sulfate – Yttrium aluminium garnet – Yttrium barium copper oxide – Yttrium cadmium – YCd Yttrium copper – YCu Yttrium gold – YAu Yttrium 
iridium – YIr Yttrium iron garnet – Yttrium magnesium – YMg Yttrium phosphate – Yttrium phosphide – YP Yttrium rhodium – YRh Yttrium silver – YAg Yttrium zinc – YZn Z Zn Zinc arsenide – Zinc bromide – Zinc carbonate – Zinc chloride – Zinc cyanide – Zinc diphosphide – Zinc fluoride – Zinc iodide – Zinc nitrate – Zinc oxide – ZnO Zinc phosphide – Zinc pyrophosphate – Zinc selenate – Zinc selenide – ZnSe Zinc selenite – Zinc selenocyanate – Zinc sulfate – Zinc sulfide – ZnS Zinc sulfite – Zinc telluride – ZnTe Zinc thiocyanate – Zinc tungstate – Zr Zirconia hydrate – Zirconium boride – Zirconium carbide – ZrC Zirconium(IV) chloride – Zirconium(IV) oxide – Zirconium hydroxide – Zirconium orthosilicate – Zirconium nitride – ZrN Zirconium tetrafluoride – Zirconium tetrahydroxide – Zirconium tungstate – Zirconyl bromide – Zirconyl chloride – Zirconyl nitrate – Zirconyl sulfate – Zirconium dioxide – Zirconium nitride – ZrN Zirconium tetrachloride – Zirconium(IV) sulfide – Zirconium(IV) silicide – Zirconium(IV) silicate – Zirconium(IV) fluoride – Zirconium(IV) bromide – Zirconium(IV) iodide – Zirconium(IV) hydroxide – Schwartz's reagent – Zirconium propionate – Zirconium tungstate – Zirconium(II) hydride – Lead zirconate titanate – See also Dictionary of chemical formulas List of alchemical substances List of biomolecules List of compounds List of copper salts List of inorganic compounds named after people List of minerals List of organic compounds List of organic salts Named inorganic compounds Polyatomic ions References External links Inorganic Molecules made thinkable, an interactive visualisation showing inorganic compounds for an array of common metal and non-metal ions Inorganic Inorganic Compounds
List of inorganic compounds
https://en.wikipedia.org/wiki/Carport
A carport is a covered structure used to offer limited protection to vehicles, primarily cars, from rain and snow. The structure can either be free-standing or attached to a wall. Unlike most structures, a carport does not have four walls; it usually has only one or two. Carports offer less protection than garages but allow for more ventilation. In particular, a carport prevents frost on the windshield. A "mobile" and/or "enclosed" carport has the same purpose as a standard carport. However, it can be removed or relocated; it is typically framed with tubular steel and may have a canvas or vinyl covering that encloses the complete frame, including the walls. It may have an accessible front entry or open entryway; it is typically not attached to any structure or fastened in place by permanent means, but is held in place by stakes. It is differentiated from a tent by its main purpose: to house vehicles and/or motorized equipment (a tent shelters people). History The term carport comes from the French term porte-cochère, referring to a covered portal. Renowned architect Frank Lloyd Wright coined the term when he used a carport in the first of his "Usonian" home designs: the house of Herbert Jacobs, built in 1936 in Madison, Wisconsin. Quoting from the Carport Integrity Policy for the Arizona State Historic Preservation Office: As early as 1909, carports were used by the Prairie School architect Walter Burley Griffin in his design for the Sloane House in Elmhurst, Illinois (Gebhard, 1991: 110). By 1913, carports were also being employed by other Prairie School architects such as the Minneapolis firm of Purcell, Feick & Elmslie in their design for a residence at Lockwood Lake, Wisconsin. In this instance, the carport was termed an "Auto Space" (Gebhard, 1991: 110). The late architectural historian David Gebhard suggested that the term "carport" originated from the feature's use in 1930s Streamline Moderne residences (Gebhard, 1991: 107).
This term, which entered popular jargon in 1939, stemmed from the visual connection between these streamlined residences and nautical imagery. In the 1930s through the 1950s, carports were also being used by Frank Lloyd Wright in his Usonian Houses, an idea that he may have gotten from Griffin, a former associate. The W. B. Sloane House in Elmhurst, Illinois, in 1910, is credited as being the first known home designed with a carport. In describing the carport to Mr. Jacobs, architect Wright said, "A car is not a horse, and it doesn't need a barn." He then added, "Cars are built well enough now so that they do not require elaborate shelter." Cars prior to this time were not completely watertight; the era of robotic assembly, advanced materials, and perfect closure lines was still 50 years in the future. The carport was therefore a cheap and effective device for protecting a car. Mr. Jacobs added: "Our cheap second-hand car had stood out all winter at the curb, often in weather far below zero (Fahrenheit). A carport was a downright luxury for it." Solar canopy A solar canopy carport is a structure that elevates an array of photovoltaic panels above ground level so that the area under the panels can be used for other purposes. Many solar canopies are built over parking lots, where in addition to generating renewable power, they also protect the cars from sun, rain and snow. When the lot is not needed for parking, the covered area can be used for other purposes. References
Carport
https://en.wikipedia.org/wiki/Expandable%20graphite
Expandable graphite is produced from the naturally occurring mineral graphite. The layered structure of graphite allows some molecules to be intercalated in between the graphite layers. Through incorporation of acids, usually sulfuric acid, graphite can be converted into expandable graphite. Characteristics If expandable graphite is heated, the graphite flakes will expand to a multiple of their starting volume. The main products in the market have a starting temperature in the range of 200 °C. The expanded flakes have a “worm-like” appearance and are generally several millimeters long. Production To produce expandable graphite, natural graphite flakes are treated in a bath of acid and oxidizing agent. Commonly used oxidizing agents are hydrogen peroxide, potassium permanganate, or chromic acid. Concentrated sulfuric acid or nitric acid is usually used as the compound to be incorporated, with the reaction taking place at temperatures of 30 °C to 130 °C for up to four hours. After the reaction time, the flakes are washed with water and then dried. Starting temperature and expansion rate depend on the production conditions and on the particle size (degree of fineness) of the graphite used. Applications Flame retardant One of the main applications of expandable graphite is as a flame retardant. When exposed to heat, expandable graphite expands and forms an intumescent layer on the material surface. This slows down the spread of fire and counteracts the most dangerous consequences of fire for humans: the formation of toxic gases and smoke. Graphite foil By compressing expanded graphite, foils can be produced from pure graphite. These are mainly used as thermally and chemically highly resistant seals in chemical plant construction or as heat spreaders. Expandable graphite for metallurgy Expandable graphite is also used in metallurgy to cover melts and moulds.
Here the material serves as oxidation protection and as an insulator. Expandable graphite for the chemical industry Expandable graphite is also used in chemical processes for paints and varnishes. References
Expandable graphite
https://en.wikipedia.org/wiki/Ellipse
In mathematics, an ellipse is a plane curve surrounding two focal points, such that for all points on the curve, the sum of the two distances to the focal points is a constant. It generalizes a circle, which is the special type of ellipse in which the two focal points are the same. The elongation of an ellipse is measured by its eccentricity e, a number ranging from 0 (the limiting case of a circle) to 1 (the limiting case of infinite elongation, no longer an ellipse but a parabola). An ellipse has a simple algebraic solution for its area, but for its perimeter (also known as circumference), integration is required to obtain an exact solution. Analytically, the equation of a standard ellipse centered at the origin with width 2a and height 2b is: x²/a² + y²/b² = 1. Assuming a ≥ b, the foci are (±c, 0) for c = √(a² − b²). The standard parametric equation is: (x, y) = (a cos t, b sin t), for 0 ≤ t ≤ 2π. Ellipses are the closed type of conic section: a plane curve tracing the intersection of a cone with a plane (see figure). Ellipses have many similarities with the other two forms of conic sections, parabolas and hyperbolas, both of which are open and unbounded. An angled cross section of a right circular cylinder is also an ellipse. An ellipse may also be defined in terms of one focal point and a line outside the ellipse called the directrix: for all points on the ellipse, the ratio between the distance to the focus and the distance to the directrix is a constant. This constant ratio is the above-mentioned eccentricity: e = c/a = √(1 − b²/a²). Ellipses are common in physics, astronomy and engineering. For example, the orbit of each planet in the Solar System is approximately an ellipse with the Sun at one focus point (more precisely, the focus is the barycenter of the Sun–planet pair). The same is true for moons orbiting planets and all other systems of two astronomical bodies. The shapes of planets and stars are often well described by ellipsoids. A circle viewed from a side angle looks like an ellipse: that is, the ellipse is the image of a circle under parallel or perspective projection.
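The focus-directrix property described above can be checked numerically. The following sketch uses example values of my own (a = 5, b = 3) and the standard fact that the directrix belonging to the focus (c, 0) is the line x = a/e:

```python
import math

# Illustrative check (values a = 5, b = 3 are my own): for every point P on the
# ellipse, dist(P, focus) / dist(P, directrix) equals the eccentricity e.
a, b = 5.0, 3.0
c = math.sqrt(a**2 - b**2)      # linear eccentricity, here 4.0
e = c / a                       # eccentricity, here 0.8
x_dir = a / e                   # directrix for the right focus (c, 0)

for t in [0.3, 1.1, 2.9, 4.0]:
    x, y = a * math.cos(t), b * math.sin(t)
    d_focus = math.hypot(x - c, y)
    d_directrix = abs(x_dir - x)
    assert abs(d_focus / d_directrix - e) < 1e-12
```

The ratio is constant for every sampled point, which is exactly the directrix definition of the eccentricity.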
The ellipse is also the simplest Lissajous figure formed when the horizontal and vertical motions are sinusoids with the same frequency: a similar effect leads to elliptical polarization of light in optics. The name, ἔλλειψις (élleipsis, "omission"), was given by Apollonius of Perga in his Conics. Definition as locus of points An ellipse can be defined geometrically as a set or locus of points in the Euclidean plane: given two fixed points F₁, F₂, called the foci, and a distance 2a that is greater than the distance between the foci, the ellipse is the set of points P such that the sum of the distances |PF₁| + |PF₂| equals 2a. The midpoint of the line segment joining the foci is called the center of the ellipse. The line through the foci is called the major axis, and the line perpendicular to it through the center is the minor axis. The major axis intersects the ellipse at two vertices, which have distance a to the center. The distance c of the foci to the center is called the focal distance or linear eccentricity. The quotient e = c/a is the eccentricity. The case F₁ = F₂ yields a circle and is included as a special type of ellipse. The defining equation can be viewed in a different way (see figure): the circle with center F₂ and radius 2a is called the circular directrix (related to focus F₂) of the ellipse. This property should not be confused with the definition of an ellipse using a directrix line below. Using Dandelin spheres, one can prove that any section of a cone with a plane is an ellipse, assuming the plane does not contain the apex and has slope less than that of the lines on the cone. In Cartesian coordinates Standard equation The standard form of an ellipse in Cartesian coordinates assumes that the origin is the center of the ellipse, the x-axis is the major axis, and the foci are the points F₁ = (c, 0) and F₂ = (−c, 0). For an arbitrary point (x, y) the distance to the focus (c, 0) is √((x − c)² + y²) and to the other focus √((x + c)² + y²). Hence the point is on the ellipse whenever: √((x − c)² + y²) + √((x + c)² + y²) = 2a. Removing the radicals by suitable squarings and using b² = a² − c² (see diagram) produces the standard equation of the ellipse: x²/a² + y²/b² = 1, or, solved for y: y = ±(b/a)√(a² − x²). The width and height parameters a, b are called the semi-major and semi-minor axes. The top and bottom points (0, b) and (0, −b) are the co-vertices. The distances from a point (x, y) on the ellipse to the left and right foci are a + ex and a − ex.
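The locus definition and the standard Cartesian equation describe the same curve, which can be confirmed numerically. A minimal sketch, with example values a = 5, b = 3 of my own choosing:

```python
import math

# Illustrative sketch (a = 5, b = 3 chosen by me): points from the parametric
# form satisfy both the standard equation and the two-focal-distances property.
a, b = 5.0, 3.0
c = math.sqrt(a**2 - b**2)
f1, f2 = (-c, 0.0), (c, 0.0)    # the two foci

for t in [0.0, 0.7, 2.1, 4.4]:
    x, y = a * math.cos(t), b * math.sin(t)
    # Standard equation: x^2/a^2 + y^2/b^2 = 1
    assert abs(x**2 / a**2 + y**2 / b**2 - 1.0) < 1e-12
    # Locus property: |PF1| + |PF2| = 2a
    total = math.dist((x, y), f1) + math.dist((x, y), f2)
    assert abs(total - 2 * a) < 1e-12
```

Both assertions hold for every sampled parameter value, so each parametric point satisfies the implicit equation and has focal-distance sum 2a.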
It follows from the equation that the ellipse is symmetric with respect to the coordinate axes and hence with respect to the origin. Parameters Principal axes Throughout this article, the semi-major and semi-minor axes are denoted a and b, respectively, i.e. a ≥ b > 0. In principle, the canonical ellipse equation may have a < b (and hence the ellipse would be taller than it is wide). This form can be converted to the standard form by transposing the variable names x and y and the parameter names a and b. Linear eccentricity This is the distance from the center to a focus: c = √(a² − b²). Eccentricity The eccentricity can be expressed as: e = c/a = √(1 − (b/a)²), assuming a > b. An ellipse with equal axes (a = b) has zero eccentricity, and is a circle. Semi-latus rectum The length of the chord through one focus, perpendicular to the major axis, is called the latus rectum. One half of it is the semi-latus rectum ℓ. A calculation shows: ℓ = b²/a. The semi-latus rectum is equal to the radius of curvature at the vertices (see section curvature). Tangent An arbitrary line intersects an ellipse at 0, 1, or 2 points, respectively called an exterior line, tangent and secant. Through any point of an ellipse there is a unique tangent. The tangent at a point (x₁, y₁) of the ellipse has the coordinate equation: (x₁/a²)x + (y₁/b²)y = 1. A vector parametric equation of the tangent is: (x, y) = (x₁, y₁) + s(−y₁/b², x₁/a²), s ∈ ℝ. Proof: Let (x₁, y₁) be a point on an ellipse and (x, y) = (x₁ + su, y₁ + sv) be the equation of any line containing (x₁, y₁). Inserting the line's equation into the ellipse equation and respecting x₁²/a² + y₁²/b² = 1 yields: 2s(x₁u/a² + y₁v/b²) + s²(u²/a² + v²/b²) = 0. There are then two cases: (1) x₁u/a² + y₁v/b² = 0. Then the line and the ellipse have only the point (x₁, y₁) in common, and the line is a tangent. The tangent direction has perpendicular vector (x₁/a², y₁/b²), so the tangent line has equation (x₁/a²)x + (y₁/b²)y = k for some k. Because (x₁, y₁) is on the tangent and the ellipse, one obtains k = 1. (2) x₁u/a² + y₁v/b² ≠ 0. Then the line has a second point in common with the ellipse, and is a secant. Using case (1) one finds that (−y₁/b², x₁/a²) is a tangent vector at the point (x₁, y₁), which proves the vector equation. If (x₁, y₁) and (x₂, y₂) are two points of the ellipse such that x₁x₂/a² + y₁y₂/b² = 0, then the points lie on two conjugate diameters (see below).
(If , the ellipse is a circle and "conjugate" means "orthogonal".) Shifted ellipse If the standard ellipse is shifted to have center , its equation is The axes are still parallel to the x- and y-axes. General ellipse In analytic geometry, the ellipse is defined as a quadric: the set of points of the Cartesian plane that, in non-degenerate cases, satisfy the implicit equation provided To distinguish the degenerate cases from the non-degenerate case, let ∆ be the determinant Then the ellipse is a non-degenerate real ellipse if and only if C∆ < 0. If C∆ > 0, we have an imaginary ellipse, and if ∆ = 0, we have a point ellipse. The general equation's coefficients can be obtained from known semi-major axis , semi-minor axis , center coordinates , and rotation angle (the angle from the positive horizontal axis to the ellipse's major axis) using the formulae: These expressions can be derived from the canonical equation by a Euclidean transformation of the coordinates : Conversely, the canonical form parameters can be obtained from the general-form coefficients by the equations: where is the 2-argument arctangent function. Parametric representation Standard parametric representation Using trigonometric functions, a parametric representation of the standard ellipse is: The parameter t (called the eccentric anomaly in astronomy) is not the angle of with the x-axis, but has a geometric meaning due to Philippe de La Hire (see below). Rational representation With the substitution and trigonometric formulae one obtains and the rational parametric equation of an ellipse which covers any point of the ellipse except the left vertex . 
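The conversion from canonical parameters to general-form coefficients described above can be checked numerically. The following sketch (semi-axes, center, and rotation angle are arbitrary sample values) builds the coefficients of Ax² + Bxy + Cy² + Dx + Ey + F = 0 and verifies that points of the rotated, shifted ellipse satisfy the implicit equation.

```python
import math

def general_coefficients(a, b, x0, y0, theta):
    """Coefficients (A, B, C, D, E, F) of A·x² + B·xy + C·y² + D·x + E·y + F = 0
    for an ellipse with semi-axes a, b, center (x0, y0), rotation angle theta."""
    A = a**2 * math.sin(theta)**2 + b**2 * math.cos(theta)**2
    B = 2 * (b**2 - a**2) * math.sin(theta) * math.cos(theta)
    C = a**2 * math.cos(theta)**2 + b**2 * math.sin(theta)**2
    D = -2 * A * x0 - B * y0
    E = -B * x0 - 2 * C * y0
    F = A * x0**2 + B * x0 * y0 + C * y0**2 - a**2 * b**2
    return A, B, C, D, E, F

# Sanity check with sample values: points obtained by rotating and shifting
# the parametric point (a·cos t, b·sin t) must satisfy the general equation.
a, b, x0, y0, theta = 5.0, 3.0, 1.0, -2.0, 0.6
A, B, C, D, E, F = general_coefficients(a, b, x0, y0, theta)
for t in (0.0, 1.0, 2.5, 4.0):
    x = x0 + a * math.cos(t) * math.cos(theta) - b * math.sin(t) * math.sin(theta)
    y = y0 + a * math.cos(t) * math.sin(theta) + b * math.sin(t) * math.cos(theta)
    print(A*x*x + B*x*y + C*y*y + D*x + E*y + F)  # each residual is ≈ 0
```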
For this formula represents the right upper quarter of the ellipse moving counter-clockwise with increasing The left vertex is the limit Alternately, if the parameter is considered to be a point on the real projective line , then the corresponding rational parametrization is Then Rational representations of conic sections are commonly used in computer-aided design (see Bézier curve). Tangent slope as parameter A parametric representation, which uses the slope of the tangent at a point of the ellipse can be obtained from the derivative of the standard representation : With help of trigonometric formulae one obtains: Replacing and of the standard representation yields: Here is the slope of the tangent at the corresponding ellipse point, is the upper and the lower half of the ellipse. The vertices, having vertical tangents, are not covered by the representation. The equation of the tangent at point has the form . The still unknown can be determined by inserting the coordinates of the corresponding ellipse point : This description of the tangents of an ellipse is an essential tool for the determination of the orthoptic of an ellipse. The orthoptic article contains another proof, without differential calculus and trigonometric formulae. General ellipse Another definition of an ellipse uses affine transformations: Any ellipse is an affine image of the unit circle with equation . Parametric representation An affine transformation of the Euclidean plane has the form , where is a regular matrix (with non-zero determinant) and is an arbitrary vector. If are the column vectors of the matrix , the unit circle , , is mapped onto the ellipse: Here is the center and are the directions of two conjugate diameters, in general not perpendicular. Vertices The four vertices of the ellipse are , for a parameter defined by: (If , then .) This is derived as follows. 
The tangent vector at point is: At a vertex parameter , the tangent is perpendicular to the major/minor axes, so: Expanding and applying the identities gives the equation for Area From Apollonios's theorem (see below) one obtains: The area of an ellipse is Semiaxes With the abbreviations the statements of Apollonios's theorem can be written as: Solving this nonlinear system for yields the semiaxes: Implicit representation Solving the parametric representation for by Cramer's rule and using , one obtains the implicit representation Conversely: If the equation with of an ellipse centered at the origin is given, then the two vectors point to two conjugate points and the tools developed above are applicable. Example: For the ellipse with equation the vectors are Rotated standard ellipse For one obtains a parametric representation of the standard ellipse rotated by angle : Ellipse in space The definition of an ellipse in this section gives a parametric representation of an arbitrary ellipse, even in space, if one allows to be vectors in space. Polar forms Polar form relative to center In polar coordinates, with the origin at the center of the ellipse and with the angular coordinate measured from the major axis, the ellipse's equation is where is the eccentricity, not Euler's number. Polar form relative to focus If instead we use polar coordinates with the origin at one focus, with the angular coordinate still measured from the major axis, the ellipse's equation is where the sign in the denominator is negative if the reference direction points towards the center (as illustrated on the right), and positive if that direction points away from the center. The angle is called the true anomaly of the point. The numerator is the semi-latus rectum. Eccentricity and the directrix property Each of the two lines parallel to the minor axis, and at a distance of from it, is called a directrix of the ellipse (see diagram).
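The two polar forms above can be checked numerically. In the sketch below (semi-axes a = 5, b = 3 are sample values), the center form is written as r(θ) = ab / √((b cos θ)² + (a sin θ)²), which follows from substituting (r cos θ, r sin θ) into the Cartesian equation, and the focus form as r(θ) = ℓ/(1 + e cos θ) with the reference direction pointing away from the center.

```python
import math

a, b = 5.0, 3.0
c = math.sqrt(a*a - b*b)
e = c / a
ell = b*b / a                        # semi-latus rectum

def r_center(theta):
    # Polar form with the origin at the center of the ellipse.
    return a*b / math.sqrt((b*math.cos(theta))**2 + (a*math.sin(theta))**2)

def r_focus(theta):
    # Polar form with the origin at one focus; the "+" sign corresponds to a
    # reference direction pointing away from the center.
    return ell / (1 + e*math.cos(theta))

# The center form reproduces the Cartesian equation.
t = 0.7
x, y = r_center(t)*math.cos(t), r_center(t)*math.sin(t)
print(x*x/(a*a) + y*y/(b*b))         # ≈ 1

# The focus form gives the nearest vertex at θ = 0 and the farthest at θ = π.
print(r_focus(0.0), a*(1 - e))
print(r_focus(math.pi), a*(1 + e))
```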
For an arbitrary point of the ellipse, the quotient of the distance to one focus and to the corresponding directrix (see diagram) is equal to the eccentricity: The proof for the pair follows from the fact that and satisfy the equation The second case is proven analogously. The converse is also true and can be used to define an ellipse (in a manner similar to the definition of a parabola): For any point (focus), any line (directrix) not through , and any real number with the ellipse is the locus of points for which the quotient of the distances to the point and to the line is that is: The extension to , which is the eccentricity of a circle, is not allowed in this context in the Euclidean plane. However, one may consider the directrix of a circle to be the line at infinity in the projective plane. (The choice yields a parabola, and if , a hyperbola.) Proof Let , and assume is a point on the curve. The directrix has equation . With , the relation produces the equations and The substitution yields This is the equation of an ellipse (), or a parabola (), or a hyperbola (). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram). If , introduce new parameters so that , and then the equation above becomes which is the equation of an ellipse with center , the x-axis as major axis, and the major/minor semi-axes . Construction of a directrix Because of , point of the directrix (see diagram) and the focus are inverse with respect to inversion in the circle (green in the diagram). Hence can be constructed as shown in the diagram. The directrix is the perpendicular to the main axis at point . General ellipse If the focus is and the directrix , one obtains the equation (The right side of the equation uses the Hesse normal form of a line to calculate the distance .) Focus-to-focus reflection property An ellipse possesses the following property: The normal at a point bisects the angle between the lines .
Proof Because the tangent line is perpendicular to the normal, an equivalent statement is that the tangent is the external angle bisector of the lines to the foci (see diagram). Let be the point on the line with distance to the focus , where is the semi-major axis of the ellipse. Let line be the external angle bisector of the lines and Take any other point on By the triangle inequality and the angle bisector theorem, therefore must be outside the ellipse. As this is true for every choice of only intersects the ellipse at the single point so must be the tangent line. Application The rays from one focus are reflected by the ellipse to the second focus. This property has optical and acoustic applications similar to the reflective property of a parabola (see whispering gallery). Additionally, because of the focus-to-focus reflection property of ellipses, if the rays are allowed to continue propagating, reflected rays will eventually align closely with the major axis. Conjugate diameters Definition of conjugate diameters A circle has the following property: The midpoints of parallel chords lie on a diameter. An affine transformation preserves parallelism and midpoints of line segments, so this property is true for any ellipse. (Note that the parallel chords and the diameter are no longer orthogonal.) Definition Two diameters of an ellipse are conjugate if the midpoints of chords parallel to lie on From the diagram one finds: Two diameters of an ellipse are conjugate whenever the tangents at and are parallel to . Conjugate diameters in an ellipse generalize orthogonal diameters in a circle. In the parametric equation for a general ellipse given above, any pair of points belong to a diameter, and the pair belong to its conjugate diameter. 
For the common parametric representation of the ellipse with equation one gets: The points (signs: (+,+) or (−,−) ) (signs: (−,+) or (+,−) ) are conjugate and In the case of a circle the last equation collapses to Theorem of Apollonios on conjugate diameters For an ellipse with semi-axes the following is true: Let and be halves of two conjugate diameters (see diagram); then . The triangle with sides (see diagram) has the constant area , which can be expressed by , too. is the altitude of point and the angle between the half diameters. Hence the area of the ellipse (see section metric properties) can be written as . The parallelogram of tangents adjacent to the given conjugate diameters has a constant area. Proof Let the ellipse be in the canonical form with parametric equation The two points are on conjugate diameters (see previous section). From trigonometric formulae one obtains and The area of the triangle generated by is and from the diagram it can be seen that the area of the parallelogram is 8 times that of . Hence Orthogonal tangents For the ellipse the intersection points of orthogonal tangents lie on the circle . This circle is called the orthoptic or director circle of the ellipse (not to be confused with the circular directrix defined above). Drawing ellipses Ellipses appear in descriptive geometry as images (parallel or central projection) of circles. There exist various tools to draw an ellipse. Computers provide the fastest and most accurate method for drawing an ellipse. However, technical tools (ellipsographs) to draw an ellipse without a computer exist. The principle was known to the 5th-century mathematician Proclus, and the tool now known as an elliptical trammel was invented by Leonardo da Vinci. If there is no ellipsograph available, one can draw an ellipse using an approximation by the four osculating circles at the vertices. For any method described below, knowledge of the axes and the semi-axes is necessary (or equivalently: the foci and the semi-major axis).
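The relations of Apollonios's theorem can be verified numerically for the conjugate half-diameters p(t) = (a cos t, b sin t) and q(t) = (−a sin t, b cos t); the sketch below (sample semi-axes a = 5, b = 3) checks the constant sum of squares and the constant parallelogram area.

```python
import math

a, b = 5.0, 3.0

# Conjugate half-diameters of the canonical ellipse at several parameters t.
for t in (0.0, 0.4, 1.3):
    p = (a*math.cos(t), b*math.sin(t))
    q = (-a*math.sin(t), b*math.cos(t))
    # (1) The sum of the squared lengths is always a² + b².
    assert abs(p[0]**2 + p[1]**2 + q[0]**2 + q[1]**2 - (a*a + b*b)) < 1e-9
    # (2) The parallelogram spanned by p and q has constant area |p × q| = ab,
    #     so the parallelogram of tangents has area 4ab.
    cross = p[0]*q[1] - p[1]*q[0]
    assert abs(abs(cross) - a*b) < 1e-9

print("Apollonios's relations hold; tangent parallelogram area =", 4*a*b)
```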
If this presumption is not fulfilled, one has to know at least two conjugate diameters. With the help of Rytz's construction the axes and semi-axes can be retrieved. de La Hire's point construction The following construction of single points of an ellipse is due to de La Hire. It is based on the standard parametric representation of an ellipse: Draw the two circles centered at the center of the ellipse with radii and the axes of the ellipse. Draw a line through the center, which intersects the two circles at point and , respectively. Draw a line through that is parallel to the minor axis and a line through that is parallel to the major axis. These lines meet at an ellipse point (see diagram). Repeat steps (2) and (3) with different lines through the center. Pins-and-string method The characterization of an ellipse as the locus of points such that the sum of the distances to the foci is constant leads to a method of drawing one using two drawing pins, a length of string, and a pencil. In this method, pins are pushed into the paper at two points, which become the ellipse's foci. A string is tied at each end to the two pins; its length after tying is . The tip of the pencil then traces an ellipse if it is moved while keeping the string taut. Using two pegs and a rope, gardeners use this procedure to outline an elliptical flower bed—thus it is called the gardener's ellipse. The Byzantine architect Anthemius of Tralles described how this method could be used to construct an elliptical reflector, and it was elaborated in a now-lost 9th-century treatise by Al-Ḥasan ibn Mūsā. A similar method for drawing confocal ellipses with a closed string is due to the Irish bishop Charles Graves. Paper strip methods The two following methods rely on the parametric representation (see , above): This representation can be modeled technically by two simple methods. In both cases the center, the axes, and the semi-axes have to be known. Method 1 The first method starts with a strip of paper of length .
The point where the semi-axes meet is marked by . If the strip slides with both ends on the axes of the desired ellipse, then point traces the ellipse. For the proof one shows that point has the parametric representation , where parameter is the angle of the slope of the paper strip. A technical realization of the motion of the paper strip can be achieved by a Tusi couple (see animation). The device is able to draw any ellipse with a fixed sum , which is the radius of the large circle. This restriction may be a disadvantage in real life. More flexible is the second paper strip method. A variation of the paper strip method 1 uses the observation that the midpoint of the paper strip is moving on the circle with center (of the ellipse) and radius . Hence, the paper strip can be cut at point into halves, connected again by a joint at and the sliding end fixed at the center (see diagram). After this operation the movement of the unchanged half of the paper strip is unchanged. This variation requires only one sliding shoe. Method 2 The second method starts with a strip of paper of length . One marks the point which divides the strip into two substrips of length and . The strip is positioned onto the axes as described in the diagram. Then the free end of the strip traces an ellipse while the strip is moved. For the proof, one recognizes that the tracing point can be described parametrically by , where parameter is the angle of slope of the paper strip. This method is the basis for several ellipsographs (see section below). Similar to the variation of paper strip method 1, a variation of paper strip method 2 can be established (see diagram) by cutting the part between the axes into halves. Most ellipsograph drafting instruments are based on the second paper strip method.
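Both de La Hire's construction and the paper strip methods realize the same standard parametric representation (a cos t, b sin t). The sketch below (sample semi-axes a = 5, b = 3) mimics de La Hire's construction: the x-coordinate comes from the big auxiliary circle of radius a and the y-coordinate from the small circle of radius b, at the same central angle.

```python
import math

a, b = 5.0, 3.0   # the radii of the two auxiliary circles equal the semi-axes

def de_la_hire_point(t):
    """A single ellipse point from de La Hire's construction: x from the big
    circle (radius a), y from the small circle (radius b), same angle t."""
    big = (a*math.cos(t), a*math.sin(t))
    small = (b*math.cos(t), b*math.sin(t))
    return big[0], small[1]

# Every constructed point satisfies the standard equation.
for t in (0.3, 1.0, 2.2, 5.0):
    x, y = de_la_hire_point(t)
    assert abs(x*x/(a*a) + y*y/(b*b) - 1) < 1e-12
print("all constructed points lie on the ellipse")
```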
Approximation by osculating circles From Metric properties below, one obtains: The radius of curvature at the vertices is: The radius of curvature at the co-vertices is: The diagram shows an easy way to find the centers of curvature at vertex and co-vertex , respectively: mark the auxiliary point and draw the line segment draw the line through , which is perpendicular to the line the intersection points of this line with the axes are the centers of the osculating circles. (proof: simple calculation.) The centers for the remaining vertices are found by symmetry. With help of a French curve one draws a curve, which has smooth contact to the osculating circles. Steiner generation The following method to construct single points of an ellipse relies on the Steiner generation of a conic section: Given two pencils of lines at two points (all lines containing and , respectively) and a projective but not perspective mapping of onto , then the intersection points of corresponding lines form a non-degenerate projective conic section. For the generation of points of the ellipse one uses the pencils at the vertices . Let be an upper co-vertex of the ellipse and . is the center of the rectangle . The side of the rectangle is divided into n equal spaced line segments and this division is projected parallel with the diagonal as direction onto the line segment and assign the division as shown in the diagram. The parallel projection together with the reverse of the orientation is part of the projective mapping between the pencils at and needed. The intersection points of any two related lines and are points of the uniquely defined ellipse. With help of the points the points of the second quarter of the ellipse can be determined. Analogously one obtains the points of the lower half of the ellipse. Steiner generation can also be defined for hyperbolas and parabolas. 
It is sometimes called a parallelogram method because one can use points other than the vertices, starting with a parallelogram instead of a rectangle. As hypotrochoid The ellipse is a special case of the hypotrochoid when , as shown in the adjacent image. The special case of a moving circle with radius inside a circle with radius is called a Tusi couple. Inscribed angles and three-point form Circles A circle with equation is uniquely determined by three points not on a line. A simple way to determine the parameters uses the inscribed angle theorem for circles: For four points (see diagram) the following statement is true: The four points are on a circle if and only if the angles at and are equal. Usually one measures inscribed angles in degrees or radians θ, but here the following measurement is more convenient: In order to measure the angle between two lines with equations one uses the quotient: Inscribed angle theorem for circles For four points no three of them on a line, we have the following (see diagram): The four points are on a circle if and only if the angles at and are equal. In terms of the angle measurement above, this means: At first the measure is available only for chords not parallel to the y-axis, but the final formula works for any chord. Three-point form of circle equation As a consequence, one obtains an equation for the circle determined by three non-collinear points : For example, for the three-point equation is: , which can be rearranged to Using vectors, dot products and determinants this formula can be arranged more clearly, letting : The center of the circle satisfies: The radius is the distance between any of the three points and the center. Ellipses This section considers the family of ellipses defined by equations with a fixed eccentricity . It is convenient to use the parameter: and to write the ellipse equation as: where q is fixed and vary over the real numbers.
(Such ellipses have their axes parallel to the coordinate axes: if , the major axis is parallel to the x-axis; if , it is parallel to the y-axis.) Like a circle, such an ellipse is determined by three points not on a line. For this family of ellipses, one introduces the following q-analog angle measure, which is not a function of the usual angle measure θ: In order to measure an angle between two lines with equations one uses the quotient: Inscribed angle theorem for ellipses Given four points , no three of them on a line (see diagram). The four points are on an ellipse with equation if and only if the angles at and are equal in the sense of the measurement above—that is, if At first the measure is available only for chords which are not parallel to the y-axis, but the final formula works for any chord. The proof follows from a straightforward calculation. For the direction of proof given that the points are on an ellipse, one can assume that the center of the ellipse is the origin. Three-point form of ellipse equation As a consequence, one obtains an equation for the ellipse determined by three non-collinear points : For example, for and one obtains the three-point form and after conversion Analogously to the circle case, the equation can be written more clearly using vectors: where is the modified dot product Pole-polar relation Any ellipse can be described in a suitable coordinate system by an equation . The equation of the tangent at a point of the ellipse is If one allows point to be an arbitrary point different from the origin, then point is mapped onto the line , not through the center of the ellipse. This relation between points and lines is a bijection. The inverse function maps line onto the point and line onto the point Such a relation between points and lines generated by a conic is called a pole-polar relation or polarity. The pole is the point; the polar is the line.
By calculation one can confirm the following properties of the pole-polar relation of the ellipse: For a point (pole) on the ellipse, the polar is the tangent at this point (see diagram). For a pole outside the ellipse, the intersection points of its polar with the ellipse are the tangency points of the two tangents passing through it (see diagram). For a point within the ellipse, the polar has no point in common with the ellipse (see diagram). The intersection point of two polars is the pole of the line through their poles. The foci and , respectively, and the directrices and , respectively, belong to pairs of pole and polar. Because they are even polar pairs with respect to the circle , the directrices can be constructed by compass and straightedge (see Inversive geometry). Pole-polar relations exist for hyperbolas and parabolas as well. Metric properties All metric properties given below refer to an ellipse with equation except for the section on the area enclosed by a tilted ellipse, where the generalized form of Eq.() will be given. Area The area enclosed by an ellipse is: where and are the lengths of the semi-major and semi-minor axes, respectively. The area formula is intuitive: start with a circle of radius (so its area is ) and stretch it by a factor to make an ellipse. This scales the area by the same factor: However, using the same approach for the circumference would be fallacious – compare the integrals and . It is also easy to rigorously prove the area formula using integration as follows. Equation () can be rewritten as For this curve is the top half of the ellipse. So twice the integral of over the interval will be the area of the ellipse: The second integral is the area of a circle of radius that is, So An ellipse defined implicitly by has area The area can also be expressed in terms of eccentricity and the length of the semi-major axis as (obtained by solving for flattening, then computing the semi-minor axis).
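The integration argument above is easy to reproduce numerically: integrating the top half y = b·√(1 − x²/a²) over [−a, a] and doubling the result should approach πab. A sketch with sample semi-axes and a simple midpoint rule:

```python
import math

a, b = 5.0, 3.0

# Midpoint-rule approximation of the area under the top half of the ellipse,
# doubled to cover the bottom half; should approach π·a·b as n grows.
n = 100_000
h = 2*a / n
half = sum(b * math.sqrt(max(0.0, 1 - ((-a + (i + 0.5)*h)/a)**2)) * h
           for i in range(n))
print(2*half, math.pi*a*b)   # the two values agree to several decimals
```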
So far we have dealt with erect ellipses, whose major and minor axes are parallel to the and axes. However, some applications require tilted ellipses. In charged-particle beam optics, for instance, the enclosed area of an erect or tilted ellipse is an important property of the beam, its emittance. In this case a simple formula still applies, namely where , are intercepts and , are maximum values. It follows directly from Apollonios's theorem. Circumference The circumference of an ellipse is: where again is the length of the semi-major axis, is the eccentricity, and the function is the complete elliptic integral of the second kind, which is in general not an elementary function. The circumference of the ellipse may be evaluated in terms of using Gauss's arithmetic-geometric mean; this is a quadratically converging iterative method (see here for details). The exact infinite series is: where is the double factorial (extended to negative odd integers in the usual way, giving and ). This series converges, but by expanding in terms of James Ivory, Bessel and Kummer derived a series that converges much more rapidly. It is most concisely written in terms of the binomial coefficient with : The coefficients are slightly smaller (by a factor of ), but also is numerically much smaller than except at and . For eccentricities less than 0.5 the error is at the limits of double-precision floating-point after the term. Srinivasa Ramanujan gave two close approximations for the circumference in §16 of "Modular Equations and Approximations to "; they are and where takes on the same meaning as above. The errors in these approximations, which were obtained empirically, are of order and respectively. This is because the second formula's infinite series expansion matches Ivory's formula up to the term. 
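The circumference formula C = 4a·E(e) and Ramanujan's first approximation can be compared directly. The sketch below (sample semi-axes a = 5, b = 3) evaluates the complete elliptic integral of the second kind by Simpson's rule rather than by the series or the arithmetic-geometric mean:

```python
import math

a, b = 5.0, 3.0
e2 = 1 - (b*b)/(a*a)                  # squared eccentricity e²

# E(e) = ∫₀^{π/2} sqrt(1 − e²·sin²θ) dθ, evaluated by Simpson's rule;
# the circumference is then C = 4a·E(e).
n = 10_000                            # even number of subintervals
h = (math.pi/2) / n
f = lambda t: math.sqrt(1 - e2*math.sin(t)**2)
E = (h/3) * (f(0) + f(math.pi/2)
             + 4*sum(f(i*h) for i in range(1, n, 2))
             + 2*sum(f(i*h) for i in range(2, n, 2)))
C_exact = 4*a*E

# Ramanujan's first approximation.
C_ramanujan = math.pi * (3*(a + b) - math.sqrt((3*a + b)*(a + 3*b)))
print(C_exact, C_ramanujan)           # agreement to several decimal places
```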
Arc length More generally, the arc length of a portion of the circumference, as a function of the angle subtended (or of any two points on the upper half of the ellipse), is given by an incomplete elliptic integral. The upper half of an ellipse is parameterized by Then the arc length from to is: This is equivalent to where is the incomplete elliptic integral of the second kind with parameter Some lower and upper bounds on the circumference of the canonical ellipse with are Here the upper bound is the circumference of a circumscribed concentric circle passing through the endpoints of the ellipse's major axis, and the lower bound is the perimeter of an inscribed rhombus with vertices at the endpoints of the major and the minor axes. Given an ellipse whose axes are drawn, we can construct the endpoints of a particular elliptic arc whose length is one eighth of the ellipse's circumference using only straightedge and compass in a finite number of steps; for some specific shapes of ellipses, such as when the axes have a length ratio of , it is additionally possible to construct the endpoints of a particular arc whose length is one twelfth of the circumference. (The vertices and co-vertices are already endpoints of arcs whose length is one half or one quarter of the ellipse's circumference.) However, the general theory of straightedge-and-compass elliptic division appears to be unknown, unlike in the case of the circle and the lemniscate. The division in special cases has been investigated by Legendre in his classical treatise. Curvature The curvature is given by: and the radius of curvature, ρ = 1/κ, at point : The radius of curvature of an ellipse, as a function of angle from the center, is: where e is the eccentricity. Radius of curvature at the two vertices and the centers of curvature: Radius of curvature at the two co-vertices and the centers of curvature: The locus of all the centers of curvature is called an evolute. 
In the case of an ellipse, the evolute is an astroid. In triangle geometry Ellipses appear in triangle geometry as Steiner ellipse: ellipse through the vertices of the triangle with center at the centroid, inellipses: ellipses which touch the sides of a triangle. Special cases are the Steiner inellipse and the Mandart inellipse. As plane sections of quadrics Ellipses appear as plane sections of the following quadrics: Ellipsoid Elliptic cone Elliptic cylinder Hyperboloid of one sheet Hyperboloid of two sheets Applications Physics Elliptical reflectors and acoustics If the water's surface is disturbed at one focus of an elliptical water tank, the circular waves of that disturbance, after reflecting off the walls, converge simultaneously to a single point: the second focus. This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci. Similarly, if a light source is placed at one focus of an elliptic mirror, all light rays on the plane of the ellipse are reflected to the second focus. Since no other smooth curve has such a property, it can be used as an alternative definition of an ellipse. (In the special case of a circle with a source at its center all light would be reflected back to the center.) If the ellipse is rotated along its major axis to produce an ellipsoidal mirror (specifically, a prolate spheroid), this property holds for all rays out of the source. Alternatively, a cylindrical mirror with elliptical cross-section can be used to focus light from a linear fluorescent lamp along a line of the paper; such mirrors are used in some document scanners. Sound waves are reflected in a similar way, so in a large elliptical room a person standing at one focus can hear a person standing at the other focus remarkably well. The effect is even more evident under a vaulted roof shaped as a section of a prolate spheroid. Such a room is called a whisper chamber. 
The same effect can be demonstrated with two reflectors shaped like the end caps of such a spheroid, placed facing each other at the proper distance. Examples are the National Statuary Hall at the United States Capitol (where John Quincy Adams is said to have used this property for eavesdropping on political matters); the Mormon Tabernacle at Temple Square in Salt Lake City, Utah; at an exhibit on sound at the Museum of Science and Industry in Chicago; in front of the University of Illinois at Urbana–Champaign Foellinger Auditorium; and also at a side chamber of the Palace of Charles V, in the Alhambra. Planetary orbits In the 17th century, Johannes Kepler discovered that the orbits along which the planets travel around the Sun are ellipses with the Sun [approximately] at one focus, in his first law of planetary motion. Later, Isaac Newton explained this as a corollary of his law of universal gravitation. More generally, in the gravitational two-body problem, if the two bodies are bound to each other (that is, the total energy is negative), their orbits are similar ellipses with the common barycenter being one of the foci of each ellipse. The other focus of either ellipse has no known physical significance. The orbit of either body in the reference frame of the other is also an ellipse, with the other body at the same focus. Keplerian elliptical orbits are the result of any radially directed attraction force whose strength is inversely proportional to the square of the distance. Thus, in principle, the motion of two oppositely charged particles in empty space would also be an ellipse. (However, this conclusion ignores losses due to electromagnetic radiation and quantum effects, which become significant when the particles are moving at high speed.) 
For elliptical orbits, useful relations involving the eccentricity are: where is the radius at apoapsis, i.e., the farthest distance of the orbit to the barycenter of the system, which is a focus of the ellipse is the radius at periapsis, the closest distance is the length of the semi-major axis Also, in terms of and , the semi-major axis is their arithmetic mean, the semi-minor axis is their geometric mean, and the semi-latus rectum is their harmonic mean. In other words, Harmonic oscillators The general solution for a harmonic oscillator in two or more dimensions is also an ellipse. Such is the case, for instance, of a long pendulum that is free to move in two dimensions; of a mass attached to a fixed point by a perfectly elastic spring; or of any object that moves under influence of an attractive force that is directly proportional to its distance from a fixed attractor. Unlike Keplerian orbits, however, these "harmonic orbits" have the center of attraction at the geometric center of the ellipse, and have fairly simple equations of motion. Phase visualization In electronics, the relative phase of two sinusoidal signals can be compared by feeding them to the vertical and horizontal inputs of an oscilloscope. If the Lissajous figure display is an ellipse, rather than a straight line, the two signals are out of phase. Elliptical gears Two non-circular gears with the same elliptical outline, each pivoting around one focus and positioned at the proper angle, turn smoothly while maintaining contact at all times. Alternatively, they can be connected by a link chain or timing belt, or in the case of a bicycle the main chainring may be elliptical, or an ovoid similar to an ellipse in form. Such elliptical gears may be used in mechanical equipment to produce variable angular speed or torque from a constant rotation of the driving axle, or in the case of a bicycle to allow a varying crank rotation speed with inversely varying mechanical advantage. 
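The orbital relations above (semi-major axis as arithmetic mean, semi-minor axis as geometric mean, semi-latus rectum as harmonic mean of the apoapsis and periapsis radii) can be checked with sample values:

```python
import math

# Apoapsis and periapsis radii of a sample orbit (arbitrary illustrative values).
r_a, r_p = 9.0, 1.0

e = (r_a - r_p) / (r_a + r_p)        # eccentricity
a = (r_a + r_p) / 2                  # semi-major axis: arithmetic mean
b = math.sqrt(r_a * r_p)             # semi-minor axis: geometric mean
ell = 2 * r_a * r_p / (r_a + r_p)    # semi-latus rectum: harmonic mean

print(e, a, b, ell)
assert abs(ell - b*b/a) < 1e-12      # consistent with ℓ = b²/a
```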
Elliptical bicycle gears make it easier for the chain to slide off the cog when changing gears. An example gear application would be a device that winds thread onto a conical bobbin on a spinning machine. The bobbin would need to wind faster when the thread is near the apex than when it is near the base. Optics In a material that is optically anisotropic (birefringent), the refractive index depends on the direction of the light. The dependency can be described by an index ellipsoid. (If the material is optically isotropic, this ellipsoid is a sphere.) In lamp-pumped solid-state lasers, elliptical cylinder-shaped reflectors have been used to direct light from the pump lamp (coaxial with one ellipse focal axis) to the active medium rod (coaxial with the second focal axis). In laser-plasma produced EUV light sources used in microchip lithography, EUV light is generated by plasma positioned in the primary focus of an ellipsoid mirror and is collected in the secondary focus at the input of the lithography machine. Statistics and finance In statistics, a bivariate random vector is jointly elliptically distributed if its iso-density contours—loci of equal values of the density function—are ellipses. The concept extends to an arbitrary number of elements of the random vector, in which case in general the iso-density contours are ellipsoids. A special case is the multivariate normal distribution. The elliptical distributions are important in the financial field because if rates of return on assets are jointly elliptically distributed then all portfolios can be characterized completely by their mean and variance—that is, any two portfolios with identical mean and variance of portfolio return have identical distributions of portfolio return. Computer graphics Drawing an ellipse as a graphics primitive is common in standard display libraries, such as the Macintosh QuickDraw API and Direct2D on Windows.
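The iso-density ellipses mentioned under Statistics and finance above can be computed directly from a 2×2 covariance matrix, whose eigen-decomposition is available in closed form. A hedged sketch (the function name and the closed-form approach are my own, not from the article):

```python
import math

def contour_ellipse_axes(sxx, syy, sxy):
    """Half-axis lengths and orientation of an iso-density ellipse of a
    bivariate normal with covariance [[sxx, sxy], [sxy, syy]].

    For a 2x2 symmetric matrix the eigenvalues have a closed form; the
    ellipse axes are aligned with the eigenvectors and scale with the
    square roots of the eigenvalues."""
    tr = sxx + syy
    det = sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc     # eigenvalues, lam1 >= lam2
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)  # orientation of major axis
    return math.sqrt(lam1), math.sqrt(lam2), theta

# Diagonal covariance: axis-aligned ellipse with half-axes sqrt(3) and 1.
semi_major, semi_minor, angle = contour_ellipse_axes(3.0, 1.0, 0.0)
```

When the two variances are equal and the covariance term is zero, the contour degenerates to a circle, matching the isotropic special case.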
Jack Bresenham at IBM is most famous for the invention of 2D drawing primitives, including line and circle drawing, using only fast integer operations such as addition and branch on carry bit. M. L. V. Pitteway extended Bresenham's algorithm for lines to conics in 1967. Another efficient generalization to draw ellipses was invented in 1984 by Jerry Van Aken. In 1970 Danny Cohen presented at the "Computer Graphics 1970" conference in England a linear algorithm for drawing ellipses and circles. In 1971, L. B. Smith published similar algorithms for all conic sections and proved them to have good properties. These algorithms need only a few multiplications and additions to calculate each vector. It is beneficial to use a parametric formulation in computer graphics because the density of points is greatest where there is the most curvature. Thus, the change in slope between each successive point is small, reducing the apparent "jaggedness" of the approximation. Drawing with Bézier paths Composite Bézier curves may also be used to draw an ellipse to sufficient accuracy, since any ellipse may be construed as an affine transformation of a circle. The spline methods used to draw a circle may be used to draw an ellipse, since the constituent Bézier curves behave appropriately under such transformations. Optimization theory It is sometimes useful to find the minimum bounding ellipse on a set of points. The ellipsoid method is quite useful for solving this problem. 
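The point-density property noted under Computer graphics above is easy to see with the parametric form. A small Python sketch (illustrative only, not any particular library's API):

```python
import math

def ellipse_points(a, b, n):
    """Sample an ellipse via the parametric form (a*cos t, b*sin t) with
    uniform steps in t.  Uniform t clusters points near the ends of the
    major axis, where curvature is greatest -- the property that keeps
    the rendered outline looking smooth."""
    step = 2 * math.pi / n
    return [(a * math.cos(k * step), b * math.sin(k * step)) for k in range(n)]

pts = ellipse_points(4.0, 2.0, 64)
# Consecutive points are closer together near (4, 0), the high-curvature
# end of the major axis, than near (0, 2) on the minor axis.
```

Every sample satisfies the implicit equation (x/a)² + (y/b)² = 1, so the polyline approximation only ever deviates between samples.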
See also Cartesian oval, a generalization of the ellipse Circumconic and inconic Distance of closest approach of ellipses Ellipse fitting Elliptic coordinates, an orthogonal coordinate system based on families of ellipses and hyperbolae Elliptic partial differential equation Elliptical distribution, in statistics Elliptical dome Geodesics on an ellipsoid Great ellipse Kepler's laws of planetary motion n-ellipse, a generalization of the ellipse for n foci Oval Perimeter of an ellipse Spheroid, the ellipsoid obtained by rotating an ellipse about its major or minor axis Stadium (geometry), a two-dimensional geometric shape constructed of a rectangle with semicircles at a pair of opposite sides Steiner circumellipse, the unique ellipse circumscribing a triangle and sharing its centroid Superellipse, a generalization of an ellipse that can look more rectangular or more "pointy" True, eccentric, and mean anomaly Notes References External links Apollonius' Derivation of the Ellipse at Convergence The Shape and History of The Ellipse in Washington, D.C. by Clark Kimberling Ellipse circumference calculator Collection of animated ellipse demonstrations Trammel according Frans van Schooten by Matt Parker Conic sections Plane curves Elementary shapes Algebraic curves
https://en.wikipedia.org/wiki/Spectral%20purity
Spectral purity is a term used in both optics and signal processing. In optics, it refers to the quantification of the monochromaticity of a given light sample. This is a particularly important parameter in areas like laser operation and time measurement. Spectral purity is easier to achieve in devices that generate visible and ultraviolet light, since higher frequency light results in greater spectral purity. In signal processing, spectral purity is defined as the inherent stability of a signal, or how clean a spectrum is compared to what it should be. See also Frequency drift Frequency deviation Jitter Automatic frequency control Allan variance References Spectroscopy
https://en.wikipedia.org/wiki/GoTenna
goTenna (goTenna Inc.) is a technology startup that designs and develops professional mesh networking technologies for off-grid and decentralized communications. goTenna devices pair with smartphones and, through intelligent mobile ad hoc networking protocols, enable users to send texts and share locations on a peer-to-peer basis, foregoing the need for centralized communications infrastructure of any kind. History The idea for goTenna came about after Hurricane Sandy knocked out 25 percent of cell towers, and caused outages for 25 percent of Internet services, across 10 states on the East Coast. Officially incorporated in April 2013, the company's stated goal is to build "people-powered peer-to-peer communication systems" reducing our reliance on cell towers and Wi-Fi routers, and providing anyone the ability to create a network on their terms. In 2014, goTenna rolled out its first consumer product, the goTenna, a pocket-size communication tool that lets off-grid travelers talk to one another without cell service. In September 2016, goTenna launched goTenna Plus, a subscription-based upgrade to the goTenna applications, which includes the capability to use other goTenna users as gateways to relay messages through to traditional SMS networks. The company also released its software development kit, enabling developers to create new applications using goTenna hardware. However, its license does not permit use with open source copyleft licenses. Around the same time, goTenna unveiled a second-generation device: goTenna Mesh, the first consumer-ready mesh network of its kind, available in 49 countries. goTenna Pro In March 2017, the company announced its goTenna Pro line, for professional mobile radio communications needs, shifting its focus from consumer tech to filling the needs of public sector clients.
To finance its expansion of operations, the company raised $24M in Series C equity and debt funding in 2019, led by Founders Fund with participation from Comcast Ventures and existing investors Union Square Ventures, Collaborative Fund, Walden VC, MentorTech, and Bloomberg Beta. In 2022, goTenna secured a $22.3M funded, $24.9M ceiling SBIR Phase III contract with U.S. Customs and Border Protection (CBP) to support the deployment of hardware and training, as well as development to expand the Agent Visualization Program (AVP), a program designed to improve the safety and effectiveness of law enforcement officers by providing comprehensive situational awareness in the border enforcement zone. In February 2023, goTenna was awarded a Small Business Innovation Research (SBIR) Phase II contract to provide a mission-critical communication network monitoring and analysis platform for the United States Air Force (USAF). Existing Product Suite goTenna Pro X2 goTenna Pro Deployment Kit 2 goTenna EdgeRelay TAK plugin goTenna Pro App Awards CES Innovation Award 2017: Tech for a Better World CES Innovation Award 2017: Wireless Accessory Industrial Designers Society of America – IDEA 2016 Gold Edison Awards Gold – Innovative Services CES Innovation Award 2015: Tech for a Better World CES Innovation Award 2015: Wireless Accessory Fast Company 2015 Innovation by Design Core77 2015 Design Awards NAVWAR Project Overmatch: Networks Prize Challenge AFWERX SBIR 203-CSO1-Phase I NSF SBIR Phase I AFWERX SBIR Phase II See also Meshtastic - an open source equivalent References External links goTenna Pro official website Networking hardware companies Radio technology Radio electronics Peer-to-peer Telecommunications companies of the United States Companies based in New York City Companies based in Brooklyn Computer companies of the United States Computer hardware companies
https://en.wikipedia.org/wiki/Vulnerable%20species
A vulnerable species is a species which has been categorized by the International Union for Conservation of Nature as being threatened with extinction unless the circumstances that are threatening its survival and reproduction improve. Vulnerability is mainly caused by habitat loss or destruction of the species' home. Vulnerable habitat or species are monitored and can become increasingly threatened. Some species listed as "vulnerable" may be common in captivity, an example being the military macaw. In 2012 there were 5,196 animals and 6,789 plants classified as vulnerable, compared with 2,815 and 3,222, respectively, in 1998. Practices such as cryoconservation of animal genetic resources have been enforced in efforts to conserve vulnerable breeds of livestock specifically. Criteria The International Union for Conservation of Nature uses several criteria to enter species in this category. A taxon is Vulnerable when it is not Critically Endangered or Endangered but is facing a high risk of extinction in the wild in the medium-term future, as defined by any of the following criteria (A to E): A) Population reduction in the form of either of the following: An observed, estimated, inferred or suspected population size reduction of ≥ 50% over the last 10 years or three generations, whichever is the longer, provided the causes of the reduction are clearly reversible AND understood AND ceased. This measurement is based on (and specifying) any of the following: direct observation an index of abundance appropriate for the taxon a decline in area of occupancy, extent of occurrence or quality of habitat actual or potential levels of exploitation the effects of introduced taxa, hybridisation, pathogens, pollutants, competitors or parasites. A reduction of at least 20%, projected or suspected to be met within the next ten years or three generations, whichever is the longer, based on (and specifying) any of (2), (3), (4) or (5) above.
B) Extent of occurrence estimated to be less than 20,000 km2 or area of occupancy estimated to be less than 2,000 km2, and estimates indicating any two of the following: Severely fragmented or known to exist at no more than ten locations. Continuing decline, inferred, observed or projected, in any of the following: extent of occurrence area of occupancy area, extent or quality of habitat number of locations or subpopulations number of mature individuals Extreme fluctuations in any of the following: extent of occurrence area of occupancy number of locations or subpopulations number of mature individuals C) Population estimated to number fewer than 10,000 mature individuals and either: An estimated continuing decline of at least 10% within 10 years or three generations, whichever is longer, or A continuing decline, observed, projected, or inferred, in numbers of mature individuals and population structure in the form of either: severely fragmented (i.e. no subpopulation estimated to contain more than 1,000 mature individuals) all mature individuals are in a single subpopulation D) Population very small or restricted in the form of either of the following: Population estimated to number less than 1,000 mature individuals. Population is characterised by an acute restriction in its area of occupancy (typically less than 20 km2) or in the number of locations (typically less than five). Such a taxon would thus be prone to the effects of human activities (or stochastic events whose impact is increased by human activities) within a very short period of time in an unforeseeable future, and is thus capable of becoming Critically Endangered or even Extinct in a very short period. E) Quantitative analysis showing the probability of extinction in the wild is at least 10% within 100 years. 
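Criteria D and E above are purely numeric thresholds, so they can be expressed as a simple check. The sketch below is illustrative only (the function and argument names are my own; real IUCN assessments apply the full criteria A to E with expert judgement):

```python
def meets_vulnerable_d_or_e(mature_individuals=None, occupancy_km2=None,
                            n_locations=None, p_extinct_100yr=None):
    """Check the numeric thresholds of criteria D and E from the text:

    D: fewer than 1,000 mature individuals, or an area of occupancy
       typically under 20 km2, or typically fewer than five locations.
    E: probability of extinction in the wild of at least 10% within
       100 years.
    Arguments left as None are treated as unknown and skipped."""
    d = any([
        mature_individuals is not None and mature_individuals < 1000,
        occupancy_km2 is not None and occupancy_km2 < 20,
        n_locations is not None and n_locations < 5,
    ])
    e = p_extinct_100yr is not None and p_extinct_100yr >= 0.10
    return d or e

# A taxon with 800 mature individuals trips criterion D:
flag = meets_vulnerable_d_or_e(mature_individuals=800)
```

Treating missing data as "unknown" rather than "fails the criterion" mirrors how the criteria are framed: each can independently qualify a taxon.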
Examples of vulnerable animal species include the hyacinth macaw, mountain zebra, gaur, black crowned crane, and blue crane. See also :Category:IUCN Red List vulnerable species for an alphabetical list Cryoconservation of animal genetic resources List of vulnerable amphibians List of vulnerable arthropods List of vulnerable birds List of vulnerable fishes List of vulnerable insects List of vulnerable invertebrates List of vulnerable mammals List of vulnerable molluscs List of vulnerable reptiles List of IUCN Red List Vulnerable plants Notes and references External links List of Vulnerable species as identified by the IUCN Red List of Threatened Species Biota by conservation status IUCN Red List Environmental conservation
https://en.wikipedia.org/wiki/Vaccine-associated%20sarcoma
A vaccine-associated sarcoma (VAS) or feline injection-site sarcoma (FISS) is a type of malignant tumor found in cats (and, often, dogs and ferrets) which has been linked to certain vaccines. VAS has become a concern for veterinarians and cat owners alike and has resulted in changes in recommended vaccine protocols. These sarcomas have been most commonly associated with rabies and feline leukemia virus vaccines, but other vaccines and injected medications have also been implicated. History VAS was first recognized at the University of Pennsylvania School of Veterinary Medicine in 1991. An association between highly aggressive fibrosarcomas and typical vaccine location (between the shoulder blades) was made. Two possible factors for the increase of VAS at this time were the introduction in 1985 of vaccines for rabies and feline leukemia virus (FeLV) that contained aluminum adjuvant, and a law in 1987 requiring rabies vaccination in cats in Pennsylvania. In 1993, a causal relationship between VAS and administration of aluminium adjuvanted rabies and FeLV vaccines was established through epidemiologic methods, and in 1996 the Vaccine-Associated Feline Sarcoma Task Force was formed to address the problem and promote research. In 2003, a study of ferret fibrosarcoma indicated that this species also may develop VAS. Several of the tumors were located in common injection sites and had similar histologic features to VAS in cats. Also in 2003, a study in Italy compared fibrosarcoma in dogs from injection sites and non-injection sites to VAS in cats, and found distinct similarities between the injection site tumors in dogs and VAS in cats. This suggests that VAS may occur in dogs. Pathology Inflammation in the subcutis following vaccination is considered to be a risk factor in the development of VAS, and vaccines containing aluminum were found to produce more inflammation. Furthermore, particles of aluminum adjuvant have been discovered in tumor macrophages. 
In addition, individual genetic characteristics can also contribute to these injection-site sarcomas. The incidence of VAS is between 1 in 1,000 and 1 in 10,000 vaccinated cats and has been found to be dose-dependent. The time from vaccination to tumor formation varies from three months to eleven years. Fibrosarcoma is the most common VAS; other types include rhabdomyosarcoma, myxosarcoma, chondrosarcoma, malignant fibrous histiocytoma, and undifferentiated sarcoma. Similar examples of sarcomas developing secondary to inflammation include tumors associated with metallic implants and foreign body material in humans, and sarcomas of the esophagus associated with Spirocerca lupi infection in dogs and ocular sarcomas in cats following trauma. Cats may be the predominant species to develop VAS because they have an increased susceptibility to oxidative injury, as evidenced also by an increased risk of Heinz body anemia and acetaminophen toxicity. Diagnosis VAS appears as a rapidly growing firm mass in and under the skin. The mass is often quite large when first detected and can become ulcerated or infected. It often contains fluid-filled cavities, probably because of its rapid growth. Diagnosis of VAS is through a biopsy. The biopsy will show the presence of a sarcoma, but information like location and the presence of inflammation or necrosis will increase the suspicion of VAS. It is possible for cats to have a granuloma form after vaccination, so it is important to differentiate between the two before radical surgery is performed. One guideline for biopsy is if a growth is present three months after vaccination, if a growth is greater than two centimeters, or if a growth is becoming larger one month after vaccination. X-rays are taken prior to surgery because about one in five cases of VAS will develop metastasis, usually to the lungs but possibly to the lymph nodes or skin. Treatment Treatment of VAS is through aggressive surgery.
As soon as the tumor is recognized, it should be removed with very wide margins to ensure complete removal. Treatment may also include chemotherapy or radiation therapy. The most significant prognostic factor is initial surgical treatment. One study showed that cats with radical (extensive) initial surgery had a median time to recurrence of 325 days versus 79 days for cats with marginal initial excision. The expression of a mutated form of p53, a tumor suppressor gene, is found commonly in VAS and indicates a poorer prognosis. Precautionary measures New vaccine protocols have been put forth by the American Association of Feline Practitioners that limit the type and frequency of vaccinations given to cats. Specifically, the vaccine for feline leukemia virus should only be given to kittens and high risk cats. Feline rhinotracheitis/panleukopenia/calicivirus vaccines should be given as kittens, a year later and then every three years. Also, vaccines should be given in areas making removal of VAS easier, namely: as close as possible to the tip of the right rear paw for rabies, the tip of the left rear paw for feline leukemia (unless combined with rabies), and on the right shoulder—being careful to avoid the midline or interscapular space—for other vaccines (such as FVRCP). There have been no specific associations between the development of VAS and vaccine brand or manufacturer, concurrent infections, history of trauma, or environment. See also Vaccine injury References External links Vaccine-Associated Feline Sarcoma Task Force (VAFSTF) Vaccines and Sarcomas Informational Brochure from the Cornell Feline Health Center "Vaccine-Associated Fibrosarcoma in Cats" from Pet Cancer Center 2006 Feline Vaccination Guidelines (Summary) Cat Vaccines Can Lead to Cancer Cat diseases Sarcoma Vaccine-associated sarcoma in animals
https://en.wikipedia.org/wiki/Sulfite%20oxidase
Sulfite oxidase is an enzyme in the mitochondria of all eukaryotes, with the exception of the yeasts. It oxidizes sulfite to sulfate and, via cytochrome c, transfers the electrons produced to the electron transport chain, allowing generation of ATP in oxidative phosphorylation. This is the last step in the metabolism of sulfur-containing compounds and the sulfate is excreted. Sulfite oxidase is a metallo-enzyme that utilizes a molybdopterin cofactor and a heme group (in the case of animals). It is one of the cytochromes b5 and belongs to the enzyme super-family of molybdenum oxotransferases that also includes DMSO reductase, xanthine oxidase, and nitrite reductase. In mammals, the expression levels of sulfite oxidase are high in the liver, kidney, and heart, and very low in spleen, brain, skeletal muscle, and blood. Structure As a homodimer, sulfite oxidase contains two identical subunits with an N-terminal domain and a C-terminal domain. These two domains are connected by ten amino acids forming a loop. The N-terminal domain has a heme cofactor with three adjacent antiparallel beta sheets and five alpha helices. The C-terminal domain hosts a molybdopterin cofactor that is surrounded by thirteen beta sheets and three alpha helices. The molybdopterin cofactor has a Mo(VI) center, which is bonded to a sulfur from cysteine, an ene-dithiolate from pyranopterin, and two terminal oxygens. It is at this molybdenum center that the catalytic oxidation of sulfite takes place. The pyranopterin ligand coordinates the molybdenum centre via the ene-dithiolate. The molybdenum centre has a square pyramidal geometry and is distinguished from the xanthine oxidase family by the orientation of the oxo group facing downwards rather than up. Active site and mechanism The active site of sulfite oxidase contains the molybdopterin cofactor and supports molybdenum in its highest oxidation state, +6 (MoVI).
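The overall transformation carried out at this centre, combining the oxygen-atom transfer with the electron relay to cytochrome c, can be summarized as follows (a standard textbook summary, not transcribed from the article):

```latex
% Oxidation at the molybdenum centre; the transferred oxygen atom comes from water:
\mathrm{SO_3^{2-} + H_2O \longrightarrow SO_4^{2-} + 2\,H^+ + 2\,e^-}
% Overall, with the two electrons passed one at a time via the heme to cytochrome c:
\mathrm{SO_3^{2-} + H_2O + 2\,cyt\,c_{ox} \longrightarrow SO_4^{2-} + 2\,H^+ + 2\,cyt\,c_{red}}
```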
In the enzyme's oxidized state, molybdenum is coordinated by a cysteine thiolate, the dithiolene group of molybdopterin, and two terminal oxygen atoms (oxos). Upon reacting with sulfite, one oxygen atom is transferred to sulfite to produce sulfate, and the molybdenum center is reduced by two electrons to MoIV. Water then displaces sulfate, and the removal of two protons (H+) and two electrons (e−) returns the active site to its original state. A key feature of this oxygen atom transfer enzyme is that the oxygen atom being transferred arises from water, not from dioxygen (O2). Electrons are passed one at a time from the molybdenum to the heme group, which reacts with cytochrome c to reoxidize the enzyme. The electrons from this reaction enter the electron transport chain (ETC). This reaction is generally the rate limiting reaction. Upon reaction of the enzyme with sulfite, it is reduced by 2 electrons. The negative potential seen with re-reduction of the enzyme shows the oxidized state is favoured. Among the Mo enzyme classes, sulfite oxidase is the most easily oxidized, although under low pH conditions the oxidative reaction becomes partially rate-limiting. Deficiency Sulfite oxidase is required to metabolize the sulfur-containing amino acids cysteine and methionine in foods. Lack of functional sulfite oxidase causes a disease known as sulfite oxidase deficiency. This rare but fatal disease causes neurological disorders, mental retardation, physical deformities, the degradation of the brain, and death. Reasons for the lack of functional sulfite oxidase include a genetic defect that leads to the absence of a molybdopterin cofactor and point mutations in the enzyme. A G473D mutation impairs dimerization and catalysis in human sulfite oxidase. See also Sulfur metabolism Bioinorganic chemistry References Further reading Kisker, C. "Sulfite oxidase", Messerschmidt, A.; Huber, R.; Poulos, T.; Wieghardt, K.; eds.
Handbook of Metalloproteins, vol 2; John Wiley and Sons, Ltd: New York, 2002 External links Research Activity of Sarkar Group PDBe-KB provides an overview of all the structure information available in the PDB for Human Sulfite oxidase, mitochondrial EC 1.8.3 Metalloproteins Molybdenum compounds
https://en.wikipedia.org/wiki/Mushroom
A mushroom or toadstool is the fleshy, spore-bearing fruiting body of a fungus, typically produced above ground, on soil, or on its food source. Toadstool generally denotes one poisonous to humans. The standard for the name "mushroom" is the cultivated white button mushroom, Agaricus bisporus; hence, the word "mushroom" is most often applied to those fungi (Basidiomycota, Agaricomycetes) that have a stem (stipe), a cap (pileus), and gills (lamellae, sing. lamella) on the underside of the cap. "Mushroom" also describes a variety of other gilled fungi, with or without stems; therefore the term is used to describe the fleshy fruiting bodies of some Ascomycota. The gills produce microscopic spores which help the fungus spread across the ground or its occupant surface. Forms deviating from the standard morphology usually have more specific names, such as "bolete", "puffball", "stinkhorn", and "morel", and gilled mushrooms themselves are often called "agarics" in reference to their similarity to Agaricus or their order Agaricales. By extension, the term "mushroom" can also refer to either the entire fungus when in culture, the thallus (called mycelium) of species forming the fruiting bodies called mushrooms, or the species itself. Etymology The terms "mushroom" and "toadstool" go back centuries and were never precisely defined, nor was there consensus on application. During the 15th and 16th centuries, the terms mushrom, mushrum, muscheron, mousheroms, mussheron, or musserouns were used. The term "mushroom" and its variations may have been derived from the French word mousseron in reference to moss (mousse). Delineation between edible and poisonous fungi is not clear-cut, so a "mushroom" may be edible, poisonous, or unpalatable. The word toadstool appeared first in 14th century England as a reference for a "stool" for toads, possibly implying an inedible poisonous fungus. 
Identification Identifying what is and is not a mushroom requires a basic understanding of their macroscopic structure. Most are basidiomycetes and gilled. Their spores, called basidiospores, are produced on the gills and fall in a fine rain of powder from under the caps as a result. At the microscopic level, the basidiospores are shot off basidia and then fall between the gills in the dead air space. As a result, for most mushrooms, if the cap is cut off and placed gill-side-down overnight, a powdery impression reflecting the shape of the gills (or pores, or spines, etc.) is formed (when the fruit body is sporulating). The color of the powdery print, called a spore print, is useful in both classifying and identifying mushrooms. Spore print colors include white (most common), brown, black, purple-brown, pink, yellow, and creamy, but almost never blue, green, or red. While modern identification of mushrooms is quickly becoming molecular, the standard methods for identification are still used by most and have developed into a fine art harking back to medieval times and the Victorian era, combined with microscopic examination. The presence of juices upon breaking, bruising-reactions, odors, tastes, shades of color, habitat, habit, and season are all considered by both amateur and professional mycologists. Tasting and smelling mushrooms carries its own hazards because of poisons and allergens. Chemical tests are also used for some genera. In general, identification to genus can often be accomplished in the field using a local field guide. Identification to species, however, requires more effort. A mushroom develops from a button stage into a mature structure, and only the latter can provide certain characteristics needed for the identification of the species. However, over-mature specimens lose features and cease producing spores. 
Many novices have mistaken humid water marks on paper for white spore prints, or discolored paper from oozing liquids on lamella edges for colored spore prints. Classification Typical mushrooms are the fruit bodies of members of the order Agaricales, whose type genus is Agaricus and type species is the field mushroom, Agaricus campestris. However, in modern molecularly defined classifications, not all members of the order Agaricales produce mushroom fruit bodies, and many other gilled fungi, collectively called mushrooms, occur in other orders of the class Agaricomycetes. For example, chanterelles are in the Cantharellales, false chanterelles such as Gomphus are in the Gomphales, milk-cap mushrooms (Lactarius, Lactifluus) and russulas (Russula), as well as Lentinellus, are in the Russulales, while the tough, leathery genera Lentinus and Panus are among the Polyporales, but Neolentinus is in the Gloeophyllales, and the little pin-mushroom genus, Rickenella, along with similar genera, are in the Hymenochaetales. Within the main body of mushrooms, in the Agaricales, are common fungi like the common fairy-ring mushroom, shiitake, enoki, oyster mushrooms, fly agarics and other Amanitas, magic mushrooms like species of Psilocybe, paddy straw mushrooms, shaggy manes, etc. An atypical mushroom is the lobster mushroom, which is a fruitbody of a Russula or Lactarius mushroom that has been deformed by the parasitic fungus Hypomyces lactifluorum. This gives the affected mushroom an unusual shape and red color that resembles that of a boiled lobster. Other mushrooms are not gilled, so the term "mushroom" is loosely used, and giving a full account of their classifications is difficult. Some have pores underneath (and are usually called boletes), others have spines, such as the hedgehog mushroom and other tooth fungi, and so on. "Mushroom" has been used for polypores, puffballs, jelly fungi, coral fungi, bracket fungi, stinkhorns, and cup fungi.
Thus, the term is more one of common application to macroscopic fungal fruiting bodies than one having precise taxonomic meaning. Approximately 14,000 species of mushrooms are described. Morphology A mushroom develops from a nodule, or pinhead, less than two millimeters in diameter, called a primordium, which is typically found on or near the surface of the substrate. It is formed within the mycelium, the mass of threadlike hyphae that make up the fungus. The primordium enlarges into a roundish structure of interwoven hyphae roughly resembling an egg, called a "button". The button has a cottony roll of mycelium, the universal veil, that surrounds the developing fruit body. As the egg expands, the universal veil ruptures and may remain as a cup, or volva, at the base of the stalk, or as warts or volval patches on the cap. Many mushrooms lack a universal veil, therefore they do not have either a volva or volval patches. Often, a second layer of tissue, the partial veil, covers the bladelike gills that bear spores. As the cap expands the veil breaks, and remnants of the partial veil may remain as a ring, or annulus, around the middle of the stalk or as fragments hanging from the margin of the cap. The ring may be skirt-like as in some species of Amanita, collar-like as in many species of Lepiota, or merely the faint remnants of a cortina (a partial veil composed of filaments resembling a spiderweb), which is typical of the genus Cortinarius. Mushrooms lacking partial veils do not form an annulus. The stalk (also called the stipe, or stem) may be central and support the cap in the middle, or it may be off-center or lateral, as in species of Pleurotus and Panus. In other mushrooms, a stalk may be absent, as in the polypores that form shelf-like brackets. Puffballs lack a stalk, but may have a supporting base. 
Other mushrooms including truffles, jellies, earthstars, and bird's nests usually do not have stalks, and a specialized mycological vocabulary exists to describe their parts. The way the gills attach to the top of the stalk is an important feature of mushroom morphology. Mushrooms in the genera Agaricus, Amanita, Lepiota and Pluteus, among others, have free gills that do not extend to the top of the stalk. Others have decurrent gills that extend down the stalk, as in the genera Omphalotus and Pleurotus. There are a great number of variations between the extremes of free and decurrent, collectively called attached gills. Finer distinctions are often made to distinguish the types of attached gills: adnate gills, which adjoin squarely to the stalk; notched gills, which are notched where they join the top of the stalk; adnexed gills, which curve upward to meet the stalk, and so on. These distinctions between attached gills are sometimes difficult to interpret, since gill attachment may change as the mushroom matures, or with different environmental conditions. Microscopic features A hymenium is a layer of microscopic spore-bearing cells that covers the surface of gills. In the nongilled mushrooms, the hymenium lines the inner surfaces of the tubes of boletes and polypores, or covers the teeth of spine fungi and the branches of corals. In the Ascomycota, spores develop within microscopic elongated, sac-like cells called asci, which typically contain eight spores in each ascus. The Discomycetes, which contain the cup, sponge, brain, and some club-like fungi, develop an exposed layer of asci, as on the inner surfaces of cup fungi or within the pits of morels. The Pyrenomycetes, tiny dark-colored fungi that live on a wide range of substrates including soil, dung, leaf litter, and decaying wood, as well as other fungi, produce minute, flask-shaped structures called perithecia, within which the asci develop. 
In the basidiomycetes, usually four spores develop on the tips of thin projections called sterigmata, which extend from club-shaped cells called basidia. The fertile portion of the Gasteromycetes, called a gleba, may become powdery as in the puffballs or slimy as in the stinkhorns. Interspersed among the asci are threadlike sterile cells called paraphyses. Similar structures called cystidia often occur within the hymenium of the Basidiomycota. Many types of cystidia exist, and assessing their presence, shape, and size is often used to verify the identification of a mushroom. The most important microscopic feature for identification of mushrooms is the spores. Their color, shape, size, attachment, ornamentation, and reaction to chemical tests often can be the crux of an identification. A spore often has a protrusion at one end, called an apiculus, which is the point of attachment to the basidium; some spores also have an apical germ pore, from which the hypha emerges when the spore germinates. Growth Many species of mushrooms seemingly appear overnight, growing or expanding rapidly. This phenomenon is the source of several common expressions in the English language including "to mushroom" or "mushrooming" (expanding rapidly in size or scope) and "to pop up like a mushroom" (to appear unexpectedly and quickly). In reality, all species of mushrooms take several days to form primordial mushroom fruit bodies, though they do expand rapidly by the absorption of fluids. The cultivated mushroom, as well as the common field mushroom, initially form a minute fruiting body, referred to as the pin stage because of their small size. Slightly expanded, they are called buttons, once again because of the relative size and shape. Once such stages are formed, the mushroom can rapidly pull in water from its mycelium and expand, mainly by inflating preformed cells that took several days to form in the primordia.
Similarly, there are other mushrooms, like Parasola plicatilis (formerly Coprinus plicatilis), that grow rapidly overnight and may disappear by late afternoon on a hot day after rainfall. The primordia form at ground level in lawns in humid spaces under the thatch and, after heavy rainfall or in dewy conditions, balloon to full size in a few hours, release spores, and then collapse. Not all mushrooms expand overnight; some grow very slowly and add tissue to their fruiting bodies by growing from the edges of the colony or by inserting hyphae. For example, Pleurotus nebrodensis grows slowly, and because of this combined with human collection, it is now critically endangered. Though mushroom fruiting bodies are short-lived, the underlying mycelium can itself be long-lived and massive. A colony of Armillaria solidipes (formerly known as Armillaria ostoyae) in Malheur National Forest in the United States is estimated to be 2,400 years old, possibly older, and spans an estimated . Most of the fungus is underground and in decaying wood or dying tree roots in the form of white mycelia combined with black shoelace-like rhizomorphs that bridge colonized separated woody substrates. Nutrition Raw brown mushrooms are 92% water, 4% carbohydrates, 2% protein and less than 1% fat. In a amount, raw mushrooms provide 22 calories and are a rich source (20% or more of the Daily Value, DV) of B vitamins, such as riboflavin, niacin and pantothenic acid, selenium (37% DV) and copper (25% DV), and a moderate source (10–19% DV) of phosphorus, zinc and potassium (table). They have minimal or no vitamin C and sodium content. Vitamin D The vitamin D content of a mushroom depends on postharvest handling, in particular the unintended exposure to sunlight. The US Department of Agriculture provided evidence that UV-exposed mushrooms contain substantial amounts of vitamin D.
When exposed to ultraviolet (UV) light, even after harvesting, ergosterol in mushrooms is converted to vitamin D2, a process now used intentionally to supply fresh vitamin D mushrooms for the functional food grocery market. In a comprehensive safety assessment of producing vitamin D in fresh mushrooms, researchers showed that artificial UV light technologies were equally effective for vitamin D production as in mushrooms exposed to natural sunlight, and that UV light has a long record of safe use for production of vitamin D in food. Human use Edible mushrooms Mushrooms are used extensively in cooking, in many cuisines (notably Chinese, Korean, European, and Japanese). Humans have valued them as food since antiquity. Most mushrooms sold in supermarkets have been commercially grown on mushroom farms. The most common of these, Agaricus bisporus, is considered safe for most people to eat because it is grown in controlled, sterilized environments. Several varieties of A. bisporus are grown commercially, including whites, crimini, and portobello. Other cultivated species available at many grocers include Hericium erinaceus, shiitake, maitake (hen-of-the-woods), Pleurotus, and enoki. In recent years, increasing affluence in developing countries has led to a considerable growth in interest in mushroom cultivation, which is now seen as a potentially important economic activity for small farmers. China is a major edible mushroom producer. The country produces about half of all cultivated mushrooms, and around of mushrooms are consumed per person per year by 1.4 billion people. In 2014, Poland was the world's largest mushroom exporter, reporting an estimated annually. Separating edible from poisonous species requires meticulous attention to detail; there is no single trait by which all toxic mushrooms can be identified, nor one by which all edible mushrooms can be identified. 
People who collect mushrooms for consumption are known as mycophagists, and the act of collecting them for such is known as mushroom hunting, or simply "mushrooming". Even edible mushrooms may produce allergic reactions in susceptible individuals, from a mild asthmatic response to severe anaphylactic shock. Even the cultivated A. bisporus contains small amounts of hydrazines, the most abundant of which is agaritine (a mycotoxin and carcinogen). However, the hydrazines are destroyed by moderate heat when cooking. A number of species of mushrooms are poisonous; although some resemble certain edible species, consuming them could be fatal. Eating mushrooms gathered in the wild is risky and should only be undertaken by individuals knowledgeable in mushroom identification. Common best practice is for wild mushroom pickers to focus on collecting a small number of visually distinctive, edible mushroom species that cannot be easily confused with poisonous varieties. Common mushroom hunting advice is that if a mushroom cannot be positively identified, it should be considered poisonous and not eaten. Toxic mushrooms Many mushroom species produce secondary metabolites that can be toxic, mind-altering, antibiotic, antiviral, or bioluminescent. Although there are only a small number of deadly species, several others can cause particularly severe and unpleasant symptoms. Toxicity likely plays a role in protecting the function of the basidiocarp: the mycelium has expended considerable energy and protoplasmic material to develop a structure to efficiently distribute its spores. One defense against consumption and premature destruction is the evolution of chemicals that render the mushroom inedible, either causing the consumer to vomit the meal (see emetics), or to learn to avoid consumption altogether. 
In addition, because of the propensity of mushrooms to absorb heavy metals, including those that are radioactive, European mushrooms were still being studied as late as 2008 for contamination from the 1986 Chernobyl disaster. Psychoactive mushrooms Mushrooms with psychoactive properties have long played a role in various native medicine traditions in cultures all around the world. They have been used as sacrament in rituals aimed at mental and physical healing, and to facilitate visionary states. One such ritual is the velada ceremony. A practitioner of traditional mushroom use is the shaman or curandera (priest-healer). Psilocybin mushrooms, also referred to as psychedelic mushrooms, possess psychedelic properties. Commonly known as "magic mushrooms" or "shrooms", they are openly available in smart shops in many parts of the world, or on the black market in those countries which have outlawed their sale. Psilocybin mushrooms have been reported to facilitate profound and life-changing insights often described as mystical experiences. Recent scientific work has supported these claims, as well as the long-lasting effects of such induced spiritual experiences. Psilocybin, a naturally occurring chemical in certain psychedelic mushrooms such as Psilocybe cubensis, is being studied for its ability to help people suffering from psychological disorders, such as obsessive–compulsive disorder. Minute amounts have been reported to stop cluster and migraine headaches. A double-blind study, done by Johns Hopkins Hospital, showed psychedelic mushrooms could provide people an experience with substantial personal meaning and spiritual significance. In the study, one third of the subjects reported ingestion of psychedelic mushrooms was the single most spiritually significant event of their lives. Over two-thirds reported it among their five most meaningful and spiritually significant events. On the other hand, one-third of the subjects reported extreme anxiety.
However, the anxiety subsided after a short period of time. Psilocybin mushrooms have also been shown to be successful in treating addiction, specifically to alcohol and cigarettes. A few species in the genus Amanita, most recognizably A. muscaria, but also A. pantherina, among others, contain the psychoactive compound muscimol. The muscimol-containing chemotaxonomic group of Amanitas contains no amatoxins or phallotoxins, and as such is not hepatotoxic, though if not properly cured it will be non-lethally neurotoxic due to the presence of ibotenic acid. Amanita intoxication is similar to that of Z-drugs in that it includes CNS depressant and sedative-hypnotic effects, but also dissociation and delirium in high doses. Folk medicine Some mushrooms are used in folk medicine. In a few countries, extracts, such as polysaccharide-K, schizophyllan, polysaccharide peptide, or lentinan, are government-registered adjuvant cancer therapies, but clinical evidence for efficacy and safety of these extracts in humans has not been confirmed. Although some mushroom species or their extracts may be consumed for therapeutic effects, some regulatory agencies, such as the US Food and Drug Administration, regard such use as a dietary supplement, which does not have government approval or common clinical use as a prescription drug. Other uses Mushrooms can be used for dyeing wool and other natural fibers. The chromophores of mushroom dyes are organic compounds and produce strong and vivid colors, and all colors of the spectrum can be achieved with mushroom dyes. Before the invention of synthetic dyes, mushrooms were the source of many textile dyes. Some fungi, types of polypores loosely called mushrooms, have been used as fire starters (known as tinder fungi). Mushrooms and other fungi play a role in the development of new biological remediation techniques (e.g., using mycorrhizae to spur plant growth) and filtration technologies (e.g. using fungi to lower bacterial levels in contaminated water).
Ongoing research in genetic engineering aims to enhance qualities of mushrooms in domains such as nutritional value and medical use. Gallery See also Fungiculture List of psilocybin mushroom species Largest fungal fruit bodies Lists of fungal species Mushroom poisoning Mushrooms in art References Literature cited External links Identification Mushroom Observer, a collaborative mushroom recording and identification project An Aid to Mushroom Identification, Simon's Rock College Online Edible Wild Mushroom Field Guide Basidiomycota Edible fungi Fungus common names Non-timber forest products
Mushroom
Biology
4,707
694,751
https://en.wikipedia.org/wiki/Optimum%20sustainable%20yield
In population ecology and economics, optimum sustainable yield is the level of effort (LOE) that maximizes the difference between total revenue and total cost, that is, the point where marginal revenue equals marginal cost. This level of effort maximizes the economic profit, or rent, of the resource being used. It usually corresponds to an effort level lower than that of maximum sustainable yield. In environmental science, optimum sustainable yield is the largest economical yield of a renewable resource achievable over a long time period without decreasing the ability of the population or its environment to support the continuation of this level of yield, and it enables an ecosystem to have a high aesthetic value. This concept is widely used in the management of fisheries, where surplus fish are removed so the population stays at its carrying capacity. This allows the most fish to be harvested while still maintaining maximum population growth. References Ecological metrics
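The revenue-versus-cost optimization described above can be made concrete with a small bioeconomic sketch. The Gordon-Schaefer surplus-production model used here is an illustrative assumption (the article names no specific model), as are all parameter values:

```python
# Gordon-Schaefer sketch (assumed model): sustainable yield at effort E is
# Y(E) = qEK(1 - qE/r), and economic rent is pi(E) = p*Y(E) - c*E.
# OSY maximizes rent; MSY maximizes yield.

r, K, q = 0.5, 1000.0, 0.01   # intrinsic growth, carrying capacity, catchability
p, c = 10.0, 20.0             # price per unit yield, cost per unit effort

def sustainable_yield(E):
    return q * E * K * (1 - q * E / r)

def rent(E):
    # total revenue minus total cost at effort level E
    return p * sustainable_yield(E) - c * E

# Closed-form optima, from dY/dE = 0 and d(pi)/dE = 0 respectively:
E_msy = r / (2 * q)                            # effort at maximum sustainable yield
E_osy = (r / (2 * q)) * (1 - c / (p * q * K))  # effort at optimum sustainable yield

print(E_msy, E_osy)   # for these parameters, roughly 25.0 and 20.0
```

Because each unit of effort carries a cost, the rent-maximizing effort E_osy lies below the yield-maximizing effort E_msy, matching the statement that optimum sustainable yield usually corresponds to a lower level of effort than maximum sustainable yield.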
Optimum sustainable yield
Mathematics
178
8,517,870
https://en.wikipedia.org/wiki/War%20Research%20Service
The War Research Service (WRS) was a civilian agency of the United States government established during World War II to pursue research relating to biological warfare. Established in May 1942 by Secretary of War Henry L. Stimson, the WRS was embedded in the Federal Security Agency, the federal agency that administered Social Security and other New Deal programs in the administration of President Franklin D. Roosevelt. Headed by George W. Merck, president of the Merck & Co. pharmaceutical firm, the WRS was headquartered at Fort Detrick, Maryland. Being a civilian agency, the WRS was initially tasked to supervise the military Chemical Warfare Service's biological program. However, the WRS was disbanded in 1944, and the weapons research was continued under the exclusive oversight of the CWS. References National Academies: Committees on Biological Warfare, 1941-1948 Cutting Edge: A History of Fort Detrick (Chapter 4) Agencies of the United States government during World War II Biological warfare Defunct agencies of the United States government Government agencies established in 1942 Government agencies disestablished in 1944 Fort Detrick 1942 establishments in the United States 1944 disestablishments in the United States
War Research Service
Biology
235
9,634,634
https://en.wikipedia.org/wiki/Mioara%20Mugur-Sch%C3%A4chter
Mioara Mugur-Schächter is a French-Romanian physicist specialized in fundamental quantum mechanics, probability theory, and the theory of communication of information. She is also an epistemologist. As a professor at the University of Reims, she founded there the Laboratoire de Mécanique Quantique et Structures de l'Information, which she directed until 1997. During an interview in 2015, Mugur-Schächter explained how she worked on the invalidation of John von Neumann's no-hidden-variables proof during her PhD. Her academic advisor was Louis de Broglie. References Quantum physicists Year of birth missing (living people) Living people Romanian emigrants to France Academic staff of the University of Reims Champagne-Ardenne French physicists
Mioara Mugur-Schächter
Physics
157
17,809,972
https://en.wikipedia.org/wiki/Flomoxef
Flomoxef is an oxacephem antibiotic that was developed by Shionogi. It has been classified either as a second-generation or fourth-generation cephalosporin. It was patented in 1982 and approved for medical use in 1988 under the trade name Flumarin. References Cephalosporin antibiotics Tetrazoles Primary alcohols Sulfides Carboxylic acids Lactams Ethers Carboxamides Organofluorides
Flomoxef
Chemistry
98
41,624,922
https://en.wikipedia.org/wiki/Sequence%20space%20%28evolution%29
In evolutionary biology, sequence space is a way of representing all possible sequences (for a protein, gene or genome). The sequence space has one dimension per amino acid or nucleotide in the sequence, leading to high-dimensional spaces. Most sequences in sequence space have no function, leaving relatively small regions that are populated by naturally occurring genes. Each protein sequence is adjacent to all other sequences that can be reached through a single mutation. It has been estimated that the whole functional protein sequence space has been explored by life on the Earth. Evolution by natural selection can be visualised as the process of sampling nearby sequences in sequence space and moving to any with improved fitness over the current one. Representation A sequence space is usually laid out as a grid. For protein sequence spaces, each residue in the protein is represented by a dimension with 20 possible positions along that axis corresponding to the possible amino acids. Hence there are 400 possible dipeptides arranged in a 20x20 space, but that expands to 10^130 for even a small protein of 100 amino acids, arranged in a space with 100 dimensions. Although such overwhelming multidimensionality cannot be visualised or represented diagrammatically, it provides a useful abstract model to think about the range of proteins and evolution from one sequence to another. These highly multidimensional spaces can be compressed to 2 or 3 dimensions using principal component analysis. A fitness landscape is simply a sequence space with an extra vertical axis of fitness added for each sequence. Functional sequences in sequence space Despite the diversity of protein superfamilies, sequence space is extremely sparsely populated by functional proteins. Most random protein sequences have no fold or function. Enzyme superfamilies, therefore, exist as tiny clusters of active proteins in a vast empty space of non-functional sequence.
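The combinatorics quoted above — 400 dipeptides in a 20x20 grid, roughly 10^130 sequences for a 100-residue protein, and adjacency via single point mutations — can be checked directly. The function names are illustrative only:

```python
# Sketch of sequence-space combinatorics: a protein of length L over a
# 20-letter amino-acid alphabet occupies a space of 20**L sequences, and each
# sequence has 19*L immediate single-point-mutation neighbours.
import math

def space_size(length, alphabet=20):
    return alphabet ** length

def point_mutants(length, alphabet=20):
    # each of `length` positions can change to any of the other 19 letters
    return (alphabet - 1) * length

print(space_size(2))                 # 400, the 20x20 dipeptide grid
print(math.log10(space_size(100)))   # about 130.1, i.e. 20**100 is roughly 10**130
print(point_mutants(100))            # 1900 immediate neighbours
```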
The density of functional proteins in sequence space, and the proximity of different functions to one another, is a key determinant in understanding evolvability. The degree of interpenetration of two neutral networks of different activities in sequence space will determine how easy it is to evolve from one activity to another. The more overlap between different activities in sequence space, the more cryptic variation for promiscuous activity will be. Protein sequence space has been compared to the Library of Babel, a theoretical library containing all possible books that are 410 pages long. In the Library of Babel, finding any book that made sense was impossible due to the sheer number and lack of order. The same would be true of protein sequences if it were not for natural selection, which has selected out only protein sequences that make sense. Additionally, each protein sequence is surrounded by a set of neighbours (point mutants) that are likely to have at least some function. On the other hand, the effective "alphabet" of the sequence space may in fact be quite small, reducing the useful number of amino acids from 20 to a much lower number. For example, in an extremely simplified view, all amino acids can be sorted into two classes (hydrophobic/polar) by hydrophobicity and still allow many common structures to show up. Early life on Earth may have had only four or five types of amino acids to work with, and researchers have shown that functional proteins can be created from wild-type ones by a similar alphabet-reduction process. Reduced alphabets are also useful in bioinformatics, as they provide an easy way of analyzing protein similarity. Exploration through directed evolution and rational design A major focus in the field of protein engineering is on creating DNA libraries that sample regions of sequence space, often with the goal of finding mutants of proteins with enhanced functions compared to the wild type.
These libraries are created either by using a wild type sequence as a template and applying one or more mutagenesis techniques to make different variants of it, or by creating proteins from scratch using artificial gene synthesis. These libraries are then screened or selected, and ones with improved phenotypes are used for the next round of mutagenesis. See also Protein Sequence space Directed evolution Protein engineering High-dimensional space References Evolutionary biology Genetics Biochemistry
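The alphabet-reduction idea discussed above — collapsing the 20 amino acids into a two-class hydrophobic/polar code for similarity analysis — can be sketched as follows. The particular partition into classes is an illustrative assumption, not a canonical table:

```python
# Reduced-alphabet sketch: map amino acids onto a hydrophobic (H) / polar (P)
# two-letter alphabet. The class assignment below is assumed for illustration;
# real reduction schemes differ in where they draw the boundary.
HYDROPHOBIC = set("AVILMFWYC")   # assumed hydrophobic class

def reduce_hp(seq):
    return "".join("H" if aa in HYDROPHOBIC else "P" for aa in seq.upper())

def hp_identity(a, b):
    # crude similarity: fraction of matching positions in the reduced alphabet
    ra, rb = reduce_hp(a), reduce_hp(b)
    return sum(x == y for x, y in zip(ra, rb)) / min(len(ra), len(rb))

print(reduce_hp("MKVLA"))   # "HPHHH" under this partition
```

Comparing sequences in the reduced alphabet makes distantly related proteins with similar hydrophobicity patterns easier to align, which is the bioinformatics use mentioned in the text.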
Sequence space (evolution)
Chemistry,Biology
817
35,011,921
https://en.wikipedia.org/wiki/Calcineurin-like%20phosphoesterase
The calcineurin-like phosphoesterases are a family of enzymes related to calcineurin. It includes a diverse range of phosphoesterases, including protein phosphoserine phosphatases, nucleotidases, sphingomyelin phosphodiesterases and 2'-3' cAMP phosphodiesterases as well as some bacterial nucleases. The most conserved region centres on the metal-chelating residues. References Protein domains
Calcineurin-like phosphoesterase
Biology
109
830,956
https://en.wikipedia.org/wiki/Digital%20Packet%20Video%20Link
Digital Packet Video Link (DPVL) is a video standard released by VESA in 2004. Unlike previous technologies, in order to save bandwidth, only portions of the screen that are modified are sent by the means of this link. DPVL also introduces metadata video attributes support. The DPVL standard is aimed at mobile and wireless hardware. References VESA-2004-4 DPVL Standard 1.0 June 2004 External links VESA-2004-4 1.0 standard summary Webkeydigital Computer standards VESA
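The bandwidth-saving idea above — transmitting only the modified portions of the screen — can be illustrated with a generic dirty-rectangle sketch. This is not DPVL's actual packet format (which the article does not detail); the tile size and frame representation are assumptions:

```python
# Generic dirty-tile update sketch: compare two frames and report only the
# TILE x TILE blocks that changed, so only those need to be transmitted.
TILE = 8  # assumed tile edge, in pixels

def dirty_tiles(prev, curr, width, height):
    """Yield (x, y) of each TILE x TILE block that differs between frames."""
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            block = [(x, y) for y in range(ty, min(ty + TILE, height))
                            for x in range(tx, min(tx + TILE, width))]
            if any(prev[y][x] != curr[y][x] for x, y in block):
                yield (tx, ty)

# two 16x16 single-channel frames differing in one pixel
prev = [[0] * 16 for _ in range(16)]
curr = [row[:] for row in prev]
curr[3][12] = 255

changed = list(dirty_tiles(prev, curr, 16, 16))
print(changed)   # only the tile containing pixel (12, 3) is resent: [(8, 0)]
```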
Digital Packet Video Link
Technology
114
11,559,022
https://en.wikipedia.org/wiki/Phanerochaete%20burtii
Phanerochaete burtii is a species of fungus in the family Phanerochaetaceae. It is a plant pathogen that infects plane trees. References Fungal tree pathogens and diseases burtii Fungi described in 1926 Fungus species
Phanerochaete burtii
Biology
51
2,440,091
https://en.wikipedia.org/wiki/Robert%20Henry%20Thurston
Robert Henry Thurston (October 25, 1839 – October 25, 1903) was an American engineer, and the first professor of mechanical engineering at Stevens Institute of Technology. He was assistant professor at the US Naval Academy in Annapolis and a published specialist on iron and steel as well as steam engines, when he was invited in 1871 by Stevens' president Henry Morton to head mechanical engineering at Stevens. The same year Thurston was appointed the first professor of mechanical engineering at Stevens Institute of Technology. Biography Thurston was born 1839 in Providence, Rhode Island, the eldest son of Robert Lawton and Harriet Thurston of Providence. He was trained in the workshop of his father, and graduated from Brown University in 1859. Thurston was engaged with the business firm of which his father was senior partner until 1861, when he entered the navy as an officer of engineers. He served during the civil war on various vessels, and was present at the Battle of Port Royal and at the Siege of Charleston. He was attached to the North and South Atlantic squadrons until the close of 1865. In 1865, he was stationed as Assistant Professor of Natural and Experimental Philosophy at the United States Naval Academy at Annapolis, where he also acted as lecturer on chemistry and physics. In 1870 he visited Europe, for the purpose of studying the British iron manufacturing districts, and in 1871 was appointed professor of mechanical engineering at the Stevens Institute of Technology. In that year he conducted, in behalf of a committee of the American Institute, a series of experiments on steam boilers, in which, for the first time, all losses of heat were noted, and by condensing all the steam generated, the quantity of water entrained by the steam was accurately noted. 
In 1873, he was appointed a member of the United States Scientific Commission to the Vienna Exhibition; served upon the international jury, edited the Report of the Commissioners (in which he published his own report on machinery and manufactures), in five volumes, 1875–6. In 1874 and subsequently he conducted, at the Stevens Institute of Technology, a series of researches on the efficiency of prime movers and machines, and upon the strength and other essential properties of the materials of construction. In 1875, he was appointed a member of the United States Commission on the causes of boiler explosions, and of the Board to test the metals used in construction. He was a member of various scientific associations in the United States, Great Britain, France, and Germany, wrote numerous papers on technical subjects, which appeared in scientific journals in Europe and America, and prepared articles on similar topics for Johnson's Universal Cyclopedia of 1879. He was made vice-president of the American Institute of Mining Engineers in 1875; he was made vice-president of the American Association for the Advancement of Science, at Nashville, in 1877, in the absence of Professor Pickering, elected at the preceding meeting, and was regularly elected to serve again in 1878, at the St. Louis meeting of the association. From 1880 to 1882 Thurston was the first president of the American Society of Mechanical Engineers. In 1885 he left the Stevens Institute of Technology to replace John Edison Sweet as director of Sibley College at Cornell University, reorganizing it as a college of mechanical engineering. In 1885, he received an honorary degree from Stevens. In 1902, he was elected as a member to the American Philosophical Society. He died on October 25, 1903, his 64th birthday, in Ithaca, New York. Work Thurston's research interest was in the areas of materials, thermodynamics, steam engines and boilers, friction and energetics. 
Mechanical engineering curriculum At the Stevens Institute of Technology he established Stevens' mechanical engineering curriculum. He was committed to the French and German science-based models of technical education and soon would gain an international reputation for his view of engineering as applied science. His enthusiasm in involving students in funded research led to remarkable pioneering success of the early Stevens' graduates. Historians credit Thurston with establishing the first US mechanical engineering laboratory for conducting funded research at an academic institution for higher learning. Other papers Thurston wrote a number of papers embodying accounts of original investigations of the strength and other properties of construction materials. Among his numerous inventions are the magnesium ribbon lamp, a magnesium-burning naval and army signal apparatus, an autographic recording testing machine, a new form of steam engine governor, and an apparatus for determining the value of lubricants. In 1875, he also developed the three-coordinate solid diagram for testing iron, steel, and other metals. He made a significant contribution to the field of tribology and Duncan Dowson named him one of the 23 "Men of Tribology". Thurston pronounced as economically feasible a plan to enable year-round operation of the Erie Canal by the application of artificially generated heat. Patents Thurston held two patents: one for an autographic recording testing machine for material in torsion and the other for a machine for testing lubricants. Publications Books, a selection: 1878. A history of the growth of the steam engine. D. Appleton and Company; 4th, revised ed. 1902 (online) 1884. Stationary steam engines; especially as adapted to electric lighting purposes. New York, J. Wiley & sons, 1884. 1884. Materials of Engineering. J. Wiley, 1884, Parts, 1, 2 & 3 1889. The development of the philosophy of the steam-engine. An historical sketch. New York, J. Wiley & sons. 1890. 
Heat as a form of energy. Boston and New York, Houghton, Mifflin and company, 1890. 1891. A manual of the steam-engine. For engineers and technical schools; advanced courses. New York, J. Wiley & sons, 1891. 1894. The animal as a machine and a prime motor, and the laws of energetics. New York, J. Wiley & sons. Some of his more important papers are the following: 1865. On Losses of Propelling Power in the Paddle Wheel 1865. Steam Engines of the French Navy 1870. H. B. M. Iron Clad Monarch 1870. Iron Manufactures in Great Britain 1871. Experimental Steam Boiler Explosions 1871. Report on Test Trials of Steam Boilers 1872. Traction Engines and Road Locomotives 1874. Efficiency of Furnaces Burning Wet Fuel 1874. The Mechanical Engineer, his Preparation and his Work 1877, On a New Method of Planning Researches and of Representing to the Eye the Results of Combination of three or more Elements in Varying Proportions References Further reading Calvert, Monte A. Mechanical Engineer in America, 1830-1910: Professional Cultures in Conflict. Baltimore: The Johns Hopkins University Press, 1967. Clark, Geoffrey W. (2000); History of Stevens Institute of Technology: A Record of Broad-Based Curricula and Technogenesis. Jersey City, New Jersey: Jensen/Daniels. Sinclair, Bruce (1980); A Centennial History of the American Society of Mechanical Engineers, 1880-1980. (Toronto: Published for ASME by University of Toronto Press, 1980). . Durand, William F. (1929): "Robert Henry Thurston" The Riverside Press Cambridge, Massachusetts 1929 Copyright by the American Society of Mechanical Engineers A.S.M.E. First Edition. External links 1839 births 1903 deaths American mechanical engineers Brown University School of Engineering alumni Cornell University faculty Presidents of the American Society of Mechanical Engineers Stevens Institute of Technology faculty Tribologists United States Naval Academy faculty 19th-century American engineers
Robert Henry Thurston
Materials_science
1,515
53,991,267
https://en.wikipedia.org/wiki/CCPForge
The Collaborative Computational Projects (CCP) group was responsible for the development of CCPForge, which is a software development tool produced through collaborations by the CCP community. CCPs allow experts in computational research to come together and develop scientific software which can be applied to numerous research fields. It is used as a tool in many research and development areas, and hosts a variety of projects. Every CCP project is the result of years of valuable work by computational researchers. Projects are advised to have one application; this helps users to search a category and classification system so they can find the right project for their work. Furthermore, a project can be under up to three CCPs provided it is a collaboration. Each classification category has sub-sections to filter the category further. CCPForge projects provide essential information which has been used in publications such as 'Recent developments in R-matrix applications to molecular processes' and 'Ab initio derivation of Hubbard models for cold atoms in optical lattices', in which codes from CCPQ were used. The Joint Information Systems Committee (JISC) and EPSRC both fund the CCPForge project. The Scientific Computing Department (SCD) of the Science and Technology Facilities Council is responsible for the development and maintenance of CCPForge, and this is funded by a long-term support grant from EPSRC. Current Projects * CCPQ was formed from CCP2 "Continuum States of Atoms and Molecules", incorporating aspects of CCP6 "Molecular Quantum Dynamics". References Engineering and Physical Sciences Research Council Jisc Science and Technology Facilities Council Science and technology in Oxfordshire Computational physics Computational chemistry
CCPForge
Physics,Chemistry
338
22,540,554
https://en.wikipedia.org/wiki/Roberts%20Loom
The Roberts loom was a cast-iron power loom introduced by Richard Roberts in 1830. It was the first loom that was more viable than a hand loom and was easily adjustable and reliable, which led to its widespread use in the Lancashire cotton industry. Richard Roberts Roberts was born at Llanymynech, on the border between England and Wales. He was the son of William Roberts, a shoemaker, who also kept the New Bridge tollgate. Roberts was educated by the parish priest, and early found employment with a boatman on the Ellesmere Canal and later at the local limestone quarries. He received some instruction in drawing from Robert Bough, a road surveyor, who was working under Thomas Telford. He was responsible for developing ever more precise machine tools, working eventually from 15 Deansgate, Manchester. Here he worked on improving textile machinery. He patented the cast-iron loom in 1822 and in 1830 patented the self-acting mule thus revolutionising the production of both the spinning and weaving industries. The weaving process The major components of the loom are the warp beam, heddles, harnesses, shuttle, reed and takeup roll. In the loom, yarn processing includes shedding, picking, battening and taking-up operations. The loom The Roberts loom of 1830 incorporated ideas embodied in an 1822 patent. The frame of the loom was cast iron. There were two side frames cast as single pieces. The three cross tails were machined for an accurate assembly. The great arched rail at the top supports the healds. The front and back cross rails bifurcate at each side to give a larger binding surface. The warp passes from the warp beam, passes over a friction guide roller, where it horizontally passes through the loom to a breastbeam. Here it turns vertically to the cloth beam. Even tension is essential as any variation will lead to broken threads. 
As the warp beam empties, its effective diameter changes, making the warp slacker. Tension is maintained by adding a wooden pulley to the beam, around which are two turns of rope attached to mill weights, thus retarding the beam through friction. The cloth beam bears a toothed wheel which works a pinion. A ratchet wheel is attached with a click lever to take up the slack in the cloth. This was Roberts's invention. The heddles are of standard construction. They are arranged in groups of four: even and odd threads must go up and down alternately, but two heddles are used for the evens and two for the odds, so adjacent threads do not rub. The lower ends of the heddle leaves are attached to treadles or marches. These are depressed by cams referred to as eccentrics. The loom is powered by a steam engine via a leather belt which drives the driving shaft. Here there is a flywheel to smooth the motion, a crank mechanism to drive the battens (swords), and a toothed wheel. This engages a second shaft, known as the tappet shaft or wiper shaft, whose job is to lower the treadles and throw the shuttle. This turns at half the speed of the driving shaft, so its toothed wheel is twice the size. The shuttle is thrown by two levers attached to the side frame, but activated by a friction roller on the tappet shaft. As the shuttle enters the shuttle-box at the end of its travel, it depresses a lever which acts as a brake. If this lever is not depressed, the loom is stopped. Economics The Roberts was made at a time when the power loom industry was set to expand. Until this moment, hand looms were more common than power looms. The reliable Roberts loom was quickly adopted, and again it was the spinning side that was short of capacity. Roberts then addressed this with the construction of a self-acting (automatic) spinning mule. Essentially, textile production was no longer a skilled craft but an industrial process that could be manned by semi-skilled labour.
Mule spinning became the man's occupation, and weaving a girl's occupation. References Bibliography External links Selected Cotton Chats – Draper Corporation 1901–1923 (Last checked 3 October 2012) Textile machinery Weaving equipment History of the textile industry Industrial Revolution in England 1830 introductions Textile mills in Lancashire 19th-century inventions Welsh inventions 19th century in Lancashire
Roberts Loom
Engineering
886
57,210,774
https://en.wikipedia.org/wiki/Spinor%20condensate
Spinor condensates are degenerate Bose gases that have degrees of freedom arising from the internal spin of the constituent particles. They are described by a multi-component (spinor) order parameter. Since their initial experimental realisation, a wealth of studies have appeared, both experimental and theoretical, focusing on the physical properties of spinor condensates, including their ground states, non-equilibrium dynamics, and vortices. Early work The study of spinor condensates was initiated in 1998 by experimental groups at JILA and MIT. These experiments utilised 23Na and 87Rb atoms, respectively. In contrast to most prior experiments on ultracold gases, these experiments utilised a purely optical trap, which is spin-insensitive. Shortly thereafter, theoretical work appeared which described the possible mean-field phases of spin-one spinor condensates. Underlying Hamiltonian The Hamiltonian describing a spinor condensate is most frequently written using the language of second quantization. Here the field operator creates a boson in Zeeman level at position . These operators satisfy bosonic commutation relations: The free (non-interacting) part of the Hamiltonian is where denotes the mass of the constituent particles and is an external potential. For a spin-one spinor condensate, the interaction Hamiltonian is In this expression, is the operator corresponding to the density, is the local spin operator ( is a vector composed of the spin-one matrices), and :: denotes normal ordering. The parameters can be expressed in terms of the s-wave scattering lengths of the constituent particles. Higher spin versions of the interaction Hamiltonian are slightly more involved, but can generally be expressed by using Clebsch–Gordan coefficients. The full Hamiltonian then is . Mean-field phases In Gross–Pitaevskii mean-field theory, one replaces the field operators with c-number functions: .
To find the mean-field ground states, one then minimises the resulting energy with respect to these c-number functions. For a spatially uniform spin-one system, there are two possible mean-field ground states. When , the ground state is while for the ground state is The former expression is referred to as the polar state while the latter is the ferromagnetic state. Both states are unique up to overall spin rotations. Importantly, cannot be rotated into . The Majorana stellar representation provides a particularly insightful description of the mean-field phases of spinor condensates with larger spin. Vortices Because they are described by a multi-component order parameter, numerous types of topological defects (vortices) can appear in spinor condensates. Homotopy theory provides a natural description of topological defects, and is regularly employed to understand vortices in spinor condensates. References Bose–Einstein condensates Exotic matter Phases of matter
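The formulas in the passage above did not survive text extraction. As a hedged sketch in the conventional notation of the standard spin-one treatment (with c0 and c2 the density and spin interaction parameters mentioned implicitly in the text), the interaction Hamiltonian and the two uniform mean-field states are typically written:

```latex
% Spin-one interaction Hamiltonian (standard form; c_0, c_2 are set by
% the s-wave scattering lengths, :...: denotes normal ordering)
H_{\mathrm{int}} = \int \mathrm{d}^3 r \,
  \left( \frac{c_0}{2}\, {:}\hat{n}(\mathbf{r})^2{:}
       + \frac{c_2}{2}\, {:}\hat{\mathbf{F}}(\mathbf{r})^2{:} \right)

% Uniform mean-field ground states (each unique up to overall spin rotations):
% polar state, favoured for c_2 > 0
\boldsymbol{\psi}_{\mathrm{polar}} = \sqrt{n}\,(0,\,1,\,0)^{T}
% ferromagnetic state, favoured for c_2 < 0
\boldsymbol{\psi}_{\mathrm{ferro}} = \sqrt{n}\,(1,\,0,\,0)^{T}
```

As the text notes, the polar and ferromagnetic spinors cannot be rotated into one another, which is why they define distinct mean-field phases.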
Spinor condensate
Physics,Chemistry,Materials_science
591
7,777,703
https://en.wikipedia.org/wiki/Hit-and-miss%20engine
A hit-and-miss engine or Hit 'N' Miss is a type of stationary internal combustion engine that is controlled by a governor to fire only at a set speed. They are usually 4-stroke, but 2-stroke versions were also made. It was conceived in the late 19th century and produced by various companies from the 1890s through approximately the 1940s. The name comes from the speed control on these engines: they fire ("hit") only when operating at or below a set speed, and cycle without firing ("miss") when they exceed their set speed. This contrasts with the "throttle-governed" method of speed control. The sound made when the engine is running without a load is a distinctive "Snort POP whoosh whoosh whoosh whoosh snort POP" as the engine fires and then coasts until the speed decreases and it fires again to maintain its average speed. The snorting is caused by the atmospheric intake valve used on many of these engines. Many engine manufacturers made hit-and-miss engines during their peak use—from approximately 1910 through the early 1930s, when more modern designs began to replace them. Some of the largest engine manufacturers were Stover, Hercules, International Harvester (McCormick Deering), John Deere (Waterloo Engine Works), Maytag, and Fairbanks Morse. In the Canadian Atlantic Provinces, primarily in Newfoundland, these engines were known colloquially as "Make-and-Break" engines. The main usage here was to drive traditional skiff-style utility and fishing boats. Construction A hit-and-miss engine is a type of flywheel engine. A flywheel engine is an engine that has a large flywheel or set of flywheels connected to the crankshaft. The flywheels maintain engine speed during engine cycles that do not produce driving mechanical forces. The flywheels store energy on the combustion stroke and supply the stored energy to the mechanical load on the other three strokes of the piston.
When these engines were designed, technology was less advanced, and manufacturers made all parts very large. A typical engine weighs approximately . Typically, the material for all significant engine parts was cast iron. Small functional pieces were made of steel and machined to tolerance. The fuel system of a hit-and-miss engine consists of a fuel tank, fuel line, check valve, and fuel mixer. The fuel tank most typically holds gasoline, but many users started the engines with gasoline and then switched to a cheaper fuel, such as kerosene or diesel. The fuel line connects the fuel tank to the mixer. Along the fuel line, a check valve keeps the fuel from running back to the tank between combustion strokes. The mixer creates the correct fuel-air mixture by means of a needle valve attached to a weighted or spring-loaded piston, usually in conjunction with an oil-damped dashpot. Mixer operation is simple; it contains only one moving part, the needle valve. While there are exceptions, a mixer does not store fuel in a bowl of any kind. Fuel is simply fed to the mixer, where, due to the effect of Bernoulli's principle, it is self-metered in the Venturi created below the weighted piston by the action of the attached needle valve, the method used to this day in the SU carburetor. Sparks to ignite the fuel mixture are created by either a spark plug or a device called an igniter. When a spark plug is used, the spark is generated by either a magneto or a trembler (or "buzz") coil. A buzz coil uses battery power to generate a series of high-voltage pulses that are fed to the spark plug. For igniter ignition, either a battery and coil or a "low-tension" magneto is used. With battery and coil ignition, a battery is wired in series with a wire coil and the igniter contacts. When the contacts of the igniter are closed (the contacts reside inside the combustion chamber), electricity flows through the circuit.
When the contacts are opened by the timing mechanism, a spark is generated across the contacts, which ignites the mixture. When a low-tension magneto (really a low-voltage high-current generator) is used, the output of the magneto is fed directly to the igniter points and the spark is generated as with a battery and coil. Except for very large examples, lubrication was almost always manual. Main crankshaft bearings and the connecting rod bearing on the crankshaft generally have a grease cup—a small container with grease and a screwed-on cover. When the cover is screwed down tighter, grease is forced out of the bottom of the cup and into the bearing. Some early engines have just a hole in the bearing casting cap where an operator squirts lubricating oil while the engine is running. The piston is lubricated by a drip oiler that continuously feeds drips of oil onto the piston. The excess oil from the piston runs out of the cylinder onto the engine and eventually onto the ground. The drip oiler can be adjusted to drip faster or slower depending on the need for lubrication, dictated by how hard the engine is working. The rest of the moving engine components were all lubricated by oil that the engine operator had to apply periodically while the engine was running. Virtually all hit-and-miss engines are of the "open crank" style, that is, there is no enclosed crankcase. The crankshaft, connecting rod, camshaft, gears, governor, etc. are all completely exposed and can be viewed in operation when the engine is running. This makes for a messy environment, as oil and sometimes grease are thrown from the engine and run onto the ground. Another disadvantage is that dirt and dust can get on all moving engine parts, causing excessive wear and malfunctions. Frequent cleaning of the engine is therefore required to keep it in proper operating condition. Cooling of the majority of hit-and-miss engines is by hopper cooling, with water in an open reservoir.
A small proportion of small and fractional-horsepower engines were air-cooled with the aid of an incorporated fan. The water-cooled engine has a built-in reservoir (larger engines usually do not have a reservoir and require connection to a large external tank for cooling water via pipe connections on the cylinder). The water reservoir includes the area around the cylinder as well as the cylinder head (in most cases) and a tank mounted or cast above the cylinder. When the engine runs, it heats the water. Cooling is accomplished by the water steaming off and removing heat from the engine. When an engine runs under load for a period of time, it is common for the water in the reservoir to boil. Replacement of lost water is needed from time to time. A danger of the water-cooled design is freezing in cold weather. Many engines were ruined when a forgetful operator neglected to drain the water when the engine was not in use, and the water froze and broke the cast iron engine pieces. However, New Holland patented a V-shaped reservoir, so that expanding ice pushed up and into a larger space rather than break the reservoir. Water jacket repairs are common on many of the engines that still exist. Design These were simple engines compared to modern engine designs. However, they incorporate some innovative designs in several areas, often in an attempt to circumvent patent infringement for a particular component. This is particularly true of the governor. Governor designs include centrifugal, swinging-arm, pivot-arm, and many other types. The actuator mechanism to govern speed is also varied, depending on existing patents and the governor used. See, for example, U.S. Patent 543,157 from 1895 or 980,658 from 1911. However accomplished, the governor has one job: to control the speed of the engine.
In modern engines, power output is controlled by throttling the flow of the air through the intake by means of a butterfly valve, the only exceptions to this being diesels and Valvetronic petrol engines. Operation The intake valve on hit-and-miss engines has no actuator; instead, a light spring holds the intake valve closed unless a vacuum in the cylinder draws it open. This vacuum only occurs if the exhaust valve is closed during the piston's down-stroke. When the hit-and-miss engine is operating above its set speed, the governor holds the exhaust valve open, preventing a vacuum in the cylinder and causing the intake valve to remain closed, thus interrupting the Otto cycle firing mechanism. When the engine is operating at or below its set speed, the governor lets the exhaust valve close. On the next down-stroke, a vacuum in the cylinder opens the intake valve and lets the fuel-air mixture enter. This mechanism prevents fuel consumption during the intake stroke of "miss" cycles. Usage Hit-and-miss engines produced power outputs from 1 through approximately 100 horsepower (0.75–75 kW). These engines run slowly—typically from 250 revolutions per minute (rpm) for large horsepower engines to 600 rpm for small horsepower engines. They powered pumps for cultivation, saws for cutting wood, generators for electricity in rural areas, farm equipment, and many other stationary applications. Some were mounted on cement mixers. These engines also ran some early washing machines. They were a labor-saving device on farms and helped farmers accomplish much more than they could previously. The engine was typically belted to the device being powered by a wide, flat belt, typically 2–6 inches (5–15 cm) wide. The flat belt was driven by a pulley on the engine that attached either to a flywheel or to the crankshaft.
The pulley was specially made to have a circumference slightly tapered from the middle to each edge (like an over-inflated car tire) so that the middle of the pulley was a slightly larger diameter. This kept the flat belt in the center of the pulley. Replacement with throttle-governed engines By the 1930s, more-advanced engines became common. Flywheel engines are extremely heavy for the power produced, and run at very slow speeds. Older engines required a lot of maintenance and were not easily incorporated into mobile applications. In the late 1920s, International Harvester already had the model M engine, which was an enclosed version of a flywheel engine. Their next step was the model LA, which was a totally enclosed engine (except for the valve system) featuring self-lubrication (oil in the crankcase), reliable spark plug ignition, faster-speed operation (up to about 750–800 RPM), and lighter weight than earlier generations. While the model LA still weighed about , it was far lighter than the model M 1½-hp engine, which is in the 300–350 pound (136–159 kg) range. Later, a slightly improved LA, the LB, was produced. The models M, LA, and LB are throttle governed. As time passed, more engine manufacturers moved to the enclosed-crankcase engine. Companies like Briggs and Stratton were also producing lightweight air-cooled engines in the 0.5–2 hp (0.37–1.5 kW) range and used much lighter-weight materials. These engines also run at much higher speeds (up to approximately 2,000–4,000 rpm) and therefore produce more power for a given size than slow flywheel engines. Most flywheel engine production ceased in the 1940s, but modern engines of this kind remain in use for applications where the low speed is desirable, mostly in oil field applications such as pumpjacks. Maintenance is less of a problem with modern flywheel engines than older ones due to their enclosed crankcases and more advanced materials.
Preservation Thousands of out-of-use flywheel engines were scrapped in the iron and steel drives of World War II, but many survived and have been restored to working order by enthusiasts. Numerous preserved hit-and-miss engines may be seen in action at shows dedicated to antique engines (which often also have antique tractors), as well as in the stationary engine section of steam fairs, vintage vehicle rallies, and county fairs. See also Bang-bang control References External links Harry's Old Engine "Antique gas engine collection" – a wide variety of hit-and-miss engine manuals (different makes, different uses), each with a detailed, illustrated description page, some including audio clips of the engines running Video of a 6hp Root & Vandervoort Hit & Miss Engine Description of Novo 6HP engine (manufactured in Lansing Michigan) with video showing engine in operation Description of a Fairbanks Jack-of-all-trades engine Description of a Jaeger 2HP engine Description of a Reid 15HP engine Video of large hit-and-miss engine Video of small hit-and-miss engine "International Harvester Famous 3 Horsepower Hit-Miss Engine" – Description of International Harvester Famous 3 Horsepower Hit-Miss Engine Gas Engine Magazine (features) – Enthusiast's magazine covering the history and preservation of hit-and-miss engines 7 hp Fuller & Johnson Restoration Engine technology Stationary engines Articles containing video clips
Hit-and-miss engine
Technology
2,697
803,478
https://en.wikipedia.org/wiki/ATC%20code%20S
References S
ATC code S
Chemistry
4
345,141
https://en.wikipedia.org/wiki/Product%20detector
A product detector is a type of demodulator used for AM and SSB signals. Rather than converting the envelope of the signal into the decoded waveform like an envelope detector, the product detector takes the product of the modulated signal and a local oscillator, hence the name. A product detector is a frequency mixer. Product detectors can be designed to accept either IF or RF frequency inputs. A product detector which accepts an IF signal would be used as a demodulator block in a superheterodyne receiver, and a detector designed for RF can be combined with an RF amplifier and a low-pass filter into a direct-conversion receiver. A simple product detector The simplest form of product detector mixes (or heterodynes) the RF or IF signal with a locally derived carrier (the Beat Frequency Oscillator, or BFO) to produce an audio frequency copy of the original audio signal and a mixer product at twice the original RF or IF frequency. This high-frequency component can then be filtered out, leaving the original audio frequency signal. Mathematical model of the simple product detector If m(t) is the original message, the AM signal can be shown to be Multiplying the AM signal x(t) by an oscillator at the same frequency as and in phase with the carrier yields which can be re-written as After filtering out the high-frequency component based around cos(2ωt) and the DC component C, the original message will be recovered. Drawbacks of the simple product detector Although this simple detector works, it has two major drawbacks: The frequency of the local oscillator must be the same as the frequency of the carrier, or else the output message will fade in and out in the case of AM, or be frequency shifted in the case of SSB Once the frequency is matched, the phase of the carrier must be obtained, or else the demodulated message will be attenuated, but the noise will not be. 
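The simple product detector described above can be demonstrated numerically. In this sketch the sample rate, carrier, message tone, and filter cutoff are all illustrative choices, not values from the text; the local oscillator is matched to the carrier in both frequency and phase, as the mathematical model requires:

```python
import numpy as np

fs = 100_000                      # sample rate, Hz
fc = 10_000                       # carrier frequency, Hz
t = np.arange(0, 0.05, 1 / fs)    # 50 ms of signal

C = 1.0                                        # carrier offset, so C + m(t) > 0
m = 0.5 * np.sin(2 * np.pi * 300 * t)          # message: a 300 Hz tone
am = (C + m) * np.cos(2 * np.pi * fc * t)      # the AM signal x(t)

# mix with a local oscillator matched in frequency and phase:
# x(t)*cos(wt) = (C + m)/2 + (C + m)/2 * cos(2wt)
mixed = am * np.cos(2 * np.pi * fc * t)

# low-pass filter: zero every spectral component above 1 kHz,
# removing the image at 2*fc and keeping the baseband copy
spectrum = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
spectrum[freqs > 1_000] = 0.0
baseband = np.fft.irfft(spectrum, n=len(mixed))

recovered = 2 * baseband - C      # undo the factor of 1/2 and the DC term C
```

After the filter and DC removal, `recovered` reproduces the 300 Hz message, exactly as the mathematical model predicts: the mixer shifts the message to baseband and to twice the carrier frequency, and only the baseband copy survives.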
The local oscillator can be synchronized with the carrier using a phase-locked loop in a synchronous detector arrangement. For SSB, the only solution is to construct a highly stable oscillator. Another example There are many other kinds of product detectors as well, which are practical if one has access to digital signal processing equipment. For instance, it is possible to multiply the incoming signal by the carrier, times the square of another carrier 90° out of phase with it. This will produce a copy of the original message, and another AM signal at the fourth harmonic, by means of the trigonometric identity The high-frequency component can again be filtered out, leaving the original signal. Mathematical model of the detector If m(t) is the original message, the AM signal can be shown to be Multiplying the AM signal by the new set of frequencies yields After filtering out the component based around cos(4ωt) and the DC component C, the original message will be recovered. A more sophisticated product detector A more sophisticated product detector can be constructed in a way much like a single-sideband modulator. Two copies of the modulated input signals are created. The first copy is mixed with a local oscillator and low-pass filtered. The second copy is mixed with a 90° phase-shifted copy of the oscillator and the output of this mixer is also 90° phase-shifted and then low-pass filtered. These copies are then combined to produce the original message. This operation is similar to that performed by a dual-phase lock-in amplifier. Example: I-Q Demodulator Advantages and disadvantages The product demodulator has some advantages over an envelope detector for AM signal reception. The product demodulator can decode overmodulated AM and AM with suppressed carrier. A signal demodulated with a product detector will have a higher signal-to-noise ratio than the same signal demodulated with an envelope detector. 
On the other hand, the envelope detector is a simple and relatively inexpensive circuit, and it can provide higher fidelity, since there is no possibility of mistuning the local oscillator. A product detector (or equivalent) is needed to demodulate SSB signals. Frequency mixers Communication circuits Demodulation
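The I-Q arrangement described in the "more sophisticated product detector" section can also be demonstrated numerically. In this sketch (all parameters illustrative) the local oscillator is deliberately given no knowledge of the carrier phase, which is the drawback the quadrature arrangement removes:

```python
import numpy as np

fs, fc = 100_000, 10_000
t = np.arange(0, 0.05, 1 / fs)
m = 0.4 * np.sin(2 * np.pi * 200 * t)          # message tone
phase = 1.0                                    # unknown carrier phase, radians
x = (1.0 + m) * np.cos(2 * np.pi * fc * t + phase)

def lowpass(sig, cutoff_hz):
    """Brick-wall low-pass via the FFT (adequate for a demo)."""
    spec = np.fft.rfft(sig)
    f = np.fft.rfftfreq(len(sig), 1 / fs)
    spec[f > cutoff_hz] = 0.0
    return np.fft.irfft(spec, n=len(sig))

# mix against quadrature local oscillators that ignore the phase offset
i = lowpass(x * np.cos(2 * np.pi * fc * t), 1_000)   # in-phase arm
q = lowpass(x * np.sin(2 * np.pi * fc * t), 1_000)   # quadrature arm

# i = (1+m)*cos(phase)/2 and q = -(1+m)*sin(phase)/2, so the magnitude
# recovers the envelope regardless of the unknown phase
envelope = 2.0 * np.sqrt(i**2 + q**2)
recovered = envelope - 1.0
```

Unlike the single-mixer detector, this recovers m(t) even with the local oscillator out of phase with the carrier — the same trick used by the dual-phase lock-in amplifier mentioned in the text.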
Product detector
Engineering
909
55,242,327
https://en.wikipedia.org/wiki/Sarcodontia%20fragilissima
Sarcodontia fragilissima is a species of toothed crust fungus in the family Meruliaceae. The fungus was originally described as Hydnum fragilissimum by Miles Joseph Berkeley and Moses Ashley Curtis in 1873. It was transferred to the genus Sarcodontia by T.L. Nikolajeva in 1961. References Fungi described in 1873 Fungi of Europe Meruliaceae Taxa named by Miles Joseph Berkeley Fungus species
Sarcodontia fragilissima
Biology
89
38,223,205
https://en.wikipedia.org/wiki/Cultural%20conflict
Cultural conflict is a type of conflict that occurs when different cultural values and beliefs clash. Broad and narrow definitions exist for the concept, both of which have been used to explain violence (including war) and crime, on either a micro or macro scale. Conflicting values Jonathan H. Turner defines cultural conflict as a conflict caused by "differences in cultural values and beliefs that place people at odds with one another." On a micro level, Alexander Grewe discusses a cultural conflict between guests of different cultures and nationalities as seen in the 1970s British sitcom Fawlty Towers. He defines this conflict as one that occurs when people's expectations of a certain behavior, coming from their cultural backgrounds, are not met, because others have different cultural backgrounds and different expectations. Cultural conflicts are difficult to resolve, as parties to the conflict hold different beliefs. Cultural conflicts intensify when those differences become reflected in politics, particularly on a macro level. An example of cultural conflict is the debate over abortion. Ethnic cleansing is another extreme example of cultural conflict. Wars can also be a result of a cultural conflict; for example, the differing views on slavery were one of the reasons for the American Civil War. Crime and deviance A narrower definition of cultural conflict dates to Daniel Bell's 1962 essay, "Crime as an American Way of Life", and focuses on the criminal-enabling consequences of a clash in cultural values. William Kornblum defines it as a conflict that occurs when conflicting norms create "opportunities for deviance and criminal gain in deviant subcultures." Kornblum notes that, whenever laws impose cultural values on a group that does not share those views (often, this is the case of a majority imposing their laws on a minority), illegal markets supplied by criminals are created to circumvent those laws.
He discusses the example of prohibition in the interbellum United States, and notes how the cultural conflict between pro- and anti-alcohol groups created opportunities for illegal activity; another similar example he lists is that of the war on drugs. Kornblum also classifies the cultural conflict as one of the major types of conflict theory. In The Clash of Civilizations Samuel P. Huntington proposes that people's cultural and religious identities will be the primary source of conflict in the post-Cold War world. Influence and understanding Michelle LeBaron describes different cultures as "underground rivers that run through our lives and relationships, giving us messages that shape our perceptions, attributions, judgments, and ideas of self and other." She states that cultural messages "shape our understandings" when two or more people are present in regards to relationships, conflict, and peace. LeBaron discusses the influence of culture as being powerful and "unconscious, influencing conflict and attempts to resolve conflict in imperceptible ways." She states that the impact of culture is huge, affecting "name, frame, blame, and attempt to tame conflicts." Due to the huge impact that culture has on us, LeBaron finds it important to explain the "complications of conflict:" First, "culture is multi-layered," meaning that "what you see on the surface may mask differences below the surface." Second, "culture is constantly in flux," meaning that "cultural groups adapt in dynamic and sometimes unpredictable ways." Third, "culture is elastic," meaning that one member of a cultural group may not participate in the norms of the culture. Lastly, "culture is largely below the surface," meaning that it isn't easy to reach the deeper levels of culture and its meanings. 
See also Cultural diversity Cultural divide Cultural genocide Cultural hegemony Cultural imperialism Cultural tourism Culture shock Culture war Ethnic conflict Identity politics Language policy Linguistic imperialism Linguistic rights Multiculturalism Regionalism (politics) Religious war Social cohesion War against Islam War against Judaism Kulturkampf Clash of civilizations References Further reading Croissant, Aurel, Uwe Wagschal, Nicolas Schwank, and Christoph Trinn. 2009. Culture and Conflict in Global Perspective: The Cultural Dimensions of Conflicts from 1945 to 2007. Markus, Hazel Rose, and Alana Conner. 2014. Clash!: How to Thrive in a Multicultural World. Cultural politics Conflict (process)
Cultural conflict
Biology
860
14,445,101
https://en.wikipedia.org/wiki/HD%20104985
HD 104985, formally named Tonatiuh (), is a solitary star with an exoplanetary companion in the northern constellation of Camelopardalis. The companion is designated HD 104985 b and named Meztli (). This star has an apparent visual magnitude of 5.78 and thus is dimly visible to the naked eye under favorable seeing conditions. It is located at a distance of approximately 329 light years from the Sun based on parallax, but is drifting closer with a radial velocity of −20 km/s. The stellar classification of this star is G8.5IIIb, indicating this is an evolved giant star that has exhausted the supply of hydrogen at its core then cooled and expanded off the main sequence. It is located in the red clump region of the HR diagram, suggesting it is on the horizontal branch and generating energy through core helium fusion. The star is approximately 4.4 billion years old with 1.2 times the mass of the Sun and has expanded to 10.6 times the Sun's radius. It is radiating 51 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,730 K. In 2003, radial velocity measurements made by the Okayama Planet Search Program led to the announcement of an exoplanetary companion. It is orbiting at a distance of with a period of 199.5 days with an eccentricity (ovalness) of 0.09. Since the inclination of the exoplanet's orbital plane is unknown, only a lower bound on its mass can be determined. It has at least 8.3 times the mass of Jupiter. Naming HD 104985 is the star's entry in the Henry Draper Catalogue. Following its discovery in 2003 the planet was designated HD 104985 b. In July 2014 the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning names were Tonatiuh for this star and Meztli for its planet.
The winning names were those submitted by the Sociedad Astronomica Urania of Morelos, Mexico. 'Tonatiuh' was the Aztec god of the Sun; 'Meztli' was the Aztec goddess of the Moon. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. In its first bulletin of July 2016, the WGSN explicitly recognized the names of exoplanets and their host stars approved by the Executive Committee Working Group Public Naming of Planets and Planetary Satellites, including the names of stars adopted during the 2015 NameExoWorlds campaign. This star is now so entered in the IAU Catalog of Star Names. See also List of extrasolar planets References G-type giants Horizontal-branch stars Planetary systems with one confirmed planet Camelopardalis Durchmusterung objects 104985 058952 4609 Tonatiuh
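Kepler's third law ties the 199.5-day period quoted above to the size of the orbit. The sketch below is a back-of-envelope estimate under the assumption that the system mass is dominated by the star's 1.2 solar masses; it illustrates the method only and should not be read as the published orbital distance:

```python
# Kepler's third law in solar units: a^3 [AU^3] = M [solar masses] * P^2 [years^2]
P_years = 199.5 / 365.25        # orbital period from the radial-velocity fit
M_solar = 1.2                   # assumed total mass (the star dominates)

a_au = (M_solar * P_years**2) ** (1 / 3)
print(f"estimated semi-major axis: {a_au:.2f} AU")
```

With these assumptions the companion orbits well inside 1 AU; the true value shifts slightly with the adopted stellar mass.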
HD 104985
Astronomy
636
155,214
https://en.wikipedia.org/wiki/Pudendal%20nerve
The pudendal nerve is the main nerve of the perineum. It is a mixed (motor and sensory) nerve and also conveys sympathetic autonomic fibers. It carries sensation from the external genitalia of both sexes and the skin around the anus and perineum, as well as the motor supply to various pelvic muscles, including the male or female external urethral sphincter and the external anal sphincter. If damaged, most commonly by childbirth, loss of sensation or fecal incontinence may result. The nerve may be temporarily anesthetized, called pudendal anesthesia or pudendal block. The pudendal canal that carries the pudendal nerve is also known by the eponymous term "Alcock's canal", after Benjamin Alcock, an Irish anatomist who documented the canal in 1836. Structure Origin The pudendal nerve is paired, meaning there are two nerves, one on the left and one on the right side of the body. Each is formed as three roots immediately converge above the upper border of the sacrotuberous ligament and the coccygeus muscle. The three roots become two cords when the middle and lower root join to form the lower cord, and these in turn unite to form the pudendal nerve proper just proximal to the sacrospinous ligament. The three roots are derived from the ventral rami of the 2nd, 3rd, and 4th sacral spinal nerves, with the primary contribution coming from the 4th. Course and relations The pudendal nerve passes between the piriformis muscle and coccygeus (ischiococcygeus) muscles and leaves the pelvis through the lower part of the greater sciatic foramen. It crosses over the lateral part of the sacrospinous ligament and reenters the pelvis through the lesser sciatic foramen. After reentering the pelvis, it accompanies the internal pudendal artery and internal pudendal vein upwards and forwards along the lateral wall of the ischiorectal fossa, being contained in a sheath of the obturator fascia termed the pudendal canal, along with the internal pudendal blood vessels. 
Branches Inside the pudendal canal, the nerve divides into branches, first giving off the inferior rectal nerve, then the perineal nerve, before continuing as the dorsal nerve of the penis (in males) or the dorsal nerve of the clitoris (in females). Nucleus The nerve is a major branch of the sacral plexus, with fibers originating in Onuf's nucleus in the sacral region of the spinal cord. Variation The pudendal nerve may vary in its origins. For example, the pudendal nerve may actually arise from the sciatic nerve. Consequently, damage to the sciatic nerve can affect the pudendal nerve as well. Sometimes dorsal rami of the first sacral nerve contribute fibers to the pudendal nerve, and even more rarely . Function The pudendal nerve has both motor (control of muscles) and sensory functions. It also carries sympathetic autonomic fibers (but not parasympathetic fibers). Sensory The pudendal nerve supplies sensation to the penis in males and to the clitoris in females, carried through the dorsal nerve of the penis and the dorsal nerve of the clitoris, respectively. The posterior scrotum in males and the labia majora in females are also supplied, via the posterior scrotal nerves (males) or posterior labial nerves (females). The pudendal nerve is one of several nerves supplying sensation to these areas. Branches also supply sensation to the anal canal. By providing sensation to the penis and the clitoris, the pudendal nerve is responsible for the afferent component of penile erection and clitoral erection. Motor Branches innervate muscles of the perineum and the pelvic floor: namely, the bulbospongiosus and ischiocavernosus muscles, the levator ani muscle (including the iliococcygeus, pubococcygeus, puborectalis, and either the pubovaginalis in females or the puboprostaticus in males), the external anal sphincter (via the inferior anal branch), and the male or female external urethral sphincter.
As it functions to innervate the external urethral sphincter it is responsible for the tone of the sphincter mediated via acetylcholine release. This means that during periods of increased acetylcholine release the skeletal muscle in the external urethral sphincter contracts, causing urinary retention. Whereas in periods of decreased acetylcholine release the skeletal muscle in the external urethral sphincter relaxes, allowing voiding of the bladder to occur. (Unlike the internal sphincter muscle, the external sphincter is made of skeletal muscle, therefore it is under voluntary control of the somatic nervous system.) It is also responsible for ejaculation. Clinical significance The pudendal nerve may be tested by elicitation of the anocutaneous reflex ("anal wink"). Anesthesia A pudendal nerve block, also known as a saddle nerve block, is a local anesthesia technique used in an obstetric procedure to anesthetize the perineum during labor. In this procedure, an anesthetic agent such as lidocaine is injected through the inner wall of the vagina about the pudendal nerve. Abnormal loss of sensation in the same region as a medical symptom is also sometimes termed saddle anesthesia. Damage The pudendal nerve can be compressed or stretched, resulting in temporary or permanent neuropathy. Injury to the pudendal nerve manifests more as sensory problems (pain or alteration/loss of sensation) rather than loss of muscle control. Irreversible nerve injury may occur when nerves are stretched by 12% or more of their normal length. If the pelvic floor is over-stretched, acutely (e.g. prolonged or difficult childbirth) or chronically (e.g. chronic straining during defecation caused by constipation), the pudendal nerve is vulnerable to stretch-induced neuropathy. After repeated traction of the pudendal nerve, it starts to be replaced by fibrous tissue with subsequent loss of function. 
Pudendal nerve entrapment, also known as Alcock canal syndrome, is neuropathic pain in the distribution of the pudendal nerve, caused by entrapment of the nerve. The condition is estimated to have a prevalence of 1 in 100,000, and is sometimes associated with professional cycling. Systemic diseases such as diabetes and multiple sclerosis can damage the pudendal nerve via demyelination or other mechanisms. A pelvic tumor (most notably a large sacrococcygeal teratoma), or surgery to remove the tumor, can also cause permanent damage. Unilateral pudendal nerve neuropathy causes fecal incontinence in some individuals but not others, because crossover innervation of the external anal sphincter occurs in some individuals. There is significant overlap of the innervation of the external anal sphincter from the pudendal nerves of both sides, which allows partial re-innervation from the opposite side after nerve injury. Imaging The pudendal nerve is difficult to visualize on routine CT or MR imaging; however, under CT guidance, a needle may be placed adjacent to the pudendal neurovascular bundle. The ischial spine, an easily identifiable structure on CT, is used as the level of injection. A spinal needle is advanced through the gluteal muscles to within several millimeters of the ischial spine. Contrast (X-ray dye) is then injected, highlighting the nerve in the canal and allowing for confirmation of correct needle placement. The nerve may then be injected with cortisone and local anesthetic to confirm the diagnosis and to treat chronic pain of the external genitalia (known as vulvodynia in females) as well as pelvic and anorectal pain. Nerve latency testing The time taken for a muscle supplied by the pudendal nerve to contract in response to an electrical stimulus applied to the sensory and motor fibers can be quantified. Increased conduction time (terminal motor latency) signifies damage to the nerve.
Two stimulating electrodes and two measuring electrodes are mounted on the examiner's gloved finger ("St Mark's electrode"). History The term pudendal comes from Latin , meaning external genitals, derived from , meaning "parts to be ashamed of". The pudendal canal is also known by the eponymous term "Alcock's canal", after Benjamin Alcock, an Irish anatomist who documented the canal in 1836. Alcock documented the existence of the canal and pudendal nerve in a contribution about iliac arteries in Robert Bentley Todd's "The Cyclopaedia of Anatomy and Physiology". Additional images See also Neurogenic bladder Pudendal neuralgia Sacral plexus Inferior rectal nerve Perineal nerve Dorsal nerve of the penis Dorsal nerve of the clitoris Pudendal canal References External links - "Inferior view of female perineum, branches of the internal pudendal artery." Diagnosis and treatment at www.nervemed.com www.pudendal.com Pudendal nerve entrapment at chronicprostatitis.com CT sequence showing a pudendal nerve block. Nerves of the lower limb and lower torso Sexual anatomy
https://en.wikipedia.org/wiki/Population%20equivalent
Population equivalent (PE), unit per capita loading, or equivalent person (EP) is a parameter for characterizing industrial wastewaters. It essentially compares the polluting potential of an industry (in terms of biodegradable organic matter) with that of a population, i.e. the number of people that would produce the same polluting load. In other words, it is the number expressing the ratio of the sum of the pollution load produced during 24 hours by industrial facilities and services to the individual pollution load in household sewage produced by one person in the same time. This refers to the amount of oxygen-demanding substances in wastewater, which consume oxygen as they biodegrade, usually as a result of bacterial activity. Equation and base value A value frequently used in the international literature for PE, based on a German publication, is 54 grams of BOD (biochemical oxygen demand) per person (or per capita, or per inhabitant) per day. This has been adopted by many countries for design purposes, but other values are also in use. For example, a definition commonly used in Europe is: 1 PE equates to 60 grams of BOD per person per day, and it also equals 200 liters of sewage per day. In the United States, a figure of 80 grams of BOD per person per day is normally used. If the base value is taken as 60 grams of BOD per person per day, then the equation to calculate PE from an industrial wastewater is: PE = (flow [m³/day] × BOD concentration [g/m³]) / 60 [g BOD/(person·day)]. Population equivalents for industrial wastewaters
{| class="wikitable sortable" border="1"
|+ BOD population equivalents of wastewater from some industries
|-
! Type !! Activity !! Unit of production !! BOD PE [inhab/(unit/d)]
|-
| Food || Canning (fruit/vegetables) || 1 ton processed || 500
|-
| || Pea processing || 1 ton processed || 85-400
|-
| || Tomato || 1 ton processed || 50-185
|-
| || Carrot || 1 ton processed || 160-390
|-
| || Potato || 1 ton processed || 215-545
|-
| || Citrus fruit || 1 ton processed || 55
|-
| || Chicken meat || 1 ton processed || 70-1600
|-
| || Beef || 1 ton processed || 20-600
|-
| || Fish || 1 ton processed || 300-2300
|-
| || Sweets/candies || 1 ton produced || 40-150
|-
| || Sugar cane || 1 ton produced || 50
|-
| || Dairy (without cheese) || 1000 L milk || 20-100
|-
| || Dairy (with cheese) || 1000 L milk || 100-800
|-
| || Margarine || 1 ton produced || 500
|-
| || Slaughter house || 1 cow / 2.5 pigs || 10-100
|-
| || Yeast production || 1 ton produced || 21000
|-
| Confined animals breeding || Pigs || live t.d || 35-100
|-
| || Dairy cattle (milking room) || live t.d || 1-2
|-
| || Cattle || live t.d || 65-150
|-
| || Horses || live t.d || 65-150
|-
| || Poultry || live t.d || 15-20
|-
| Sugar-alcohol || Alcohol distillation || 1 ton cane processed || 4000
|-
| Drinks || Brewery || 1 m3 produced || 150-350
|-
| || Soft drinks || 1 m3 produced || 50-100
|-
| || Wine || 1 m3 produced || 5
|-
| Textiles || Cotton || 1 ton produced || 2800
|-
| || Wool || 1 ton produced || 5600
|-
| || Rayon || 1 ton produced || 550
|-
| || Nylon || 1 ton produced || 800
|-
| || Polyester || 1 ton produced || 3700
|-
| || Wool washing || 1 ton produced || 2000-4500
|-
| || Dyeing || 1 ton produced || 2000-3500
|-
| || Textile bleaching || 1 ton produced || 250-350
|-
| Leather and tanneries || Tanning || 1 ton hide processed || 1000-3500
|-
| || Shoes || 1000 pairs produced || 300
|-
| Pulp and paper || Pulp || 1 ton produced || 600
|-
| || Paper || 1 ton produced || 100-300
|-
| || Pulp and paper integrated || 1 ton produced || 1000-10000
|-
| Chemical industrial || Paint || 1 employee || 20
|-
| || Soap || 1 ton produced || 1000
|-
| || Petroleum refinery || 1 barrel (117 L) || 1
|-
| || PVC || 1 ton produced || 200
|-
| Steelworks || Foundry || 1 ton pig iron produced || 12-30
|-
| || Lamination || 1 ton produced || 8-50
|}
See also Sewage treatment References Environmental science Waste treatment technology Sewerage Equivalent units
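The population-equivalent calculation described above reduces to a one-line formula. The sketch below assumes the European base value of 60 g BOD per person per day; the function and variable names are illustrative, not from any standard library:

```python
def population_equivalent(flow_m3_per_day, bod_g_per_m3, base_g_per_person_day=60):
    """Population equivalent (PE) of an industrial wastewater stream.

    flow_m3_per_day: wastewater flow in m3/day
    bod_g_per_m3: BOD concentration in g/m3 (numerically equal to mg/L)
    base_g_per_person_day: per-capita BOD load; 60 g/day in the European definition
    """
    daily_bod_load_g = flow_m3_per_day * bod_g_per_m3  # total g BOD discharged per day
    return daily_bod_load_g / base_g_per_person_day

# e.g. a plant discharging 100 m3/day at a BOD of 1200 g/m3:
print(population_equivalent(100, 1200))  # 2000.0, i.e. the load of a town of 2000 people
```

With the US base value of 80 g BOD per person per day, the same load corresponds to 1500 PE, which is why the base value should always be stated alongside a PE figure.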
https://en.wikipedia.org/wiki/Distelfink
A distelfink is a stylized goldfinch, probably based on the European variety. It frequently appears in Pennsylvania Dutch folk art. It represents happiness and good fortune and the Pennsylvania German people, and is a common theme in hex signs and in fraktur. The word distelfink (literally 'thistle-finch') is (besides Stieglitz) the German name for the European goldfinch. In popular culture During the 1940s, variations of Distelfink birds with flowers, hearts and tulips became popular designs for crochet, pottery and wallpaper patterns. Distelfink was adopted as the name for a chain of drive-in restaurants serving Pennsylvania Dutch food that became popular across Pennsylvania during the twentieth century. Sandoe's Distelfink, located in Gettysburg and built by Cecil Sandoe in 1954, was patronized by a number of prominent Americans, including former first lady of the United States Mamie Eisenhower and Baltimore Orioles baseball star Brooks Robinson. In the story "The Sign of the Triple Distelfink", the American cartoonist Don Rosa used a triple distelfink hex sign as the origin for Gladstone Gander's remarkable luck. Notes External links Example of a distelfink American art German-American culture in Pennsylvania Pennsylvania Dutch culture Visual motifs
https://en.wikipedia.org/wiki/Olivetti%20M28
The Olivetti M28 personal computer, introduced in 1986, was the successor to the Olivetti M24. It had an Intel 80286 CPU running at 8 MHz and 512 KB (expandable to 1024 KB on the motherboard) of RAM, featuring a 5.25" floppy drive and a 20 MB hard drive. The operating systems were MS-DOS 3.2 and XENIX. The computer had room to install three disk units, as opposed to only two on the M24. It was possible to install a 70 MB hard drive, an 80287 math coprocessor and an enhanced CGA-compatible graphics card capable of displaying pixels with 16 colors. The Olivetti M28 was rebranded as the AT&T PC 6310 by AT&T in 1987 and sold on the US market. It was available in France as the Persona 1800, sold by LogAbax. See also Olivetti M24 External links Brochure (in Italian) References Olivetti personal computers Computer-related introductions in 1986
https://en.wikipedia.org/wiki/Electromagnetic%20brake
Electromagnetic brakes or EM brakes are used to slow or stop vehicles using electromagnetic force to apply mechanical resistance (friction). They were originally called electro-mechanical brakes but over the years the name changed to "electromagnetic brakes", referring to their actuation method which is generally unrelated to modern electro-mechanical brakes. Since becoming popular in the mid-20th century, especially in trains and trams, the variety of applications and brake designs has increased dramatically, but the basic operation remains the same. Both electromagnetic brakes and eddy current brakes use electromagnetic force, but electromagnetic brakes ultimately depend on friction whereas eddy current brakes use magnetic force directly. Applications In locomotives, a mechanical linkage transmits torque to an electromagnetic braking component. Trams and trains use electromagnetic track brakes where the braking element is pressed by magnetic force to the rail. They are distinguished from mechanical track brakes, where the braking element is mechanically pressed on the rail. Electric motors in industrial and robotic applications also employ electromagnetic brakes. Recent design innovations have led to the application of electromagnetic brakes to aircraft applications. In this application, a combination motor/generator is used first as a motor to spin the tires up to speed prior to touchdown, thus reducing wear on the tires, and then as a generator to provide regenerative braking. Types Single face brake A friction-plate brake uses a single plate friction surface to engage the input and output members of the clutch. Single face electromagnetic brakes make up approximately 80% of all of the power applied brake applications. Power off brake Power off brakes stop or hold a load when electrical power is either accidentally lost or intentionally disconnected. In the past, some companies have referred to these as "fail safe" brakes. 
These brakes are typically used on or near an electric motor. Typical applications include robotics, holding brakes for Z axis ball screws and servo motor brakes. Brakes are available in multiple voltages and can have either standard backlash or zero backlash hubs. Multiple disks can also be used to increase brake torque, without increasing brake diameter. There are two main types of holding brakes: spring-applied brakes and permanent magnet brakes. Spring type – When no electricity is applied to the brake, a spring pushes against a pressure plate, squeezing the friction disk between the inner pressure plate and the outer cover plate. This frictional clamping force is transferred to the hub, which is mounted to a shaft. Permanent magnet type – A permanent magnet holding brake looks very similar to a standard power applied electromagnetic brake. Instead of squeezing a friction disk via springs, it uses permanent magnets to attract a single face armature. When the brake is engaged, the permanent magnets create magnetic lines of flux, which in turn attract the armature to the brake housing. To disengage the brake, power is applied to the coil, which sets up an alternate magnetic field that cancels out the magnetic flux of the permanent magnets. Both power off brakes are considered to be engaged when no power is applied to them. They are typically required to hold or to stop alone in the event of a loss of power or when power is not available in a machine circuit. Permanent magnet brakes have a very high torque for their size, but also require a constant current control to offset the permanent magnetic field. Spring-applied brakes do not require a constant current control and can use a simple rectifier, but are larger in diameter or would need stacked friction disks to increase the torque.
Particle brake Magnetic particle brakes are unique in their design from other electro-mechanical brakes because of the wide operating torque range available. Like an electro-mechanical brake, torque to voltage is almost linear; however, in a magnetic particle brake, torque can be controlled very accurately (within the operating RPM range of the unit). This makes these units ideally suited for tension control applications, such as wire winding, foil, film, and tape tension control. Because of their fast response, they can also be used in high cycle applications, such as magnetic card readers, sorting machines and labeling equipment. Magnetic particles (very similar to iron filings) are located in the powder cavity. When electricity is applied to the coil, the resulting magnetic flux tries to bind the particles together, almost like a magnetic particle slush. As the electric current is increased, the binding of the particles becomes stronger. The brake rotor passes through these bound particles. The output of the housing is rigidly attached to some portion of the machine. As the particles start to bind together, a resistant force is created on the rotor, slowing, and eventually stopping the output shaft. Hysteresis power brake Electrical hysteresis units have an extremely wide torque range. Since these units can be controlled remotely, they are ideal for test stand applications where varying torque is required. Since drag torque is minimal, these units offer the widest available torque range of any of the hysteresis products. Most applications involving powered hysteresis units are in test stand requirements. When electricity is applied to the field, it creates an internal magnetic flux. That flux is then transferred into a hysteresis disk (that may be made from an AlNiCo alloy) passing through the field. The hysteresis disk is attached to the brake shaft. A magnetic drag on the hysteresis disk allows for a constant drag, or eventual stoppage of the output shaft. 
When electricity is removed from the brake, the hysteresis disk is free to turn, and no relative force is transmitted between either member. Therefore, the only torque seen between the input and the output is bearing drag. Multiple disk brake Multiple disk brakes are used to deliver extremely high torque within a small space. These brakes can be used either wet or dry, which makes them ideal to run in multi-speed gear box applications, machine tool applications, or in off-road equipment. Electro-mechanical disk brakes operate via electrical actuation, but transmit torque mechanically. When electricity is applied to the coil of an electromagnet, the magnetic flux attracts the armature to the face of the brake. As it does so, it squeezes the inner and outer friction disks together. The hub is normally mounted on the shaft that is rotating. The brake housing is mounted solidly to the machine frame. As the disks are squeezed, torque is transmitted from the hub into the machine frame, stopping and holding the shaft. When electricity is removed from the brake, the armature is free to turn with the shaft. Springs keep the friction disk and armature away from each other. There is no contact between braking surfaces and minimal drag. See also Brake run Electromagnetic clutch Regenerative brake Eddy current brake Dynamic braking References Brakes Railway brakes Electromagnetism Electromagnetic brakes and clutches cs:Elektrodynamická brzda
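The near-linear torque-to-voltage behaviour described above for magnetic particle brakes can be sketched as a simple control model; the gain and rated-torque constants here are invented purely for illustration:

```python
def particle_brake_torque(coil_voltage, gain_nm_per_volt=0.5, rated_torque_nm=12.0):
    """Illustrative magnetic particle brake model: braking torque rises
    almost linearly with coil voltage until the rated torque is reached."""
    return min(gain_nm_per_volt * coil_voltage, rated_torque_nm)

# In the linear region, doubling the coil voltage doubles the braking torque:
print(particle_brake_torque(4.0))   # 2.0 N*m
print(particle_brake_torque(8.0))   # 4.0 N*m
```

A tension controller for wire, foil, or film winding would adjust the coil voltage continuously to hold the measured web tension at a setpoint, which is why the accurate torque control of particle brakes suits those applications.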
https://en.wikipedia.org/wiki/Rest%20%28music%29
A rest is the absence of a sound for a defined period of time in music, or one of the musical notation signs used to indicate that. The length of a rest corresponds with that of a particular note value, thus indicating how long the silence should last. Each type of rest is named for the note value it corresponds with (e.g. quarter note and quarter rest, or quaver and quaver rest), and each of them has a distinctive sign. Description Rests are intervals of silence in pieces of music, marked by symbols indicating the length of the silence. Each rest symbol and name corresponds with a particular note value, indicating how long the silence should last, generally as a multiplier of a measure or whole note. The quarter (crotchet) rest (𝄽) may take a different form in older music. The four-measure rest or longa rest are only used in long silent passages which are not divided into bars. The combination of rests used to mark a silence follows the same rules as for note values. One-bar rests When an entire bar is devoid of notes, a whole (semibreve) rest is used, regardless of the actual time signature. Historically, exceptions were made for a time signature (four half notes per bar), when a double whole (breve) rest was typically used for a bar's rest, and for time signatures shorter than , when a rest of the actual measure length would be used. Some published (usually earlier) music places the numeral "" above the rest to confirm the extent of the rest. In manuscripts and facsimiles of them, bars of rest are sometimes left completely empty and unmarked, possibly even without the staves. Multiple measure rests In instrumental parts, rests of more than one bar in the same meter and key may be indicated with a multimeasure rest (British English: multiple bar rest), showing the number of bars of rest, as shown.
A multimeasure rest is usually drawn in one of two ways: As a thick horizontal line placed on the middle line of the staff, with serifs at both ends (see above middle picture), or as thick diagonal lines placed between the second and fourth lines of the staff, resembling a large heavy minus sign or equals sign set at a slant (the diagonal style is much less common than the horizontal one; although a small number of publishers use it, it is more commonly found in modern manuscripts in a casual style). Both variants of thick line rests are drawn in the same shape each time, regardless of how many bars' rest they represent. The older system of notating multirests (deriving from Baroque notation conventions that were adapted from the old mensural rest system dating from Medieval times) draws each multimeasure rest according to the picture above right unless it will exceed a certain number of bars; rests longer than that limit are drawn using the thick horizontal line mentioned above. How long a multimeasure rest must be before resorting to a horizontal line is a matter of personal taste or editorial policy; most publishers use ten bars as the changing point, however, larger and smaller changing points are used, especially in earlier music. The number of bars for which a horizontal line multimeasure rest lasts is indicated by a number printed above the musical staff (usually at the same size as the numerals in a time signature). If a change of meter or key occurs during a multimeasure rest, that rest must be divided into shorter sections for clarity, with the changes of key and/or meter indicated between the rests. Multimeasure rests must also be divided at double barlines, which demarcate musical phrases or sections, and at rehearsal letters. Dotted rests A rest may also have a dot after it, increasing its duration by half, but this is less commonly used than with notes, except occasionally in modern music notated in compound meters such as or . 
In these meters the long-standing convention has been to indicate one beat of rest as a quarter rest followed by an eighth rest (equivalent to three eighths). See: Anacrusis. General pause In a score for an ensemble piece, "G.P." (general pause) indicates silence for one bar or more for the entire ensemble. Specifically marking general pauses each time they occur (rather than writing them as ordinary rests) is relevant for performers, as making any kind of noise should be avoided there—for instance, page turns in sheet music are not made during general pauses, as the sound of turning the page becomes noticeable when no one is playing. See also Caesura List of silent musical compositions List of musical symbols Tacet References Musical notation Rhythm and meter Silence de:Notenwert#Pausen
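The correspondence between rest names and note values, and the effect of augmentation dots described above, can be sketched as follows (an illustrative snippet, not tied to any notation software):

```python
from fractions import Fraction

# Each rest lasts the same fraction of a whole (semibreve) note as its note value.
REST_VALUES = {
    "whole": Fraction(1),
    "half": Fraction(1, 2),
    "quarter": Fraction(1, 4),
    "eighth": Fraction(1, 8),
    "sixteenth": Fraction(1, 16),
}

def rest_duration(name, dots=0):
    """Duration of a rest in whole notes; each dot adds half of the preceding value."""
    return REST_VALUES[name] * (Fraction(2) - Fraction(1, 2 ** dots))

# A dotted quarter rest = quarter + eighth = three eighths of a whole note:
print(rest_duration("quarter", dots=1))  # 3/8
```

This mirrors the compound-meter convention mentioned above: one beat of rest in such meters equals a quarter rest plus an eighth rest, i.e. a dotted quarter.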
https://en.wikipedia.org/wiki/Antoni%20Abad
Antoni Abad i Roses (born 1956 in Lleida) is a Spanish artist. He began his career as a sculptor, and evolved over time towards video art and later net.art and other forms of new media. Biography Abad was born in 1956 in Lleida, Spain. Abad's artistic training began with the teachings of his father, followed by a degree in Art History from the University of Barcelona (1979), and studies of engraving in Cuenca, London and Perugia. Art career His work has evolved away from a traditional sculptural practice to the use of new technologies, and in particular the creation of community-based artworks using cell phones. He also moved from photography to video art, followed by an interest in computers and net.art. He uses the Internet as a creative and research platform. Antoni Abad's work expresses a desire for formal experimentation around the concepts of space and time, which are always present in his work and lately not without certain ironic and critical aspects. Exhibitions Some of his most important exhibitions: 1986 — Espai 10 (Fundació Joan Miró) Escultures mal·leables 1991 — Premi d'Arts Plàstiques Medalla Morera. Museu d'Art Jaume Morera. June 1997 — Medidas de emergencia. Espacio Uno, Museo Nacional Centro de Arte Reina Sofía, Madrid. 1999 — Museo de Arte Moderno de Buenos Aires & Venice Biennale. 2003 — The Real Royal Trip. P.S.1. – Museum of Modern Art, New York. 2006 — Centre d'Art Santa Mònica, Barcelona. 2014 — megafone.net/2004-2014, MACBA. 2017 — Venice Biennale. Awards Premi d'Arts Plàstiques Medalla Morera (medal) 1990 Premi Ciutat de Barcelona (City of Barcelona Prize) in the category of Multimedia (2002) for his work Z, shown in Metrònom and Centre d'Art Santa Monica Golden Nica at Ars Electronica within the category of virtual communities in 2006, considered the most important prize in the world for art and new technologies. Premi Nacional d'Arts Visuals (National Prize for Visual Arts) in 2006 given by the Government of Catalonia.
References 1956 births Artists from Catalonia Living people Net.artists People from Lleida Spanish contemporary artists Spanish video artists University of Barcelona alumni
https://en.wikipedia.org/wiki/Chicago%20Climate%20Action%20Plan
The Chicago Climate Action Plan (CCAP) is Chicago's climate change mitigation and adaptation strategy that was adopted in September 2008. The CCAP has an overarching goal of reducing Chicago's greenhouse gas emissions to 80 percent below 1990 levels by 2050, with an interim goal of 25 percent below 1990 levels by 2020. Background A greenhouse gas emissions forecast projected that Chicago’s emissions would increase to 39.3 million metric tons of carbon dioxide equivalent by 2020 under a business-as-usual scenario. One projected global warming impact is an increase in days that have temperatures over one hundred degrees. Under the business-as-usual scenario, the number of these days would increase to thirty-one annually, while under a lower emissions scenario, such as that called for in the Chicago Climate Action Plan, the number of these days would increase to eight annually. Climate change has many impacts, including an economic impact. The Chicago Climate Action Plan seeks to address climate change by decreasing greenhouse gas emissions to mitigate its effects, while preparing for climate change through adaptation actions. Design The Chicago Climate Action Plan consists of five strategies: Energy Efficient Buildings; Clean & Renewable Energy Sources; Improved Transportation Options; Reduced Waste & Industrial Pollution; and Adaptation. The first four strategies are designed to mitigate climate change, while the fifth strategy aims to adapt to climate change. Energy efficient buildings According to a 2000 greenhouse gas emission inventory, building and other energy uses are responsible for 70 percent of Chicago's emissions. The Energy Efficient Buildings strategy accounts for 30 percent of Chicago's total greenhouse gas reductions. Building energy efficiency improvements are projected to have a diverse set of benefits, including savings on energy bills for building owners, job creation in the building retrofit field, and decreased greenhouse gas emissions. 
Clean & renewable energy sources The Clean & Renewable Energy strategy accounts for 34 percent of Chicago's total greenhouse gas reductions. This strategy includes a focus on distributed generation as an efficient and lower-emission alternative to central power plants. Household renewable power is another action that reduces greenhouse gas emissions. Improved transportation options According to a 2000 greenhouse gas emissions inventory, transportation is responsible for 30 percent of Chicago's emissions. The Improved Transportation Options strategy accounts for 23 percent of Chicago's total greenhouse gas reductions. This strategy focuses on the availability and use of alternative modes of transportation to driving as well as reducing the emissions associated with driving. Reduced waste & industrial pollution According to a 2000 greenhouse gas emissions inventory, waste and industrial processes are responsible for nine percent of Chicago's emissions. The Reduced Waste & Industrial Pollution strategy accounts for 13 percent of Chicago's total greenhouse gas reductions. In addition to focusing on waste, this strategy has actions to reduce the emissions from refrigerants and use green infrastructure to capture stormwater. Adaptation The Adaptation strategy does not include a greenhouse gas emission reduction target. Instead, this strategy focuses on preparing for the effects of climate change. This strategy has actions to prepare for extreme heat and the urban heat island effect, extreme precipitation and heavy flooding, and ecosystem changes. In addition, there are actions to engage the public and businesses. Progress In 2010, a Chicago Climate Action Plan Progress Report was released, covering highlights from January 2008 through December 2009. See also Politics of global warming (United States) References Emissions reduction Climate change policy Climate action plans Chicago Environment of Illinois
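The plan's headline targets are straightforward percentage reductions against the 1990 baseline. The sketch below uses a hypothetical baseline figure, since the 1990 inventory value is not stated in the text above:

```python
def emissions_target(baseline_1990_mt, percent_below):
    """Target emissions (Mt CO2e) at a given percentage below the 1990 baseline."""
    return baseline_1990_mt * (100 - percent_below) / 100

# With a hypothetical 1990 baseline of 32.0 Mt CO2e:
print(emissions_target(32.0, 25))  # 24.0 Mt (2020 interim goal: 25% below 1990)
print(emissions_target(32.0, 80))  # 6.4 Mt  (2050 goal: 80% below 1990)
```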
https://en.wikipedia.org/wiki/Dynactin
Dynactin is a 23 subunit protein complex that acts as a co-factor for the microtubule motor cytoplasmic dynein-1. It is built around a short filament of actin-related protein 1 (Arp1). Discovery Dynactin was identified as an activity that allowed purified cytoplasmic dynein to move membrane vesicles along microtubules in vitro. It was shown to be a multiprotein complex and named "dynactin" because of its role in dynein activation. The main features of dynactin were visualized by quick-freeze, deep-etch, rotary shadow electron microscopy. It appears as a short filament, 37 nm in length, which resembles F-actin, plus a thinner, laterally oriented arm. Antibody labelling was used to map the location of the dynactin subunits. Structure Dynactin consists of three major structural domains: (1) the sidearm-shoulder: DCTN1/p150Glued, DCTN2/p50/dynamitin, DCTN3/p24/p22; (2) the Arp1 filament: ACTR1A/Arp1/centractin, actin, CapZ; and (3) the pointed end complex: Actr10/Arp11, DCTN4/p62, DCTN5/p25, and DCTN6/p27. A 4 Å cryo-EM structure of dynactin revealed that its filament contains eight Arp1 molecules, one β-actin and one Arp11. In the pointed end complex, p62/DCTN4 binds to Arp11 and β-actin, and p25 and p27 bind both p62 and Arp11. At the barbed end, the capping protein (CapZαβ) binds the Arp1 filament in the same way that it binds actin, although with more charge complementarity, explaining why it binds dynactin more tightly than actin. The shoulder contains two copies of p150Glued/DCTN1, four copies of p50/DCTN2 and two copies of p24/DCTN3. These proteins form long bundles of alpha helices, which wrap over each other and contact the Arp1 filament. The N-termini of p50/DCTN2 emerge from the shoulder and coat the filament, providing a mechanism for controlling the filament length. The C-termini of the p150Glued/DCTN1 dimer are embedded in the shoulder, whereas the N-terminal 1227 amino acids form the projecting arm.
The arm consists of an N-terminal CAP-Gly domain, which can bind the C-terminal tails of microtubules and the microtubule plus-end binding protein EB1. This is followed by a basic region, also involved in microtubule binding, a folded-back coiled coil (CC1), the intercoiled domain (ICD) and a second coiled-coil domain (CC2). The p150Glued arm can dock against the side of the Arp1 filament and the pointed-end complex. DCTN2 (dynamitin) is also involved in anchoring microtubules to centrosomes and may play a role in synapse formation during brain development. Arp1 has been suggested as the domain through which dynactin binds to membrane vesicles (such as the Golgi or late endosomes), through its association with β-spectrin. The pointed-end complex (PEC) has been shown to be involved in selective cargo binding. PEC subunits p62/DCTN4 and Arp11/Actr10 are essential for dynactin complex integrity and for dynactin/dynein targeting to the nuclear envelope before mitosis. Actr10, along with Drp1 (dynamin-related protein 1), has been documented as vital to the attachment of mitochondria to the dynactin complex. Dynactin p25/DCTN5 and p27/DCTN6 are not essential for dynactin complex integrity, but are required for early and recycling endosome transport during interphase and for regulation of the spindle assembly checkpoint in mitosis.

Interaction with dynein

Dynein and dynactin were reported to interact directly through the binding of dynein intermediate chains to p150Glued. The affinity of this interaction is around 3.5 μM. Dynein and dynactin do not run together in a sucrose gradient, but can be induced to form a tight complex in the presence of the N-terminal 400 amino acids of Bicaudal D2 (BICD2), a cargo adaptor that links dynein and dynactin to Golgi-derived vesicles. In the presence of BICD2, dynactin binds to dynein and activates it to move for long distances along microtubules.
A cryo-EM structure of dynein, dynactin and BICD2 showed that the BICD2 coiled coil runs along the dynactin filament. The tail of dynein also binds to the Arp1 filament, sitting in the equivalent site that myosin uses to bind actin. The contacts between the dynein tail and dynactin all involve BICD, explaining why it is needed to bring them together. The dynein/dynactin/BICD2 (DDB) complex has also been observed, by negative-stain EM, on microtubules. This shows that the cargo (Rab6) binding end of BICD2 extends out through the pointed-end complex, at the opposite end from the dynein motor domains.

Functions

Dynactin is often essential for dynein activity and can be thought of as a "dynein receptor" that modulates the binding of dynein to the cell organelles that are to be transported along microtubules. Dynactin also enhances the processivity of cytoplasmic dynein and kinesin-2 motors. Dynactin is involved in various processes such as chromosome alignment and spindle organization in cell division. Dynactin contributes to mitotic spindle-pole focusing through its binding to nuclear mitotic apparatus protein (NuMA). Dynactin also targets to the kinetochore through binding between DCTN2/dynamitin and zw10, and has a role in mitotic spindle checkpoint inactivation. During prometaphase, dynactin also helps target polo-like kinase 1 (Plk1) to kinetochores through cyclin-dependent kinase 1 (Cdk1)-phosphorylated DCTN6/p27, which is involved in proper microtubule–kinetochore attachment and in the recruitment of the spindle assembly checkpoint protein Mad1. In addition, dynactin has been shown to play an essential role in maintaining nuclear position in Drosophila, zebrafish and various fungi. Dynein and dynactin concentrate on the nuclear envelope during prophase and facilitate nuclear envelope breakdown via the DCTN4/p62 and Arp11 subunits. Dynactin is also required for microtubule anchoring at centrosomes and for centrosome integrity.
Destabilization of the centrosomal pool of dynactin also causes abnormal G1 centriole separation and delayed entry into S phase, suggesting that dynactin contributes to the recruitment of important cell-cycle regulators to centrosomes. In addition to the transport of various organelles in the cytoplasm, dynactin also links kinesin II to organelles.

See also

Motor protein
Dynein
DCTN1
Centractin
https://en.wikipedia.org/wiki/Women%20in%20computing
Women in computing were among the first programmers in the early 20th century, and contributed substantially to the industry. As technology and practices evolved, the role of women as programmers changed, and the recorded history of the field has downplayed their achievements. Since the 18th century, women have developed scientific computations, including Nicole-Reine Lepaute's prediction of Halley's Comet and Maria Mitchell's computation of the motion of Venus. The first algorithm intended to be executed by a computer was designed by Ada Lovelace, a pioneer in the field. Grace Hopper was the first person to design a compiler for a programming language. Throughout the 19th and early 20th century, and up to World War II, programming was predominantly done by women; significant examples include the Harvard Computers, codebreaking at Bletchley Park and engineering at NASA. After the 1960s, the computing work that had been dominated by women evolved into modern software, and the importance of women decreased. The gender disparity and the lack of women in computing from the late 20th century onward have been examined, but no firm explanations have been established. Nevertheless, many women continued to make significant and important contributions to the IT industry, and attempts were made to redress the gender disparity in the industry. In the 21st century, women held leadership roles in multiple tech companies, such as Meg Cushing Whitman, president and chief executive officer of Hewlett Packard Enterprise, and Marissa Mayer, president and CEO of Yahoo! and a key spokesperson at Google.

History

1700s

Nicole-Reine Étable de la Brière Lepaute was one of a team of human computers who worked with Alexis-Claude Clairaut and Joseph-Jérôme Le Français de Lalande to predict the date of the return of Halley's Comet. They began work on the calculations in 1757, working throughout the day and sometimes during mealtimes.
Their methods were followed by successive human computers: they divided large calculations into "independent pieces, assembled the results from each piece into a final product" and then checked for errors. Lepaute continued to work in computing for the rest of her life, working for the Connaissance des Temps and publishing predictions of solar eclipses.

1800s

One of the first computers for the American Nautical Almanac was Maria Mitchell. Her work on the assignment was to compute the motion of the planet Venus. The Almanac never became a reality, but Mitchell became the first astronomy professor at Vassar.

Ada Lovelace was the first person to publish an algorithm intended to be executed by the first modern computer, the Analytical Engine designed by Charles Babbage. As a result, she is often regarded as the first computer programmer. Lovelace was introduced to Babbage's difference engine when she was 17. In 1840, she wrote to Babbage and asked if she could become involved with his first machine. By this time, Babbage had moved on to his idea for the Analytical Engine. A paper describing the Analytical Engine, Notions sur la machine analytique, published by L. F. Menabrea, came to the attention of Lovelace, who not only translated it into English but corrected mistakes made by Menabrea. Babbage suggested that she expand the translation of the paper with her own ideas, which, signed only with her initials, AAL, "synthesized the vast scope of Babbage's vision." Lovelace imagined the kind of impact the Analytical Engine might have on society. She drew up explanations of how the engine could handle inputs, outputs, processing and data storage. She also created several proofs to show how the engine would handle calculations of Bernoulli numbers on its own. The proofs are considered the first examples of a computer program. Lovelace downplayed her role in her work during her life, for example by signing her contributions with AAL so as not to be "accused of bragging."
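The Bernoulli numbers Lovelace's Note G program computed can be generated today in a few lines. The following is an illustrative Python sketch only, not a transcription of her method (her table used the engine's own operation sequence); it uses the classical recurrence that the binomial-weighted sum of B_0..B_m vanishes for m ≥ 1:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return the Bernoulli numbers B_0..B_n (B_1 = -1/2 convention),
    using the recurrence sum(C(m+1, j) * B_j for j in 0..m) == 0."""
    b = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, j) * b[j] for j in range(m))
        b.append(-s / (m + 1))  # solve the recurrence for B_m
    return b

# B_0..B_6 = 1, -1/2, 1/6, 0, -1/30, 0, 1/42
print(bernoulli(6))
```

Exact rational arithmetic (`Fraction`) avoids the rounding that floating point would introduce, which matters because the numbers alternate in sign and grow quickly.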
After the Civil War in the United States, more women were hired as human computers. Many were war widows looking for ways to support themselves. Others were hired when the government opened positions to women because of a shortage of men to fill the roles. Anna Winlock asked to become a computer for the Harvard Observatory in 1875 and was hired to work for 25 cents an hour. By 1880, Edward Charles Pickering had hired several women to work for him at Harvard, because he knew that women could do the job as well as men and that he could ask them to volunteer or work for less pay. The women, described as "Pickering's harem" and also as the Harvard Computers, performed clerical work that the male employees and scholars considered tedious, at a fraction of the cost of hiring a man. The women working for Pickering cataloged around ten thousand stars, discovered the Horsehead Nebula and developed the system used to classify stars. One of the computers, Annie Jump Cannon, could classify stars at a rate of three stars per minute. The work for Pickering became so popular that women volunteered to work for free even when the computers were being paid. Even though they performed an important role, the Harvard Computers were paid less than factory workers.

By the 1890s, women computers were college graduates looking for jobs where they could use their training in a useful way. Florence Tebb Weldon was part of this group, providing computations relating to biology and evidence for evolution in work with her husband, W. F. Raphael Weldon. Florence Weldon's calculations demonstrated that statistics could be used to support Darwin's theory of evolution. Another human computer involved in biology was Alice Lee, who worked with Karl Pearson. Pearson also hired two sisters, Beatrice and Frances Cave-Browne-Cave, to work as part-time computers at his Biometrics Lab.
1910s

During World War I, Karl Pearson and his Biometrics Lab helped produce ballistics calculations for the British Ministry of Munitions. Beatrice Cave-Browne-Cave helped calculate trajectories for bomb shells, and in 1916 she left Pearson's employ to work full-time for the Ministry. In the United States, women computers were hired to calculate ballistics in 1918, working in a building on the Washington Mall. One of the women, Elizabeth Webb Wilson, worked as the chief computer. After the war, women who had worked as ballistics computers for the U.S. government had trouble finding jobs in computing, and Wilson eventually taught high school math.

1920s

In the early 1920s, at Iowa State College, professor George Snedecor worked to improve the school's science and engineering departments, experimenting with new punch-card machines and calculators. Snedecor also worked with human calculators, most of them women, including Mary Clem. Clem coined the term "zero check" to help identify errors in calculations. The computing lab, run by Clem, became one of the most powerful computing facilities of the time.

Women computers also worked at the American Telephone and Telegraph Company. These human computers worked with electrical engineers to help figure out how to boost signals with vacuum tube amplifiers. One of the computers, Clara Froelich, was eventually moved, along with the other computers, to their own division, where they worked with a mathematician, Thornton Fry, to create new computational methods. Froelich studied IBM tabulating equipment and desk calculating machines to see if she could adapt the machine method to calculations. Edith Clarke was the first woman to earn a degree in electrical engineering and the first professionally employed woman electrical engineer in the United States. She was hired by General Electric as a full engineer in 1923.
Clarke also filed a patent in 1921 for a graphical calculator to be used in solving problems in power lines; it was granted in 1925.

1930s

The National Advisory Committee for Aeronautics (NACA), which later became NASA, hired a group of five women in 1935 to work as a computer pool. The women worked on the data coming from wind tunnel and flight tests.

1940s

"Tedious" computing and calculating was seen as "women's work" through the 1940s, resulting in the term "kilogirl", invented by a member of the Applied Mathematics Panel in the early 1940s: a kilogirl of energy was "equivalent to roughly a thousand hours of computing labor." While women's contributions to the United States war effort during World War II were championed in the media, their roles and the work they did were minimized, including the complexity, skill and knowledge needed to work on computers or to work as human computers. During the war, women did most of the ballistics computing, seen by male engineers as being below their level of expertise. Black women computers worked as hard as (and often harder than) their white counterparts, but in segregated situations. By 1943, almost all people employed as computers were women; one report said "programming requires lots of patience, persistence and a capacity for detail and those are traits that many girls have".

NACA expanded its pool of women human computers in the 1940s, recognizing in 1942 that "the engineers admit themselves that the girl computers do the work more rapidly and accurately than they could." In 1943 two groups, segregated by race, worked on the east and west sides of Langley Air Force Base. The black women were the West Area Computers. Unlike their white counterparts, the black women were asked by NACA to re-do college courses they had already passed, and many never received promotions. Women were also working on ballistic missile calculations.
In 1948, women such as Barbara Paulson were working on the WAC Corporal, determining the trajectories the missiles would take after launch.

Women also worked in cryptography and, after some initial resistance, many operated and worked on the Bombe machines. Joyce Aylard operated the Bombe machine, testing different methods to break the Enigma code. Joan Clarke was a cryptographer who worked with her friend Alan Turing on the Enigma machine at Bletchley Park. When she was promoted to a higher salary grade, there were no positions in the civil service for a "senior female cryptanalyst," and she was listed as a linguist instead. Although Clarke developed a method of speeding up the decryption of double-encrypted messages, unlike many of the men her technique was not named after her. Other cryptographers at Bletchley included Margaret Rock, Mavis Lever (later Batey), Ruth Briggs and Kerry Howard. In 1941, Batey's work enabled the Allies to break the Italians' naval code before the Battle of Cape Matapan. In the United States, several faster Bombe machines were created, and women such as Louise Pearsall were recruited from the WAVES to work on code breaking and to operate the American Bombe machines.

Hedy Lamarr and her co-inventor, George Antheil, worked on a frequency-hopping method to help the Navy control torpedoes remotely. The Navy passed on the idea, but Lamarr and Antheil received a patent for the work on August 11, 1942. The technique was later revived, first in the 1950s at the Sylvania Electronic Systems Division, and is used in everyday technology such as Bluetooth and Wi-Fi.

The programmers of the ENIAC computer in 1944 were six female mathematicians: Marlyn Meltzer, Betty Holberton, Kathleen Antonelli, Ruth Teitelbaum, Jean Bartik and Frances Spence, who were human computers at the Moore School's computation lab. Adele Goldstine was their teacher and trainer, and they were known as the "ENIAC girls."
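The core of Lamarr and Antheil's frequency-hopping idea, mentioned above, is that a sender and receiver sharing a secret can hop channels in lockstep without ever transmitting the schedule. A toy Python sketch (not the patent's piano-roll mechanism; the 88-channel count simply echoes the patent's 88 frequencies, and the key string is made up):

```python
import random

def hop_sequence(shared_key, channels, hops):
    """Derive a channel-hopping schedule from a shared secret key.
    Both ends seed identical PRNGs, so they visit the same channels
    in the same order without the schedule ever going over the air."""
    rng = random.Random(shared_key)
    return [rng.randrange(channels) for _ in range(hops)]

# Sender and receiver derive matching schedules from the shared key.
tx = hop_sequence("shared-secret", 88, 10)
rx = hop_sequence("shared-secret", 88, 10)
assert tx == rx  # the two ends stay synchronized
```

An eavesdropper without the key sees only brief bursts scattered across channels, which is why the same principle still underpins spread-spectrum schemes in Bluetooth.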
The women who worked on ENIAC were warned that they would not be promoted into professional ratings, which were reserved for men. Designing the hardware was "men's work" and programming the software was "women's work." Sometimes the women were given blueprints and wiring diagrams to figure out how the machine worked and how to program it. They learned how the ENIAC worked by repairing it, sometimes crawling through the computer, and by fixing "bugs" in the machinery. Even though the programmers were supposed to be doing only the "soft" work of programming, in reality they did that and also fully understood and worked with the hardware of the ENIAC. When the ENIAC was revealed in 1946, Goldstine and the other women prepared the machine and the demonstration programs it ran for the public. None of their work in preparing the demonstrations was mentioned in the official accounts of the public events. After the demonstration, the university hosted an expensive celebratory dinner to which none of the ENIAC six were invited.

In Canada, Beatrice Worsley started working at the National Research Council of Canada in 1947, where she was an aerodynamics research officer. A year later, she started working in the new Computation Centre at the University of Toronto. She built a differential analyzer in 1948 and also worked with IBM machines to do calculations for Atomic Energy of Canada Limited. She went to study the EDSAC at the University of Cambridge in 1949, and wrote the program that ran when EDSAC performed its first calculations on May 6, 1949.

Grace Hopper was the first person to create a compiler for a programming language and one of the first programmers of the Harvard Mark I computer, an electro-mechanical computer based on the Analytical Engine. Hopper's work with computers started in 1943, when she joined the Bureau of Ordnance's Computation Project at Harvard, where she programmed the Harvard Mark I.
Hopper not only programmed the computer but created a 500-page comprehensive manual for it. Even though the manual was widely cited and published, Hopper was not specifically credited in it. Hopper is often credited with coining the terms "bug" and "debugging" after a moth caused the Mark II to malfunction; while a moth was indeed found and the process of removing it called "debugging," the terms were already part of the language of programmers.

1950s

Grace Hopper continued to contribute to computer science through the 1950s. She brought the idea of using compilers from her time at Harvard to UNIVAC, which she joined in 1949. Other women hired to program the UNIVAC included Adele Mildred Koss, Frances E. Holberton, Jean Bartik, Frances Morello and Lillian Jay. To program the UNIVAC, Hopper and her team used the FLOW-MATIC programming language, which she developed. Holberton wrote the C-10 instruction code, which allowed keyboard input into a general-purpose computer. Holberton also developed the Sort-Merge Generator in 1951, which was used on the UNIVAC I; it marked the first time a computer "used a program to write a program." Holberton suggested that computer housing should be beige or oatmeal in color, which became a long-lasting trend. Koss worked with Hopper on various algorithms and on a program that was a precursor to a report generator.

Klara Dan von Neumann was one of the main programmers of the MANIAC, a more advanced version of ENIAC. Her work helped the fields of meteorology and weather prediction.

The NACA, and subsequently NASA, recruited women computers following World War II. By the 1950s, a team was performing mathematical calculations at the Lewis Research Center in Cleveland, Ohio, including Annie Easley, Katherine Johnson and Kathryn Peddrew. At the National Bureau of Standards, Margaret R. Fox was hired in 1951 to work as part of the technical staff of the Electronic Computer Laboratory.
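The heart of a sort-merge pass like those produced by Holberton's Sort-Merge Generator, described above, is merging two already-sorted runs into one. A minimal modern sketch in Python (illustrative only, not the UNIVAC tape logic):

```python
def merge_runs(a, b):
    """Merge two sorted runs into one sorted run, the core step of a
    tape sort-merge: repeatedly take the smaller head element."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])  # one run is exhausted;
    out.extend(b[j:])  # copy the remainder of the other
    return out

merge_runs([3, 7, 9], [1, 4, 8, 10])  # [1, 3, 4, 7, 8, 9, 10]
```

On tape drives this access pattern was essential: each run is read strictly front to back, so sorting large files reduced to repeated sequential merges across tapes.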
In 1956, Gladys West was hired by the U.S. Naval Weapons Laboratory as a human computer. West was involved in calculations that led to the development of GPS. At Convair Aircraft Corporation, Joyce Currie Little was one of the original programmers analyzing data received from the wind tunnels. She used punch cards on an IBM 650, which was located in a different building from the wind tunnel. To save time in the physical delivery of the punch cards, she and her colleague Maggie DeCaro put on roller skates to get to and from the building faster.

In Israel, Thelma Estrin worked on the design and development of WEIZAC, one of the world's first large-scale programmable electronic computers. In the Soviet Union, a team of women helped design and build the first digital computer in 1951. In the UK, Kathleen Booth worked with her husband, Andrew Booth, on several computers at Birkbeck College: Kathleen was the programmer and Andrew built the machines, and during this time Kathleen developed assembly language. Mary Coombs of England was employed in 1952 as the first female programmer to work on the LEO computers, and as such she is recognized as the first female commercial programmer. The Ukrainian Kateryna Yushchenko created the Address programming language for the computer "Kyiv" in 1955 and invented indirect addressing of the highest rank, called pointers.

1960s

Milly Koss, who had worked at UNIVAC with Hopper, started work at Control Data Corporation (CDC) in 1965, where she developed algorithms for graphics, including graphic storage and retrieval.

Mary K. Hawes of Burroughs Corporation set up a meeting in 1959 to discuss the creation of a computer language that would be shared between businesses. Six people, including Hopper, attended to discuss the philosophy of creating a common business language (CBL). Hopper became involved in developing COBOL (Common Business Oriented Language), for which she innovated new symbolic ways to write computer code.
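The "indirect addressing of the highest rank" credited above to Yushchenko, an ancestor of pointers, can be illustrated with a toy flat-memory model. This is purely illustrative Python, not Address-language syntax, and the memory contents below are made up:

```python
def deref(memory, address, rank):
    """Follow `rank` levels of indirection through a flat memory:
    rank 0 returns the cell itself, rank 1 treats the cell as an
    address, rank 2 as an address of an address, and so on."""
    for _ in range(rank):
        address = memory[address]
    return memory[address]

# Hypothetical memory image: cell 0 holds address 2, cell 2 holds
# address 4, cell 4 holds the value 99.
memory = [2, 0, 4, 0, 99]
assert deref(memory, 0, 0) == 2    # direct access
assert deref(memory, 0, 1) == 4    # one level of indirection
assert deref(memory, 0, 2) == 99   # second-rank indirection
```

Allowing an arbitrary rank of indirection is what lets a program manipulate addresses of addresses, the idea that later languages exposed as pointers.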
Hopper developed a programming language that was easier to read and "self-documenting." After COBOL was submitted to the CODASYL Executive Committee, Betty Holberton did further editing on the language before it was submitted to the Government Printing Office in 1960. IBM was slow to adopt COBOL, which hindered its progress, but the language was accepted as a standard in 1962, after Hopper had demonstrated the compiler working on both UNIVAC and RCA computers. The development of COBOL led to the generation of compilers and generators, most of which were created or refined by women such as Koss, Nora Moser, Deborah Davidson, Sue Knapp, Gertrude Tierney and Jean E. Sammet.

Sammet, who worked at IBM starting in 1961, was responsible for developing the programming language FORMAC. She published a book, Programming Languages: History and Fundamentals (1969), which was considered the "standard work on programming languages," according to Denise Gürer, and "one of the most used books in the field," according to The Times in 1972.

Between 1961 and 1963, Margaret Hamilton began to study software reliability while she was working on the US SAGE air defense system. In 1965, she was responsible for the onboard flight software on the Apollo mission computers. After Hamilton had completed the program, the code was sent to Raytheon, where "expert seamstresses" called the "Little Old Ladies" hardwired the code by threading copper wire through magnetic rings. Each system could store more than 12,000 words represented by the copper wires.

In 1964, the British Prime Minister Harold Wilson announced a "white-hot" revolution in technology that would give greater prominence to IT work. As women still held most computing and programming positions at this time, it was hoped that this would give them more positive career prospects.

In 1965, Sister Mary Kenneth Keller became the first American woman to earn a doctorate in computer science.
Keller helped develop BASIC while working as a graduate student at Dartmouth, where the university "broke the 'men only' rule" so she could use its computer science center.

In 1966, Frances "Fran" Elizabeth Allen, who was developing programming-language compilers at IBM Research, published a paper entitled "Program Optimization," which laid the conceptual basis for the systematic analysis and transformation of computer programs. The paper introduced the use of graph-theoretic structures to encode program content in order to automatically and efficiently derive relationships and identify opportunities for optimization.

Christine Darden began working for NASA's computing pool in 1967, having graduated from the Hampton Institute. Women were also involved in the development of Whirlwind, including Judy Clapp, who created a prototype air defense system for Whirlwind that used radar input to track planes in the air and could direct aircraft courses.

In 1969, Elizabeth "Jake" Feinler, who was working for Stanford, made the first Resource Handbook for ARPANET. This led to the creation of the ARPANET directory, which was built by Feinler with a staff of mostly women; without the directory, "it was nearly impossible to navigate the ARPANET."

By the end of the decade, the general demographics of programmers had shifted away from being predominantly women, as the field had been since before the 1940s. Though women accounted for around 30 to 50 percent of computer programmers during the 1960s, few were promoted to leadership roles, and women were paid significantly less than their male counterparts. Cosmopolitan ran an article in its April 1967 issue about women in programming called "The Computer Girls." Yet even while magazines such as Cosmopolitan saw a bright future for women in computers and computer programming in the 1960s, the reality was that women were still being marginalized.
1970s

In the early 1970s, Pam Hardt-English led a group to create a computer network named Resource One, part of a group called Project One. Her idea of connecting Bay Area bookstores, libraries and Project One was an early prototype of the Internet. To work on the project, Hardt-English obtained an expensive SDS-940 computer as a donation from TransAmerica Leasing Corporation in April 1972. The group created an electronic library and housed it in a record store called Leopold's in Berkeley. This became the Community Memory database, maintained by hacker Jude Milhon. After 1975, the SDS-940 computer was repurposed by Sherry Reson, Mya Shone, Chris Macie and Mary Janowitz to create a social services database and a Social Services Referral Directory. Hard copies of the directory, printed out as a subscription service, were kept at city buildings and libraries; the database was maintained and in use until 2009.

In the early 1970s, Elizabeth "Jake" Feinler, who had worked on the Resource Directory for ARPANET, and her team created the first WHOIS directory. Feinler set up a server at the Network Information Center (NIC) at Stanford which worked as a directory that could retrieve relevant information about a person or entity. She and her team also worked on the creation of domains, with Feinler suggesting that domains be divided into categories based on where the computers were kept: for example, military computers would have the domain .mil, and computers at educational institutions would have .edu. Feinler worked for the NIC until 1989.

Jean E. Sammet served as the first woman president of the Association for Computing Machinery (ACM), holding the position between 1974 and 1976.

Adele Goldberg was one of seven programmers who developed Smalltalk in the 1970s, and wrote the majority of the language's documentation.
Smalltalk was one of the first object-oriented programming languages and a basis of the modern graphical user interface, which has its roots in Douglas Engelbart's 1968 "Mother of All Demos". Smalltalk was used by Apple to launch the Apple Lisa in 1983, the first personal computer with a GUI, and a year later the Macintosh. Windows 1.0, based on the same principles, was launched a few months later, in 1985.

In the late 1970s, women such as Paulson and Sue Finley wrote programs for the Voyager mission. Voyager continues to carry their code in its memory banks as it leaves the solar system. In 1979, Ruzena Bajcsy founded the General Robotics, Automation, Sensing and Perception (GRASP) Lab at the University of Pennsylvania.

In the mid-1970s, Joan Margaret Winters began working at IBM as part of a "human factors project" called SHARE. In 1978, Winters was the deputy manager of the project, and she went on to lead it between 1983 and 1987. The SHARE group researched how software should be designed to take human factors into account.

Erna Schneider Hoover developed a computerized switching system for telephone calls that would replace switchboards. Her patent for the system, issued in 1971, was one of the first software patents ever granted.

1980s

Gwen Bell developed the Computer Museum in 1980. The museum, which collected computer artifacts, became a non-profit organization in 1982, and in 1984 Bell moved it to downtown Boston. Adele Goldberg served as president of the ACM between 1984 and 1986.

In 1981, Deborah Washington Brown became the first African American woman to earn a Ph.D. in computer science, from Harvard University (at the time the degree was part of the applied mathematics program). Her thesis was titled "The solution of difference equations describing array manipulation in program loops". Shortly after, in 1982, Marsha R. Williams became the second African American woman to earn a Ph.D. in computer science.
Sometimes known as the "Betsy Ross of the personal computer," according to the New York Times, Susan Kare worked with Steve Jobs to design the original icons for the Macintosh. Kare designed the moving watch, paintbrush and trash-can elements that made using a Mac user-friendly. Kare worked for Apple until the mid-1980s, going on to design icons for Windows 3.0. Other types of computer graphics were being developed by Nadia Magnenat Thalmann in Canada. Thalmann started working on computer animation to develop "realistic virtual actors," first at the University of Montréal in 1980 and later, from 1988, at the École Polytechnique Fédérale de Lausanne.

Computer and video games became popular in the 1980s, but many were primarily action-oriented and not designed from a woman's point of view. Stereotypical characters such as the damsel in distress featured prominently, and consequently the games were not inviting to women. Dona Bailey designed Centipede, in which the player shoots insects, as a reaction to such games, later saying "It didn't seem bad to shoot a bug". Carol Shaw, considered to be the first modern female games designer, released a 3D version of tic-tac-toe for the Atari 2600 in 1980. Roberta Williams and her husband Ken founded Sierra On-Line and pioneered the graphic adventure game format with Mystery House and the King's Quest series. The games had a friendly graphical user interface and introduced humor and puzzles. Cited as an important game designer, Williams saw her influence spread from Sierra to other companies such as LucasArts and beyond. Brenda Laurel ported games from arcade versions to the Atari 8-bit computers in the late 1970s and early 1980s. She then went to work for Activision and later wrote the manual for Maniac Mansion.

1984 was the year of the Women into Science and Engineering (WISE) campaign.
A 1984 report by Ebury Publishing found that in a typical family, only 5% of mothers and 19% of daughters were using a computer at home, compared to 25% of fathers and 51% of sons. To counteract this, the company launched a series of software titles aimed at women and publicized in Good Housekeeping. Anita Borg, who had noticed that women were under-represented in computer science, founded an email support group, Systers, in 1987.

As Ethernet became the standard for networking computers locally, Radia Perlman, who worked at Digital Equipment Corporation (DEC), was asked to "fix" the limitations that Ethernet imposed on large network traffic. In 1985, Perlman came up with a way to route information packets from one computer to another in an "infinitely scalable" way that allowed large networks like the Internet to function. Her solution took less than a few days to design and write up. The algorithm she created is the Spanning Tree Protocol.

In 1986, Lixia Zhang was the only woman and only graduate student to participate in the early Internet Engineering Task Force (IETF) meetings. Zhang was involved in early Internet development. In Europe, a project was developed in the mid-1980s to create an academic network using the Open Systems Interconnection (OSI) standards. Borka Jerman Blažič, a Yugoslavian computer scientist, was invited to work on the project. She was involved in establishing the Yugoslav Research and Academic Network (YUNAC) in 1989 and registered the .yu domain for the country.

In the field of human–computer interaction (HCI), the French computer scientist Joëlle Coutaz developed the presentation-abstraction-control (PAC) model in 1987. She founded the User Interface group at the Laboratoire de Génie Informatique of IMAG, where they worked on different problems relating to user interfaces and other software tools.
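The loop-elimination idea behind Perlman's Spanning Tree Protocol, described above, can be illustrated with a centralized toy model: elect the bridge with the lowest ID as root, then keep only each bridge's shortest path toward it, so redundant links are left logically blocked. The real protocol reaches the same result through distributed BPDU exchanges between bridges; the bridge IDs and topology below are made up:

```python
from collections import deque

def spanning_tree(links):
    """Toy sketch of spanning-tree logic: elect the lowest bridge ID
    as root, then BFS outward so each bridge keeps one parent link.
    Returns (root, parent) where parent maps bridge -> upstream bridge;
    links absent from the tree would be blocked (no forwarding)."""
    bridges = sorted({b for link in links for b in link})
    root = bridges[0]                      # lowest bridge ID wins
    neighbors = {b: set() for b in bridges}
    for u, v in links:
        neighbors[u].add(v); neighbors[v].add(u)
    parent, seen, queue = {}, {root}, deque([root])
    while queue:                           # BFS outward from the root
        u = queue.popleft()
        for v in sorted(neighbors[u]):
            if v not in seen:
                seen.add(v); parent[v] = u; queue.append(v)
    return root, parent                    # active (forwarding) links

# A ring of three bridges: the tree keeps two links, blocking the third.
root, tree = spanning_tree([(1, 2), (2, 3), (1, 3)])
```

With n bridges the tree keeps exactly n − 1 links, which is what makes the resulting network loop-free while still connecting every bridge.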
In 1988, Stacy Horn, who had been introduced to bulletin board systems (BBS) through The WELL, decided to create her own online community in New York, which she called the East Coast Hang Out (ECHO). Horn invested her own money and pitched the idea for ECHO to others after bankers refused to hear her business plan. Horn built her BBS using UNIX, which she and her friends taught to one another. Eventually ECHO moved to an office in Tribeca in the early 1990s and started getting press attention. ECHO's users could post about topics that interested them, chat with one another, and were provided with email accounts. Around half of ECHO's users were women. ECHO was still online as of 2018. 1990s By the 1990s, computing was dominated by men. The proportion of female computer science graduates peaked in 1984 at around 37 per cent, and then steadily declined. Although the end of the 20th century saw an increase in women scientists and engineers, this did not hold true for computing, which stagnated. Despite this, women were heavily involved in hypertext and hypermedia projects in the late 1980s and early 1990s. A team of women at Brown University, including Nicole Yankelovich and Karen Catlin, developed Intermedia and invented the anchor link. Apple partially funded their project and incorporated their concepts into Apple operating systems. Sun Microsystems' Sun Link Service was developed by Amy Pearl. Janet Walker developed the first system to use bookmarks when she created the Symbolics Document Examiner. In 1989, Wendy Hall created a hypertext project called Microcosm, which was based on digitized multimedia material found in the Mountbatten archive. Cathy Marshall worked on the NoteCards system at Xerox PARC. NoteCards went on to influence Apple's HyperCard. As the Internet gave rise to the World Wide Web, developers like Hall adapted their programs to include Web viewers. Her Microcosm was especially adaptable to new technologies, including animation and 3-D models. 
In 1994, Hall helped organize the first conference for the Web. In 1990, Sarah Allen co-founded CoSA, the commercial software company behind After Effects. In 1995, she started working on the Shockwave team at Macromedia, where she was the lead developer of the Shockwave Multiuser Server, the Flash Media Server and Flash video. Following the increased popularity of the Internet in the 1990s, online spaces were set up to cater for women, including the online community Women's WIRE and the technical and support forum LinuxChix. Women's WIRE, launched by Nancy Rhine and Ellen Pack in October 1993, was the first Internet company to specifically target this demographic. A conference for women in computer-related jobs, the Grace Hopper Celebration of Women in Computing, was first launched in 1994 by Anita Borg. Game designer Brenda Laurel started working at Interval Research in 1992, and began to think about the differences in the way girls and boys experienced playing video games. After interviewing around 1,000 children and 500 adults, she determined that games were not designed with girls' interests in mind. The girls she spoke with wanted more games with open worlds and characters they could interact with. Her research led Interval Research to spin off Laurel's team into its own company, Purple Moon, in 1996. Also in 1996, Mattel's game Barbie Fashion Designer became the first best-selling game for girls. Purple Moon's first two games, based on a character called Rockett, made it into the 100 best-selling games in the years they were released. In 1999, Mattel bought out Purple Moon. Jaime Levy created one of the first e-zines in the early 1990s, starting with CyberRag, which included articles, games and animations loaded onto diskettes that anyone with a Mac could access. She later renamed the zine Electronic Hollywood. Billy Idol commissioned Levy to create a disk for his album Cyberpunk. 
She was hired to be the creative director of the online magazine Word in 1995. The cyberfeminist collective VNS Matrix, made up of Josephine Starrs, Juliane Pierce, Francesca da Rimini and Virginia Barratt, created art in the early 1990s linking computer technology and women's bodies. In 1997, there was a gathering of cyberfeminists in Kassel, called the First Cyberfeminist International. In China, Hu Qiheng led the team that installed the country's first TCP/IP connection, connecting China to the Internet on April 20, 1994. In 1995, Rosemary Candlin went to write software for CERN in Geneva. In the early 1990s, Nancy Hafkin played an important role with the Association for Progressive Communications (APC) in enabling email connections in ten African countries. Starting in 1999, Anne-Marie Eklund Löwinder began to work with Domain Name System Security Extensions (DNSSEC) in Sweden. She later made sure that .se was the world's first top-level domain to be signed with DNSSEC. In the late 1990s, research by Jane Margolis led Carnegie Mellon to try to correct the male-female imbalance in computer science. From the late 1980s until the mid-1990s, Misha Mahowald developed several key foundations of the field of neuromorphic engineering, while working at the California Institute of Technology and later at ETH Zurich. More than 20 years after her untimely death, the Misha Mahowald Prize was named after her to recognize excellence in the field she helped to create. 2000s In the 21st century, several attempts have been made to reduce the gender disparity in IT and get more women involved in computing again. A 2001 survey found that while both sexes use computers and the Internet in equal measure, women were still five times less likely to choose IT as a career or study the subject beyond standard secondary education. 
Journalist Emily Chang said a key problem has been personality tests in job interviews and the belief that good programmers are introverts, which tends to select for the stereotype of an asocial white male nerd. In 2004, the National Center for Women & Information Technology was established by Lucy Sanders to address the gender gap. Carnegie Mellon University has made a concerted attempt to increase gender diversity in the computer science field by selecting students on broad criteria, including leadership ability, a sense of "giving back to the community" and high attainment in maths and science, instead of traditional computer programming expertise. As well as increasing the intake of women at CMU, the programme produced better students, because the increased diversity made for stronger teams. 2010s Despite the pioneering work of some designers, video games are still considered biased towards men. A 2013 survey by the International Game Developers Association revealed that only 22% of game designers were women, although this is substantially higher than figures in previous decades. Working to bring inclusion to the world of open source project development, Coraline Ada Ehmke drafted the Contributor Covenant in 2014. By 2018, over 40,000 software projects had started using the Contributor Covenant, including TensorFlow, Vue and Linux. In 2014, Danielle George, professor at the School of Electrical and Electronic Engineering, University of Manchester, spoke at the Royal Institution Christmas Lectures on the subject of "how to hack your home", describing simple experiments involving computer hardware and demonstrating a giant game of Tetris by remote controlling the lights in an office building. In 2017, Michelle Simmons founded the first quantum computing company in Australia. The team, which made "great strides" in 2018, planned to develop a 10-qubit prototype silicon quantum integrated circuit by 2022. 
In the same year, Doina Precup became the head of DeepMind Montreal, working on artificial intelligence. Xaviera Kowo, a programmer from Cameroon, won the Margaret Award in 2022 for programming a robot that processes waste. 2020s In 2023, EU-Startups, an online publication focused on startups in Europe, published a list of the 100 most influential women in the European startup and venture capital space. The list reflects an era of innovation and technological change: across Europe and around the world, women in the startup and VC space are driving change and encouraging a new generation of women towards entrepreneurship and innovation. Gender gap in computing While computing began as a field heavily dominated by women, this changed in western countries shortly after World War II. In the US, recognizing that software development was a significant expense, companies wanted to hire an "ideal programmer". The psychologists William Cannon and Dallis Perry were hired to develop an aptitude test for programmers, and from an industry that was more than 50% women they selected 1,400 people, 1,200 of whom were male. Their paper was highly influential and claimed to have "trained the industry" in hiring programmers, with a heavy focus on introverts and men. In Britain, following the war, women programmers were selected for redundancy and forced retirement, leading to the country losing its position as a computer science leader by 1974. Popular theories about the lack of women in computer science tend to discount these historical and social circumstances. In 1992, John Gray's Men Are from Mars, Women Are from Venus theorized that men and women tend to differ in ways of thinking, leading them to approach technology and computing in different ways. 
A significant issue is that women find themselves working in an environment that is largely unpleasant, so they decline to continue in those careers. A further issue is that if a class of computer scientists contains few women, those few can be singled out, leading to isolation and feelings of non-belonging, which can culminate in leaving the area. The gender disparity in IT is not global. The ratio of female to male computer scientists is significantly higher in India compared to the West, and in 2015, over half of internet entrepreneurs in China were women. In Europe, Bulgaria and Romania have the highest rates of women going into computer programming. In government universities in Saudi Arabia in 2014, Arab women made up 59% of students enrolled in computer science. It has been suggested there is a greater gap in countries where people of both sexes are treated more equally, contradicting any theories that society in general is to blame for any disparity. However, the ratio of African American female computer scientists in the US is significantly lower than the global average. In IT-based organisations, the ratio of men to women can vary between roles; for example, while most software developers at InfoWatch are male, half of usability designers and 80% of project managers are female. In 1991, Massachusetts Institute of Technology undergraduate Ellen Spertus wrote an essay "Why Are There So Few Women in Computer Science?", examining inherent sexism in IT, which was responsible for a lack of women in computing. She subsequently taught computer science at Mills College, Oakland in order to increase interest in IT for women. A key problem is a lack of female role models in the IT industry, alongside computer programmers in fiction and the media generally being male. 
The University of Southampton's Wendy Hall has said the attractiveness of computers to women decreased significantly in the 1980s, when they "were sold as toys for boys", and believes the cultural stigma has remained ever since, and may even be getting worse. Kathleen Lehman, project manager of the BRAID Initiative at UCLA, has said that one problem is that women typically aim for perfection and feel disillusioned when code does not compile, whereas men may simply treat it as a learning experience. A report in the Daily Telegraph suggested that women generally prefer people-facing jobs, which many computing and IT positions do not offer, while men prefer jobs geared towards objects and tasks. One issue is that the history of computing has focused on the hardware, which was a male-dominated field, despite software being written predominantly by women in the early to mid 20th century. In 2013, a National Public Radio report said 20% of computer programmers in the US were female. There is no general consensus on any single key reason why there are fewer women in computing. In 2017, an engineer was fired from Google after claiming there was a biological reason for the lack of female computer scientists. Dame Stephanie Shirley, using the name "Steve" Shirley, addressed some of the problems facing women in computing in the UK by setting up the software company Freelance Programmers (later F.I, then Xansa, now Sopra Steria), offering women the chance to work from home and to work part-time. Awards The Association for Computing Machinery Turing Award, sometimes referred to as the "Nobel Prize" of computing, was named in honor of Alan Turing. This award was won by three women between 1966 and 2015. 
2006 – Frances "Fran" Elizabeth Allen 2008 – Barbara Liskov 2012 – Shafi Goldwasser The British Computer Society Information Retrieval Specialist Group (BCS IRSG), in conjunction with the British Computer Society, created an award in 2008 to commemorate the achievements of Karen Spärck Jones, a Professor Emerita of Computers and Information at the University of Cambridge and one of the most remarkable women in computer science. The KSJ award has been won by four women between 2009 and 2017: 2009 – Mirella Lapata 2012 – Diane Kelly 2016 – Jaime Teevan Organizations Several important groups have been established to encourage women in the IT industry. The Association for Women in Computing was one of the first and is dedicated to promoting the advancement of women in computing professions. The CRA-W: Committee on the Status of Women in Computing Research, established in 1991, focused on increasing the number of women in Computer Science and Engineering (CSE) research and education at all levels. AnitaB.org runs the Grace Hopper Celebration of Women in Computing yearly conference. The National Center for Women & Information Technology is a nonprofit that aims to increase the number of women in technology and computing. Women in Technology International (WITI) is a global organization dedicated to the advancement of women in business and technology. Arab Women in Computing has many chapters across the world and focuses on encouraging women to work with technology, providing networking opportunities among industry experts, academics and university students. Some major societies and groups have offshoots dedicated to women. The Association for Computing Machinery's Council on Women in Computing (ACM-W) has over 36,000 members. BCSWomen is a women-only specialist group of the British Computer Society, founded in 2001. 
In Ireland, the charity Teen Turn runs after-school training and work placements for girls, and Women in Technology and Science (WITS) advocates for the inclusion and promotion of women within STEM industries. The Women's Technology Empowerment Centre (W.TEC) is a non-profit organization focused on providing technology education and mentoring to Nigerian women and girls. Black Girls Code is a non-profit focused on providing technology education to young African-American women. Other organisations dedicated to women in IT include Girl Develop It, a nonprofit organization that provides affordable programs for adult women interested in learning web and software development in a judgment-free environment; Girl Geek Dinners, an international group for women of all ages; Girls Who Code, a national non-profit organization dedicated to closing the gender gap in technology; LinuxChix, a women-oriented community in the open source movement; and Systers, a moderated listserv dedicated to mentoring women in the IT industry. See also List of female mathematicians List of female scientists List of organizations for women in science List of women astronauts List of prizes, medals, and awards for women in science List of women in the video game industry Timeline of women in computing Women and video games Women in computing in Canada Women in engineering Women in science Women in STEM fields Women in the workforce Women in venture capital References Citations Works cited Further reading Natarajan, Priyamvada, "Calculating Women" (review of Margot Lee Shetterly, Hidden Figures: The American Dream and the Untold Story of the Black Women Mathematicians Who Helped Win the Space Race, William Morrow; Dava Sobel, The Glass Universe: How the Ladies of the Harvard Observatory Took the Measure of the Stars, Viking; and Nathalia Holt, Rise of the Rocket Girls: The Women Who Propelled Us, from Missiles to the Moon to Mars, Little, Brown), The New York Review of Books, vol. LXIV, no. 
9 (May 25, 2017), pp. 38–39. External links Carnegie Mellon Project on Gender and Computer Science National Center for Women & Information Technology US Equate Scotland Institute for Women in Trades, Technology and Science MNT – Mulheres na Tecnologia Brazil Resources related to Women in Computing US Society for Canadian Women in Science and Technology Women in Science, Engineering, and Technology UK Women's Engineering Society UK When Women Stopped Coding Global Gender Gap Report 2021 (Insight Report, March 2021) Global Annual Results Report 2022: Gender Equality History of computer science
Women in computing
Technology
9,298
27,937,297
https://en.wikipedia.org/wiki/TriG%20%28syntax%29
TriG is a serialization format for RDF (Resource Description Framework) graphs. It is a plain text format for serializing named graphs and RDF Datasets which offers a compact and readable alternative to the XML-based TriX syntax. Example This example encodes three interlinked named graphs: http://www.example.org/exampleDocument#G1 http://www.example.org/exampleDocument#G2 http://www.example.org/exampleDocument#G3 @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix xsd: <http://www.w3.org/2001/XMLSchema#> . @prefix swp: <http://www.w3.org/2004/03/trix/swp-1/> . @prefix dc: <http://purl.org/dc/elements/1.1/> . @prefix ex: <http://www.example.org/vocabulary#> . @prefix : <http://www.example.org/exampleDocument#> . :G1 { :Monica ex:name "Monica Murphy" . :Monica ex:homepage <http://www.monicamurphy.org> . :Monica ex:email <mailto:monica@monicamurphy.org> . :Monica ex:hasSkill ex:Management } :G2 { :Monica rdf:type ex:Person . :Monica ex:hasSkill ex:Programming } :G3 { :G1 swp:assertedBy _:w1 . _:w1 swp:authority :Chris . _:w1 dc:date "2003-10-02"^^xsd:date . :G2 swp:quotedBy _:w2 . :G3 swp:assertedBy _:w2 . _:w2 dc:date "2003-09-03"^^xsd:date . _:w2 swp:authority :Chris . :Chris rdf:type ex:Person . :Chris ex:email <mailto:chris@bizer.de> } External links TriG Specification (2007) RDF 1.1 TriG W3C Recommendation (2014) Yacker TriG validator, which does not handle sub-graphs, and does not validate the above example. Resource Description Framework Syntax Computer file formats
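The grouping of triples into named-graph blocks shown in the example can be sketched with a short Python helper. This is purely illustrative: the function name serialize_trig and the quad layout are my own assumptions, not part of any standard TriG tooling, and real applications would use an RDF library instead.

```python
from collections import defaultdict

def serialize_trig(quads, prefixes):
    """Emit a minimal TriG document: @prefix headers, then one
    { ... } block per named graph, grouping triples by graph name.
    Terms are assumed to be pre-formatted TriG tokens (prefixed
    names, <IRI>s, or literals)."""
    lines = [f"@prefix {p}: <{iri}> ." for p, iri in prefixes.items()]
    graphs = defaultdict(list)
    for subj, pred, obj, graph in quads:
        graphs[graph].append((subj, pred, obj))
    for graph, triples in graphs.items():
        lines.append(f"{graph} {{")
        for subj, pred, obj in triples:
            lines.append(f"    {subj} {pred} {obj} .")
        lines.append("}")
    return "\n".join(lines)

doc = serialize_trig(
    [(":Monica", "ex:name", '"Monica Murphy"', ":G1"),
     (":Monica", "rdf:type", "ex:Person", ":G2")],
    {"ex": "http://www.example.org/vocabulary#"},
)
print(doc)
```

The sketch shows the essential idea of TriG relative to Turtle: the same triple syntax, wrapped in per-graph `{ ... }` blocks keyed by the graph name.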
TriG (syntax)
Technology
543
1,540,093
https://en.wikipedia.org/wiki/International%20Federation%20of%20Chemical%2C%20Energy%2C%20Mine%20and%20General%20Workers%27%20Unions
The International Federation of Chemical, Energy, Mine and General Workers' Unions (ICEM) was a global union federation of trade unions. As of November 2007, ICEM represented 467 industrial trade unions in 132 countries, claiming a membership of over 20 million workers. History The federation was founded in 1995 in Washington, DC, when the Miners' International Federation merged with the International Federation of Chemical and General Workers' Unions. In 2000, the small Universal Alliance of Diamond Workers merged into the federation, while in 2007, the World Federation of Industry Workers joined. In June 2012, affiliates of ICEM merged into the new global federation IndustriALL Global Union. The organization represented workers employed in a wide range of industries, including energy, mining, chemicals and bioscience, pulp and paper, rubber, gems and jewellery, glass, ceramics, cement, environmental services and others. Organization and activities The international headquarters of ICEM was variously based in Brussels, Belgium, and Geneva, Switzerland, where meetings of the Presidium and the executive committee were held. These governing bodies organized activities at a higher level, while the regional offices organized regional conferences, workshops and solidarity actions. The Presidium oversaw the broad direction of ICEM, whilst the executive committee was more involved in the day-to-day routine of the organization. Every four years, starting in 1995, a worldwide congress was organized at which new committee members were elected and policies were changed. The congresses were held in the following order: Washington, DC, 1995. Durban, South Africa, in November 1999. Stavanger, Norway, in August 2003. Bangkok, Thailand, in November 2007. The regional offices dealt with specific geographical areas such as Africa, Asia Pacific, Europe, Latin America and the Caribbean, and North America. The regional office for the Asia Pacific area was housed in Seoul, South Korea. 
This regional office was one of the most active offices of ICEM. ICEM supported many strikes in various regions, including the strike of 7 October 1998 in Russia by communists and the Federation of Independent Trade Unions of Russia during the 1998 Russian financial crisis. Affiliates of ICEM also organized protests in South Africa. ICEM worked together with human rights and environmental activists who were in conflict with multinationals such as Rio Tinto, by raising awareness and funding research. ICEM published two quarterly bulletins called ICEM Info and ICEM Global, which merged in 2002 to become ICEM Global Info. Research Richard Croucher and Elizabeth Cotton's book Global Unions, Global Business (Middlesex University Press, 2009) contains a case study, in Chapter Eight, of the ICEM's dealings with the Anglo-American mining company. The archive of ICEM is housed in the International Institute of Social History in Amsterdam and is open to the public. Leadership General Secretaries 1995: Vic Thorpe 1999: Fred Higgs 2007: Manfred Warda Presidents 1995: Hans Berger Germany 1999/2003: John Maitland Australia 2005: Senzeni Zokwana South Africa References External links Chemical industry trade unions Energy industry trade unions Mining trade unions Organisations based in Brussels Organisations based in Geneva Trade unions established in 1995 Trade unions disestablished in 2012
International Federation of Chemical, Energy, Mine and General Workers' Unions
Chemistry
640
22,052,605
https://en.wikipedia.org/wiki/Pierce%E2%80%93Birkhoff%20conjecture
In abstract algebra, the Pierce–Birkhoff conjecture asserts that any piecewise-polynomial function can be expressed as a maximum of finite minima of finite collections of polynomials. It was first stated, albeit in non-rigorous and vague wording, in the 1956 paper of Garrett Birkhoff and Richard S. Pierce in which they first introduced f-rings. The modern, rigorous statement of the conjecture was formulated by Melvin Henriksen and John R. Isbell, who worked on the problem in the early 1960s in connection with their work on f-rings. Their formulation is as follows: For every real piecewise-polynomial function $f : \mathbb{R}^n \to \mathbb{R}$, there exists a finite set of polynomials $g_{ij} \in \mathbb{R}[x_1, \ldots, x_n]$ such that $f = \sup_i \inf_j g_{ij}$. Isbell is likely the source of the name Pierce–Birkhoff conjecture, and popularized the problem in the 1980s by discussing it with several mathematicians interested in real algebraic geometry. The conjecture was proved true for n = 1 and 2 by Louis Mahé. Local Pierce–Birkhoff conjecture In 1989, James J. Madden provided an equivalent statement in terms of the real spectrum of $A = \mathbb{R}[x_1, \ldots, x_n]$ and the novel concepts of local polynomial representatives and separating ideals. Denoting the real spectrum of A by $\operatorname{Sper} A$, the separating ideal of $\alpha$ and $\beta$ in $\operatorname{Sper} A$ is the ideal of A generated by all polynomials $p \in A$ that change sign on $\alpha$ and $\beta$, i.e., $p(\alpha) \ge 0$ and $p(\beta) \le 0$. Any finite covering $\mathbb{R}^n = \bigcup_i P_i$ by closed, semi-algebraic sets induces a corresponding covering $\operatorname{Sper} A = \bigcup_i \widetilde{P_i}$, so, in particular, when f is piecewise polynomial, there is a polynomial $f_i$ for every $\alpha \in \operatorname{Sper} A$ such that $f|_{P_i} = f_i|_{P_i}$ and $\alpha \in \widetilde{P_i}$. This is termed the local polynomial representative of f at $\alpha$, denoted $f_\alpha$. Madden's so-called local Pierce–Birkhoff conjecture at $\alpha$ and $\beta$, which is equivalent to the Pierce–Birkhoff conjecture, is as follows: Let $\alpha$, $\beta$ be in $\operatorname{Sper} A$ and f be piecewise-polynomial. It is conjectured that for every local representative of f at $\alpha$, $f_\alpha$, and local representative of f at $\beta$, $f_\beta$, the difference $f_\alpha - f_\beta$ is in the separating ideal of $\alpha$ and $\beta$. References Further reading Conjectures Real algebraic geometry Unsolved problems in geometry
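As a concrete one-variable illustration of the conjectured max-min form: the piecewise-polynomial function f(x) = min(1, |x|) can be written as a maximum of minima of the polynomials x, -x and 1. A small Python check (purely illustrative, not part of the conjecture's literature):

```python
def f(x):
    # Piecewise-polynomial target: |x| on [-1, 1], constant 1 outside.
    return min(1.0, abs(x))

def sup_inf_form(x):
    # Pierce-Birkhoff form: a maximum of minima of polynomials.
    # Here f = max( min(x, 1), min(-x, 1) ).
    return max(min(x, 1.0), min(-x, 1.0))

# The two definitions agree on a grid of sample points.
for i in range(-40, 41):
    x = i / 10.0
    assert abs(f(x) - sup_inf_form(x)) < 1e-12
```

The conjecture asserts that every piecewise-polynomial function, in any number of variables, admits such a representation; only the cases n = 1 and 2 are proved.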
Pierce–Birkhoff conjecture
Mathematics
406
11,421,421
https://en.wikipedia.org/wiki/Ribosomal%20protein%20L20%20leader
L20 ribosomal protein leader is a ribosomal protein leader involved in ribosome biogenesis. It acts as an autoregulatory mechanism to control the concentration of ribosomal protein L20. The structure is typically located in the 5′ untranslated regions of mRNAs encoding initiation factor 3 followed by ribosomal proteins L35 and L20 (infC-rpmI-rplT), but the regulated mRNAs always contain an L20 gene. A Rho-independent transcription terminator structure that is probably involved in regulation is included at the 3′ end in many examples of L20 ribosomal protein leaders. Three structurally distinct forms of L20 leaders have been experimentally established. One such leader motif occurs in Bacillota and the other two are found in Gammaproteobacteria. Of the latter two, one is found in a wide variety of Gammaproteobacteria, while the other has been reported only in Escherichia coli. All three types of leader exhibit apparent similarities to the region of ribosomal RNA to which the L20 protein normally binds. However, in terms of RNA secondary structure, the context of the similar region is distinct in each leader type. A fourth example of an L20 ribosomal protein leader was predicted in Deltaproteobacteria using bioinformatic approaches. Like the three experimentally validated kinds of leader, the Deltaproteobacterial version resembles the relevant portion of ribosomal RNA, but presents this similarity in yet another structural context. See also Ribosomal protein leader References External links Ribosomal protein leader
Ribosomal protein L20 leader
Chemistry
328
24,714
https://en.wikipedia.org/wiki/Precession
Precession is a change in the orientation of the rotational axis of a rotating body. In an appropriate reference frame it can be defined as a change in the first Euler angle, whereas the third Euler angle defines the rotation itself. In other words, if the axis of rotation of a body is itself rotating about a second axis, that body is said to be precessing about the second axis. A motion in which the second Euler angle changes is called nutation. In physics, there are two types of precession: torque-free and torque-induced. In astronomy, precession refers to any of several slow changes in an astronomical body's rotational or orbital parameters. An important example is the steady change in the orientation of the axis of rotation of the Earth, known as the precession of the equinoxes. Torque-free or torque neglected Torque-free precession implies that no external moment (torque) is applied to the body. In torque-free precession, the angular momentum is a constant, but the angular velocity vector changes orientation with time. What makes this possible is a time-varying moment of inertia, or more precisely, a time-varying inertia matrix. The inertia matrix is composed of the moments of inertia of a body calculated with respect to separate coordinate axes (e.g. $I_{xx}$, $I_{yy}$, $I_{zz}$). If an object is asymmetric about its principal axis of rotation, the moment of inertia with respect to each coordinate direction will change with time, while preserving angular momentum. The result is that the component of the angular velocities of the body about each axis will vary inversely with each axis's moment of inertia. 
The torque-free precession rate of an object with an axis of symmetry, such as a disk, spinning about an axis not aligned with that axis of symmetry can be calculated as follows: $\omega_p = \frac{I_s \omega_s}{I_p \cos(\alpha)}$, where $\omega_p$ is the precession rate, $\omega_s$ is the spin rate about the axis of symmetry, $I_s$ is the moment of inertia about the axis of symmetry, $I_p$ is the moment of inertia about either of the other two equal perpendicular principal axes, and $\alpha$ is the angle between the moment of inertia direction and the symmetry axis. When an object is not perfectly rigid, inelastic dissipation will tend to damp torque-free precession, and the rotation axis will align itself with one of the inertia axes of the body. For a generic solid object without any axis of symmetry, the evolution of the object's orientation, represented (for example) by a rotation matrix $R$ that transforms internal to external coordinates, may be numerically simulated. Given the object's fixed internal moment of inertia tensor $I_0$ and fixed external angular momentum $\mathbf{L}$, the instantaneous angular velocity is $\boldsymbol{\omega}(t) = R(t) \, I_0^{-1} \, R(t)^{\mathsf{T}} \, \mathbf{L}$. Precession occurs by repeatedly recalculating $\boldsymbol{\omega}$ and applying a small rotation vector $\boldsymbol{\omega}\,dt$ for the short time $dt$; e.g.: $R(t+dt) = \exp\left([\boldsymbol{\omega}\,dt]_\times\right) R(t)$ for the skew-symmetric matrix $[\boldsymbol{\omega}\,dt]_\times$. The errors induced by finite time steps tend to increase the rotational kinetic energy: this unphysical tendency can be counteracted by repeatedly applying a small rotation vector $\mathbf{v}$ perpendicular to both $\boldsymbol{\omega}$ and $\mathbf{L}$, noting that the rotational kinetic energy is $E = \tfrac{1}{2}\,\mathbf{L} \cdot \boldsymbol{\omega}$. Torque-induced Torque-induced precession (gyroscopic precession) is the phenomenon in which the axis of a spinning object (e.g., a gyroscope) describes a cone in space when an external torque is applied to it. The phenomenon is commonly seen in a spinning toy top, but all rotating objects can undergo precession. If the speed of the rotation and the magnitude of the external torque are constant, the spin axis will move at right angles to the direction that would intuitively result from the external torque. 
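The iterative scheme just described for simulating torque-free precession (recompute the angular velocity from the fixed angular momentum, then apply a small rotation to the orientation matrix) can be sketched in pure Python. The symmetric-top parameters below are my own made-up illustration, and this first-order integrator is a minimal sketch, not a production-quality simulation:

```python
import math

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def rotation_from_vector(w, dt):
    """Rodrigues' formula: orthogonal matrix for the small rotation w*dt."""
    theta = math.sqrt(sum(c * c for c in w)) * dt
    if theta < 1e-15:
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    k = [c * dt / theta for c in w]                     # unit rotation axis
    K = [[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]]
    K2 = mat_mul(K, K)
    I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    return [[I[i][j] + math.sin(theta) * K[i][j]
             + (1 - math.cos(theta)) * K2[i][j] for j in range(3)]
            for i in range(3)]

# Symmetric top (body frame): I_p = 1 about x and y, I_s = 2 about z.
I0_inv = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 0.5]]
L = [0.3, 0.0, 2.0]      # fixed angular momentum, external frame
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
dt = 1e-3
for _ in range(1000):
    # omega = R I0^-1 R^T L, then rotate R by the small rotation omega*dt.
    w = mat_vec(mat_mul(mat_mul(R, I0_inv), transpose(R)), L)
    R = mat_mul(rotation_from_vector(w, dt), R)

# L is conserved by construction; the kinetic energy E = (1/2) L . omega
# should stay near its initial value, drifting only through step error.
w = mat_vec(mat_mul(mat_mul(R, I0_inv), transpose(R)), L)
E = 0.5 * sum(L[i] * w[i] for i in range(3))
```

Because each Rodrigues matrix is exactly orthogonal, the orientation stays a proper rotation; the slow energy drift of the first-order stepping is the effect the text proposes to correct with an extra small rotation perpendicular to both the angular velocity and the angular momentum.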
In the case of a toy top, its weight is acting downwards from its center of mass and the normal force (reaction) of the ground is pushing up on it at the point of contact with the support. These two opposite forces produce a torque which causes the top to precess. The device depicted on the right is gimbal mounted. From inside to outside there are three axes of rotation: the hub of the wheel, the gimbal axis, and the vertical pivot. To distinguish between the two horizontal axes, rotation around the wheel hub will be called spinning, and rotation around the gimbal axis will be called pitching. Rotation around the vertical pivot axis is called rotation. First, imagine that the entire device is rotating around the (vertical) pivot axis. Then, spinning of the wheel (around the wheel hub) is added. Imagine the gimbal axis to be locked, so that the wheel cannot pitch. The gimbal axis has sensors that measure whether there is a torque around the gimbal axis. In the picture, a section of the wheel has been named dm1. At the depicted moment in time, section dm1 is at the perimeter of the rotating motion around the (vertical) pivot axis. Section dm1, therefore, has a lot of angular rotating velocity with respect to the rotation around the pivot axis, and as dm1 is forced closer to the pivot axis of the rotation (by the wheel spinning further), because of the Coriolis effect, with respect to the vertical pivot axis, it tends to move in the direction of the top-left arrow in the diagram (shown at 45°) in the direction of rotation around the pivot axis. Section dm2 of the wheel is moving away from the pivot axis, and so a force (again, a Coriolis force) acts in the same direction as in the case of dm1. Note that both arrows point in the same direction. The same reasoning applies for the bottom half of the wheel, but there the arrows point in the opposite direction to that of the top arrows. 
Combined over the entire wheel, there is a torque around the gimbal axis when some spinning is added to rotation around a vertical axis. It is important to note that the torque around the gimbal axis arises without any delay; the response is instantaneous. In the discussion above, the setup was kept unchanging by preventing pitching around the gimbal axis. In the case of a spinning toy top, when the spinning top starts tilting, gravity exerts a torque. However, instead of rolling over, the spinning top just pitches a little. This pitching motion reorients the spinning top with respect to the torque that is being exerted. The result is that the torque exerted by gravity – via the pitching motion – elicits gyroscopic precession (which in turn yields a counter torque against the gravity torque) rather than causing the spinning top to fall to its side. Precession or gyroscopic considerations have an effect on bicycle performance at high speed. Precession is also the mechanism behind gyrocompasses. Classical (Newtonian) Precession is the change of angular velocity and angular momentum produced by a torque. The general equation that relates the torque to the rate of change of angular momentum is: τ = dL/dt, where τ and L are the torque and angular momentum vectors respectively. Due to the way the torque vectors are defined, it is a vector that is perpendicular to the plane of the forces that create it. Thus it may be seen that the angular momentum vector will change perpendicular to those forces. Depending on how the forces are created, they will often rotate with the angular momentum vector, and then circular precession is created. Under these circumstances the angular velocity of precession is given by: ω_p = (m g r sin θ)/(I_s ω_s sin θ) = m g r/(I_s ω_s), where I_s is the moment of inertia, ω_s is the angular velocity of spin about the spin axis, m is the mass, g is the acceleration due to gravity, θ is the angle between the spin axis and the axis of precession and r is the distance between the center of mass and the pivot.
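For a simple top, the precession rate works out to the gravitational torque divided by the spin angular momentum, ω_p = mgr/(I_sω_s), with the sin θ factors cancelling. A quick numerical illustration (the top's parameters here are made-up values, not from the article):

```python
import math

m = 0.1      # mass of the top, kg (assumed)
g = 9.81     # acceleration due to gravity, m/s^2
r = 0.05     # distance from pivot to center of mass, m (assumed)
I = 1.0e-4   # moment of inertia about the spin axis, kg*m^2 (assumed)
w_s = 100.0  # spin rate, rad/s (assumed)

torque = m * g * r        # gravitational torque about the pivot
w_p = torque / (I * w_s)  # precession rate, rad/s; sin(theta) cancels out
T_p = 2 * math.pi / w_p   # precession period, s

# Equivalent period form: T_p = 4*pi^2*I / (torque * T_s), with T_s = 2*pi/w_s
T_s = 2 * math.pi / w_s
T_p_alt = 4 * math.pi ** 2 * I / (torque * T_s)
```

With these numbers the top spins at about 16 revolutions per second while its axis precesses roughly once every 1.3 seconds, and the two period expressions agree.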
The torque vector originates at the center of mass. Using ω = 2π/T, we find that the period of precession is given by: T_p = 4π²I_s/(Q T_s), where I_s is the moment of inertia, T_s is the period of spin about the spin axis, and Q is the torque. In general, the problem is more complicated than this, however. Relativistic (Einsteinian) The special and general theories of relativity give three types of corrections to the Newtonian precession, of a gyroscope near a large mass such as Earth, described above. They are: Thomas precession, a special-relativistic correction accounting for an object (such as a gyroscope) being accelerated along a curved path. de Sitter precession, a general-relativistic correction accounting for the Schwarzschild metric of curved space near a large non-rotating mass. Lense–Thirring precession, a general-relativistic correction accounting for the frame dragging by the Kerr metric of curved space near a large rotating mass. The Schwarzschild geodesics (sometimes called Schwarzschild precession) are used in the prediction of the anomalous perihelion precession of the planets, most notably for the accurate prediction of the apsidal precession of Mercury. Astronomy In astronomy, precession refers to any of several gravity-induced, slow and continuous changes in an astronomical body's rotational axis or orbital path. Precession of the equinoxes, perihelion precession, changes in the tilt of Earth's axis to its orbit, and the eccentricity of its orbit over tens of thousands of years are all important parts of the astronomical theory of ice ages. (See Milankovitch cycles.) Axial precession (precession of the equinoxes) Axial precession is the movement of the rotational axis of an astronomical body, whereby the axis slowly traces out a cone. In the case of Earth, this type of precession is also known as the precession of the equinoxes, lunisolar precession, or precession of the equator.
Earth goes through one such complete precessional cycle in a period of approximately 26,000 years or 1° every 72 years, during which the positions of stars will slowly change in both equatorial coordinates and ecliptic longitude. Over this cycle, Earth's north axial pole moves from where it is now, within 1° of Polaris, in a circle around the ecliptic pole, with an angular radius of about 23.5°. The ancient Greek astronomer Hipparchus (c. 190–120 BC) is generally accepted to be the earliest known astronomer to recognize and assess the precession of the equinoxes at about 1° per century (which is not far from the actual value for antiquity, 1.38°), although there is some minor dispute about this. In ancient China, the Jin-dynasty scholar-official Yu Xi (fl. 307–345 AD) made a similar discovery centuries later, noting that the position of the Sun during the winter solstice had drifted roughly one degree over the course of fifty years relative to the position of the stars. The precession of Earth's axis was later explained by Newtonian physics. Being an oblate spheroid, Earth has a non-spherical shape, bulging outward at the equator. The gravitational tidal forces of the Moon and Sun apply torque to the equator, attempting to pull the equatorial bulge into the plane of the ecliptic, but instead causing it to precess. The torque exerted by the planets, particularly Jupiter, also plays a role. Apsidal precession The orbits of planets around the Sun do not really follow an identical ellipse each time, but actually trace out a flower-petal shape because the major axis of each planet's elliptical orbit also precesses within its orbital plane, partly in response to perturbations in the form of the changing gravitational forces exerted by other planets. This is called perihelion precession or apsidal precession. In the adjunct image, Earth's apsidal precession is illustrated. As the Earth travels around the Sun, its elliptical orbit rotates gradually over time.
The eccentricity of its ellipse and the precession rate of its orbit are exaggerated for visualization. Most orbits in the Solar System have a much smaller eccentricity and precess at a much slower rate, making them nearly circular and nearly stationary. Discrepancies between the observed perihelion precession rate of the planet Mercury and that predicted by classical mechanics were prominent among the forms of experimental evidence leading to the acceptance of Einstein's Theory of Relativity (in particular, his General Theory of Relativity), which accurately predicted the anomalies. Deviating from Newton's law, Einstein's theory of gravitation predicts an extra term in the gravitational force, proportional to 1/r⁴, which accurately gives the observed excess turning rate of 43 arcseconds every 100 years. Nodal precession Orbital nodes also precess over time. See also Larmor precession Nutation Polar motion Precession (mechanical) Precession as a form of parallel transport References External links Explanation and derivation of formula for precession of a top Precession and the Milankovich theory From Stargazers to Starships Earth Dynamics (mechanics)
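The 43 arcseconds per century quoted for Mercury follows from the standard general-relativistic expression for the perihelion advance per orbit, ε = 24π³a² / (T²c²(1 − e²)). A quick check (the orbital elements below are standard textbook values, not taken from the article):

```python
import math

a = 5.7909e10     # semi-major axis of Mercury's orbit, m
T = 7.6005e6      # orbital period, s (about 88 days)
e = 0.205630      # orbital eccentricity
c = 2.99792458e8  # speed of light, m/s

# Perihelion advance per orbit, in radians
eps = 24 * math.pi ** 3 * a ** 2 / (T ** 2 * c ** 2 * (1 - e ** 2))

# Accumulate over a Julian century and convert radians to arcseconds
orbits_per_century = 100 * 365.25 * 86400 / T
excess_arcsec = eps * orbits_per_century * math.degrees(1) * 3600
```

The result lands on the observed excess of roughly 43 arcseconds per century.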
Precession
Physics
https://en.wikipedia.org/wiki/Plateau%27s%20laws
Plateau's laws describe the structure of soap films. These laws were formulated in the 19th century by the Belgian physicist Joseph Plateau from his experimental observations. Many patterns in nature are based on foams obeying these laws. Laws for soap films Plateau's laws describe the shape and configuration of soap films as follows: Soap films are made of entire (unbroken) smooth surfaces. The mean curvature of a portion of a soap film is everywhere constant on any point on the same piece of soap film. Soap films always meet in threes along an edge called a Plateau border, and they do so at an angle of arccos(−1/2) = 120°. These Plateau borders meet in fours at a vertex, at the tetrahedral angle of arccos(−1/3) ≈ 109.47°. Configurations other than those of Plateau's laws are unstable, and the film will quickly tend to rearrange itself to conform to these laws. That these laws hold for minimal surfaces was proved mathematically by Jean Taylor using geometric measure theory. See also Young–Laplace equation, governing the curvature of surfaces in a soap film Notes Sources External links Minimal surfaces Bubbles (physics)
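The two angles in Plateau's laws are just arccos(−1/2) and arccos(−1/3), which is easy to confirm numerically:

```python
import math

border_angle = math.degrees(math.acos(-1 / 2))  # three films along a Plateau border
vertex_angle = math.degrees(math.acos(-1 / 3))  # four borders meeting at a vertex
```

Here border_angle is exactly 120°, and vertex_angle ≈ 109.4712°, the tetrahedral angle.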
Plateau's laws
Chemistry
https://en.wikipedia.org/wiki/M%C3%B6bius%20function
The Möbius function μ(n) is a multiplicative function in number theory introduced by the German mathematician August Ferdinand Möbius (also transliterated Moebius) in 1832. It is ubiquitous in elementary and analytic number theory and most often appears as part of its namesake the Möbius inversion formula. Following work of Gian-Carlo Rota in the 1960s, generalizations of the Möbius function were introduced into combinatorics, and are similarly denoted μ(x). Definition The Möbius function is defined by μ(n) = 1 if n is a squarefree positive integer with an even number of prime factors, μ(n) = −1 if n is a squarefree positive integer with an odd number of prime factors, and μ(n) = 0 if n has a squared prime factor. The Möbius function can alternatively be represented as μ(n) = δ_{ω(n)Ω(n)} λ(n), where δ is the Kronecker delta, λ(n) is the Liouville function, ω(n) is the number of distinct prime divisors of n, and Ω(n) is the number of prime factors of n, counted with multiplicity. Another characterization by Gauss is the sum of all primitive roots. Values The values of μ(n) for the first 50 positive numbers begin 1, −1, −1, 0, −1, 1, −1, 0, 0, 1, ... Larger values can be checked in: Wolframalpha the b-file of OEIS Applications Mathematical series The Dirichlet series that generates the Möbius function is the (multiplicative) inverse of the Riemann zeta function; if s is a complex number with real part larger than 1 we have Σ_{n=1}^∞ μ(n)/n^s = 1/ζ(s). This may be seen from its Euler product 1/ζ(s) = Π_p (1 − p^{−s}), the product being taken over all primes p. Also, there is a related series involving γ, Euler's constant. The Lambert series for the Möbius function is Σ_{n=1}^∞ μ(n) q^n/(1 − q^n) = q, which converges for |q| < 1. For prime α, we also have Σ_{n=0}^∞ μ(α^n) q^{α^n} = q − q^α. Algebraic number theory Gauss proved that for a prime number p the sum of its primitive roots is congruent to μ(p − 1) (mod p). If F_q denotes the finite field of order q (where q is necessarily a prime power), then the number N of monic irreducible polynomials of degree n over F_q is given by N(q, n) = (1/n) Σ_{d|n} μ(d) q^{n/d}. The Möbius function is used in the Möbius inversion formula. Physics The Möbius function also arises in the primon gas or free Riemann gas model of supersymmetry. In this theory, the fundamental particles or "primons" have energies log p. Under second quantization, multiparticle excitations are considered; these are given by log n for any natural number n.
This follows from the fact that the factorization of the natural numbers into primes is unique. In the free Riemann gas, any natural number can occur, if the primons are taken as bosons. If they are taken as fermions, then the Pauli exclusion principle excludes squares. The operator that distinguishes fermions and bosons is then none other than the Möbius function μ(n). The free Riemann gas has a number of other interesting connections to number theory, including the fact that the partition function is the Riemann zeta function. This idea underlies Alain Connes's attempted proof of the Riemann hypothesis. Properties The Möbius function is multiplicative (i.e., μ(ab) = μ(a)μ(b) whenever a and b are coprime). Proof: if a or b is divisible by the square of a prime, then so is ab, and both sides are zero. Otherwise a and b are squarefree, and since they are coprime, the prime factors of ab are the disjoint union of the prime factors of a and of b, so μ(ab) = (−1)^{ω(a)+ω(b)} = μ(a)μ(b). The sum of the Möbius function over all positive divisors of n (including n itself and 1) is zero except when n = 1: Σ_{d|n} μ(d) = 1 if n = 1, and 0 if n > 1. The equality above leads to the important Möbius inversion formula and is the main reason why μ is of relevance in the theory of multiplicative and arithmetic functions. Other applications of μ(n) in combinatorics are connected with the use of the Pólya enumeration theorem in combinatorial groups and combinatorial enumerations. There is a formula for calculating the Möbius function without directly knowing the factorization of its argument: μ(n) = Σ_{1 ≤ k ≤ n, gcd(k, n) = 1} e^{2πik/n}, i.e. μ(n) is the sum of the primitive n-th roots of unity. (However, the computational complexity of this definition is at least the same as that of the Euler product definition.) Other identities satisfied by the Möbius function include Σ_{k=1}^{n} μ(k) ⌊n/k⌋ = 1 and a companion identity; the first of these is a classical result while the second was published in 2020. Similar identities hold for the Mertens function. Proof of the formula for the sum of μ over divisors The formula can be written using Dirichlet convolution as: μ ∗ 1 = ε, where ε is the identity under the convolution. One way of proving this formula is by noting that the Dirichlet convolution of two multiplicative functions is again multiplicative.
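Both multiplicativity on coprime arguments and the vanishing divisor sum can be verified by brute force with a trial-division implementation of μ (a sketch; the function name is illustrative, not from the article):

```python
from math import gcd

def mobius(n):
    """Möbius function via trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n had a squared prime factor
            result = -result      # one more distinct prime factor
        p += 1
    return -result if n > 1 else result

# Multiplicativity: mu(ab) = mu(a) * mu(b) whenever gcd(a, b) = 1
for a in range(1, 40):
    for b in range(1, 40):
        if gcd(a, b) == 1:
            assert mobius(a * b) == mobius(a) * mobius(b)

# Divisor sum: sum of mu(d) over the divisors d of n is 1 for n = 1, else 0
for n in range(1, 200):
    assert sum(mobius(d) for d in range(1, n + 1) if n % d == 0) == (n == 1)
```
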
Thus it suffices to prove the formula for powers of primes. Indeed, for any prime p and for any k ≥ 1, Σ_{d|p^k} μ(d) = μ(1) + μ(p) = 1 − 1 = 0, while μ(p^j) = 0 for j ≥ 2. Other proofs Another way of proving this formula is by using the identity expressing μ(n) as the sum of the primitive n-th roots of unity. The formula above is then a consequence of the fact that the n-th roots of unity sum to 0, since each n-th root of unity is a primitive d-th root of unity for exactly one divisor d of n. However it is also possible to prove this identity from first principles. First note that it is trivially true when n = 1. Suppose then that n > 1. Then there is a bijection between the factors d of n for which μ(d) ≠ 0 and the subsets of the set of all prime factors of n. The asserted result follows from the fact that every non-empty finite set has an equal number of odd- and even-cardinality subsets. This last fact can be shown easily by induction on the cardinality |S| of a non-empty finite set S. First, if |S| = 1, there is exactly one odd-cardinality subset of S, namely S itself, and exactly one even-cardinality subset, namely the empty set. Next, if |S| > 1, then divide the subsets of S into two subclasses depending on whether they contain or not some fixed element x in S. There is an obvious bijection between these two subclasses, pairing those subsets that have the same complement relative to the subset {x}. Also, one of these two subclasses consists of all the subsets of the set S \ {x}, and therefore, by the induction hypothesis, has an equal number of odd- and even-cardinality subsets. These subsets in turn correspond bijectively to the even- and odd-cardinality x-containing subsets of S. The inductive step follows directly from these two bijections. A related result is that the alternating sum of binomial coefficients vanishes: Σ_k (−1)^k C(n, k) = 0 for n ≥ 1. Average order The mean value (in the sense of average orders) of the Möbius function is zero. This statement is, in fact, equivalent to the prime number theorem. μ(n) = 0 if and only if n is divisible by the square of a prime.
The first numbers with this property are 4, 8, 9, 12, 16, 18, 20, 24, 25, 27, 28, 32, 36, 40, 44, 45, 48, 49, 50, 52, 54, 56, 60, 63, ... . If n is prime, then μ(n) = −1, but the converse is not true. The first non-prime n for which μ(n) = −1 is 30 = 2 × 3 × 5. The first such numbers with three distinct prime factors (sphenic numbers) are 30, 42, 66, 70, 78, 102, 105, 110, 114, 130, 138, 154, 165, 170, 174, 182, 186, 190, 195, 222, ..., and the first such numbers with 5 distinct prime factors are 2310, 2730, 3570, 3990, 4290, 4830, 5610, 6006, 6090, 6270, 6510, 6630, 7410, 7590, 7770, 7854, 8610, 8778, 8970, 9030, 9282, 9570, 9690, ... . Mertens function In number theory another arithmetic function closely related to the Möbius function is the Mertens function, defined by M(n) = Σ_{k=1}^{n} μ(k) for every natural number n. This function is closely linked with the positions of zeroes of the Riemann zeta function. See the article on the Mertens conjecture for more information about the connection between M and the Riemann hypothesis. From the formula μ(n) = Σ_{1 ≤ k ≤ n, gcd(k, n) = 1} e^{2πik/n} it follows that the Mertens function is given by M(n) = −1 + Σ_{a ∈ F_n} e^{2πia}, where F_n is the Farey sequence of order n. This formula is used in the proof of the Franel–Landau theorem. Generalizations Incidence algebras In combinatorics, every locally finite partially ordered set (poset) is assigned an incidence algebra. One distinguished member of this algebra is that poset's "Möbius function". The classical Möbius function treated in this article is essentially equal to the Möbius function of the set of all positive integers partially ordered by divisibility. See the article on incidence algebras for the precise definition and several examples of these general Möbius functions. Popovici's function Constantin Popovici defined a generalised Möbius function μ_k to be the k-fold Dirichlet convolution of the Möbius function with itself. It is thus again a multiplicative function with μ_k(p^a) = (−1)^a C(k, a), where the binomial coefficient C(k, a) is taken to be zero if a > k.
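The Mertens function is just the running sum of μ, so its small values are easy to tabulate directly (a brute-force sketch; the trial-division μ below is illustrative):

```python
def mobius(n):
    # Möbius function via trial factorization
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def mertens(n):
    # M(n) = mu(1) + mu(2) + ... + mu(n)
    return sum(mobius(k) for k in range(1, n + 1))
```

The sum oscillates slowly around zero, e.g. M(2) = 0 and M(10) = −1, in keeping with the mean value of μ being zero.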
The definition may be extended to complex k by reading the binomial as a polynomial in k. Implementations Mathematica Maxima geeksforgeeks C++, Python3, Java, C#, PHP, JavaScript Rosetta Code Sage See also Liouville function Mertens function Ramanujan's sum Sphenic number Notes Citations Sources External links Multiplicative functions
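The roots-of-unity characterization mentioned in the article, μ(n) as the sum of the primitive n-th roots of unity, can be spot-checked numerically against a trial-division μ (a sketch):

```python
import cmath
from math import gcd

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

# mu(n) equals the sum over 1 <= k <= n with gcd(k, n) = 1 of exp(2*pi*i*k/n)
for n in range(1, 60):
    s = sum(cmath.exp(2j * cmath.pi * k / n)
            for k in range(1, n + 1) if gcd(k, n) == 1)
    assert abs(s.real - mobius(n)) < 1e-8 and abs(s.imag) < 1e-8
```
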
Möbius function
Mathematics
https://en.wikipedia.org/wiki/Construction%20Industry%20Council%20%28United%20Kingdom%29
Construction Industry Council (CIC) is the representative forum for professional bodies, research organisations and specialist business associations in the United Kingdom construction industry. History The first proposals for a Building Industry Council were made in 1985 (backed by the Chartered Institute of Building, Chartered Institution of Building Services Engineers and the Institution of Structural Engineers) but came to nothing. A further attempt followed in 1987 with support from the Royal Institute of British Architects, and the BIC was publicly launched on 16 September 1987. However, it was more than a year before a first meeting, including the Royal Institution of Chartered Surveyors, took place on 1 November 1988. The body was incorporated in May 1989, and with the Institution of Civil Engineers then a member, changed its name to the Construction Industry Council in April 1990. Activities CIC provides a single voice for professionals across the built environment through its collective membership of 500,000 individual professionals and more than 25,000 firms of construction consultants. The breadth and depth of its membership means that CIC (like a small number of other bodies, including Constructing Excellence) can speak with authority on issues connected with construction without being constrained by the self-interest of any particular sector of the industry. As representative of the views of professionals, it has a seat on the government/industry body, the Strategic Forum for Construction. Construction Industry Council developed and operates the Design Quality Indicator (DQI) tool to measure the design quality of buildings. From 2014, CICAIR Limited, a specially created wholly owned subsidiary of CIC, was the sole body authorised to approve Approved Inspectors to undertake building control work in England and Wales.
In 2024, responsibility for Approved Inspectors (and Registered Building Control Approvers) was transferred to the Health and Safety Executive; CIC staff working for CICAIR transferred to the HSE. Organisation Construction Industry Council's work is managed on a day-to-day basis by a small secretariat which works under the direction of the Chief Executive, who is responsible to the Council. The Board acts as the main policy and strategy vehicle of the Council. Chair Chairs of the Building or Construction Industry Council and their terms of office: In June 2023, Dr Wei Yang became CIC's first female chair. A town planner and past president of the Royal Town Planning Institute, Yang's appointment also put women in the majority on the CIC board. Membership The Construction Industry Council has three categories of membership: Full; Associate; and Honorary Affiliate Members. Full members, as of February 2022, are: References External links Construction Industry Council Design Quality Indicator Interviews with Key Members at the CIC 20th Anniversary Party (Video) Architecture organisations based in the United Kingdom Building Business organisations based in London Construction trade groups based in the United Kingdom Engineering organizations Organisations based in the London Borough of Camden Organizations established in 1988 Sustainability organizations 1988 establishments in the United Kingdom
Construction Industry Council (United Kingdom)
Engineering
https://en.wikipedia.org/wiki/Ezekiel%20Stone%20Wiggins
Ezekiel Stone Wiggins (December 4, 1839 – August 14, 1910) was a Canadian weather and earthquake predictor known as the "Ottawa Prophet". He was the author of several scientific, educational and religious works. Early life and education Ezekiel Wiggins was born in Grand Lake, Queens County, New Brunswick, in 1839 to Daniel Slocum Wiggins and Elizabeth Titus Stone, both of United Empire Loyalist descent. The Wiggins family claims descent from Capt. Thomas Wiggins of Shrewsbury, England, who became the first Governor of New Hampshire in 1630. Ezekiel was a pupil at the Oakwood Grammar School (1858). He attended secondary school in Ontario, and stayed to become a teacher in Mariposa Township, Ontario. On August 2, 1862, he married his sixteen-year-old cousin Susan Anna Wiggins, the daughter of Vincent White Wiggins and Charlotte E. Wiggins. The couple did not have any children. Their religion was Episcopal. Susan became an author and poet. He was a student at the Philadelphia University of Medicine and Surgery, where he earned an MD in 1867–69. He earned a Bachelor of Arts from Albert University, in Belleville, Ontario, in 1870, while serving as the headmaster of a high school in Ingersoll, Ontario. Career and theories Wiggins wrote The Architecture of the Heavens, which was published in Montreal by John Lovell in 1864. He worked as a local superintendent of schools in 1866. In 1867, Wiggins wrote a criticism of Universalism in Christianity, Universalism unfounded: being a complete analysis and refutation of the system, which was published in Napanee, Ontario, by Henry & Co. in 1867. According to the preface, "Here every Orthodox minister and private Christian is furnished with a text book on Universalism. Containing a complete refutation of every position, hitherto assumed either in the affirmative of universal salvation or the negative of punishment." Wiggins served as the first principal (1872–1874) of W.
Ross Macdonald School, whose motto is "the impossible is only the untried". The school, which opened its doors in Brantford, Ontario, in March 1872, provides instruction from kindergarten to secondary school graduation for blind and deafblind students. Wiggins wrote English Grammar, which was published by Copp, Clarks & Co, Toronto in 1874. Wiggins founded Thompson's School in 1874, a boys' day school housed upstairs in Whelpley Hall near the Rothesay railway station, in Rothesay, an affluent suburb of the prosperous city of Saint John, New Brunswick. Wiggins, who was an amateur cryptozoologist, argued in "Days of the Creation", published in the St. John New Brunswick Globe in July 1876, that Plesiosaurus dolichodeirus, a genus of large marine sauropterygian reptile that lived in the Oolitic era, was not extinct, based on reported sightings by passengers and crew of the Steamer New York of a marine animal swimming with its head twelve feet above the water near Boston. He later theorized that "the Plesiosaurus exists in Rice Lake is certain and it is probably twenty feet in length". In 1876, the Wigginses advertised their summer home, consisting of a Hunting Lodge dwelling, guests' house, wood, ice, and bath house on a wilderness property on the west shore of the Grand Lake, New Brunswick. Wiggins was an amateur historian who wrote "The History of Queens County", New Brunswick, in a series of articles in the Saint John newspaper, The Watchman, in the fall and winter of 1876 and 1877. The History of Queens County by E. Stone Wiggins was edited by Richard and Sandra Thorne and was published in 1993 by the Queens County Historical Society, Saint John, New Brunswick. He ran unsuccessfully in 1878 to represent Queen's, a federal electoral district in New Brunswick, Canada, represented in the House of Commons of Canada. The Wiggins collection of "Scraps concerning Queen's County election, Sept.
17th, 1878" is in Library and Archives Canada. He was appointed a federal civil servant in the finance department by Prime Minister John A. Macdonald in 1878. He continued to serve as a federal civil servant until two years before his death. From 1878 to 1892, the couple lived at 237 Daly Avenue, in Ottawa, Ontario. From 1892–93, the couple lived at Arbour House in Britannia, Ottawa. Wiggins wrote The Architecture of the Heavens, containing a new theory of the universe and the extent of the deluge, and testimony of the Bible and geology in opposition to the views of Dr. Colenso. Wiggins theorized that storms, unusual tides, earthquakes and cyclones were all caused by planetary attraction, and that both visible and invisible planets could shift the Earth's centre of gravity. He claimed to have predicted the 1869 Saxby Gale. He claimed that the Sun was merely an electric light, which did not generate any heat. In 1901–02, Wiggins served as rector's warden at St. Stephen's Anglican Church (Ottawa). Wiggins theorized that the unusual proximity of Jupiter to the Earth and the action of the Moon upon Jupiter were responsible for the cold weather Canada experienced in the winter of 1904. Although Wiggins claimed to have discovered a second moon of Earth in 1882, it was reported in The Comber Herald in 1907 that astronomers were unable to verify the discovery. Wiggins' prophecies about storms and earthquakes, which were based on his astronomical calculations, appeared in Wiggins' storm herald, with almanac, 1883 and in his warning letters reprinted in various newspapers. Wiggins predicted a number of storms in February 1883. The Auckland Star reported that Wiggins' prediction of a storm on the Atlantic March 7, 1883, came to pass: "A severe gale, accompanied by a heavy fall of snow, has been experienced over the greater part of England.
Much damage has been caused on land by the wind and snowdrifts, and many disasters at sea are reported owing to the severity of the gale which raged along the coast." Wiggins predicted that a great storm would strike Earth between March 9 and 11, 1883, with the theatre of its ravages being India, the south of Europe, England and North America, and leading to the submerging of the lowlands of the Atlantic. He predicted that no vessel smaller than a Cunarder would be able to live in this tempest. Wiggins predicted that a great hurricane and tidal wave would strike America on March 9, 1883. Wiggins advised the Canadian Minister of Marine and Lords of Admiralty that all vessels should be in safe harbours not later than March 5 since minor storms precede great ones. Some Canadians accorded the prophet credit for having made a fair prediction based on a severe storm on 7 March, a few days ahead of Wiggins' prediction of the 9th. Wiggins explained that the severe snowstorm on March 7–8 was caused by one of the planets moving into position to take part in the great storm he predicted for March 9 and 11. Wiggins predicted that the Northern Lights would precede his storm; the aurora borealis was bright on March 8. On March 10 there was a light rain and hail followed by a gale in Halifax. After the storm failed to appear at Montreal, Quebec; Ottawa, Ontario; Halifax, Nova Scotia; and Toronto, Ontario, on March 9 and 11, fishermen complained about the fishing industry losses stemming from keeping the fishing fleet in port. The Hull, England, fishing fleet was hard hit in a storm on March 9, 1883, losing 37 vessels and leaving 500 families destitute. Sydney, Australia, suffered a northerly buster, which fell short of the tidal wave and hurricane predicted by Wiggins. "If the storm does not come as predicted, Wiggins must go to the foot of the class. We shall have nothing to do with Canadian prophets.
If we must have weather prophets we shall raise them ourselves and thus stimulate home industry." Wiggins lost credibility and was termed a "false prophet" and "a fool and his folly" since the storms were not as terrible as Wiggins had predicted; neither a great tidal wave nor a hurricane appeared. "This Wiggins, as a prophet, is a mushroom creation of American journalism and the ripe result of as shrewd a piece of inferential advertising as had lately been attempted. He achieved fame in the sailing of one balloon." A cartoon by Grant Hamilton from the front page of the New York Daily Graphic on Jan 17, 1883, explained Wiggins' prophecies concisely. "The Great Wiggins shall the weather prophets or the people be snuffed out? Wiggins, the weather prophet prophesied 'We will have a terrible storm in March.' The effect in a country town. This is a US signal service man trying with all the latest improved instruments to foretell the weather 48 hours he can but that is all. But Prof Wiggins has no difficulty to write with his left hand a letter foretelling the weather 3 to 6 months, with an extra month thrown in by way of variety. Prof W up in the wee ours of the night as the great storms occur in March - he prophesies and publishes it accordingly.' Some of Wiggins' predictions were fulfilled. He predicted, for example, the earthquake that struck England in 1884. In 1885, Wiggins' retirement as a weather prophet was reported in Once a Month. "The days of weather prophets are not yet over, despite the immense scientific advancements of meteorology – for who did not hear a year or so ago of "Wiggins" Predictions, and how fleets of ships actually remained in port in the United States, deterred from putting to sea by the Wiggins prophecy of terrific storms on the east coast of America? These storms did not come off and Wiggins retired into the shade." After the Charleston Earthquake of 1886, Wiggins announced that a more powerful disaster would occur at 2 p.m.
on September 29; believers in North America panicked, quit work, and dressed in "ascension robes" and waited for the end of the world. Wiggins theorized that "Earthquakes are caused by the shifting of the earth's centre of gravity. Suppose this centre of gravity to be moved, say one mile from her normal centre of gravity, or from her centre of volume; now, what must happen? Why, the parts of her surface at the end of the longer axis will be heavier and the parts at the end of the shorter axis will be lighter than normally. These disks, therefore, will grind upon each other, generating heat and lava. Hence earthquakes and volcanic eruptions. If our little visible satellite were brought down and slid around the earth from east to west, in 24 hours earthquakes would occur of such violence as to render our globe uninhabitable." Mark Twain wrote a humorous prophecy about Wiggins, which appeared in American and Canadian newspapers. "As meteor approaches Canada it will make a majestic downward swoop in the direction of Ottawa, affording a spectacle resembling a million inverted rainbows woven together, and will take the Prophet Wiggins right in the seat of his inspiration and lift him straight up into the back yard of the planet Mars, and leave him permanently there in an inconceivably mashed and unpleasant condition." Grip's cartoon about Wiggins' earthquake prophecy showed an angry Charleston resident exclaiming 'Stone Wiggins!' Grip also included the Modern Barney Buntline, a poem about Wiggins' predictions: "...When its only in an almanac it don't do so much harm, 'Cos they're limited to wind or rain or hail; But a special storm prediction causes seamen much alarm While Wiggins in a-wagging of his tail". Although the predicted earthquake and stormy weather did not take place in Charleston, there was an earthquake in Elizabethtown, Pennsylvania, and the Colima volcano erupted on September 29, 1886.
He lost credibility since the earthquake and storms were not the 'greatest blow of the century' he had predicted. Wiggins and gullible newspapers that carried his predictions were labelled cranks and fools in 1886. "A learned man may become a fool – by assuming the role of a "Weather Prophet" for instance. Wiggins and the crank who publishes and edits the Bradford Prophet are notable instances," advised the editor of the Flesherton Advance. Wiggins was advised to quit weather-propheting. "Give up weather-propheting, Mr. Wiggins, for you have proven on two occasions that you are not constituted for this line of business... Wiggins, you are about the most unmitigated and unabridged fizzle we ever heard of. In fact, you are no Weather-Prophet." American and Canadian newspapers published humorous poems, 'The Modern Barney Buntline', and humorous stories, 'The weather prophet's lament', about Wiggins' storm prophecy. A. S. Hooker criticized Wiggins and other prophets in "Great earthquakes: their history, phenomena and causes", published by W.C. Regand in 1887, for their prediction methods, predictions which did not come to pass, as well as predictions missed. Hooker thought the astronomy-based prediction methodology used by Wiggins and other prophets was weak: "the observation, made since the Charleston earthquake, that E. Stone Wiggins, (that follower of Ananias) and other "prophets" had sprinkled their predictions so thickly along the meteorologic way, that it would be impossible for an earthquake or a storm to run amiss of one of them." A. S. Hooker points out that Wiggins failed to predict the Charleston earthquake on 31 August 1886 or the aftershocks felt over a wide area of the United States in September, October and November. Wiggins also failed to predict a tornado which swept across the Gulf of Mexico on the 12th of October, 1886, which demolished the village of Sabine Pass, with a loss of 200 lives. A.S.
Hooker notes that Wiggins changed his prediction for September 29, 1888, from a great storm of unparalleled violence that would sweep across the Atlantic Ocean and traverse the country until exhausting its energies at the Rocky Mountains, to an earthquake in the Gulf of Mexico and Central America; nevertheless, the 29th was a calm day without storm or earthquake. Hooker advised that Wiggins had received notice to quit prophesying destructive storms, earthquakes and other natural disturbances, otherwise he would be dismissed from his position as a civil servant of the Dominion. Hooker wrote, "This is a great blow to Wiggins (not one that he prophesied), but a relief to those credulous, or nervously inclined." Wiggins, an amateur epidemiologist, theorized that the cause of a yellow fever epidemic in Jacksonville, Florida, in 1888 was astronomical: "The planets were in the same line as the sun and earth and this produced, besides Cyclones, Earthquakes, etc., a denser atmosphere holding more carbon and creating Microbes. Mars had an uncommonly dense atmosphere, but its inhabitants were probably protected from the fever by their newly discovered canals, which were perhaps made to absorb carbon and prevent the disease." During an interview with The Times on December 7, 1888, Wiggins explained that he hoped the evidence from the eclipse of the Sun on 1 January 1889 would prove his theories, which he had held since 1864. He theorized that the photosphere of the Sun is electricity, which repels and attracts comets through space by the law of like and unlike electricities. He believed that the coronal streamers are meteors carried through space on the trail of comets. He thought the ridges and lines on Mars observed through the Lick telescope were genuine Mars canals which had been excavated by the Martians for irrigation. He theorized that Encke's comet must become a primary or secondary planet in a few years.
If Encke's comet became another moon of the Earth, Wiggins theorized that the oceans would rise 20 feet or more in a few hours. The flood would overwhelm both continents; Australia and the Gulf Stream would be no more. Wiggins theorized that floods and earthquakes are caused by dark or tailless comets, invisible through telescopes, passing near the Earth's surface. Wiggins explained the discrepancies of the storms and earthquakes he predicted by his discovery of a dark second moon of the Earth, which he theorized deflected storms or interfered with earthquakes. The second satellite was termed dark because it eluded the telescopes and analytical spectroscopes of astronomers. On New Year's Day, 1889, Prof. Wiggins attended the Governor General of Canada's reception at Ottawa. After being introduced to His Excellency The Marquess of Lansdowne and the Crown Ministers, Sir John A. Macdonald offered his hand, saying: "Why, Wiggins, you go by like a comet." The professor replied: "Comets always go swiftly by the sun" and, later, that he was "greatly obliged to the Prime Minister for catching him at perihelion." Prof. Wiggins was asked by The New York Times on November 24, 1892, to comment on an alleged collision between the Earth and a comet, reported by Prof. Snyder of Philadelphia. Prof. Wiggins stated that no such collision had occurred, since there was no comet near the Earth at the time. Prof. Wiggins theorized that a comet could not collide with the Earth because planets and comets are electrically positive and therefore repel each other: "If a comet were to strike the earth it would smash the comet into meteoric dust in twenty minutes." Wiggins, a teacher and amateur meteorologist, and his wife, the writer Susie Anna Wiggins, built Arbour House (1892–93), a Designated Heritage Property (1994), as their summer home in Britannia.
Currently housing the Arbour House Studios, the corner tower, shingled gables and irregular plan are typical of the Queen Anne Revival style. In 1891, Wiggins wrote a science fiction novel, "Jack Suehard; or, Life on Jupiter", which considered what the people of the Earth will be like at the end of the next twenty million years. It featured a "stanlon", "a mirror twenty feet square, which is in every house and a conspicuous object in every street of their cities," which provided instantaneous image transmission, essentially "the Jovian newspaper, theatre, pulpit, and tribune." In 1893, Wiggins predicted in The Newmarket Era that the temperature in Canada was getting warmer: "In time orange trees will blossom on the banks of the St. Lawrence River and the present products of the Dominion will flourish on the shores of Hudson Bay." In 1895, Wiggins predicted in The Newmarket Era that the Great Lakes of North America were decreasing every year and that Niagara Falls would cease to be. The Windsor Evening Record reported on September 25, 1895, on popular feeling when the Wiggins weather predictions didn't come to pass: "Some people have lived in a state of great trepidation since the 17th, owing to the prophecy of E. Stone Wiggins, and now that the storm has failed to connect these people are kicking. Unhappy Wiggins." In 1896, Wiggins claimed in the Newmarket Era that a tornado in St. Louis, Missouri, that year was caused by the network of telegraph wires, and predicted that a similar fate would befall Canadian cities unless all wires were buried. In 1897, he claimed that a meteorite that fell near Binghamton, New York, in November of that year contained a message from the inhabitants of the planet Mars in the form of hieroglyphs, and advanced the theory that such messages had been sent before.
He suggested that the Martians sent such meteorites to Earth by utilizing an "electric force", launching the projectiles towards passing comets which would draw the meteorite to Earth, or by launching the projectile into an orbit which would put it ahead of the Martian satellite Phobos, postulating that the "highly electrified" projectile would be repelled by Phobos with enough force to send it to Earth. E. Stone Wiggins served as Commodore of the Britannia Bay Boathouse Club in 1899. Society photographer William James Topley photographed Wiggins and his wife, Mrs. E. Stone Wiggins, in 1907. Wiggins theorized that the cold and wet summer of 1909 resulted from an unrecognized satellite of the Earth. He died on August 14, 1910, in Arbour House, Britannia, at age 70. The couple's gravestone at St Luke Anglican Church Cemetery, Young's Cove Road, Queen's County, New Brunswick, reads: Professor E. Stone Wiggins B.A., M.A., M.D., L.L.D. Canada's Distinguished Scientist and Scholar. DEC. 4 1839—AUG. 14 1910. His wife Susie. "The Un-Canadians", a 2007 article in Beaver Magazine, includes Ezekiel Stone Wiggins, Jeffery Amherst, 1st Baron Amherst, and Robert Monckton in a list of people in the history of Canada who were considered contemptible: "Civil servant and author Ezekiel Stone Wiggins manipulated the people's obsession with the weather and forecasted a storm that never came." Family Susan Anna Wiggins was born on April 6, 1846. She was privately educated in Latin and Greek. At 16, she married her cousin, Ezekiel Stone Wiggins. Using her pen name 'Gunhilda', Susan Anna was an author and poet. In 1881, Susan Anna Wiggins used the nom de plume 'Gunhilda' to write the Gunhilda Letters—Marriage with a Deceased Husband's Sister: Letters of a Lady to [John Travers Lewis], the Right Rev. the Lord Bishop of Ontario, which consisted of letters of support for Mr. Girouard's bill regarding the legalization of marriage with a deceased wife's sister.
The Gunhilda Letters were dedicated to the members of the Senate of Canada and of the House of Commons of Canada who supported Mr. Girouard's Bill. Nicholas Flood Davin complimented the Gunhilda letters "for felicity of expression, cogency of reasoning, fierceness of invective, keenness of satire and piquancy of style" and "Nothing equal to them has appeared in the Canadian press for years." Sir David Lewis Macpherson invited Susan Anna Wiggins to take a seat on his right on the day that the 'Gunhilda' bill received its second reading in the Red Chamber, Parliament of Canada; this honour had previously been accorded only to men or to the wife of a Governor General of Canada. The artist F. A. T. Dunbar sculpted a bust of Susan Anna Wiggins, which was placed in the Canadian Parliamentary Library at Ottawa. Mrs. Wiggins wrote a biography of her husband, Prof. E. Stone Wiggins. Mrs. Wiggins was included in Henry James Morgan's Types of Canadian women and of women who are or have been connected with Canada (Volume 1), published by Briggs, Toronto, in 1903. She died on May 6, 1921. Her obituary read, 'At all events, let us honor her, and remember her, the lone woman great, intellectual, marvelously well-read and cultured, a woman, who in her own way, stirred Canada as few women have ever stirred her'. She was buried with her husband in St. Luke's Anglican Church Cemetery, Youngs Cove, Queens County, New Brunswick, Canada.

Bibliography
"Wiggins' storm herald, with almanac, 1883" by Ezekiel S Wiggins, Nepean, Ontario
"Universalism unfounded being a complete analysis and refutation of the system" (1867) by Ezekiel S Wiggins, Nepean, Ontario
"The architecture of the heavens containing a new theory of the universe and the extent of the deluge, and testimony of the Bible and geology in opposition to the views of Dr. Colenso" (1864) by Ezekiel S Wiggins, Nepean, Ontario
"The history of Queens County" (1893) by Ezekiel S Wiggins, Nepean, Ontario
"The White family in New Brunswick: an historical sketch" by Ezekiel Stone Wiggins, Saint John: The Watchman, 1903 (AMICUS No. 11242420)

Electoral record

References

External links
E. Stone Wiggins, Library and Archives Canada
Ruby M Cusack - History of Queens County - E Stone Wiggins
The Ghost of Wiggins, New York Times
Susan Millar Williams and Stephen G. Hoffius, "Upheaval in Charleston: How the great Charleston earthquake forever changed an iconic southern city"

1839 births 1910 deaths Canadian non-fiction writers Canadian science fiction writers Prophets Weather lore Writers from Ottawa
Ezekiel Stone Wiggins
https://en.wikipedia.org/wiki/Web%20Services%20Description%20Language
The Web Services Description Language (WSDL) is an XML-based interface description language that is used for describing the functionality offered by a web service. The acronym is also used for any specific WSDL description of a web service (also referred to as a WSDL file), which provides a machine-readable description of how the service can be called, what parameters it expects, and what data structures it returns. Therefore, its purpose is roughly like a type signature in a programming language. The latest version of WSDL, which became a W3C recommendation in 2007, is WSDL 2.0. The meaning of the acronym has changed from version 1.1, where the "D" stood for "Definition".

Description
The WSDL describes services as collections of network endpoints, or ports. The WSDL specification provides an XML format for documents for this purpose. The abstract definitions of ports and messages are separated from their concrete use or instance, allowing the reuse of these definitions. A port is defined by associating a network address with a reusable binding, and a collection of ports defines a service. Messages are abstract descriptions of the data being exchanged, and port types are abstract collections of supported operations. The concrete protocol and data format specifications for a particular port type constitute a reusable binding, where the operations and messages are then bound to a concrete network protocol and message format. In this way, WSDL describes the public interface to the Web service. WSDL is often used in combination with SOAP and an XML Schema to provide Web services over the Internet. A client program connecting to a Web service can read the WSDL file to determine what operations are available on the server. Any special datatypes used are embedded in the WSDL file in the form of XML Schema. The client can then use SOAP to actually call one of the operations listed in the WSDL file, using for example XML over HTTP.
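As a rough illustration of the last point, a client can discover the available operations by reading a WSDL document. The sketch below parses a minimal, hypothetical WSDL 2.0 fragment with Python's standard library; a real client would use a full WSDL/SOAP toolkit, and the interface, operation, and endpoint names here are made up for the example.

```python
import xml.etree.ElementTree as ET

# Minimal, illustrative WSDL 2.0 fragment (all names are hypothetical).
WSDL_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<description xmlns="http://www.w3.org/ns/wsdl"
             xmlns:tns="http://example.com/sample"
             targetNamespace="http://example.com/sample">
  <interface name="Interface1">
    <operation name="Get" pattern="http://www.w3.org/ns/wsdl/in-out"/>
    <operation name="Put" pattern="http://www.w3.org/ns/wsdl/in-only"/>
  </interface>
  <service name="Service1" interface="tns:Interface1">
    <endpoint name="HttpEndpoint" address="http://example.com/rest/"/>
  </service>
</description>"""

# WSDL 2.0 elements live in this namespace, so element lookups
# must use namespace-qualified tag names.
WSDL_NS = "{http://www.w3.org/ns/wsdl}"

def list_operations(wsdl_text):
    """Return {interface name: [operation names]} from a WSDL 2.0 document."""
    root = ET.fromstring(wsdl_text)
    ops = {}
    for iface in root.findall(f"{WSDL_NS}interface"):
        ops[iface.get("name")] = [
            op.get("name") for op in iface.findall(f"{WSDL_NS}operation")
        ]
    return ops

print(list_operations(WSDL_DOC))  # {'Interface1': ['Get', 'Put']}
```

This only inspects the abstract interface part; a complete client would also read the bindings and endpoints to learn which protocol and address to use for each operation.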
The current version of the specification is 2.0; version 1.1 has not been endorsed by the W3C, but version 2.0 is a W3C recommendation. WSDL 1.2 was renamed WSDL 2.0 because of its substantial differences from WSDL 1.1. By accepting binding to all the HTTP request methods (not only GET and POST as in version 1.1), the WSDL 2.0 specification offers better support for RESTful web services, and is much simpler to implement. However, support for this specification is still poor in software development kits for Web Services, which often offer tools only for WSDL 1.1. For example, version 2.0 of the Business Process Execution Language (BPEL) only supports WSDL 1.1.

Example WSDL file

<?xml version="1.0" encoding="UTF-8"?>
<description xmlns="http://www.w3.org/ns/wsdl"
             xmlns:tns="http://www.tmsws.com/wsdl20sample"
             xmlns:whttp="http://schemas.xmlsoap.org/wsdl/http/"
             xmlns:wsoap="http://schemas.xmlsoap.org/wsdl/soap/"
             targetNamespace="http://www.tmsws.com/wsdl20sample">

  <documentation>
    This is a sample WSDL 2.0 document.
  </documentation>

  <!-- Abstract type -->
  <types>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xmlns="http://www.tmsws.com/wsdl20sample"
               targetNamespace="http://www.example.com/wsdl20sample">
      <xs:element name="request"> ... </xs:element>
      <xs:element name="response"> ... </xs:element>
    </xs:schema>
  </types>

  <!-- Abstract interfaces -->
  <interface name="Interface1">
    <fault name="Error1" element="tns:response"/>
    <operation name="Get" pattern="http://www.w3.org/ns/wsdl/in-out">
      <input messageLabel="In" element="tns:request"/>
      <output messageLabel="Out" element="tns:response"/>
    </operation>
  </interface>

  <!-- Concrete Binding Over HTTP -->
  <binding name="HttpBinding" interface="tns:Interface1"
           type="http://www.w3.org/ns/wsdl/http">
    <operation ref="tns:Get" whttp:method="GET"/>
  </binding>

  <!-- Concrete Binding with SOAP -->
  <binding name="SoapBinding" interface="tns:Interface1"
           type="http://www.w3.org/ns/wsdl/soap"
           wsoap:protocol="http://www.w3.org/2003/05/soap/bindings/HTTP/"
           wsoap:mepDefault="http://www.w3.org/2003/05/soap/mep/request-response">
    <operation ref="tns:Get"/>
  </binding>

  <!-- Web Service offering endpoints for both bindings -->
  <service name="Service1" interface="tns:Interface1">
    <endpoint name="HttpEndpoint" binding="tns:HttpBinding"
              address="http://www.example.com/rest/"/>
    <endpoint name="SoapEndpoint" binding="tns:SoapBinding"
              address="http://www.example.com/soap/"/>
  </service>
</description>

History
WSDL 1.0 (Sept. 2000) was developed by IBM, Microsoft, and Ariba to describe Web Services for their SOAP toolkit. It was built by combining two service description languages: NASSL (Network Application Service Specification Language) from IBM and SDL (Service Description Language) from Microsoft. WSDL 1.1, published in March 2001, is the formalization of WSDL 1.0. No major changes were introduced between 1.0 and 1.1. WSDL 1.2 (June 2003) was a working draft at W3C, but has become WSDL 2.0. According to W3C, WSDL 1.2 is easier and more flexible for developers than the previous version. WSDL 1.2 attempts to remove non-interoperable features and also defines the HTTP 1.1 binding better. WSDL 1.2 was not supported by most SOAP servers/vendors. WSDL 2.0 became a W3C recommendation in June 2007.
WSDL 1.2 was renamed to WSDL 2.0 because it has substantial differences from WSDL 1.1. The changes are the following:
Added further semantics to the description language
Removed message constructs
Operator overloading not supported
PortTypes renamed to interfaces
Ports renamed to endpoints

Subset WSDL
Subset WSDL (SWSDL) is a WSDL with a subset of the operations of an original WSDL. A developer can use a SWSDL to access a Subset Service, and thus handle a subset of the web service code. A Subset WSDL can be used to perform web service testing and top-down development. Slicing of a web service can be done using a Subset WSDL to access a Subset Service. Subset Services can be categorized into layers using SWSDLs. SWSDLs are used for web service analysis, testing and top-down development. AWSCM is a tool that can identify subset operations in a WSDL file to construct a subset WSDL.

Security considerations
Since WSDL files are an XML-based specification for describing a web service, they are susceptible to attack. To mitigate the vulnerability of these files, limiting access to generated WSDL files, setting proper access restrictions on WSDL definitions, and avoiding unnecessary definitions in web services are encouraged.

See also
SDEP
SOAP
Web Application Description Language

References

External links
WSDL 1.0 Specification
WSDL 1.1 Specification
WSDL 2.0 Specification
Part 0: Primer (Latest Version)
Part 1: Core (Latest Version)
Part 2: Adjuncts (Latest Version)
Web Services Description Working Group
XML protocol activity
JSR-110: Java APIs for WSDL
JSR 172: Java ME Web Services Specification
Online WSDL Validator
WSDL Java Bindings for XMLBeans and JAXB
RELAX-WS: Simple web service definition language based on RELAX NG Compact Syntax
Kevin Liu. A Look at WSDL 2.0. Date accessed: 20 April 2010.

XML-based standards Web service specifications World Wide Web Consortium standards
Web Services Description Language
https://en.wikipedia.org/wiki/Sparassis%20crispa
Sparassis crispa is a species of fungus in the family Sparassidaceae. It is sometimes called cauliflower fungus. Description S. crispa grows in an entangled globe that is up to in diameter. The lobes, which carry the spore-bearing surface, are flat and wavy, resembling lasagna noodles, coloured white to creamy yellow. When young they are tough and rubbery but later they become soft. They are monomitic. The odour is pleasant and the taste of the flesh mild. The spore print is cream, the smooth oval spores measuring about 5–7 μm by 3.5–5 μm. The flesh contains clamp connections. Similar species The less well-known S. brevipes, found in Europe, can be distinguished by its less crinkled, zoned folds and lack of clamp connections. Distribution and habitat This species is fairly common in Great Britain and temperate Europe (but not in the boreal zone), from July to November. It is a brown rot fungus, found growing at the base of conifer trunks, often pines, but also spruce, cedar, larch and others. In the North American Pacific Northwest, it can be found from August to November. Uses It is considered a good edible fungus when young and fresh, though it is difficult to clean. (A toothbrush and running water are recommended.) One French cookbook, which gives four recipes for this species, says that grubs and pine needles can get caught up in holes in the jumbled mass of flesh. The Sparassis should be blanched in boiling water for 2–3 minutes before being added to the rest of the dish. It should be cooked slowly. It can also be preserved in oil, cold water or by drying. See also Sparassis spathulata References External links Edible fungi Fungi described in 1781 Fungi of Europe Fungi of North America Polyporales Fungus species
Sparassis crispa
https://en.wikipedia.org/wiki/Cyclic%20homology
In noncommutative geometry and related branches of mathematics, cyclic homology and cyclic cohomology are certain (co)homology theories for associative algebras which generalize the de Rham (co)homology of manifolds. These notions were independently introduced by Boris Tsygan (homology) and Alain Connes (cohomology) in the 1980s. These invariants have many interesting relationships with several older branches of mathematics, including de Rham theory, Hochschild (co)homology, group cohomology, and K-theory. Contributors to the development of the theory include Max Karoubi, Yuri L. Daletskii, Boris Feigin, Jean-Luc Brylinski, Mariusz Wodzicki, Jean-Louis Loday, Victor Nistor, Daniel Quillen, Joachim Cuntz, Ryszard Nest, Ralf Meyer, and Michael Puschnigg.

Hints about definition
The first definition of the cyclic homology of a ring A over a field of characteristic zero, denoted $HC_n(A)$ or $H_n^{\lambda}(A)$, proceeded by the means of the following explicit chain complex related to the Hochschild homology complex of A, called the Connes complex: For any natural number n ≥ 0, define the operator $t_n$ which generates the natural cyclic action of $\mathbb{Z}/(n+1)\mathbb{Z}$ on the $(n+1)$-fold tensor product of A:
$$t_n(a_0 \otimes \cdots \otimes a_n) = (-1)^n\, a_n \otimes a_0 \otimes \cdots \otimes a_{n-1}.$$
Recall that the Hochschild complex groups of A with coefficients in A itself are given by setting $C_n(A) = A^{\otimes (n+1)}$ for all n ≥ 0. Then the components of the Connes complex are defined as $C_n^{\lambda}(A) := C_n(A)/(1 - t_n)$, and the differential is the restriction of the Hochschild differential to this quotient. One can check that the Hochschild differential does indeed factor through to this space of coinvariants. Connes later found a more categorical approach to cyclic homology using a notion of cyclic object in an abelian category, which is analogous to the notion of simplicial object. In this way, cyclic homology (and cohomology) may be interpreted as a derived functor, which can be explicitly computed by the means of the (b, B)-bicomplex. If the field k contains the rational numbers, the definition in terms of the Connes complex calculates the same homology.
One of the striking features of cyclic homology is the existence of a long exact sequence connecting Hochschild and cyclic homology. This long exact sequence is referred to as the periodicity sequence.

Case of commutative rings
Cyclic homology of the commutative algebra A of regular functions on an affine algebraic variety over a field k of characteristic zero can be computed in terms of Grothendieck's algebraic de Rham complex. In particular, if the variety V = Spec A is smooth, the cyclic homology of A is expressed in terms of the de Rham cohomology of V as follows:
$$HC_n(A) \cong \Omega^n_{A/k}/d\Omega^{n-1}_{A/k} \oplus \bigoplus_{i \geq 1} H^{n-2i}_{dR}(V).$$
This formula suggests a way to define de Rham cohomology for a 'noncommutative spectrum' of a noncommutative algebra A, which was extensively developed by Connes.

Variants of cyclic homology
One motivation of cyclic homology was the need for an approximation of K-theory that is defined, unlike K-theory, as the homology of a chain complex. Cyclic cohomology is in fact endowed with a pairing with K-theory, and one hopes that this pairing is non-degenerate. A number of variants have been defined whose purpose is to fit better with algebras with topology, such as Fréchet algebras, C*-algebras, etc. The reason is that K-theory behaves much better on topological algebras such as Banach algebras or C*-algebras than on algebras without additional structure. Since, on the other hand, cyclic homology degenerates on C*-algebras, the need arose to define modified theories. Among them are entire cyclic homology due to Alain Connes, analytic cyclic homology due to Ralf Meyer, and asymptotic and local cyclic homology due to Michael Puschnigg. The last one is very close to K-theory, as it is endowed with a bivariant Chern character from KK-theory.

Applications
One of the applications of cyclic homology is to find new proofs and generalizations of the Atiyah–Singer index theorem. Among these generalizations are index theorems based on spectral triples and deformation quantization of Poisson structures.
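The periodicity sequence mentioned above can be written explicitly. In standard notation (a common form of Connes' exact sequence, relating Hochschild homology $HH$ and cyclic homology $HC$):

```latex
\cdots \longrightarrow HH_n(A) \xrightarrow{\;I\;} HC_n(A)
       \xrightarrow{\;S\;} HC_{n-2}(A) \xrightarrow{\;B\;} HH_{n-1}(A)
       \longrightarrow \cdots
```

Here $S$ is the periodicity operator of degree $-2$; inverting it leads to the periodic cyclic theory.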
An elliptic operator D on a compact smooth manifold defines a class in K-homology. One invariant of this class is the analytic index of the operator. This is seen as the pairing of the class [D] with the element 1 in HC(C(M)). Cyclic cohomology can be seen as a way to get higher invariants of elliptic differential operators not only for smooth manifolds, but also for foliations, orbifolds, and singular spaces that appear in noncommutative geometry.

Computations of algebraic K-theory
The cyclotomic trace map is a map from algebraic K-theory (of a ring A, say) to cyclic homology:
$$\operatorname{tr} : K_n(A) \to HC_{n-1}(A).$$
In some situations, this map can be used to compute K-theory. A pioneering result in this direction is a theorem of Goodwillie: it asserts that the map between the relative K-theory of A with respect to a nilpotent two-sided ideal I and the relative cyclic homology (measuring the difference between K-theory or cyclic homology of A and of A/I) is an isomorphism for n ≥ 1. While Goodwillie's result holds for arbitrary rings, a quick reduction shows that it is in essence only a statement about $\mathbb{Q}$-algebras. For rings not containing Q, cyclic homology must be replaced by topological cyclic homology in order to keep a close connection to K-theory. (If Q is contained in A, then cyclic homology and topological cyclic homology of A agree.) This is in line with the fact that (classical) Hochschild homology is less well-behaved than topological Hochschild homology for rings not containing Q. A far-reaching generalization of Goodwillie's result was later proved, stating that for a commutative ring A such that the Henselian lemma holds with respect to the ideal I, the relative K-theory is isomorphic to relative topological cyclic homology (without tensoring both with Q). This result also encompasses a theorem of Gabber, asserting that in this situation the relative K-theory spectrum modulo an integer n which is invertible in A vanishes.
Gabber's result and Suslin rigidity have also been used to reprove Quillen's computation of the K-theory of finite fields.

See also
Noncommutative geometry

Notes

References
Errata

External links
A personal note on Hochschild and Cyclic homology

Homological algebra
Cyclic homology
https://en.wikipedia.org/wiki/HINT2
Histidine triad nucleotide binding protein 2 (HINT2) is a mitochondrial protein that in humans is encoded by the HINT2 gene on chromosome 9. This protein is an AMP-lysine hydrolase and phosphoamidase and may contribute to tumor suppression. Structure As a member of the histidine triad nucleotide-binding (Hint) protein family, which is a subfamily of the histidine triad (HIT) family, HINT2 contains a conserved histidine and HIT sequence motif (His-X-His-X-His-X-X), and the latter two histidines contribute to a catalytic triad. The 163-amino acid protein encoded by this gene forms a 17-kDa homodimer. Compared to other members of the Hint family, HINT2 has a 61% sequence homology to HINT1 and 28% sequence homology to HINT3. When compared with HINT1, the 35–amino acid extension at the HINT2 N-terminal corresponds to a predicted mitochondria import signal. Function HINT2 is a member of the HIT superfamily and Hint subfamily, which are characterized as nucleotide hydrolases and transferases that act on the alpha-phosphate of ribonucleotides. The Hint family is the oldest within the HIT superfamily and thus, its members are highly conserved among eukaryotes and archaebacteria. The Hint proteins function as AMP-lysine hydrolases and phosphoramidases. In mammals, HINT2 is expressed in the liver, adrenal cortex, and pancreas and localizes to the mitochondria within their cells. Specifically, the protein is located in the inner mitochondrial membrane, facing the mitochondrial matrix. This positioning likely facilitates the transport of cholesterol from the cytosol to the matrix, which is necessary for steroidogenesis, by providing a contact site for the hydrophobic molecule and allowing it to cross the mitochondrial intermembrane space. HINT2 regulates steroidogenesis through calcium-dependent and calcium-independent signalling pathways that may serve to maintain a favorable mitochondrial potential. 
Its role in calcium homeostasis may also contribute to its proapoptotic function in hepatocytes and other non-steroidogenic cells, though the exact mechanism remains unclear.

Clinical significance
Hint2, one of the three members of the Hint family of proteins, is localized to the mitochondria of various cell types. In human adrenocarcinoma cells, Hint2 modulates Ca2+ handling by mitochondria. In all living organisms, intracellular calcium controls a wide variety of physiological processes. Extracellular stimuli generate temporally organized Ca2+ signals, which most of the time occur as repetitive spikes. The frequency of these oscillations controls the nature and the extent of the cellular response. Ca2+ oscillations originate from the repetitive opening of the inositol 1,4,5-trisphosphate (InsP3) receptors, which are Ca2+ channels embedded in the membrane of the endoplasmic reticulum (ER). Opening of these channels is initiated by the stimulus-induced rise in InsP3; because their activity is biphasically regulated by the level of cytoplasmic Ca2+, oscillations can occur. Mitochondria also affect cytoplasmic Ca2+ signals. They can both buffer cytosolic Ca2+ changes and release Ca2+. At rest, the intramitochondrial ([Ca2+]m) and cytosolic ([Ca2+]i) calcium concentrations are similar, of the order of 100 nM. The Hint family has been implicated in tumor suppression. Hint2, a member of the superfamily of histidine triad proteins, has been localized exclusively in mitochondria, near the contact sites of the inner membrane. This enzyme is highly expressed in the liver, where it has been shown to stimulate mitochondrial lipid metabolism, respiration, and glucose homeostasis. Hint2 modulates cytoplasmic and mitochondrial Ca2+ dynamics by stimulating the activity of the mitochondrial respiratory chain. It appears that the absence of Hint2 leads to a premature opening of the mitochondrial permeability transition pore (mPTP) in mitochondrial suspensions.
As such, HINT2 plays a prominent role in mitochondrial cell death signaling (e.g. apoptosis) and in ischemia-reperfusion injury (for instance during heart attacks) through calcium homeostasis. In particular, HINT2 is also observed to be upregulated in breast, pancreatic, and colon cancer cells, while it is downregulated in hepatocellular carcinoma and endometrial cancer. Its exact role in tumor suppression remains unknown, though studies suggest it may promote apoptosis in hepatocellular carcinoma and endometrial cancer. In double knockout Hint2 mice, higher acylation and morphological alterations were observed in the mitochondria, suggesting that Hint2 may regulate glucose and lipid metabolism. Interactions Currently, HINT2 has no known protein-protein interaction partners. See also Histidine triad nucleotide-binding protein 1 (HINT1) References Proteins
HINT2
https://en.wikipedia.org/wiki/Photometer
A photometer is an instrument that measures the strength of electromagnetic radiation in the range from ultraviolet to infrared, including the visible spectrum. Most photometers convert light into an electric current using a photoresistor, photodiode, or photomultiplier. Photometers measure:
Illuminance
Irradiance
Light absorption
Scattering of light
Reflection of light
Fluorescence
Phosphorescence
Luminescence

Historically, photometry was done by estimation, comparing the luminous flux of a source with a standard source. By the 19th century, common photometers included Rumford's photometer, which compared the depths of shadows cast by different light sources, and Ritchie's photometer, which relied on equal illumination of surfaces. Another type was based on the extinction of shadows. Modern photometers utilize photoresistors, photodiodes or photomultipliers to detect light. Some models employ photon counting, measuring light by counting individual photons; they are especially useful where the irradiance is low. Photometers have wide-ranging applications, including photography, where they determine the correct exposure, and science, where they are used in absorption spectroscopy to calculate the concentration of substances in a solution, infrared spectroscopy to study the structure of substances, and atomic absorption spectroscopy to determine the concentration of metals in a solution.

History
Before electronic light-sensitive elements were developed, photometry was done by estimation by the eye. The relative luminous flux of a source was compared with a standard source. The photometer is placed such that the illuminance from the source being investigated is equal to that of the standard source, as the human eye can judge equal illuminance. The relative luminous fluxes can then be calculated, as the illuminance decreases in proportion to the inverse square of distance.
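The inverse-square balancing just described can be sketched numerically. The little function below simply encodes the proportionality (the function name and the balancing scenario are illustrative, not part of any historical instrument's specification):

```python
def relative_flux(d_unknown, d_standard):
    """When two sources produce equal illuminance at the photometer,
    the ratio of their luminous fluxes equals the square of the ratio
    of their distances (inverse-square law)."""
    return (d_unknown / d_standard) ** 2

# A source balanced at twice the distance of the standard source
# must be four times as strong.
print(relative_flux(2.0, 1.0))  # 4.0
```

This is exactly the reasoning behind the Rumford and oil-spot photometers: once equal illuminance is judged by eye, only the two distances need to be measured.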
A standard example of such a photometer consists of a piece of paper with an oil spot on it that makes the paper slightly more transparent. When the spot is not visible from either side, the illuminance from the two sides is equal. By 1861, three types were in common use. These were Rumford's photometer, Ritchie's photometer, and photometers that used the extinction of shadows, which was considered to be the most precise. Rumford's photometer Rumford's photometer (also called a shadow photometer) depended on the principle that a brighter light would cast a deeper shadow. The two lights to be compared were used to cast a shadow onto paper. If the shadows were of the same depth, the relative distances of the lights would indicate their relative intensities (e.g. a light twice as far away must be four times as intense). Ritchie's photometer Ritchie's photometer depends upon equal illumination of surfaces. It consists of a box (a,b) six or eight inches long, and one in width and depth. In the middle, a wedge of wood (f,e,g) was angled upwards and covered with white paper. The user's eye looked through a tube (d) at the top of the box. The height of the apparatus was also adjustable via the stand (c). The lights to compare were placed at the sides of the box (m, n), which illuminated the paper surfaces so that the eye saw both surfaces at once. By changing the position of the lights, they were made to illuminate both surfaces equally, with the ratio of intensities corresponding to the square of the ratio of distances. Method of extinction of shadows This type of photometer depended on the fact that if a light throws the shadow of an opaque object onto a white screen, there is a certain distance at which, if a second light is brought there, it obliterates all traces of the shadow. Principle of photometers Most photometers detect the light with photoresistors, photodiodes or photomultipliers.
To analyze the light, the photometer may measure the light after it has passed through a filter or through a monochromator for determination at defined wavelengths or for analysis of the spectral distribution of the light. Photon counting Some photometers measure light by counting individual photons rather than incoming flux. The operating principles are the same, but the results are given in units such as photons/cm2 or photons·cm−2·sr−1 rather than W/cm2 or W·cm−2·sr−1. Because of their photon-counting nature, these instruments are limited to observations where the irradiance is low. The measurable irradiance is limited by the time resolution of the detector and its associated readout electronics. With current technology this is in the megahertz range. The maximum irradiance is also limited by the throughput and gain parameters of the detector itself. The light-sensing element in photon-counting devices at NIR, visible and ultraviolet wavelengths is a photomultiplier, to achieve sufficient sensitivity. In airborne and space-based remote sensing, such photon counters are used at the upper reaches of the electromagnetic spectrum, such as the X-ray to far-ultraviolet range. This is usually due to the lower radiant intensity of the objects being measured, as well as the difficulty of measuring light at higher energies using its particle-like nature, as compared to the wavelike nature of light at lower frequencies. Conversely, radiometers are typically used for remote sensing from the visible through infrared to radio frequency range. Photography Photometers are used to determine the correct exposure in photography. In modern cameras, the photometer is usually built in. As the illumination of different parts of the picture varies, advanced photometers measure the light intensity in different parts of the potential picture and use an algorithm to determine the most suitable exposure for the final picture, adapting the algorithm to the type of picture intended (see Metering mode).
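The photon-counting units mentioned above (photons/cm2 versus W/cm2) are linked through the energy of a single photon, E = hc/λ. A minimal sketch (function names are illustrative; the example rate and wavelength are made-up values):

```python
# Convert a photon count rate (photons/cm^2/s) to a radiometric
# irradiance (W/cm^2) by multiplying by the energy of one photon.

PLANCK_H = 6.62607015e-34  # Planck constant, J*s
SPEED_C = 2.99792458e8     # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy of a single photon, in joules: E = h*c/lambda."""
    return PLANCK_H * SPEED_C / wavelength_m

def irradiance_from_counts(photons_per_cm2_s, wavelength_m):
    """Irradiance in W/cm^2 corresponding to a photon count rate."""
    return photons_per_cm2_s * photon_energy(wavelength_m)
```

For 500 nm light, a rate of 10^6 photons/cm2/s corresponds to only about 4×10−13 W/cm2, which illustrates why photon counting suits low-irradiance observations.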
Historically, a photometer was separate from the camera and known as an exposure meter. Advanced photometers could be used to measure the light from the potential picture as a whole, to measure elements of the picture and ascertain that the most important parts are optimally exposed, or to measure the incident light to the scene with an integrating adapter. Visible light reflectance photometry A reflectance photometer measures the reflectance of a surface as a function of wavelength. The surface is illuminated with white light, and the reflected light is measured after passing through a monochromator. This type of measurement has mainly practical applications, for instance in the paint industry to characterize the colour of a surface objectively. UV and visible light transmission photometry These are optical instruments for measurement of the absorption of light of a given wavelength (or a given range of wavelengths) by coloured substances in solution. From the light absorption, Beer's law makes it possible to calculate the concentration of the coloured substance in the solution. Due to its wide range of application and its reliability and robustness, the photometer has become one of the principal instruments in biochemistry and analytical chemistry. Absorption photometers for work in aqueous solution operate in the ultraviolet and visible ranges, from wavelengths around 240 nm up to 750 nm. The principle of spectrophotometers and filter photometers is that (as far as possible) monochromatic light is allowed to pass through a container (cell) with optically flat windows containing the solution. It then reaches a light detector, which measures the intensity of the light compared to the intensity after passing through an identical cell with the same solvent but without the coloured substance.
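The intensity comparison just described leads directly to the Beer-Lambert calculation. A minimal sketch (function names and the numbers are illustrative, not from the source):

```python
# Absorbance from the ratio of transmitted intensities (sample cell vs.
# reference cell), then concentration from Beer's law, A = epsilon * l * c.
import math

def absorbance(i_sample, i_reference):
    """A = -log10(I_sample / I_reference)."""
    return -math.log10(i_sample / i_reference)

def concentration(a, molar_absorptivity, path_length_cm):
    """Beer's law rearranged: c = A / (epsilon * l), in mol/L."""
    return a / (molar_absorptivity * path_length_cm)

# 10% transmittance gives A = 1; with a made-up epsilon of 2000 L/(mol*cm)
# and a 1 cm cell, the concentration comes out as 5e-4 mol/L.
a = absorbance(0.10, 1.00)
c = concentration(a, 2000.0, 1.0)
```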
From the ratio between the light intensities, knowing the capacity of the coloured substance to absorb light (the absorptivity of the coloured substance, or the photon cross-section of its molecules at a given wavelength), it is possible to calculate the concentration of the substance using Beer's law. Two types of photometers are used: spectrophotometers and filter photometers. In spectrophotometers a monochromator (with a prism or a grating) is used to obtain monochromatic light of one defined wavelength. In filter photometers, optical filters are used to give the monochromatic light. Spectrophotometers can thus easily be set to measure the absorbance at different wavelengths, and they can also be used to scan the spectrum of the absorbing substance. They are in this way more flexible than filter photometers, and also give higher optical purity of the analyzing light, so they are preferred for research purposes. Filter photometers are cheaper, more robust and easier to use, and are therefore used for routine analysis. Photometers for microtiter plates are filter photometers. Infrared light transmission photometry Spectrophotometry in infrared light is mainly used to study the structure of substances, as given groups give absorption at defined wavelengths. Measurement in aqueous solution is generally not possible, as water absorbs infrared light strongly in some wavelength ranges. Therefore, infrared spectroscopy is either performed in the gaseous phase (for volatile substances) or with the substances pressed into tablets together with salts that are transparent in the infrared range. Potassium bromide (KBr) is commonly used for this purpose. The substance being tested is thoroughly mixed with specially purified KBr and pressed into a transparent tablet, which is placed in the beam of light.
The analysis of the wavelength dependence is generally not done using a monochromator, as it is in UV-Vis, but with an interferometer. The interference pattern can be analyzed using a Fourier transform algorithm. In this way, the whole wavelength range is analyzed simultaneously, saving time, and an interferometer is also less expensive than a monochromator. The light absorbed in the infrared region does not correspond to electronic excitation of the substance studied, but rather to different kinds of vibrational excitation. The vibrational excitations are characteristic of different groups in a molecule, which can in this way be identified. The infrared spectrum typically has very narrow absorption lines, which make it unsuited for quantitative analysis but give very detailed information about the molecules. The frequencies of the different modes of vibration vary with isotope, and therefore different isotopes give different peaks. This makes it possible also to study the isotopic composition of a sample with infrared spectrophotometry. Atomic absorption photometry Atomic absorption photometers are photometers that measure the light from a very hot flame. The solution to be analyzed is injected into the flame at a constant, known rate. Metals in the solution are present in atomic form in the flame. The monochromatic light in this type of photometer is generated by a discharge lamp where the discharge takes place in a gas containing the metal to be determined. The discharge then emits light with wavelengths corresponding to the spectral lines of the metal. A filter may be used to isolate one of the main spectral lines of the metal to be analyzed. The light is absorbed by the metal in the flame, and the absorption is used to determine the concentration of the metal in the original solution.
See also Radiometry Raman spectroscopy Photodetector – A transducer capable of accepting an optical signal and producing an electrical signal containing the same information as in the optical signal. References Article partly based on the corresponding article in Swedish Wikipedia Electromagnetic radiation meters Optical instruments Photometry
Photometer
Physics,Technology,Engineering
2,316
18,408
https://en.wikipedia.org/wiki/LAME
LAME is a software encoder that converts digital audio into the MP3 audio coding format. LAME is a free software project that was first released in 1998 and has incorporated many improvements since then, including an improved psychoacoustic model. The LAME encoder outperforms early encoders like L3enc and possibly the "gold standard encoder" MP3enc, both marketed by Fraunhofer. LAME was required by some programs released as free software, which linked to LAME for MP3 support; this avoided including LAME itself, which used patented techniques and so required patent licenses in some countries. All relevant patents have since expired, and LAME is now bundled with Audacity. History The name LAME is a recursive acronym for "LAME Ain't an MP3 Encoder". Around mid-1998, Mike Cheng created LAME 1.0 as a set of modifications against the 8Hz-MP3 encoder source code. After some quality concerns were raised by others, he decided to start again from scratch based on the dist10 MPEG reference software sources. His goal was only to speed up the dist10 sources, and leave its quality untouched. That branch (a patch against the reference sources) became Lame 2.0. The project quickly became a team project. Mike Cheng eventually left leadership and started working on tooLAME (an MP2 encoder). Mark Taylor then started pursuing increased quality in addition to better speed, and released version 3.0 featuring gpsycho, a new psychoacoustic model he developed. A few key improvements since LAME 3.x, in chronological order: May 1999 (LAME 3.0): a new psychoacoustic model (GPSYCHO) is released. June 1999 (LAME 3.11): The first variable bitrate (VBR) implementation is released. Soon after this, LAME also became able to target lower sampling frequencies from MPEG-2. (LAME 3.99 also supports the technologically simpler average bitrate (ABR), but it is unclear whether it was added before or with VBR.)
November 1999 (LAME 3.52): LAME switches from a GPL license to an LGPL license, which allows using it with closed-source applications. May 2000 (LAME 3.81): the last pieces of the original ISO demonstration code are removed. LAME is not a patch anymore, but a full encoder. December 2003 (LAME 3.94): substantial improvement to default settings, along with improved speed. LAME no longer requires users to enter complicated parameters to produce good results. May 2007 (LAME 3.98): default variable bitrate encoding speed is vastly improved. Patents and legal issues Like all MP3 encoders, LAME implemented techniques covered by patents owned by the Fraunhofer Society and others. The developers of LAME did not license the technology described by these patents. Distributing compiled binaries of LAME, its libraries, or programs that derive from LAME in countries where those patents have been granted may have constituted infringement, but since 23 April 2017, all of these patents have expired. The LAME developers stated that, since their code was only released in source code form, it should only be considered as an educational description of an MP3 encoder, and thus did not infringe any patent in itself. They also advised users to obtain relevant patent licenses before including a compiled version of the encoder in a product. Some software was released using this strategy: companies used the LAME library, but obtained patent licenses. In the course of the 2005 Sony BMG copy protection rootkit scandal, there were reports that the Extended Copy Protection rootkit included on some Sony compact discs had portions of the LAME library without complying with the terms of the LGPL. 
See also List of codecs Lossy compression References External links LAME binaries - RareWares LAME binaries for Audacity - recommended for the Audacity free and GPL audio editor LAME Wiki - HydrogenAudio (audiophile information) LAME Mp3 Info Tag revision 1 Specifications 1998 software Audio compression Cross-platform software Free audio codecs MP3
LAME
Engineering
846
30,581,694
https://en.wikipedia.org/wiki/Nadirashvili%20surface
In differential geometry, a Nadirashvili surface is an immersed complete bounded minimal surface in R3 with negative curvature. The first example of such a surface was constructed by Nikolai Nadirashvili in 1996. This simultaneously answered a question of Hadamard about whether there was an immersed complete bounded surface in R3 with negative curvature, and a question of Eugenio Calabi and Shing-Tung Yau about whether there was an immersed complete bounded minimal surface in R3. Hilbert showed that a complete immersed surface in R3 cannot have constant negative curvature, and Efimov showed that the curvature cannot be bounded above by a negative constant. So Nadirashvili's surface necessarily has points where the curvature is arbitrarily close to 0. References Differential geometry Surfaces Eponyms in geometry
Nadirashvili surface
Mathematics
153
69,824,413
https://en.wikipedia.org/wiki/Cd1-restricted%20T%20cell
CD1-restricted T cells are part of the unconventional T cell family; they are stimulated by exposure to CD1+ antigen presenting cells (APCs). Many CD1-restricted T cells are rapidly stimulated to carry out helper and effector functions upon interaction with CD1-expressing antigen-presenting cells. CD1-restricted T cells regulate host defence, antitumor immunity and the balance between tolerance and autoimmunity. In general, CD1-restricted T cells are divided according to their CD1 molecule. Humans express four CD1 isoforms divided into two groups: group 1 CD1 (CD1a, CD1b, and CD1c) and group 2 CD1 (CD1d). Group 1 CD1-restricted T cells Group 1 CD1-restricted T cells express diverse αβ T-cell receptors (TCRs). They can undergo clonal expansion in the periphery after recognition of stimulatory self-lipids or exogenous lipid antigens derived from bacteria. CD1-restricted T cells produce TH1 cytokines such as IFN-γ and TNF-α and are cytolytic. They can induce TNF-α-dependent dendritic cell maturation. Many group 1 CD1-restricted T cells are autoreactive, and autoreactivity is enhanced by stimulation through pattern recognition receptors (PRRs). CD1a-restricted T cells are among the most frequent self-reactive CD1-restricted T cells in peripheral blood. Moreover, they are common in the skin. Skin CD1a-restricted T cells become activated when in contact with CD1a expressed by Langerhans cells. Upon activation, they produce IFN-γ, IL-2, and IL-22, a cytokine with suspected roles in skin immunity. CD1a-restricted T cells are unique in that their TCR can directly recognize the CD1a molecule without corecognition of a lipid antigen. Self-reactive CD1b-restricted T cells can acquire the phenotype of T helper 17 (TH17) cells and recruit neutrophils. CD1b is expressed at high levels on myeloid dendritic cells in blood and in tissues, and on certain macrophages and other immune cells in the periphery.
CD1b presents many mycobacterial lipid antigens, including glucose monomycolate (GMM) and free mycolic acid (MA), to human T cell clones. The responding T cell clones show effector functions that are consistent with a role in host protection, including Th1-skewed responses, cytotoxicity toward infected cells, and lack of response to uninfected cells or self-lipids. Germline-encoded mycolyl lipid-reactive (GEM) T cells are defined by the expression of nearly invariant TRAV1-2/TRAJ9+ TCR α chains and of CD4. LDN5-like T cells, named after the clone LDN5, use TRAV17 or TRBV4-1, but have highly variable joining regions and do not seem to preferentially use any particular J segments. LDN5-like cells show conservation in the TCR β chain outside the CDR3. CD1c-autoreactive cells have been identified as playing a role in tumor detection. CD1-restricted T cells can kill immature dendritic cells that are infected. CD1d-restricted natural killer T cells or group 2 CD1-restricted T cells Natural killer T (NKT) cells represent unusual cells of the innate immune system because they express a surface receptor that is generated by somatic DNA rearrangement, a hallmark of cells of the adaptive immune system. A hallmark of NKT cells is their capacity to rapidly produce copious amounts of cytokines upon antigenic stimulation, including interferon (IFN)-γ, interleukin (IL)-4, tumor necrosis factor (TNF)-α, and IL-2, which endows these cells with potent immunomodulatory activities. As a result, NKT cells are involved in the regulation of various immune responses, including infectious diseases, tumors, transplants, allergic reactions, autoimmune diseases, and inflammatory diseases. These properties of NKT cells have been utilized in vaccine development and immunotherapy using animal models of infection, tumor metastasis, and autoimmunity. CD1d-restricted NKT cells contribute to host defence by influencing the function of macrophages, dendritic cells, B cells and natural killer cells.
They also contribute to tumor immunosurveillance and can mediate tumor rejection via interleukin 12 (IL-12) production, natural killer or T cell activation, or direct cytolysis. CD1d-restricted NKT cells are divided into two groups. Type I NKT cells Type I NKT cells are also called 'invariant NKT cells' or 'iNKT cells'; they express an invariant TCRα chain and a limited, but not invariant, range of TCRβ chains. Type I NKT cells are less frequent in humans than in mice (1–3% of T cells in most mouse tissues, 50% in mouse liver and bone marrow, and approximately 0.1% of T cells in human blood). All type I NKT cells recognize the marine sponge-derived glycolipid α-galactosylceramide (α-GalCer). After encounter with antigen, type I NKT cells rapidly (within minutes to hours) become effector cells and produce many cytokines. These T cells also have cytotoxic activity against CD1d+ tumor targets. Furthermore, type I NKT cells upregulate the costimulatory receptor CD154 (CD40 ligand), which, in conjunction with their cytokine production, potently activates DCs to increase expression of the costimulatory molecules CD80 and CD86 and produce interleukin 12. This leads to more efficient presentation of antigen to MHC-restricted adaptive T cells, activation of NK cells and enhanced B cell responses. Thus, NKT cells can promote downstream innate and adaptive immune responses and, in turn, enhance protection against infection and cancer. Human iNKT cells can be subdivided into subpopulations according to the cytokines they produce and the expression of certain transcription factors: iNKT1 cells produce large amounts of IFN-γ and little IL-4, iNKT2 cells produce large amounts of IL-4, and iNKT17 cells secrete IL-17. A special iNKT cell population called iNKT10 has been identified in adipose tissue, which relies on the expression of the transcription factor E4BP4 for its role in maintaining adipose tissue homeostasis.
Type II NKT cells Type II NKT cells are also called 'diverse NKT cells'; they use αβ TCRs that do not conform to the TCR motifs described above, and their TCR sequences are more variable than those of iNKT cells. Type II NKT cells recognize CD1d but lack the highly conserved TCRα chain and reactivity to α-GalCer that classify type I NKT cells. Some type II NKT cells recognize the mammalian glycolipid sulfatide (produced at high concentrations in neuroendocrine tissue), the phospholipid antigen lysophosphatidylcholine, and some other phospholipid and lysophospholipid antigens, including phosphatidylglycerol and phosphatidylinositol of microbial and mammalian origin. They can also sense gene products of hepatitis B virus by detecting lysophosphatidylethanolamine generated through the cleavage of phosphatidylethanolamine by virus-induced phospholipases. Even non-lipidic small molecules, such as PPBF (phenyl 2,2,4,6,7-pentamethyldihydrobenzofuran-5-sulfonate), are antigenic for some type II NKT cells. Thus, type II NKT cells seem to recognize diverse antigens presented by CD1d, and given that these cells seem to be more abundant than type I NKT cells in humans, it is important to understand their roles and therapeutic potential. References Immune system Cells
Cd1-restricted T cell
Biology
1,762
58,613,266
https://en.wikipedia.org/wiki/The%20Truth%20About%20Killer%20Robots
The Truth About Killer Robots is a 2018 documentary made by Third Party Films. It describes the hitherto ignored issues related to robots that have been involved in human fatalities. Plot The documentary investigates the 2015 killing by a robot of an assembly line manager in a Volkswagen factory in Germany, a driverless Tesla car that hit a white truck ahead of it, and the use of drones by the police in the USA (especially in Dallas) to drop bombs on snipers and suspects. It also follows the use of artificial intelligence in facial tracking, use of robots in Japan including hotels staffed by them, Geminoids in Japan, and the use of facial recognition for targeted marketing. The film uses Isaac Asimov's "Three Laws of Robotics", first proposed by him in his 1942 short story Runaround, and describes how human beings have in recent years ignored them. The film follows these with interviews with experts in the field, footage of real robots being used for bomb disposal, and "smart guns" that are able to shoot people automatically based on facial recognition. The film questions the morality of these uses and highlights the inadequacy of the current legal structure to address these issues. Reception The Truth About Killer Robots premiered at the Toronto Film Festival in September 2018. The Hollywood Reporter found the film interesting, but was critical of the fact that it does not address the widespread use of drones in war torn areas to kill civilians and suspects. On Rotten Tomatoes the film has a score of based on reviews from critics, with an average rating of . References 2018 films American documentary films Robots 2010s English-language films 2010s American films English-language documentary films
The Truth About Killer Robots
Physics,Technology
330
45,158,152
https://en.wikipedia.org/wiki/Melchett%20Medal
The Melchett Award is an honour awarded by the Energy Institute for outstanding contributions to the science of fuel and energy. It was created by and named for Alfred Moritz Mond, 1st Baron Melchett, the 20th century businessman and philanthropist. Winners Source: 1930: Kurt Rummell 1931: W. A. Bone 1932: Charles M. Schwab 1933: John Cadman 1934: Friedrich Bergius 1935: Harry R. Ricardo 1936: Franz Fischer 1937: Morris W. Travers 1938: R.V. Wheeler 1939: H.A. Humphrey 1940: Étienne Audibert 1941: Clarence A. Seyler 1942: Arno C. Fieldner 1943: E S Grumel 1944: J.G. King 1945: C H Lander 1946: Sir James Chadwick 1947: Kenneth Gordon 1949: Sir Frank Whittle 1950: R.J. Sarjan 1951: F.H. Garner 1952: D.T.A. Townend 1953: H. Hartley 1954: H.H. Storch 1955: A. Parker 1956: Sir Alfred Egerton 1957: Sir Christopher Hinton 1959: P.O. Rosin 1960: H.C. Hottel 1961: Sir Harold Hartley (award to MacFarlane): MacFarlane Memorial Lecture 1962: H.E. Crossley 1963: HRH Prince Philip, Duke of Edinburgh 1964: Homi Jehangir Bhabha 1965: F.J. Dent 1966: Sir Owen Saunders 1967: Sir Charles Cawley 1968: A. Ignatieff 1969: William T. Reid 1970: T E Allibone 1971: Lord Rothschild 1972: F T Bacon 1974: Sir Frederick Warner 1975: Sir John Hill, UKAEA 1976: T G Callcott 1977: J H Chesters 1978: G. Brunner 1979: A.W. Pearce 1980: Sir William Hawthorne 1981: J.H. Dunster 1982: J.A. Gray 1985: J.M. Beer 1986: N. Franklin 1987: Sir George Porter 1988: Frank Fitzgerald 1989: Neville Chamberlain 1990: David Lindley 1991: R.N. Hodge 1992: H.L. Beckers 1993: Robert Evans 1994: S. William Gouse, Jnr 1995: John Chesshire 1996: Sir Crispin Tickell 1997: I. 
Boustead 1998: Brenda Boardman 1999: Ian Fells 2000: Walt Patterson 2001: Lord Browne of Madingley 2002: Mary Archer 2003: Sir John Parker 2004: Sir Roy Gardner 2005: Vincent de Rivaz 2008: Andrew Warren 2010: James Skea 2011: Allan Jones 2013: David MacKay 2014: Lord Oxburgh 2016: David King 2017: Fatih Birol See also List of chemistry awards References Chemistry awards
Melchett Medal
Technology
558
2,022,356
https://en.wikipedia.org/wiki/Microstrip
Microstrip is a type of electrical transmission line which can be fabricated with any technology where a conductor is separated from a ground plane by a dielectric layer known as the "substrate". Microstrip lines are used to convey microwave-frequency signals. Typical realisation technologies are printed circuit board (PCB), alumina coated with a dielectric layer, or sometimes silicon or some other similar technology. Microwave components such as antennas, couplers, filters, power dividers etc. can be formed from microstrip, with the entire device existing as the pattern of metallization on the substrate. Microstrip is thus much less expensive than traditional waveguide technology, as well as being far lighter and more compact. Microstrip was developed by ITT laboratories as a competitor to stripline (first published by Grieg and Engelmann in the December 1952 IRE proceedings). The disadvantages of microstrip compared to waveguide are the generally lower power handling capacity, and higher losses. Also, unlike waveguide, microstrip is typically not enclosed, and is therefore susceptible to cross-talk and unintentional radiation. For lowest cost, microstrip devices may be built on an ordinary FR-4 (standard PCB) substrate. However, it is often found that the dielectric losses in FR-4 are too high at microwave frequencies, and that the dielectric constant is not sufficiently tightly controlled. For these reasons, an alumina substrate is commonly used. From a monolithic integration perspective, microstrips realised with integrated circuit / monolithic microwave integrated circuit technologies are feasible; however, their performance may be limited by the dielectric layer(s) and conductor thickness available. Microstrip lines are also used in high-speed digital PCB designs, where signals need to be routed from one part of the assembly to another with minimal distortion, and avoiding high cross-talk and radiation.
Microstrip is one of many forms of planar transmission line; others include stripline and coplanar waveguide, and it is possible to integrate all of these on the same substrate. A differential microstrip—a balanced signal pair of microstrip lines—is often used for high-speed signals such as DDR2 SDRAM clocks, USB Hi-Speed data lines, PCI Express data lines, LVDS data lines, etc., often all on the same PCB. Most PCB design tools support such differential pairs. Inhomogeneity The electromagnetic wave carried by a microstrip line exists partly in the dielectric substrate, and partly in the air above it. In general, the dielectric constant of the substrate will differ from (and be greater than) that of the air, so that the wave travels in an inhomogeneous medium. In consequence, the propagation velocity is somewhere between the speed of radio waves in the substrate, and the speed of radio waves in air. This behaviour is commonly described by stating the effective dielectric constant of the microstrip; this being the dielectric constant of an equivalent homogeneous medium (i.e., one resulting in the same propagation velocity). Further consequences of an inhomogeneous medium include: The line will not support a true TEM wave; at non-zero frequencies, both the E and H fields will have longitudinal components (a hybrid mode). The longitudinal components are small however, and so the dominant mode is referred to as quasi-TEM. The line is dispersive. With increasing frequency, the effective dielectric constant gradually climbs towards that of the substrate, so that the phase velocity gradually decreases. This is true even with a non-dispersive substrate material (the substrate dielectric constant will usually fall with increasing frequency). The characteristic impedance of the line changes slightly with frequency (again, even with a non-dispersive substrate material).
The characteristic impedance of non-TEM modes is not uniquely defined, and depending on the precise definition used, the impedance of microstrip either rises, falls, or falls then rises with increasing frequency. The low-frequency limit of the characteristic impedance is referred to as the quasi-static characteristic impedance, and is the same for all definitions of characteristic impedance. The wave impedance varies over the cross-section of the line. Microstrip lines radiate, and discontinuity elements such as stubs and posts, which would be pure reactances in stripline, have a small resistive component due to the radiation from them. Characteristic impedance A closed-form approximate expression for the quasi-static characteristic impedance of a microstrip line was developed by Wheeler: Z = Z0 / (2π √(2(1 + εr))) · ln(1 + (4h/weff) [ ((14 + 8/εr)/11)(4h/weff) + √( ((14 + 8/εr)/11)² (4h/weff)² + ((1 + 1/εr)/2) π² ) ]), where weff is the effective width, which is the actual width of the strip plus a correction to account for the non-zero thickness of the metallization. Here Z0 is the impedance of free space, εr is the relative permittivity of the substrate, w is the width of the strip, h is the thickness ("height") of the substrate, and t is the thickness of the strip metallization. This formula is asymptotic to an exact solution in three different cases: w ≫ h, any εr (parallel plate transmission line); w ≪ h, εr = 1 (wire above a ground-plane); and w ≪ h, εr ≫ 1. It is claimed that for most other cases, the error in impedance is less than 1%, and is always less than 2%. By covering all aspect-ratios in one formula, Wheeler 1977 improves on Wheeler 1965, which gives one formula for wide strips and another for narrow strips (thus introducing a discontinuity in the result at the crossover). Harold Wheeler disliked both the terms 'microstrip' and 'characteristic impedance', and avoided using them in his papers. A number of other approximate formulae for the characteristic impedance have been advanced by other authors. However, most of these are applicable to only a limited range of aspect-ratios, or else cover the entire range piecewise.
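Wheeler's closed-form approximation can be sketched in a few lines of Python. This is a minimal illustration: the constants follow the commonly quoted 1977 form and should be treated as assumptions, and the strip thickness t is neglected here so that the effective width equals w.

```python
# Quasi-static characteristic impedance of microstrip, per Wheeler's
# 1977 closed-form fit (zero-thickness strip assumed, so w_eff ~= w).
import math

Z0_FREE_SPACE = 376.730313668  # impedance of free space, ohms

def wheeler_z0(w_over_h, er):
    """Impedance in ohms for strip width w over substrate height h
    (given as the ratio w/h) and relative permittivity er."""
    u = 4.0 / w_over_h                     # 4h/w_eff with t = 0
    a = (14.0 + 8.0 / er) / 11.0
    b = (1.0 + 1.0 / er) / 2.0
    arg = u * (a * u + math.sqrt((a * u) ** 2 + b * math.pi ** 2))
    prefactor = Z0_FREE_SPACE / (2.0 * math.pi * math.sqrt(2.0 * (1.0 + er)))
    return prefactor * math.log(1.0 + arg)

# Sanity check: on FR-4 (er ~ 4.6) a strip about 1.85 substrate heights
# wide comes out close to the familiar 50 ohms.
z = wheeler_z0(1.85, 4.6)
```

For a very wide strip in air (w/h large, εr = 1) the result approaches the parallel-plate value Z0·h/w, matching the asymptotic behaviour described in the text.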
In particular, the set of equations proposed by Hammerstad, who modifies Wheeler, are perhaps the most often cited: for w/h ≤ 1, Z = (60/√εeff) ln(8h/w + w/(4h)), and for w/h ≥ 1, Z = 120π / (√εeff [w/h + 1.393 + 0.667 ln(w/h + 1.444)]), where εeff is the effective dielectric constant, approximated as: εeff = (εr + 1)/2 + ((εr − 1)/2)(1 + 12h/w)^(−1/2). Effect of metallic enclosure Microstrip circuits may require a metallic enclosure, depending upon the application. If the top cover of the enclosure encroaches on the microstrip, the characteristic impedance of the microstrip may be reduced due to the additional path for plate and fringing capacitance. When this happens, the characteristic impedance of the uncovered microstrip in air (εr = 1) is first adjusted downward by a correction that depends on the cover height c, the distance from the top of the dielectric to the metallic cover; the adjusted air impedance is then combined with the effective dielectric constant to compute Zo for the covered line with dielectric. Finite strip thickness compensation may be computed by substituting the effective strip width for the actual width in both calculations, using all-air values for the air calculations and dielectric values for the calculations with dielectric material. The published correction equations are claimed to be accurate to within 1% over their stated range of geometries.
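Hammerstad's approximation can likewise be sketched in Python. A minimal illustration: the constants are reproduced from standard references and should be treated as assumptions, with zero strip thickness and no cover.

```python
# Hammerstad's approximations for microstrip: effective dielectric
# constant, then separate impedance fits for narrow and wide strips.
import math

def hammerstad_eeff(w_over_h, er):
    """Effective dielectric constant of the quasi-TEM mode."""
    return (er + 1.0) / 2.0 + (er - 1.0) / 2.0 / math.sqrt(1.0 + 12.0 / w_over_h)

def hammerstad_z0(w_over_h, er):
    """Characteristic impedance in ohms (zero-thickness strip)."""
    eeff = hammerstad_eeff(w_over_h, er)
    if w_over_h <= 1.0:
        # narrow-strip fit
        return 60.0 / math.sqrt(eeff) * math.log(8.0 / w_over_h + w_over_h / 4.0)
    # wide-strip fit
    return (120.0 * math.pi / math.sqrt(eeff)
            / (w_over_h + 1.393 + 0.667 * math.log(w_over_h + 1.444)))
```

As expected from the inhomogeneity discussion, εeff always lies between 1 (air) and εr (substrate), and for FR-4 at w/h ≈ 1.85 the impedance agrees with Wheeler's formula to within a fraction of an ohm.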
When the conductor is placed below the dielectric layer, as opposed to above, the microstrip is known as an inverted microstrip. Characteristic impedance Pramanick and Bhartia documented a series of equations used to approximate the characteristic impedance (Zo) and effective dielectric constant (Ere) for suspended and inverted microstrips. The equations are accessible directly from the reference and are not repeated here. John Smith worked out equations for the even and odd mode fringe capacitance for arrays of coupled microstrip lines in a suspended substrate using Fourier series expansion of the charge distribution, and provides 1960s-style Fortran code that performs the function. Smith's work is detailed in the section below. Single microstrip lines behave like coupled microstrips with infinitely wide gaps. Therefore, Smith's equations may be used to compute the fringe capacitance of single microstrip lines by entering a large number for the gap into the equations, such that the other coupled microstrip no longer significantly affects the electrical characteristics of the single microstrip; this is typically a value of seven substrate heights or higher. Microstrips with no metallic enclosure may be computed by entering a large value for the metallic cover height, such that the metallic cover no longer significantly affects the microstrip electrical characteristics, typically 50 or more times the height of the conductor over the substrate. Inverted microstrips may be computed by swapping the metallic cover height and suspended height variables. where B, C, and D are defined by the microstrip geometry that is shown in the upper right of the page. 
To compute the Zo and Ere values for a suspended or inverted microstrip, the plate capacitance may be added to the fringe capacitance for each side of the microstrip line to compute the total capacitance for both the dielectric case (εr) and the air case (εra), and the results may be used to compute Zo and Ere, as shown: Bends In order to build a complete circuit in microstrip, it is often necessary for the path of a strip to turn through a large angle. An abrupt 90° bend in a microstrip will cause a significant portion of the signal on the strip to be reflected back towards its source, with only part of the signal transmitted on around the bend. One means of effecting a low-reflection bend is to curve the path of the strip in an arc of radius at least 3 times the strip-width. However, a far more common technique, and one which consumes a smaller area of substrate, is to use a mitred bend. To a first approximation, an abrupt un-mitred bend behaves as a shunt capacitance placed between the ground plane and the bend in the strip. Mitring the bend reduces the area of metallization, and so removes the excess capacitance. The percentage mitre is the cut-away fraction of the diagonal between the inner and outer corners of the un-mitred bend. The optimum mitre for a wide range of microstrip geometries has been determined experimentally by Douville and James. They find that a good fit for the optimum percentage mitre is given by subject to and with the substrate dielectric constant . This formula is entirely independent of . The actual range of parameters for which Douville and James present evidence is and . They report a VSWR of better than 1.1 (i.e., a return loss better than −26 dB) for any percentage mitre within 4% (of the original ) of that given by the formula. At the minimum of 0.25, the percentage mitre is 98.4%, so that the strip is very nearly cut through. 
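The capacitance-to-impedance step described in the suspended-microstrip passage above follows the standard quasi-TEM relations Ere = C/Ca and Zo = 1/(c0·√(C·Ca)), where C and Ca are the per-unit-length capacitances with and without the dielectric. A minimal sketch (function and variable names are illustrative):

```python
import math

C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def line_params(c_diel, c_air):
    """Characteristic impedance (ohms) and effective dielectric constant
    of a quasi-TEM line from its per-unit-length capacitance with the
    dielectric present (c_diel, F/m) and with air dielectric (c_air, F/m)."""
    e_re = c_diel / c_air
    z0 = 1.0 / (C_LIGHT * math.sqrt(c_diel * c_air))
    return z0, e_re
```

As a sanity check, a line whose air capacitance corresponds to 50 Ω and whose dielectric doubles the capacitance gives Ere = 2 and Zo = 50/√2 ≈ 35.4 Ω.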
For both the curved and mitred bends, the electrical length is somewhat shorter than the physical path-length of the strip. Discontinuous junctions Other types of microstrip discontinuities besides bends (see above), also referred to as corners, are open ends, via holes (connections to the ground plane), steps in width, gaps between microstrips, tee junctions, and cross junctions. Extensive work has been performed developing models for these types of junctions, and the models are documented in publicly available literature, such as Quite universal circuit simulator (QUCS). Coupled microstrips Microstrip lines may be installed close enough to other microstrip lines such that electrical coupling interactions may exist between the lines. This may come about inadvertently as lines are laid out, or intentionally to shape a desired transfer function or design a distributed filter. If the two lines are identical in width, they may be characterized by a coupled transmission line even and odd mode analysis. Characteristic impedance Closed-form expressions for even and odd mode characteristic impedance (Zoe, Zoo) and effective dielectric constant (εree, εreo) have been developed with defined accuracy under stated conditions. They are available from the references and not repeated here. Fourier series solution John Smith worked out equations for the even and odd mode fringe capacitance for arrays of coupled microstrip lines with a metallic cover, including suspended microstrips, using Fourier series expansion of the charge distribution, and provides 1960s-style Fortran code that performs the function. Uncovered microstrips are supported by assigning a cover height of generally 50 or more times the conductor height above the ground plane. Inverted microstrips are supported by reversing the cover height and suspended height variables. 
Smith's equations are advantageous in that they are theoretically valid for all values of conductor width, conductor separation, dielectric constant, cover height, and dielectric suspension height. Smith's equations contain a bottleneck (equation 37 on page 429) where the inverse of an elliptic integral ratio must be solved, , where is the complete elliptic integral of the first kind, is known, and is the variable that must be solved for. Smith provides an elaborate search algorithm that usually converges on a solution for . However, Newton's method or interpolation tables may provide a more rapid and comprehensive solution for . To compute the even and odd mode Zo and εre values for a coupled microstrip, the plate capacitance is added to the even and odd mode fringe capacitance for the inside of the microstrip and the uncoupled fringe capacitance of the outer sides. The uncoupled fringe capacitance may be computed by treating the gap or separation between the conductors as infinitely wide, which may be approximated by a value of 7 or more times the conductor height above the ground plane. Even and odd mode Zo and εre are then computed as functions of even and odd mode capacitance for the dielectric case (εr) and the air case (εr = 1), as shown: . John Smith's detailed solution Smith's Fourier series requires the inverse solution, k, to the elliptic integral ratio, , where K() is the complete elliptic integral of the first kind. Although Smith provides an elaborate search algorithm to find k, faster and more accurate convergence may be achieved with Newton's method, or interpolation tables may be employed. Since becomes extremely nonlinear as k approaches 0 and 1, Newton's method works better on the function . Once the value klg is solved for, k is obtained by . The Newton's method expression to solve for klg is as follows, using standard derivative rules. 
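The elliptic-integral-ratio inversion described above can be sketched without Smith's search algorithm. This sketch evaluates K(k) via the arithmetic-geometric mean and inverts the ratio K'(k)/K(k) by bisection, which is robust over the full (0, 1) range; it stands in for, and is not taken from, the Newton expression in the reference (all names are illustrative).

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def ellip_k(k):
    """Complete elliptic integral of the first kind K(k), modulus k,
    via K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))."""
    return math.pi / (2 * agm(1.0, math.sqrt(1.0 - k * k)))

def invert_ratio(r, tol=1e-12):
    """Solve K'(k)/K(k) = r for k in (0, 1), where K'(k) = K(sqrt(1-k^2)).
    The ratio decreases monotonically from +inf (k -> 0) to 0 (k -> 1),
    so bisection always converges, even near the endpoints where the
    function is extremely nonlinear."""
    lo, hi = 1e-12, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        ratio = ellip_k(math.sqrt(1.0 - mid * mid)) / ellip_k(mid)
        if ratio > r:
            lo = mid  # ratio too large means k is still too small
        else:
            hi = mid
    return (lo + hi) / 2
```

Round-tripping a known modulus (compute the ratio for k = 0.5, then invert it) recovers k to high precision.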
Elliptic integral derivatives may be found on the elliptic integral page: An interpolation table to find klg and k is shown below. For values of , it is useful to apply the relation shown in the table to maximize the linearity of the , or , function for use in Newton's method or interpolation. For example, . The total even and odd mode capacitance may then be computed, following Smith's work, using elliptic integrals and Jacobi elliptic functions. Smith uses the third fast Jacobi elliptic function estimation algorithm found on the elliptic functions page. To obtain the total capacitance: where may be approximated by or more times the conductor height above the ground plane. Example and accuracy comparison Smith compares the accuracy of his Fourier series capacitance solutions to published tables of the time. However, a more modern approach is to compare the even and odd mode impedance and effective dielectric constant results to those obtained from electromagnetic simulations such as Sonnet. The example below is performed under the following conditions: B = 2.5 mm, C = 0.4 mm, D = 0.6 mm, W = 1.5 mm, G = 0.5 mm, Er = 12, where B, C, and D are defined by the microstrip geometry that is shown in the upper right of the page. The example begins by computing the value of log(k), then k, and goes on to use k, εr, substrate geometry, and conductor geometry to compute the capacitances and subsequently the even and odd mode impedance and effective dielectric constant (Zoe, Zoo, εre and εro). The Sonnet simulation is performed with a high-resolution grid of , reference planes of 7 mm on each side, and simulates the coupled line along a 10 mm length. The Y-parameter results are translated to even and odd mode Zo and εr by algebraically inverting the Y-parameter equations for coupled transmission lines. 
Asymmetrically coupled microstrips When two microstrip lines exist close enough in proximity for coupling to occur but are not symmetrical in width, even and odd mode analysis is not directly applicable to characterize the lines. In this case, the lines are generally characterized by their self and mutual inductance and capacitance. The defining techniques and expressions are available from the references. Multiple coupled microstrips In some cases, multiple microstrip lines may be coupled together. When this happens, each microstrip line will have a self capacitance and a gap capacitance to all of the other lines, including nonadjacent microstrips. Analysis is similar to the asymmetric coupled case above, but the capacitance and inductance matrices will be of size N×N, where N is the number of microstrips coupled together. Nonadjacent microstrip capacitance may be accurately calculated using the finite element method (FEM). Losses Attenuation due to losses from the conductor and dielectric is generally considered when simulating a microstrip. Total losses are a function of microstrip length, so attenuation is generally calculated per unit length, with total loss given by attenuation × length. Attenuation is expressed in nepers, although some applications may use units of dB. When the microstrip characteristic impedance (Zo), effective dielectric constant (Ere), and total losses () are all known, the microstrip may be modeled as a standard transmission line. Conductor losses Conductor losses are defined by the "specific resistance" or "resistivity" of the conductor material, generally expressed as in the literature. Each conductor material generally has a published resistivity associated with it. For example, the common conductor material of copper has a published resistivity of . E. Hammerstad and Ø. 
Jensen proposed the following expressions for attenuation due to conductor losses: and = sheet resistance of the conductor = current distribution factor = correction term due to surface roughness = vacuum permeability () = specific resistance, or resistivity, of the conductor = effective (rms) surface roughness of the substrate = skin depth = wave impedance in vacuum () Note that if surface roughness is neglected, as it frequently is, the surface roughness correction term disappears from the expression. Some authors use conductor thickness instead of skin depth to compute the sheet resistance, Rs. When this is the case, where t is conductor thickness. Dielectric losses Dielectric losses are defined by the "loss tangent" of the dielectric material, generally expressed as in the literature. Each dielectric material generally has a published loss tangent associated with it. For example, the common dielectric material alumina has a published loss tangent of , depending on the frequency. Welch and Pratt, and Schneider, proposed the following expressions for attenuation due to dielectric losses: . Dielectric losses are in general much less than conductor losses and are frequently neglected. Coupled microstrip losses Coupled microstrip losses may be estimated using the same even and odd mode analysis as is used for characteristic impedance, dielectric constant, and effective dielectric constant for single-line microstrips. Coupled-line even and odd modes each have independently calculated conductor and dielectric loss values, calculated from the corresponding single-line Zo and Ere. Wheeler proposed a conductor loss solution that takes into account the separation between the conductors: where: h = height of the conductor over the ground plane S = separation between the conductors W = width of the conductors t = thickness of the conductors. The partial derivatives with respect to the conductor's separation, thickness, and width may be calculated numerically. 
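The loss bookkeeping above rests on two standard relations: the skin depth δ = √(ρ/(π·f·μ0)) with sheet resistance Rs = ρ/δ, and the neper-to-decibel factor 20/ln 10 ≈ 8.686. A minimal sketch (the specific copper-at-10-GHz numbers are an illustrative example, not from the cited references):

```python
import math

MU0 = 4e-7 * math.pi             # vacuum permeability, H/m
NEPER_TO_DB = 20 / math.log(10)  # ~8.686 dB per neper

def skin_depth(rho, f):
    """Skin depth (m) of a conductor with resistivity rho (ohm*m)
    at frequency f (Hz): delta = sqrt(rho / (pi * f * mu0))."""
    return math.sqrt(rho / (math.pi * f * MU0))

def sheet_resistance(rho, f):
    """Sheet resistance Rs = rho / delta (ohms per square), valid when
    the conductor is several skin depths thick."""
    return rho / skin_depth(rho, f)

# Copper trace at 10 GHz, using the published copper resistivity ~1.68e-8 ohm*m
delta = skin_depth(1.68e-8, 10e9)      # on the order of 0.65 micrometres
rs = sheet_resistance(1.68e-8, 10e9)   # tens of milliohms per square
```

Multiplying a per-unit-length attenuation in Np/m by the line length and by `NEPER_TO_DB` gives the total loss in dB.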
See also Distributed element filter Slow-wave coupler Spurline, a microstrip notch-filter References External links Microstrip in Microwave Encyclopedia Microstrip Analysis/Synthesis Calculator Microwave technology Planar transmission lines Printed circuit board manufacturing
Microstrip
Engineering
4,437
77,988,793
https://en.wikipedia.org/wiki/Prodipine
Prodipine (developmental code name BY-101) is an experimental antiparkinsonian agent of the 4,4-diphenylpiperidine series related to budipine which was never marketed. It was the predecessor of budipine and was similarly found to be effective in the treatment of Parkinson's disease. However, prodipine produced side effects including gastrointestinal adverse effects, nausea and vomiting, and hypotension. Due to the nausea and vomiting with the oral form, it could only be tolerated with intravenous administration. As a result, budipine, which had fewer side effects, was developed instead. Pharmacology The mechanism of action of these drugs is unknown. However, budipine is known to stimulate the catecholaminergic system and to increase motor activity and vigilance in animals. It also increases brain dopamine, norepinephrine, and serotonin levels in animals treated with the monoamine-depleting agent reserpine. It does not affect monoamine oxidase, nor does it appear to interact with dopamine D2 receptors. Both budipine and prodipine have been described as "central stimulants" in addition to antiparkinsonian agents. Prodipine is said to have more tendency to induce hyperactivity than budipine. Analogues Besides prodipine and budipine, another close analogue, medipine, was also developed. References 4-Phenylpiperidines Abandoned drugs Antiparkinsonian agents Drugs with unknown mechanisms of action Isopropyl compounds Stimulants
Prodipine
Chemistry
340
53,834,425
https://en.wikipedia.org/wiki/Stuart%20Kornfeld
Stuart Arthur Kornfeld (born October 4, 1936) is a professor of medicine at Washington University in St. Louis and researcher in glycobiology. Early life and education Kornfeld was born in St. Louis on October 4, 1936, to Ruth and Max Kornfeld. He graduated from Ladue Horton Watkins High School in 1954. He received his A.B. in 1958 from Dartmouth College and his MD in 1962 from Washington University School of Medicine. In 1959, he married Rosalind Hauk, a PhD student at Washington University. Career After medical school, Kornfeld did an internship at Barnes Hospital in St. Louis, and spent 2 years (1963–1965) as a research associate at the National Institute of Arthritis and Metabolic Diseases of the National Institutes of Health. He then returned to Washington University where he has remained since, serving as the school's hematology division head for thirty years. He and his wife Rosalind, with whom he collaborated scientifically, were recruited to the faculty in 1966 alongside Phil Majerus by the University's Chairman of Medicine. Kornfeld was first an instructor of medicine, was promoted to assistant professor, and eventually professor in 1972. From 1991 to 1997, he served as the director of the Medical Scientist Training Program. Awards 1972—Elected to the American Society for Clinical Investigation 1976—Elected to the Association of American Physicians 1982—Elected to the National Academy of Sciences 1983—Elected to the Institute of Medicine 1984—Elected to the American Academy of Arts and Sciences 1987—Elected to the Finnish Society of Sciences and Letters 1992 E. Donnall Thomas Prize, American Society of Hematology (inaugural recipient) 1999—Karl Meyer Award, Society for Glycobiology 2010—E. B. 
Wilson Medal, American Society for Cell Biology (with James Rothman and Randy Schekman) 2010—Kober Medal, Association of American Physicians 2012—Herbert Tabor/Journal of Biological Chemistry Lectureship, American Society for Biochemistry and Molecular Biology References Further reading 1936 births Living people Washington University School of Medicine faculty Washington University School of Medicine alumni Dartmouth College alumni American biologists Ladue Horton Watkins High School alumni Members of the National Academy of Medicine Glycobiologists
Stuart Kornfeld
Chemistry
451
1,592,693
https://en.wikipedia.org/wiki/Koschevnikov%20gland
The Koschevnikov gland is a gland of the honeybee located near the sting shaft. The gland produces an alarm pheromone that is released when a bee stings. The pheromone contains more than 40 different compounds, including pentylacetate, butyl acetate, 1-hexanol, n-butanol, 1-octanol, hexylacetate, octylacetate, and 2-nonanol. These components have a low molar mass and evaporate quickly. This collection of compounds is the least specific of all pheromones. The alarm pheromone is released when a honey bee stings another animal to attract other bees to attack, as well. The release of the alarm pheromone may entice more bees to sting at the same location. Smoking the bees can reduce the pheromone's efficacy. References Bees Insect anatomy Arthropod glands
Koschevnikov gland
Chemistry,Biology
199
47,353,676
https://en.wikipedia.org/wiki/Kethoxal
Kethoxal (3-ethoxy-1,1-dihydroxy-2-butanone) is an organic compound that has antiviral and anaplasmosis properties. It also forms a stable covalent adduct with guanine, which makes it useful for nucleic acid structure determination. Nucleic acid binding Kethoxal, as with other 1,2-dicarbonyl compounds, reacts with nucleic acids. It has high specificity for guanine over other ribonucleotides. In whole RNA, it reacts preferentially with guanine residues that are not involved in hydrogen-bonding. It can thus be used to probe the interactions involved with the secondary structure and other binding interactions of RNA and help with nucleic acid sequence analysis. The binding is reversible, which allows the kethoxal to be removed and the original RNA recovered. References Diols Ketones Ethoxy compounds Antiviral drugs
Kethoxal
Chemistry,Biology
204
19,416,576
https://en.wikipedia.org/wiki/Croatian%20Register%20of%20Shipping
Croatian Register of Shipping (), also known as CRS, is an independent classification society established in 1949. It is a non-profit organisation working in the marine market, developing technical rules and supervising their implementation, managing risk and performing surveys on ships. The Society's head office is in Split. Croatian Register of Shipping has been a member of the International Association of Classification Societies (IACS) since May 2011. The register is officially recognized by the Malta Maritime Authority. Historical record CRS is the heir to ship classification activities on the eastern Adriatic coast. The Austrian Veritas was founded in this area in 1858, as the third classification society in the world. In 1918 the Austrian Veritas changed its name to the Adriatic Veritas and operated as such until 1921. CRS, operating until 1992 as JR (Yugoslav Register of Shipping), was founded in 1949. The CRS Head Office is situated in Split, Republic of Croatia. CRS was an associate member of the International Association of Classification Societies (IACS) from April 1973 until 2004, and in May 2011 CRS gained the status of IACS Member. CRS is a recognised classification society (RO) pursuant to the requirements of Regulation (EC) No. 391/2009 of the European Parliament and of the Council on common rules and standards for ship inspection and survey organisations. CRS is the conformity assessment notified body notified under provisions of Council Directive 94/25/EC relating to recreational craft, as amended by Directive 2003/44/EC. CRS is the conformity assessment notified body notified under provisions of Council Directive 96/98/EC on marine equipment, as amended. CRS is the conformity assessment notified body notified under provisions of Council Directive 97/23/EC (PED) on pressure equipment. 
CRS is the conformity assessment notified body notified under provisions of Council Directive 2009/105/EC (SPVD) on simple pressure vessels. CRS is certified by the British Standards Institution (BSI), confirming that CRS operates a Quality Management System which complies with the requirements of BS EN 9001:2008 for the scope of classification and statutory certification of ships and statutory certification of marine equipment and recreational crafts, and holds a BSI Annual Statement of Compliance confirming that the CRS Quality Management System complies with the IACS Quality System Certification Scheme. Status CRS is an independent, not-for-profit but common-welfare-oriented public foundation performing: classification of ships; statutory certification of ships on behalf of the national Maritime Administrations; statutory certification of recreational crafts; certification of materials and products; conformity assessment of marine equipment; conformity assessment of recreational crafts; certification / registration of quality management systems. The present status of CRS is defined by the Law on Croatian Register of Shipping (OFFICIAL GAZETTE No. 1996/81, as amended by OFFICIAL GAZETTE No. 2013/76) and the Charter of CRS. Mission The CRS mission in the field of classification and statutory certification is to promote the highest internationally adopted standards in the safety of life and property at sea and on inland waterways, as well as in the protection of the sea and inland waterways environment. 
Certification / Accreditation Since July 2005 CRS has been in possession of a Certificate issued by the British Standards Institution (BSI) certifying that CRS operates a Quality Management System which complies with the requirements of BS EN 9001:2000 for the scope of classification and statutory certification of ships, and statutory certification of marine equipment and recreational crafts. Since February 2011 CRS has been in possession of the "BSI Annual Statement of Compliance" confirming Croatian Register of Shipping compliance with the IACS Quality System Certification Scheme. References External links Water transport in Croatia Ship classification societies Organizations based in Split, Croatia Ship registration 1949 establishments in Croatia
Croatian Register of Shipping
Engineering
759
1,299,607
https://en.wikipedia.org/wiki/Cycle%20of%20abuse
The cycle of abuse is a social cycle theory developed in 1979 by Lenore E. Walker to explain patterns of behavior in an abusive relationship. The phrase is also used more generally to describe any set of conditions which perpetuate abusive and dysfunctional relationships, such as abusive child-rearing practices that tend to be passed down. Walker used the term more narrowly, to describe the cycling patterns of calm, violence, and reconciliation within an abusive relationship. Critics suggest the theory was based on inadequate research criteria and therefore cannot be generalized. Overview Lenore E. Walker interviewed 1,500 women who had been subject to domestic violence and found that there was a similar pattern of abuse, called the "cycle of abuse". Initially, Walker proposed that the cycle of abuse described the controlling patriarchal behavior of men who felt entitled to abuse their wives to maintain control over them. She used the terms "the battering cycle" and "battered woman syndrome". Terms like "cycle of abuse" have been used instead for different reasons: to maintain objectivity; because the cycle of abuse doesn't always lead to physical abuse; and because symptoms of the syndrome have been observed in men and women, and are not confined to marriage and dating. Similarly, Dutton (1994) writes, "The prevalence of violence in homosexual relationships, which also appear to go through abuse cycles is hard to explain in terms of men dominating women." The cycle of abuse concept is widely used in domestic violence programs, particularly in the United States. Critics have argued the theory is flawed as it does not apply as universally as Walker suggested, does not accurately or completely describe all abusive relationships, and may emphasize ideological presumptions rather than empirical data. 
Phases The cycle usually goes in the following order, and will repeat until the conflict is stopped, usually by the survivor entirely abandoning the relationship or some form of intervention. The cycle can occur hundreds of times in an abusive relationship, the total cycle taking anywhere from a few hours to a year or more to complete. However, the length of the cycle usually diminishes over time so that the "reconciliation" and "calm" stages may disappear, violence becomes more intense and the cycles become more frequent. 1: Tension building Stress builds from the pressures of daily life, like conflict over children, marital issues, misunderstandings, or other family conflicts. It also builds as the result of illness, legal or financial problems, unemployment, or catastrophic events, like floods, rape or war. During this period, the abuser feels ignored, threatened, annoyed or wronged. The feeling lasts on average several minutes to hours, although it may last as long as several months. To prevent violence, the victim may try to reduce the tension by becoming compliant and nurturing. Alternatively, the victim may provoke the abuser to get the abuse over with, prepare for the violence or lessen the degree of injury. However, the abuser is never justified in engaging in violent or abusive behavior. 2: Incident During this stage, the abuser attempts to dominate their victim. Outbursts of violence and abuse occur which may include verbal abuse and psychological abuse. In intimate partner violence, children are negatively affected by having witnessed the violence, and the partner's relationship degrades as well. The release of energy reduces the tension, and the abuser may feel or express that the victim "had it coming" to them. 3: Reconciliation The perpetrator may begin to feel remorse, guilty feelings, or fear that their partner will leave or call the police. The victim feels pain, fear, humiliation, disrespect, confusion, and may mistakenly feel responsible. 
Characterized by affection, apology, or, alternatively, ignoring the incident, this phase marks an apparent end of violence, with assurances that it will never happen again, or that the abuser will do their best to change. During this stage the abuser may feel or claim to feel overwhelming remorse and sadness. Some abusers walk away from the situation with little comment, but most will eventually shower the survivor with love and affection. The abuser may use self-harm or threats of suicide to gain sympathy and/or prevent the survivor from leaving the relationship. Abusers are frequently so convincing, and survivors so eager for the relationship to improve, that survivors (who are often worn down and confused by longstanding abuse) stay in the relationship. 4: Calm During this phase (which is often considered an element of the honeymoon/reconciliation phase), the relationship is relatively calm and peaceful. During this period the abuser may agree to engage in counselling, ask for forgiveness, and create a normal atmosphere. In intimate partner relationships, the perpetrator may buy presents or the couple may engage in passionate sex. Over time, the abuser's apologies and requests for forgiveness become less sincere and are generally stated to prevent separation or intervention. However, interpersonal difficulties will inevitably arise, leading again to the tension building phase. The effect of the continual cycle may include loss of love, contempt, distress, and/or physical disability. Intimate partners may separate, divorce or, at the extreme, someone may be killed. Critiques Walker's cycle of abuse theory was regarded as a revolutionary and important concept in the study of abuse and interpersonal violence, which is a useful model, but may be simplistic. For instance, Scott Allen Johnson developed a 14-stage cycle that broke down the tension-building, acting-out and calm stages further. 
For instance, there are six stages in the "escalation" or tension building stage. These lead up to the assault by acting out the revenge plan, self-destructive behavior, victim grooming and the actual physical and/or sexual assault. This is followed by a sense of relief, fear of consequences, distraction, and rationalization of abuse. Donald Dutton and Susan Golant agree that Walker's cycle of abuse accurately describes all cyclically abusive relationships they studied. Nonetheless, they also note that her initial research was based almost entirely on anecdotal data from a rather small set of women who were in violent relationships. Walker herself wrote, "These women were not randomly selected and they cannot be considered a legitimate data base from which to make specific generalizations." See also References Further reading Books Engel, Beverly Breaking the Cycle of Abuse: How to Move Beyond Your Past to Create an Abuse-Free Future (2005) Biddix, Brenda FireEagle Inside the Pain: (a survivors guide to breaking the cycles of abuse and domestic violence) (2006) Hameen, Latifah Suffering In Silence: Breaking the Cycle of Abuse (2006) Hegstrom, Paul Angry Men and the Women Who Love Them: Breaking the Cycle of Physical and Emotional Abuse (2004) Herbruck, Christine Comstock Breaking the cycle of child abuse (1979) Marecek, Mary Breaking Free from Partner Abuse: Voices of Battered Women Caught in the Cycle of Domestic Violence (1999) Mills, Linda G. Violent Partners: A Breakthrough Plan for Ending the Cycle of Abuse (2008) Ney, Philip G. & Peters, Anna Ending the Cycle of Abuse: The Stories of Women Abused As Children & the Group Therapy Techniques That Helped Them Heal (1995) Pugh, Roxanne Deliverance from the Vicious Cycle of Abuse (2007) Quinn, Phil E. 
Spare the Rod: Breaking the Cycle of Child Abuse (Parenting/Social Concerns and Issues) (1988) Smullens, SaraKay Setting Yourself Free: Breaking the Cycle of Emotional Abuse in Family, Friendships, Work and Love (2002) Waldfogel, Jane The Future of Child Protection: How to Break the Cycle of Abuse and Neglect (2001) Wiehe, Vernon R. What Parents Need to Know About Sibling Abuse: Breaking the Cycle of Violence (2002) Academic journals Coxe, R & Holmes, W A study of the cycle of abuse among child molesters. Journal of Child Sexual Abuse, v10 n4 p111-18 2001 Dodge, K. A., Bates, J. E. and Pettit, G. S. (1990) Mechanisms in the cycle of violence. Science, 250: 1678–1681. Egeland, B., Jacobvitz, D., & Sroufe, L. A. (1988). Breaking the cycle of abuse: Relationship predictors. Child Development, 59(4), 1080–1088. Egeland, B & Erickson, M - Rising above the past: Strategies for helping new mothers break the cycle of abuse and neglect. Zero to Three 1990, 11(2):29-35. Egeland, B. (1993) A history of abuse is a major risk factor for abusing the next generation. In: R. J. Gelles and D. R. Loseke (eds) Current controversies on family violence. Newbury Park, Calif.; London: Sage. Furniss, Kathleen K. Ending the cycle of abuse: what behavioral health professionals need to know about domestic violence. An article from: Behavioral Healthcare (2007) Glasser, M & Campbell, D & Glasser, A & Leitch I & Farrelly S Cycle of child sexual abuse: links between being a victim and becoming a perpetrator The British Journal of Psychiatry (2001) 179: 482-494 Kirn, Timothy F. Sexual abuse cycle can be broken, experts assert. (Psychiatry): An article from: Internal Medicine News (2008) Quayle, E Taylor, M - Child pornography and the Internet: Perpetuating a cycle of abuse Deviant Behavior, Volume 23, Issue 4 July 2002, pages 331 - 361 Stone, AE & Fialk, RJ Criminalizing the exposure of children to family violence: Breaking the cycle of abuse 20 Harv. Women's L.J. 
205, Spring, 1997 Woods, J Breaking the cycle of abuse and abusing: Individual psychotherapy for juvenile sex Clinical Child Psychology and Psychiatry, Vol. 2, No. 3, 379-392 (1997) Abuse Interpersonal relationships
https://en.wikipedia.org/wiki/Multiple%20single-level
Multiple single-level or multi-security level (MSL) is a means to separate different levels of data by using separate computers or virtual machines for each level. It aims to give some of the benefits of multilevel security without needing special changes to the OS or applications, but at the cost of needing extra hardware. The drive to develop MLS operating systems was severely hampered by the dramatic fall in data processing costs in the early 1990s. Before the advent of desktop computing, users with classified processing requirements had to either spend a lot of money for a dedicated computer or use one that hosted an MLS operating system. Throughout the 1990s, however, many offices in the defense and intelligence communities took advantage of falling computing costs to deploy desktop systems classified to operate only at the highest classification level used in their organization. These desktop computers operated in system high mode and were connected with LANs that carried traffic at the same level as the computers. MSL implementations such as these neatly avoided the complexities of MLS but traded off technical simplicity for inefficient use of space. Because most users in classified environments also needed unclassified systems, users often had at least two computers and sometimes more (one for unclassified processing and one for each classification level processed). In addition, each computer was connected to its own LAN at the appropriate classification level, meaning that multiple dedicated cabling plants were incorporated (at considerable cost in terms of both installation and maintenance). Limits of MSL versus MLS The obvious shortcoming of MSL (as compared to MLS) is that it does not support immixture of various classification levels in any manner. 
For example, the notion of concatenating a SECRET data stream (taken from a SECRET file) with a TOP SECRET data stream (read from a TOP SECRET file) and directing the resultant TOP SECRET data stream into a TOP SECRET file is unsupported. In essence, an MSL system can be thought of as a set of parallel (and collocated) computer systems, each restricted to operation at one, and only one, security level. Indeed, the individual MSL operating systems may not even understand the concept of security levels, since they operate as single-level systems. For example, while one of a set of collocated MSL OS may be configured to affix the character string "SECRET" to all output, that OS has no understanding of how the data compares in sensitivity and criticality to the data processed by its peer OS that affixes the string "UNCLASSIFIED" to all of its output. Operating across two or more security levels must therefore use methods extraneous to the purview of the MSL "operating systems" per se, and these typically require human intervention, termed "manual review". For example, an independent monitor (not in Brinch Hansen's sense of the term) may be provided to support migration of data among multiple MSL peers (e.g., copying a data file from the UNCLASSIFIED peer to the SECRET peer). Although no strict requirements by way of federal legislation specifically address the concern, it would be appropriate for such a monitor to be quite small, purpose-built, and supportive of only a small number of very rigidly defined operations, such as importing and exporting files, configuring output labels, and other maintenance/administration tasks that require handling all the collocated MSL peers as a unit rather than as individual, single-level systems.
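The core check such a monitor would perform can be sketched in a few lines of Python. The level ordering, the compartment scheme, and the function names below are illustrative assumptions, not drawn from any fielded monitor: a copy between peers is permitted only when the destination label dominates the source label (uphill only); everything else would be deferred to manual review.

```python
# Hedged sketch of an MSL migration monitor's policy check. A label is a
# (level, compartments) pair; label a dominates label b when a's level is
# at least b's and a's compartments are a superset of b's. All names and
# the ordering below are illustrative assumptions.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a, b):
    """True if label a is at least as sensitive as label b."""
    (lvl_a, comp_a), (lvl_b, comp_b) = a, b
    return LEVELS[lvl_a] >= LEVELS[lvl_b] and comp_a >= comp_b

def may_copy(src_label, dst_label):
    """Allow a peer-to-peer file copy only when it flows uphill;
    anything else is left to manual review by a cleared administrator."""
    return dominates(dst_label, src_label)

unclass = ("UNCLASSIFIED", frozenset())
secret = ("SECRET", frozenset())
print(may_copy(unclass, secret))  # uphill copy: permitted
print(may_copy(secret, unclass))  # downgrade: blocked, manual review needed
```

Note that the compartment check makes dominance a partial order, not a total one: two SECRET labels with disjoint compartments dominate neither each other, so a copy between them is blocked in both directions.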
It may also be appropriate to utilize a hypervisor software architecture, such as VMware, to provide a set of peer MSL "OS" in the form of distinct, virtualized environments supported by an underlying OS that is only accessible to administrators cleared for all of the data managed by any of the peers. From the users' perspectives, each peer would present a login or X display manager session logically indistinguishable from the underlying "maintenance OS" user environment. Advances in MSL The cost and complexity involved in maintaining distinct networks for each level of classification led the National Security Agency (NSA) to begin research into ways in which the MSL concept of dedicated system high systems could be preserved while reducing the physical investment demanded by multiple networks and computers. Periods processing was the first advance in this area, establishing protocols by which agencies could connect a computer to a network at one classification, process information, sanitize the system, and connect it to a different network with another classification. The periods processing model offered the promise of a single computer but did nothing to reduce multiple cabling plants and proved enormously inconvenient to users; accordingly, its adoption was limited. In the 1990s, the rise of virtualization technology changed the playing field for MSL systems. Suddenly, it was possible to create virtual machines (VMs) that behaved as independent computers but ran on a common hardware platform. With virtualization, NSA saw a way to preserve periods processing on a virtual level, no longer needing the physical system to be sanitized by performing all processing within dedicated, system-high VMs. To make MSL work in a virtual environment, however, it was necessary to find a way to securely control the virtual session manager and ensure that no compromising activity directed at one VM could compromise another. 
MSL solutions NSA pursued multiple programs aimed at creating viable, secure MSL technologies leveraging virtualization. To date, three major solutions have materialized. "Multiple Independent Levels of Security" or MILS, an architectural concept developed by Dr. John Rushby that combines high-assurance security separation with high-assurance safety separation. Subsequent refinement by NSA and Naval Postgraduate School in collaboration with Air Force Research Laboratory, Lockheed Martin, Rockwell Collins, Objective Interface Systems, University of Idaho, Boeing, Raytheon, and MITRE resulted in a Common Criteria EAL-6+ Protection Profile for a high-assurance separation kernel. "NetTop", developed by NSA in partnership with VMware, Inc., uses security-enhanced Linux (SELinux) as the base operating system for its technology. The SELinux OS securely holds the virtual session manager, which in turn creates virtual machines to perform processing and support functions. The "Trusted Multi-Net", a commercial off-the-shelf (COTS) system based on a thin client model, was developed jointly by an industry coalition including Microsoft Corporation, Citrix Systems, NYTOR Technologies, VMware, Inc., and MITRE Corporation to offer users access to classified and unclassified networks. Its architecture eliminates the need for multiple cabling plants, leveraging encryption to transmit all traffic over a cable approved for the highest level accessed. Both the NetTop and Trusted Multi-Net solutions have been approved for use. In addition, Trusted Computer Solutions has developed a thin-client product, originally based on the NetTop technology concepts through a licensing agreement with NSA. This product is called SecureOffice® Trusted Thin Client™, and runs on the LSPP configuration of Red Hat Enterprise Linux version 5 (RHEL5).
Three competing companies have implemented MILS separation kernels: Green Hills Software, LynuxWorks, and Wind River Systems. In addition, there have been advances in the development of non-virtualization MSL systems through the use of specialized hardware, resulting in at least one viable solution: The Starlight Technology (now marketed as the Interactive Link System), developed by the Australian Defence Science Technology Organisation (DSTO) and Tenix Pty Ltd, uses specialized hardware to allow users to interact with a "Low" network from a "High" network session within a window, without any data flowing from the "High" to the "Low" network. Philosophical aspects, ease of use, flexibility It is interesting to consider the philosophical implications of the MSL "solution path." Rather than providing MLS abilities within a classical OS, the chosen direction is to build a set of "virtual OS" peers that can be managed, individually and as a collective, by an underlying real OS. If the underlying OS (let us introduce the term maintenance operating system, or MOS) is to have sufficient understanding of MLS semantics to prevent grievous errors, such as copying data from a TOP SECRET MSL peer to an UNCLASSIFIED MSL peer, then the MOS must have the ability to: represent labels; associate labels with entities (here we rigorously avoid the terms "subject" and "object"); compare labels (rigorously avoiding the term "reference monitor"); distinguish between those contexts where labels are meaningful and those where they are not (rigorously avoiding the term "trusted computing base" [TCB]); the list goes on. One readily perceives that the MLS architecture and design issues have not been eliminated, merely deferred to a separate stratum of software that invisibly manages mandatory access control concerns so that superjacent strata need not.
This concept is none other than the germinal architectural concept (taken from the Anderson Report) underlying DoD-style trusted systems in the first place. What has been positively achieved by the set-of-MSL-peers abstraction, however, is radical restriction of the scope of MAC-cognizant software mechanisms to the small, subjacent MOS. This has been accomplished, though, at the cost of eliminating any practical MLS abilities, even the most elementary ones, as when a SECRET-cleared user appends an UNCLASSIFIED paragraph, taken from an UNCLASSIFIED file, to his SECRET report. The MSL implementation would obviously require every "reusable" resource (in this example, the UNCLASSIFIED file) to be replicated across every MSL peer that might find it useful—meaning either much secondary storage needlessly expended or an intolerable burden on the cleared administrator able to effect such replications in response to users' requests therefor. (Of course, since the SECRET user cannot "browse" the system's UNCLASSIFIED offerings other than by logging out and beginning an UNCLASSIFIED session afresh, one evidences yet another severe limitation on functionality and flexibility.) Alternatively, less sensitive file systems could be NFS-mounted read-only so that more trustworthy users could browse, but not modify, their content. However, the MSL OS peer would have no actual means for distinguishing (via a directory listing command, e.g.) that the NFS-mounted resources are at a different level of sensitivity than the local resources, and no strict means for preventing illegal uphill flow of sensitive information other than the brute-force, all-or-nothing mechanism of read-only NFS mounting. To demonstrate just what a handicap this drastic effectuation of "cross-level file sharing" actually is, consider the case of an MLS system that supports UNCLASSIFIED, SECRET, and TOP SECRET data, and a TOP SECRET cleared user who logs into the system at that level.
MLS directory structures are built around the containment principle, which, loosely speaking, dictates that higher sensitivity levels reside deeper in the tree: commonly, the level of a directory must match or dominate that of its parent, while the level of a file (more specifically, of any link thereto) must match that of the directory that catalogs it. (This is strictly true of MLS UNIX: alternatives that support different conceptions of directories, directory entries, i-nodes, etc.—such as Multics, which adds the "branch" abstraction to its directory paradigm—tolerate a broader set of alternative implementations.) Orthogonal mechanisms are provided for publicly shared and spool directories, such as /tmp or C:\TEMP, which are automatically—and invisibly—partitioned by the OS, with users' file access requests automatically "deflected" to the appropriately labeled directory partition. The TOP SECRET user is free to browse the entire system, his only restriction being that—while logged in at that level—he is allowed to create fresh TOP SECRET files only within specific directories or their descendants. In the MSL alternative, where any browsable content must be specifically, laboriously replicated across all applicable levels by a fully cleared administrator—meaning, in this case, that all SECRET data must be replicated to the TOP SECRET MSL peer OS, while all UNCLASSIFIED data must be replicated to both the SECRET and TOP SECRET peers—one can readily perceive that, the more highly cleared the user, the more frustrating his timesharing computing experience will be. In a classical trusted systems-theoretic sense—relying upon terminology and concepts taken from the Orange Book, the foundation of trusted computing—a system that supports MSL peers could not achieve a level of assurance beyond (B1).
This is because the (B2) criteria require, among other things, both clear identification of a TCB perimeter and the existence of a single, identifiable entity that has the ability and authority to adjudicate access to all data represented throughout all accessible resources of the ADP system. In a very real sense, then, the application of the term "high assurance" as a descriptor of MSL implementations is nonsensical, since the term "high assurance" is properly limited to (B3) and (A1) systems—and, with some laxity, to (B2) systems. Cross-domain solutions MSL systems, whether virtual or physical in nature, are designed to preserve isolation between different classification levels. Consequently, (unlike MLS systems), an MSL environment has no innate abilities to move data from one level to another. To permit data sharing between computers working at different classification levels, such sites deploy cross-domain solutions (CDS), which are commonly referred to as gatekeepers or guards. Guards, which often leverage MLS technologies themselves, filter traffic flowing between networks; unlike a commercial Internet firewall, however, a guard is built to much more stringent assurance requirements and its filtering is carefully designed to try to prevent any improper leakage of classified information between LANs operating at different security levels. Data diode technologies are used extensively where data flows are required to be restricted to one direction between levels, with a high level of assurance that data will not flow in the opposite direction. In general, these are subject to the same restrictions that have imposed challenges on other MLS solutions: strict security assessment and the need to provide an electronic equivalent of stated policy for moving information between classifications. (Moving information down in classification level is particularly challenging and typically requires approval from several different people.)
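The one-way flow that a data diode enforces can be illustrated with a small model: frames may pass only from the designated low side to the high side, and anything arriving in the reverse direction is dropped. The network names and the queue-based model below are illustrative assumptions, not a description of any real diode product.

```python
# Hedged sketch of data-diode behavior between two networks: traffic passes
# only low -> high; reverse-direction frames are silently dropped. The class
# and network names are invented for illustration.
from collections import deque

class DataDiode:
    def __init__(self, low: str, high: str):
        self.low, self.high = low, high
        self.delivered = deque()   # frames that reached the high side
        self.dropped = 0           # reverse-direction frames, dropped

    def transmit(self, src: str, dst: str, frame: bytes) -> bool:
        """Pass a frame only in the low -> high direction."""
        if src == self.low and dst == self.high:
            self.delivered.append(frame)
            return True
        self.dropped += 1
        return False

diode = DataDiode(low="UNCLASSIFIED-LAN", high="SECRET-LAN")
print(diode.transmit("UNCLASSIFIED-LAN", "SECRET-LAN", b"sensor data"))  # True
print(diode.transmit("SECRET-LAN", "UNCLASSIFIED-LAN", b"reply"))        # False
```

The asymmetry is the whole point: unlike a guard, which inspects content and may pass traffic in either direction under policy, the diode's decision depends only on direction, which is why it can be assured to a high level.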
As of late 2005, numerous high-assurance platforms and guard applications have been approved for use in classified environments. N.b. that the term "high-assurance" as employed here is to be evaluated in the context of DCID 6/3 (read "dee skid six three"), a quasi-technical guide to the construction and deployment of various systems for processing classified information, lacking both the precise legal rigidity of the Orange Book criteria and the underlying mathematical rigor. (The Orange Book is motivated by, and derived from, a logical "chain of reasoning" constructed as follows: [a] a "secure" state is mathematically defined, and a mathematical model is constructed, the operations upon which preserve secure state so that any conceivable sequence of operations starting from a secure state yields a secure state; [b] a mapping of judiciously chosen primitives to sequences of operations upon the model; and [c] a "descriptive top-level specification" that maps actions that can be transacted at the user interface (such as system calls) into sequences of primitives; but stopping short of either [d] formally demonstrating that a live software implementation correctly implements said sequences of actions; or [e] formally arguing that the executable, now "trusted," system is generated by correct, reliable tools [e.g., compilers, librarians, linkers].)
https://en.wikipedia.org/wiki/Gymnopilus%20abramsii
Gymnopilus abramsii is a species of mushroom-forming fungus in the family Hymenogastraceae. It was first described by American mycologist William Alphonso Murrill in 1917. The epithet abramsii commemorates LeRoy Abrams. Description The cap is in diameter. Habitat and distribution Found in California, Gymnopilus abramsii grows on soil, and typically fruits in November. See also List of Gymnopilus species References
https://en.wikipedia.org/wiki/Leratiomyces%20ceres
Leratiomyces ceres, commonly known as the chip cherry or redlead roundhead, is a mushroom which has a bright red to orange cap and dark purple-brown spore deposit. It is usually found growing gregariously on wood chips and is one of the most common and most distinctive mushrooms found in that habitat. It is common on wood chips and lawns in North America, Europe, Australia, New Zealand and elsewhere. The name Stropharia aurantiaca has been used extensively but incorrectly for this mushroom (together with a number of similar synonyms). Description L. ceres may be described as follows. Cap: 2 to 6 cm in diameter, with thin flesh and a bright red to orange top which is convex to plane in age. Has white partial veil remnants when young. The cap surface is usually dry, but can be slightly viscid when moist. Gills: White to pale grey at first, later darker purple/brown or purplish grey with whitish edges. Attached (adnexed to adnate) and often notched. Stipe: Whitish, often with dark orange stains in age (most evident around base), 3–6 cm long and 0.5 to 1 cm wide, equal to slightly larger at the base, which often has mycelium attached. The veil is thin and leaves a fragile, indistinct ring, sometimes missing with age. The stalk is smooth above the ring zone and is fluffy with tiny scales below, which often wash off in rain. Spores: Dark purple/brown. 10–13.5 × 6–8.5 µm. Elliptical and smooth. Other microscopic features: Chrysocystidia are present both on the edges and on the faces of the gills. Naming There has been some confusion between L. ceres, which has a fairly thick white stem, and L. squamosus var. thraustus, which has a slender stem and prominent scales below the ring zone (although the two taxa are quite easy to distinguish by sight). Around 1885 Mordecai Cubitt Cooke originated the names Agaricus squamosus f. aurantiacus and Agaricus thraustus var. aurantiacus, and this later gave rise to the name Stropharia aurantiaca.
This name is defined by Cooke's illustration to his Handbook of British Fungi and in 2004 Richard Fortey discovered that this illustration was not of L. ceres, as had generally been assumed, but of L. squamosus var. thraustus. Thus the name aurantiaca is best avoided, being wrong when applied to L. ceres. The name Agaricus ceres was created in 1888 by Cooke and Massee for the white-stemmed species, and was reclassified as Psilocybe ceres (in 1891) and Leratiomyces ceres (in 2008). Similar species Similar species include L. squamosus, Agrocybe putaminum, Gymnopilus sapineus, Psathyrella corrugis, Stropharia squamosa, S. thrausta, and Tubaria furfuracea. In psilocybin mushroom hunting communities in Australia and New Zealand, L. ceres (or "Larrys" as commonly nicknamed) are scorned as lookalikes and imposters of Psilocybe species on wood chip. Prolific growth in the same habitats and a similar appearance from afar can give false hope of a large bounty, but on closer inspection the species are not particularly alike. References External links Mykoweb - Leratiomyces ceres Mushroom Expert - Leratiomyces ceres
https://en.wikipedia.org/wiki/Stepwell
Stepwells (also known as vavs or baori) are wells, cisterns or ponds with a long corridor of steps that descend to the water level. Stepwells played a significant role in defining subterranean architecture in western India from the 7th to the 19th century. Some stepwells are multi-storeyed and can be accessed by a Persian wheel which is pulled by a bull to bring water to the first or second floor. They are most common in western India and are also found in the other more arid regions of the Indian subcontinent, extending into Pakistan. The construction of stepwells is mainly utilitarian, though they may include embellishments of architectural significance, and be temple tanks. Stepwells are examples of the many types of storage and irrigation tanks that were developed in India, mainly to cope with seasonal fluctuations in water availability. A basic difference between stepwells on one hand, and tanks and wells on the other, is that stepwells make it easier for people to reach the groundwater and to maintain and manage the well. Basic architecture The builders dug deep trenches into the earth for dependable, year-round groundwater. They lined the walls of these trenches with blocks of stone, without mortar, and created stairs leading down to the water. This led to the building of some significant ornamental and architectural features, often associated with dwellings in urban areas. It also ensured their survival as monuments. A stepwell structure consists of two sections: a vertical shaft from which water is drawn and the surrounding inclined subterranean passageways and the chambers and steps which provide access to the well. The galleries and chambers surrounding these wells were often carved profusely with elaborate detail and became cool, quiet retreats during the hot summers. Names A number of distinct names, sometimes local, exist for stepwells. 
In Hindi-speaking regions, they include names based on baudi (including bawdi (), bawri, bawari, baori, baoli, bavadi and bavdi). In Gujarati and Marwari, they are usually called vav, vavri or vaav (). Other names include kalyani or pushkarani (Kannada), baoli (), barav () and degeenar (Bhojpuri: 𑂙𑂵𑂏𑂲𑂢𑂰𑂩). History The stepwell may have originated during periods of drought to ensure enough access to water. The earliest archaeological evidence of stepwells is found at Dholavira, where the site also has water tanks or reservoirs with flights of steps. Mohenjo-daro's Great Bath is also provided with steps on opposite sides. Ashokan inscriptions mention construction of stepwells along major Indian roads at a distance of every 8 kos (about 20.8 miles or 33.5 km) for the convenience of travellers, but Ashoka states that it was a well-established practice which predated him and was done by former kings as well. The first rock-cut stepwells in India date from 200 to 400 AD. The earliest example of a bath-like pond reached by steps is found at Uperkot caves in Junagadh. These caves are dated to the 4th century. Navghan Kuvo, a well with a circular staircase in the vicinity, is another example. It was possibly built in the Western Satrap (200–400 AD) or Maitraka (600–700 AD) period, though some place it as late as the 11th century. The nearby Adi Kadi Vav was constructed either in the second half of the 10th century or the 15th century. The stepwells at Dhank in Rajkot district are dated to 550–625 AD, followed by the stepped ponds at Bhinmal (850–950 AD). Stepwells were constructed in the southwestern region of Gujarat around 600 AD; from there they spread north to Rajasthan and subsequently to north and west India. Initially used as an art form by Hindus, the construction of these stepwells hit its peak during Muslim rule from the 11th to 16th century.
One of the earliest existing examples of stepwells was built in the 11th century in Gujarat, the Mata Bhavani's Stepwell. A long flight of steps leads to the water below a sequence of multi-story open pavilions positioned along the east–west axis. The elaborate ornamentation of the columns, brackets and beams is a prime example of how stepwells were used as a form of art. The Mughal emperors did not disrupt the culture that was practiced in these stepwells and encouraged the building of stepwells. The authorities during the British Raj found the hygiene of the stepwells less than desirable and installed pipe and pump systems to replace their purpose. Location of a stepwell A stepwell is generally located in one of two places: as an extension or part of a temple, and/or on the outskirts of a village. When a stepwell is associated with a temple or a shrine, it is either at the opposite wall of it or in front of the temple. The Sindhvai Mata stepwell in Patan, the Mata Bhavani stepwell in Ahmedabad, and the Ankol Mata stepwell in Davad serve as examples of stepwells that house shrines. Function and use The stepwell ensures the availability of water during periods of drought. The stepwells had social, cultural and religious significance. These stepwells proved to be sturdy, well-built structures, having withstood earthquakes. In most places in India, fresh water is abundant only during the monsoon season, so stepwells and wells play a critical role as a direct means of access to fresh water filtered through the earth. While the rivers, rivulets, creeks, and other natural water bodies dry up in this climate zone, stepwells and wells remain at a depth where there is less exposure to sun and heat. The majority of surviving stepwells originally served a leisure purpose alongside being a main source of water for basic needs like bathing, washing clothes, farming, and watering animals. Stepwells also served as a place for social gatherings and religious ceremonies.
Usually, women were more associated with these wells because they were the ones who collected the water. Also, it was they who prayed and offered gifts to the goddess of the well for her blessings. The well-water is known to attract insects, animals, and many other germ-breeding organisms. These stepwells, being a common space in frequent use by the inhabitants of the area, were considered to be a source of spreading epidemics and diseases. Details Many stepwells have ornamentation and details as elaborate as those of Hindu temples. Proportions in relationship to the human body were used in their design, as they were in many other structures in Indian architecture. Stepped ponds Stepped ponds are very similar to stepwells in terms of purpose. Generally, stepped ponds accompany nearby temples while stepwells are more isolated. Stepwells are dark and barely visible from the surface, while stepped ponds are illuminated by the light from the sun. Stepwells are more linear in design compared to the rectangular shape of stepped ponds. In India A number of surviving significant stepwells can be found across India, including in Rajasthan, Gujarat, Delhi, Madhya Pradesh, Maharashtra, and North Karnataka (Karnataka). In 2016 a collaborative mapping project, Stepwell Atlas, started to map GPS coordinates and collate information on stepwells, mapping over 2800 stepwells in India. Another project mapped the location of over 1700 stepwells in Maharashtra. Delhi & Haryana: In his book Delhi Heritage: Top 10 Baolis, Vikramjit Singh Rooprai mentions that Delhi alone has 32 stepwells. Out of these, 16 are lost, but their locations can be traced. Of the remaining 16, only 14 are accessible to the public and the water level in these keeps varying, while two are now permanently dry.
Gujarat: Rani ki vav at Patan; Adalaj ni Vav at Adalaj, Gandhinagar; Dada Harir Stepwell, Ahmedabad; Navghan Kuvo and Adi Kadi vav, Uparkot Fort, Junagadh; Vanarashi Vav, Vavdi, Bhavnagar district
Haryana: Baoli Ghaus Ali Shah, Farrukhnagar, Gurugram district
Karnataka: Kalyani, Hulikere; Bhoga Nandeeshwara Temple, Karnataka
Kerala: Sree Peralassery Temple
Maharashtra: Charthana Stepwell, Parbhani; Pingli Stepwell, Parbhani; Arvi Stepwell, Parbhani
Rajasthan: Bundi, which has over 60 baolis in and around the town, including Raniji ki Baori; Jaipur, including Chand Baori in Abhaneri near Jaipur and Panna Meena ka Kund, Amer; Jodhpur, including Birkha Bawari; Neem Ka Thana, including Udoji ki Baori at Mandholi, 5 km north of Neem ka Thana on the Neem ka Thana-Mandholi-Khetri highway; and Udaipur
Telangana: Bansilalpet Stepwell
Uttar Pradesh: Shahi Baoli, Lucknow
In Pakistan Stepwells from the Mughal period still exist in Pakistan. Some are preserved while others are not: Bahar Wali Baoli, in Kharian; Rohtas Fort, near Jhelum; Wan Bhachran, near Mianwali; Losar Baoli, near Islamabad; Makli Baoli, near Thatta. Influence Stepwells influenced many other structures in Indian architecture, especially those that incorporate water into their design. For example, the Aram Bagh in Agra was the first Mughal garden in India. It was designed by the Mughal emperor Babur and reflected his notion of paradise not only through water and landscaping but also through symmetry by including a reflecting pool in the design. He was inspired by stepwells and felt that one would complement the garden of his palace. Many other Mughal gardens include reflecting pools to enhance the landscape or serve as an elegant entrance.
Other notable gardens in India which incorporate water into their design include: Humayun's Tomb, Nizamuddin East, Delhi; Taj Mahal, Agra; Mehtab Bagh, Agra; Safdarjung's Tomb; Shalimar Bagh (Srinagar), Jammu and Kashmir; Nishat Gardens, Jammu and Kashmir; Yadvindra Gardens, Pinjore; Khusro Bagh, Allahabad; Roshanara Bagh. Gallery See also Ancient India Water supply and sanitation in the Indus-Saraswati Valley Civilisation History of stepwells in Gujarat Water resources in India Notes References Rima Hooja: "Channeling Nature: Hydraulics, Traditional Knowledge Systems, And Water Resource Management in India – A Historical Perspective". At infinityfoundation.com. Livingston, Morna & Beach, Milo (2002). Steps to Water: The Ancient Stepwells of India. Princeton Architectural Press. Vikramjit Singh Rooprai. Delhi Heritage: Top 10 Baolis (2019). Niyogi Books. Jutta Jain-Neubauer. The Stepwells of Gujarat: An Art-Historical Perspective (2001). Philip Davies. The Penguin Guide to the Monuments of India, Vol. II (London: Viking, 1989). Christopher Tadgell. The History of Architecture in India (London: Phaidon Press, 1990). Abhilash Shekhawat, "Stepwells of Gujarat." India's Invitation. 2010. Web. 29 March 2012. <http://www.indiasinvitation.com/stepwells_of_gujarat/>. Further reading Azmi, Feza Tabassum. The ancient stepwells helping to curb India's water crisis. BBC. External links Stepwell Atlas Stepwells of India Agrasen ki Baoli Stepwell architecture Stepwell on Oxford Art Online India's Forgotten Stepwells at ArchDaily