Non-covalent interaction

In chemistry, a non-covalent interaction differs from a covalent bond in that it does not involve the sharing of electrons, but rather involves more dispersed variations of electromagnetic interactions between molecules or within a molecule. The chemical energy released in the formation of non-covalent interactions is typically on the order of 1–5 kcal/mol (1000–5000 calories per 6.02×10²³ molecules). Non-covalent interactions can be classified into different categories, such as electrostatic, π-effects, van der Waals forces, and hydrophobic effects.
Non-covalent interactions are critical in maintaining the three-dimensional structure of large molecules, such as proteins and nucleic acids. They are also involved in many biological processes in which large molecules bind specifically but transiently to one another (see the properties section of the DNA page). These interactions also heavily influence drug design, crystallinity and design of materials, particularly for self-assembly, and, in general, the synthesis of many organic molecules.
Non-covalent interactions may occur between different parts of the same molecule (e.g. during protein folding) or between different molecules, and are therefore also discussed as intermolecular forces.
Electrostatic interactions
Ionic
Ionic interactions involve the attraction of ions or molecules with full permanent charges of opposite signs. For example, sodium fluoride involves the attraction of the positive charge on sodium (Na+) with the negative charge on fluoride (F−). However, this particular interaction is easily broken upon addition to water or other highly polar solvents. In water, ion pairing is mostly entropy-driven; a single salt bridge usually amounts to an attraction of about ΔG = 5 kJ/mol at intermediate ionic strength I; at I close to zero the value increases to about 8 kJ/mol. The ΔG values are usually additive and largely independent of the nature of the participating ions, except for transition metal ions and the like.
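As a rough numerical illustration (an assumption-laden sketch, not from the article): treating Na+ and F− as point charges in vacuum, Coulomb's law gives an in-vacuo attraction hundreds of times larger than the ~5 kJ/mol salt-bridge value quoted above for water, which is one way to see how strongly a polar solvent screens ionic interactions. The function name and the 0.23 nm contact distance are illustrative choices.

```python
# Hedged sketch: Coulomb energy of an idealized Na+/F- point-charge pair
# in vacuum, converted to kJ/mol. Real ion pairs in water are screened by
# the solvent and bound far more weakly.
E_CHARGE = 1.602176634e-19        # elementary charge, C
K_COULOMB = 8.9875517873681764e9  # Coulomb constant, N m^2 C^-2
AVOGADRO = 6.02214076e23          # particles per mole

def ion_pair_energy_kj_per_mol(separation_nm, q1=+1, q2=-1):
    """Coulomb interaction energy of two point charges, per mole of pairs."""
    r = separation_nm * 1e-9  # nm -> m
    energy_j = K_COULOMB * (q1 * E_CHARGE) * (q2 * E_CHARGE) / r
    return energy_j * AVOGADRO / 1000.0  # J per pair -> kJ/mol

# An illustrative Na+...F- contact distance of ~0.23 nm:
print(round(ion_pair_energy_kj_per_mol(0.23)))  # about -600 kJ/mol (attractive)
```

The strongly negative in-vacuo value, compared with the ~5 kJ/mol measured in water, illustrates why these interactions are "easily broken upon addition to water".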
These interactions can also be seen in molecules with a localized charge on a particular atom. For example, the full negative charge associated with ethoxide, the conjugate base of ethanol, is most commonly accompanied by the positive charge of an alkali metal salt such as the sodium cation (Na+).
Hydrogen bonding
A hydrogen bond (H-bond) is a specific type of interaction that involves dipole–dipole attraction between a partially positive hydrogen atom and a highly electronegative, partially negative oxygen, nitrogen, sulfur, or fluorine atom (not covalently bound to said hydrogen atom). It is not a covalent bond, but instead is classified as a strong non-covalent interaction. It is the reason water is a liquid at room temperature rather than a gas (given water's low molecular weight). Most commonly, the strength of hydrogen bonds lies between 0–4 kcal/mol, but can sometimes be as strong as 40 kcal/mol. In solvents such as chloroform or carbon tetrachloride one observes, e.g. for the interaction between amides, additive values of about 5 kJ/mol. According to Linus Pauling, the strength of a hydrogen bond is essentially determined by the electrostatic charges. Measurements of thousands of complexes in chloroform or carbon tetrachloride have led to additive free energy increments for all kinds of donor–acceptor combinations.
Halogen bonding
Halogen bonding is a type of non-covalent interaction which does not involve the formation nor breaking of actual bonds, but rather is similar to the dipole–dipole interaction known as hydrogen bonding. In halogen bonding, a halogen atom acts as an electrophile, or electron-seeking species, and forms a weak electrostatic interaction with a nucleophile, or electron-rich species. The nucleophilic agent in these interactions tends to be highly electronegative (such as oxygen, nitrogen, or sulfur), or may be anionic, bearing a negative formal charge. As compared to hydrogen bonding, the halogen atom takes the place of the partially positively charged hydrogen as the electrophile.
Halogen bonding should not be confused with halogen–aromatic interactions, as the two are related but differ by definition. Halogen–aromatic interactions involve an electron-rich aromatic π-cloud as a nucleophile; halogen bonding is restricted to monatomic nucleophiles.
Van der Waals forces
Van der Waals forces are a subset of electrostatic interactions involving permanent or induced dipoles (or multipoles). These include the following:
permanent dipole–dipole interactions, alternatively called the Keesom force
dipole-induced dipole interactions, or the Debye force
induced dipole-induced dipole interactions, commonly referred to as London dispersion forces
Hydrogen bonding and halogen bonding are typically not classified as van der Waals forces.
Dipole–dipole
Dipole-dipole interactions are electrostatic interactions between permanent dipoles in molecules. These interactions tend to align the molecules to increase attraction (reducing potential energy). Normally, dipoles are associated with electronegative atoms, including oxygen, nitrogen, sulfur, and fluorine.
For example, acetone, the active ingredient in some nail polish removers, has a net dipole associated with the carbonyl (see figure 2). Since oxygen is more electronegative than the carbon that is covalently bonded to it, the electrons associated with that bond will be closer to the oxygen than the carbon, creating a partial negative charge (δ−) on the oxygen, and a partial positive charge (δ+) on the carbon. They are not full charges because the electrons are still shared through a covalent bond between the oxygen and carbon. If the electrons were no longer being shared, then the oxygen-carbon bond would be an electrostatic interaction.
Often molecules contain dipolar groups but have no overall dipole moment. This occurs when symmetry within the molecule causes the dipoles to cancel each other out, as in tetrachloromethane. Note that the dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole. See atomic dipoles.
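The cancellation in tetrachloromethane can be sketched numerically. In this short illustration (my own, assuming ideal tetrahedral geometry; the function name is not from the article), the four equal C–Cl bond dipoles point toward alternating corners of a cube, and their vector sum — the net molecular dipole — is zero:

```python
# Illustrative sketch: four equal bond dipoles along ideal tetrahedral
# directions cancel exactly, giving a zero net molecular dipole.
TETRAHEDRAL_DIRECTIONS = [
    (1, 1, 1),
    (1, -1, -1),
    (-1, 1, -1),
    (-1, -1, 1),
]

def net_dipole(bond_dipole_magnitude):
    """Vector sum of four equal bond dipoles along tetrahedral directions."""
    norm = 3 ** 0.5  # length of each direction vector
    sums = [0.0, 0.0, 0.0]
    for direction in TETRAHEDRAL_DIRECTIONS:
        for axis in range(3):
            sums[axis] += bond_dipole_magnitude * direction[axis] / norm
    return tuple(sums)

print(net_dipole(1.5))  # -> (0.0, 0.0, 0.0): the bond dipoles cancel
```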
Dipole-induced dipole
A dipole-induced dipole interaction (Debye force) is due to the approach of a molecule with a permanent dipole to another non-polar molecule with no permanent dipole. This approach causes the electrons of the non-polar molecule to be polarized toward or away from the dipole (or "induce" a dipole) of the approaching molecule. Specifically, the dipole can cause electrostatic attraction or repulsion of the electrons from the non-polar molecule, depending on orientation of the incoming dipole. Atoms with larger atomic radii are considered more "polarizable" and therefore experience greater attractions as a result of the Debye force.
London dispersion forces
London dispersion forces are the weakest type of non-covalent interaction. In organic molecules, however, the multitude of contacts can lead to larger contributions, particularly in the presence of heteroatoms. They are also known as "induced dipole–induced dipole interactions" and are present between all molecules, even those which inherently do not have permanent dipoles. Dispersive interactions increase with the polarizability of interacting groups, but are weakened by solvents of increased polarizability. They are caused by the temporary repulsion of electrons away from the electrons of a neighboring molecule, leading to a partially positive dipole on one molecule and a partially negative dipole on another molecule. Hexane is a good example of a molecule with no polarity or highly electronegative atoms, yet it is a liquid at room temperature due mainly to London dispersion forces. In this example, when one hexane molecule approaches another, a temporary, weak, partially negative dipole on the incoming hexane can polarize the electron cloud of another, causing a partially positive dipole on that hexane molecule. In the absence of solvents, hydrocarbons such as hexane form crystals due to dispersive forces; the sublimation heat of such crystals is a measure of the dispersive interaction. While these interactions are short-lived and very weak, they can be responsible for why certain non-polar molecules are liquids at room temperature.
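The distance dependence of dispersion can be illustrated with the textbook London approximation (a hedged sketch; this formula and the arbitrary-unit values are my illustration, not from this article): the attraction scales as r⁻⁶, so doubling the separation weakens it 64-fold, which is why dispersion only matters at close contact.

```python
# Sketch of the London approximation for the dispersion energy between two
# species with ionization energies I1, I2 and polarizabilities a1, a2:
#   E(r) = -(3/2) * I1*I2/(I1+I2) * a1*a2 / r^6   (arbitrary units here)
def london_energy(r, ionization_1, ionization_2, polarizability_1, polarizability_2):
    """Approximate dispersion energy between two atoms/molecules."""
    return (-1.5 * (ionization_1 * ionization_2 / (ionization_1 + ionization_2))
            * polarizability_1 * polarizability_2 / r ** 6)

# Doubling the separation cuts the attraction by a factor of 2**6 = 64:
near = london_energy(1.0, 10.0, 10.0, 1.0, 1.0)
far = london_energy(2.0, 10.0, 10.0, 1.0, 1.0)
print(near / far)  # -> 64.0
```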
π-effects
π-effects can be broken down into numerous categories, including π-stacking, cation-π and anion-π interactions, and polar-π interactions. In general, π-effects are associated with the interactions of molecules with the π-systems of arenes.
π–π interaction
π–π interactions are associated with the interaction between the π-orbitals of a molecular system. The high polarizability of aromatic rings leads to dispersive interactions as a major contribution to so-called stacking effects. These play a major role in the interactions of nucleobases, e.g. in DNA. For a simple example, a benzene ring, with its fully conjugated π cloud, will interact in two major ways (and one minor way) with a neighboring benzene ring through a π–π interaction (see figure 3). The two major ways that benzene stacks are edge-to-face, with an enthalpy of ~2 kcal/mol, and displaced (or slip stacked), with an enthalpy of ~2.3 kcal/mol. The sandwich configuration is not nearly as stable an interaction as the two previously mentioned due to high electrostatic repulsion of the electrons in the π orbitals.
Cation–π and anion–π interaction
Cation–π interactions involve the positive charge of a cation interacting with the electron-rich π-system of a molecule; in some contexts they can be as strong as, or stronger than, hydrogen bonding.
Anion–π interactions are very similar to cation–π interactions, but reversed. In this case, an anion sits atop an electron-poor π-system, usually established by the presence of electron-withdrawing substituents on the conjugated molecule.
Polar–π
Polar–π interactions involve molecules with permanent dipoles (such as water) interacting with the quadrupole moment of a π-system (such as that in benzene; see figure 5). While not as strong as a cation–π interaction, these interactions can be quite strong (~1–2 kcal/mol), and are commonly involved in protein folding and crystallinity of solids containing both hydrogen bonding and π-systems. In fact, any molecule with a hydrogen bond donor (hydrogen bound to a highly electronegative atom) will have favorable electrostatic interactions with the electron-rich π-system of a conjugated molecule.
Hydrophobic effect
The hydrophobic effect is the tendency of non-polar molecules to aggregate in aqueous solutions in order to separate from water. This phenomenon leads to a minimum exposed surface area of non-polar molecules to the polar water molecules (typically spherical droplets), and is commonly used in biochemistry to study protein folding and various other biological phenomena. The effect is also commonly seen when mixing various oils (including cooking oil) and water. Over time, oil sitting on top of water will begin to aggregate into large flattened spheres from smaller droplets, eventually leading to a film of all oil sitting atop a pool of water. However, the hydrophobic effect is not considered a non-covalent interaction, as it is a function of entropy and not a specific interaction between two molecules, and is usually characterized by entropy–enthalpy compensation. An essentially enthalpic hydrophobic effect materializes if a limited number of water molecules are restricted within a cavity; displacement of such water molecules by a ligand frees them, and in the bulk water they then enjoy a maximum of hydrogen bonds, close to four.
Examples
Drug design
Most pharmaceutical drugs are small molecules which elicit a physiological response by "binding" to enzymes or receptors, causing an increase or decrease in the enzyme's ability to function. The binding of a small molecule to a protein is governed by a combination of steric, or spatial, considerations in addition to various non-covalent interactions, although some drugs do covalently modify an active site (see irreversible inhibitors). Using the "lock and key model" of enzyme binding, a drug (key) must be of roughly the proper dimensions to fit the enzyme's binding site (lock). Using an appropriately sized molecular scaffold, drugs must also interact with the enzyme non-covalently in order to maximize the binding affinity (binding constant) and reduce the ability of the drug to dissociate from the binding site. This is achieved by forming various non-covalent interactions between the small molecule and the amino acids in the binding site, including hydrogen bonding, electrostatic interactions, pi stacking, van der Waals interactions, and dipole–dipole interactions.
Non-covalent metallo drugs have been developed. For example, dinuclear triple-helical compounds in which three ligand strands wrap around two metals, resulting in a roughly cylindrical tetracation have been prepared. These compounds bind to the less-common nucleic acid structures, such as duplex DNA, Y-shaped fork structures and 4-way junctions.
Protein folding and structure
The folding of proteins from a primary (linear) sequence of amino acids to a three-dimensional structure is directed by all types of non-covalent interactions, including the hydrophobic forces and formation of intramolecular hydrogen bonds. Three-dimensional structures of proteins, including the secondary and tertiary structures, are stabilized by formation of hydrogen bonds. Through a series of small conformational changes, spatial orientations are modified so as to arrive at the most energetically minimized orientation achievable. The folding of proteins is often facilitated by enzymes known as molecular chaperones. Sterics, bond strain, and angle strain also play major roles in the folding of a protein from its primary sequence to its tertiary structure.
Single tertiary protein structures can also assemble to form protein complexes composed of multiple independently folded subunits. As a whole, this is called a protein's quaternary structure. The quaternary structure is generated by the formation of relatively strong non-covalent interactions, such as hydrogen bonds, between different subunits to generate a functional polymeric enzyme. Some proteins also utilize non-covalent interactions to bind cofactors in the active site during catalysis, however a cofactor can also be covalently attached to an enzyme. Cofactors can be either organic or inorganic molecules which assist in the catalytic mechanism of the active enzyme. The strength with which a cofactor is bound to an enzyme may vary greatly; non-covalently bound cofactors are typically anchored by hydrogen bonds or electrostatic interactions.
Boiling points
Non-covalent interactions have a significant effect on the boiling point of a liquid. Boiling point is defined as the temperature at which the vapor pressure of a liquid is equal to the pressure surrounding the liquid. More simply, it is the temperature at which a liquid becomes a gas. As one might expect, the stronger the non-covalent interactions present for a substance, the higher its boiling point. For example, consider three compounds of similar chemical composition: sodium n-butoxide (C4H9ONa), diethyl ether (C4H10O), and n-butanol (C4H9OH).
The predominant non-covalent interactions associated with each species in solution are listed in the above figure. As previously discussed, ionic interactions require considerably more energy to break than hydrogen bonds, which in turn require more energy than dipole–dipole interactions. The trends observed in their boiling points (figure 8) show exactly the correlation expected, where sodium n-butoxide requires significantly more heat energy (higher temperature) to boil than n-butanol, which boils at a much higher temperature than diethyl ether. The heat energy required for a compound to change from liquid to gas is associated with the energy required to break the intermolecular forces each molecule experiences in its liquid state.
0.999...

In mathematics, 0.999... (also written as 0.(9), or with an overline or dot over the repeating 9) is a repeating decimal that is an alternative way of writing the number 1. Following the standard rules for representing numbers in decimal notation, its value is the smallest number greater than or equal to every number in the sequence 0.9, 0.99, 0.999, .... It can be proved that this number is 1; that is,

 0.999... = 1.
Despite common misconceptions, 0.999... is not "almost exactly 1" or "very, very nearly but not quite 1"; rather, "0.999..." and "1" represent the same number.
An elementary proof is given below that involves only elementary arithmetic and the fact that there is no positive real number less than all 1/10^n, where n is a natural number, a property that results immediately from the Archimedean property of the real numbers.
There are many other ways of showing this equality, from intuitive arguments to mathematically rigorous proofs. The intuitive arguments are generally based on properties of finite decimals that are extended without proof to infinite decimals. The proofs are generally based on basic properties of real numbers and methods of calculus, such as series and limits. A question studied in mathematics education is why some people reject this equality.
In other number systems, 0.999... can have the same meaning, a different definition, or be undefined. Every nonzero terminating decimal has two equal representations (for example, 8.32000... and 8.31999...). Having values with multiple representations is a feature of all positional numeral systems that represent the real numbers.
Elementary proof
It is possible to prove the equation using just the mathematical tools of comparison and addition of (finite) decimal numbers, without any reference to more advanced topics such as series and limits. The proof given below is a direct formalization of the intuitive fact that, if one draws 0.9, 0.99, 0.999, etc. on the number line, there is no room left for placing a number between them and 1. The meaning of the notation 0.999... is the least point on the number line lying to the right of all of the numbers 0.9, 0.99, 0.999, etc. Because there is ultimately no room between 1 and these numbers, the point 1 must be this least point, and so .
Intuitive explanation
If one places 0.9, 0.99, 0.999, etc. on the number line, one sees immediately that all these points are to the left of 1, and that they get closer and closer to 1. For any number x that is less than 1, the sequence 0.9, 0.99, 0.999, and so on will eventually reach a number larger than x. So, it does not make sense to identify 0.999... with any number smaller than 1. Meanwhile, every number larger than 1 will be larger than any decimal of the form 0.999...9 for any finite number of nines. Therefore, 0.999... cannot be identified with any number larger than 1, either. Because 0.999... cannot be bigger than 1 or smaller than 1, it must equal 1 if it is to be any real number at all.
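The first half of this argument can be checked mechanically with exact rational arithmetic. In the small sketch below (the helper name is mine, not the article's), for a given x < 1 we find how many nines are needed before a finite truncation 0.99...9 exceeds x:

```python
# For any rational x < 1, some finite truncation 0.99...9 already exceeds x,
# so 0.999... cannot be identified with any number smaller than 1.
from fractions import Fraction

def first_truncation_exceeding(x):
    """Smallest number of nines n such that 0.99...9 (n nines) > x."""
    n = 1
    # 0.99...9 with n nines equals (10^n - 1) / 10^n.
    while Fraction(10**n - 1, 10**n) <= x:
        n += 1
    return n

print(first_truncation_exceeding(Fraction(999999, 1000000)))  # -> 7
print(first_truncation_exceeding(Fraction(1, 2)))             # -> 1
```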
Rigorous proof
Denote by 0.(9)_n the number 0.999...9, with n nines after the decimal point. Thus 0.(9)_1 = 0.9, 0.(9)_2 = 0.99, 0.(9)_3 = 0.999, and so on. One has 1 − 0.(9)_1 = 0.1, 1 − 0.(9)_2 = 0.01, and so on; that is, 1 − 0.(9)_n = 1/10^n for every natural number n.
Let x be a number not greater than 1 and greater than 0.9, 0.99, 0.999, etc.; that is, 0.(9)_n < x ≤ 1 for every n. By subtracting these inequalities from 1, one gets 0 ≤ 1 − x < 1/10^n.
The end of the proof requires that there is no positive number that is less than 1/10^n for all n. This is one version of the Archimedean property, which is true for real numbers. This property implies that if 0 ≤ 1 − x < 1/10^n for all n, then 1 − x can only be equal to 0. So x = 1, and 1 is the smallest number that is greater than all 0.9, 0.99, 0.999, etc. That is, 0.999... = 1.
This proof relies on the Archimedean property of rational and real numbers. Real numbers may be enlarged into number systems, such as hyperreal numbers, with infinitely small numbers (infinitesimals) and infinitely large numbers (infinite numbers). When using such systems, the notation 0.999... is generally not used, as there is no smallest number among the numbers larger than all 0.(9).
Least upper bounds and completeness
Part of what this argument shows is that there is a least upper bound of the sequence 0.9, 0.99, 0.999, etc.: the smallest number that is greater than all of the terms of the sequence. One of the axioms of the real number system is the completeness axiom, which states that every bounded sequence has a least upper bound. This least upper bound is one way to define infinite decimal expansions: the real number represented by an infinite decimal is the least upper bound of its finite truncations. The argument here does not need to assume completeness to be valid, because it shows that this particular sequence of rational numbers has a least upper bound and that this least upper bound is equal to one.
Algebraic arguments
Simple algebraic illustrations of equality are a subject of pedagogical discussion and critique. Byers discusses the argument that, in elementary school, one is taught that 1/3 = 0.333..., so, ignoring all essential subtleties, "multiplying" this identity by 3 gives 0.999... = 1. He further says that this argument is unconvincing, because of an unresolved ambiguity over the meaning of the equals sign; a student might think, "It surely does not mean that the number 1 is identical to that which is meant by the notation 0.999...." Most undergraduate mathematics majors encountered by Byers feel that while 0.999... is "very close" to 1 on the strength of this argument, with some even saying that it is "infinitely close", they are not ready to say that it is equal to 1. Richman discusses how "this argument gets its force from the fact that most people have been indoctrinated to accept the first equation without thinking", but also suggests that the argument may lead skeptics to question this assumption.
Byers also presents the following argument:

 x = 0.999...
 10x = 9.999...      (multiplying each side by 10)
 10x = 9 + 0.999...  (splitting off the integer part)
 10x = 9 + x         (by the definition of x)
 9x = 9              (subtracting x from each side)
 x = 1               (dividing each side by 9)
Students who did not accept the first argument sometimes accept the second argument, but, in Byers's opinion, still have not resolved the ambiguity, and therefore do not understand the representation of infinite decimals. Other authors, presenting the same argument, also state that it does not explain the equality, indicating that such an explanation would likely involve concepts of infinity and completeness. Still others conclude that the treatment of the identity based on such arguments as these, without the formal concept of a limit, is premature, arguing that knowing one can multiply 0.999... by 10 by shifting the decimal point presumes an answer to the deeper question of how one gives a meaning to the expression 0.999... at all. Richman notes that skeptics may question whether 0.999... is cancellable, that is, whether it makes sense to subtract 0.999... from both sides; both the multiplication and the subtraction which removes the infinite decimal require further justification.
Analytic proofs
Real analysis is the study of the logical underpinnings of calculus, including the behavior of sequences and series of real numbers. The proofs in this section establish using techniques familiar from real analysis.
Infinite series and sequences
A common development of decimal expansions is to define them as sums of infinite series. In general:

 b_0.b_1b_2b_3b_4... = b_0 + b_1/10 + b_2/10^2 + b_3/10^3 + b_4/10^4 + ...

For 0.999... one can apply the convergence theorem concerning geometric series, stating that if |r| < 1, then:

 ar + ar^2 + ar^3 + ... = ar/(1 − r)

Since 0.999... is such a sum with a = 9 and common ratio r = 1/10, the theorem makes short work of the question:

 0.999... = 9(1/10) + 9(1/10)^2 + 9(1/10)^3 + ... = 9(1/10)/(1 − 1/10) = 1
This proof appears as early as 1770 in Leonhard Euler's Elements of Algebra.
The sum of a geometric series is itself a result even older than Euler. A typical 18th-century derivation used a term-by-term manipulation similar to the algebraic proof given above, and as late as 1811, Bonnycastle's textbook An Introduction to Algebra uses such an argument for geometric series to justify the same maneuver on 0.999.... A 19th-century reaction against such liberal summation methods resulted in the definition that still dominates today: the sum of a series is defined to be the limit of the sequence of its partial sums. A corresponding proof of the theorem explicitly computes that sequence; it can be found in several proof-based introductions to calculus or analysis.
A sequence (x_0, x_1, x_2, ...) has the value x as its limit if the distance |x − x_n| becomes arbitrarily small as n increases. The statement that 0.999... = 1 can itself be interpreted and proven as a limit:

 0.999... = lim_{n→∞} 0.99...9 (n nines) = lim_{n→∞} sum_{k=1}^{n} 9/10^k = lim_{n→∞} (1 − 1/10^n) = 1 − lim_{n→∞} 1/10^n = 1 − 0 = 1

The first two equalities can be interpreted as symbol shorthand definitions. The remaining equalities can be proven. The last step, that 1/10^n approaches 0 as n approaches infinity, is often justified by the Archimedean property of the real numbers. This limit-based attitude towards 0.999... is often put in more evocative but less precise terms. For example, the 1846 textbook The University Arithmetic explains, ".999 +, continued to infinity = 1, because every annexation of a 9 brings the value closer to 1"; the 1895 Arithmetic for Schools says, "when a large number of 9s is taken, the difference between 1 and .99999... becomes inconceivably small". Such heuristics are often incorrectly interpreted by students as implying that 0.999... itself is less than 1.
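The partial sums of this series can be checked with exact rational arithmetic. A minimal sketch (the function name is mine): each partial sum falls short of 1 by exactly 1/10^n, and the Archimedean property sends that shortfall to 0.

```python
# Exact check: sum_{k=1}^{n} 9/10^k = 1 - 1/10^n, so the shortfall from 1
# shrinks below any positive bound as n grows.
from fractions import Fraction

def partial_sum(n):
    """Sum of the first n terms 9/10 + 9/100 + ... + 9/10^n."""
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

for n in (1, 5, 20):
    shortfall = 1 - partial_sum(n)
    assert shortfall == Fraction(1, 10**n)  # exactly 10^-n, never zero but vanishing
    print(n, shortfall)
```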
Nested intervals and least upper bounds
The series definition above defines the real number named by a decimal expansion. A complementary approach is tailored to the opposite process: for a given real number, define the decimal expansion(s) to name it.
If a real number x is known to lie in the closed interval [0, 10] (that is, it is greater than or equal to 0 and less than or equal to 10), one can imagine dividing that interval into ten pieces that overlap only at their endpoints: [0, 1], [1, 2], [2, 3], and so on up to [9, 10]. The number x must belong to one of these; if it belongs to [2, 3], then one records the digit "2" and subdivides that interval into [2, 2.1], [2.1, 2.2], ..., [2.8, 2.9], [2.9, 3]. Continuing this process yields an infinite sequence of nested intervals, labeled by an infinite sequence of digits b_0, b_1, b_2, b_3, ..., and one writes

 x = b_0.b_1b_2b_3...
In this formalism, the identities 1 = 0.999... and 1 = 1.000... reflect, respectively, the fact that 1 lies in both [0, 1] and [1, 2], so one can choose either subinterval when finding its digits. To ensure that this notation does not abuse the "=" sign, one needs a way to reconstruct a unique real number for each decimal. This can be done with limits, but other constructions continue with the ordering theme.
One straightforward choice is the nested intervals theorem, which guarantees that given a sequence of nested, closed intervals whose lengths become arbitrarily small, the intervals contain exactly one real number in their intersection. So b_0.b_1b_2b_3... is defined to be the unique number contained within all the intervals [b_0, b_0 + 1], [b_0.b_1, b_0.b_1 + 0.1], and so on. 0.999... is then the unique real number that lies in all of the intervals [0, 1], [0.9, 1], [0.99, 1], and [0.99...9, 1] for every finite string of 9s. Since 1 is an element of each of these intervals, 0.999... = 1.
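The subdivision procedure can be sketched in code (my own illustration, using exact rationals; the function name is an assumption). When x falls on a shared endpoint, as 1 does at every stage, always choosing the lower subinterval — the one whose upper endpoint equals x — produces the trailing-9s expansion:

```python
# Digit extraction by repeated subdivision of nested intervals, preferring
# the lower subinterval whenever x lands on a shared endpoint.
from fractions import Fraction

def digits_preferring_nines(x, count):
    """First `count` decimal digits of x in [0, 1], choosing the lower
    subinterval whenever x falls on a shared endpoint."""
    digits = []
    low, width = Fraction(0), Fraction(1)
    for _ in range(count):
        width /= 10
        # Largest digit d whose subinterval [low + d*width, low + (d+1)*width]
        # still contains x, with x allowed to sit on the upper endpoint.
        d = 0
        while d < 9 and low + (d + 1) * width < x:
            d += 1
        digits.append(str(d))
        low += d * width
    return "".join(digits)

print(digits_preferring_nines(Fraction(1), 6))     # -> '999999'
print(digits_preferring_nines(Fraction(1, 4), 6))  # -> '249999'
```

The second call shows the general phenomenon: the same endpoint choice turns 0.25 into its trailing-9s counterpart 0.24999....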
The nested intervals theorem is usually founded upon a more fundamental characteristic of the real numbers: the existence of least upper bounds, or suprema. To directly exploit these objects, one may define b_0.b_1b_2b_3... to be the least upper bound of the set of approximants b_0, b_0.b_1, b_0.b_1b_2, .... One can then show that this definition (or the nested intervals definition) is consistent with the subdivision procedure, implying 0.999... = 1 again. Tom Apostol concludes, "the fact that a real number might have two different decimal representations is merely a reflection of the fact that two different sets of real numbers can have the same supremum."
Proofs from the construction of the real numbers
Some approaches explicitly define real numbers to be certain structures built upon the rational numbers, using axiomatic set theory. The natural numbers begin with 0 and continue upwards so that every number has a successor. One can extend the natural numbers with their negatives to give all the integers, and to further extend to ratios, giving the rational numbers. These number systems are accompanied by the arithmetic of addition, subtraction, multiplication, and division. More subtly, they include ordering, so that one number can be compared to another and found to be less than, greater than, or equal to another number.
The step from rationals to reals is a major extension. There are at least two popular ways to achieve this step, both published in 1872: Dedekind cuts and Cauchy sequences. Proofs that 0.999... = 1 that directly use these constructions are not found in textbooks on real analysis, where the modern trend for the last few decades has been to use axiomatic analysis. Even when a construction is offered, it is usually applied toward proving the axioms of the real numbers, which then support the above proofs. However, several authors express the idea that starting with a construction is more logically appropriate, and the resulting proofs are more self-contained.
Dedekind cuts
In the Dedekind cut approach, each real number x is defined as the infinite set of all rational numbers less than x. In particular, the real number 1 is the set of all rational numbers that are less than 1. Every positive decimal expansion easily determines a Dedekind cut: the set of rational numbers that are less than some stage of the expansion. So the real number 0.999... is the set of rational numbers r such that r < 0, or r < 0.9, or r < 0.99, or r is less than some other number of the form

 1 − 1/10^n = 0.99...9 (with n nines).
Every element of 0.999... is less than 1, so it is an element of the real number 1. Conversely, all elements of 1 are rational numbers that can be written as

 a/b, with b > 0 and a < b.

This implies

 1/b > 1/10^b

and thus

 a/b ≤ (b − 1)/b = 1 − 1/b < 1 − 1/10^b.

Since 1 − 1/10^b is a number of the form 1 − 1/10^n used in the definition above, every element of 1 is also an element of 0.999..., and, combined with the proof above that every element of 0.999... is also an element of 1, the sets 0.999... and 1 contain the same rational numbers, and are therefore the same set; that is, 0.999... = 1.
The definition of real numbers as Dedekind cuts was first published by Richard Dedekind in 1872.
The above approach to assigning a real number to each decimal expansion is due to an expository paper titled "Is 0.999... = 1?" by Fred Richman in Mathematics Magazine. Richman notes that taking Dedekind cuts in any dense subset of the rational numbers yields the same results; in particular, he uses decimal fractions, for which the proof is more immediate. He also notes that typically the definitions allow {x : x < 1} to be a cut but not {x : x ≤ 1} (or vice versa). A further modification of the procedure leads to a different structure where the two are not equal. Although it is consistent, many of the common rules of decimal arithmetic no longer hold; for example, the fraction 1/3 has no representation; see below.
Cauchy sequences
Another approach is to define a real number as the limit of a Cauchy sequence of rational numbers. This construction of the real numbers uses the ordering of rationals less directly. First, the distance between x and y is defined as the absolute value |x − y|, where the absolute value |z| is defined as the maximum of z and −z, thus never negative. Then the reals are defined to be the sequences of rationals that have the Cauchy sequence property using this distance. That is, in the sequence (x_0, x_1, x_2, ...), a mapping from natural numbers to rationals, for any positive rational δ there is an N such that |x_m − x_n| ≤ δ for all m, n > N; the distance between terms becomes smaller than any positive rational.
If (x_n) and (y_n) are two Cauchy sequences, then they are defined to be equal as real numbers if the sequence (x_n − y_n) has the limit 0. Truncations of the decimal number b_0.b_1b_2b_3... generate a sequence of rationals, which is Cauchy; this is taken to define the real value of the number. Thus in this formalism the task is to show that the sequence of rational numbers

 (1 − 0, 1 − 0.9, 1 − 0.99, 1 − 0.999, ...) = (1, 1/10, 1/100, 1/1000, ...)

has the limit 0. Considering the nth term of the sequence, for n ∈ ℕ, it must therefore be shown that

 lim_{n→∞} 1/10^n = 0.

This can be proved by the definition of a limit. So again, 0.999... = 1.
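The limit criterion can be checked concretely with exact rationals (a sketch; the function name is mine): for any positive rational tolerance, the difference sequence (1, 1/10, 1/100, ...) stays below it from some finite index on.

```python
# For any positive rational tolerance, find the first index at which the
# difference sequence 1/10^n has permanently entered the tolerance band
# around 0 -- the epsilon-N shape of the limit definition.
from fractions import Fraction

def terms_needed(tolerance):
    """Smallest n with 1/10^n < tolerance."""
    n = 0
    while Fraction(1, 10**n) >= tolerance:
        n += 1
    return n

print(terms_needed(Fraction(1, 1000000)))  # -> 7
print(terms_needed(Fraction(1, 2)))        # -> 1
```

Because the sequence is strictly decreasing, every later term also lies within the tolerance, which is exactly what "has the limit 0" requires.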
The definition of real numbers as Cauchy sequences was first published separately by Eduard Heine and Georg Cantor, also in 1872. The above approach to decimal expansions, including the proof that , closely follows Griffiths & Hilton's 1970 work A comprehensive textbook of classical mathematics: A contemporary interpretation.
Infinite decimal representation
Commonly in secondary schools' mathematics education, the real numbers are constructed by defining a number using an integer followed by a radix point and an infinite sequence written out as a string to represent the fractional part of any given real number. In this construction, the set of any combination of an integer and digits after the decimal point (or radix point in non-base-10 systems) is the set of real numbers. This construction can be rigorously shown to satisfy all of the real axioms after defining an equivalence relation over the set that defines 1 = 0.999... and, likewise, identifies any other nonzero decimal with only finitely many nonzero terms in the decimal string with its trailing-9s version. In other words, the equality 0.999... = 1 holding true is a necessary condition for strings of digits to behave as real numbers should.
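The trailing-9s identification can be carried out mechanically. The sketch below is our own illustration (the function name and the `'...'` input format are assumptions, not from the source); it rewrites a numeral ending in repeating 9s as its terminating counterpart:

```python
def canonical(s):
    """Rewrite a decimal numeral ending in repeating 9s (marked by '...')
    as its terminating counterpart, e.g. '0.24999...' -> '0.25'.
    Assumes an explicit integer part and a decimal point (our own format)."""
    if not s.endswith("9..."):
        return s                        # already terminating: leave unchanged
    body = s[:-3].rstrip("9")           # drop the block of repeating nines
    if body.endswith("."):              # all fractional digits were 9s:
        return str(int(body[:-1]) + 1)  # carry into the integer part
    # otherwise bump the last surviving fractional digit by one
    return body[:-1] + str(int(body[-1]) + 1)

assert canonical("0.999...") == "1"
assert canonical("0.24999...") == "0.25"
```

Picking the terminating form as canonical is exactly the quotient by the equivalence relation described above.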
Dense order
One of the notions that can resolve the issue is the requirement that real numbers be densely ordered. Dense ordering implies that if there is no new element strictly between two elements of the set, the two elements must be considered equal. Therefore, if 0.99999... were to be different from 1, there would have to be another real number in between them but there is none: a single digit cannot be changed in either of the two to obtain such a number.
Generalizations
The result that 0.999... = 1 generalizes readily in two ways. First, every nonzero number with a finite decimal notation (equivalently, endless trailing 0s) has a counterpart with trailing 9s. For example, 0.24999... equals 0.25, exactly as in the special case considered. These numbers are exactly the decimal fractions, and they are dense.
Second, a comparable theorem applies in each radix or base. For example, in base 2 (the binary numeral system) 0.111... equals 1, and in base 3 (the ternary numeral system) 0.222... equals 1. In general, any terminating base-b expression has a counterpart with repeated trailing digits equal to b − 1. Textbooks of real analysis are likely to skip the example of 0.999... and present one or both of these generalizations from the start.
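The geometric-series identity behind these examples can be verified in exact arithmetic; the helper name below is our own illustrative choice. In base b, the n-term partial sum of 0.(b−1)(b−1)... is exactly 1 − b⁻ⁿ:

```python
from fractions import Fraction

def repeating_top_digit(base, terms):
    """Partial sum of 0.ddd... in the given base, where d = base - 1."""
    d = base - 1
    return sum(Fraction(d, base**n) for n in range(1, terms + 1))

# In every base b the n-term partial sum equals exactly 1 - b**(-n),
# so the full repeating expansion equals 1.
for b in (2, 3, 10, 16):
    assert repeating_top_digit(b, 12) == 1 - Fraction(1, b**12)
```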
Alternative representations of 1 also occur in non-integer bases. For example, in the golden ratio base, the two standard representations are 1.000... and 0.101010..., and there are infinitely many more representations that include adjacent 1s. Generally, for almost all q between 1 and 2, there are uncountably many base-q expansions of 1. In contrast, there are still uncountably many q, including all natural numbers greater than 1, for which there is only one expansion of 1, other than the trivial 1.000.... This result was first obtained by Paul Erdős, Miklos Horváth, and István Joó around 1990. In 1998, Vilmos Komornik and Paola Loreti determined the smallest such base, the Komornik–Loreti constant q = 1.787231650.... In this base, 1 = 0.11010011...; the digits are given by the Thue–Morse sequence, which does not repeat.
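Assuming only the defining property stated above (1 equals the Thue–Morse expansion in base q), the Komornik–Loreti constant can be approximated by bisection; the bracketing interval and function names are our own illustrative choices:

```python
def thue_morse(k):
    """k-th Thue-Morse digit: parity of the number of 1s in binary k."""
    return bin(k).count("1") % 2

def expansion_of_one(q, terms=200):
    """Value of 0.t1 t2 t3 ... in base q, with Thue-Morse digits t_k."""
    return sum(thue_morse(k) / q**k for k in range(1, terms + 1))

# expansion_of_one is decreasing in q; bisect for the base where it equals 1.
lo, hi = 1.5, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if expansion_of_one(mid) > 1:
        lo = mid   # base too small: the expansion overshoots 1
    else:
        hi = mid
q = (lo + hi) / 2   # approximately 1.787231650..., the Komornik-Loreti constant
```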
A more far-reaching generalization addresses the most general positional numeral systems. They too have multiple representations, and in some sense, the difficulties are even worse. For example:
In the balanced ternary system, 1/2 = 0.111... = 1.TTT..., where T denotes the digit −1.
In the reverse factorial number system (using bases 2!, 3!, 4!, ... for positions after the decimal point), 1 = 0.1234....
Marko Petkovšek has proven that for any positional system that names all the real numbers, the set of reals with multiple representations is always dense. He calls the proof "an instructive exercise in elementary point-set topology"; it involves viewing sets of positional values as Stone spaces and noticing that their real representations are given by continuous functions.
Applications
One application of 0.999... as a representation of 1 occurs in elementary number theory. In 1802, H. Goodwyn published an observation on the appearance of 9s in the repeating-decimal representations of fractions whose denominators are certain prime numbers. Examples include:
1/7 = 0.142857142857... and 142 + 857 = 999.
1/73 = 0.0136986301369863... and 0136 + 9863 = 9999.
E. Midy proved a general result about such fractions, now called Midy's theorem, in 1836. The publication was obscure, and it is unclear whether his proof directly involved 0.999..., but at least one modern proof by William G. Leavitt does. It can be proved that if a decimal of the form 0.b1b2b3... is a positive integer, then it must be 0.999..., and this is then the source of the 9s in the theorem. Investigations in this direction can motivate such concepts as greatest common divisors, modular arithmetic, Fermat primes, order of group elements, and quadratic reciprocity.
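Midy's theorem can be checked directly for fractions such as 1/7; the `repetend` helper below is our own illustrative code, not taken from any of the cited proofs:

```python
def repetend(n, d):
    """Digits of the repeating block of n/d in decimal
    (assumes 0 < n < d and gcd(d, 10) == 1)."""
    digits, r = [], n % d
    for _ in range(d):           # the period is at most d - 1
        r *= 10
        digits.append(r // d)
        r %= d
        if r == n % d:
            break
    return digits

# Midy's theorem: for 1/p with p prime and an even period, the two halves
# of the repetend sum to a string of 9s, e.g. 142 + 857 = 999.
for p in (7, 73):
    digs = repetend(1, p)
    half = len(digs) // 2
    first = int("".join(map(str, digs[:half])))
    second = int("".join(map(str, digs[half:])))
    assert first + second == 10**half - 1
```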
Returning to real analysis, the base-3 analogue 0.222... = 1 plays a key role in the characterization of one of the simplest fractals, the middle-thirds Cantor set: a point in the unit interval lies in the Cantor set if and only if it can be represented in ternary using only the digits 0 and 2.
The nth digit of the representation reflects the position of the point in the nth stage of the construction. For example, the point 2/3 is given the usual representation of 0.2 or 0.2000..., since it lies to the right of the first deletion and to the left of every deletion thereafter. The point 1/3 is represented not as 0.1 but as 0.0222..., since it lies to the left of the first deletion and to the right of every deletion thereafter.
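The ternary criterion can be turned into a small membership test for rationals; this is an illustrative sketch (our own function, not from the source), where rewriting a terminal digit 1 as 0 followed by repeating 2s handles the endpoint cases:

```python
from fractions import Fraction

def in_cantor_set(x):
    """Test whether a rational x in [0, 1] lies in the middle-thirds Cantor
    set, i.e. has some ternary expansion avoiding the digit 1."""
    x = Fraction(x)
    seen = set()
    while x not in seen:          # remainders of a rational eventually cycle
        seen.add(x)
        x *= 3
        digit, x = divmod(x, 1)   # next ternary digit and remainder
        if digit == 1:
            # A digit 1 is only admissible when it ends the expansion,
            # since 0.1000... can be rewritten as 0.0222...
            return x == 0
    return True

assert in_cantor_set(Fraction(1, 3))       # 0.0222...
assert in_cantor_set(Fraction(2, 3))       # 0.2000...
assert in_cantor_set(Fraction(1, 4))       # 0.020202..., not an endpoint
assert not in_cantor_set(Fraction(1, 2))   # 0.111..., inside the first gap
```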
Repeating nines also turns up in yet another of Georg Cantor's works. They must be taken into account to construct a valid proof, applying his 1891 diagonal argument to decimal expansions, of the uncountability of the unit interval. Such a proof needs to be able to declare certain pairs of real numbers to be different based on their decimal expansions, so one needs to avoid pairs like 0.2 and 0.1999... A simple method represents all numbers with nonterminating expansions; the opposite method rules out repeating nines. A variant that may be closer to Cantor's original argument uses base 2, and by turning base-3 expansions into base-2 expansions, one can prove the uncountability of the Cantor set as well.
Skepticism in education
Students of mathematics often reject the equality of 0.999... and 1, for reasons ranging from their disparate appearance to deep misgivings over the limit concept and disagreements over the nature of infinitesimals. There are many common contributing factors to the confusion:
Students are often "mentally committed to the notion that a number can be represented in one and only one way by a decimal". Seeing two manifestly different decimals representing the same number appears to be a paradox, which is amplified by the appearance of the seemingly well-understood number 1.
Some students interpret "0.999..." (or similar notation) as a large but finite string of 9s, possibly with a variable, unspecified length. If they accept an infinite string of nines, they may still expect a last 9 "at infinity".
Intuition and ambiguous teaching lead students to think of the limit of a sequence as a kind of infinite process rather than a fixed value since a sequence need not reach its limit. Where students accept the difference between a sequence of numbers and its limit, they might read "0.999..." as meaning the sequence rather than its limit.
These ideas are mistaken in the context of the standard real numbers, although some may be valid in other number systems, either invented for their general mathematical utility or as instructive counterexamples to better understand 0.999...; see below.
Many of these explanations were found by David Tall, who has studied characteristics of teaching and cognition that lead to some of the misunderstandings he has encountered with his college students. Interviewing his students to determine why the vast majority initially rejected the equality, he found that "students continued to conceive of 0.999... as a sequence of numbers getting closer and closer to 1 and not a fixed value, because 'you haven't specified how many places there are' or 'it is the nearest possible decimal below 1'".
The elementary argument of multiplying 0.333... = 1/3 by 3 can convince reluctant students that 0.999... = 1. Still, when confronted with the conflict between their belief in the first equation and their disbelief in the second, some students either begin to disbelieve the first equation or simply become frustrated. Nor are more sophisticated methods foolproof: students who are fully capable of applying rigorous definitions may still fall back on intuitive images when they are surprised by a result in advanced mathematics, including 0.999.... For example, one real analysis student was able to prove that 0.333... = 1/3 using a supremum definition but then insisted that 0.999... < 1 based on her earlier understanding of long division. Others still can prove that 1/3 = 0.333..., but, upon being confronted by the fractional proof, insist that "logic" supersedes the mathematical calculations.
Joseph Mazur tells the tale of an otherwise brilliant calculus student of his who "challenged almost everything I said in class but never questioned his calculator", and who had come to believe that nine digits are all one needs to do mathematics, including calculating the square root of 23. The student remained uncomfortable with a limiting argument that 9.99... = 10, calling it a "wildly imagined infinite growing process".
As part of the APOS Theory of mathematical learning, Ed Dubinsky and his collaborators propose that students who conceive of 0.999... as a finite, indeterminate string with an infinitely small distance from 1 have "not yet constructed a complete process conception of the infinite decimal". Other students who have a complete process conception of 0.999... may not yet be able to "encapsulate" that process into an "object conception", like the object conception they have of 1, and so they view the process 0.999... and the object 1 as incompatible. They also link this mental ability of encapsulation to viewing 1/3 as a number in its own right and to dealing with the set of natural numbers as a whole.
Cultural phenomenon
With the rise of the Internet, debates about 0.999... have become commonplace on newsgroups and message boards, including many that nominally have little to do with mathematics. In the newsgroup sci.math in the 1990s, arguing over 0.999... became a "popular sport", and it was one of the questions answered in its FAQ. The FAQ briefly covers 1/3, multiplication by 10, and limits, and alludes to Cauchy sequences as well.
A 2003 edition of the general-interest newspaper column The Straight Dope discusses 0.999... via 1/3 and limits, and addresses the misconceptions surrounding it.
A Slate article reports that the concept of 0.999... is "hotly disputed on websites ranging from World of Warcraft message boards to Ayn Rand forums".
0.999... also features in mathematical jokes, such as: "Q: How many mathematicians does it take to screw in a lightbulb? A: 0.999999...."
The fact that 0.999... is equal to 1 has been compared to Zeno's paradox of the runner. The runner paradox can be mathematically modeled and then, like 0.999..., resolved using a geometric series. However, it is not clear whether this mathematical treatment addresses the underlying metaphysical issues Zeno was exploring.
In alternative number systems
Although the real numbers form an extremely useful number system, the decision to interpret the notation "0.999..." as naming a real number is ultimately a convention, and Timothy Gowers argues in Mathematics: A Very Short Introduction that the resulting identity 0.999... = 1 is a convention as well.
Infinitesimals
Some proofs of the equality 0.999... = 1 rely on the Archimedean property of the real numbers: that there are no nonzero infinitesimals. Specifically, the difference 1 − 0.999... must be smaller than any positive rational number, so it must be an infinitesimal; but since the reals do not contain nonzero infinitesimals, the difference is zero, and therefore the two values are the same.
However, there are mathematically coherent ordered algebraic structures, including various alternatives to the real numbers, which are non-Archimedean. Non-standard analysis provides a number system with a full array of infinitesimals (and their inverses). A. H. Lightstone developed a decimal expansion for hyperreal numbers in 1972. Lightstone shows how to associate each number with a sequence of digits,
indexed by the hypernatural numbers. While he does not directly discuss 0.999..., he shows the real number 1/3 is represented by 0.333...;...333..., which is a consequence of the transfer principle. As a consequence the number 0.999...;...999... = 1. With this type of decimal representation, not every expansion represents a number. In particular "0.333...;...000..." and "0.999...;...000..." do not correspond to any number.
The standard definition of the number 0.999... is the limit of the sequence 0.9, 0.99, 0.999, .... A different definition involves an ultralimit, i.e., the equivalence class [(0.9, 0.99, 0.999, ...)] of this sequence in the ultrapower construction, which is a number that falls short of 1 by an infinitesimal amount. More generally, the hyperreal number uH = 0.999...;...999, with last digit 9 at infinite hypernatural rank H, satisfies a strict inequality uH < 1. Accordingly, an alternative interpretation for "zero followed by infinitely many 9s" could be the number uH.
All such interpretations of "0.999..." are infinitely close to 1. Ian Stewart characterizes this interpretation as an "entirely reasonable" way to rigorously justify the intuition that "there's a little bit missing" from 1 in 0.999.... Along with Katz & Katz, Robert Ely also questions the assumption that students' ideas about 0.999... < 1 are erroneous intuitions about the real numbers, interpreting them rather as nonstandard intuitions that could be valuable in the learning of calculus.
Hackenbush
Combinatorial game theory provides a generalized concept of number that encompasses the real numbers and much more besides. For example, in 1974, Elwyn Berlekamp described a correspondence between strings of red and blue segments in Hackenbush and binary expansions of real numbers, motivated by the idea of data compression. For example, the value of the Hackenbush string LRRLRLRL... is 0.010101...2 = 1/3. However, the value of LRLLL... (corresponding to 0.111...2) is infinitesimally less than 1. The difference between the two is the surreal number 1/2^ω, where ω is the first infinite ordinal; the relevant game is LRRRR... or 0.000...1 in binary.
This is true of the binary expansions of many rational numbers, where the values of the numbers are equal but the corresponding binary tree paths are different. For example, 0.10111...2 = 0.11000...2, which are both equal to 3/4, but the first representation corresponds to the binary tree path LRLRLLL..., while the second corresponds to the different path LRLLRRR....
Revisiting subtraction
Another manner in which the proofs might be undermined is if 1 − 0.999... simply does not exist because subtraction is not always possible. Mathematical structures with an addition operation but not a subtraction operation include commutative semigroups, commutative monoids, and semirings. Richman considers two such systems, designed so that 0.999... < 1.
First, Richman defines a nonnegative decimal number to be a literal decimal expansion. He defines the lexicographical order and an addition operation, noting that 0.999... < 1 simply because 0 < 1 in the ones place, but for any nonterminating x, one has 0.999... + x = 1 + x. So one peculiarity of the decimal numbers is that addition cannot always be canceled; another is that no decimal number corresponds to 1/3. After defining multiplication, the decimal numbers form a positive, totally ordered, commutative semiring.
In the process of defining multiplication, Richman also defines another system he calls "cut D", which is the set of Dedekind cuts of decimal fractions. Ordinarily, this definition leads to the real numbers, but for a decimal fraction d he allows both the cut (−∞, d) and the "principal cut" (−∞, d]. The result is that the real numbers are "living uneasily together with" the decimal fractions. Again 0.999... < 1. There are no positive infinitesimals in cut D, but there is "a sort of negative infinitesimal", 0−, which has no decimal expansion. He concludes that 0.999... = 1 + 0−, while the equation "0.999... + x = 1" has no solution.
p-adic numbers
When asked about 0.999..., novices often believe there should be a "final 9", believing 1 − 0.999... to be a positive number which they write as "0.000...1". Whether or not that makes sense, the intuitive goal is clear: adding a 1 to the final 9 in 0.999... would carry all the 9s into 0s and leave a 1 in the ones place. Among other reasons, this idea fails because there is no "final 9" in 0.999.... However, there is a system that contains an infinite string of 9s including a last 9.
The p-adic numbers are an alternative number system of interest in number theory. Like the real numbers, the p-adic numbers can be built from the rational numbers via Cauchy sequences; the construction uses a different metric in which 0 is closer to 10, and much closer to 10^10, than it is to 1. The p-adic numbers form a field for prime p and a ring for other p, including 10. So arithmetic can be performed in the p-adics, and there are no infinitesimals.
In the 10-adic numbers, the analogues of decimal expansions run to the left. The 10-adic expansion ...999 does have a last 9, and it does not have a first 9. One can add 1 to the ones place, and it leaves behind only 0s after carrying through: 1 + ...999 = ...000 = 0, and so ...999 = −1. Another derivation uses a geometric series. The infinite series implied by "...999" does not converge in the real numbers, but it converges in the 10-adics, and so one can re-use the familiar formula:
...999 = 9 + 9(10) + 9(10)^2 + 9(10)^3 + ... = 9/(1 − 10) = −1.
Compare with the series in the section above. A third derivation was invented by a seventh-grader who was doubtful over her teacher's limiting argument that 9.99... = 10 but was inspired to take the multiply-by-10 proof above in the opposite direction: if x = ...999, then 10x = ...990, so 10x = x − 9, hence x = −1 again.
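Since a 10-adic equality amounts to a congruence modulo every power of 10, both the carrying argument and the seventh-grader's derivation can be checked at finite precision; this is an illustrative sketch, not from the source:

```python
# The 10-adic number ...999 is the limit of 9, 99, 999, ...; check its
# behaviour modulo successive powers of 10.
for n in range(1, 12):
    nines = 10**n - 1                     # the n-digit numeral 99...9
    assert (nines + 1) % 10**n == 0       # adding 1 carries to ...000
    assert nines % 10**n == -1 % 10**n    # so ...999 behaves as -1

# The multiply-by-10 derivation, checked modulo 10**12:
mod = 10**12
x = mod - 1                               # stands in for ...999
assert (10 * x) % mod == (x - 9) % mod    # 10x = x - 9, hence x = -1
```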
As a final extension, since 0.999... = 1 (in the reals) and ...999 = −1 (in the 10-adics), then by "blind faith and unabashed juggling of symbols" one may add the two equations and arrive at ...999.999... = 0. This equation does not make sense either as a 10-adic expansion or an ordinary decimal expansion, but it turns out to be meaningful and true in the doubly infinite decimal expansion of the 10-adic solenoid, with eventually repeating left ends to represent the real numbers and eventually repeating right ends to represent the 10-adic numbers.
See also
Finitism
Informal mathematics
Notes
References
Sources
This introductory textbook on dynamical systems is aimed at undergraduate and beginning graduate students. (p. ix)
A transition from calculus to advanced analysis, Mathematical analysis is intended to be "honest, rigorous, up to date, and, at the same time, not too pedantic". (pref.) Apostol's development of the real numbers uses the least upper bound axiom and introduces infinite decimals two pages later. (pp. 9–11)
This book is intended as an introduction to real analysis aimed at the upper-undergraduate and graduate level. (pp. xi–xii)
This text aims to be "an accessible, reasonably paced textbook that deals with the fundamental concepts and techniques of real analysis". Its development of the real numbers relies on the supremum axiom. (pp. vii–viii)
This book presents an analysis of paradoxes and fallacies as a tool for exploring its central topic, "the rather tenuous relationship between mathematical reality and physical reality". It assumes first-year high-school algebra; further mathematics is developed in the book, including geometric series in Chapter 2. Although 0.999... is not one of the paradoxes to be fully treated, it is briefly mentioned during a development of Cantor's diagonal method. (pp. ix-xi, 119)
This article is a field study involving a student who developed a Leibnizian-style theory of infinitesimals to help her understand calculus, and in particular to account for 0.999... falling short of 1 by an infinitesimal.
An introductory undergraduate textbook in set theory that "presupposes no specific background". It is written to accommodate a course focusing on axiomatic set theory or on the construction of number systems; the axiomatic material is marked such that it may be de-emphasized. (pp. xi–xii)
This book grew out of a course for Birmingham-area grammar school mathematics teachers. The course was intended to convey a university-level perspective on school mathematics, and the book is aimed at students "who have reached roughly the level of completing one year of specialist mathematical study at a university". The real numbers are constructed in Chapter 24, "perhaps the most difficult chapter in the entire book", although the authors ascribe much of the difficulty to their use of ideal theory, which is not reproduced here. (pp. vii, xiv)
Mankiewicz seeks to represent "the history of mathematics in an accessible style" by combining visual and qualitative aspects of mathematics, mathematicians' writings, and historical sketches. (p. 8)
A topical rather than chronological review of infinity, this book is "intended for the general reader" but "told from the point of view of a mathematician". On the dilemma of rigor versus readable language, Maor comments, "I hope I have succeeded in properly addressing this problem." (pp. x-xiii)
Intended as an introduction "at the senior or first-year graduate level" with no formal prerequisites: "I do not even assume the reader knows much set theory." (p. xi) Munkres's treatment of the reals is axiomatic; he claims of bare-hands constructions, "This way of approaching the subject takes a good deal of time and effort and is of greater logical than mathematical interest." (p. 30)
This book aims to "present a theoretical foundation of analysis that is suitable for students who have completed a standard course in calculus". (p. vii) At the end of Chapter 2, the authors assume as an axiom for the real numbers that bounded, nondecreasing sequences converge, later proving the nested intervals theorem and the least upper bound property. (pp. 56–64) Decimal expansions appear in Appendix 3, "Expansions of real numbers in any base". (pp. 503–507)
While assuming familiarity with the rational numbers, Pugh introduces Dedekind cuts as soon as possible, saying of the axiomatic treatment, "This is something of a fraud, considering that the entire structure of analysis is built on the real number system." (p. 10) After proving the least upper bound property and some allied facts, cuts are not used in the rest of the book.
A free HTML preprint is available. Note: the journal article contains material and wording not found in the preprint.
This book gives a "careful rigorous" introduction to real analysis. It gives the axioms of the real numbers and then constructs them (pp. 27–31) as infinite decimals with 0.999... = 1 as part of the definition.
A textbook for an advanced undergraduate course. "Experience has convinced me that it is pedagogically unsound (though logically correct) to start off with the construction of the real numbers from the rational ones. At the beginning, most students simply fail to appreciate the need for doing this. Accordingly, the real number system is introduced as an ordered field with the least-upper-bound property, and a few interesting applications of this property are quickly made. However, Dedekind's construction is not omitted. It is now in an Appendix to Chapter 1, where it may be studied and enjoyed whenever the time is ripe." (p. ix)
This book aims to "assist students in discovering calculus" and "to foster conceptual understanding". (p. v) It omits proofs of the foundations of calculus.
Further reading
External links
.999999... = 1? from Cut-the-Knot
Why does 0.9999... = 1 ?
Proof of the equality based on arithmetic from Math Central
David Tall's research on mathematics cognition
What is so wrong with thinking of real numbers as infinite decimals?
Theorem 0.999... on Metamath
1 (number)
Mathematical paradoxes
Real numbers
Real analysis
Articles containing proofs
Kan extension
Kan extensions are universal constructs in category theory, a branch of mathematics. They are closely related to adjoints, but are also related to limits and ends. They are named after Daniel M. Kan, who constructed certain (Kan) extensions using limits in 1960.
An early use of (what is now known as) a Kan extension from 1956 was in homological algebra to compute derived functors.
In Categories for the Working Mathematician Saunders Mac Lane titled a section "All Concepts Are Kan Extensions", and went on to write that
The notion of Kan extensions subsumes all the other fundamental concepts of category theory.
Kan extensions generalize the notion of extending a function defined on a subset to a function defined on the whole set. The definition, not surprisingly, is at a high level of abstraction. When specialised to posets, it becomes a relatively familiar type of question on constrained optimization.
Definition
A Kan extension proceeds from the data of three categories
A, B, C
and two functors
X : A → C and F : A → B,
and comes in two varieties: the "left" Kan extension and the "right" Kan extension of X along F.
Abstractly, the functor F gives a pullback map F* : Funct(B, C) → Funct(A, C), given by precomposition with F. When they exist, the left and right adjoints of F* applied to X give the left and right Kan extensions. Spelling out the definition of adjoints, we get the following definitions:
The right Kan extension amounts to finding the dashed arrow and the natural transformation in the following diagram:
Formally, the right Kan extension of X along F consists of a functor R : B → C and a natural transformation ε : RF → X that is couniversal with respect to the specification, in the sense that for any functor M : B → C and natural transformation μ : MF → X, a unique natural transformation δ : M → R is defined and fits into a commutative diagram:
where δF is the natural transformation with (δF)a = δFa : MFa → RFa for any object a of A.
The functor R is often written Ran_F X.
As with the other universal constructs in category theory, the "left" version of the Kan extension is dual to the "right" one and is obtained by replacing all categories by their opposites.
The effect of this on the description above is merely to reverse the direction of the natural transformations.
(Recall that a natural transformation between the functors consists of having an arrow for every object of , satisfying a "naturality" property. When we pass to the opposite categories, the source and target of are swapped, causing to act in the opposite direction).
This gives rise to the alternate description: the left Kan extension of X along F consists of a functor L : B → C and a natural transformation η : X → LF that are universal with respect to this specification, in the sense that for any other functor M : B → C and natural transformation α : X → MF, a unique natural transformation σ : L → M exists and fits into a commutative diagram:
where σF is the natural transformation with (σF)a = σFa for any object a of A.
The functor L is often written Lan_F X.
The use of the word "the" (as in "the left Kan extension") is justified by the fact that, as with all universal constructions, if the object defined exists, then it is unique up to unique isomorphism. In this case, that means that (for left Kan extensions) if L1, L2 are two left Kan extensions of X along F, and η1, η2 are the corresponding transformations, then there exists a unique isomorphism of functors φ : L1 → L2 such that the second diagram above commutes. Likewise for right Kan extensions.
Properties
Kan extensions as (co)limits
Suppose X : A → C and F : A → B are two functors. If A is small and C is cocomplete, then there exists a left Kan extension Lan_F X of X along F, defined at each object b of B by
(Lan_F X)(b) = colim_{(a, Fa → b)} X(a),
where the colimit is taken over the comma category (F ↓ b). Dually, if A is small and C is complete, then right Kan extensions along F exist, and can be computed as the limit
(Ran_F X)(b) = lim_{(a, b → Fa)} X(a)
over the comma category (b ↓ F).
Kan extensions as (co)ends
Suppose X : A → C and F : A → B are two functors such that for all objects a and a′ of A and all objects b of B, the copowers Hom_B(Fa, b) · X(a′) exist in C. Then the functor X has a left Kan extension Lan_F X along F, which is such that, for every object b of B,
(Lan_F X)(b) = ∫^a Hom_B(Fa, b) · X(a)
when the above coend exists for every object b of B.
Dually, right Kan extensions can be computed by the end formula
(Ran_F X)(b) = ∫_a X(a)^{Hom_B(b, Fa)}.
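As a sanity check on the coend formula (a standard specialization, not stated in this article), taking F to be the identity functor reduces it to the co-Yoneda lemma, so that the left Kan extension of X along the identity is X itself:

```latex
(\mathrm{Lan}_{\mathrm{Id}} X)(b) \;\cong\; \int^{a} \mathrm{Hom}_A(a, b) \cdot X(a) \;\cong\; X(b).
```

This is one way to see that Kan extension along the identity changes nothing, as the universal property also demands.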
Limits as Kan extensions
The limit of a functor F : A → C can be expressed as a Kan extension by
lim F = Ran_E F,
where E is the unique functor from A to 1 (the category with one object and one arrow, a terminal object in Cat). The colimit of F can be expressed similarly by
colim F = Lan_E F.
Adjoints as Kan extensions
A functor G possesses a left adjoint if and only if the right Kan extension of the identity functor along G exists and is preserved by G. In this case, a left adjoint is given by Ran_G(Id) and this Kan extension is even preserved by any functor whatsoever, i.e. is an absolute Kan extension.
Dually, a right adjoint exists if and only if the left Kan extension of the identity along G exists and is preserved by G.
Applications
The codensity monad of a functor G is the right Kan extension of G along itself.
References
External links
Model independent proof of colimit formula for left Kan extensions
Kan extension as a limit: an example
Adjoint functors
Category theory
Siblicide
Siblicide (attributed by behavioural ecologist Doug Mock to Barbara M. Braun) is the killing of an infant individual by its close relatives (full or half siblings). It may occur directly between siblings or be mediated by the parents, and is driven by the direct fitness benefits to the perpetrator and sometimes its parents. Siblicide has mainly, but not only, been observed in birds. (The word is also used as a unifying term for fratricide and sororicide in the human species; unlike these more specific terms, it leaves the sex of the victim unspecified.)
Siblicidal behavior can be either obligate or facultative. Obligate siblicide is when a sibling almost always ends up being killed. Facultative siblicide means that siblicide may or may not occur, based on environmental conditions. In birds, obligate siblicidal behavior results in the older chick killing the other chick(s). In facultative siblicidal animals, fighting is frequent, but does not always lead to death of a sibling; this type of behavior often exists in patterns for different species. For instance, in the blue-footed booby, a sibling may be hit by a nest mate only once a day for a couple of weeks and then attacked at random, leading to its death. More birds are facultatively siblicidal than obligatory siblicidal. This is perhaps because siblicide takes a great amount of energy and is not always advantageous.
Siblicide generally only occurs when resources, specifically food sources, are scarce. Siblicide is advantageous for the surviving offspring because they have now eliminated most or all of their competition. It is also somewhat advantageous for the parents because the surviving offspring most likely have the strongest genes, and therefore likely have the highest fitness.
Some parents encourage siblicide, while others prevent it. If resources are scarce, the parents may encourage siblicide because only some offspring will survive anyway, so they want the strongest offspring to survive. By letting the offspring kill each other, it saves the parents time and energy that would be wasted on feeding offspring that most likely would not survive anyway.
Models
The insurance egg hypothesis (IEH) has become the most widely supported explanation for avian siblicide, as well as for the overproduction of eggs in siblicidal birds. The IEH states that the extra egg(s) produced by the parent serve as an "insurance policy" in case the first egg fails (either it does not hatch or the chick dies soon after hatching). When both eggs hatch successfully, the second chick is the so-called marginal offspring; it is marginal in the sense that it can add to or subtract from the evolutionary success of its family members. It can increase reproductive and evolutionary success in two primary ways. Firstly, it represents an extra unit of parental success if it survives along with its siblings.
In the context of Hamilton's inclusive fitness theory, the marginal chick increases the total number of offspring successfully produced by the parent and therefore adds to the gene pool that the parent bird passes to the next generation. Secondly, it can serve as a replacement for any of its siblings that do not hatch or die prematurely.
Inclusive fitness is defined as an animal's individual reproductive success, plus the positive and/or negative effects that animal has on its sibling's reproductive success, multiplied by the animal's degree of kinship. In instances of siblicide, the victim is usually the youngest sibling. This sibling's reproductive value can be measured by how much it enhances or detracts from the success of other siblings, therefore this individual is considered to be marginal. The marginal sibling can act as an additional element of parental success if it, as well as its siblings, survive. If an older sibling happens to die unexpectedly, the marginal sibling is there to take its place; this acts as insurance against the death of another sibling, which depends on the likelihood of the older sibling dying.
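The inclusive-fitness bookkeeping just described can be made concrete with a toy calculation; every number below is invented purely for illustration, and the function is our own sketch, not a published model:

```python
def inclusive_fitness(own_success, effects_on_relatives):
    """Direct fitness plus relatedness-weighted effects on relatives.
    effects_on_relatives: list of (effect on relative's success,
    coefficient of relatedness) pairs."""
    return own_success + sum(e * r for e, r in effects_on_relatives)

# A chick that kills its full sibling (relatedness 1/2): suppose its own
# expected success rises from 0.5 to 0.9 while the sibling's drops by 0.5.
no_siblicide = inclusive_fitness(0.5, [(0.0, 0.5)])
siblicide = inclusive_fitness(0.9, [(-0.5, 0.5)])

# With these made-up numbers the direct gain outweighs the weighted loss,
# so siblicide would be favoured; with scarcer gains it would not be.
assert siblicide > no_siblicide
```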
Parent–offspring conflict is a theory which states that offspring can take actions to advance their own fitness while decreasing the fitness of their parents and that parents can increase their own fitness while simultaneously decreasing the fitness of their offspring. This is one of the driving forces of siblicide because it increases the fitness of the offspring by decreasing the amount of competition they have. Parents may either discourage or accept siblicide, depending on whether it increases the probability of their offspring surviving to reproduce.
Mathematical representation
The cost and effect siblicide has on a brood's reproductive success can be expressed algebraically. Let x be some measure of the total parental care or parental investment (PI) in the entire brood, with an absolute maximum possible value M (hence parental effort constrained to 0 ≤ x ≤ M). Parents investing x units of care in the current batch of offspring can expect a future reproductive success f(x) given by
f(x) = F,  for x = 0
f(x) = F (1 - (x/M)^s),  for 0 < x < M
f(x) = 0,  for x = M
where F is the parents' future reproductive success when they make no reproductive attempt (reproduction postponed to the next season). The constant s is a shape parameter that determines the relationship between parental investment and the cost of reproduction.
The equation models the risk / cost to the parent's own survival into the next breeding season, given the extra exertion to protect and provide food for their young;
it indicates that as parental care increases, the future reproductive success of the parent decreases. The parents' future reproductive success is modeled as an exhaustible asset, which drops to zero (no possibility of the parents breeding again later) if they provide self-sacrificial care (x = M), whereas the parents' own future prospects remain the same, or nearly the same, if they provide no care, or very little care (x ≈ 0).
The probability P(x) that the offspring thrive to join the breeding population after receiving x units of parental care is
P(x) = 0,  for 0 ≤ x ≤ m
P(x) = 1 - m/x,  for x > m
where m is the minimum amount of parental care required for the season's offspring to have any chance of growing to themselves become breeding adults.
The relation indicates that with inadequate care, or with merely adequate care (x ≤ m), the whole brood will surely fail to survive to become reproducing adults, but that with more than adequate care (x > m) the probability of the offspring living and breeding in the next season rises (only becoming certain with a hypothetically "infinite" amount of parental care, x → ∞).
Here m is the minimum amount of effort required from the parents to give their offspring any non-zero chance of their brood / litter maturing to themselves become breeding adults.
If m is close to M, then the parents just barely have a chance of producing any offspring, and have only one chance to breed in their lifetime, like many seasonal insects. If m is much smaller than M, then the parents might raise several successful offspring while still themselves having a fair chance of breeding again; in that case, investing only slightly more than m would represent a minimalist strategy, where the parents spend little effort and the underfed offspring just barely have any chance of survival, but the parents conserve their own chance of breeding again later. At the other extreme, x = M would represent a parental "go for broke" strategy, where the parents will be unable to breed any more, but ensure maximal brood survival (e.g. salmon or octopuses laying myriad eggs, with the parents always dying soon after they breed). There is some kind of middle ground, where the parents raise as many offspring as possible, with some risk to their own future, but not so much that they completely squander their own chance of breeding again.
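The trade-off described above can be explored numerically. In the sketch below, the specific functional forms are assumptions chosen only to match the qualitative description (future success F at zero care, falling to 0 at the self-sacrificial maximum; brood survival zero up to the minimum care and rising toward 1 thereafter):

```python
def future_success(x, F=1.0, M=10.0, s=2.0):
    # Parents' expected future reproductive success after investing x units of
    # care (0 <= x <= M): equals F with no care, falls to 0 at the
    # self-sacrificial maximum M; s shapes how steeply the cost accrues.
    return F * (1.0 - (x / M) ** s)

def brood_survival(x, m=2.0):
    # Probability the brood survives to join the breeding population: zero at
    # or below the minimum care m, rising toward 1 as care grows.
    return 0.0 if x <= m else 1.0 - m / x

# Scan investment levels to locate the "middle ground" strategy that balances
# current brood survival against the parents' own future breeding prospects.
candidates = [i * 0.1 for i in range(101)]
value, best_x = max((future_success(x) + brood_survival(x), x) for x in candidates)
```

With these illustrative parameters the optimum lands at an intermediate investment (around x ≈ 4.6), neither the minimalist strategy near m nor the go-for-broke extreme at M.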
Examples
In birds
Cattle egrets, Bubulcus ibis, exhibit asynchronous hatching and androgen loading in the first two eggs of their normal three-egg clutch. This results in older chicks being more aggressive and having a developmental head start. If food is scarce, the third chick often dies or is killed by the larger siblings, and parental effort is then distributed between the remaining chicks, which are hence more likely to survive to reproduce. The extra "excess" egg is possibly laid either to exploit the possibility of elevated food abundance (as seen in the blue-footed booby, Sula nebouxii) or to insure against the chance of sterility in one egg. This is suggested by studies of the common grackle, Quiscalus quiscula, and the masked booby, Sula dactylatra.
The theory of kin selection describes a genetically mediated altruistic response between closely related individuals, whereby the fitness benefit conferred by the altruist on the recipient outweighs the cost to itself or to the sibling/parent group. That such sacrifice occurs indicates an evolutionary tendency in some taxa toward improved vertical gene transmission within families, or toward a higher percentage of the family unit reaching reproductive age in a resource-limited environment.
The closely related masked and Nazca boobies are both obligately siblicidal species, while the blue-footed booby is a facultatively siblicidal species. In a facultatively siblicidal species, aggression occurs between siblings but is not always lethal, whereas in an obligately siblicidal species, aggression between siblings always leads to the death of one of the offspring. All three species have an average brood size of two eggs, which are laid within approximately four days of each other. In the few days before the second egg hatches, the first-born chick, known as the senior chick or A-chick, enjoys a period of growth and development during which it has full access to resources provided by the parent bird. Therefore, when the junior chick (B-chick) hatches, there is a significant disparity in size and strength between it and its older sibling.
In these three booby species, hatching order indicates chick hierarchy in the nest. The A-chick is dominant to the B-chick, which in turn is dominant to the C-chick, etc. (when there are more than two chicks per brood). Masked booby and Nazca booby dominant A-chicks always begin pecking their younger sibling(s) as soon as they hatch; moreover, assuming it is healthy, the A-chick usually pecks its younger sibling to death or pushes it out of the nest scrape within the first two days that the junior chick is alive. Blue-footed booby A-chicks also express their dominance by pecking their younger sibling. However, unlike the obligately siblicidal masked and Nazca booby chicks, their behavior is not always lethal. A study by Lougheed and Anderson (1999) reveals that blue-footed booby senior chicks only kill their siblings in times of food shortage. Furthermore, even when junior chicks are killed, it does not happen immediately. According to Anderson, the average age of death of the junior chick in a masked booby brood is 1.8 days, while the average age of death of the junior chick in a blue-footed booby brood may be as high as 18 days. The difference in age of death of the junior chick in each booby species is indicative of the type of siblicide that the species practices. Facultatively siblicidal blue-footed booby A-chicks only kill their nest mate(s) when necessary. Obligately siblicidal masked and Nazca booby A-chicks kill their sibling regardless of whether resources are plentiful; in other words, siblicidal behavior occurs independently of environmental factors.
Blue-footed boobies are less likely to commit siblicide and if they do, they commit it later after hatching than masked boobies. In a study, the chicks of blue-footed and masked boobies were switched to see if the rates of siblicide would be affected by the foster parents. It turns out that the masked boobies that were placed under the care of blue-footed booby parents committed siblicide less often than they would normally. Similarly, the blue-footed booby chicks placed with the masked booby parents committed siblicide more often than they normally did, indicating that parental intervention also affects the offspring's behavior.
In another experiment which tested the effect of a synchronous brood on siblicide, three groups were created: one in which all the eggs were synchronous, one in which the eggs hatched asynchronously, and one in which asynchronous hatching was exaggerated. It was found that the synchronous brood fought more, was less likely to survive than the control group, and resulted in lower parental efficiency. The exaggerated asynchronous brood also had a lower survivorship rate than the control brood and forced parents to bring more food to the nest each day, even though not as many offspring survived.
In other animals
Siblicide (brood reduction) in spotted hyenas (Crocuta crocuta) resulted in the surviving cubs achieving a long-term growth rate similar to that of singletons, and thus significantly increased their expected survival. The incidence of siblicide increased as the average cohort growth rate declined. When both cubs were alive, total maternal input in siblicidal litters was significantly lower than in non-siblicidal litters. Once siblicide had occurred, the growth rates of siblicide survivors substantially increased, indicating that mothers do not reduce their maternal input after siblicide has occurred. Facultative siblicide can thus evolve when the fitness benefits the dominant offspring gains from the removal of a sibling exceed the inclusive-fitness costs it incurs through that sibling's death.
Some mammals sometimes commit siblicide to gain a larger portion of the parent's care. In spotted hyenas, cubs of the same sex exhibit siblicide more often than male-female twins. Sex ratios may be manipulated in this way, and the dominant status of a female and the transmission of her genes may be ensured through a sole surviving son or daughter, which receives much more parental nursing and faces decreased sexual competition.
Siblicidal "survival of the fittest" is also exhibited in parasitic wasps, which lay multiple eggs in a host, after which the strongest larva kills its rival siblings. Another example occurs when mourning cloak larvae eat unhatched eggs.
In sand tiger sharks, the first embryo to hatch from its egg capsule kills and consumes its younger siblings while still in the womb.
In humans
A form of siblicide can also be seen in humans, between twins in the mother's womb. One twin may grow to an average weight, while the other is underweight, the result of one twin taking more nutrients from the mother than the other. Identical twins may also develop twin-to-twin transfusion syndrome (TTTS), in which the twins share the same placenta and blood and nutrients can move between them. The twins may also suffer from intrauterine growth restriction (IUGR), meaning that there is not enough room for both of the twins to grow. All of these factors can limit the growth of one of the twins while promoting the growth of the other. Even if neither twin dies from these factors, their health may be compromised, leading to complications after birth.
Siblicide in humans can also manifest itself in the form of murder, though this type of killing is rarer than other types. Genetic relatedness may be an important moderator of conflict and homicide among family members, including siblings. Siblings may be less likely to kill a full sibling because doing so would decrease their own inclusive fitness: the cost of killing a full sibling, who shares 50% of the killer's genes, is much higher than the fitness cost associated with the death of a sibling-in-law, who shares none. Siblicide was found to be more common in early to middle adulthood than in adolescence. However, when the victim and killer were of the same sex, the killer tended to be the younger party; when incidents occurred at younger ages, the older individual was more likely to be the killer.
See also
Fratricide, the killing of a brother
Infanticide (zoology), a related behaviour
Intrauterine cannibalism
Nazca booby (displays obligate siblicide)
Parent–offspring conflict
Sibling abuse
Sibling rivalry
Sororicide, the killing of a sister
References
Further reading
Killings by type
Fratricides
Homicide
Selection
Sibling
Sibling rivalry
Sociobiology
Sororicides | Siblicide | Biology | 3,425 |
3,569,858 | https://en.wikipedia.org/wiki/Motif%20%28visual%20arts%29 | In art and iconography, a motif is an element of an image. Motifs can occur both in figurative and narrative art, and in ornament and geometrical art. A motif may be repeated in a pattern or design, often many times, or may just occur once in a work.
A motif may be an element in the iconography of a particular subject or type of subject that is seen in other works, or may form the main subject, as the Master of Animals motif in ancient art typically does. The related motif of confronted animals is often seen alone, but may also be repeated, for example in Byzantine silk and in other ancient textiles. Where the main subject of an artistic work - such as a painting - is a specific person, group, or moment in a narrative, that should be referred to as the "subject" of the work, not a motif, though the same thing may be a "motif" when part of another subject, or part of a work of decorative art - such as a painting on a vase.
Ornamental or decorative art can usually be analysed into a number of different elements, which can be called motifs. These may often, as in textile art, be repeated many times in a pattern. Important examples in Western art include acanthus, egg and dart, and various types of scrollwork.
Some examples
Geometric, typically repeated: Meander, palmette, rosette, gul in Oriental rugs, acanthus, egg and dart, Bead and reel, Pakudos, Swastika, Adinkra symbols.
Figurative: Master of Animals, confronted animals, velificatio, Death and the Maiden, Three hares, Sheela na gig, puer mingens. In the Nativity of Jesus in art, the detail of showing Saint Joseph as asleep, which was common in medieval depictions, can be regarded as a "motif".
Many designs in Islamic culture are motifs, including those of the sun, moon, animals such as horses and lions, flowers, and landscapes. In kilim flatwoven carpets, motifs such as the hands-on-hips elibelinde are woven in to the design to express the hopes and concerns of the weavers: the elibelinde symbolises the female principle and fertility, including the desire for children.
Pennsylvania Dutch hex signs are a familiar type of motif in the eastern portions of the United States. Their circular and symmetric design, and their use of brightly colored patterns from nature, such as stars, compass roses, doves, hearts, tulips, leaves, and feathers have made them quite popular.
The idea of a motif has come to be used more broadly in discussing literature and other narrative arts, for an element in the story that represents a theme.
Gallery
See also
Three hares
Notes
Further reading
Hoffman, Richard. Decorative Flower and Leaf Designs. Dover Publications (1991).
Jones, Owen. The Grammar of Ornament. Dover Publications, Revised edition (1987).
Welch, Patricia Bjaaland. Chinese Art: A Guide to Motifs and Visual Imagery. Tuttle Publishing (2008).
External links
Visual motifs (essay) Theater of Drawing
Decorative arts
Iconography | Motif (visual arts) | Mathematics | 650 |
286,322 | https://en.wikipedia.org/wiki/Crash%20test | A crash test is a form of destructive testing usually performed in order to ensure safe design standards in crashworthiness and crash compatibility for various modes of transportation (see automobile safety) or related systems and components.
Types
Frontal-impact tests: what most people initially think of when asked about a crash test. Vehicles usually impact a solid concrete wall at a specified speed, but these can also be vehicle-into-vehicle tests. SUVs have drawn particular scrutiny in these tests because of their typically high ride height.
Moderate overlap tests: in which only part of the front of the car impacts a barrier (or another vehicle). These are important, as the impact forces remain approximately the same as in a frontal impact test, but a smaller fraction of the car is required to absorb all of the force. Such crashes commonly occur when a car turns into oncoming traffic. This type of testing is done by the U.S. Insurance Institute for Highway Safety (IIHS), Euro NCAP, the Australasian New Car Assessment Program (ANCAP) and ASEAN NCAP.
Small overlap tests: in which only a small portion of the car's structure strikes an object such as a pole or a tree, or clips another car. This is the most demanding test, because it loads the most force onto the structure of the car at any given speed. These tests usually overlap 15–20% of the front vehicle structure.
Side-impact tests: these forms of accidents have a very significant likelihood of fatality, as cars do not have a significant crumple zone to absorb the impact forces before an occupant is injured.
Pole-impact tests: A difficult test which places a large amount of force on a small proportion on the side of the vehicle.
Roll-over tests: which tests a car's ability (specifically the pillars holding the roof) to support itself in a dynamic impact. More recently, dynamic rollover tests have been proposed in lieu of static crush testing (video).
Roadside hardware crash tests: are used to ensure crash barriers and crash cushions will protect vehicle occupants from roadside hazards, and also to ensure that guard rails, sign posts, light poles and similar appurtenances do not pose an undue hazard to vehicle occupants.
Old versus new: Often an old and big car against a small and new car, or two different generations of the same car model. These tests are performed to show the advancements in crash-worthiness.
Computer model: Because of the cost of full-scale crash tests, engineers often run many simulated crash tests using computer models to refine their vehicle or barrier designs before conducting live tests.
Sled testing: A cost-effective way of testing components such as airbags and seat belts is conducting sled crash testing. The two most common types of sled systems are reverse-firing sleds which are fired from a standstill, and decelerating sleds which are accelerated from a starting point and stopped in the crash area with a hydraulic ram. It can also be used to evaluate the whiplash protection of a vehicle's seat.
Major providers
Auto Review Car Assessment Program (ARCAP)
Allgemeiner Deutscher Automobil-Club (ADAC) in Germany
National Highway Traffic Safety Administration (NHTSA) in the United States, specifically the Federal Motor Vehicle Safety Standard (FMVSS) and New Car Assessment Program (NCAP)
Data collection
Crash tests are conducted under rigorous scientific and safety standards. Each crash test is very expensive so the maximum amount of data must be extracted from each test. Usually, this requires the use of high-speed data-acquisition, at least one triaxial accelerometer and a crash test dummy, but often includes more.
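One example of what is computed from that accelerometer data is the Head Injury Criterion (HIC, listed under "See also"), which condenses a head acceleration trace into a single severity number. The sketch below is our own illustration, assuming a uniformly sampled trace in units of g:

```python
def hic(accel_g, dt, max_window=0.015):
    # Head Injury Criterion: maximize (t2 - t1) * (average acceleration)^2.5
    # over all windows (t1, t2) no longer than max_window seconds (HIC15).
    n = len(accel_g)
    cum = [0.0]  # trapezoidal cumulative integral of a(t) dt
    for i in range(1, n):
        cum.append(cum[-1] + 0.5 * (accel_g[i] + accel_g[i - 1]) * dt)
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            window = (j - i) * dt
            if window > max_window + 1e-12:
                break
            mean_a = (cum[j] - cum[i]) / window
            if mean_a > 0.0:
                best = max(best, window * mean_a ** 2.5)
    return best

# A constant 50 g pulse sustained for the full 15 ms window scores
# 0.015 * 50**2.5, roughly 265 -- well under the 700 limit commonly applied
# to HIC15 in U.S. frontal-impact compliance testing.
```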
Organizations that conduct crash tests include Calspan, an independent test laboratory in Buffalo, NY. On the strength of its capabilities and expertise, Calspan has been awarded five-year contracts by the National Highway Traffic Safety Administration (NHTSA) to execute FMVSS No. 214 (Side Impact Protection), FMVSS No. 301 (Fuel System Integrity), and FMVSS No. 305 (Electric Powered Vehicles: Electrolyte Spillage and Electrical Shock Protection) compliance crash tests. Calspan also holds the NHTSA contracts for executing New Car Assessment Program crash tests.
Also, the Monash University Department of Civil Engineering routinely conducts crash tests for the purposes of roadside barrier safety and design.
Consumer response
In 1998 the Rover 100 received a one-star Adult Occupant Rating in EuroNCAP crash tests; sales promptly collapsed and the 18-year-old design was quickly scrapped.
In 2005 the Daewoo Kalos made news in Europe and Australia by scoring only two stars in its crash test, resulting in lower sales and demonstrating the influence of vehicle crashworthiness on a model's success in the marketplace. For Holden in Australia, which retailed the Kalos under the Holden Barina name, the score generated a considerable amount of negative publicity, and the managing director of Holden was forced to publicly defend the vehicle.
The second-generation Isuzu Trooper (1995–1997) models were rated "Not Acceptable" by Consumer Reports for their tendency to roll over during testing. After the report, Trooper sales never recovered, and two years later production ceased.
Crash testing programs
There are a number of crash test programs around the world dedicated to providing consumers with a source of comparative information about the safety performance of new and used vehicles. Examples of new car crash test programs include the National Highway Traffic Safety Administration's NCAP, the Insurance Institute for Highway Safety, the Australasian New Car Assessment Program, Euro NCAP and JNCAP. Programs such as the Used Car Safety Ratings provide consumers information on the safety performance of vehicles based on real-world crash data.
In 2020, Euro NCAP introduced a mobile progressive deformable barrier (MPDB) test, first trialled on the Toyota Yaris.
See also
Air safety
Automobile safety
Automobile safety rating
Car accident
Crash test dummy
Crashworthiness
European New Car Assessment Programme (Euro NCAP)
Head injury criterion
Insurance Institute for Highway Safety
Moose test
Out of position (crash testing)
NASA Impact Dynamics Research Facility
References
External links
Automotive Safety and Bharat NCAP
How Crash Testing Works at HowStuffWorks
Insurance Institute of Highway Safety
EuroNCAP
Motorward: All you need to know about crash tests
Mechanical tests
Transport safety
Product testing | Crash test | Physics,Engineering | 1,322 |
4,028,673 | https://en.wikipedia.org/wiki/Anacin | Anacin is an American brand of analgesic that is manufactured by Prestige Consumer Healthcare. Anacin's active ingredients are aspirin and caffeine.
History
Anacin was invented by William Milton Knight and was first used as stated in the patent. Trademarked in 1918, Anacin is one of the oldest brands of pain relievers in the United States. It originally contained acetophenetidin (phenacetin) and was promoted as "aspirin-free relief," but was reformulated in the 1980s following the FDA's ruling to withdraw phenacetin from the market in 1983 due to concerns over its carcinogenic properties.
It was originally sold by the Anacin Co. ("Pharmaceutical Chemists") in Chicago, Illinois. American Home Products, now known as Wyeth, purchased the manufacturing rights in 1930. Anacin was reportedly their most popular product. Insight Pharmaceuticals acquired the brand in 2003. In 2014, Prestige Consumer Healthcare signed an agreement with Insight to acquire the company; it was Prestige's largest acquisition to that point.
Advertising
In 1939, Anacin sponsored a daytime serial called Our Gal Sunday. Their sponsorship spanned 18 of the program's 23 years on the air. Early Anacin radio commercials appeared in radio shows and dramas of the 1940s and 1950s. These "formulaic" commercials usually claimed that Anacin was being actively prescribed by doctors and dentists at the time, treated "headaches, neuritis and neuralgia", and that it contained "a combination of medically proven ingredients, like a doctor's prescription", without specifying those ingredients. Sometimes the announcer would mention that there were four active ingredients in Anacin, one of which was the medicine the consumer was already taking. It also claimed to help with depression. The announcer then reminded the listener that Anacin was available "at any drug counter", and "comes in handy (tin) boxes of 12 and 30, and economical family-size bottles of 50 and 100", usually spelling out its name at the end of the commercial.
Anacin sponsored the first made-for-television sitcom, Mary Kay and Johnny. Unsure of how many viewers would be watching when they sponsored the show in 1947, Anacin ran a simple test, offering a free mirror to the first 200 viewers to write for one. The offer drew over 9,000 responses, overwhelming the sponsor but proving television was a viable advertising medium.
Anacin was also a leading sponsor of the television soaps Love of Life, The Secret Storm and the early years of The Young and the Restless.
Anacin is one of the earliest and best examples of a concerted television marketing campaign, created for them in the late 1950s by Rosser Reeves of the Ted Bates ad agency. Many people remember the commercials advertising "tension producing" situations, and the "hammers in the head" advertisement with the slogan "Tension. Pressure. Pain."
An Anacin advertisement in 1962 featured a mother trying to assist her grown daughter with various chores, such as preparing a meal. "Don't you think it needs a little salt?", the mother would say, only to have her nerve-racked daughter shout, "Mother, please, I'd rather do it myself!" As the mother wilted, the daughter would emote and rub her head, with her inner voice saying, "Control yourself! Sure, you've got a headache, you're tense, irritable, but don't take it out on her!" Another commercial had a wife greeting her husband as he pulled into their driveway in his car; the husband responded by yelling "Helen, can't you keep Billy's bike out of the driveway?!?" These advertisement scenarios became popular and were parodied a number of times, including in the Allan Sherman song "Headaches", the 1966 film The Silencers and the 1980 film Airplane.
Anacin had a large billboard behind the center field fence of Yankee Stadium from the 1950s through 1973, until the stadium's 1974–75 renovation.
Products
Anacin covers a family of pain relievers. There are currently two different formulations:
Anacin Regular Strength – contains 400 mg ASA (aspirin) and 32 mg caffeine per tablet.
Anacin Max Strength – contains 500 mg ASA and 32 mg caffeine per tablet.
Side effects
Anacin's side effects may include dizziness, heartburn, irritability, nausea, nervousness, rashes, hives, bloody stools, drowsiness, hearing loss, ringing in the ears, and trouble sleeping.
See also
Anadin, an Anacin brand sold in the United Kingdom, launched in 1932.
References
External links
Prestige Brands Anacin
Insight Pharmaceuticals - Anacin
Prestige Brands
Prestige Brands brands
Drugs developed by Pfizer
Drug brand names
Analgesics | Anacin | Chemistry | 993 |
2,902,853 | https://en.wikipedia.org/wiki/16%20Arietis | 16 Arietis (abbreviated 16 Ari) is a star in the northern constellation of Aries. 16 Arietis is the Flamsteed designation. Its apparent magnitude is 6.01. Based upon the annual parallax shift of , this star is approximately distant from Earth. The brightness of this star is diminished by 0.40 in magnitude from extinction caused by interstellar gas and dust. This is an evolved giant star with a stellar classification of K3 III.
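Distance figures like the one quoted above follow directly from the parallax: a star's distance in parsecs is the reciprocal of its parallax in arcseconds. A small sketch of the conversion (the 10 mas input is a hypothetical value, not 16 Arietis's published parallax, which is elided above):

```python
PARSEC_IN_LY = 3.26156  # light-years per parsec

def parallax_to_distance_ly(parallax_mas):
    # d [pc] = 1 / p [arcsec]; convert milliarcseconds to arcseconds first,
    # then parsecs to light-years.
    return (1.0 / (parallax_mas / 1000.0)) * PARSEC_IN_LY

# A hypothetical 10 mas parallax corresponds to 100 pc, about 326 light-years.
```

Similarly, the 0.40 mag of interstellar extinction quoted above means the star would appear at magnitude 6.01 − 0.40 = 5.61 in the absence of intervening gas and dust.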
References
External links
HR 633
Image 16 Arietis
K-type giants
Aries (constellation)
0633
Durchmusterung objects
Arietis, 16
013363
010203 | 16 Arietis | Astronomy | 139 |
36,837,781 | https://en.wikipedia.org/wiki/Nu%20Cygni | Nu Cygni, Latinized from ν Cygni, is a binary star system in the constellation Cygnus. Its apparent magnitude is 3.94 and it is approximately 374 light years away based on parallax. The brighter component is a magnitude 4.07 A-type giant star with a stellar classification of A0III n, where the 'n' indicates broad "nebulous" absorption lines due to rapid rotation. This white-hued star has an estimated 3.6 times the mass of the Sun and about times the Sun's radius. It is radiating 412 times the Sun's luminosity from its photosphere at an effective temperature of 9,462 K. The magnitude 6.4 companion has an angular separation of 0.24" from the primary.
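The luminosity, radius, and temperature quoted above are tied together by the Stefan-Boltzmann law, L = 4πR²σT⁴, which in solar units reduces to L/Lsun = (R/Rsun)² (T/Tsun)⁴. A quick consistency check (the function name and the nominal solar effective temperature of 5772 K are our assumptions):

```python
def radius_in_solar_units(lum_solar, teff_k, teff_sun=5772.0):
    # Invert L/Lsun = (R/Rsun)**2 * (T/Tsun)**4 for the radius:
    # R/Rsun = sqrt(L/Lsun) / (T/Tsun)**2.
    return lum_solar ** 0.5 / (teff_k / teff_sun) ** 2

# Plugging in the article's L = 412 Lsun and Teff = 9462 K implies a radius
# of roughly 7.5 solar radii, consistent with a giant-class star.
```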
Notes
References
Cygnus (constellation)
A-type giants
Cygni, Nu
103413
8028
199629
Cygni, 58
Durchmusterung objects | Nu Cygni | Astronomy | 197 |
19,804,030 | https://en.wikipedia.org/wiki/Ethyl%20cellulose | Ethyl cellulose (or ethylcellulose) is a derivative of cellulose in which some of the hydroxyl groups on the repeating glucose units are converted into ethyl ether groups. The number of ethyl groups can vary depending on the manufacturer.
It is mainly used as a thin-film coating material for coating paper, vitamin and medical pills, and for thickeners in cosmetics and in industrial processes.
Food-grade ethyl cellulose is one of the few non-toxic films and thickeners that are not water-soluble. This property allows it to be used to safeguard ingredients from water.
Ethyl cellulose is also used as a food additive as an emulsifier (E462).
Ethyl cellulose (EC) is commonly used as a coating material for tablets and capsules, as it provides a protective barrier that prevents the active ingredients from being released too quickly in the digestive system. EC is also used as a binder, thickener, and stabilizer in a variety of food, cosmetic, and pharmaceutical products.
See also
Ethyl methyl cellulose
Methyl cellulose
References
Cellulose
Food additives
Excipients
Cellulose ethers
E-number additives | Ethyl cellulose | Chemistry | 252 |
7,367,566 | https://en.wikipedia.org/wiki/CollectSPACE | collectSPACE is an online publication and community for space history enthusiasts featuring articles and photos about space artifacts and memorabilia, information on past, current, and upcoming space events, space history collecting resources, and links to other space-related websites. It also provides an array of message boards where registered members can discuss various aspects of space history and the space collecting hobby; buy, sell, or trade items; or pose "what if?" historical questions. Users often abbreviate the website's name as "cS," and members often refer to each other as "cSers."
collectSPACE, founded and edited by Robert Pearlman, has published articles and reviews by authors Andrew Chaikin (A Man on the Moon), Kris Stoever (For Spacious Skies), James Oberg (Red Star in Orbit), Frederick Ordway III (Imagining Space), Francis French (In the Shadow of the Moon), David Hitt (Homesteading Space), Russell Still (Relics of the Space Race), Colin Burgess (Into That Silent Sea), Jay Gallentine (Ambassadors From Earth) and Apollo astronaut Walt Cunningham, among others.
History
The website's intended name was spacememorabilia.com, for which a logo had been designed; however, the URL was owned (though not in use) by former Gemini and Apollo astronaut Pete Conrad. Pearlman instead bought the URL collectSPACE.com, which came online on July 20, 1999, the 30th anniversary of the Apollo 11 Moon landing (Conrad died unexpectedly July 8).
collectSpace originally contained a photo gallery, drawing on Pearlman's personal collection; "Sightings," a calendar of astronaut appearances; and a short article about Apollo 11 anniversary toys. "Sightings" was chosen to show up in Internet searches for Sightings, a TV series about UFOs. The site's original tagline was "memorabilia from the conquest of the final frontier," which became "The Source for Space History & Artifacts."
collectSPACE earned national media attention later in 1999 for its role in halting a controversial eBay auction for Space Shuttle Challenger debris. In September 1999, it first covered a space memorabilia auction—Christie's East—followed by Superior Galleries of Beverly Hills, California the following month. collectSPACE was the first to webcast space memorabilia auctions, providing live audio (and one year, video) from Superior Gallery's auction floor, as well as live hammer results (auction houses subsequently added their own webcast capabilities or partnered with eBay for live online bidding).
The site's message board went online in November 1999. Among those posting and replying to messages have been former Apollo EECOM flight controller Sy Liebergot; Stephen Clemmons, a member of the Apollo 1 ground support crew; Project Mercury astronaut Scott Carpenter's daughter Kris Stoever; astronaut Pete Conrad's son, Pete Conrad III; National Air and Space Museum curator Allan Needell; space historian Dwayne A. Day; Who's Who in Space authors Michael Cassutt and Rex Hall; Kraig McNutt of "Today In Space History"; and The Surfaris' former bassist Andrew Lagomarsino, among others. A number of astronauts are known to be cS readers.
collectSPACE was nominated for The Houston Chronicle's best blog in its Ultimate Houston Readers Pick for 2005.
In 2006, collectSPACE was the first to reveal the name of NASA's next planned crewed spacecraft, Orion, and publish its logo; as well as the name Altair for the next planned lunar lander.
Charitable auctions
In the wake of the September 11 terrorist attacks, collectSPACE organized Heroes Helping Heroes, an online auction benefiting the American Red Cross. In partnership with Yahoo! Auctions, the site offered bidders the chance to have an item of their choice signed by one of 22 retired astronauts who volunteered to participate. The auction raised $12,686.
Between 2003 and 2006, collectSPACE hosted annual silent auctions benefiting the Astronaut Scholarship Foundation. The astronaut experiences and artifacts auctions have raised more than $180,000 for exceptional college students seeking degrees in science and engineering.
References
External links
Internet forums
Space organizations
American educational websites
Space advocacy organizations | CollectSPACE | Astronomy | 875 |
19,714,534 | https://en.wikipedia.org/wiki/Technetium%20%2899mTc%29%20votumumab | {{DISPLAYTITLE:Technetium (99mTc) votumumab}}
Technetium (99mTc) votumumab (trade name HumaSPECT) is a human monoclonal antibody labelled with the radionuclide technetium-99m. It was developed for the detection of colorectal tumors, but has never been marketed.
The target of votumumab is CTAA16.88, a complex of cytokeratin polypeptides in the molecular weight range of 35 to 43 kDa, which is expressed in colorectal tumors.
References
Radiopharmaceuticals
Technetium-99m
Abandoned drugs
Technetium compounds
Antibody-drug conjugates | Technetium (99mTc) votumumab | Chemistry,Biology | 154 |
9,328,589 | https://en.wikipedia.org/wiki/Cryogenic%20grinding | Cryogenic grinding, also known as freezer milling, freezer grinding, and cryomilling, is the act of cooling or chilling a material and then reducing it into a small particle size. For example, thermoplastics are difficult to grind to small particle sizes at ambient temperatures because they soften, adhere in lumpy masses and clog screens. When chilled by dry ice, liquid carbon dioxide or liquid nitrogen, the thermoplastics can be finely ground to powders suitable for electrostatic spraying and other powder processes. Cryogenic grinding of plant and animal tissue is a technique used by microbiologists. Samples that require extraction of nucleic acids must be kept at −80 °C or lower during the entire extraction process. For samples that are soft or flexible at room temperature, cryogenic grinding may be the only viable technique for processing samples. A number of recent studies report on the processing and behavior of nanostructured materials via cryomilling.
Freezer milling
Freezer milling is a type of cryogenic milling that uses a solenoid to mill samples. The solenoid moves the grinding media back and forth inside the vial, grinding the sample down to analytical fineness. This approach is especially useful for temperature-sensitive samples, which are milled at liquid nitrogen temperature. The advantage of a solenoid is that the only "moving part" in the system is the grinding media inside the vial; at liquid nitrogen temperature (−196 °C) any other moving part would come under great stress, leading to potentially poor reliability. Cryogenic milling using a solenoid has been used for over 50 years and has proven to be a very reliable method of processing temperature-sensitive samples in the laboratory.
Cryomilling
Cryomilling is a variation of mechanical milling in which metallic powders or other samples (e.g. temperature-sensitive samples and samples with volatile components) are milled in a cryogen slurry (usually liquid nitrogen or liquid argon) or at cryogenic temperature under controlled processing parameters, so that a nanostructured microstructure is attained. Cryomilling takes advantage of both the cryogenic temperatures and conventional mechanical milling. The extremely low milling temperature suppresses recovery and recrystallization and leads to finer grain structures and more rapid grain refinement. The embrittlement of the sample makes even elastic and soft samples grindable. Tolerances of less than 5 μm can be achieved. The ground material can be analyzed by a laboratory analyzer.
Applications in biology
Cryogenic grinding (or "cryogrinding") is a method of cell disruption employed by molecular life scientists to obtain broken cell material with favorable properties for protein extraction and affinity capture. Once ground, the fine powder consisting of broken cells (or "grindate") can be stored for long periods at –80°C without obvious changes to biochemical properties – making it a very convenient source material in e.g. proteomic studies including affinity capture / mass spectrometry.
References
Cryogenics
Microbiology techniques
Grinding and lapping
Plastics industry | Cryogenic grinding | Physics,Chemistry,Biology | 632 |
51,754,640 | https://en.wikipedia.org/wiki/Hybrid%20shipping%20container | A hybrid shipping container is a shipping system that uses the energy of phase-change material (PCM) in combination with the ability to recharge without removing the media. This ability is known as cold-energy battery.
Application
Currently, this technology is only being used in a limited number of shipping containers.
SkyCell - high energy protection
TOWER - very limited energy protection
Peli BioThermal - high energy protection
World Courier Cocoon - similar platform to va-Q-tec
va-Q-tec - high energy protection combined with vacuum panels
A cold-energy battery works by being charged to a given temperature and using its thermal mass to maintain that temperature. It can be recharged by being placed in a temperature range within its phase-change window.
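The cooling capacity of such a PCM system is dominated by the material's latent heat of fusion. A minimal sketch of that calculation (illustrative values only; the article gives no figures, and water/ice is assumed here as the PCM):

```python
def stored_cold_energy_kj(mass_kg, latent_heat_kj_per_kg,
                          cp_kj_per_kg_k=0.0, sensible_dt_k=0.0):
    # Cooling capacity of a PCM: latent heat of the phase change,
    # plus an optional sensible-heat contribution outside the melt window.
    return mass_kg * (latent_heat_kj_per_kg + cp_kj_per_kg_k * sensible_dt_k)

# Water/ice as an assumed PCM: latent heat of fusion ~334 kJ/kg.
q = stored_cold_energy_kj(mass_kg=10.0, latent_heat_kj_per_kg=334.0)
print(f"{q:.0f} kJ of cooling available before the PCM fully melts")
```

Recharging simply reverses the phase change: holding the PCM below its melt window restores the full latent-heat capacity without removing the media.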
See also
Insulated shipping container
References
External links
Storage Container
Shipping containers | Hybrid shipping container | Physics | 169 |
63,611,729 | https://en.wikipedia.org/wiki/Spill%20%28book%29 | Spill is a 1991 fictional thriller by Les Standiford about a lethal biological weapon that has leaked from a crashed tanker truck in Yellowstone National Park. Agents of PetroDyne Corporation, the Denver-based chemical company responsible for manufacturing the banned agent, works with its co-conspirators in the government to cover up the incident. A Yellowstone National Park Ranger named Jack Fairchild finds himself in the middle of the coverup and does everything in his power to help his friends escape an assassin named Skanz.
Spill was dramatized in Virus, a 1996 film directed by Allan A. Goldstein and starring Brian Bosworth.
Plot
A lethal biological weapon has leaked from a crashed tanker truck in Yellowstone National Park. PetroDyne Chemical, the company manufacturing the substance banned by international treaty, sends the tanker from its headquarters in Denver to a storage facility in Idaho. The genetically engineered form of hemorrhagic fever has spilled into a waterway and has infected wildlife and humans in a popular camping area in the small town of West Yellowstone, Montana. The driver responsible for the spill has been paid by a rogue employee of the company to divert the shipment to the hills of Yellowstone.
Agents for PetroDyne work with local government officials who are aware of the transport of hazardous chemicals by the company to cover up the incident. The coverup involves finding any survivors of the spill, quarantining them, observing them, and even worse, allowing them to die and incinerating the bodies. The head of PetroDyne Corporation, a man named Schreiber who works at the company headquarters in Denver, directs a loyal employee named Alec Reisman and a half-mad hitman named Skanz to clean up the mess that was created.
Jack Fairchild is the Park Ranger who finds the truck driver who caused the spill, and other survivors, and must overcome all obstacles to free them from the grasp of PetroDyne’s security team.
Reception
The book received reviews from publications including Publishers Weekly, Kirkus Reviews, Library Journal, Chicago Tribune, and Los Angeles Times.
References
1991 American novels
1991 science fiction novels
Action novels
American novels adapted into films
Yellowstone National Park
Novels set in Montana | Spill (book) | Biology | 437 |
15,449,185 | https://en.wikipedia.org/wiki/Novosphingobium | Novosphingobium is a genus of Gram-negative bacteria that includes N. taihuense, which can degrade aromatic compounds such as phenol, aniline, nitrobenzene and phenanthrene. The species N. aromativorans, which was first found in Ulsan Bay, similarly degrades aromatic molecules of two to five rings.
Species
Accepted Species
Novosphingobium comprises the following species:
Novosphingobium acidiphilum Glaeser et al. 2009
Novosphingobium aquaticum Glaeser et al. 2013
Novosphingobium aquimarinum Le et al. 2020
Novosphingobium aquiterrae Lee et al. 2014
Novosphingobium arabidopsis Lin et al. 2014
Novosphingobium aromaticivorans corrig. (Balkwill et al. 1997) Takeuchi et al. 2001
Novosphingobium arvoryzae Sheu et al. 2018
Novosphingobium barchaimii Niharika et al. 2013
Novosphingobium bradum Sheu et al. 2016
Novosphingobium capsulatum (Leifson 1962) Takeuchi et al. 2001
Novosphingobium chloroacetimidivorans Chen et al. 2014
Novosphingobium clariflavum Zhang et al. 2017
Novosphingobium colocasiae Chen et al. 2016
Novosphingobium endophyticum Li et al. 2016
Novosphingobium flavum Nguyen et al. 2016
Novosphingobium fluoreni Gao et al. 2015
Novosphingobium fontis Sheu et al. 2017
Novosphingobium fuchskuhlense Glaeser et al. 2013
Novosphingobium gossypii Kämpfer et al. 2015
Novosphingobium guangzhouense Sha et al. 2017
Novosphingobium hassiacum Kämpfer et al. 2002
Novosphingobium humi Hyeon et al. 2017
Novosphingobium indicum Yuan et al. 2009
Novosphingobium ipomoeae Chen et al. 2017
Novosphingobium kunmingense Xie et al. 2014
Novosphingobium lentum Tiirola et al. 2005
Novosphingobium lindaniclasticum Saxena et al. 2013
Novosphingobium lotistagni Ngo et al. 2016
Novosphingobium lubricantis Kämpfer et al. 2018
Novosphingobium malaysiense Lee et al. 2014
Novosphingobium marinum Huo et al. 2015
Novosphingobium mathurense Gupta et al. 2009
Novosphingobium meiothermophilum Xian et al. 2019
Novosphingobium naphthae Chaudhary and Kim 2016
Novosphingobium naphthalenivorans Suzuki and Hiraishi 2008
Novosphingobium nitrogenifigens Addison et al. 2007
Novosphingobium olei Chaudhary et al. 2021
Novosphingobium oryzae Zhang et al. 2016
Novosphingobium ovatum Chen et al. 2020
Novosphingobium panipatense Gupta et al. 2009
Novosphingobium pentaromativorans Sohn et al. 2004
Novosphingobium piscinae Sheu et al. 2016
Novosphingobium pokkalii Krishnan et al. 2017
Novosphingobium resinovorum (Delaporte and Daste 1956) Lim et al. 2007
Novosphingobium rhizosphaerae Kämpfer et al. 2015
Novosphingobium rosa corrig. (Takeuchi et al. 1995) Takeuchi et al. 2001
Novosphingobium sediminicola Baek et al. 2011
Novosphingobium silvae Feng et al. 2020
Novosphingobium soli Kämpfer et al. 2011
Novosphingobium stygium corrig. (Balkwill et al. 1997) Takeuchi et al. 2001
Novosphingobium subterraneum corrig. (Balkwill et al. 1997) Takeuchi et al. 2001
Novosphingobium taihuense Liu et al. 2005
Novosphingobium umbonatum Sheu et al. 2020
Provisional Species
The following species names have been published, but not validated according to the Bacteriological Code:
"Novosphingobium aquaticum" Singh et al. 2015
"Novosphingobium ginsenosidimutans" Kim et al. 2013
"Novosphingobium profundi" Zhang et al. 2017
"Novosphingobium sediminis" Li et al. 2012
"Novosphingobium tardum" Chen et al. 2015
References
Hydrocarbon-degrading bacteria
Bacteria genera
Sphingomonadales | Novosphingobium | Biology | 1,049 |
40,504,424 | https://en.wikipedia.org/wiki/Gravity%20current%20intrusion | The term gravity current intrusion denotes the fluid mechanics phenomenon within which a fluid intrudes with a predominantly horizontal motion into a separate stratified fluid, typically along a plane of neutral buoyancy. This behaviour distinguishes the difference between gravity current intrusions and gravity currents, as intrusions are not restrained by a well-defined boundary surface. As with gravity currents, intrusion flow is driven within a gravity field by density differences typically small enough to allow for the Boussinesq approximation.
The driving density difference that produces intrusion motion can simply be due to differences in chemical composition; however, it can also be caused by differences in fluid temperature, dissolved-matter concentration, or suspended particulate matter.
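Under the Boussinesq approximation mentioned above, the small density difference enters the dynamics only through a reduced gravity g′ = g·Δρ/ρ. A minimal illustrative sketch (the Froude number of 0.5 is an assumed order-of-magnitude value for the buoyancy–inertia scaling, not a result quoted from the literature cited here):

```python
import math

def reduced_gravity(rho_intrusion, rho_ambient, g=9.81):
    # Boussinesq reduced gravity: g' = g * |Δρ| / ρ_ambient
    return g * abs(rho_intrusion - rho_ambient) / rho_ambient

def front_speed(g_prime, depth, froude=0.5):
    # Buoyancy-inertia scaling for the front: U = Fr * sqrt(g' * h)
    return froude * math.sqrt(g_prime * depth)

# Example: slightly salty water intruding into fresh water, 0.2 m lock depth
gp = reduced_gravity(1005.0, 1000.0)
u = front_speed(gp, depth=0.2)
print(f"g' = {gp:.4f} m/s^2, front speed ~ {u:.3f} m/s")
```

The few-percent density contrast gives a reduced gravity of order 0.05 m/s², hence front speeds of a few centimetres per second at laboratory scale, which is why lock-exchange experiments are practical for studying intrusion dynamics.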
Examples of particulate suspension intrusions include sediment-laden river outflows within oceans, 'short-circuit' sewage sedimentation tank intrusions and turbidity current flows over hypersaline Mediterranean pools. Examples also exist of particulate intrusions caused by the lateral spread of thermals or plumes along planes of neutral buoyancy, such as intrusions containing metalliferous sediments formed from deep ocean hydrothermal vents, or equally crystal-laden intrusions formed by plumes within volcanic magma chambers. Arguably the most striking of all gravitational intrusions is the atmospheric gravity current generated by a large 'Plinian' volcanic eruption, in which case the volcano's overhanging 'umbrella' is an example of an intrusion laterally intruding into the stratified troposphere.
Research
Work analysing gravity currents propagating within a single fluid host was broadened to consider intrusions within sharply stratified fluids by Holyer & Huppert in 1980. Since then there have been significant analytical and experimental advances in understanding particle-laden intrusions specifically, by researchers including Bonnecaze et al. (1993, 1995, 1996), Rimoldi et al. (1996), and de Rooij et al. (1999). As of 2012, the most recent rigorous analytical treatment, designed to determine the propagation speed of a classically extending intrusion, was performed by Flynn and Linden. Practical experimentation on intrusions has typically employed a lock exchange to study intrusion dynamics.
Structure
The basic structure of a gravity intrusion is similar to that of a classic gravity current, with a roughly elliptical 'head' followed by a tail that stretches as the current lengthens; it is within the rear half of the intrusion head that most mixing with the ambient fluid takes place. As with gravity currents, intrusions display the same 'slumping', 'self-similar' and 'viscous' phases during propagation.
References
Fluid dynamics | Gravity current intrusion | Chemistry,Engineering | 538 |
75,360,752 | https://en.wikipedia.org/wiki/Erfonrilimab | Erfonrilimab is a investigational drug being evaluated for use in cancer immunotherapy. It is a bispecific antibody targeting PD-L1 and CTLA-4.
References
Cancer immunotherapy
Monoclonal antibodies | Erfonrilimab | Chemistry | 56 |
11,999,467 | https://en.wikipedia.org/wiki/Telos%20Alliance | Telos Alliance is an American corporation manufacturing audio products primarily for broadcast stations. Headquartered in Cleveland, Ohio, US, the company is divided into six divisions:
Telos Systems manufactures talkshow systems, IP audio codecs and transceivers, as well as streaming audio encoders.
Omnia Audio makes audio processors for AM, FM, HD Radio, and Internet audio streaming applications.
Axia Audio builds mixing consoles and audio distribution systems based on Livewire IP networking, an audio over Ethernet protocol.
Linear Acoustic, whose product line includes TV loudness controls, metering and monitoring devices, along with mixing and metadata tools.
25-Seven Systems specializes in broadcast delays, time management and processing products.
Minnetonka Audio Software delivers software-based audio automation to media production infrastructures.
History and founder
Telos Alliance began as Telos Systems, a part-time project founded in 1985 by radio station engineer and talk show host (WFBQ, WMMS) Steve Church. Its first product was a telephone hybrid, the Telos 10, which was based on digital signal processing.
Church visited Fraunhofer in Germany in the late 1980s. There, he learned of MPEG-1 Audio Layer III audio coding. Telos became the first licensee in the United States of what is now known as MP3. MP3 became part of the solution to long-distance remote broadcasts using Integrated Services Digital Network (ISDN). This became the preferred alternative to leased lines available since the 1920s and satellite links available since the 1970s.
Audio over IP (AoIP) technology called Livewire made its debut in 2003 at the NAB Show in Las Vegas. The original Livewire-capable products included mixing consoles, analog, AES, mic and GPIO nodes. Other manufacturers began making their own AoIP broadcast equipment and there was a need for AoIP gear from different manufacturers to communicate with each other. Telos, along with other manufacturers, developed the AES67 standard for AoIP interoperability.
Church received many accolades for his work over the years. In 2010, the National Association of Broadcasters (NAB) honored him with its Radio Engineering award. He stepped down as CEO of Telos in January 2011, and died on September 28, 2012, after a three-year battle with brain cancer.
In the following years, the company also expanded its product lines. Telos Systems continued to develop broadcast telephone systems, IP audio codecs & transceivers, and processing as well as encoding for streaming audio. Networked radio consoles, audio interfaces and routing control, networked intercom, and related software were created under the Axia Audio brand name. Audio processing, processing and encoding products for streaming audio, voice processing, analysis tools, and studio audio processing was developed under the Omnia Audio brand. The three companies were under the larger corporate umbrella known as Telos Systems.
Growth of the company continued with the acquisition of new partners. Linear Acoustic of Lancaster, Pennsylvania, was acquired, along with its product line of TV loudness controls, metering and monitoring devices, and mixing and metadata tools. The corporate name was changed to The Telos Alliance. Shortly thereafter, 25-Seven came on board. This Boston-based company specializes in broadcast delays, time management and processing products which result in more efficient and profitable radio operations. In September 2015, Minnetonka Audio Software joined the Telos Alliance through a merger of the companies. The Minnetonka, Minnesota-based company delivers a file-based software alternative to hardware program optimizers, providing audio automation to media production infrastructures.
In September 2016, Linear Acoustic and Minnetonka Audio were rebranded as The TV Solutions Group, which provides consulting and partnerships with television broadcasters seeking to transition to the latest technology.
References
Companies based in Cleveland
Broadcast engineering
Manufacturing companies established in 1985
Manufacturers of professional audio equipment
Audio equipment manufacturers of the United States
American companies established in 1985 | Telos Alliance | Engineering | 799 |
31,878,723 | https://en.wikipedia.org/wiki/European%20Space%20Weather%20portal | The European Space Weather Portal (ESWeP) results from the COST (European Cooperation in Science and Technology) action 724 (Developing the basis for monitoring, modelling and predicting Space Weather). ESWeP is a website providing a centralized access point to the space weather community to share their knowledge and results. The website is hosted and maintained at the Belgian Institute for Space Aeronomy. Its development will be continued in the framework of the COST ES0803 action (Developing space weather products and services in Europe).
On ESWeP, a section is devoted to education and outreach. Children as young as five years old are invited to get involved in some of these activities. For example, artworks illustrating space weather made by primary school children from several countries are presented on the website.
Models
The portal provides a platform to run local and remote models and access their results both in graphical and numerical forms. Models hosted at ESWeP:
Geomagnetic cutoff calculations (K. Kudela (IEP/SAV) & M. Storini (IFSI/CNR), COST724)
SOLPENCO (A. Aran, B. Sanahuja and D. Lario, University of Barcelona, COST724)
Exospheric solar wind model (H. Lamy and V. Pierrard, BISA, COST724)
Plasmapause location (V. Pierrard, BISA, COST724)
Magnetocosmics cutoffs (L. Desorgher, University of Bern, COST724)
Magnetocosmics trajectories (L. Desorgher, University of Bern, COST724)
Space weather document repository
Space weather research and services can only be organized efficiently with reference documentation. The FP7 SOTERIA project identified the need to have a repository where space weather professionals can upload and share their technical documents, reference documents, standards, or research papers.
The "Space Weather Document Repository", which is an online tool to disseminate reference documents (papers, reports, etc.) that are space weather-related, was developed by ESWeP in collaboration with SOTERIA.
References
External links
BIRA-IASB homepage
2008 establishments in Belgium
Science and technology in Belgium
Space physics
Space weather | European Space Weather portal | Astronomy | 467 |
24,430,575 | https://en.wikipedia.org/wiki/The%20Korea%20IT%20Times | Korea IT Times is a bilingual publication (Korean and English) with an eye on Industry & Technology, including the ICT field based in Seoul, South Korea.
Publication Details:
• Launch: July 2004
• Publisher: Korea ET Times Media Group
• Category: ICT, Science news and issues including all of Industry and Technology
• Internet and Mobile online: Daily News
• Print Magazine: Monthly
• Language: English and Korean
Core team
Chung Monica Younsoo, the founder and publisher of the Korea IT Times, served as the editor of the Korea Economic Daily News.
Jung Yeon-tae, former CEO of KOSCOM (a Korean provider of financial IT services), was inaugurated as co-publisher and chairman of the Korea IT Times on November 3, 2015.
Lee Kap-soo, former editor of Korea Times, was inaugurated as editor-in-chief of the Korea IT Times on January 10, 2017.
The Korea IT Times editors are Yeon Choul-woong, Jeong Yeon-jin and Chun Clair Go-eun, CMO in New York.
The Korea IT Times staff reporters are Oh Hae-young, Arthur E. Michalak, Yeon Je-hyun, D.Peter Kim, Timothy Daniel, Kim Min-ji, Lee Jun-seong, Jung Se-jin, Kim In-wook, Park Jeong-Jun, Travis Allen, Kim Sung-kap, Ryan Shuster, Natasha Willhite.
The special advisors are Dr. Yang Seung-taik, former Minister of Information and Communications, Dr. Shin Kook-hwan, the former Minister of Commerce, Industry and Energy, Dr. Kim Hak-su, the former ESCAP Executive Secretary, Dr. Youn Hwa-jin, the former senior economist of ADB, Dr. Kim Wan-soon, professor emeritus, business school of Korea University and Dr. Park Ho-goon, the former Minister of Science and Technology.
Association
The content partners of the Korea IT Times are Google News, Naver News, Euromoney EMIS, Nasdaq Globe Newswire, PR Newswire, Media OutReach and News Republic.
References
Publications established in 2004
2004 establishments in South Korea
Mass media in Seoul
Magazines published in South Korea
English-language magazines published in South Korea | The Korea IT Times | Technology | 679 |
38,112,149 | https://en.wikipedia.org/wiki/Omicron1%20Orionis | {{DISPLAYTITLE:Omicron1 Orionis}}
Omicron1 Orionis (ο1 Ori) is a binary star in the northeastern corner of the constellation Orion. It is visible to the naked eye with an apparent visual magnitude of 4.7. Based upon an annual parallax shift of , it is located approximately 650 light years from the Sun. At that distance, the visual magnitude of the star is diminished by an interstellar absorption factor of 0.27 due to intervening dust.
The two components of this system have an orbital period of more than 1,900 days (5.2 years). The primary component is an evolved red giant with the stellar classification M3S III. This is an S-type star on the asymptotic giant branch. The variability of the brightness of ο1 Orionis was announced by Joel Stebbins and Charles Morse Huffer in 1928, based on observations made at Washburn Observatory. It is a semiregular variable pulsating with periods of 30.8 and 70.7 days, each with nearly identical amplitudes of 0.05 in magnitude. The star has an estimated 90% of the mass of the Sun but has expanded to 214 times the Sun's radius. It shines with 4,046 times the solar luminosity from its outer atmosphere at an effective temperature of 3,465 K.
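The distance quoted above follows from the standard parallax relation d [pc] = 1/p [arcsec]. A short sketch (the 5.0 mas parallax used here is a hypothetical value chosen only to reproduce the ~650-light-year figure; the measured parallax is not given in this text):

```python
LY_PER_PARSEC = 3.2616  # light years in one parsec

def distance_ly(parallax_arcsec):
    # d [pc] = 1 / p [arcsec], then convert parsecs to light years
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

d = distance_ly(5.0e-3)  # assumed parallax of 5.0 milliarcseconds
print(f"~ {d:.0f} light years")
```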
References
M-type giants
S-type stars
Asymptotic-giant-branch stars
Orion (constellation)
Orionis, Omicron1
Orionis, 04
030959
022667
1556
Semiregular variable stars
Binary stars
Durchmusterung objects | Omicron1 Orionis | Astronomy | 346 |
71,854,729 | https://en.wikipedia.org/wiki/Hijagang | The Hijagang (Meitei pronunciation: hī-ja-gāng) is a boathouse inside the Kangla Fort in Imphal, India. It houses four traditional Meitei watercraft, including two hiyang hirens (Royal racing boats) and two tanna his (commoners' racing boats). According to Meitei religious beliefs, the hiyang hirens are used by the male ancestral deity () and female ancestral deity () and are sacred to the Meiteis, the major ethnic group of Manipur.
Construction and inauguration
The construction of the Hijagang watercraft storage building began in 2010 and was completed in 2013.
On 21 August 2013, with the performances of necessary religious rites and rituals by the and the in the early morning, the Hijagang was inaugurated by Okram Ibobi Singh, the then Chief Minister of Manipur, who was also the then President of "Kangla Fort Board".
Featured watercraft
Crafting processes and inauguration
According to RK Nimai, the then Commissioner of the Department of Arts and Culture, Government of Manipur, the two kinds of watercraft were made from a special kind of tree brought from Khamsom village in Senapati district of Manipur. The crafting process began in Khamsom in 2007. The woods for crafting the Hiyang Hirens were brought to the Kangla on 6 June 2007, with sculpting work commencing two days later. The watercraft were made by a four-member team under the leadership of craftsman L. Thoiba. The watercraft were inaugurated on 19 February 2010 (three years before the completion of the building construction).
Crafting materials used
The hiyang hiren are made of uningthou and the tanna hi of tairen.
Lengths, widths and heights
See also
Hiyang Hiren
Hiyang Tannaba
Heikru Hidongba
Kangla Sanathong
Statue of Meidingu Nara Singh
Manipur State Museum
Notes
References
External links
Hijagang at e-pao.net Gallery
Meitei architecture
Monuments and memorials in Imphal
Monuments and memorials to Meitei royalty
Museums in Manipur
Public art in India
Tourist attractions in Manipur | Hijagang | Engineering | 467 |
44,633 | https://en.wikipedia.org/wiki/Ultramarine | Ultramarine is a deep blue color pigment which was originally made by grinding lapis lazuli into a powder. Its lengthy grinding and washing process makes the natural pigment quite valuable—roughly ten times more expensive than the stone it comes from and as expensive as gold.
The name ultramarine comes from the Latin . The word means 'beyond the sea', as the pigment was imported by Italian traders during the 14th and 15th centuries from mines in Afghanistan. Much of the expansion of ultramarine can be attributed to Venice which historically was the port of entry for lapis lazuli in Europe.
Ultramarine was the finest and most expensive blue used by Renaissance painters. It was often used for the robes of the Virgin Mary and symbolized holiness and humility. It remained an extremely expensive pigment until a synthetic ultramarine was invented in 1826.
Ultramarine is a permanent pigment when under ideal preservation conditions. Otherwise, it is susceptible to discoloration and fading.
Structure
The pigment consists primarily of a zeolite-based mineral containing small amounts of polysulfides. It occurs in nature as a proximate component of lapis lazuli containing a blue cubic mineral called lazurite. In the Colour Index International, the pigment is identified as Pigment Blue 29 (C.I. 77007).
The major component of lazurite is a complex sulfur-containing sodium-silicate (Na8–10Al6Si6O24S2–4), which makes ultramarine the most complex of all mineral pigments. Some chloride is often present in the crystal lattice as well. The blue color of the pigment is due to the S3− radical anion, which contains an unpaired electron.
Visual properties
The best samples of ultramarine are a uniform deep blue while other specimens are of paler color.
Particle size distribution has been found to vary among samples of ultramarine from various workshops. Numerous grinding techniques used by painters have resulted in different pigment/medium ratios and particle size distributions. The grinding and purification process results in pigment with particles of various geometries. Different grades of pigment may have been used for different areas in a painting, a characteristic that is sometimes used in art authentication.
Shades and variations
International Klein Blue (IKB), a deep blue hue first mixed by the French artist Yves Klein.
Electric
Electric ultramarine is the tone of ultramarine that lies halfway between blue and violet on the RGB color wheel, as expressed in the HSV color space of the RGB color model.
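This definition can be checked numerically: the hue halfway between blue (240°) and violet (270°) on the HSV wheel is 255°, and converting that fully saturated hue to RGB yields the hex value commonly listed for electric ultramarine (the 240°/270° hue positions are the standard sRGB conventions, assumed here rather than stated in the text):

```python
import colorsys

hue_deg = (240 + 270) / 2  # halfway between blue and violet -> 255°
r, g, b = colorsys.hsv_to_rgb(hue_deg / 360.0, 1.0, 1.0)
hex_code = f"#{int(r * 255):02X}{int(g * 255):02X}{int(b * 255):02X}"
print(hex_code)  # → #3F00FF
```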
Production
Natural production
Historically, lapis lazuli stone was mined in Afghanistan and shipped overseas to Europe.
A method to produce ultramarine from lapis lazuli was introduced and later described by Cennino Cennini in the 15th century. This process consisted of grinding the lapis lazuli mineral, mixing the ground material with melted wax, resins, and oils, wrapping the resulting mass in a cloth, and then kneading it in a dilute lye solution, a potassium carbonate solution prepared by combining wood ash with water. The blue lazurite particles collect at the bottom of the pot, while the colorless crystalline material and other impurities remain at the top. This process was performed at least three times, with each successive extraction generating a lower quality material. The final extraction, consisting largely of colorless material as well as a few blue particles, brings forth ultramarine ash which is prized as a glaze for its pale blue transparency. This extensive process was specific to ultramarine because the mineral it comes from has a combination of both blue and colorless pigments. If an artist were to simply grind and wash lapis lazuli, the resulting powder would be a greyish-blue color that lacks purity and depth of color since lapis lazuli contains a high proportion of colorless material.
Although the lapis lazuli stone itself is relatively inexpensive, the lengthy process of pulverizing, sifting, and washing to produce ultramarine makes the natural pigment quite valuable and roughly ten times more expensive than the stone it comes from. The high cost of the imported raw material and the long laborious process of extraction combined has been said to make high-quality ultramarine as expensive as gold.
Synthetic production
In 1990, an estimated 20,000 tons of ultramarine were produced industrially. The raw materials used in the manufacture of synthetic ultramarine are the following:
white kaolin,
anhydrous sodium sulfate (Na2SO4),
anhydrous sodium carbonate (Na2CO3),
powdered sulfur,
powdered charcoal or relatively ash-free coal, or colophony in lumps.
The preparation is typically made in steps:
The first part of the process takes place at 700 to 750 °C in a closed furnace, so that sulfur, carbon and organic substances give reducing conditions. This yields a yellow-green product sometimes used as a pigment.
In the second step, air or sulfur dioxide at 350 to 450 °C is used to oxidize sulfide in the intermediate product to S2 and Sn chromophore molecules, resulting in the blue (or purple, pink or red) pigment.
The mixture is heated in a kiln, sometimes in brick-sized amounts.
The resultant solids are then ground and washed, as is the case in any other insoluble pigment's manufacturing process; the chemical reaction produces large amounts of sulfur dioxide. (Flue-gas desulfurization is thus essential to its manufacture where SO2 pollution is regulated.)
Ultramarine poor in silica is obtained by fusing a mixture of soft clay, sodium sulfate, charcoal, sodium carbonate, and sulfur. The product is at first white, but soon turns green ("green ultramarine") when it is mixed with sulfur and heated. The sulfur burns, and a fine blue pigment is obtained. Ultramarine rich in silica is generally obtained by heating a mixture of pure clay, very fine white sand, sulfur, and charcoal in a muffle furnace. A blue product is obtained at once, but a red tinge often results. The different ultramarines—green, blue, red, and violet—are finely ground and washed with water.
Synthetic ultramarine is a more vivid blue than natural ultramarine, since the particles in synthetic ultramarine are smaller and more uniform than the particles in natural ultramarine and therefore diffuse light more evenly. Its color is unaffected by light or by contact with oil or lime as used in painting. Hydrochloric acid immediately bleaches it with liberation of hydrogen sulfide. Even a small addition of zinc oxide causes a considerable diminution in the intensity of the color, especially in the reddish varieties. Modern, synthetic ultramarine blue is a non-toxic, soft pigment that does not need much mulling to disperse into a paint formulation.
Structure and classification
Ultramarine is an aluminosilicate zeolite with a sodalite structure. Sodalite consists of interconnected aluminosilicate cages. Some of these cages contain polysulfide groups that act as the chromophore (color centre). The negative charge on these ions is balanced by sodium cations that also occupy these cages.
The chromophore is proposed to be the trisulfide radical anion (S3−) or S4.
History
Antiquity and Middle Ages
The name derives from the Middle Latin ultramarinus, literally "beyond the sea", because the pigment was imported from Asia by sea. In the past, it has also been known by names such as azzurrum ultramarine. In current terminology, the natural pigment is referred to in English as natural ultramarine. The first recorded use of ultramarine as a color name in English was in 1598.
The first noted use of lapis lazuli as a pigment can be seen in 6th and 7th-century paintings in Zoroastrian and Buddhist cave temples in Afghanistan, near the most famous source of the mineral. Lapis lazuli has been identified in Chinese paintings from the 10th and 11th centuries, in Indian mural paintings from the 11th, 12th, and 17th centuries, and on Anglo-Saxon and Norman illuminated manuscripts.
Ancient Egyptians used lapis lazuli in solid form for ornamental applications in jewelry; however, there is no record of them successfully formulating lapis lazuli into paint. Archaeological evidence and early literature reveal that lapis lazuli was used as a semi-precious stone and decorative building stone from early Egyptian times. The mineral is described by the classical authors Theophrastus and Pliny. There is no evidence that ground lapis lazuli was used as a painting pigment by ancient Greeks and Romans. Like ancient Egyptians, they had access to a satisfactory blue colorant in the synthetic copper silicate pigment, Egyptian blue.
Renaissance
Venice was central to both the manufacturing and distribution of ultramarine during the early modern period. The pigment was imported by Italian traders during the 14th and 15th centuries from mines in Afghanistan. Other European countries employed the pigment less extensively than Italy; the pigment was not used even by wealthy painters in Spain at that time.
During the Renaissance, ultramarine was the finest and most expensive blue that could be used by painters. Color infrared photographic studies of ultramarine in 13th and 14th-century Sienese panel paintings have revealed that historically, ultramarine has been diluted with white lead pigment in an effort to use the color more sparingly given its high price. The 15th-century artist Cennino Cennini wrote in his painters' handbook: "Ultramarine blue is a glorious, lovely and absolutely perfect pigment beyond all the pigments. It would not be possible to say anything about or do anything to it which would not make it more so." Natural ultramarine is a difficult pigment to grind by hand, and for all except the highest quality of mineral, sheer grinding and washing produces only a pale grayish blue powder.
The pigment was most extensively used during the 14th through 15th centuries, as its brilliance complemented the vermilion and gold of illuminated manuscripts and Italian panel paintings. It was valued chiefly on account of its brilliancy of tone and its inertness to sunlight, oil, and slaked lime. It is, however, extremely susceptible to even minute and dilute mineral acids and acid vapors. Dilute HCl, HNO3, and H2SO4 rapidly destroy the blue color, producing hydrogen sulfide (H2S) in the process. Acetic acid attacks the pigment at a much slower rate than mineral acids.
Ultramarine was only used for frescoes when it was applied a secco, because the absorption rate of true fresco made its use cost-prohibitive. The pigment was mixed with a binding medium like egg to form a tempera and applied over dry plaster, such as in Giotto di Bondone's frescos in the Cappella degli Scrovegni (Arena Chapel) in Padua.
European artists used the pigment sparingly, reserving their highest quality blues for the robes of Mary and the Christ child, possibly in an effort to show piety, with such spending serving as an expression of devotion. As a result of the high price, artists sometimes economized by using a cheaper blue, azurite, for underpainting. Most likely imported to Europe through Venice, the pigment was seldom seen in German art or art from countries north of Italy. Due to a shortage of azurite in the late 16th and 17th century, the price for the already-expensive ultramarine increased dramatically.
17th and 18th centuries
Johannes Vermeer made extensive use of ultramarine in his paintings. The turban of the Girl with a Pearl Earring is painted with a mixture of ultramarine and lead white, with a thin glaze of pure ultramarine over it. In Lady Standing at a Virginal, the young woman's dress is painted with a mixture of ultramarine and green earth, and ultramarine was used to add shadows in the flesh tones. Scientific analysis by the National Gallery in London of Lady Standing at a Virginal showed that the ultramarine in the blue seat cushion in the foreground had degraded and become paler with time; it would have been a deeper blue when originally painted.
19th century (invention of synthetic ultramarine)
The beginnings of artificial ultramarine blue were recorded by Goethe. In about 1787, he observed blue deposits on the walls of lime kilns near Palermo in Sicily. He was aware of the use of these glassy deposits as a substitute for lapis lazuli in decorative applications, but did not mention whether they were suitable for grinding into a pigment.
In 1814, Tassaert observed the spontaneous formation of a blue compound, very similar to ultramarine, if not identical with it, in a lime kiln at St. Gobain. In 1824, this prompted the offer of a prize for the artificial production of the precious color. Processes were devised by Jean Baptiste Guimet (1826) and by Christian Gmelin (1828), then professor of chemistry in Tübingen. While Guimet kept his process a secret, Gmelin published his, and became the originator of the "artificial ultramarine" industry.
Permanence
Easel paintings and illuminated manuscripts have revealed natural ultramarine in a perfect state of preservation even though the art may be several centuries old. In general, ultramarine is a permanent pigment. Although it is a sulfur-containing compound from which sulfur is readily emitted as H2S, historically, it has been mixed with lead white with no reported occurrences of the lead pigment blackening to become lead sulfide.
A plague known as "ultramarine sickness" has occasionally been observed in ultramarine oil paintings as a grayish or yellowish-gray discoloration of the paint surface. This can occur with artificial ultramarine that is used industrially. The cause has been debated among experts; potential causes include atmospheric sulfur dioxide and moisture, acidity of an oil- or oleo-resinous paint medium, or slow drying of the oil, during which water may have been absorbed, creating swelling and opacity of the medium and therefore whitening of the paint film.
Both natural and artificial ultramarine are stable to ammonia and caustic alkalis under ordinary conditions. Artificial ultramarine has been found to fade when in contact with lime when it is used to color concrete or plaster. These observations have led experts to speculate whether the natural pigment's fading may be the result of contact with the lime plaster of fresco paintings.
Synthetic applications
Synthetic ultramarine, being very cheap, is used for wall painting, the printing of paper hangings, and calico. It also is used as a corrective for the yellowish tinge often present in things meant to be white, such as linen and paper. Bluing or "laundry blue" is a suspension of synthetic ultramarine, or the chemically different Prussian blue, that is used for this purpose when washing white clothes. It is often found in makeup such as mascaras or eye shadows.
Large quantities are used in the manufacture of paper, and especially for producing a kind of pale blue writing paper which was popular in Britain. During World War I, the RAF painted the outer roundels with a color made from ultramarine blue. This became BS 108(381C) aircraft blue. It was replaced in the 1960s by a new color based on phthalocyanine blue, called BS110(381C) roundel blue.
Terminology
Ultramarine is a blue made from natural lapis lazuli, or its synthetic equivalent which is sometimes called "French Ultramarine". More generally "ultramarine blue" can refer to a vivid blue.
The term ultramarine can also refer to other pigments. Variants of the pigment such as "ultramarine red," "ultramarine green," and "ultramarine violet" all resemble ultramarine with respect to their chemistry and crystal structure.
The term "ultramarine green" indicates a dark green while barium chromate is sometimes referred to as "ultramarine yellow". Ultramarine pigment has also been termed "Gmelin's Blue," "Guimet's Blue," "New blue," "Oriental Blue," and "Permanent Blue".
See also
Blue pigments
RAL 5002 Ultramarine blue
Notes
Further reading
Mangla, Ravi (8 June 2015), "True blue: a brief history of ultramarine", Paris Review—Daily.
Plesters, J. (1993), "Ultramarine Blue, Natural and Artificial", in Artists' Pigments. A Handbook of Their History and Characteristics, Vol. 2: A. Roy (Ed.) Oxford University Press, p. 37–66
References
External links
Discussion of ultramarine in an article on blue pigments in early Sienese paintings from The Journal of the American Institute for Conservation
National Gallery essay on the altered appearance of ultramarine in the paintings of Vermeer
Ultramarine natural, ColourLex
Ultramarine artificial, ColourLex
Shades and tints and color harmonies of ultramarine, HTMLCSScolor.com
More shades and tints and color harmonies of ultramarine, HTMLCSScolor.com
An alternative ultramarine color (#5A7CC2) from Pantone, pantone.com
Quaternary colors
Aluminosilicates
Inorganic pigments
Zeolites
Sulfides
Shades of blue | Ultramarine | Chemistry | 3,537 |
841,860 | https://en.wikipedia.org/wiki/Oligomer | In chemistry and biochemistry, an oligomer is a molecule that consists of a few repeating units which could be derived, actually or conceptually, from smaller molecules, monomers. The name is composed of the Greek elements oligo-, "a few", and -mer, "parts". An adjective form is oligomeric.
The oligomer concept is contrasted to that of a polymer, which is usually understood to have a large number of units, possibly thousands or millions. However, there is no sharp distinction between these two concepts. One proposed criterion is whether the molecule's properties vary significantly with the removal of one or a few of the units.
An oligomer with a specific number of units is referred to by the Greek prefix denoting that number, with the ending -mer: thus dimer, trimer, tetramer, pentamer, and hexamer refer to molecules with two, three, four, five, and six units, respectively. The units of an oligomer may be arranged in a linear chain (as in melam, a dimer of melamine); a closed ring (as in 1,3,5-trioxane, a cyclic trimer of formaldehyde); or a more complex structure (as in tellurium tetrabromide, a tetramer of TeBr4 with a cube-like core). If the units are identical, one has a homo-oligomer; otherwise one may use hetero-oligomer. An example of a homo-oligomeric protein is collagen, which is composed of three identical protein chains.
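The Greek-prefix naming rule above can be sketched as a small lookup; the `oligomer_name` helper and its fallback "n-mer" form are illustrative choices, not standard nomenclature machinery.

```python
# Greek-derived prefixes for small unit counts, as listed above.
PREFIXES = {2: "di", 3: "tri", 4: "tetra", 5: "penta", 6: "hexa"}

def oligomer_name(n: int) -> str:
    """Return the conventional name for an n-unit oligomer, e.g. 'dimer' for n=2."""
    prefix = PREFIXES.get(n)
    # Fall back to a generic "n-mer" form when no Greek prefix is listed.
    return f"{prefix}mer" if prefix else f"{n}-mer"

print(oligomer_name(3))  # -> trimer
```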
Some biologically important oligomers are macromolecules like proteins or nucleic acids; for instance, hemoglobin is a protein tetramer. An oligomer of amino acids is called an oligopeptide or just a peptide. An oligosaccharide is an oligomer of monosaccharides (simple sugars). An oligonucleotide is a short single-stranded fragment of nucleic acid such as DNA or RNA, or similar fragments of analogs of nucleic acids such as peptide nucleic acid or Morpholinos.
The units of an oligomer may be connected by covalent bonds, which may result from bond rearrangement or condensation reactions, or by weaker forces such as hydrogen bonds.
The term multimer is used in biochemistry for oligomers of proteins that are not covalently bound. The major capsid protein VP1 that comprises the shell of polyomaviruses is a self-assembling multimer of 72 pentamers held together by local electric charges.
Many oils are oligomeric, such as liquid paraffin. Plasticizers are oligomeric esters widely used to soften thermoplastics such as PVC. They may be made from monomers by linking them together, or by separation from the higher fractions of crude oil. Polybutene is an oligomeric oil used to make putty.
Oligomerization is a chemical process that converts monomers to macromolecular complexes through a finite degree of polymerization. Telomerization is an oligomerization carried out under conditions that result in chain transfer, limiting the size of the oligomers. (This concept is not to be confused with the formation of a telomere, a region of highly repetitive DNA at the end of a chromosome.)
Green oil
In the oil and gas industry, green oil refers to oligomers formed in all C2, C3, and C4 hydrogenation reactors of ethylene plants and other petrochemical production facilities; it is a mixture of C4 to C20 unsaturated and reactive components with about 90% aliphatic dienes and 10% of alkanes plus alkenes. Different heterogeneous and homogeneous catalysts are operative in producing green oils via the oligomerization of alkenes.
See also
GPCR oligomer
Oligomery (botany)
Protein oligomer
References
External links
Polymer chemistry | Oligomer | Chemistry,Materials_science,Engineering | 849 |
2,670,529 | https://en.wikipedia.org/wiki/Pi%20Sculptoris | π Sculptoris, Latinized as Pi Sculptoris, is a candidate astrometric binary star system in the southern constellation Sculptor, positioned near the eastern constellation border with Fornax. It has an orange hue and is dimly visible to the naked eye with an apparent visual magnitude of 5.25. Based upon parallax measurements, the system is located at a distance of 66 light years from the Sun, and is drifting further away with a radial velocity of +14 km/s.
The visible component is an aging giant/bright giant star with a stellar classification of K1II/III. It is a red clump giant, which indicates it is on the horizontal branch and is generating energy through core helium fusion. The star has 1.5 times the mass of the Sun and 9.3 times the Sun's radius. It is radiating 41 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,800 K.
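The quoted luminosity is consistent with the Stefan–Boltzmann law, which in solar units reads L/Lsun = (R/Rsun)² (T/Tsun)⁴. A quick check, assuming the nominal solar effective temperature of 5772 K (an assumption not stated in the text):

```python
# Consistency check of the quoted luminosity via the Stefan-Boltzmann law,
# in solar units: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4.
T_SUN = 5772.0    # K, nominal solar effective temperature (assumed)

radius = 9.3      # stellar radius in solar radii (from the text)
t_eff = 4800.0    # effective temperature in K (from the text)

luminosity = radius**2 * (t_eff / T_SUN)**4
print(f"{luminosity:.0f} Lsun")  # roughly 41 Lsun, matching the quoted value
```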
References
K-type bright giants
K-type giants
Horizontal-branch stars
Astrometric binaries
Sculptor (constellation)
Sculptoris, Pi
CD-32 666
010537
007955
0497 | Pi Sculptoris | Astronomy | 238 |
53,573,195 | https://en.wikipedia.org/wiki/Northern%20Marmara%20and%20De%C4%9Firmenk%C3%B6y%20%28Silivri%29%20Depleted%20Gas%20Reservoir | Northern Marmara and Değirmenköy (Silivri) Depleted Gas Reservoir are underground natural gas storage facilities inside depleted gas fields in Istanbul Province, northwestern Turkey. Together, they constitute the country's first underground natural gas storage facility.
One of the storage facilities is situated in a depleted gas field under the northern Marmara Sea, and the other is in neighboring Değirmenköy, a town in the Silivri district of Istanbul Province. Both sites were suitable due to their proximity to Istanbul and to the gas pipeline of BOTAŞ.
Northern Marmara Gas Field
The Northern Marmara Gas Field was discovered in 1988 in an area west of Silivri and far off the coast at a depth of . To determine the size of the natural gas reserve, which is the first undersea natural gas reserve in Turkey, three offshore boreholes were drilled in 1995 and two more in 1996. Natural gas production started in September 1997 at the five gas wells. Gas was pumped from an offshore platform by a -long undersea pipeline to the plant at the coast for processing. Between 2003 and 2004, six directional wells were drilled, which had vertical depths of and horizontal deviation of .
Değirmenköy Natural Gas Field
Değirmenköy Natural Gas Field is located west of Silivri. The field was discovered in 1994, and the production started in 1995 from nine wells, seven of which were directional. The gas processing facility was built by a consortium of German Lurgi AG and Turkish Fernas Construction Ltd.
Depleted gas reservoirs
The storage facilities of Northern Marmara and Değirmenköy were projected by the Turkish Petroleum Corporation (TPAO) in 1996. The depleted gas reservoirs went into service in July 2007. The Northern Marmara Reservoir is connected to the main processing plant of BOTAŞ by a -long pipeline and the Değirmenköy Reservoir by a -long pipeline.
The storage capacity of the Northern Marmara Reservoir is and of the Değirmenköy Reservoir is . While the maximum daily gas injection capacity is , the maximum withdrawal capacity per day is .
Currently, the Northern Marmara and Değirmenköy (Silivri) Depleted Gas Reservoir is the only underground natural gas storage facility in Turkey. It is operated by the TPAO.
Capacity expansion
The entire natural gas storage project is planned in three phases. The second phase involves the capacity expansion for the Değirmenköy facility, and the third phase for the Northern Marmara facility. The second-phase expansion project, which is scheduled to be completed in 2020, provides for increasing the daily injection capacity to and the maximum daily withdrawal capacity to . It is planned that the total storage capacity will be , the daily injection capacity and the daily withdrawal capacity after completion of the third phase.
See also
Lake Tuz Natural Gas Storage
Marmara Ereğlisi LNG Storage Facility
Egegaz Aliağa LNG Storage Facility
Botaş Dörtyol LNG Storage Facility
References
Natural gas storage
Energy infrastructure in Turkey
Natural gas in Turkey
2007 establishments in Turkey
Energy infrastructure completed in 2007
Buildings and structures in Istanbul Province
Silivri
Botaş
21st-century architecture in Turkey | Northern Marmara and Değirmenköy (Silivri) Depleted Gas Reservoir | Chemistry | 645 |
48,651,935 | https://en.wikipedia.org/wiki/GJ%203470%20b | GJ 3470 b (occasionally Gliese 3470 b, formally named Phailinsiam) is an exoplanet orbiting the star GJ 3470, located in the constellation Cancer. With a mass of just under 14 Earth-masses, a radius approximately 4.3 times that of Earth, and a high equilibrium temperature of , it is a hot Neptune.
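Those mass and radius figures imply a low bulk density, as expected for a gas-rich hot Neptune. A rough sketch, assuming Earth's mean density of about 5.51 g/cm³ (a value not given in the text):

```python
# Rough bulk density from the quoted mass and radius, scaled from Earth.
EARTH_DENSITY = 5.51  # g/cm^3, Earth's mean density (assumed)

mass = 14.0    # Earth masses (from the text: "just under 14")
radius = 4.3   # Earth radii (from the text)

density = EARTH_DENSITY * mass / radius**3
print(f"{density:.2f} g/cm^3")  # close to 1 g/cm^3, far below rocky-planet densities
```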
The orbit of GJ 3470 b is strongly inclined to the equatorial plane of the parent star, with misalignment equal to 97°.
Nomenclature
In August 2022, this planet and its host star were included among 20 systems to be named by the third NameExoWorlds project. The approved names, proposed by a team from Thailand, were announced in June 2023. GJ 3470 b is named Phailinsiam and its host star is named Kaewkosin, after names of precious stones in the Thai language.
Atmosphere
The atmosphere of Phailinsiam is one of the best spectroscopically characterized among all exoplanets.
The exoplanet's atmosphere was first observed by researchers Akihiko Fukui, Norio Narita and Kenji Kuroda at the University of Tokyo in 2013, and afterwards, Fukui commented, "Suppose the atmosphere consists of hydrogen and helium, the mass of the atmosphere would be 5–20% of the total mass of the planet. Comparing that to the fact that the mass of Earth's atmosphere is about one ten-thousandth of a percent (0.0001%) of the total mass of the Earth, this planet has a considerably thick atmosphere." In 2013, by means of Large Binocular Telescope observations, with the LBC Blue and Red cameras, a team reported the detection of Rayleigh scattering in the atmosphere of this planet. In 2015 a team using the Las Cumbres Observatory Global Telescope (LCOGT) network confirmed this finding. In the Las Cumbres researchers' paper published in The Astrophysical Journal, they conclude that the most plausible explanation for the scattering effect to be an atmosphere made predominantly of hydrogen and helium, causing the exoplanet to be veiled by dense clouds and hazes. It is thought that the planet would appear blue to the human eye due to this scattering.
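The predicted blue appearance follows from the strong wavelength dependence of Rayleigh scattering, whose intensity scales as 1/λ⁴. A quick sketch comparing blue and red light; the specific wavelengths are illustrative choices, not values from the studies above:

```python
# Rayleigh scattering intensity scales as 1 / wavelength**4, so short (blue)
# wavelengths scatter far more strongly than long (red) ones.
blue_nm = 450.0  # illustrative blue wavelength, nm
red_nm = 650.0   # illustrative red wavelength, nm

ratio = (red_nm / blue_nm) ** 4
print(f"blue scatters ~{ratio:.1f}x more strongly than red")
```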
In 2017–2019, the primary hydrogen atmosphere, with overall low metallicity, depleted methane, and traces of water, was characterized. It is likely filling the entire Roche lobe of the planet. In 2019 and 2020, a metastable helium outflow was detected in the atmosphere of Phailinsiam, indicating the atmosphere is currently escaping at a rate of 30,000–100,000 tons per second, or 0.16–0.53 Earth masses per billion years.
In 2024, a team of astronomers led by Thomas Beatty discovered a haze of sulfur dioxide in the atmosphere of the exoplanet indicating active chemical reactions in the atmosphere, likely triggered by radiation from its nearby star.
Gallery
See also
KELT-9b
GJ 3470
Kepler-51
References
Exoplanets discovered in 2012
Exoplanets detected by radial velocity
6
Transiting exoplanets
Cancer (constellation)
Exoplanets with proper names | GJ 3470 b | Astronomy | 656 |
3,106,346 | https://en.wikipedia.org/wiki/Levmetamfetamine | Levmetamfetamine, also known as l-desoxyephedrine or levomethamphetamine, and commonly sold under the brand name Vicks VapoInhaler among others, is an optical isomer of methamphetamine primarily used as a topical nasal decongestant. It is used to treat nasal congestion from allergies and the common cold. It was first used medically as a decongestant beginning in 1958 and has been used for such purposes, primarily in the United States, since then.
Medical uses
Levmetamfetamine is used to treat nasal congestion related to the common cold and allergic rhinitis. It is available in the form of an inhaler containing 50mg total per inhaler and delivering between 0.04 and 0.15mg of the drug per inhalation. Inhalers with a total of 113mg levmetamfetamine were previously marketed in the United States, but the total amount was eventually reduced to 50mg.
Side effects
When taken in excess, the nasal decongestant levmetamfetamine has potential side effects similar to those of other decongestants.
Pharmacology
Pharmacodynamics
Levmetamfetamine acts as a selective norepinephrine releasing agent. The potencies of levmetamfetamine, levoamphetamine, dextromethamphetamine, and dextroamphetamine in terms of norepinephrine release in vitro and in vivo in rats are all similar.
Conversely, whereas dextromethamphetamine and dextroamphetamine are relatively balanced releasers of dopamine and norepinephrine in vitro, levmetamfetamine is about 15- to 20-fold less potent in inducing dopamine release relative to norepinephrine release. Moreover, whereas levoamphetamine is about 3- to 5-fold less potent in terms of dopamine release than dextroamphetamine in vivo, levmetamfetamine is dramatically less potent than dextromethamphetamine and substantially less potent than levoamphetamine in this regard.
In accordance with the findings of catecholamine release studies, levmetamfetamine is 2- to 10-fold or more less potent than dextromethamphetamine in terms of psychostimulant-like effects in rodents. For comparison, levoamphetamine is only 1- to 4-fold less potent than dextroamphetamine in its stimulating and reinforcing effects in monkeys and humans.
The effects of levmetamfetamine are qualitatively distinct relative to those of racemic methamphetamine and dextromethamphetamine and it does not possess the same potential for euphoria or addiction that these drugs possess. In clinical studies, levmetamfetamine at oral doses of 1 to 10mg has been found not to affect subjective drug responses, heart rate, blood pressure, core temperature, electrocardiography, respiration rate, oxygen saturation, or other clinical parameters. As such, doses of levmetamfetamine of less than or equal to 10mg have no significant physiological or subjective effects. However, higher doses of levmetamfetamine, for instance 0.25 to 0.5mg/kg (mean doses of ~18–37mg) intravenously, have been reported to produce significant pharmacological effects, including increased heart rate and blood pressure, increased respiration rate, and subjective effects like intoxication and drug liking. On the other hand, in contrast to dextromethamphetamine, levmetamfetamine also produces subjective "bad" or aversive drug effects. Among the physiological effects of levmetamfetamine is vasoconstriction, which makes it useful for nasal decongestion.
For comparison to levmetamfetamine, 5 to 60mg oral doses of the related drug levoamphetamine have been used clinically and have been reported to produce significant pharmacological effects, for instance on wakefulness and mood.
In addition to its norepinephrine-releasing activity, levmetamfetamine is also an agonist of the trace amine-associated receptor 1 (TAAR1). Levmetamfetamine has also been found to act as a catecholaminergic activity enhancer (CAE), notably at much lower concentrations than its catecholamine releasing activity. It is 1- to 10-fold less potent than selegiline but is 3- to 5-fold more potent than dextromethamphetamine in this action. The CAE effects of such agents may be mediated by TAAR1 agonism.
Pharmacokinetics
Absorption
The bioavailability of levmetamfetamine is approximately 100%. The peak levels of levmetamfetamine range from 3.3 to 31.4ng/mL with single oral doses of 1 to 10mg and from 65.4 to 125.9ng/mL with single intravenous doses of 0.25 to 0.5mg/kg. The area-under-the-curve (AUC) levels of levmetamfetamine range from 73.0 to 694.7ng⋅h/mL with single oral doses of 1 to 10mg and from 1,190.7 to 2,368.1ng⋅h/mL with single intravenous doses of 0.25 to 0.5mg/kg.
Distribution
The volume of distribution of levmetamfetamine is 288.5 to 315.5L or 4.15 to 4.17L/kg.
Metabolism
The pharmacokinetics of levmetamfetamine generated as a metabolite from selegiline have been found to be significantly different in CYP2D6 poor metabolizers versus extensive metabolizers. Area-under-the-curve (AUC) levels of levmetamfetamine were 46% higher and its elimination half-life was 33% longer in CYP2D6 poor metabolizers compared to extensive metabolizers. These findings suggest that CYP2D6 may be significantly involved in the metabolism of levmetamfetamine.
Levmetamfetamine is metabolized into levoamphetamine in small amounts.
Elimination
Levmetamfetamine is excreted in urine 40.8 to 49.0% as unchanged levmetamfetamine and 2.1 to 3.3% as levoamphetamine.
The mean elimination half-life of levmetamfetamine ranges between 10.2 and 15.0hours. For comparison, the elimination half-life of dextromethamphetamine was around 10.2 to 10.7hours in the same studies. The clearance of levmetamfetamine is 15.5 to 19.1L/h or 0.221L/h⋅kg.
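The quoted half-life, clearance, and volume-of-distribution figures (the latter from the Distribution subsection) are mutually consistent with the standard one-compartment relation t½ = ln 2 · Vd / CL. A sketch using the endpoint values, under the simplifying one-compartment assumption:

```python
import math

# One-compartment pharmacokinetics: t_half = ln(2) * Vd / CL.
# Endpoint values quoted in the text.
VD_RANGE = (288.5, 315.5)   # volume of distribution, litres
CL_RANGE = (15.5, 19.1)     # clearance, litres per hour

t_short = math.log(2) * VD_RANGE[0] / CL_RANGE[1]  # smallest Vd, fastest clearance
t_long = math.log(2) * VD_RANGE[1] / CL_RANGE[0]   # largest Vd, slowest clearance

print(f"{t_short:.1f}-{t_long:.1f} h")  # lies within the quoted 10.2-15.0 h range
```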
With selegiline at an oral dose of 10mg, levmetamfetamine and levoamphetamine are eliminated in urine and recovery of levmetamfetamine is 20 to 60% (or about 2–6mg) while that of levoamphetamine is 9 to 30% (or about 1–3mg).
Chemistry
Levmetamfetamine, also known as L-α,N-dimethyl-β-phenylethylamine or as L-N-methylamphetamine, is a substituted phenethylamine and amphetamine. It is the levorotatory enantiomer of methamphetamine. Racemic methamphetamine contains two optical isomers in equal amounts, dextromethamphetamine (the dextrorotatory enantiomer) and levmetamfetamine.
Detection in body fluids
Levmetamfetamine can register on urine drug tests as either methamphetamine, amphetamine, or both, depending on the subject's metabolism and dosage. Over time, levmetamfetamine is partly metabolized into levoamphetamine.
History
Methamphetamine, a racemic mixture of dextromethamphetamine and levmetamfetamine, was first discovered and synthesized in 1919. Methamphetamine was first introduced for medical use in 1938 in oral form under the brand name Pervitin in Germany. Over-the-counter nasal decongestant inhalers containing enantiopure levmetamfetamine, originally labeled with the chemical name l-desoxyephedrine, were first introduced in 1958 under the brand name Vicks Inhaler. By 1995, the brand name was changed to Vicks Vapor Inhaler. In 1998, the United States Food and Drug Administration (FDA) required that the chemical name on the labeling be changed from l-desoxyephedrine to levmetamfetamine.
Society and culture
Legal status
Levomethamphetamine is a controlled substance in the Philippines.
Recreational use
As of 2006, there were no studies demonstrating "drug liking" scores of oral levmetamfetamine that are similar to racemic methamphetamine or dextromethamphetamine in either recreational users or medicinal users. In any case, misuse of levmetamfetamine at high doses has been reported.
In recent years, tighter controls in Mexico on certain methamphetamine precursors like ephedrine and pseudoephedrine have led to a greater proportion of levmetamfetamine relative to dextromethamphetamine within batches of racemic methamphetamine produced by Mexican drug cartels.
Manufacturing
The manufacturing of levmetamfetamine products for therapeutic use is done according to government regulations and pharmacopeia monographs. The most recent change in Food and Drug Administration regulations for levmetamfetamine inhalers was in 1994, with the adoption of a final monograph.
Notes
References
Enantiopure drugs
Methamphetamine
Methamphetamines
Norepinephrine-dopamine releasing agents
Selegiline
Substituted amphetamines
Sympathomimetics
TAAR1 agonists
VMAT inhibitors | Levmetamfetamine | Chemistry | 2,176 |
20,911 | https://en.wikipedia.org/wiki/Multiverse | The multiverse is the hypothetical set of all universes. Together, these universes are presumed to comprise everything that exists: the entirety of space, time, matter, energy, information, and the physical laws and constants that describe them. The different universes within the multiverse are called "parallel universes", "flat universes", "other universes", "alternate universes", "multiple universes", "plane universes", "parent and child universes", "many universes", or "many worlds". One common assumption is that the multiverse is a "patchwork quilt of separate universes all bound by the same laws of physics."
The concept of multiple universes, or a multiverse, has been discussed throughout history, including Greek philosophy. It has evolved and has been debated in various fields, including cosmology, physics, and philosophy. Some physicists argue that the multiverse is a philosophical notion rather than a scientific hypothesis, as it cannot be empirically falsified. In recent years, there have been proponents and skeptics of multiverse theories within the physics community. Although some scientists have analyzed data in search of evidence for other universes, no statistically significant evidence has been found. Critics argue that the multiverse concept lacks testability and falsifiability, which are essential for scientific inquiry, and that it raises unresolved metaphysical issues.
Max Tegmark and Brian Greene have proposed different classification schemes for multiverses and universes. Tegmark's four-level classification consists of Level I: an extension of our universe, Level II: universes with different physical constants, Level III: many-worlds interpretation of quantum mechanics, and Level IV: ultimate ensemble. Brian Greene's nine types of multiverses include quilted, inflationary, brane, cyclic, landscape, quantum, holographic, simulated, and ultimate. The ideas explore various dimensions of space, physical laws, and mathematical structures to explain the existence and interactions of multiple universes. Some other multiverse concepts include twin-world models, cyclic theories, M-theory, and black-hole cosmology.
The anthropic principle suggests that the existence of a multitude of universes, each with different physical laws, could explain the asserted appearance of fine-tuning of our own universe for conscious life. The weak anthropic principle posits that we exist in one of the few universes that support life. Debates around Occam's razor and the simplicity of the multiverse versus a single universe arise, with proponents like Max Tegmark arguing that the multiverse is simpler and more elegant. The many-worlds interpretation of quantum mechanics and modal realism, the belief that all possible worlds exist and are as real as our world, are also subjects of debate in the context of the anthropic principle.
History of the concept
According to some, the idea of infinite worlds was first suggested by the pre-Socratic Greek philosopher Anaximander in the sixth century BCE. However, there is debate as to whether he believed in multiple worlds, and if he did, whether those worlds were co-existent or successive.
The first to whom we can definitively attribute the concept of innumerable worlds are the Ancient Greek Atomists, beginning with Leucippus and Democritus in the 5th century BCE, followed by Epicurus (341–270 BCE) and Lucretius (1st century BCE). In the third century BCE, the philosopher Chrysippus suggested that the world eternally expired and regenerated, effectively suggesting the existence of multiple universes across time. The concept of multiple universes became more defined in the Middle Ages.
The American philosopher and psychologist William James used the term "multiverse" in 1895, but in a different context.
The concept first appeared in the modern scientific context in the course of the debate between Boltzmann and Zermelo in 1895.
In Dublin in 1952, Erwin Schrödinger gave a lecture in which he jocularly warned his audience that what he was about to say might "seem lunatic". He said that when his equations seemed to describe several different histories, these were "not alternatives, but all really happen simultaneously". In quantum mechanics, such a coexistence of states is called a "superposition".
Search for evidence
In the 1990s, after works of fiction about the concept gained popularity, scientific discussions of the multiverse and journal articles about it gained prominence.
Around 2010, scientists such as Stephen M. Feeney analyzed Wilkinson Microwave Anisotropy Probe (WMAP) data and claimed to find evidence suggesting that this universe collided with other (parallel) universes in the distant past. However, a more thorough analysis of data from the WMAP and from the Planck satellite, which has a resolution three times higher than WMAP, did not reveal any statistically significant evidence of such a bubble universe collision. In addition, there was no evidence of any gravitational pull of other universes on ours.
In 2015, astrophysicist Ranga-Ram Chary reported possible evidence of alternate or parallel universes from observations of the era immediately after the Big Bang, although the claim remains a matter of debate among physicists. After analyzing the cosmic radiation spectrum, Chary found a signal 4,500 times brighter than expected, based on the number of protons and electrons scientists believe existed in the very early universe. This signal—an emission line that arose from the formation of atoms during the era of recombination—is more consistent with a universe whose ratio of matter particles to photons is about 65 times greater than our own. There is a 30% chance that the signal is mere noise; however, it is also possible that it exists because a parallel universe dumped some of its matter particles into our universe. If additional protons and electrons had been added to our universe during recombination, more atoms would have formed, more photons would have been emitted during their formation, and the signature line arising from all of these emissions would be greatly enhanced. Chary himself remained skeptical, noting that the signature he isolated may instead be a consequence of incoming light from distant galaxies, or even of clouds of dust surrounding our own galaxy.
Proponents and skeptics
Modern proponents of one or more of the multiverse hypotheses include Lee Smolin, Don Page, Brian Greene, Max Tegmark, Alan Guth, Andrei Linde, Michio Kaku, David Deutsch, Leonard Susskind, Alexander Vilenkin, Yasunori Nomura, Raj Pathria, Laura Mersini-Houghton, Neil deGrasse Tyson, Sean Carroll and Stephen Hawking.
Scientists who are generally skeptical of the concept of a multiverse or popular multiverse hypotheses include Sabine Hossenfelder, David Gross, Paul Steinhardt, Anna Ijjas, Abraham Loeb, David Spergel, Neil Turok, Viatcheslav Mukhanov, Michael S. Turner, Roger Penrose, George Ellis, Joe Silk, Carlo Rovelli, Adam Frank, Marcelo Gleiser, Jim Baggott and Paul Davies.
Arguments against multiverse hypotheses
In his 2003 New York Times opinion piece, "A Brief History of the Multiverse", author and cosmologist Paul Davies offered a variety of arguments that multiverse hypotheses are non-scientific.
George Ellis, writing in August 2011, provided a criticism of the multiverse, pointing out that it is not a traditional scientific theory. He accepts that the multiverse is thought to exist far beyond the cosmological horizon, and emphasizes that it is theorized to be so far away that any evidence is unlikely ever to be found. Ellis also notes that while some theorists do not consider the lack of empirical testability and falsifiability a major concern, he is opposed to that line of thinking.
Ellis says that scientists have proposed the idea of the multiverse as a way of explaining the nature of existence, but that it ultimately leaves those questions unresolved, because it is a metaphysical issue that cannot be settled by empirical science. He argues that observational testing is at the core of science and should not be abandoned.
Philosopher Philip Goff argues that inferring a multiverse to explain the apparent fine-tuning of the universe is an example of the inverse gambler's fallacy.
Stoeger, Ellis, and Kircher note that in a true multiverse theory, "the universes are then completely disjoint and nothing that happens in any one of them is causally linked to what happens in any other one. This lack of any causal connection in such multiverses really places them beyond any scientific support".
In May 2020, astrophysicist Ethan Siegel expressed criticism in a Forbes blog post that parallel universes would have to remain a science fiction dream for the time being, based on the scientific evidence available to us.
Scientific American contributor John Horgan also argues against the idea of a multiverse, claiming that multiverse theories are "bad for science."
Types
Max Tegmark and Brian Greene have devised classification schemes for the various theoretical types of multiverses and universes that they might comprise.
Max Tegmark's four levels
Cosmologist Max Tegmark has provided a taxonomy of universes beyond the familiar observable universe. The four levels of Tegmark's classification are arranged such that subsequent levels can be understood to encompass and expand upon previous levels. They are briefly described below.
Level I: An extension of our universe
A prediction of cosmic inflation is the existence of an infinite ergodic universe, which, being infinite, must contain Hubble volumes realizing all initial conditions.
Accordingly, an infinite universe will contain an infinite number of Hubble volumes, all having the same physical laws and physical constants. In regard to configurations such as the distribution of matter, almost all will differ from our Hubble volume. However, because there are infinitely many, far beyond the cosmological horizon, there will eventually be Hubble volumes with similar, and even identical, configurations. Tegmark estimates that an identical volume to ours should be about 10^(10^115) meters away from us.
Given infinite space, there would be an infinite number of Hubble volumes identical to ours in the universe. This follows directly from the cosmological principle, wherein it is assumed that our Hubble volume is not special or unique.
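Tegmark's distance estimate is at heart a pigeonhole count. A hedged sketch (the configuration bound N and the neglect of all prefactors are assumptions of this sketch, not statements from the source):

```latex
% If each Hubble volume can realize at most N distinguishable
% configurations, then any N+1 volumes must contain a repeat, so in
% three dimensions the nearest identical volume lies within roughly
% N^{1/3} Hubble radii:
d \;\sim\; N^{1/3} R_H,
\qquad N \sim 10^{10^{115}}
\;\Rightarrow\;
d \sim 10^{(10^{115})/3}\, R_H \approx 10^{10^{115}}\ \mathrm{m}
% To leading order in the double exponent, neither the cube root nor
% the Hubble radius (R_H ~ 10^26 m) perceptibly changes the result.
```

The striking feature is that the answer is insensitive to every detail except the double exponent: multiplying or dividing by any astronomically large prefactor leaves 10^(10^115) essentially unchanged.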
Level II: Universes with different physical constants
In the eternal inflation theory, which is a variant of the cosmic inflation theory, the multiverse or space as a whole is stretching and will continue doing so forever, but some regions of space stop stretching and form distinct bubbles (like gas pockets in a loaf of rising bread). Such bubbles are embryonic level I multiverses.
Different bubbles may experience different spontaneous symmetry breaking, which results in different properties, such as different physical constants.
Level II also includes John Archibald Wheeler's oscillatory universe theory and Lee Smolin's fecund universes theory.
Level III: Many-worlds interpretation of quantum mechanics
Hugh Everett III's many-worlds interpretation (MWI) is one of several mainstream interpretations of quantum mechanics.
In brief, one aspect of quantum mechanics is that certain observations cannot be predicted absolutely. Instead, there is a range of possible observations, each with a different probability. According to the MWI, each of these possible observations corresponds to a different "world" within the Universal wavefunction, with each world as real as ours. Suppose a six-sided die is thrown and that the result of the throw corresponds to a quantum-mechanical observable. All six possible ways the die can fall correspond to six different worlds. In the case of the Schrödinger's cat thought experiment, both outcomes would be "real" in at least one "world".
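The branching bookkeeping described above can be sketched as a toy program (illustrative only; the equal branch weights for an unbiased six-outcome measurement are an assumption of the toy, and in actual MWI branch weights come from the Born rule):

```python
from fractions import Fraction

def branch(worlds, outcomes):
    """Split every world into one child per outcome, dividing its
    weight equally among them -- toy bookkeeping for an unbiased
    six-outcome measurement, illustrative only."""
    w = Fraction(1, len(outcomes))
    return [(history + (o,), weight * w)
            for history, weight in worlds
            for o in outcomes]

worlds = [((), Fraction(1))]          # one pre-measurement world
for _ in range(2):                    # two successive "die throws"
    worlds = branch(worlds, range(1, 7))

print(len(worlds))                          # 36 branches, one per history
print(sum(weight for _, weight in worlds))  # total weight remains 1
```

Every possible two-throw history appears as exactly one branch, and no branch is privileged: the probabilities only describe which branch a given observer finds themselves on.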
Tegmark argues that a Level III multiverse does not contain more possibilities in the Hubble volume than a Level I or Level II multiverse. In effect, all the different worlds created by "splits" in a Level III multiverse with the same physical constants can be found in some Hubble volume in a Level I multiverse. Tegmark writes that, "The only difference between Level I and Level III is where your doppelgängers reside. In Level I they live elsewhere in good old three-dimensional space. In Level III they live on another quantum branch in infinite-dimensional Hilbert space."
Similarly, all Level II bubble universes with different physical constants can, in effect, be found as "worlds" created by "splits" at the moment of spontaneous symmetry breaking in a Level III multiverse. According to Yasunori Nomura, Raphael Bousso, and Leonard Susskind, this is because global spacetime appearing in the (eternally) inflating multiverse is a redundant concept. This implies that the multiverses of Levels I, II, and III are, in fact, the same thing. This hypothesis is referred to as "Multiverse = Quantum Many Worlds". According to Yasunori Nomura, this quantum multiverse is static, and time is a simple illusion.
Another version of the many-worlds idea is H. Dieter Zeh's many-minds interpretation.
Level IV: Ultimate ensemble
The ultimate ensemble, or mathematical universe hypothesis, is Tegmark's own proposal.
This level considers equally real all universes that can be described by different mathematical structures.
Tegmark argues that the mathematical universe hypothesis "implies that any conceivable parallel universe theory can be described at Level IV" and "subsumes all other ensembles, therefore brings closure to the hierarchy of multiverses, and there cannot be, say, a Level V."
Jürgen Schmidhuber, however, says that the set of mathematical structures is not even well-defined, and that he admits only universe representations describable by constructive mathematics—that is, computer programs.
Schmidhuber explicitly includes universe representations describable by non-halting programs whose output bits converge after a finite time, although the convergence time itself may not be predictable by a halting program, due to the undecidability of the halting problem. He also explicitly discusses the more restricted ensemble of quickly computable universes.
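The restriction to non-halting programs with convergent output can be illustrated with a toy sketch (the Collatz test is merely a convenient event with hard-to-predict timing; that every bit eventually settles is, for the Collatz map in general, only conjectural):

```python
def reaches_one(n, budget):
    """True if n's Collatz trajectory reaches 1 within `budget` steps."""
    for _ in range(budget):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return n == 1

def output_at_stage(stage, width):
    """Bit i of the output becomes 1 once i+1 has been observed to
    reach 1. Each bit flips at most once (0 -> 1), so the output
    converges, but WHEN a given bit settles is not predictable in
    advance -- the situation Schmidhuber describes."""
    return [1 if reaches_one(i + 1, stage) else 0 for i in range(width)]

# A conceptually non-halting run just raises `stage` forever; here we
# peek at three snapshots of the converging output:
print(output_at_stage(1, 8))    # [1, 1, 0, 0, 0, 0, 0, 0]
print(output_at_stage(5, 8))    # more bits have settled to 1
print(output_at_stage(200, 8))  # [1, 1, 1, 1, 1, 1, 1, 1]
```

Each snapshot agrees with the limit on more bits than the last, yet no halting program can, in general, announce when a given bit has reached its final value.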
Brian Greene's nine types
The American theoretical physicist and string theorist Brian Greene discussed nine types of multiverses:
Quilted
The quilted multiverse works only in an infinite universe. With an infinite amount of space, every possible event will occur an infinite number of times. However, the speed of light prevents us from being aware of these other identical areas.
Inflationary
The inflationary multiverse is composed of various pockets in which inflation fields collapse and form new universes.
Brane
The brane multiverse version postulates that our entire universe exists on a membrane (brane) which floats in a higher dimension or "bulk". In this bulk, there are other membranes with their own universes. These universes can interact with one another, and when they collide, the violence and energy produced are more than enough to give rise to a Big Bang. The branes float or drift near each other in the bulk, and every few trillion years, attracted by gravity or some other force we do not understand, they collide and bang into each other. This repeated contact gives rise to multiple or "cyclic" Big Bangs. This particular hypothesis falls under the string theory umbrella as it requires extra spatial dimensions.
Cyclic
The cyclic multiverse has multiple branes that have collided, causing Big Bangs. The universes bounce back and pass through time until they are pulled back together and again collide, destroying the old contents and creating them anew.
Landscape
The landscape multiverse relies on string theory's Calabi–Yau spaces. Quantum fluctuations drop the shapes to a lower energy level, creating a pocket with a set of laws different from that of the surrounding space.
Quantum
The quantum multiverse creates a new universe when a diversion in events occurs, as in the real-worlds variant of the many-worlds interpretation of quantum mechanics.
Holographic
The holographic multiverse is derived from the theory that the surface area of a space can encode the contents of the volume of the region.
Simulated
The simulated multiverse exists on complex computer systems that simulate entire universes. A related hypothesis, put forward as a possibility by astronomer Avi Loeb, is that universes may be creatable in the laboratories of advanced technological civilizations that possess a theory of everything. Other related hypotheses include brain-in-a-vat-type scenarios in which the perceived universe is either simulated in a low-resource way or not perceived directly by the virtual/simulated inhabitant species.
Ultimate
The ultimate multiverse contains every mathematically possible universe under different laws of physics.
Twin-world models
There are models of two related universes that e.g. attempt to explain the baryon asymmetry – why there was more matter than antimatter at the beginning – with a mirror anti-universe. One two-universe cosmological model could explain the Hubble constant (H0) tension via interactions between the two worlds. The "mirror world" would contain copies of all existing fundamental particles. Another twin/pair-world or "bi-world" cosmology is shown to theoretically be able to solve the cosmological constant (Λ) problem, closely related to dark energy: two interacting worlds with a large Λ each could result in a small shared effective Λ.
Cyclic theories
In several theories, there is a series of, in some cases infinite, self-sustaining cycles – typically a series of Big Crunches (or Big Bounces). However, the respective universes do not exist at once but are forming or following in a logical order or sequence, with key natural constituents potentially varying between universes (see § Anthropic principle).
M-theory
A multiverse of a somewhat different kind has been envisaged within string theory and its higher-dimensional extension, M-theory.
These theories require the presence of 10 or 11 spacetime dimensions respectively. The extra six or seven dimensions may either be compactified on a very small scale, or our universe may simply be localized on a dynamical (3+1)-dimensional object, a D3-brane. This opens up the possibility that there are other branes which could support other universes.
Black-hole cosmology
Black-hole cosmology is a cosmological model in which the observable universe is the interior of a black hole existing as one of possibly many universes inside a larger universe. This includes the theory of white holes, which are on the opposite side of space-time.
Anthropic principle
The concept of other universes has been proposed to explain how our own universe appears to be fine-tuned for conscious life as we experience it.
If there were a large (possibly infinite) number of universes, each with possibly different physical laws (or different fundamental physical constants), then some of these universes (even if very few) would have the combination of laws and fundamental parameters that are suitable for the development of matter, astronomical structures, elemental diversity, stars, and planets that can exist long enough for life to emerge and evolve.
The weak anthropic principle could then be applied to conclude that we (as conscious beings) would only exist in one of those few universes that happened to be finely tuned, permitting the existence of life with developed consciousness. Thus, while the probability might be extremely small that any particular universe would have the requisite conditions for life (as we understand life), those conditions do not require intelligent design as an explanation for the conditions in the Universe that promote our existence in it.
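The selection effect described above can be illustrated with a toy Monte Carlo sketch (the single "constant", its uniform distribution, and the narrow habitable window are all illustrative assumptions, not physics):

```python
import random

random.seed(0)  # deterministic toy run

def random_universe():
    """Draw one toy universe with a single dimensionless 'constant'.
    The habitable window below is arbitrarily narrow, standing in
    for apparent fine-tuning."""
    g = random.uniform(0.0, 1.0)
    return {"g": g, "habitable": 0.495 < g < 0.505}

ensemble = [random_universe() for _ in range(100_000)]
habitable = [u for u in ensemble if u["habitable"]]

# Unconditioned view: habitable universes are rare (about 1%).
print(len(habitable) / len(ensemble))

# Conditioned on observers existing, every observed universe is
# habitable -- fine-tuning as a selection effect, not design.
print(all(u["habitable"] for u in habitable))  # True
```

However improbable a habitable draw is, observers by construction only ever find themselves in one, which is the weak anthropic principle in miniature.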
An early form of this reasoning is evident in Arthur Schopenhauer's 1844 work "Von der Nichtigkeit und dem Leiden des Lebens" ("On the Vanity and Suffering of Life"), where he argues that our world must be the worst of all possible worlds, because if it were significantly worse in any respect it could not continue to exist.
Occam's razor
Proponents and critics disagree about how to apply Occam's razor. Critics argue that to postulate an almost infinite number of unobservable universes, just to explain our own universe, is contrary to Occam's razor. However, proponents argue that in terms of Kolmogorov complexity the proposed multiverse is simpler than a single idiosyncratic universe.
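The Kolmogorov-complexity point can be illustrated with a toy comparison (character counts of Python snippets are a crude stand-in for true description length, and the snippet names are invented for this sketch):

```python
import itertools

# A short, fixed-size program enumerates EVERY 64-bit "universe":
ensemble_desc = "itertools.product('01', repeat=64)"

# A typical (incompressible) single member has no description shorter
# than spelling out all 64 bits:
member_desc_len = 64 + 2   # the bits themselves, plus quotes

print(len(ensemble_desc), member_desc_len)   # 34 66
print(len(ensemble_desc) < member_desc_len)  # True

# The enumerator's length barely grows with n, while a typical
# member's grows linearly -- the sense in which "all universes" can
# be algorithmically simpler than one particular universe.
universes = list(itertools.product('01', repeat=4))  # small n for demo
print(len(universes))  # 16 distinct 4-bit universes
```

The asymmetry only appears for large n and for typical (random-looking) members; a highly regular single universe could, of course, have a short description of its own.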
Multiverse proponent Max Tegmark, for example, argues that an entire ensemble is often much simpler to describe than one of its members.
Possible worlds and real worlds
In any given set of possible universes – e.g. in terms of histories or variables of nature – not all may ever be realized, and some may be realized many times. For example, over infinite time there could, in some potential theories, be infinite universes, but only a small number of actually realized universes where humanity could exist, and only one where it ever does exist (with a unique history). It has been suggested that a universe that "contains life, in the form it has on Earth, is in a certain sense radically non-ergodic, in that the vast majority of possible organisms will never be realized". On the other hand, some scientists, theories and popular works conceive of a multiverse in which the universes are so similar that humanity exists in many equally real separate universes, but with varying histories.
There is a debate about whether the other worlds are real in the many-worlds interpretation (MWI) of quantum mechanics. In Quantum Darwinism one does not need to adopt a MWI in which all of the branches are equally real.
Modal realism
Possible worlds are a way of explaining probability and hypothetical statements. Some philosophers, such as David Lewis, posit that all possible worlds exist and that they are just as real as the world we live in. This position is known as modal realism.
See also
References
Footnotes
Citations
Further reading
Andrei Linde, The Self-Reproducing Inflationary Universe, Scientific American, November 1994 – Touches on multiverse concepts at the end of the article.
External links
Interview with Tufts cosmologist Alex Vilenkin on his book, "Many Worlds in One: The Search for Other Universes", on the podcast and public radio interview program ThoughtCast.
Multiverse – an episode of the series In Our Time with Melvyn Bragg, on BBC Radio 4.
Why There Might be Many More Universes Besides Our Own, by Phillip Ball, March 21, 2016, bbc.com.
Physical cosmology
Science fiction themes
Science fiction genres
Quantum mechanics
Astronomical hypotheses
Anthropic principle
Hypothetical astronomical objects
1890s neologisms | Multiverse | Physics,Astronomy | 4,680 |
42,025,962 | https://en.wikipedia.org/wiki/C21H29NO3 |
The molecular formula C21H29NO3 (molar mass: 343.46 g/mol, exact mass: 343.2147 u) may refer to:
CAR-226,086
CAR-301,060
25iP-NBOMe
25P-NBOMe
Molecular formulas | C21H29NO3 | Physics,Chemistry | 77 |
76,444,835 | https://en.wikipedia.org/wiki/Watcher%20Entertainment | Watcher Entertainment is an American digital media and entertainment company, founded by Steven Lim, Shane Madej, and Ryan Bergara. The channel features a variety of comedy, paranormal, gaming, cooking, and educational shows – typically hosted by Madej and Bergara. The Watcher main channel has over 400 million views and 2.8 million subscribers.
History
Buzzfeed and the creation of Watcher Entertainment (2019)
Madej, Bergara, and Lim met while working at the digital media company Buzzfeed. Madej and Bergara were co-hosts of the popular true crime and paranormal series Buzzfeed Unsolved, and Lim was the creator and co-host of the popular internet food series Worth It. Both shows generated a combined 2 billion views with 15 billion minutes watched, making them two of the most successful shows on Buzzfeed.
In 2019, Madej, Bergara, and Lim quit Buzzfeed as full-time employees, each staying on as a contracted employee to complete their respective shows. The trio attributed their departure to a desire to found a company with more "creative opportunities" and the ability to have "actual ownership of the content" they made.
The company is majority-owned by the trio. They received funding from Neuro, a caffeinated energy gum company; Boba Guys, a bubble-milk tea chain; and Steve Chen, a YouTube co-founder.
Watcher Entertainment took its name from the infamous true crime case of The Westfield Watcher, which Madej and Bergara had covered in a Buzzfeed Unsolved episode.
The trio began the company as co-CEOs; however, Bergara and Madej stepped down from the role in 2023 to focus on content creation.
Watcher Entertainment (2020–present)
Watcher Entertainment was launched in January 2020. The company debuted with seven series and a weekly interactive talk show: Homemade, Grocery Run, Weird Wonderful World, Puppet History, Tourist Trapped, Top 5 Beatdown, Spooky Small Talk, and Watcher Weekly. The channel reached over 300,000 subscribers within the first month of launching. The company was signed by talent agency CAA in the same year.
Puppet History, a comedy educational game show, quickly became a success and gained a significant audience. The show, which stars Madej as a fluffy blue puppet, has spanned five seasons and led to the creation of a variety of merchandise.
The company premiered its first horror series in July 2020 with Are You Scared?.
Following the end of Buzzfeed Unsolved: Supernatural in 2021, the studio premiered its highly anticipated successor, Ghost Files, just months after. The show followed a similar format, with Bergara and Madej investigating reportedly haunted locations and attempting to find evidence of the paranormal. The show had significant success, with critics noting the improved production value and design from its predecessor.
In 2023, Bergara and Madej went on a tour across the United States to premiere episodes of the second season. The series was renewed for a third season, which they will be premiering with a United Kingdom tour in 2024.
That year, Watcher premiered a light-hearted successor to the graphic Buzzfeed Unsolved: True Crime, with Mystery Files. In this rendition, Bergara or Madej present unusual crime or supernatural mysteries with a collection of theoretical solutions. The show was met with great success by audiences and was quickly renewed for a second season.
Watcher launched a second channel, WatcherPodcasts, in October 2023. The channel features podcasts hosted by Lim, Bergara, and Madej.
On April 19, 2024, the company launched its Watcher streaming service. Going forward, some content would be released exclusively on the service and the company planned to transition away from YouTube. This announcement was met with overwhelmingly negative reactions from their fans, with many calling for the company to reverse the decision. Additionally, their YouTube channel lost over 50,000 subscribers in the day following the announcement. On April 22, 2024, the company issued an apology and changed their decision, stating that episodes would instead be released on the streaming service a month before their premiere on YouTube.
Channels and shows
Watcher
Current shows
Puppet History (2020–present)
A whimsical puppet host walks through history's wildest tales as two guests compete for the title of history wizard.
Making Watcher (2020–present)
What happens when 3 creators with no business experience decide to make their own company? A multi-series documentary on the journey of creating Watcher Entertainment.
Weird Wonderful World (2020–present)
Curious pals Madej and Bergara explore lesser-known destinations and the fascinating subcultures within them.
Too Many Spirits (2020–present)
Bergara and Madej read and rate audience-submitted ghost stories, while getting progressively more tipsy drinking cocktails prepared by Steven and Ricky Wang.
Top 5 Beatdown (2020–present)
Bergara and Madej compare asinine top 5 lists with a topical expert, inspiring surprisingly heated debate.
Are You Scared? (2020–2022, 2024–present)
Bergara reads the internet's scariest stories (some true, some false) to his pal Madej as they try to figure out if the story is experienced or imagined.
Ghost Files (2021–present)
Bergara and Madej investigate haunted locations to discover whether something paranormal really lays within.
Mystery Files (2023–present)
Bergara and Madej present unusual crime or supernatural mysteries with a collection of theoretical solutions.
Survival Mode (2023–present)
Bergara and Madej play a variety of horror games and give a spooky review.
Travel Season (2024–present)
Lim reunites with Worth It costars Andrew Ilnyckyj and Adam Bianchi in a new food review show.
Former shows
Grocery Run (2020)
Madej interviews a celeb on their typical grocery run, before returning to their home to help prepare their signature dish.
Homemade (2020)
Lim examines popular food by comparing an elevated restaurant experience vs. a home-cooked experience.
Spooky Small Talk (2020)
Bergara interviews celebs in a haunted house, exposing their fears and if they can manage it, a little about themselves too.
Social Distancing D&D (2020)
Socially Distance along with the motley gang of Watchers as they embark on a great quest of Dungeons and Dragons!
Tourist Trapped (2020)
Bergara and Madej battle for tour guide supremacy, highlighting the two sides of a city: tourist attractions and hidden gems.
Watcher Weekly (2020–2021)
Lim, Bergara, and Madej chat the week's content and answer questions, with the occasional musical guest!
Dish Granted (2021–2022)
A show where host and amateur home cook Lim attempts to create the most extravagant dishes for his friends.
Pretty Historic (2022)
Selorm and guests explore beauty and fashion trends from history, try them, and decide whether the trends should remain in the past or come to the present.
Worth a Shot (2022–2023)
Take a seat at a Master Mixologist's bar as pro Ricky Wang crafts the unbelievable into a digestible drink for his guests.
Watcher Podcast
Current shows
Pod Watcher (2023–present)
For Your Amusement (2023–present)
Awards and nominations
References
American YouTube groups
Digital media
2020 web series debuts
Companies based in Los Angeles
2020s YouTube series
American non-fiction web series
Documentary television series about crime in the United States
Documentary web series
Mystery web series
Works about the paranormal
YouTube channels launched in 2020 | Watcher Entertainment | Technology | 1,553 |
11,392,946 | https://en.wikipedia.org/wiki/Minto%20wheel | The Minto wheel is a heat engine named after Wally Minto. The engine consists of a set of sealed chambers arranged in a circle, with each chamber connected to the chamber opposite it. One chamber in each connected pair is filled with a liquid with a low boiling point (propane, boiling point −42 °C, and R-12, boiling point −29.8 °C, are listed in the Mother Earth News articles). Ideally, the working fluid also has a high vapor pressure and density.
Operation
As the lower chamber in each pair is heated, the liquid begins to vaporize, forcing the remaining liquid to travel to the upper chamber. This fluid transfer causes a weight imbalance, which causes the wheel to rotate.
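The rotation produced by that weight imbalance can be sketched with a toy rigid-wheel model (the chamber masses, angles, and radius below are invented for illustration, not taken from any actual Minto wheel):

```python
import math

def net_torque(chambers, radius_m, g=9.81):
    """Clockwise gravitational torque (N*m) about the axle for the
    fluid charge in each chamber, modeled as point masses on a rigid
    wheel. `chambers` is a list of (mass_kg, angle_rad) pairs, with
    the angle measured counterclockwise from the 3-o'clock position;
    a mass on the right-hand side (cos(angle) > 0) pulls the wheel
    clockwise. Toy sketch only."""
    return sum(m * g * radius_m * math.cos(a) for m, a in chambers)

# Uniform fluid distribution: the torques cancel (up to rounding).
angles = [k * math.pi / 4 for k in range(8)]
sym = [(0.5, a) for a in angles]
print(abs(net_torque(sym, 1.0)) < 1e-9)  # True

# Heat applied low on the right boils the fluid there and pushes
# liquid into the opposite chambers on the left, unbalancing the wheel:
unbal = [(0.8 if math.pi / 2 < a < 3 * math.pi / 2 else 0.2, a)
         for a in angles]
print(net_torque(unbal, 1.0))  # negative: net counterclockwise drive
```

The sign of the result is the whole story: as long as vaporization keeps more liquid on one side than the other, the wheel sees a steady net torque in one direction.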
Minto's pamphlet also suggests obtaining a pressure differential with a dissolved gas instead of a boiling gas. Soda water or propane dissolved in kerosene are suggested.
Characteristics
The Minto wheel operates on a small temperature gradient, and produces a large amount of torque, but at very low rotational speed. The speed of rotation depends on the surface area of the containers used, their volume, and the height of the wheel; the higher the ratio of surface area to volume, the greater the rate of revolution.
History
Iske brothers and Israel L. Landis
In 1881, the Iske brothers got two patents granted for a design similar to the Minto wheel.
According to the patent, the working fluid is alcohol "or other volatile liquid". Air in the tubes is to be removed and the tubes are sealed (creating a partial vacuum).
The patent suggests lamps as heating sources.
The first patent describes glass for the bulbs and tubes. The second patent does not specify materials, but the construction implies metal. A later patent then clearly specifies metal.
Later the same year, Israel L. Landis got a patent for a similar engine. Unlike the Minto wheel and the Iske brothers' design, the engine oscillated rather than revolved. Landis suggested alcohol or ether as the volatile liquid, and recommended heating the apparatus before removing the air from the bulbs/chambers.
In the following years, the Iske brothers were granted various patents, including some relating to modifications and improvements of engines similar to the Minto wheel, and an oscillating engine similar to Israel L. Landis's design.
Drinking bird
The oscillating types by the Iske Brothers and Landis are related to the drinking bird toy.
The drinking bird dates back to the 1910s–1930s. It was patented in the US in 1945 and 1946 by two different inventors.
Wally Minto's contribution
Wally Minto experimented with different working fluids. With the fluids he used, he reduced the required temperature difference, enabling the engine to run, for example, on solar power. After its working fluid, his improved wheel is also known as the "Freon Power Wheel". Popular Science reported on it in its March 1976 issue.
Examples
A working example of a Minto wheel was first published in a series of articles in The Mother Earth News, Issues #38 March, #39 May and #40 July 1976. Test units constructed by Mother Earth News (Issue 40, July 1976) and the MythBusters (Episode 24, December 5, 2004 – "Ming Dynasty Astronaut") did convert temperature difference into torque, but not as well as overenthusiastic boosters claimed.
See also
Drinking bird
Stirling engine
Ocean Thermal Energy Conversion
References
External links
Internet Archive archive of scans of the 1976 The Mother Earth News articles
Wally Minto's original booklet
Minto outline
YouTube video of a model Minto wheel in operation
YouTube video of a model Minto wheel in operation (shows smoother action from more chambers)
patents granted to the Iske brothers
US243909 - 1881 patent for the device
US253867
US253868
US256482 - 1882 patent for the device
US271639
US673022
patent granted to Israel L. Landis
US250821
http://www.genuineideas.com/HallofInventions/SolarPivots/thermoscopicSolarWheel.html
Engines
External combustion engines

Minto wheel | Physics,Technology | 859
12,953,265 | https://en.wikipedia.org/wiki/IEEE%20Richard%20W.%20Hamming%20Medal

The IEEE Richard W. Hamming Medal is presented annually to up to three persons, for outstanding achievements in information sciences, information systems and information technology. The recipients receive a gold medal, together with a replica in bronze, a certificate and an honorarium.
The award was established in 1986 by the Institute of Electrical and Electronics Engineers (IEEE) and is sponsored by Qualcomm, Inc. It is named after Richard W. Hamming, whose work has had many implications for computer science and telecommunications. His contributions include the invention of the Hamming code, an error-correcting code.
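To illustrate Hamming's error-correcting idea, the sketch below implements the standard Hamming(7,4) code, which encodes four data bits with three parity bits and corrects any single flipped bit. The bit layout (parity at positions 1, 2 and 4) follows the textbook construction and is not specific to this article:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword.
    Positions (1-indexed): 1, 2, 4 carry parity; 3, 5, 6, 7 carry data."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3       # 0 means no error detected
    if pos:
        c[pos - 1] ^= 1              # syndrome points at the bad bit
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                         # flip one bit "in transit"
assert hamming74_decode(word) == [1, 0, 1, 1]
```

Flipping any one of the seven bits yields a nonzero syndrome that directly names the corrupted position.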
Recipients
The following people have received the IEEE Richard W. Hamming Medal:
See also
Richard W. Hamming
List of computer science awards
Prizes named after people
References
Computer science awards
Information science awards
Awards established in 1986
Richard W. Hamming Medal

IEEE Richard W. Hamming Medal | Technology | 169
52,759,876 | https://en.wikipedia.org/wiki/NGC%20380

NGC 380 is an elliptical galaxy located in the constellation Pisces. It was discovered on September 12, 1784 by William Herschel. It was described by Dreyer as "pretty faint, small, round, suddenly brighter middle." Along with galaxies NGC 375, NGC 379, NGC 382, NGC 383, NGC 384, NGC 385, NGC 386, NGC 387 and NGC 388, NGC 380 forms a galaxy cluster called Arp 331.
References
External links
0380
17840912
Pisces (constellation)
Elliptical galaxies
003969

NGC 380 | Astronomy | 118
54,601,434 | https://en.wikipedia.org/wiki/Alinaghi%20Khamoushi

Alinaghi Khamoushi is an Iranian business magnate and conservative politician.
An influential lobbyist in the Iranian political and economic arena, he is a senior member of the Islamic Coalition Party, runs three large textile companies owned by two religious centers, and is the CEO of Iran Investments Company.
Khamoushi served as the president of the Iran Chamber of Commerce, Industries and Mines from 1984 to 2007. He also represented the Tehran, Rey, Shemiranat and Eslamshahr electoral district in the Parliament of Iran from 1992 to 1996. Khamoushi was the first head of the Mostazafan Foundation, a bonyad.
References
External links
Profile at Bloomberg.com
1939 births
Living people
Islamic Coalition Party politicians
Iranian billionaires
21st-century Iranian businesspeople
Members of the 4th Islamic Consultative Assembly
Textile engineers

Alinaghi Khamoushi | Engineering | 166
3,038,004 | https://en.wikipedia.org/wiki/Berezinian

In mathematics and theoretical physics, the Berezinian or superdeterminant is a generalization of the determinant to the case of supermatrices. It is named after Felix Berezin. The Berezinian plays a role analogous to the determinant when considering coordinate changes for integration on a supermanifold.
Definition
The Berezinian is uniquely determined by two defining properties:

Ber(XY) = Ber(X) Ber(Y)
Ber(exp X) = exp(str X)

where str(X) denotes the supertrace of X. Unlike the classical determinant, the Berezinian is defined only for invertible supermatrices.
The simplest case to consider is the Berezinian of a supermatrix with entries in a field K. Such supermatrices represent linear transformations of a super vector space over K. A particular even supermatrix is a block matrix of the form

X = [[A, 0], [0, D]]

Such a matrix is invertible if and only if both A and D are invertible matrices over K. The Berezinian of X is given by

Ber(X) = det(A) det(D)^(−1)
For a motivation of the negative exponent see the substitution formula in the odd case.
More generally, consider matrices with entries in a supercommutative algebra R. An even supermatrix is then of the form

X = [[A, B], [C, D]]

where A and D have even entries and B and C have odd entries. Such a matrix is invertible if and only if both A and D are invertible in the commutative ring R0 (the even subalgebra of R). In this case the Berezinian is given by

Ber(X) = det(A − BD^(−1)C) det(D)^(−1)

or, equivalently, by

Ber(X) = det(A) det(D − CA^(−1)B)^(−1)

These formulas are well-defined since we are only taking determinants of matrices whose entries are in the commutative ring R0. The matrix

D − CA^(−1)B

is known as the Schur complement of A relative to X.

An odd matrix X can only be invertible if the number of even dimensions equals the number of odd dimensions. In this case, invertibility of X is equivalent to the invertibility of JX, where

J = [[0, I], [−I, 0]]

Then the Berezinian of X is defined as

Ber(X) = Ber(JX)
Properties
The Berezinian of an invertible supermatrix X is always a unit in the ring R0.
Ber(X^st) = Ber(X), where X^st denotes the supertranspose of X.
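The multiplicative property Ber(XY) = Ber(X) Ber(Y) can be checked numerically in the special case of block-diagonal even supermatrices, where the Berezinian reduces to det(A)/det(D). Treating the blocks as ordinary real matrices is only a sketch, and it is legitimate here precisely because the odd blocks B and C are zero (so no Grassmann entries are needed):

```python
import numpy as np

def ber_blockdiag(A, D):
    """Berezinian of the block-diagonal supermatrix [[A, 0], [0, D]]."""
    return np.linalg.det(A) / np.linalg.det(D)

# Fixed example blocks (arbitrary invertible matrices, not from the article).
A1 = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 3.0], [1.0, 0.0, 1.0]])
A2 = np.array([[1.0, 2.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 2.0]])
D1 = np.array([[2.0, 1.0], [1.0, 1.0]])
D2 = np.array([[3.0, 1.0], [2.0, 1.0]])

# Block-diagonal supermatrices multiply blockwise, so Ber(XY) uses A1@A2, D1@D2.
lhs = ber_blockdiag(A1 @ A2, D1 @ D2)
rhs = ber_blockdiag(A1, D1) * ber_blockdiag(A2, D2)
assert np.isclose(lhs, rhs)   # both equal 40 for these blocks
```

For matrices with nonzero odd blocks, the equivalence of the two general formulas above relies on the odd entries being nilpotent, so this purely numeric check cannot be extended to that case.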
Berezinian module
The determinant of an endomorphism of a free module M can be defined as the induced action on the 1-dimensional highest exterior power of M. In the supersymmetric case there is no highest exterior power, but there is still a similar definition of the Berezinian as follows.
Suppose that M is a free module of dimension (p,q) over R. Let A be the (super)symmetric algebra S*(M*) of the dual M* of M. Then an automorphism of M acts on the ext module
(which has dimension (1,0) if q is even and dimension (0,1) if q is odd)
as multiplication by the Berezinian.
See also
Berezin integration
References
Super linear algebra
Determinants

Berezinian | Physics | 613
35,467,689 | https://en.wikipedia.org/wiki/840%20%28number%29

840 (eight hundred [and] forty) is the natural number following 839 and preceding 841.
Mathematical properties
It is an even number.
It is a practical number.
It is a congruent number.
It is the 15th highly composite number, with 32 divisors: 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 15, 20, 21, 24, 28, 30, 35, 40, 42, 56, 60, 70, 84, 105, 120, 140, 168, 210, 280, 420, 840.
Since the sum of its divisors excluding the number itself (2040) exceeds 840, it is an abundant number; it is also a superabundant number.
It is an idoneal number.
It is the least common multiple of the numbers from 1 to 8.
It is the smallest number divisible by every natural number from 1 to 10, except 9.
It is the largest number k such that all coprime quadratic residues modulo k are squares. In this case, they are 1, 121, 169, 289, 361 and 529.
It is an evil number.
It is a palindrome and a repdigit in base 29 (SS) and in base 34 (OO).
It is the sum of a twin prime pair (419 + 421).
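Several of the properties listed above can be verified with a short script. The base-29 and base-34 repdigit claims use Python's digit alphabet for bases up to 36, in which S = 28 and O = 24:

```python
import math

# Quick verification of several properties of 840 listed above.
divisors = [d for d in range(1, 841) if 840 % d == 0]
assert len(divisors) == 32                        # highly composite: 32 divisors
assert sum(divisors) - 840 == 2040                # abundant: 2040 > 840
assert math.lcm(*range(1, 9)) == 840              # least common multiple of 1..8
assert all(840 % n == 0 for n in range(1, 11) if n != 9)  # divisible by 1..10 except 9
assert int("ss", 29) == 840 and int("oo", 34) == 840      # repdigits SS and OO
assert 840 == 419 + 421                           # sum of the twin primes 419 and 421
```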
References
Integers

840 (number) | Mathematics | 299
4,590,798 | https://en.wikipedia.org/wiki/182%20%28number%29

182 (one hundred [and] eighty-two) is the natural number following 181 and preceding 183.
In mathematics
182 is an even number
182 is a composite number, as it is a positive integer with a positive divisor other than one or itself
182 is a deficient number, as the sum of its proper divisors, 154, is less than 182
182 is a member of the Mian–Chowla sequence: 1, 2, 4, 8, 13, 21, 31, 45, 66, 81, 97, 123, 148, 182
182 is a nontotient number, as there is no integer with exactly 182 coprimes below it
182 is an odious number
182 is a pronic number, oblong number or heteromecic number, a number which is the product of two consecutive integers (13 × 14)
182 is a repdigit in the D'ni numeral system (77), and in base 9 (222)
182 is a sphenic number, the product of three distinct prime factors (2 × 7 × 13)
182 is a square-free number
182 is an Ulam number
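A few of the listed properties can likewise be checked directly. The D'ni numeral system is base 25, so its repdigit 77 is read in base 25:

```python
# Direct checks of several properties of 182 listed above.
divisors = [d for d in range(1, 183) if 182 % d == 0]
assert sum(divisors) - 182 == 154        # deficient: proper divisors sum to 154 < 182
assert 182 == 13 * 14                    # pronic: product of consecutive integers
assert 182 == 2 * 7 * 13                 # sphenic: product of three distinct primes
assert int("77", 25) == 182              # repdigit in base 25 (the D'ni system)
assert int("222", 9) == 182              # repdigit in base 9
```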
References
External links
Number Facts and Trivia: 182
The Number 182
The Positive Integer 182
Number Gossip: 182
Integers

182 (number) | Mathematics | 250
214,804 | https://en.wikipedia.org/wiki/Porting

In software engineering, porting is the process of adapting software for the purpose of achieving some form of execution in a computing environment that is different from the one that a given program (meant for such execution) was originally designed for (e.g., different CPU, operating system, or third party library). The term is also used when software or hardware is changed to make it usable in different environments.
Software is portable when the cost of porting it to a new platform is significantly less than the cost of writing it from scratch. The lower the cost of porting software relative to its implementation cost, the more portable it is said to be. This is distinct from cross-platform software, which is designed from the ground up without any single "native" platform.
Etymology
The term "port" is derived from the Latin portāre, meaning "to carry". When code is not compatible with a particular operating system or architecture, the code must be "carried" to the new system.
The term is not generally applied to the process of adapting software to run with less memory on the same CPU and operating system.
Software developers often claim that the software they write is portable, meaning that little effort is needed to adapt it to a new environment. The amount of effort actually needed depends on several factors, including the extent to which the original environment (the source platform) differs from the new environment (the target platform), the experience of the original authors in knowing which programming language constructs and third party library calls are unlikely to be portable, and the amount of effort invested by the original authors in only using portable constructs (platform specific constructs often provide a cheaper solution).
History
The number of significantly different CPUs and operating systems used on the desktop today is much smaller than in the past. The dominance of the x86 architecture means that most desktop software is never ported to a different CPU. In that same market, the choice of operating systems has effectively been reduced to three: Microsoft Windows, macOS, and Linux. However, in the embedded systems and mobile markets, portability remains a significant issue, with the ARM being a widely used alternative.
International standards, such as those promulgated by the ISO, greatly facilitate porting by specifying details of the computing environment in a way that helps reduce differences between different standards-conforming platforms. Writing software that stays within the bounds specified by these standards represents a practical although nontrivial effort. Porting such a program between two standards-compliant platforms (such as POSIX.1) can be just a matter of loading the source code and recompiling it on the new platform, but practitioners often find that various minor corrections are required, due to subtle platform differences. Most standards suffer from "gray areas" where differences in interpretation of standards lead to small variations from platform to platform.
There also exists an ever-increasing number of tools to facilitate porting, such as the GNU Compiler Collection, which provides consistent programming languages on different platforms, and Autotools, which automates the detection of minor variations in the environment and adapts the software accordingly before compilation.
The compilers for some high-level programming languages (e.g. Eiffel, Esterel) gain portability by outputting source code in another high level intermediate language (such as C) for which compilers for many platforms are generally available.
Two activities related to (but distinct from) porting are emulating and cross-compiling.
Porting compilers
Instead of translating directly into machine code, modern compilers translate to a machine independent intermediate code in order to enhance portability of the compiler and minimize design efforts. The intermediate language defines a virtual machine that can execute all programs written in the intermediate language (a machine is defined by its language and vice versa). The intermediate code instructions are translated into equivalent machine code sequences by a code generator to create executable code. It is also possible to skip the generation of machine code by actually implementing an interpreter or JIT for the virtual machine.
The use of intermediate code enhances portability of the compiler, because only the machine dependent code (the interpreter or the code generator) of the compiler itself needs to be ported to the target machine. The remainder of the compiler can be imported as intermediate code and then further processed by the ported code generator or interpreter, thus producing the compiler software or directly executing the intermediate code on the interpreter. The machine independent part can be developed and tested on another machine (the host machine). This greatly reduces design efforts, because the machine independent part needs to be developed only once to create portable intermediate code.
An interpreter is less complex and therefore easier to port than a code generator, because it cannot perform code optimizations due to its limited view of the program code (it sees only one instruction at a time, while optimization requires a view of an instruction sequence). Some interpreters are extremely easy to port, because they only make minimal assumptions about the instruction set of the underlying hardware. As a result, the virtual machine is even simpler than the target CPU.
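A minimal sketch of such an easily ported interpreter, for a made-up stack-based intermediate language, is shown below. The opcodes and program encoding are invented for illustration; the point is that porting amounts to reimplementing only this small dispatch loop on the new host, while programs stay in the intermediate form:

```python
# Toy stack-based virtual machine. The instruction set is invented for
# illustration; note how few assumptions the loop makes about the host.
def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])        # push an immediate value
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack.pop()

# (2 + 3) * 4 expressed in the intermediate language
result = run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",)])
assert result == 20
```

The interpreter's one-instruction-at-a-time view is also visible here: the dispatch loop never sees enough context to optimize, which is why code generators, though harder to port, produce faster code.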
Writing the compiler sources entirely in the programming language the compiler is supposed to translate makes the following approach, better known as compiler bootstrapping, feasible on the target machine:
Port the interpreter. This needs to be coded in assembly code, using an already present assembler on the target.
Adapt the source of the code generator to the new machine.
Execute the adapted source using the interpreter with the code generator source as input. This will generate the machine code for the code generator.
The difficult part of coding the optimization routines is done using the high-level language instead of the assembly language of the target.
According to the designers of the BCPL language, interpreted code (in the BCPL case) is more compact than machine code, typically by a factor of two to one. Interpreted code however runs about ten times slower than compiled code on the same machine.
The designers of the Java programming language try to take advantage of the compactness of interpreted code, because a Java program may need to be transmitted over the Internet before execution can start on the target's Java virtual machine (JVM).
Porting of video games
Porting is also the term used when a video game designed to run on one platform, be it an arcade, video game console, or personal computer, is converted to run on a different platform, perhaps with some minor differences. From the beginning of video games through to the 1990s, "ports", at the time often known as "conversions", were often not true ports, but rather reworked versions of the games due to the limitations of different systems. For example, the 1982 game The Hobbit, a text adventure augmented with graphic images, has significantly different graphic styles across the range of personal computers that its ports were developed for. However, many 21st century video games are developed using software (often in C++) that can output code for one or more consoles as well as for a PC without the need for actual porting (instead relying on the common porting of individual component libraries).
Porting arcade games to home systems with inferior hardware was difficult. The ported version of Pac-Man for the Atari 2600 omitted many of the visual features of the original game to compensate for the lack of ROM space and the hardware struggled when multiple ghosts appeared on the screen creating a flickering effect. The poor performance of the Atari 2600 Pac-Man is cited by some scholars as a cause of the video game crash of 1983.
Many early ports suffered significant gameplay quality issues because computers greatly differed. Richard Garriott stated in 1984 at Origins Game Fair that Origin Systems developed video games for the Apple II first then ported them to Commodore 64 and Atari 8-bit computers, because the latter machines' sprites and other sophisticated features made porting from them to Apple "far more difficult, perhaps even impossible". Reviews complained of ports that suffered from "Apple conversionitis", retaining the Apple's "lousy sound and black-white-green-purple graphics"; after Garriott's statement, when Dan Bunten asked "Atari and Commodore people in the audience, are you happy with the Apple rewrites?" the audience shouted "No!" Garriott responded, "[otherwise] the Apple version will never get done. From a publisher's point of view that's not money wise".
Others worked differently. Ozark Softscape, for example, wrote M.U.L.E. for the Atari first because it preferred to develop for the most advanced computers, removing or altering features as necessary during porting. Such a policy was not always feasible; Bunten stated that "M.U.L.E. can't be done for an Apple", and that the non-Atari versions of The Seven Cities of Gold were inferior. Compute!'s Gazette wrote in 1986 that when porting from Atari to Commodore the original was usually superior. The latter's games' quality improved when developers began creating new software for it in late 1983, the magazine stated.
In porting arcade games, the terms "arcade perfect" or "arcade accurate" were often used to describe how closely the gameplay, graphics, and other assets on the ported version matched the arcade version. Many arcade ports in the early 1980s were far from arcade perfect as home consoles and computers lacked the sophisticated hardware in arcade games, but games could still approximate the gameplay. Notably, Space Invaders on the Atari VCS became the console's killer app despite its differences, while the later Pac-Man port was notorious for its deviations from the arcade version. Arcade-accurate games became more prevalent starting in the 1990s as home consoles caught up to the power of arcade systems. Notably, the Neo Geo system from SNK, which was introduced as a multi-game arcade system, would also be offered as a home console with the same specifications. This allowed arcade perfect games to be played at home.
A "console port" is a game originally or primarily made for a console before a version is created that can be played on a personal computer. Porting games from console to PC is often regarded more cynically than other types of port because it can underutilize the more powerful hardware of some PCs, even at console launch; console hardware stays fixed throughout each generation while newer PCs constantly grow more powerful. While the platforms are broadly similar today, some architectural differences persist, such as the use of unified memory and smaller operating systems on consoles. Other objections arise from user interface conventions carried over from consoles, such as gamepads, "ten-foot" user interfaces with a narrow field of view, fixed checkpoints, online play restricted to official servers or peer-to-peer connections, and poor or no modding support, as well as the generally greater reliance among console developers on internal hard-coding and defaults instead of external APIs and configurability. All of these may require expensive, deep-reaching redesign to avoid a "lazy"-feeling port to PC.
See also
Software portability
Cross-platform software
Write once, compile anywhere
Program transformation
List of system quality attributes
Language binding
Source-to-source compiler
Console emulator
Source port
Poshlib
Meaning of unported
References
Interoperability
Source code
Porting | Engineering | 2,286
77,439,932 | https://en.wikipedia.org/wiki/Catherine%20Rosenberg

Catherine P. Rosenberg is an electrical engineer whose research interests include resource management in wireless sensor networks, quality of service in network traffic engineering, and smart grids in energy systems. Educated in France and the US, she has worked in France, the US, the UK, and Canada, where she is a professor in the department of Electrical and Computer Engineering and Cisco Research Chair in 5G Systems at the University of Waterloo.
Education and career
Rosenberg earned a diploma in telecommunications engineering at the École nationale supérieure des télécommunications de Bretagne in 1983, and a master's degree in computer science at the University of California, Los Angeles in 1984. She completed her Ph.D. in 1986 through Paris-Sud University, under the direction of Erol Gelenbe.
After working at Alcatel and Bell Labs, Rosenberg took her first faculty position from 1988 to 1996, in electrical and computer engineering at Polytechnique Montréal. After working in the UK for Nortel from 1996 to 1999, she became a professor of electrical and computer engineering at Purdue University in the US from 1999 to 2004. In 2004 she took her present position as a professor at the University of Waterloo, also serving as chair of the Department of Electrical and Computer Engineering. She was named as a tier 1 Canada Research Chair in the Future Internet in 2010 (renewed in 2017), and Cisco Research Chair in 5G Systems in 2018.
Recognition
Rosenberg was elected as an IEEE Fellow in 2011, "for contributions to resource management in wireless and satellite networks". She was elected to the Canadian Academy of Engineering in 2013.
References
External links
Home page
Year of birth missing (living people)
Living people
Electrical engineers
Women electrical engineers
University of California, Los Angeles alumni
Scientists at Bell Labs
Academic staff of Polytechnique Montréal
Purdue University faculty
Academic staff of the University of Waterloo
Fellows of the IEEE
Fellows of the Canadian Academy of Engineering

Catherine Rosenberg | Engineering | 380
1,112,101 | https://en.wikipedia.org/wiki/Assortative%20mating

Assortative mating (also referred to as positive assortative mating or homogamy) is a mating pattern and a form of sexual selection in which individuals with similar phenotypes or genotypes mate with one another more frequently than would be expected under a random mating pattern.
The phenotypes most commonly subject to assortative mating are body size, visual signals (e.g. color, pattern), and sexually selected traits such as crest size.
The opposite pattern is disassortative mating, also referred to as "negative assortative mating"; in contrast with it, assortative mating is termed "positive assortative mating".
Causes
Several hypotheses have been proposed to explain the phenomenon of assortative mating. Assortative mating has evolved from a combination of different factors, which vary across different species.
Assortative mating with respect to body size can arise as a consequence of intrasexual competition. In some species, size is correlated with fecundity in females. Therefore, males choose to mate with larger females, with the larger males defeating the smaller males in courting them. Examples of species that display this type of assortative mating include the jumping spider Phidippus clarus and the leaf beetle Diaprepes abbreviatus. In other cases, larger females are better equipped to resist male courtship attempts, and only the largest males are able to mate with them.
Assortative mating can, at times, arise as a consequence of social competition. Traits in certain individuals may indicate competitive ability which allows them to occupy the best territories. Individuals with similar traits that occupy similar territories are more likely to mate with one another. In this scenario, assortative mating does not necessarily arise from choice, but rather by proximity. This was noted in western bluebirds although there is no definite evidence that this is the major factor resulting in color dependent assortative mating in this species. Different factors may apply simultaneously to result in assortative mating in any given species.
In non-human animals
Assortative mating in animals has been observed with respect to body size and color. Size-related assortative mating is prevalent across many species of vertebrates and invertebrates.
It has been found in simultaneous hermaphrodites such as the land snail Bradybaena pellucida. One reason for its occurrence may be reciprocal intromission (i.e. both individuals provide both male and female gametes during a single mating), which occurs in this species; individuals with similar body size pair up with one another to facilitate this exchange. Moreover, larger individuals in such hermaphroditic species are known to produce more eggs, so mutual mate choice is another factor leading to assortative mating in this species.
Evidence for size-related assortative mating has also been found in the mangrove snail, Littoraria ardouiniana and in the Japanese common toad, Bufo japonicus.
The second common type of assortative mating occurs with respect to coloration. This type of assortative mating is more common in socially monogamous bird species such as the eastern bluebirds (Sialia sialis) and western bluebirds (Sialia mexicana). In both species more brightly colored males mated with more brightly colored females and less brightly colored individuals paired with one another. Eastern bluebirds also mate assortatively for territorial aggression due to fierce competition for a limited number of nesting sites with tree swallows. Two highly aggressive individuals are better equipped to protect their nest, encouraging assortative mating between such individuals.
Assortative mating with respect to two common color morphs: striped and unstriped also exists in a polymorphic population of eastern red-backed salamanders (Plethodon cinereus).
Assortative mating is also found in many socially monogamous species of birds. Monogamous species are often involved in bi-parental care of their offspring. Since males are as invested in the offspring as the mother, both sexes are expected to display mate choice, a phenomenon termed mutual mate choice. Mutual mate choice occurs when both males and females search for a mate that will maximize their fitness. In birds, female and male ornamentation can indicate better overall condition, better genes, or better suitability as parents.
In humans
Assortative mating in humans has been widely observed and studied, and can be broken down into two types of human assortative mating. These are:
genetic assortative mating (assortative mating with mate choice based on genetic type and phenotypical expression); and
social assortative mating (assortative mating with mate choice based on social, cultural, and other societal factors)
Genetic assortative mating is well studied and documented. In 1903 Pearson and colleagues reported strong correlations in height, span of arms, and the length of the left forearm between husband and wife in 1000 couples. Assortative mating with regards to appearance does not end there. Males prefer female faces that resemble their own when provided images of three women, with one image modified to resemble their own. However, the same result does not apply to females selecting male faces. Genetically related individuals (3rd or 4th cousin level) exhibit higher fitness than unrelated individuals.
Assortative mating based on genomic similarities plays a role in human marriages in the United States. Spouses are more genetically similar to each other than two randomly chosen individuals. The probability of marriage increases by roughly 15% for every one standard deviation increase in genetic similarity. However, some researchers argue that this assortative mating is caused purely by population stratification (the fact that people are more likely to marry within ethnic subgroups such as Swedish-Americans).
At the same time, individuals display disassortative mating for genes in the major histocompatibility complex region on chromosome 6. Individuals feel more attracted to odors of individuals who are genetically different in this region. This promotes MHC heterozygosity in the children, making them less vulnerable to pathogens. Apart from humans, disassortative mating with regards to the MHC coding region has been widely studied in mice, and has also been reported to occur in fish.
In addition to genetic assortative mating, humans also demonstrate patterns of assortative mating based on sociological factors as well. Sociological assortative mating is typically broken down into three categories, mate choice based on socio-economic status, mate choice based on racial or ethnic background, and mate choice based on religious beliefs.
Assortative mating based on socio-economic status is the broadest of these general categories. It includes the tendency of humans to prefer mates among their socio-economic peers, that is, those with social standing, job prestige, educational attainment, or economic background similar to their own. This tendency has always been present in society: there was no historical era in which most individuals preferred to sort, or actually sorted, negatively into couples or matched randomly along these traits. Still, the tendency was weaker in some generations than in others. For instance, in the 20th-century Western world, late Boomers had weaker aggregate preferences for educational homogamy as young adults than early Boomers had; likewise, members of early Generation X were typically much less "picky" about spousal education than members of late Generation X. This trend is evidenced by the search criteria of online dating site users.
Another form of sociological assortative mating is assortative mating based on racial and ethnic background. Mentioned above in the context of the genetically similar preferring to mate with one another, this form of assortative mating can take many varied and complicated forms. While the tendency mentioned above does exist, and people do tend to marry those genetically similar to themselves, especially if within the same racial or ethnic group, this trend can change in various ways. It is common, for example, for the barriers to intermarriage with the general population experienced by a minority population to decrease as the numbers of the minority population increase. This assimilation reduces the prevalence of this form of assortative mating. However, growth of a minority population does not necessarily lead to decreased barriers to intermarriage. This can be seen in the sharp increase in the non-white Hispanic population of the United States in the 1990s and 2000s that correlated with a sharp decrease in the percentage of non-white Hispanics intermarrying with the general population.
Religious assortative mating is the tendency of individuals to marry within their own religious group. This tendency is prevalent and observable, and changes according to three main factors. The first of these is the proportion of available spouses in the area who already follow the same religion as the person searching for a mate. Areas where religious beliefs are already similar for most people will always have high degrees of religious inbreeding. The second is the social distance between the intermarrying religious groups, or the physical proximity and social interactivity of these groups. Finally, the third factor is the personal views one holds towards marrying outside of a religion. Those who greatly value adherence to religious tradition may be more likely to be averse to marrying across religious lines. Although not necessarily religious, a good example of humans mating assortatively based on belief structure can be found in the tendency of humans to marry based on levels of charitable giving. Couples show similarities in terms of their contributions to public betterment and charities, and this can be attributed to mate choice based on generosity rather than phenotypic convergence.
Assortative mating also occurs among people with mental disorders such as ADHD, in which one person with ADHD is more likely to marry or have a child with another individual with ADHD.
Effects
Assortative mating has reproductive consequences. Positive assortative mating increases genetic relatedness within a family, whereas negative assortative mating accomplishes the opposite effect. Either strategy may be employed by the individuals of a species depending upon which strategy maximizes fitness and enables the individuals to pass on their genes to the next generation most effectively. For instance, in the case of eastern bluebirds, assortative mating for territorial aggression increases the probability of the parents obtaining and securing a nest site for their offspring. This in turn increases the likelihood of survival of the offspring and consequently the fitness of the individuals. In birds whose coloration indicates well-being and fecundity, positive assortative mating for color increases the chances of genes being passed on and of the offspring being in good condition. Positive assortative mating for behavioral traits also allows for more efficient communication between the individuals, so they can cooperate better to raise their offspring.
On the other hand, mating between individuals of genotypes which are too similar allows for the accumulation of harmful recessive alleles, which can decrease fitness. Such mating between genetically similar individuals is termed inbreeding, which can result in the emergence of autosomal recessive disorders. Moreover, assortative mating for aggression in birds can lead to inadequate parental care. An alternative strategy is disassortative mating, in which one individual is aggressive and guards the nest site while the other individual is more nurturing and fosters the young. This division of labor increases the chances of survival of the offspring, although mating between individuals that are too dissimilar risks the breakdown of coadapted gene complexes, leading to outbreeding depression. A classic example of this is the white-throated sparrow (Zonotrichia albicollis). This bird exhibits two color morphs: white-striped and tan-striped. In both sexes, the white-striped birds are more aggressive and territorial, whereas tan-striped birds are more engaged in providing parental care to their offspring. Therefore, disassortative mating in these birds allows for an efficient division of labor in terms of raising and protecting their offspring.
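The effect of positive assortative mating on homozygosity described above can be illustrated with a minimal one-locus simulation. This is a toy model, not taken from the source: the population size, generation count, perfect genotype-sorting rule, and starting allele frequency of 0.5 are all simplifying assumptions.

```python
import random

def homozygosity_after(assortative, generations=15, n=200, seed=42):
    """Fraction of homozygotes at one biallelic locus after repeated
    rounds of mating; each individual is a tuple of two alleles (0 or 1)."""
    rng = random.Random(seed)
    # initial population: each allele drawn independently (p = 0.5)
    pop = [(rng.randint(0, 1), rng.randint(0, 1)) for _ in range(n)]
    for _ in range(generations):
        if assortative:
            # positive assortative mating: sort so like genotypes pair together
            pool = sorted(pop, key=sum)
        else:
            pool = pop[:]
            rng.shuffle(pool)  # random mating
        pop = []
        for a, b in zip(pool[0::2], pool[1::2]):
            for _ in range(2):  # two offspring keep the population size fixed
                pop.append((rng.choice(a), rng.choice(b)))
    return sum(g[0] == g[1] for g in pop) / len(pop)

print(f"random mating:      {homozygosity_after(assortative=False):.2f}")
print(f"assortative mating: {homozygosity_after(assortative=True):.2f}")
```

Under the sorting rule, heterozygotes pair mostly with heterozygotes, so heterozygosity roughly halves each generation and harmful recessive alleles are increasingly exposed in homozygous form, which is the mechanism behind inbreeding risk noted above.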
Positive assortative mating is a key element leading to reproductive isolation within a species, which in turn may result in speciation in sympatry over time. Sympatric speciation is defined as the evolution of a new species without geographical isolation. Speciation from assortative mating has occurred in the Middle East blind mole-rat, cicadas, and the European corn borer.
Like other animals, humans also display these genetic results of assortative mating. What makes humans unique, however, is the tendency to seek mates who are not only similar in genetics and appearance, but also similar economically, socially, educationally, and culturally. These tendencies to use sociological characteristics in mate selection have many effects on the lives and livelihoods of those who choose to marry one another, as well as on their children and future generations. Within a generation, assortative mating is sometimes cited as a source of inequality, as those who mate assortatively marry people of similar station to themselves, thus amplifying their current station. There is debate, however, about whether the growing preference for educational and occupational similarity in spouses is due to increased preferences for these traits or to the shift in workload that occurred as women entered the workforce. This concentration of wealth in families also perpetuates across generations as parents pass their wealth on to their children, with each successive generation inheriting the resources of both of its parents. The combined resources of the parents allow them to give their child a better upbringing, and the combined inheritances from both parents place the child at an even greater advantage than a superior education and childhood alone would. This has an enormous impact on the development of the socioeconomic structure of a society.
Economics
A related concept of 'assortative matching' has been developed within economics. It refers to the efficiency gains in production that are available when workers of similar skill or productivity are matched together. A consideration of this assortative matching forms the basis of Kremer's 1993 O-ring theory of economic development.
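The logic behind the O-ring result can be sketched with a toy example; the two-worker firms and skill values below are illustrative assumptions, not figures from the source. Because output in an O-ring production function is multiplicative in worker skill, pairing like with like yields more total output than mixing skill levels.

```python
def total_output(pairing):
    """O-ring style production: a two-worker firm's output is the
    product of its workers' skill levels (success probabilities)."""
    return sum(a * b for a, b in pairing)

skills_matched = [(0.5, 0.5), (0.9, 0.9)]  # like paired with like
skills_mixed   = [(0.5, 0.9), (0.5, 0.9)]  # high paired with low

print(f"{total_output(skills_matched):.2f}")  # prints 1.06
print(f"{total_output(skills_mixed):.2f}")    # prints 0.90
```

The matched economy produces 0.25 + 0.81 = 1.06 versus 0.45 + 0.45 = 0.90 for the mixed one; since the product x·y is supermodular, sorting workers by skill always maximizes the sum of firm outputs (the rearrangement inequality).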
See also
Directional selection
Disruptive selection
Endogamy
Genetic sexual attraction
Koinophilia
Matching hypothesis
Negative selection (natural selection)
Reinforcement (speciation)
References
Mating
Sexual selection
Fertility
Bamboo
Bamboos are a diverse group of mostly evergreen perennial flowering plants making up the subfamily Bambusoideae of the grass family Poaceae. Giant bamboos are the largest members of the grass family; in the case of Dendrocalamus sinicus, individual stalks (culms) can reach a length of , up to in thickness and a weight of up to . The internodes of bamboos can also be of great length: Kinabaluchloa wrayi has internodes up to in length, and Arthrostylidium schomburgkii has internodes up to in length, exceeded in length only by papyrus. By contrast, the stalks of the tiny bamboo Raddiella vanessiae of the savannas of French Guiana measure only in length by about in width. The origin of the word "bamboo" is uncertain, but it probably comes from the Dutch or Portuguese language, which originally borrowed it from Malay or Kannada.
In bamboo, as in other grasses, the internodal regions of the stem are usually hollow and the vascular bundles in the cross-section are scattered throughout the walls of the stalk instead of in a cylindrical cambium layer between the bark (phloem) and the wood (xylem) as in dicots and conifers. The dicotyledonous woody xylem is also absent. The absence of secondary growth wood causes the stems of monocots, including the palms and large bamboos, to be columnar rather than tapering.
Bamboos include some of the fastest-growing plants in the world, due to a unique rhizome-dependent system. Certain species of bamboo can grow within a 24-hour period, at a rate of almost an hour (equivalent to every 90 seconds). Growth of up to in 24 hours has been observed in Japanese giant timber bamboo (Phyllostachys bambusoides). This rapid growth and tolerance for marginal land make bamboo a good candidate for afforestation, carbon sequestration, and climate change mitigation.
Bamboo is versatile and has notable economic and cultural significance in South Asia, Southeast Asia, and East Asia, being used for building materials, as a food source, and as a raw product, and depicted often in arts such as bamboo paintings and bambooworking. Bamboo, like wood, is a natural composite material with a high strength-to-weight ratio useful for structures. Bamboo's strength-to-weight ratio is similar to that of timber, and its strength is generally similar to that of a strong softwood or hardwood timber. Some bamboo species have displayed remarkable strength under test conditions: Bambusa tulda of Bangladesh and adjoining India has tested as high as 60,000 psi (400 MPa) in tensile strength. Other bamboo species yield extraordinarily hard material; Bambusa tabacaria of China contains so much silica that it makes sparks when struck with an axe.
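The tensile-strength figure above can be cross-checked with a direct unit conversion (1 psi = 6.894757 kPa). The exact conversion of 60,000 psi comes to about 414 MPa, so the 400 MPa given in the text is a rounded value:

```python
PSI_TO_MPA = 6.894757e-3  # 1 psi expressed in megapascals

def psi_to_mpa(psi):
    """Convert pounds per square inch to megapascals."""
    return psi * PSI_TO_MPA

print(f"{psi_to_mpa(60_000):.0f} MPa")  # prints 414 MPa
```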
Taxonomy
Bamboos have long been considered the most basal grass genera, mostly because of the presence of bracteate, indeterminate inflorescences, "pseudospikelets", and flowers with three lodicules, six stamens, and three stigmata. Following more recent molecular phylogenetic research, many tribes and genera of grasses formerly included in the Bambusoideae are now classified in other subfamilies, e.g. the Anomochlooideae, the Puelioideae, and the Ehrhartoideae. The subfamily in its current sense belongs to the BOP clade of grasses, where it is sister to the Pooideae (bluegrasses and relatives).
The bamboos comprise three clades classified as tribes, and these strongly correspond with geographic divisions representing the New World herbaceous species (Olyreae), tropical woody bamboos (Bambuseae), and temperate woody bamboos (Arundinarieae). The woody bamboos do not form a monophyletic group; instead, the tropical woody and herbaceous bamboos are sister to the temperate woody bamboos. Altogether, more than 1,400 species are placed in 115 genera.
Tribe Olyreae (herbaceous bamboos), 21 genera:
Subtribe Buergersiochloinae
one genus: Buergersiochloa.
Subtribe Olyrinae
17 genera: Agnesia, Arberella, Cryptochloa, Diandrolyra, Ekmanochloa, Froesiochloa, Lithachne, Maclurolyra, Mniochloa, Olyra, Parodiolyra, Piresiella, Raddia, Raddiella, Rehia, Reitzia (syn. Piresia), Sucrea.
Subtribe Parianinae
three genera: Eremitis, Pariana, Parianella.
Tribe Bambuseae (tropical woody bamboos), 73 genera:
Subtribe Arthrostylidiinae:
15 genera: Actinocladum, Alvimia, Arthrostylidium, Athroostachys, Atractantha, Aulonemia, Cambajuva, Colanthelia, Didymogonyx, Elytrostachys, Filgueirasia, Glaziophyton, Merostachys, Myriocladus, Rhipidocladum.
Subtribe Bambusinae:
17 genera: Bambusa, Bonia, Cochinchinochloa, Dendrocalamus, Fimbribambusa, Gigantochloa, Maclurochloa, Melocalamus, Neomicrocalamus, Oreobambos, Oxytenanthera, Phuphanochloa, Pseudoxytenanthera, Soejatmia, Thyrsostachys, Vietnamosasa, Yersinochloa.
Subtribe Chusqueinae:
one genus: Chusquea.
Subtribe Dinochloinae:
7 genera: Cyrtochloa, Dinochloa, Mullerochloa, Neololeba, Pinga, Parabambusa, Sphaerobambos.
Subtribe Greslaniinae:
one genus: Greslania.
Subtribe Guaduinae:
5 genera: Apoclada, Eremocaulon, Guadua, Olmeca, Otatea.
Subtribe Hickeliinae:
9 genera: Cathariostachys, Decaryochloa, Hickelia, Hitchcockella, Nastus, Perrierbambus, Sirochloa, Sokinochloa, Valiha.
Subtribe Holttumochloinae:
3 genera: Holttumochloa, Kinabaluchloa, Nianhochloa.
Subtribe Melocanninae:
9 genera: Annamocalamus, Cephalostachyum, Davidsea, Melocanna, Neohouzeaua, Ochlandra, Pseudostachyum, Schizostachyum, Stapletonia.
Subtribe Racemobambosinae:
3 genera: Chloothamnus, Racemobambos, Widjajachloa.
Subtribe Temburongiinae:
one genus: Temburongia.
incertae sedis
2 genera: Ruhooglandia, Temochloa.
Tribe Arundinarieae (temperate woody bamboos), 31 genera: Acidosasa, Ampelocalamus, Arundinaria, Bashania, Bergbambos, Chimonobambusa, Chimonocalamus, Drepanostachyum, Fargesia, Ferrocalamus, Gaoligongshania, Gelidocalamus, Himalayacalamus, Indocalamus, Indosasa, Kuruna, Oldeania, Oligostachyum, Phyllostachys, Pleioblastus, Pseudosasa, Sarocalamus, Sasa, Sasaella, Sasamorpha, Semiarundinaria, Shibataea, Sinobambusa, Thamnocalamus, Vietnamocalamus, Yushania.
Distribution
Most bamboo species are native to warm and moist tropical and to warm temperate climates. Their range also extends to cool mountainous regions and highland cloud forests.
In the Asia-Pacific region, they occur across East Asia, from 50 °N latitude in Sakhalin in the north, to northern Australia in the south, and west to India and the Himalayas. China, Japan, Korea, India, and Australia all have several endemic populations. They also occur in small numbers in sub-Saharan Africa, confined to tropical areas, from southern Senegal in the north to southern Mozambique and Madagascar in the south. In the Americas, bamboo has a native range from 47 °S in southern Argentina and the beech forests of central Chile, through the South American tropical rainforests, to the Andes in Ecuador near , with a noticeable gap through the Atacama Desert.
Three species of bamboo, all in the genus Arundinaria, are also native through Central America and Mexico, northward into the Southeastern United States. Bamboo thickets called canebrakes once formed a dominant ecosystem in some parts of the Southeastern United States, but they are now considered critically endangered ecosystems. Canada and continental Europe are not known to have any native species of bamboo. Many species are also cultivated as garden plants outside of this range, including in Europe and areas of North America where no native wild bamboo exists.
Recently, some attempts have been made to grow bamboo on a commercial basis in the Great Lakes region of east-central Africa, especially in Rwanda. In the United States, several companies are growing, harvesting, and distributing species such as Phyllostachys nigra (Henon) and Phyllostachys edulis (Moso).
Ecology
The two general patterns for the growth of bamboo are "clumping", and "running", with short and long underground rhizomes, respectively. Clumping bamboo species tend to spread slowly, as the growth pattern of the rhizomes is to simply expand the root mass gradually, similar to ornamental grasses. Running bamboos need to be controlled during cultivation because of their potential for aggressive behavior. They spread mainly through their rhizomes, which can spread widely underground and send up new culms to break through the surface. Running bamboo species are highly variable in their tendency to spread; this is related to the species, soil and climate conditions. Some send out runners of several meters a year, while others stay in the same general area for long periods. If neglected, over time, they can cause problems by moving into adjacent areas.
Bamboos include some of the fastest-growing plants on Earth, with reported growth rates up to in 24 hours. These rates depend on local soil and climatic conditions as well as species; a more typical growth rate for many commonly cultivated bamboos in temperate climates is in the range of per day during the growing period. During the late Cretaceous period, vast fields of bamboo grew in the warmer regions of what is now Asia. Some of the largest timber bamboos grow over tall and can be as large as in diameter. The size range for mature bamboo is species-dependent, with the smallest bamboos reaching only several inches high at maturity. A typical height range covering many of the common bamboos grown in the United States is , depending on species. Anji County of China, known as the "Town of Bamboo", provides the optimal climate and soil conditions to grow, harvest, and process some of the most valued bamboo poles available worldwide.
Unlike trees, individual bamboo culms emerge from the ground at their full diameter and grow to their full height in a single growing season of three to four months. During this time, each new shoot grows vertically into a culm with no branching out until the majority of the mature height is reached. Then, the branches extend from the nodes and leafing out occurs. In the next year, the pulpy wall of each culm slowly hardens. During the third year, the culm hardens further; the shoot is now a fully mature culm. Over the next 2–5 years (depending on species), fungus begins to form on the outside of the culm and eventually penetrates and overcomes it. Around 5–8 years later (species- and climate-dependent), the fungal growths cause the culm to collapse and decay. This brief life means culms are ready for harvest and suitable for use in construction within about three to seven years. Individual bamboo culms do not get any taller or larger in diameter in subsequent years than they do in their first year, and they do not replace any growth lost from pruning or natural breakage. Bamboo has a wide range of hardiness depending on species and locale. Small or young specimens of an individual species produce small culms initially. As the clump and its rhizome system mature, taller and larger culms are produced each year until the plant approaches its particular species' limits of height and diameter.
Many tropical bamboo species die at or near freezing temperatures, while some of the hardier temperate bamboos survive temperatures as low as . Some of the hardiest bamboo species are grown in USDA plant hardiness zone 5, although they typically defoliate and may even lose all above-ground growth, yet the rhizomes survive and send up shoots again the next spring. In milder climates, such as USDA zone 7 and above, most bamboo remain fully leafed out and green year-round.
Mass flowering
Bamboos flower seldom and unpredictably, and the frequency of flowering varies greatly from species to species. Once flowering takes place, a plant declines and often dies entirely. In fact, many species flower only at intervals as long as 65 or 120 years. These taxa exhibit mass flowering (or gregarious flowering), with all plants in a particular 'cohort' flowering over a several-year period. Any plant derived through clonal propagation from this cohort will also flower, regardless of whether it has been planted in a different location. The longest mass flowering interval known is 120 years, for the species Phyllostachys bambusoides (Sieb. & Zucc.). In this species, all plants of the same stock flower at the same time, regardless of differences in geographic location or climatic conditions, and then the bamboo dies. The commercially important bamboo Guadua, or Cana brava (Guadua angustifolia), bloomed for the first time in recorded history in 1971, suggesting a blooming interval well in excess of 130 years. The lack of environmental influence on the time of flowering indicates the presence of some sort of "alarm clock" in each cell of the plant which signals the diversion of all energy to flower production and the cessation of vegetative growth. This mechanism, as well as the evolutionary cause behind it, is still largely a mystery.
Invasive species
Some bamboo species are acknowledged as having high potential for becoming invasive species. A study commissioned by the International Bamboo and Rattan Organisation found that invasive species are typically varieties that spread via rhizomes rather than by clumping, as most commercially viable woody bamboos do. In the United States, the National Invasive Species Information Center of the Department of Agriculture lists golden bamboo (Phyllostachys aurea) as an invasive species.
Animal diet
Bamboo contains large amounts of protein and very low amounts of carbohydrates, allowing this plant to be a source of food for many animals. Soft bamboo shoots, stems, and leaves are the major food source of the giant panda of China, the red panda of Nepal, and the bamboo lemurs of Madagascar. The red panda can eat up to a day, which is also about the full body weight of the animal. Raw bamboo contains trace amounts of harmful cyanide-producing compounds, with higher concentrations in bamboo shoots; the golden bamboo lemur ingests many times the quantity of taxiphyllin-containing bamboo that would be lethal to a human.
Mountain gorillas of Central Africa also feed on bamboo, and have been documented consuming bamboo sap which was fermented and alcoholic; chimpanzees and elephants of the region also eat the stalks. The larvae of the bamboo borer (the moth Omphisa fuscidentalis) of Laos, Myanmar, Thailand and Yunnan, China feed off the pulp of live bamboo. In turn, these caterpillars are considered a local delicacy. Bamboo is also used for livestock feed with research showing some bamboo varieties have higher protein content over other varieties of bamboo.
Cultivation
General
In Brazil, the Brazilian Center for Innovation and Sustainability (CEBIS), a non-profit organization, promotes the development of Brazil's bamboo production chain. It helped with the approval of law no. 21,162 in the state of Paraná, which encourages bamboo culture, aiming at the dissemination of its agricultural cultivation and the valorization of bamboo as an instrument for promoting the sustainable socioeconomic development of the state through its multiple functionalities. Bamboo cultivation is inexpensive, adds value to its production chain, and is a sustainable crop that brings environmental, economic, and social benefits; its products can be used in everything from construction to food, and its cultivation helps offset carbon emissions. Bamboo has also been qualified and classified for the National Commission for Sustainable Development Objectives (CNDOS) of the federal government of Brazil.
Harvesting
Bamboo used for construction purposes must be harvested when the culms reach their greatest strength and when sugar levels in the sap are at their lowest, as high sugar content increases the ease and rate of pest infestation. Compared to forest trees, bamboo species grow quickly, and bamboo plantations can be harvested after a shorter period than tree plantations.
Harvesting of bamboo is typically undertaken according to these cycles:
Lifecycle of the culm: As each individual culm goes through a five to seven-year lifecycle, they are ideally allowed to reach this level of maturity prior to full capacity harvesting. The clearing out or thinning of culms, particularly older decaying culms, helps to ensure adequate light and resources for new growth. Well-maintained clumps may have a productivity three to four times that of an unharvested wild clump. Consistent with the lifecycle described above, bamboo is harvested from two to three years through to five to seven years, depending on the species.
Annual cycle: Almost all growth of new bamboo occurs during the wet season, and disturbing the clump during this phase can damage the upcoming crop. Harvesting immediately prior to the wet/growth season may also damage new shoots, so harvesting is best done a few months before the start of the wet season. During this high-rainfall period, sap levels are at their highest; they then diminish towards the dry season.
Daily cycle: During the height of the day, photosynthesis is at its peak, producing the highest levels of sugar in the sap, making this the least ideal time of day to harvest. Many traditional practitioners believe the best time to harvest is at dawn or dusk on a waning moon.
Leaching
Leaching is the removal of sap after harvest. In many areas of the world, the sap levels in harvested bamboo are reduced either through leaching or post-harvest photosynthesis.
For example:
Cut bamboo is raised clear of the ground and leaned against the rest of the clump for one to two weeks until leaves turn yellow to allow full consumption of sugars by the plant.
A similar method is undertaken, but with the base of the culm standing in fresh water, either in a large drum or stream to leach out sap.
Cut culms are immersed in a running stream and weighted down for three to four weeks.
Water is pumped through the freshly cut culms, forcing out the sap (this method is often used in conjunction with the injection of some form of treatment).
In the process of water leaching, the bamboo is dried slowly and evenly in the shade to avoid cracking in the outer skin of the bamboo, thereby reducing opportunities for pest infestation.
Durability of bamboo in construction is directly related to how well it is handled from the moment of planting through harvesting, transportation, storage, design, construction, and maintenance. Bamboo harvested at the correct time of year and then exposed to ground contact or rain will break down just as quickly as incorrectly harvested material.
Toxicity
Gardeners working with bamboo plants have occasionally reported allergic reactions varying from no effects during previous exposures, to immediate itchiness and a rash developing into red welts after several hours where the skin had been in contact with the plant (contact allergy), and in some cases into swollen eyelids and breathing difficulties (dyspnoea). In one reported case study, a skin prick test using bamboo extract was positive for immunoglobulin E (IgE). The shoots (newly emerged culms) of bamboo contain the toxin taxiphyllin (a cyanogenic glycoside), which produces cyanide in the gut.
Uses
Culinary
The shoots of most species are edible either raw or cooked, with the tough sheath removed. Cooking removes the slight bitterness. The shoots are used in numerous Asian dishes and broths, and are available in supermarkets in various sliced forms, in both fresh and canned versions.
The bamboo shoot in its fermented state forms an important ingredient in cuisines across the Himalayas. In Assam, India, for example, it is called khorisa. In Nepal, a delicacy popular across ethnic boundaries consists of bamboo shoots fermented with turmeric and oil, and cooked with potatoes into a dish that usually accompanies rice.
In Indonesia, they are sliced thin and then boiled with santan (thick coconut milk) and spices to make a dish called gulai rebung. Other recipes using bamboo shoots are sayur lodeh (mixed vegetables in coconut milk) and lun pia (sometimes written lumpia: fried wrapped bamboo shoots with vegetables). The shoots of some species contain toxins that need to be leached or boiled out before they can be eaten safely.
Pickled bamboo, used as a condiment, may also be made from the pith of the young shoots.
The sap of young stalks tapped during the rainy season may be fermented to make ulanzi (a sweet wine) or simply made into a soft drink. Bamboo leaves are also used as wrappers for steamed dumplings which usually contain glutinous rice and other ingredients, such as the zongzi from China.
Pickled bamboo shoots are cooked with black-eyed beans as a delicacy in Nepal. Many Nepalese restaurants around the world serve this dish as aloo bodi tama. Fresh bamboo shoots are sliced and pickled with mustard seeds and turmeric and kept in a glass jar in direct sunlight for the best taste. The pickle is used alongside many dried beans in cooking during winters. Baby shoots (Nepali: tusa) of a very different variety of bamboo native to Nepal are cooked as a curry in hilly regions.
In Sambalpur, India, the tender shoots are grated into juliennes and fermented to prepare kardi. The name is derived from the Sanskrit word for bamboo shoot, karira. This fermented bamboo shoot is used in various culinary preparations, notably amil, a sour vegetable soup. It is also made into pancakes using rice flour as a binding agent. Shoots that have turned slightly fibrous are fermented, dried, and ground to sand-sized particles to prepare a garnish known as hendua. It is also cooked with tender pumpkin leaves to make sag, a leafy green dish.
In Konkani cuisine, the tender shoots (kirlu) are grated and cooked with crushed jackfruit seeds to prepare kirla sukke.
In southern India and some regions of southwest China, the seeds of the dying bamboo plant are consumed as a grain known as "bamboo rice". The taste of cooked bamboo seeds is reported to be similar to wheat and the appearance similar to rice, but bamboo seeds have been found to have lower nutrient levels than both. The seeds can be pulverized into a flour with which to make cakes.
The Indian state of Sikkim has promoted bamboo water bottles to keep the state free from plastic bottles.
The empty hollow in the stalks of larger bamboo is often used to cook food in many Asian cultures. Soups are boiled and rice is cooked in the hollows of fresh stalks of bamboo directly over a flame. Similarly, steamed tea is sometimes rammed into bamboo hollows to produce compressed forms of pu'er tea. Cooking food in bamboo is said to give the food a subtle but distinctive taste.
Fuel
Working
Writing surface
Bamboo was in widespread use in early China as a medium for written documents. The earliest surviving examples of such documents, written in ink on string-bound bundles of bamboo strips (or "slips"), date from the fifth century BC during the Warring States period. References in earlier texts surviving on other media indicate some precursor of these Warring States period bamboo slips was used as early as the late Shang period (from about 1250 BC).
Bamboo or wooden strips were used as the standard writing material during the early Han dynasty, and excavated examples have been found in abundance. Subsequently, paper began to displace bamboo and wooden strips from mainstream uses, and by the fourth century AD, bamboo slips had been largely abandoned as a medium for writing in China.
Bamboo fiber has been used to make paper in China since early times. A high-quality, handmade bamboo paper is still produced in small quantities. Coarse bamboo paper is still used to make spirit money in many Chinese communities.
Bamboo pulps are mainly produced in China, Myanmar, Thailand, and India, and are used in printing and writing papers. Several paper mills depend on bamboo forests; the Ballarpur paper mills (Chandrapur, Maharashtra) use bamboo for paper production. The most common bamboo species used for paper are Dendrocalamus asper and Bambusa blumeana. It is also possible to make dissolving pulp from bamboo. The average fiber length is similar to that of hardwoods, but the properties of bamboo pulp are closer to those of softwood pulps because bamboo has a very broad fiber length distribution. With the help of molecular tools, it is now possible to distinguish superior fiber-yielding species and varieties even at juvenile stages of their growth, which can help keep merchandise production unadulterated.
In Central India, there are regular bamboo working circles in forest areas of Maharashtra, Madhya Pradesh, Odisha, and Chhattisgarh. Most of the bamboo is harvested for papermaking. Bamboo is cut three years after germination. No cutting is done during the rainy season (July–September); broken and malformed culms are harvested first.
Writing pen
In olden times, people in India used hand-made pens (known as Kalam or boru (बोरू)) made from thin bamboo sticks (with diameters of 5–10 mm and lengths of 100–150 mm) by simply peeling them on one side and making a nib-like pattern at the end. The pen would then be dipped in ink for writing.
Textiles
Since the fibers of bamboo are very short (less than ), they are not usually transformed into yarn by a natural process. Textiles labeled as being made of bamboo are usually rayon produced from the fibers by a chemically intensive process: the fibers are broken down with chemicals, including lye, carbon disulfide, and strong acids, and extruded through mechanical spinnerets. Retailers have sold both end products as "bamboo fabric" to cash in on bamboo's ecofriendly cachet. The Canadian Competition Bureau and the US Federal Trade Commission, as of mid-2009, have been cracking down on the practice of labeling bamboo rayon as natural bamboo fabric. Under the guidelines of both agencies, these products must be labeled as rayon, with the optional qualifier "from bamboo".
Fabric
Construction
Bamboo, like true wood, is a natural building material with a high strength-to-weight ratio useful for structures. In its natural form, bamboo as a construction material is traditionally associated with the cultures of South Asia, East Asia, and the South Pacific, to some extent in Central and South America, and by extension in the aesthetic of Tiki culture.
In China and India, bamboo was used to hold up simple suspension bridges, either by making cables of split bamboo or twisting whole culms of sufficiently pliable bamboo together. One such bridge in the area of Qian-Xian is referenced in writings dating back to 960 AD and may have stood since as far back as the third century BC, due largely to continuous maintenance.
Bamboo has also long been used as scaffolding; the practice has been banned in China for buildings over six stories, but is still in continuous use for skyscrapers in Hong Kong.
In the Philippines, the nipa hut is a fairly typical example of the most basic sort of housing where bamboo is used; the walls are split and woven bamboo, and bamboo slats and poles may be used as its support.
In Japanese architecture, bamboo is used primarily as a supplemental or decorative element in buildings such as fencing, fountains, grates, and gutters, largely due to the ready abundance of quality timber.
Many ethnic groups in remote areas that have water access in Asia use bamboo that is 3–5 years old to make rafts. They use 8 to 12 poles, long, laid together side by side to a width of about . Once the poles are lined up together, they cut a hole crosswise through the poles at each end and use a small bamboo pole pushed through that hole like a screw to hold all the long bamboo poles together. Floating houses use whole bamboo stalks tied together in a big bunch to support the house floating in the water.
Fishing and aquaculture
Due to its flexibility, bamboo is also used to make fishing rods. The split cane rod is especially prized for fly fishing.
Firecrackers
Bamboo has been traditionally used in Malaysia as a firecracker called a meriam buluh or bamboo cannon. Four-foot-long sections of bamboo are cut, and a mixture of water and calcium carbide is introduced. The resulting acetylene gas is ignited with a stick, producing a loud bang.
Weapons
Bamboo has often been used to construct weapons and is still incorporated in several Asian martial arts.
A bamboo staff, sometimes with one end sharpened, is used in the Tamil martial art of silambam, a word derived from a term meaning "hill bamboo".
Staves used in the Indian martial art of gatka are commonly made from bamboo, a material favoured for its light weight.
A bamboo sword called a shinai is used in the Japanese martial art of kendo.
Bamboo is used for crafting the bows, called yumi, and arrows used in the Japanese martial art kyūdō.
The first gunpowder-based weapons, such as the fire lance, were made of bamboo.
Another Chinese weapon, the langxian or "wolf brush spear", was made from bamboo; some variants were simply long bamboo poles with a spearhead that still had layers of leaves attached. The langxian was mainly used as a defensive weapon in Qi Jiguang's Mandarin Duck Formation.
Sharpened bamboo javelins weighted with sand known as bagakay were used as disposable missile weapons in both land and naval warfare in the Philippines. They were thrown in groups at a time at enemy ships or massed enemy formations. Non-disposable finely-crafted throwing spears made from bamboo weighted with sand known as sugob were also used. Sugob were mainly used for close-quarters combat and were only thrown when they could be retrieved.
Metal-tipped blowgun-spears called sumpit (or sumpitan), used by various ethnic groups in the islands of the Philippines, Borneo, and Sulawesi, were generally made from hollowed bamboo. They fired thick, short darts dipped in the concentrated sap of Antiaris toxicaria, which could cause lethal cardiac arrest.
The simple sharpened bamboo spear, known as bambu runcing (literally 'sharp bamboo' or 'pointed bamboo'), is a legendary symbol of the Indonesian revolutionary spirit, embodying the will of the often ill-equipped Indonesian people to fight for independence against the Dutch occupation, which held air and naval supremacy along with Commonwealth support.
Punji sticks are stakes of sharpened bamboo typically used in area denial and booby traps. Punji sticks were widely used in the Vietnam War by the Viet Cong.
Desalination
Bamboo can be used in water desalination. A bamboo filter is used to remove the salt from seawater.
Musical instruments
Indicator of climate change
The Song dynasty (960–1279 AD) Chinese scientist and polymath Shen Kuo (1031–1095) used the evidence of underground petrified bamboo found in the dry northern climate of Yan'an, Shanbei region, Shaanxi province to support his geological theory of gradual climate change.
Kitchenware and other usage
Bamboo is frequently used for cooking and eating utensils in many cultures, including chopsticks, bamboo steamers, trays, and tea scoops. In modern times, some see bamboo tools as an eco-friendly alternative to other manufactured utensils. Several manufacturers also offer bamboo bicycles, surfboards, snowboards, and skateboards.
Bamboo has traditionally been used to make a wide range of everyday utensils and cutting boards, particularly in Japan, where archaeological excavations have uncovered bamboo baskets dating to the Late Jōmon period (2000–1000 BC). Bamboo also has a long history of use in Asian furniture. Chinese bamboo furniture is a distinct style based on a millennia-long tradition, and bamboo is also used for floors due to its high hardness.
Additionally, bamboo is used to create bracelets, earrings, necklaces, and other jewelry.
In culture
Several Asian cultures, including that of the Andaman Islands, believe humanity emerged from a bamboo stem.
China
Bamboo's long life makes it a Chinese symbol of uprightness and an Indian symbol of friendship. The rarity of its blossoming has led to the flowers' being regarded as a sign of impending famine. This may be due to rats feeding upon the profusion of flowers, then multiplying and destroying a large part of the local food supply. The most recent flowering began in May 2006 (see Mautam). Various bamboo species bloom in this manner about every 28–60 years.
In Chinese culture, the bamboo, plum blossom, orchid, and chrysanthemum (often known as méi lán zhú jú in Chinese) are collectively referred to as the Four Gentlemen. These four plants also represent the four seasons and, in Confucian ideology, four aspects of the junzi ("prince" or "noble one"). The pine (sōng ), the bamboo (zhú ), and the plum blossom (méi ) are also admired for their perseverance under harsh conditions, and are together known as the "Three Friends of Winter" () in Chinese culture.
Attributions of character
Bamboo, one of the "Four Gentlemen" (bamboo, orchid, plum blossom, and chrysanthemum), plays such an important role in traditional Chinese culture that it is even regarded as a model of gentlemanly behavior. Because bamboo displays uprightness, tenacity, and modesty, people endow it with integrity, elegance, and plainness, though it is not physically strong. Countless poems praising bamboo written by ancient Chinese poets are actually metaphors for people who exhibited these characteristics. The Tang poet Bai Juyi (772–846) thought that to be a gentleman, a man does not need to be physically strong, but he must be mentally strong, upright, and persevering; just as bamboo is hollow, he should open his heart to accept anything of benefit and never harbor arrogance or prejudice.
Bamboo is not only a symbol of the gentleman but also plays an important role in Buddhism, which was introduced into China in the first century. As the canons of Buddhism forbid cruelty to animals, meat and eggs were not allowed in the diet, and the tender bamboo shoot (sǔn in Chinese) became a nutritious alternative. Preparation methods developed over thousands of years have come to be incorporated into Asian cuisines, especially for monks. The Buddhist monk Zan Ning wrote a manual of the bamboo shoot called Sǔn Pǔ () offering descriptions and recipes for many kinds of bamboo shoots. Bamboo shoots have always been a traditional dish on the Chinese dinner table, especially in southern China.
In ancient times, those who could afford a big house with a yard would plant bamboo in their garden.
Mythology
In a Chinese legend, the Emperor Yao gave two of his daughters to the future Emperor Shun as a test of his potential to rule. Shun passed the test, proving able to run his household with the two emperor's daughters as wives, and Yao therefore made Shun his successor, bypassing his own unworthy son. After Shun's death, the tears of his two bereaved wives fell upon the bamboo growing there, which is said to explain the origin of spotted bamboo. The two women later became goddesses, the Xiangshuishen, after drowning themselves in the Xiang River.
Japan
Bamboo is a symbol of prosperity in Japan and is used to make New Year's decorations called kadomatsu. Bamboo forests sometimes surround Shinto shrines and Buddhist temples as part of a sacred barrier against evil. In the folktale Tale of the Bamboo Cutter (Taketori Monogatari), princess Kaguya emerges from a shining bamboo section.
In Japan, the Chinese "Three Friends of Winter" (saikan san'yū) concept is traditionally used as a ranking system, with pine (matsu) as the first rank, bamboo (take) as the second rank, and plum (ume) as the third rank. This system appears in many traditional arts, such as grades of sushi sets, kimono embroidery, and tiers of accommodation at traditional ryokan inns.
Bamboo is known to be a strong material and able to withstand extreme heat. It is the only plant known to have survived the atomic bombing of Hiroshima in 1945.
Malaysia
In Malaysia, a similar story includes a man who dreams of a beautiful woman while sleeping under a bamboo plant; he wakes up and breaks the bamboo stem, discovering the woman inside.
Philippines
In Philippine mythology, one of the more famous creation accounts tells of the first man Malakás ("Strong") and the first woman Maganda ("Beautiful") each emerging from one half of a split bamboo stem on an island formed after the battle between Sky and Ocean.
Vietnam
Attributions of character
Bamboo plays an important part in the culture of Vietnam. It symbolizes the spirit of Vovinam (a Vietnamese martial art): cương nhu phối triển (coordination between hard and soft). Bamboo also symbolizes the Vietnamese hometown and the Vietnamese soul: gentlemanliness, straightforwardness, diligence, optimism, unity, and adaptability. A Vietnamese proverb says, "Tre già, măng mọc" (When the bamboo is old, the bamboo sprouts appear), meaning that Vietnam will never be annihilated: if the previous generation dies, the children take their place. Thus the Vietnamese nation and Vietnamese values will be maintained and developed eternally. Traditional Vietnamese villages are surrounded by thick bamboo hedges (lũy tre).
During Ngô Đình Diệm's presidency, bamboo was the national symbol of South Vietnam; it was featured on the national coat of arms, the presidential standard, and South Vietnamese đồng coins of the period.
Mythology
A bamboo cane is also the weapon of the Vietnamese legendary hero Thánh Gióng, who grew to adulthood suddenly and magically at the age of three out of his wish to liberate his land from the Ân invaders. The ancient Vietnamese legend Cây tre trăm đốt (The Hundred-knot Bamboo Tree) tells of a poor young farmer who fell in love with his landlord's beautiful daughter. The farmer asked the landlord for his daughter's hand in marriage, but the proud landlord would not allow her to be bound in marriage to a poor farmer. The landlord decided to foil the marriage with an impossible deal: the farmer must bring him a "bamboo tree of 100 nodes". But Gautama Buddha (Bụt) appeared to the farmer and told him that such a tree could be made from 100 nodes taken from several different trees. Bụt gave him four magic words to attach the many nodes of bamboo: Khắc nhập, khắc xuất, meaning "joined together immediately, fallen apart immediately". The triumphant farmer returned to the landlord and demanded his daughter. Curious to see such a long bamboo, the landlord was magically joined to it when he touched it as the young farmer said the first two magic words. The story ends with the happy marriage of the farmer and the landlord's daughter after the landlord agreed to the marriage and asked to be separated from the bamboo.
Africa
Tanzania
Tanzania possesses a large diversity of bamboo species.
Bozo
The Bozo ethnic group of West Africa take their name from the Bambara phrase bo-so, which means "bamboo house".
Saint Lucia
Bamboo is also the national plant of St. Lucia.
Hawaiian
Hawaiian bamboo ('ohe) is a kinolau or body form of the Polynesian creator god Kāne.
North America
Arundinaria bamboos, known as giant cane or river cane, are a central part of the material cultures of Southeastern Native American nations, so much so that they have been called "the plastic of the Southeastern Indians." Among the Cherokee, river cane has been used to make waterproof baskets, mats, fishing poles, flutes, blowguns, arrows, and to build houses, among other uses; the seed and young shoots are also edible. Traditional Cherokee double-woven baskets, crafted from river cane that has been split and dyed in various colors, are sometimes considered among the finest in the world. Since the North American bamboos are now rare, with 98% of their original extent eliminated, the Cherokee have initiated an effort to restore it.
See also
List of bamboo species
Bambuseae
Bamboo blossom
International Network for Bamboo and Rattan
Bamboo construction
Bamboo textile
Bamboo processing machine
Ceremonial pole
Mautam
References
Further reading
Bamboo – The Plant and its Uses. Part of the Tropical Forestry book series (TROPICAL, volume 10), 2015.
External links
Bamboo for Climate Change by INBAR.
Bamboo Structural Design ISO Standards
Building materials
Bamboo
National symbols of Saint Lucia
National symbols of Japan
National symbols of China
Rhizomatous plants
Stem vegetables | Bamboo | Physics,Engineering | 8,837 |
Domes were a characteristic element of the architecture of Ancient Rome and of its medieval continuation, the Byzantine Empire. They had widespread influence on contemporary and later styles, from Russian and Ottoman architecture to the Italian Renaissance and modern revivals. The domes were customarily hemispherical, although octagonal and segmented shapes are also known, and they developed in form, use, and structure over the centuries. Early examples rested directly on the rotunda walls of round rooms and featured a central oculus for ventilation and light. Pendentives became common in the Byzantine period, providing support for domes over square spaces.
Early wooden domes are known only from a literary source, but the use of wooden formwork, concrete, and unskilled labor enabled domes of monumental size in the late Republic and early Imperial period, such as the so-called "Temple of Mercury" bath hall at Baiae. Nero introduced the dome into Roman palace architecture in the 1st century and such rooms served as state banqueting halls, audience rooms, or throne rooms. The Pantheon's dome, the largest and most famous example, was built of concrete in the 2nd century and may have served as an audience hall for Hadrian. Imperial mausolea, such as the Mausoleum of Diocletian, were domed beginning in the 3rd century. Some smaller domes were built with a technique of using ceramic tubes in place of a wooden centering for concrete, or as a permanent structure embedded in the concrete, but light brick became the preferred building material over the course of the 4th and 5th centuries. Brick ribs allowed for a thinner structure and facilitated the use of windows in the supporting walls, replacing the need for an oculus as a light source.
Christian baptisteries and shrines were domed in the 4th century, such as the Lateran Baptistery and the likely wooden dome over the Church of the Holy Sepulchre. Constantine's octagonal church in Antioch may have been a precedent for similar buildings for centuries afterward. The first domed basilica may have been built in the 5th century, with a church in southern Turkey being the earliest proposed example, but the 6th century architecture of Justinian made domed church architecture standard throughout the Roman east. His Hagia Sophia and Church of the Holy Apostles inspired copies in later centuries.
Cruciform churches with domes at their crossings, such as the churches of Hagia Sophia in Thessaloniki and St. Nicholas at Myra, were typical of 7th and 8th century architecture and bracing a dome with barrel vaults on four sides became the standard structural system. Domes over windowed drums of cylindrical or polygonal shape were standard after the 9th century. In the empire's later period, smaller churches were built with smaller diameter domes, normally less than after the 10th century. Exceptions include the 11th century domed-octagons of Hosios Loukas and Nea Moni, and the 12th century Chora Church, among others. The cross-in-square plan, with a single dome at the crossing or five domes in a quincunx pattern, as at the Church of St. Panteleimon, was the most popular type from the 10th century until the fall of Constantinople in 1453.
Overview
Rounded arches, vaults, and domes distinguish Roman architecture from that of Ancient Greece and were facilitated by the use of concrete and brick. By varying the density of the aggregate material, the weight of the concrete could be altered, allowing lighter layers to be laid at the top of concrete domes. But concrete domes also required expensive wooden formwork, also called shuttering, to be built and kept in place during the curing process, and it would usually have to be destroyed to be removed. Formwork for brick domes need not be kept in place as long and could be more easily reused. The mortar and aggregate of Roman concrete was built up in horizontal layers laid by hand against wooden formwork, with the thickness of the layers determined by the length of the workday, rather than being poured into a mold as concrete is today. Roman concrete domes were thus built similarly to the earlier corbel domes of the Mediterranean region, although they have different structural characteristics. The aggregate used by the Romans was often rubble, but lightweight aggregate in the upper levels served to reduce stresses. Empty "vases and jugs" could be hidden inside to reduce weight. The dry concrete mixtures used by the Romans were compacted with rams to eliminate voids, and added animal blood acted as a water reducer. Because Roman concrete was weak in tension, it did not provide any structural advantage over the use of brick or stone. But, because it could be constructed with unskilled slave labor, it provided a constructional advantage and facilitated the building of large-scale domes.
Roman domes were used in baths, villas, palaces, and tombs. Oculi were common features. They were customarily hemispherical in shape and partially or totally concealed on the exterior. In order to buttress the horizontal thrusts of a large hemispherical masonry dome, the supporting walls were built up beyond the base to at least the haunches of the dome and the dome was then also sometimes covered with a conical or polygonal roof. A variety of other shapes, including shallow saucer domes, segmental domes, and ribbed domes were also sometimes used. Stone or brick ribs were usually flush with the inside surface of Roman domes where they would not have been visible. The audience halls of many imperial palaces were domed. Domes were "closely associated with senatorial, imperial, and state-sponsored patrons" and proliferated in the capital cities and other cities with imperial affiliations. Domes were also very common over polygonal garden pavilions. Depictions on late Roman coins suggest that wooden bulbous domes sheathed in metal were used on late Roman towers in the eastern portion of the empire. Construction and development of domes declined in the west with the decline and fall of the western portion of the empire.
In Byzantine architecture, a supporting structure of four arches with pendentives between them allowed the spaces below domes to be opened up. Pendentives allowed for weight loads to be concentrated at just four points on a more practical square plan, rather than a circle. Until the 9th century, domes were low with thick buttressing and did not project much into the exterior of their buildings. Drums were cylindrical when used and likewise low and thick. After the 9th century, domes were built higher and used polygonal drums decorated with engaged columns and arcades. Exterior dome decoration was more elaborate by the 12th century and included engaged columns along with niches, blind arcades, and string courses. Multiple domes on a single building were normal.
Domes were important elements of baptisteries, churches, and tombs. They were normally hemispherical and had, with occasional exceptions, windowed drums. Roofing for domes ranged from simple ceramic tile to more expensive, more durable, and more form-fitting lead sheeting. The domes and drums typically incorporated wooden tension rings at several levels to resist deformation in the mortar and allow for faster construction. Metal clamps between stone cornice blocks, metal tie rods, and metal chains were also used to stabilize domed buildings. Timber belts at the bases of domes helped to stabilize the walls below them during earthquakes, but the domes themselves remained vulnerable to collapse. The surviving ribbed or pumpkin dome examples in Constantinople are structurally equivalent and those techniques were used interchangeably, with the number of divisions corresponding to the number of windows. Aided by the small scale of churches after the 6th century, such ribbed domes could be built with formwork only for the ribs. Pumpkin domes could have been built in self-supporting rings and small domical vaults were effectively corbelled, dispensing with formwork altogether.
History
Late Republic and early Imperial period
Roman baths played a leading role in the development of domed construction in general, and monumental domes in particular. Modest domes in baths dating from the 2nd and 1st centuries BC are seen in Pompeii, in the cold rooms of the Terme Stabiane and the Terme del Foro. These domes are very conical in shape, similar to those on an Assyrian bas-relief found in Nineveh. At a Roman era tepidarium in Cabrera de Mar, Spain, a dome has been identified from the middle of the 2nd century BC that used a refined version of the parallel arch construction found in an earlier Hellenistic bath dome in Sicily. According to Vitruvius, the temperature and humidity of domed warm rooms could be regulated by raising or lowering bronze discs located under an oculus. Domes were particularly well suited to the hot rooms of baths circular in plan to facilitate even heating from the walls. However, the extensive use of domes did not occur before the 1st century AD.
Varro's book on agriculture describes an aviary with a wooden dome decorated with the eight winds that is compared by analogy to the eight winds depicted on the Tower of the Winds, which was built in Athens at about the same time. This aviary with its wooden dome may represent a fully developed type. Wooden domes in general would have allowed for very wide spans. Their earlier use may have inspired the development and introduction of large stone domes of previously unprecedented size. Complex wooden forms were necessary for dome centering and support during construction, and they seem to have eventually become more efficient and standardized over time. The "so-called " is a domed Greek cross structure dated to either the 1st century BC or the 1st century AD. The hemispherical dome was made from large stone ashlar blocks pierced by four holes with shafts extending diagonally up to the outside surface.
Domes reached monumental size in the Roman Imperial period. Although imprints of the formwork itself have not survived, deformations from the ideal of up to at the so-called "Temple of Mercury" in Baiae suggest a centering of eight radiating frames, with horizontal connectors supporting radial formwork for the shallow dome. The building, actually a concrete frigidarium pool for a bath, dates to either the late Roman Republic, or the reign of the first emperor Augustus (27 BC – 14 AD), making it the first large Roman dome. There are five openings in the dome: a circular oculus and four square skylights. The dome has a span of and is the largest known dome built before that of the Pantheon. It is also the earliest preserved concrete dome.
First century
While there are earlier examples in the Republican period and early Imperial period, the growth of domed construction increased under Emperor Nero and the Flavians in the 1st century AD, and during the 2nd century. Centrally planned halls become increasingly important parts of palace and palace villa layouts beginning in the 1st century, serving as state banqueting halls, audience rooms, or throne rooms. Formwork was arranged either horizontally or radially, but there is not enough surviving evidence from the 1st and 2nd centuries to say what was typical.
The opulent palace architecture of the Emperor Nero (54 – 68 AD) marks an important development. There is evidence of a dome in his Domus Transitoria at the intersection of two corridors, resting on four large piers, which may have had an oculus at the center. In Nero's Domus Aurea, or "Golden House", planned by Severus and Celer, the walls of a large octagonal room transition to an octagonal domical vault, which then transitions to a dome with an oculus. This is the earliest known example of a dome in the city of Rome itself.
The Domus Aurea was built after 64 AD and the dome was over in diameter. This octagonal and semicircular dome is made of concrete and the oculus is made of brick. The radial walls of the surrounding rooms buttress the dome, allowing the octagonal walls directly beneath it to contain large openings under flat arches and for the room itself to be unusually well lit. Because there is no indication that mosaic or other facing material had ever been applied to the surface of the dome, it may have been hidden behind a tent-like fabric canopy like the pavilion tents of Hellenistic (and earlier Persian) rulers. The oculus is unusually large, more than two-fifths the span of the room, and it may have served to support a lightweight lantern structure or tholos, which would have covered the opening. Circular channels on the upper surface of the oculus also support the idea that this lantern, perhaps itself domed, was the rotating dome referred to in written accounts.
According to Suetonius, the Domus Aurea had a dome that perpetually rotated on its base in imitation of the sky. It was reported in 2009 that newly discovered foundations of a round room may be those of a rotating domed dining hall. Also reported in contemporary sources is a ceiling over a dining hall in the palace fitted with pipes so that perfume could rain from the ceiling, although it is not known whether this was a feature of the same dome. The expensive and lavish decoration of the palace caused such scandal that it was abandoned soon after Nero's death and public buildings such as the Baths of Titus and the Colosseum were built at the site.
The only intact dome from the reign of Emperor Domitian is a wide example in what may have been a nymphaeum at his villa at Albano. It is now the church of . Domitian's 92 AD Domus Augustana established the apsidal semi-dome as an imperial motif. Square chambers in his palace on the Palatine Hill used pendentives to support domes. His palace contained three domes resting over walls with alternating apses and rectangular openings. An octagonal domed hall existed in the domestic wing. Unlike Nero's similar octagonal dome, its segments extended all the way to the oculus. The dining hall of this private palace, called the Coenatio Jovis, or Dining Hall of Jupiter, contained a rotating ceiling like the one Nero had built, but with stars set into the simulated sky.
Second century
During the reign of Emperor Trajan, domes and semi-domes over exedras were standard elements of Roman architecture, possibly due to the efforts of Trajan's architect, Apollodorus of Damascus, who was famed for his engineering ability. Two rotundas in diameter were finished in 109 AD as part of the Baths of Trajan, built over the Domus Aurea, and exedras wide were built as part of the markets north-east of his forum. The architecture of Trajan's successor, Hadrian, continued this style. Three wide exedras at Trajan's Baths have patterns of coffering that, as in the later Pantheon, align with lower niches only on the axes and diagonals and, also as in the Pantheon, that alignment is sometimes with the ribs between the coffers, rather than with the coffers themselves.
The Pantheon in Rome, completed by Emperor Hadrian as part of the Baths of Agrippa, has the most famous, best preserved, and largest Roman dome. Its diameter was more than twice as wide as any known earlier dome. Although considered an example of Hadrianic architecture, there is brickstamp evidence that the rebuilding of the Pantheon in its present form was begun under Trajan. Speculation that the architect of the Pantheon was Apollodorus has not been proven, although there are stylistic commonalities between his large coffered half-domes at Trajan's Baths and the dome of the Pantheon. Other indicators that the designer was either Apollodorus or someone in his circle who was "closer in artistic sensibility to Trajan’s era than Hadrian’s" are the monumental size and the incorporation of tiny passages in the structure. The building's dimensions seem to reference Archimedes' treatise On the Sphere and Cylinder, the dome may use rows of 28 coffers because 28 was considered by the Pythagoreans to be a perfect number, and the design balances its complexity with underlying geometrical simplicity. Dating from the 2nd century, it is an unreinforced concrete dome wide resting on a circular wall, or rotunda, thick. This rotunda, made of brick-faced concrete, contains a large number of relieving arches and voids. Seven interior niches and the entrance way divide the wall structurally into eight virtually independent piers. These openings and additional voids account for a quarter of the rotunda wall's volume. The only opening in the dome is the brick-lined oculus at the top, in diameter, that provides light and ventilation for the interior.
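The Pythagorean notion invoked for the 28 coffer rows is concrete enough to verify: a perfect number is one equal to the sum of its proper divisors. A minimal check in Python (the helper name is ours, purely illustrative):

```python
# A perfect number equals the sum of its proper divisors.
def is_perfect(n: int) -> bool:
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

divisors = [d for d in range(1, 28) if 28 % d == 0]
print(divisors)        # [1, 2, 4, 7, 14]
print(sum(divisors))   # 28
print(is_perfect(28))  # True
```

The only other perfect numbers within practical reach of a coffering scheme are 6 and 496, which makes 28 a natural choice if the symbolism was intended.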
The shallow coffering in the dome accounts for a less than five percent reduction in the dome's mass, and is mostly decorative. The aggregate material hand-placed in the concrete is heaviest at the base of the dome and changes to lighter materials as the height increases, dramatically reducing the stresses in the finished structure. In fact, many commentators have cited the Pantheon as an example of the revolutionary possibilities for monolithic architecture provided by the use of Roman pozzolana concrete. However, vertical cracks seem to have developed very early, such that in practice the dome acts as an array of arches with a common keystone, rather than as a single unit. The exterior step-rings used to compress the "haunches" of the dome, which would not be necessary if the dome acted as a monolithic structure, may be an acknowledgement of this by the builders themselves. Such buttressing was common in Roman arch construction. The cracks in the dome can be seen from the upper internal chambers of the rotunda, but have been covered by re-rendering on the inside surface of the dome and by patching on the outside of the building. The Pantheon's roof was originally covered with gilt bronze tiles, but these were removed in 663 by Emperor Constans II and replaced with lead roofing.
The function of the Pantheon remains an open question. Strangely for a temple, its inscription, which attributes this third building at the site to the builder of the first, Marcus Agrippa, does not mention any god or group of gods. Its name, Pantheon, comes from the Greek for "all gods" but is unofficial, and it was not included in the list of temples restored by Hadrian in the Historia Augusta. Circular temples were small and rare, and Roman temples traditionally allowed for only one divinity per room. The Pantheon more resembles structures found in imperial palaces and baths. Hadrian is believed to have held court in the rotunda using the main apse opposite the entrance as a tribune, which may explain its very large size. Later Roman buildings similar to the Pantheon include a (c. 145) in the old Hellenistic city of Pergamon and the so-called "Round Temple" at Ostia (c. 230–240), which may have been related to the Imperial cult. The Pergamon dome was about 80 Roman feet wide, versus about 150 for the Pantheon, and made of brick over a cut stone rotunda. The Ostia dome was 60 Roman feet wide and made of brick-faced concrete. No later dome built in the Imperial era came close to the span of the Pantheon. It remained the largest dome in the world for more than a millennium and is still the world's largest unreinforced concrete dome.
Use of concrete facilitated the complex geometry of the octagonal domed hall at the 2nd century Small Thermal Baths of Hadrian's Villa in Tivoli. The vaulting has collapsed, but a virtual reconstruction suggests that the walls of the octagonal hall, which alternate flat and convex, merged into a spherical cap. Segmented domes made of radially concave wedges, or of alternating concave and flat wedges, appear under Hadrian in the 2nd century and most preserved examples of the style date from this period. Hadrian's villa has examples at the Piazza D'Oro and in the semidome of the Serapeum. Recorded details of the decoration of the segmented dome at the Piazza D'Oro suggest it was made to evoke a billowing tent, perhaps in imitation of the canopies used by Hellenistic kings. Other examples exist at the Hadrianic baths of Otricoli and the so-called "Temple of Venus" at Baiae. This style of dome required complex centering and radially oriented formwork to create its tight curves, and the earliest surviving direct evidence of radial formwork is found at the caldarium of the Large Baths at Hadrian's villa. Hadrian was an amateur architect, and it was apparently domes like these of Hadrian's that Trajan's architect, Apollodorus of Damascus, derisively called "pumpkins" before Hadrian became emperor. According to Dio Cassius, the memory of this insult contributed to Hadrian, as emperor, having Apollodorus exiled and killed.
In the middle of the 2nd century, some of the largest domes were built near present-day Naples, as part of large bath complexes taking advantage of the volcanic hot springs in the area. At the bath complex at Baiae, there are remains of a collapsed dome spanning , called the "Temple of Venus", and a larger half-collapsed dome spanning called the "Temple of Diana". The dome of the "Temple of Diana", which may have been a nymphaeum as part of the bath complex, can be seen to have had an ogival section made of horizontal layers of mortared brick and capped with light tufa. It dates to the second half of the 2nd century and is the third largest dome known from the Roman world. The second largest is the collapsed "Temple of Apollo" built nearby along the shore of Lake Avernus. The span cannot be precisely measured due to its ruined state, but it was more than in diameter.
Octagonal rooms of the Baths of Antoninus in Carthage were covered with cloister vaults and have been dated to 145–160.
In the second half of the 2nd century in North Africa, a distinctive type of nozzle-shaped terracotta tube was developed in the tradition of the terracotta tube dome at the Hellenistic era baths of Morgantina, an idea that had been preserved in the use of interlocking terracotta pots for kiln roofs. These tubes could be mass-produced on potter's wheels and interlocked to form a permanent centering for concrete domes, avoiding the use of wooden centering altogether. The technique spread mainly in the western Mediterranean.
Although rarely used, the pendentive dome was known in 2nd century Roman architecture and possibly earlier, in funerary monuments such as the and the on the Via Nomentana. Pendentive domes would be used much more widely in the Byzantine period. A "Roman tomb in Palestine at Kusr-en-Nêuijîs" had a pendentive dome over the square intersection of cruciform barrel vaults and has been dated to the 2nd century. A small dome on spherical pendentives at Beurey-Beauguay in the Côte-d'Or department of France has been dated to the 2nd or 3rd century. A stone voussoir dome over the caldarium of the West Bath of Jerash has been dated to the 2nd century.
Third century
The large rotunda of the Baths of Agrippa, the oldest public baths in Rome, has been dated to the Severan period at the beginning of the 3rd century, but it is not known whether this is an addition or simply a reconstruction of an earlier domed rotunda.
In the 3rd century, imperial mausolea began to be built as domed rotundas rather than tumulus structures or other types, following similar monuments by private citizens. Pagan and Christian domed mausolea from this time can be differentiated in that the structures of the buildings also reflect their religious functions. The pagan buildings are typically two-story, dimly lit, free-standing structures with a lower crypt area for the remains and an upper area for devotional sacrifice. Christian domed mausolea contain a single well-lit space and are usually attached to a church. The first St. Peter's Basilica would later be built near a preexisting early 3rd century domed rotunda that may have been a mausoleum. In the 5th century the rotunda would be dedicated to St. Andrew and joined to the Mausoleum of Honorius.
Examples from the 3rd century include the brick dome of the Mausoleum of Diocletian, and the mausoleum at Villa Gordiani. The Villa Gordiani also contains remains of an oval gored dome. The Mausoleum of Diocletian uses small arched squinches of brick built up from a circular base in an overlapping scales pattern, called a "stepped squinches dome". The scales pattern was a popular Hellenistic motif adopted by the Parthians and Sasanians, and such domes are likely related to Persian "squinch vaults". In addition to the mausoleum, the Palace of Diocletian also contains a rotunda near the center of the complex that may have served as a throne room. It has side niches similar to those of an octagonal mausoleum but was located at the end of an apparently barrel-vaulted hall like the arrangement found in later Sasanian palaces.
Masonry domes were less common in the Roman provinces, although the 3rd century "Temple of Venus" at Baalbek was built with a stone dome in diameter. A stone corbelled dome wide, later known as "Arthur's O'on", was located in Scotland three kilometers north of the Falkirk fort on the Antonine Wall and may have been a Roman victory monument from the reign of Carausius. It was destroyed in 1743.
The technique of building lightweight domes with interlocking hollow ceramic tubes further developed in North Africa and Italy in the late 3rd and early 4th centuries. By the 4th century, the thin and lightweight tubed vaulting had become a vaulting technique in its own right, rather than simply serving as a permanent centering for concrete. It was used in early Christian buildings in Italy. Arranging these terracotta tubes in a continuous spiral created a dome that was not strong enough for very large spans, but required only minimal centering and formwork. The later dome of the Baptistry of Neon in Ravenna is an example.
Fourth century
In the 4th century, Roman domes proliferated due to changes in the way domes were constructed, including advances in centering techniques and the use of brick ribbing. The so-called "Temple of Minerva Medica", for example, used brick ribs along with step-rings and lightweight pumice aggregate concrete to form a decagonal dome. The material of choice in construction gradually transitioned during the 4th and 5th centuries from stone or concrete to lighter brick in thin shells. The use of ribs stiffened the structure, allowing domes to be thinner with less massive supporting walls. Windows were often used in these walls and replaced the oculus as a source of light, although buttressing was sometimes necessary to compensate for large openings. The Mausoleum of Santa Costanza has windows beneath the dome and nothing but paired columns beneath that, using a surrounding barrel vault to buttress the structure.
The domed Mausoleum of Galerius was built around 300 AD close to the imperial palace as either a mausoleum or a throne room. It was converted into a church in the 5th century. Also in Thessaloniki, at the Tetrarchic palace, an octagonal building has been excavated with a 24.95-meter span that may have been used as a throne room. It is known not to have been used as a church and was unsuitable as a mausoleum, and was in use for some period between about 311 and its destruction sometime before about 450. The octagonal "Domus Aurea", or "Golden Octagon", built by Emperor Constantine in 327 at the imperial palace of Antioch likewise had a domical roof, presumably of wood and covered with gilded lead. It was dedicated two years after the Council of Nicaea to "Harmony, the divine power that unites Universe, Church, and Empire". It may have been both the cathedral of Antioch and the court church of Constantine, and the precedent for the later octagonal-plan churches near palaces of Saints Sergius and Bacchus and Hagia Sophia by Justinian and Aachen Cathedral by Charlemagne. The dome was rebuilt by 537–8 with cypress wood from Daphne after being destroyed in a fire. Most domes on churches in the Syrian region were built of wood, like that of the later Dome of the Rock in Jerusalem, and the dome of the Domus Aurea survived a series of earthquakes in the 6th century that destroyed the rest of the building. There is no record of the church being rebuilt after the earthquake of 588, perhaps due to the general abandonment of many public buildings in what was no longer a capital of the Empire.
Constantine built the Church of the Nativity in Bethlehem around 333 as a large basilica with an octagonal structure at the eastern end, over the cave said to be the birthplace of Jesus. The domed octagon had an external diameter of 18 meters. It was later destroyed and when rebuilt by Justinian the octagon was replaced with a tri-apsidal structure.
Centralized buildings of circular or octagonal plan also came to be used for baptisteries and reliquaries, as those shapes suited assembly around a single object. Baptisteries began to be built in the manner of domed mausolea during the 4th century in Italy. The octagonal Lateran Baptistery or the baptistery of the Holy Sepulchre may have been the first, and the style spread during the 5th century. In the second half of the 4th century, domed octagonal baptisteries similar in form to contemporary imperial mausolea developed in the region of North Italy near Milan. Examples include the (late 4th century), a domed baptistery in Naples (4th to 6th centuries), and a baptistery in Aquileia (late 4th century). Part of a baths complex begun in the early 4th century, the brick Church of St. George in Sofia was a caldarium that was converted in the middle of the 5th century. It is a rotunda with four apse niches in the corners. The best preserved example of Roman architecture in the city, it has been used as a baptistery, church, mosque, and mausoleum over the centuries. The dome rises to about 14 m from the floor with a diameter of about 9.5 m. Its original function as a hypocaust hall is disputed and, based on its form, the building may originally have been a Christian martyrium. It was half-destroyed by the Huns in 447 and was rebuilt in the 11th century.
In the middle of the 4th century in Rome, domes were built as part of the Baths of Constantine and the . Domes over the caldaria, or hot rooms, of the older Baths of Agrippa and the Baths of Caracalla were also rebuilt at this time. Between the second half of the 4th century and the middle of the 5th century, domed mausolea for wealthy families were built attached to a new type of martyrial basilica, before burials within the basilica itself, closer to the martyr's remains, made such attached buildings obsolete. A pagan rotunda from this period located on the Via Sacra was later incorporated into the Basilica of Saints Cosmas and Damian as a vestibule around 526. The was built with a dome using the pottery technique of Ravenna, and was later connected to the Basilica of Sant'Ambrogio.
Christian mausolea and shrines developed into the "centralized church" type, often with a dome over a raised central space. The Church of the Holy Apostles, or Apostoleion, probably planned by Constantine but built by his successor Constantius in the new capital city of Constantinople, combined the congregational basilica with the centralized shrine. With a similar plan to that of the Church of Saint Simeon Stylites, four naves projected from a central rotunda containing Constantine's tomb and spaces for the tombs of the twelve Apostles. Above the center may have been a clerestory with a wooden dome roofed with bronze sheeting and gold accents. The oblong decagon of today's St. Gereon's Basilica in Cologne, Germany, was built upon an extraordinary and richly decorated 4th century Roman building with an apse, semi-domed niches, and dome. A church built in the city's northern cemetery, its original dedication is unknown. It may have been built by Julianus, the governor of Gaul from 355 to 360 who would later become emperor, as a mausoleum for his family. The oval space may have been patterned after imperial audience halls or buildings such as the Temple of Minerva Medica.
The largest centrally planned Early Christian church, Milan's San Lorenzo Maggiore, was built in the middle of the 4th century while that city served as the capital of the Western Empire and may have been domed with a light material, such as timber or cane. There are two theories about the shape of this dome: a Byzantine-style dome on spherical pendentives with a ring of windows similar to domes of the later Justinian era, or an octagonal cloister vault following Roman trends and like the vaulting over the site's contemporary chapel of Saint Aquiline, possibly built with vaulting tubes, pieces of which had been found in excavations. Although these tubes have been shown to date from a medieval reconstruction, there is evidence supporting the use of Roman concrete in the original. Alternatively, the central covering may have been a square groin vault. The building may have been the church of the nearby imperial palace and a proposed construction between 355 and 374 under the Arian bishop Auxentius of Milan, who later "suffered a kind of damnatio memoriae at the hands of his orthodox successors", may explain the lack of records about it. Fires in 1071 and 1075 damaged the building and the central covering collapsed in 1103. It was rebuilt with a Romanesque dome that lasted until 1573, when it collapsed and was replaced by the present structure. The original vaulting was concealed by a square drum externally rather than the octagon of today, which dates from the 16th century.
Fluted or coffered domed structures appear in art with greater frequency from the late 4th century.
The early church of St. John at Ephesus mentioned in a late fourth century account by Etheria appears to have been a timber-roofed cruciform building with arms of roughly equal length and four central piers supporting a dome approximately 3.5 meters wide.
Emperor Theodosius completed an octagonal domed church dedicated to John the Baptist in the Hebdomon suburb of Constantinople around 392. It contained the relic of the head of John the Baptist and served as a coronation site for a series of emperors. The remains were destroyed in 1965 and the exact layout is not known, but it may have been a double-shell octagon similar to the Basilica of San Vitale in Ravenna.
The Church of the Holy Sepulchre in Jerusalem was likely built with a wooden dome over the shrine by the end of the 4th century. The rotunda, in diameter and centered on the tomb of Christ, consisted of a domed center room surrounded by an ambulatory. The dome rose over a ground floor, gallery, and clerestory and may have had an oculus. The dome was about wide. Razed to the ground in 1009 by the Fatimid Caliph, it was rebuilt in 1048 by Emperor Constantine IX Monomachos, reportedly with a mosaic depicting Christ and the Twelve Apostles. The current dome is a 1977 renovation in thin reinforced concrete.
Fifth century
By the 5th century, structures with small-scale domed cross plans existed across the Christian world. Examples include the Mausoleum of Galla Placidia, the martyrium attached to the Basilica of San Simpliciano, and churches in Macedonia and on the coast of Asia Minor. In Italy, the Baptistery of San Giovanni in Naples and the Church of Santa Maria della Croce in Casarano have surviving early Christian domes. In Tolentino, the mausoleum of Catervus was modeled on the Pantheon, but at one-quarter scale and with three protruding apses, around 390–410. The Baptistery of Neon in Ravenna was completed in the middle of the 5th century and there were 5th century domes in the baptisteries at Padula and Novara. Small brick domes are also found in towers of Constantinople's early 5th century land walls. Underground cisterns in Constantinople, such as the Cistern of Philoxenos and the Basilica Cistern, were composed of a grid of columns supporting small domes, rather than groin vaults. The square bay with an overhead sail vault or dome on pendentives became the basic unit of architecture in the early Byzantine centuries, found in a variety of combinations.
Early examples of Byzantine domes existed over the hexagonal hall of the Palace of Antiochos, the hexagon at Gülhane, the martyrium of Sts. Karpos and Papylos, and the rotunda at the Myrelaion. The timber-roofed in Athens had a dome over its sanctuary. The 5th century St. Mary's church in Ephesus had small rectangular side rooms with sail vaults made of arched brick courses. The brick dome of the baptistery at St. Mary's was composed of a series of tightly arched meridional sections. The Church of Saint Simeon Stylites likely had a wooden polygonal dome over its central wide octagon.
In the city of Rome, at least 58 domes in 44 buildings are known to have been built before domed construction ended in the middle of the 5th century. The last imperial domed mausoleum in the city was that of Emperor Honorius, built in 415 next to St. Peter's Basilica. It was demolished in 1519 as part of the rebuilding of St. Peter's, but had a dome 15.7 meters wide and its appearance is known from some images. The last domed church in the city of Rome for centuries was Santo Stefano al Monte Celio around 460. It had an unusual centralized plan and a 22 meter wide dome made with , a technique that may have been imported from the new western capital of Ravenna. Although they continued to be built elsewhere in Italy, domes would not be built again within Rome until 1453. Other 5th century Italian domes may include (first half of the 5th century), the at the Basilica of Sant'Ambrogio, the chapel of St. Maria Mater Domini in the , and Sicily's of Malvagna (5th or 6th century) and San Pietro ad Baias (5th or 6th century).
In Jerusalem, Sion Church was built with a wooden dome between 456 and 460. The Church of the Kathisma was built along the road from Jerusalem to Bethlehem around 456 with an octagonal plan. It was built over the site of a rock said to be used as a seat by the Virgin Mary as she traveled to Bethlehem while pregnant with Jesus, corresponding to a story told in the Protoevangelium of James. The outer diameter was similar to that of the Church of the Holy Sepulchre at 26–27 meters, and the innermost octagon supported a dome 15.5 meters wide.
With the end of the Western Roman Empire, domes became a signature feature of the church architecture of the surviving Eastern Roman Empire. A transition from timber-roofed basilicas to vaulted churches seems to have occurred there between the late 5th century and the 7th century, with early examples in Constantinople, Asia Minor, and Cilicia. The first known domed basilica may have been a church at Meriamlik in southern Turkey, dated to between 471 and 494, although the ruins do not provide a definitive answer. It is possible earlier examples existed in Constantinople, where it has been suggested that the plan for the Meriamlik church itself was designed, but no domed basilica has been found there before the 6th century.
Sixth century
The 6th century marks a turning point for domed church architecture. Centrally planned domed churches had been built since the 4th century for very particular functions, such as palace churches or martyria, with a slight widening of use around 500 AD, but most church buildings were timber-roofed halls on the basilica plan. Under Justin I in the 520s, Justinian seems to have razed the Basilica of St. John in Ephesus and replaced it with a cruciform building on a Greek-cross plan with five domes, similar to his later Church of the Holy Apostles in Constantinople. This version of the building was described by Procopius in The Buildings. Justinian would later replace the western arm of this building, likely in the 550s, expanding it from one domed bay to two domed bays.
The Church of St. Polyeuctus in Constantinople (524–527) may have been built as a large and lavish domed basilica similar to the Meriamlik church of fifty years before—and to the later Hagia Irene of Emperor Justinian—by Anicia Juliana, a descendant of the former imperial house, although the linear walls suggest a timber roof, rather than a brick dome. There is a story that she used the contribution to public funds that she had promised Justinian on his ascension to the throne to roof her church in gold. The church included an inscription praising Juliana for having "surpassed Solomon" with the building, and it may have been with this in mind that Justinian would later say of his Hagia Sophia, "Solomon, I have vanquished thee!".
In the second third of the 6th century, church building by the Emperor Justinian used the domed cross unit on a monumental scale, in keeping with Justinian's emphasis on bold architectural innovation. His church architecture emphasized the central dome and his architects made the domed brick-vaulted central plan standard throughout the Roman east. This divergence with the Roman west from the second third of the 6th century may be considered the beginning of a "Byzantine" architecture. Timber-roofed basilicas, which had previously been the standard church form, would continue to be so in the medieval west.
The earliest surviving of Justinian's domed buildings may be the central-plan Church of Saints Sergius and Bacchus in Constantinople, completed by 536. Known today as the "Little Hagia Sophia" mosque, it may have been begun five years before the Hagia Sophia itself; its dates of construction are disputed, with work possibly starting in 532. The dome rests on an octagonal base created by eight arches on piers and is divided into sixteen sections. Those sections above the flat sides of the octagon are flat and contain a window at their base, alternating with scalloped sections rising from the corners of the octagon, creating an unusual kind of pumpkin dome. The alternating scalloped and flat surfaces of the current dome resemble those in Hadrian's half-dome Serapeum in Tivoli, but may have replaced an original drum and dome similar to that over the Basilica of San Vitale in Ravenna. The church was built within the precinct of the Palace of Hormisdas, the residence of Justinian before his ascension to the throne in 527, and includes an inscription mentioning the "sceptered Justinian" and "God-crowned Theodora".
After the Nika Revolt destroyed much of the city of Constantinople in 532, including the churches of Hagia Sophia ("Holy Wisdom") and Hagia Irene ("Holy Peace"), Justinian had the opportunity to rebuild. Both had been basilica plan churches and both were rebuilt as domed basilicas, although the Hagia Sophia was rebuilt on a much grander scale. Built by Anthemius of Tralles and Isidore of Miletus in Constantinople between 532 and 537, the Hagia Sophia has been called the greatest building in the world. It is an original and innovative design with no known precedents in the way it covers a basilica plan with dome and semi-domes. Periodic earthquakes in the region have caused three partial collapses of the dome and necessitated repairs. The precise shape of the original central dome completed in 537 was significantly different from the current one and, according to contemporary accounts, much bolder.
Procopius wrote that the original dome seemed "not to rest upon solid masonry, but to cover the space with its golden dome suspended from heaven." Byzantine chronicler John Malalas reported that this dome was 20 Byzantine feet lower than its replacement. One theory is that the original dome continued the curve of the existing pendentives (which were partially reconstructed after its collapse), creating a massive sail vault pierced with a ring of windows. This vault would have been part of a theoretical sphere in diameter (the distance of the diagonal of the square bay defined by the pendentives), 7 percent greater than the span of the Pantheon's dome. Another theory raises the shallow cap of this dome (the portion above what are today the pendentives) on a relatively short recessed drum containing the windows. This first dome partially collapsed due to an earthquake in 558 and the design was then revised to the present profile. Earthquakes also caused partial collapses of the dome in 989 and 1346, so that the present dome consists of portions dating from the 6th century, on the north and south sides, and portions from the 10th and 14th centuries on the west and east sides, respectively. There are irregularities where these sectors meet.
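The geometric relation behind the sail vault theory can be made concrete: a sail vault over a square bay is a slice of a sphere whose diameter equals the bay's diagonal, that is, side × √2. The following Python sketch takes the Pantheon's well-attested span of about 43.3 m and the stated 7 percent figure; the derived bay side is an inference from those numbers, not a measured value:

```python
import math

# A sail vault over a square bay is part of a sphere whose diameter
# equals the diagonal of the bay: diagonal = side * sqrt(2).
PANTHEON_SPAN_M = 43.3  # well-attested span of the Pantheon's dome

# The theoretical sphere is stated to be 7% wider than the Pantheon's span.
sphere_diameter = PANTHEON_SPAN_M * 1.07      # about 46.3 m
bay_side = sphere_diameter / math.sqrt(2)     # implied bay side, about 32.8 m

print(f"theoretical sphere diameter ~ {sphere_diameter:.1f} m")
print(f"implied square-bay side ~ {bay_side:.1f} m")
```

The implied bay side of roughly 32–33 m is consistent with the Hagia Sophia's central square being somewhat narrower than the Pantheon's full span, even though the hypothetical sail vault's sphere would exceed it.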
The current central dome, above the pendentives, is about thick. It is about wide and contains 40 radial ribs that spring from between the 40 windows at its base. Four of the windows were blocked as part of repairs in the 10th century. The ring of windows at the base of the central dome is in the portion where the greatest hoop tension would have been expected, and so the windows may have been used to help alleviate cracking along the meridians. Iron cramps between the marble blocks of its cornice helped to reduce outward thrusts at the base and limit cracking, like the wooden tension rings used in other Byzantine brick domes. The dome and pendentives are supported by four large arches springing from four piers. Additionally, two huge semi-domes of similar proportion are placed on opposite sides of the central dome and themselves contain smaller semi-domes between an additional four piers. The Hagia Sophia, as both the cathedral of Constantinople and the church of the adjacent Great Palace of Constantinople, has a form of octagonal plan.
The city of Ravenna, Italy, had served as the capital of the Western Roman Empire after Milan from 402 and the capital of the subsequent kingdoms of Odoacer and of Theodoric until Justinian's reconquest in 540. An octagonal building in Ravenna, begun under Theodoric in 525, was completed under the Byzantines in 547 as the Basilica of San Vitale and contains a terracotta dome. It may belong to a school of architecture from 4th and 5th century Milan. The building is similar to the Byzantine Church of Saints Sergius and Bacchus and the later Chrysotriklinos, or throne hall and palace church of Constantinople, and it would be used as the model for Charlemagne's palace chapel at Aix-la-Chapelle. Hollow amphorae were fitted inside one another to provide a lightweight structure for the dome and avoid additional buttressing. It is in diameter. The amphorae were arranged in a continuous spiral, which required minimal centering and formwork but was not strong enough for large spans. The dome was covered with a timber roof, which would be the favored practice for later medieval architects in Italy although it was unusual at the time.
In Constantinople, Justinian also tore down the aging Church of the Holy Apostles and rebuilt it on a grander scale between 536 and 550. The original building was a cruciform basilica with a central domed mausoleum. Justinian's replacement was apparently likewise cruciform but with a central dome and four flanking domes. The central dome over the crossing had pendentives and windows in its base, while the four domes over the arms of the cross had pendentives but no windows. The domes appear to have been radically altered between 944 and 985 by the addition of windowed drums beneath all five domes and by raising the central dome higher than the others. The second most important church in the city after the Hagia Sophia, it fell into disrepair after the Latin occupation of Constantinople between 1204 and 1261 and it was razed to the ground by Mehmed the Conqueror in 1461 to build his Fatih Mosque on the site. Justinian's Basilica of St. John at Ephesus and Venice's St Mark's Basilica are derivative of Holy Apostles. More loosely, the Cathedral of St. Front and the Basilica of Saint Anthony of Padua are also derived from this church.
The sacristy of the in Vicenza, Italy, is part of an older cruciform domed church built by General Narses in 554. The style of the church was characteristic of the Byzantine churches of Ravenna.
Justinian and his successors modernized frontier fortifications throughout the century. The example at Qasr ibn Wardan (564) in the desert of eastern Syria is particularly impressive, containing a governor's palace, barracks, and a church built with techniques and to plans possibly imported from Constantinople. The church dome is unusual in that the pendentives sprang from an octagonal drum, rather than the four main arches, and in that it was made of brick, which was rare in Syria.
The Golden Triclinium, or Chrysotriklinos, of the Great Palace of Constantinople served as an audience hall for the Emperor as well as a palace chapel. Nothing of it has survived except descriptions, which indicate that it had a pumpkin dome containing sixteen windows in its webs and that the dome was supported by the arches of eight niches connecting to adjoining rooms in the building's likely circular plan. Alternatively, the building may have been octagonal in plan, rather than circular. The building was not free-standing and was located at the intersection of the public and private parts of the palace. Smaller windows filled with thin sheets of alabaster may have existed over each of the curtain-covered side niches and below the cornice at the base of the dome. The dome seems to have had webs that alternated straight and concave, like those of the dome of Justinian's Church of Saints Sergius and Bacchus, and may have been built about 40 years after that church. It was begun under Emperor Justin II, completed by his successor Tiberius II, and continued to be improved by subsequent rulers. It was connected to the imperial living quarters and was a space used for assembly before religious festivals, high promotions and consultations, as a banqueting hall, a chapel for the emperor, and a throne room. Never fully described in any of its frequent mentions in Byzantine texts, the room was restricted to members of the court and the "most highly rated foreigners". In the 10th century, the throne in the east niche chamber was directly below an icon of an enthroned Christ.
Other 6th century examples of domed constructions may include Nostra Segnora de Mesumundu in Siligo, Sardinia (before 534), Sant’Angelo in Perugia, near San Donaci (6th or 7th century), and the Trigona of Cittadella near Noto (6th or 7th century).
Seventh century
The period of Iconoclasm, roughly corresponding to the 7th to 9th centuries, is poorly documented but can be considered a transitional period. The cathedral of Sofia has an unsettled date of construction, ranging from the last years of Justinian to the middle of the 7th century, as the Balkans were lost to the Slavs and Bulgars. It combines a barrel-vaulted cruciform basilica plan with a crossing dome hidden externally by the drum. It resembles some Romanesque churches of later centuries, although the type would not be popular in later Byzantine architecture.
Destruction by earthquakes or invaders in the seventh to ninth centuries seems to have encouraged the development of masonry domes and vaulting experimentation over basilicas in Anatolia. The Sivrihisar Kizil Kilise has a dome over an octagonal drum with windows on a square platform and was built around 600, before the battles in the region in the 640s. The domed Church of Mary in Ephesus may have been built in the late sixth or first half of the seventh century with reused bricks. The smaller Church of the Dormition of the Monastery of Hyacinth in Nicaea had a dome supported on four narrow arches and dates prior to 727. The lobed dome of the Church of St. Clement at Ancyra was supported by pendentives that also included squinch-like arches, a possible indication of unfamiliarity with pendentives by the builders. The upper portion of the Church of St. Nicholas at Myra was destroyed, but it had a dome on pendentives over the nave that might have been built between 602 and 655, although it has been attributed to the late eighth or early ninth centuries.
Eighth century
Part of the fifth-century basilica of St. Mary at Ephesus seems to have been rebuilt in the eighth century as a cross-domed church, a development typical of the seventh to eighth centuries and similar to the cross-domed examples of Hagia Sophia in Thessaloniki, St. Nicholas at Myra, St. Clement's at Ankara, and the church of the Koimesis at Nicaea.
With the decline in the empire's resources following losses in population and territory, domes in Byzantine architecture were used as part of more modest new buildings. The large-scale churches of Byzantium were, however, kept in good repair. The upper portion of the Church of Hagia Irene was thoroughly rebuilt after the 740 Constantinople earthquake. The nave was re-covered with an elliptical domical vault hidden externally by a low cylinder on the roof, in place of the earlier barrel-vaulted ceiling, and the original central dome from the Justinian era was replaced with one raised upon a high windowed drum. The barrel vaults supporting these two new domes were also extended out over the side aisles, creating cross-domed units. By bracing the dome with broad arches on all four sides, the cross-domed unit provided a more secure structural system. These units, with most domes raised on drums, became a standard element on a smaller scale in later Byzantine church architecture, and all domes built after the transitional period were braced with bilateral symmetry. The dome over the church at Sige was replaced in the 19th century, but the original was dated in the 18th century to 780.
A small, unisex monastic community in Bithynia, near Constantinople, may have developed the cross-in-square plan church during the Iconoclastic period, which would explain the plan's small scale and unified naos. The ruined church of St. John at Pelekete monastery is an early example. Monks had supported the use of icons, unlike the government-appointed secular clergy, and monasticism would become increasingly popular. A new type of privately funded urban monastery developed from the 9th century on, which may help to explain the small size of subsequent building.
Ninth century
Timber-roofed basilicas, which had been the standard form until the 6th century, would be displaced by domed churches from the 9th century onward. In the Middle Byzantine period (c. 843 – 1204), domes were normally built to emphasize separate functional spaces, rather than as the modular ceiling units they had been earlier. Resting domes on circular or polygonal drums pierced with windows eventually became the standard style, with regional characteristics.
Single and multi-domed basilicas on Cyprus proposed to date from the ninth or tenth centuries include the Church of Saint Photios of Gialousa (Karpasia), the Church of Saint George of Afentrika (Karpasia), the Monastery of Saint Barnabas (Salamis), the (Geroskipou), and the (Peristerona).
The cross-in-square plan, with a single dome at the crossing or five domes in a quincunx pattern, became widely popular in the Middle Byzantine period. Examples include an early 9th century church in Tirilye, now called the Fatih Mosque. The Nea Ekklesia of Emperor Basil I was built in Constantinople around 880 as part of a substantial building renovation and construction program during his reign. It had five domes, which are known from literary sources, but different arrangements for them have been proposed under at least four different plans. One has the domes arranged in a cruciform pattern like those of the contemporaneous Church of St. Andrew at Peristerai or the much older Church of the Holy Apostles in Constantinople. Others arrange them in a quincunx pattern, with four minor domes in the corners of a square and a larger fifth in the center, as part of a cross-domed or cross-in-square plan. It is often suggested that the five-domed design of St. Panteleimon at Nerezi, from 1164, is based on that of the Nea Ekklesia.
Tenth century
In the Middle Byzantine period, more complex plans emerge, such as the integrated chapels of Theotokos of Lips, a monastic church in Constantinople that was built around 907. It included four small chapels on its second floor gallery level that may have been domed.
The cross-in-square is the most common church plan from the 10th century until the fall of Constantinople in 1453. This type of plan, with four columns supporting the dome at the crossing, was best suited for domes less than wide and, from the 10th to the 14th centuries, a typical Byzantine dome measured less than in diameter. For domes beyond that width, variations in the plan were required such as using piers in place of the columns and incorporating further buttressing around the core of the building.
The palace chapel of the Myrelaion in Constantinople was built around 920 as a cross-in-square church and remains a good example. The earliest cross-in-square in Greece is the Panagia church at the monastery of Hosios Loukas, dated to the late 10th century, but variations of the type can be found from southern Italy to Russia and Anatolia. They served in a wide variety of church roles, including domestic, parish, monastic, palatial, and funerary.
The distinctive rippling eaves design for the roofs of domes began in the 10th century. In mainland Greece, circular or octagonal drums became the most common.
Eleventh century
In Constantinople, drums with twelve or fourteen sides were popular beginning in the 11th century. The 11th century rock-cut churches of Cappadocia, such as Karanlik Kilise and Elmali Kilise in Göreme, have shallow domes without drums due to the dim natural lighting of cave interiors.
The domed-octagon plan is a variant of the cross-in-square plan. The earliest extant example is the katholikon at the monastery of Hosios Loukas, with a wide dome built in the first half of the 11th century. This hemispherical dome was built without a drum and supported by a remarkably open structural system, with the weight of the dome distributed on eight piers, rather than four, and corbelling used to avoid concentrating weight on their corners. The use of squinches to transition from those eight supports to the base of the dome has led to speculation of a design origin in Arab, Sasanian, or Caucasian architecture, although with a Byzantine interpretation. Similar openness in design was used in the earlier Myrelaion church, as originally built, but the katholikon of Hosios Loukas is perhaps the most sophisticated design since the Hagia Sophia. The smaller monastic church at Daphni, c. 1080, uses a simpler version of this plan.
The katholikon of Nea Moni, a monastery on the island of Chios, was built some time between 1042 and 1055 and featured a nine-sided, ribbed dome rising above the floor (this collapsed in 1881 and was replaced with the slightly taller present version). The transition from the square naos to the round base of the drum is accomplished by eight conches, with those above the flat sides of the naos being relatively shallow and those in the corners of the naos being relatively narrow. The novelty of this technique in Byzantine architecture has led to it being dubbed the "island octagon" type, in contrast to the "mainland octagon" type of Hosios Loukas. Speculation on design influences has ranged from Arab influence transmitted via the recently built domed octagon chapels at the Church of the Holy Sepulchre in Jerusalem or the Al-Hakim Mosque in Islamic Cairo, to Caucasian buildings such as the Armenian Cathedral of the Holy Cross. Later copies of the Nea Moni, with alterations, include the churches of , Agioi Apostoli at Pyrghi, , and the in Chortiatis.
Twelfth century
The larger scale of some Byzantine buildings of the 12th century required a more stable support structure for domes than the four slender columns of the cross-in-square type could provide. The Byzantine churches today called Kalenderhane Mosque, Gül Mosque, and the Enez Fatih mosque all had domes greater than in diameter and used piers as part of large cruciform plans, a practice that had been out of fashion for several centuries. A variant of the cross-in-square, the "so-called atrophied Greek cross plan", also provides greater support for a dome than the typical cross-in-square plan by using four piers projecting from the corners of an otherwise square naos, rather than four columns. This design was used in the Chora Church of Constantinople in the 12th century after the previous cross-in-square structure was destroyed by an earthquake.
The 12th century Pantokrator monastic complex (1118–36) was built with imperial sponsorship as three adjoining churches. The south church, a cross-in-square, has a ribbed dome over the naos, domical vaults in the corners, and a pumpkin dome over the narthex gallery. The north church is also a cross-in-square plan. The middle church, the third to be built, fills the long space between the two earlier churches with two oval domes of the pumpkin and ribbed types over what appear to be separate functional spaces. The western space was an imperial mausoleum, whereas the eastern dome covered a liturgical space.
There is a written account by Nicholas Mesarites of a Persian-style muqarnas dome built as part of a late 12th century imperial palace in Constantinople. Called the "Mouchroutas Hall", it may have been built as part of an easing in tensions between the court of Manuel I Komnenos and Kilij Arslan II of the Sultanate of Rum around 1161, evidence of the complex nature of the relations between the two states. The account, written shortly before the Fourth Crusade, is part of a description of the coup attempt by John Komnenos in 1200, and the hall may have been mentioned as a rhetorical device to disparage him.
Thirteenth century
The Late Byzantine Period, from 1204 to 1453, has an unsettled chronology of buildings, especially during the Latin Occupation. The fragmentation of the empire, beginning in 1204, is reflected in a fragmentation of church design and regional innovations.
The church of Hagia Sophia in the Empire of Trebizond dates to between 1238 and 1263 and has a variation on the quincunx plan. Heavy with traditional detailing from Asia Minor, and possibly Armenian or Georgian influence, the brick pendentives and drum of the dome remain Byzantine.
After 1261, new church architecture in Constantinople consisted mainly of additions to existing monastic churches, such as the Monastery of Lips and Pammakaristos Church, and as a result the building complexes are distinguished in part by an asymmetric array of domes on their roofs. This effect may have been in imitation of the earlier triple-church Pantokrator monastic complex.
In the Despotate of Epirus, the Church of the Parigoritissa (1282–9) is the most complex example, with a domed octagon core and domed ambulatory. Built in the capital of Arta, its external appearance resembles a cubic palace. The upper level narthex and galleries have five domes, with the middle dome of the narthex an open lantern. This Greek-cross octagon design, similar to the earlier example at Daphni, is one of several among the various Byzantine principalities. Another is found in the Hagia Theodoroi at Mistra (1290–6).
Fourteenth and fifteenth centuries
Mistra was ruled from Constantinople after 1262, then was the seat of the Despotate of the Morea from 1348 to 1460. In Mistra, there are several basilica plan churches with domed galleries that create a five-domed cross-in-square over a ground-level basilica plan. The Aphentiko at Brontochion Monastery was built c. 1310–22 and the later church of the Pantanassa Monastery (1428) is of the same type. The Aphentiko may have been originally planned as a cross-in-square church, but has a blend of longitudinal and central plan components, with an interior divided into nave and aisles like a basilica. The barrel-vaulted nave and cross arms have a dome at their crossing, and the corner bays of the galleries are also domed to form a quincunx pattern. A remodeling of the Metropolis church in Mistra created an additional example. The Pantanassa incorporates Western elements in that domes in its colonnaded porch are hidden externally, and its domes have ribs of rectangular section similar to those of Salerno, Ravello, and Palermo.
In Thessaloniki, a distinctive type of church dome developed in the first two decades of the 14th century. It is characterized by a polygonal drum with rounded colonnettes at the corners, all brick construction, and faces featuring three arches stepped back within one another around a narrow "single-light window". One of the hallmarks of Thessalonian churches was the plan of a domed naos with a peristoon wrapped around three sides. The churches of Hagios Panteleimon, Hagia Aikaterine, and Hagioi Apostoloi have domes on these ambulatory porticoes. The five domes of the Hagioi Apostoloi, or Church of the Holy Apostles, in Thessaloniki (c. 1329) make it an example of a five-domed cross-in-square church in the Late Byzantine style, as is the Gračanica monastery, built around 1311 in Serbia. The architect and artisans of the Gračanica monastery church probably came from Thessaloniki and its style reflects Byzantine cultural influence. The church has been said to represent "the culmination of Late Byzantine architectural design."
A 15th-century account of a Russian traveler to Constantinople mentions an abandoned hall, presumably domed, "in which the sun, the moon, and the stars succeeded each other as in heaven."
Influence
Armenia
Constantinople's cultural influence extended from Sicily to Russia. Armenia, as a border state between the Roman-Byzantine and Sasanian empires, was influenced by both. The exact relationship between Byzantine architecture and that of the Caucasus is unclear. Georgia and Armenia produced many centrally planned, domed buildings in the 7th century and, after a lull during the Arab invasions, the architecture flourished again in the Middle Byzantine Period. Armenian church domes were initially wooden structures. Etchmiadzin Cathedral (c. 483) originally had a wooden dome covered by a wooden pyramidal roof before this was replaced with stone construction in 618. Churches with stone domes became the standard type after the 7th century, perhaps benefiting from a possible exodus of stonecutters from Syria, but the long traditions of wooden construction carried over stylistically. Some examples in stone as late as the 12th century are detailed imitations of clearly wooden prototypes. Armenian church building was prolific in the late 6th and 7th centuries and, by the 7th century, churches tended to be either central plans or combinations of central and longitudinal plans. Domes were supported by either squinches (which were used in the Sasanian Empire but rarely in the Byzantine) or pendentives like those of the Byzantine empire, and the combination of domed-cross plan with the hall-church plan could have been influenced by the architecture of Justinian. Domes and cross arms were added to the longitudinal cathedral of Dvin from 608 to 615, and to a church at Tekor. Other domed examples include Ptghnavank in Ptghni (c. 600), a church in T'alinn (662–85), the Cathedral of Mren (629–40), and the Mastara Church (9th and 10th centuries). An 11th-century Armenian source names an Armenian architect, Trdat, as responsible for the rebuilding of the dome of Hagia Sophia in Constantinople after the 989 earthquake caused a partial collapse of the central dome.
Although squinches were the more common system used to support Armenian domes, pendentives are always used beneath the domes attributed to Trdat, which include the 10th-century monasteries of Marmasen, Sanahin, and Helpat, as well as the patriarchal cathedral of Argina (c. 985), the Cathedral of Ani (989–1001), and the palace chapel of King Gagik II (c. 1001–1005).
The Balkans
In the Balkans, where Byzantine rule weakened in the 7th and 8th centuries, domed architecture may represent Byzantine influence or, in the case of the centrally planned churches of 9th-century Dalmatia, the revival of earlier Roman mausoleum types. An interest in Roman models may have been an expression of the religious maneuvering of the region between the Church of Constantinople and that of Rome. Examples include the Church of Sv. Luka in Kotor, the Church of Sv. Trojce near Split, and the early 9th century Church of Sv. Donat in Zadar. The Church of Sv. Donat, originally domed, may have been built next to a palace and resembles palace churches in the Byzantine tradition. The architectural chronology of the central and eastern Balkans is unsettled during the period of the First Bulgarian Empire, in part because of similarity between Justinian-era churches from the 6th century and what may have been a revival of that style in the late 9th and early 10th centuries under the Christianized Bulgarian tsars. Remains of the Round Church in Preslav, a building traditionally associated with the rule of Tsar Simeon (893–927), indicate that it was a domed palace chapel. Its construction features, however, resemble instead 3rd and 4th century Roman mausolea, perhaps due to the association of those structures with the imperial idea.
The Rus'
Byzantine architecture was introduced to the Rus' people in the 10th century, with churches after the conversion of Prince Vladimir of Kiev being modeled after those of Constantinople, but made of wood. The Russian onion dome was a later development. The earliest architecture of Kiev, the vast majority of which was made of wood, has been lost to fire, but by the 12th century masonry domes on low drums in Kiev and Vladimir-Suzdal were little different than Byzantine domes, although modified toward the "helmet" type with a slight point. The Cathedral of St. Sophia in Kiev (1018–37) was distinctive in having thirteen domes, for Jesus and the Twelve Apostles, but they have since been remodeled in the Baroque style and combined with an additional eight domes. The pyramidal arrangement of the domes was a Byzantine characteristic, although, as the largest and perhaps most important 11th century building in the Byzantine tradition, many of the details of this building have disputed origins. Bulbous onion domes on tall drums were a development of northern Russia, perhaps due to the demands of heavy ice and snowfall along with the more rapid innovation permitted by the Novgorod region's emphasis on wooden architecture. The central dome of the Cathedral of St. Sophia (1045–62) in Novgorod dates from the 12th century and shows a transitional stage. Other churches built around this time are those of St. Nicholas (1113), the Nativity of the Virgin (1117), and St. George (1119–30).
Romanesque Europe
In Romanesque Italy, Byzantine influence can most clearly be seen in Venice's St Mark's Basilica, from about 1063, but also in the domed churches of southern Italy, such as Canosa Cathedral (1071) and the (c. 1160). In Norman Sicily, architecture was a fusion of Byzantine, Islamic, and Romanesque forms, but the dome of the Palatine Chapel (1132–43) at Palermo was decorated with Byzantine mosaic, as was that of the church of Santa Maria dell'Ammiraglio (1140s). The unusual use of domes on pendentives in a series of seventy Romanesque churches in the Aquitaine region of France strongly suggests a Byzantine influence. St. Mark's Basilica was modeled on the now-lost Byzantine Church of the Holy Apostles in Constantinople, and Périgueux Cathedral in Aquitaine (c. 1120) likewise has five domes on pendentives in a Greek cross arrangement. Other examples include the domed naves of Angoulême Cathedral (1105–28), Cahors Cathedral (c. 1100–1119), and the (c. 1130).
Orthodox Africa and Europe
The Throne Hall of Dongola, built in the 9th century at Old Dongola, was used by the kings of Makuria, the most powerful kingdom in medieval Africa, for 450 years until 1317. The upper floor contained a likely cruciform room with a small dome at the center, in imitation of the audience halls of the Byzantine emperors. Bulgarian tsars had similar halls.
Byzantium's neighboring Orthodox powers in Europe emerged as architectural centers in their own right during the Late Byzantine Period. The Bulgarian churches of Nesebar are similar to those in Constantinople at this time. The style and vaulting in the Nesebar cross-in-square churches of Christ Pantocrator and St John Aliturgetos, for example, are similar to examples in Constantinople. Following the construction of Gračanica monastery, the architecture of Serbia used the "so-called Athonite plan", for example at Ravanica (1375–7). In Romania, Wallachia was influenced by Serbian architecture and Moldavia was more original, such as in the Voroneț Monastery with its small dome. Moscow emerged as the most important center of architecture following the fall of Constantinople in 1453. The Cathedral of the Assumption (1475–79), built in the Kremlin to house the icon of Our Lady of Vladimir, was designed in a traditional Russian style by an Italian architect.
Italian Renaissance
Italian Renaissance architecture combined Roman and Romanesque practices with Byzantine structures and decorative elements, such as domes with pendentives over square bays. The Cassinese Congregation used windowed domes in the Byzantine style, and often also in a quincunx arrangement, in their churches built between 1490 and 1546, such as the Abbey of Santa Giustina. The technique of using wooden tension rings at several levels within domes and drums to resist deformation, frequently said to be a later invention of Filippo Brunelleschi, was common practice in Byzantine architecture. The technique of using double shells for domes, although revived in the Renaissance, originated in Byzantine practice. The dome of the Pantheon, as a symbol of Rome and its monumental past, was particularly celebrated and imitated, although copied only loosely. Studied in detail from the early Renaissance on, it was an explicit point of reference for the dome of St. Peter's Basilica and inspired the construction of domed rotundas with temple-front porches throughout western architecture into the modern era. Examples include Palladio's chapel at Maser (1579–80), Bernini's church of S. Maria dell'Assunzione (1662-4), the Library Rotunda of the University of Virginia (1817–26), and the church of St. Mary in Malta (1833–60). Other examples include the church of San Simeone Piccolo in Venice (1718–38), the church of Gran Madre di Dio in Turin (1818–31), and the church of San Francesco di Paola, Naples in Naples (19th century).
Ottoman Empire
Ottoman architecture adopted the Byzantine dome form and continued to develop it. One type of mosque was modeled after Justinian's Church of Sergius and Bacchus with a dome over an octagon or hexagon contained within a square, such as the Üç Şerefeli Mosque (1437–47). The dome and semi-domes of the Hagia Sophia, in particular, were replicated and refined. A "universal mosque design" based upon this development spread throughout the world. The first Ottoman mosque to use a dome and semi-dome nave vaulting scheme like that of Hagia Sophia was the mosque of Beyazit II. Only two others were modeled similarly: Kılıç Ali Pasha Mosque and the Süleymaniye Mosque (1550–57). Other Ottoman mosques, although superficially similar to Hagia Sophia, have been described as structural criticisms of it. When Mimar Sinan set out to build a dome larger than that of Hagia Sophia with Selimiye Mosque (1569–74), he used a more stable octagonal supporting structure. The Selimiye Mosque is of the type originating with the Church of Sergius and Bacchus. Three other Imperial mosques in Istanbul built in this "Classical Style" of Hagia Sophia include four large semi-domes around the central dome, rather than two: Şehzade Camii, Sultan Ahmed I Camii (completed in 1616), and the last to be built: Yeni Cami (1597–1663).
Modern revival
A Byzantine revival style of architecture occurred in the 19th and 20th centuries. An early example of the revival style in Russia was the Cathedral of Christ the Saviour (1839–84), which was approved by the Tsar to be a model for other churches in the empire. The style's popularity spread through scholarly publications produced after the independence of Greece and the Balkans from the Ottoman Empire. It was used throughout Europe and North America, peaking in popularity between 1890 and 1914. The Greek Orthodox St Sophia's Cathedral (1877–79) and Roman Catholic Westminster Cathedral (begun 1895), both in London, are examples. The throne room of Neuschwanstein Castle (1885–86) was built by King Ludwig II in Bavaria. In the late 19th century, the Hagia Sophia became a widespread model for Greek Orthodox churches. In southeastern Europe, monumental national cathedrals built in the capital cities of formerly Ottoman areas used Neo-Classical or Neo-Byzantine styles. Sofia's Alexander Nevsky Cathedral and Belgrade's Church of Saint Sava are examples, and used Hagia Sophia as a model due to their large sizes. Synagogues in the United States were built in a variety of styles, as they had been in Europe (and often with a mixture of elements from different styles), but the Byzantine Revival style was the most popular in the 1920s. Domed examples include The Temple of Cleveland (1924), the synagogue of KAM Isaiah Israel (1924) in Chicago, based upon San Vitale in Ravenna and Hagia Sophia in Istanbul, and the synagogue of Congregation Emanu-El (1926) in San Francisco.
In the United States, Greek Orthodox churches beginning in the 1950s tended to use a large central dome with a ring of windows at its base evocative of the central dome of Hagia Sophia, rather than more recent or more historically common Byzantine types, such as the Greek-cross-octagon or five-domed quincunx plans. Examples include Annunciation Greek Orthodox Church, completed in 1961 but designed by Frank Lloyd Wright in 1957, Ascension Greek Orthodox Cathedral of Oakland (1960), and Annunciation Greek Orthodox Cathedral in Atlanta (1967). The use of a large central dome in American Greek Orthodox churches continued in the 1960s and 1970s before moving toward smaller Middle Byzantine domes, or versions of Early Christian basilicas.
See also
List of Roman domes
History of architecture
References
Citations
Sources
Ancient Roman architectural elements
Byzantine architecture
Domes
History of structural engineering
History of Roman and Byzantine domes | Engineering | 16,524
74,597,943 | https://en.wikipedia.org/wiki/List%20of%20herbicides
This is a list of herbicides. These are chemical compounds which have been registered as herbicides. The names on the list are the ISO common name for the active ingredient which is formulated into the branded product sold to end-users. The University of Hertfordshire maintains a database of the chemical and biological properties of these materials, including their brand names and the countries and dates where and when they have been introduced. The industry-sponsored Herbicide Resistance Action Committee (HRAC) advises on the use of herbicides in crop protection and classifies the available compounds according to their chemical structures and mechanism of action so as to manage the risks of pesticide resistance developing. The 2024 HRAC poster of herbicide modes of action includes the majority of chemicals listed below.
The Weed Science Society of America also classifies herbicides by their mechanism of action, using groupings aligned with the HRAC classification system.
0-9
A
B
C
D
E
F
G
H
I
K
L
M
N
O
P
Q
R
S
T
V
X
Z
See also
List of fungicides
List of insecticides
References
herbicides
Herbicides
List of herbicides | Chemistry,Biology | 218
46,457,885 | https://en.wikipedia.org/wiki/Gender%20and%20emotional%20expression
The study of the relationship between gender and emotional expression examines the differences between men and women in behavior that expresses emotions. These differences in emotional expression may be primarily due to cultural expectations of femininity and masculinity.
Major theories
Many psychologists reject the notion that men experience emotions less frequently than women do. Instead, researchers have suggested that men exhibit restrictive emotionality. Restrictive emotionality refers to a tendency to inhibit the expression of certain emotions, and an unwillingness to self-disclose intimate feelings. Men's restrictive emotionality has been shown to influence health, emotional appraisal, and overall identity. Furthermore, tendencies toward restrictive emotionality are correlated with an increased risk of certain anxiety disorders.
Research has suggested that women express emotions more frequently than men on average. Multiple researchers have found that women cry more frequently, and for longer durations than men at similar ages. The gender differences appear to peak in the most fertile years. This is possibly due to hormonal differences, as several studies have shown that certain sex hormones influence the way that emotions are expressed.
Other researchers found that this gender difference decreases over time. In the Handbook of Emotions, Leslie R. Brody and Judith A. Hall report that this difference in emotional expression starts at a young age, as early as ages 4 to 6, as girls begin to express more sadness and anxiety than their male counterparts. Brody and Hall (2008) report that women generally smile, laugh, nod, and use hand gestures more than men do. The only known exception to this rule is that men more frequently express anger. However, these effects are not commonly observed until after preschool, suggesting that they might be the result of certain socialization processes. Women are also more accurate at expressing their emotions when "posing deliberately and when observed unobtrusively." This increased expressiveness is consistent across cultures, with women reporting more intense emotional experiences and more overt emotional expressions across 37 cultures.
It has been found that men and women each more accurately display gender-stereotypic expressions: men more accurately express anger, contempt, and happiness, while women more accurately express fear and happiness. Other studies have shown that women show higher levels of expression accuracy and judgement of nonverbal emotional cues than men overall. These patterns are not consistent across cultures, suggesting that socialization influences gender differences in emotional expression. For example, research has suggested that in Japan, women convey anger and contempt better than men do.
Major empirical findings
Some research has shown that culture and context-specific gender roles have a stronger influence on emotional expression than do biological factors. In a 2002 paper, Wester et al. conclude: "In sum, empirical evidence suggests that girls are socialized to be emotional, nonaggressive, nurturing, and obedient, whereas boys are socialized to be unemotional, aggressive, achievement oriented, and self-reliant. Peers continue this process as children develop and mature in effect constraining how, where, why, and with whom certain emotions are expressed." In one cross-cultural study, it was shown that in nearly all cultures, women generally cry more than men; however, the gender difference tends to be more significant in democratic and affluent countries.
Another study suggests that people tend to exhibit more intense negative facial expressions in solitary conditions, and smile more when others are present. In this experiment, men and women did not differ in their anger expression in non-social conditions. However, women were more likely to express their anger in the solitary condition as opposed to the social condition. Men, on the other hand, seemed to be less concerned with appearing positive to others; they showed no difference in their expression of anger based on whether or not others were present.
In another recent study, Coats and Feldman found that women who were more accurate at expressing happiness were judged as more popular, while men who were more accurate at expressing anger were judged as more popular. This suggests that there are negative consequences for people who are less accurate at expressing gender-stereotypical emotions.
These consequences also extend to judging others' emotions. Studies have shown that there are negative social consequences for children who are deficient at judging gender-stereotypic nonverbal cues—angry nonverbal cues for boys, and happy, sad, and fearful nonverbal cues for girls. Communication of emotion involves both detection and expression of emotions or moods. The ability to detect non-verbal cues leads to successful communication of emotions. In computer-mediated communication (CMC), the absence of body language and visibility restricts one's ability to correctly recognize others' emotions. For this reason, emoticons are widely used in online communication to replace non-verbal behaviors that emphasize or clarify one's feelings. Surprisingly, there is no static gender difference in the use of emoticons. In some studies, both men and women display an increase in emoticon use in the context of a mixed-gender group chat. Others show that men use more emoticons when interacting with women, while women show no change when interacting with men.
Nature versus nurture
The social-developmental hypothesis is one of the major arguments for the impact of nurture on emotional expression. The social-developmental theory explains gender differences in emotion expression through emphasizing "children's active role in their development of gendered behavior" through learning by watching adults or through interactions with their parents and peers. This hypothesis points to the fact that infants are not born with the same differences in emotional expression, and gender differences generally grow more pronounced as children age. In a 2012 meta-analysis conducted by Tara M. Chaplin and Amelia Aldao, researchers reviewed gender differences in emotion expression from the infancy period through adolescence in order to determine the impact of development and age on gender differences. Their findings support the notion that social factors in a child's development play a large role in the gender differences that later emerge, as "gender differences were not found in infancy…but they emerged by the toddler/preschool period and in childhood". One possible explanation for this developmental difference comes from the child's parents. In many Western cultures, for example, parents discuss and express a broader range of emotions with their daughters than with their sons. As children grow older, these patterns continue with their peers.
The second major argument in support of social influences on emotion expression involves the idea that a society's gender roles reinforce gender differences. Social constructionism states that children grow up in the context of gender roles that naturally place them in role-specific situations, influencing their emotion expression in that context. Gender stereotypes in heteronormative societies enforce expectations for women to suppress anger and contempt, but express other emotions using words and facial expressions; and discourage men from verbally expressing emotions, with the exception of anger or contempt. As an adaptive feature, regulation of expression of emotion involves consideration of the social demands of any given situation. Studies have shown that "fewer gender differences in emotion expression may be found when children are with someone they trust and know well than when children are with an unfamiliar person". Generally, people are trained to behave in a "socially acceptable" way around strangers or acquaintances, suggesting that the social context of an environment can shape the levels of emotion expression.
Biological factors also play a role in influencing emotion expression. One central biological argument is related to cognitive differences between genders. In a 2008 study using functional magnetic resonance imaging (fMRI) to monitor brain activity in participants, researchers found that men and women differ in neural responses when experiencing negative emotions. The authors of the study state: "Compared with women, men showed lesser increases in prefrontal regions that are associated with reappraisal, greater decreases in the amygdala, which is associated with emotional responding, and lesser engagement of ventral striatal regions, which are associated with reward processing." The way that male and female brains respond to emotions likely impacts the expression of those emotions.
The biological roots of gender differences interact with the social environment in various ways. Biological theorists propose that females and males have innate differences that exist at birth, but unfold with age and maturation in response to interactions with their specific environments. An important argument for this viewpoint is that "gender differences in emotion expression are the result of a combination of biologically based temperamental predispositions and the socialization of boys and girls to adopt gender-related display rules for emotion expression". It has been suggested that even infant males display higher levels of activity and arousal than do infant girls as well as a lower ability for language and behavior inhibitory controls, which are biologically based characteristics. This "nature" argument interacts with "nurture" in that "parents and other socialization agents may respond to boys in ways that dampen emotional expressiveness…as a way to down-regulate their high emotional arousal and activity levels". On the other hand, girls are encouraged to utilize their communication skills to verbally express their emotions to parents and other adults, which would also highlight expression differences between genders.
Controversies
Emotions are complex and involve different components, such as physiological arousal, expressive behaviors, and conscious experience. While the expressive component of emotion has been widely studied, it remains unclear whether or not men and women differ in other aspects of emotion. Most researchers agree that women are more emotionally expressive, but not that they experience more emotions than men do. Some studies have shown that women are more likely to produce inauthentic smiles than men do, while others have shown the opposite. This debate is significant because emotion can be generated by adopting an action that is associated with a particular emotion, such as smiling and speaking softly.
A possible explanation is that both men and women's emotional expressiveness is susceptible to social factors. Men and women may be reinforced by social and cultural standards to express emotions differently, but it is not necessarily true in terms of experiencing emotions. For instance, studies suggest that women often occupy roles that conform to feminine display rules, which require them to amplify their emotional response to impress others.
See also
Display rules
Gender polarization
Sex differences in emotional intelligence
References
Emotion
Gender roles
Gender-related stereotypes
Social constructionism
Sulfinic acid

Sulfinic acids are oxoacids of sulfur with the structure RS(O)OH. In these organosulfur compounds, sulfur is pyramidal.
Structure and properties
Sulfinic acids RSO2H are typically more acidic than the corresponding carboxylic acids RCO2H. Sulfur is pyramidal; consequently, sulfinic acids are chiral. The free acids are typically unstable, disproportionating to the sulfonic acid RSO3H and the thiosulfonate ester RSO2SR. The formal anhydride of a sulfinic acid has no bridging oxygen atom, but is instead a sulfinyl sulfone, R−S(O)−S(O)2−R, and disproportionation is believed to occur through the free-radical fission of this intermediate.
Alkylation of sulfinic acids can give either sulfones or sulfinate esters, depending on the solvent and reagent. Strongly polarized reactants (e.g. trimethyloxonium tetrafluoroborate) give esters, whereas relatively unpolarized reactants (e.g. an alkyl halide or enone) give sulfones. Sulfinates react with Grignard reagents to give sulfoxides, and undergo a variant of the Claisen condensation towards the same end.
Cobalt(III) salts can oxidize sulfinic acids to disulfones, although yields are only 30–50%.
Preparation
Sulfinic acids are often prepared in situ by acidification of the corresponding sulfinate salts, which are typically more robust than the acid. These salts are generated by reduction of sulfonyl chlorides with metals, although thiolates also reduce sulfonate thioesters to a sulfinate and a disulfide.
An alternative route is the reaction of Grignard reagents with sulfur dioxide. Transition metal sulfinates are also generated by insertion of sulfur dioxide into metal alkyls, a reaction that may proceed via a metal sulfur dioxide complex.
Sulfones may eliminate in base, particularly if a strong nucleophile is present; thus, for example, sodium cyanide causes bis(2-butanon-4-yl) sulfone to split into levulinonitrile and 3-oxobutane-1-sulfinic acid:
SO2((CH2)2Ac)2 + NaCN → NaSO2(CH2)2Ac + NC(CH2)2Ac
The nitrile presumably forms through conjugate addition of cyanide to the corresponding enone.
Friedel-Crafts addition of thionyl chloride to an alkene gives an α-chloro sulfinyl chloride, typically complexed to a Lewis acid. Likewise, a carbanion can attack thionyl chloride to give a sulfinyl chloride. Careful hydrolysis then gives a sulfinic acid. Sulfinyl chlorides attack sulfinates to give sulfinyl sulfones (sulfinic anhydrides).
Unsubstituted sulfinic acid, when R is the hydrogen atom, is a higher energy isomer of sulfoxylic acid, both of which are unstable.
Examples
An example of a simple, well-studied sulfinic acid is phenylsulfinic acid. A commercially important sulfinic acid is thiourea dioxide, which is prepared by the oxidation of thiourea with hydrogen peroxide.
(NH2)2CS + 2H2O2 → (NH)(NH2)CSO2H + 2H2O
Another commercially important sulfinic acid is hydroxymethyl sulfinic acid, which is usually employed as its sodium salt (HOCH2SO2Na). Called Rongalite, this anion is also commercially useful as a reducing agent.
Sulfinates
The conjugate base of a sulfinic acid is a sulfinate anion. The enzyme cysteine dioxygenase converts cysteine into the corresponding sulfinate. One product of this catabolic reaction is the sulfinic acid hypotaurine. Sulfinate also describes esters of sulfinic acid. Cyclic sulfinate esters are called sultines.
References
External links
Diagram at ucalgary.ca
Diagram at acdlabs.com
Functional groups
BTZ black hole

The BTZ black hole, named after Máximo Bañados, Claudio Teitelboim, and Jorge Zanelli, is a black hole solution for (2+1)-dimensional topological gravity with a negative cosmological constant.
History
In 1992, Bañados, Teitelboim, and Zanelli discovered the BTZ black hole solution. This came as a surprise, because when the cosmological constant is zero, a vacuum solution of (2+1)-dimensional gravity is necessarily flat (the Weyl tensor vanishes in three dimensions, while the Ricci tensor vanishes due to the Einstein field equations, so the full Riemann tensor vanishes), and it can be shown that no black hole solutions with event horizons exist. But thanks to the negative cosmological constant in the BTZ black hole, it is able to have remarkably similar properties to the (3+1)-dimensional Schwarzschild and Kerr black hole solutions, which model real-world black holes.
Properties
The similarities to the ordinary black holes in 3+1 dimensions:
It admits a no hair theorem, fully characterizing the solution by its ADM-mass, angular momentum and charge.
It has the same thermodynamical properties as traditional black hole solutions such as Schwarzschild or Kerr black holes, e.g. its entropy is captured by a law directly analogous to the Bekenstein bound in (3+1)-dimensions, essentially with the surface area replaced by the BTZ black hole's circumference.
Like the Kerr black hole, a rotating BTZ black hole contains an inner and an outer horizon, analogous to an ergosphere.
Since (2+1)-dimensional gravity has no Newtonian limit, one might fear that the BTZ black hole is not the final state of a gravitational collapse. It was shown, however, that this black hole can arise from collapsing matter, and the energy-momentum tensor of the BTZ solution can be calculated just as for (3+1)-dimensional black holes.
The BTZ solution is often discussed in the realm of (2+1)-dimensional quantum gravity.
The case without charge
The metric in the absence of charge is

$$ds^2 = -\frac{(r^2 - r_+^2)(r^2 - r_-^2)}{\ell^2 r^2}\,dt^2 + \frac{\ell^2 r^2}{(r^2 - r_+^2)(r^2 - r_-^2)}\,dr^2 + r^2\left(d\phi - \frac{r_+ r_-}{\ell r^2}\,dt\right)^2,$$

where $r_\pm$ are the outer and inner black hole radii and $\ell$ is the radius of AdS3 space. The mass and angular momentum of the black hole are

$$M = \frac{r_+^2 + r_-^2}{8G\ell^2}, \qquad J = \frac{r_+ r_-}{4G\ell}.$$
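As a numerical sanity check, assuming the common conventions $M = (r_+^2 + r_-^2)/(8G\ell^2)$ and $J = r_+ r_-/(4G\ell)$, these relations can be inverted to recover the horizon radii from the mass and angular momentum. The helper below is a hypothetical sketch (function name and unit choices are illustrative, not from any standard library):

```python
import math

def btz_horizons(M, J, ell, G=1.0):
    """Invert M = (r+^2 + r-^2)/(8 G ell^2) and J = r+ r- / (4 G ell)
    to find the outer and inner horizon radii of a BTZ black hole."""
    s = 8.0 * G * ell**2 * M   # r+^2 + r-^2
    p = 4.0 * G * ell * J      # r+ * r-
    disc = s**2 - 4.0 * p**2   # equals (r+^2 - r-^2)^2; negative means no horizons
    if disc < 0:
        raise ValueError("extremality bound |J| <= M*ell violated: no horizons")
    root = math.sqrt(disc)
    r_plus = math.sqrt(0.5 * (s + root))
    r_minus = math.sqrt(0.5 * (s - root))
    return r_plus, r_minus
```

Feeding the recovered radii back into the mass and angular momentum formulas reproduces the inputs, and the extremal case $J = M\ell$ corresponds to the two horizons merging.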
BTZ black holes without any electric charge are locally isometric to anti-de Sitter space. More precisely, it corresponds to an orbifold of the universal covering space of AdS3.
A rotating BTZ black hole admits closed timelike curves.
See also
Cosmic string
MTZ black hole
AdS black hole
References
Notes
Bibliography
Black holes
Quantum gravity
Mathematical methods in general relativity
Sour sanding

Sour sanding, or sour sugar, is a food ingredient that is used to impart a sour flavor to candy.
It is made from sugar along with citric acid, tartaric acid and malic acid.
It is used to coat sour candies such as lemon drops and Sour Patch Kids, or to make hard candies taste tart, such as SweeTarts.
See also
Acidulant
References
Food ingredients
Multi-tap

Multi-tap (multi-press) is a text entry system for mobile phones. The alphabet is printed under each key (beginning on "2") in a three-letter sequence as follows: ABC under the 2 key, DEF under the 3 key, etc. Exceptions are the "7" key, which adds a letter ("PQRS"), and the "9" key, which includes "Z". Punctuation is typically accessed via the "1" key, and various functions are mapped to the "*" key and "#" key.
The system is used by repeatedly pressing the same key to cycle through the letters for that key. For example, pressing the "3" key twice would indicate the letter "E". Pausing for a set period of time will automatically choose the current letter in the cycle, as will pressing a different key.
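The cycling behavior described above can be made concrete with a toy decoder. In this sketch, a space stands in for the timeout pause between letters, and the key map and function name are illustrative assumptions rather than any standard API:

```python
# Standard multi-tap letter layout ("7" carries four letters, "9" includes "Z").
KEYPAD = {
    '2': 'ABC', '3': 'DEF', '4': 'GHI', '5': 'JKL',
    '6': 'MNO', '7': 'PQRS', '8': 'TUV', '9': 'WXYZ',
}

def decode_multitap(presses: str) -> str:
    """Decode a multi-tap sequence; a space marks the pause between letters."""
    out = []
    for group in presses.split():
        key, n = group[0], len(group)
        letters = KEYPAD[key]
        # Repeated presses cycle through the key's letters, wrapping around.
        out.append(letters[(n - 1) % len(letters)])
    return ''.join(out)
```

For example, `decode_multitap("44 33 555 555 666")` yields "HELLO": two presses of "4" select "H", two of "3" select "E", and so on.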
It is commonly used in conjunction with text-messaging services. Some portable telecommunications devices (such as the BlackBerry) have bypassed the need for this by incorporating a mini-keyboard for users to type on. As of 2012, most mobile phones with fewer keys than alphabet letters offer a predictive text input method.
See also
Telephone keypad letter mapping
References
Input methods for handheld devices
Mobile phones
Mathematical descriptions of opacity

When an electromagnetic wave travels through a medium in which it gets attenuated (this is called an "opaque" or "attenuating" medium), it undergoes exponential decay as described by the Beer–Lambert law. However, there are many possible ways to characterize the wave and how quickly it is attenuated. This article describes the mathematical relationships among:
attenuation coefficient;
penetration depth and skin depth;
complex angular wavenumber and propagation constant;
complex refractive index;
complex electric permittivity;
AC conductivity (susceptance).
Note that in many of these cases there are multiple, conflicting definitions and conventions in common use. This article is not necessarily comprehensive or universal.
Background: unattenuated wave
Description
An electromagnetic wave propagating in the +z-direction is conventionally described by the equation:

$$\mathbf{E}(z,t) = \operatorname{Re}\left[\mathbf{E}_0\, e^{i(kz - \omega t)}\right],$$

where
E0 is a vector in the x-y plane, with the units of an electric field (the vector is in general a complex vector, to allow for all possible polarizations and phases);
ω is the angular frequency of the wave;
k is the angular wavenumber of the wave;
Re indicates real part;
e is Euler's number.
The wavelength is, by definition,

$$\lambda = \frac{2\pi}{k}.$$
For a given frequency, the wavelength of an electromagnetic wave is affected by the material in which it is propagating. The vacuum wavelength (the wavelength that a wave of this frequency would have if it were propagating in vacuum) is

$$\lambda_0 = \frac{2\pi c}{\omega},$$

where c is the speed of light in vacuum.
In the absence of attenuation, the index of refraction (also called refractive index) is the ratio of these two wavelengths, i.e.,

$$n = \frac{\lambda_0}{\lambda} = \frac{ck}{\omega}.$$
The intensity of the wave is proportional to the square of the amplitude, time-averaged over many oscillations of the wave, which amounts to:

$$I(z) \propto \left|\mathbf{E}_0\, e^{i(kz - \omega t)}\right|^2 = |\mathbf{E}_0|^2.$$

Note that this intensity is independent of the location z, a sign that this wave is not attenuating with distance. We define I0 to equal this constant intensity:

$$I(z) = I_0 \propto |\mathbf{E}_0|^2.$$
Complex conjugate ambiguity
Because

$$\operatorname{Re}\left[\mathbf{E}_0\, e^{i(kz - \omega t)}\right] = \operatorname{Re}\left[\mathbf{E}_0^*\, e^{-i(kz - \omega t)}\right],$$

either expression can be used interchangeably. Generally, physicists and chemists use the convention on the left (with e−iωt), while electrical engineers use the convention on the right (with e+iωt, for example see electrical impedance). The distinction is irrelevant for an unattenuated wave, but becomes relevant in some cases below. For example, there are two definitions of complex refractive index, one with a positive imaginary part and one with a negative imaginary part, derived from the two different conventions. The two definitions are complex conjugates of each other.
Attenuation coefficient
One way to incorporate attenuation into the mathematical description of the wave is via an attenuation coefficient:

$$\mathbf{E}(z,t) = e^{-\alpha z/2}\, \operatorname{Re}\left[\mathbf{E}_0\, e^{i(kz - \omega t)}\right],$$

where α is the attenuation coefficient.
Then the intensity of the wave satisfies:

$$I(z) \propto \left(e^{-\alpha z/2}\right)^2 = e^{-\alpha z},$$

i.e.

$$I(z) = I_0\, e^{-\alpha z}.$$
The attenuation coefficient, in turn, is simply related to several other quantities:
absorption coefficient is essentially (but not quite always) synonymous with attenuation coefficient; see attenuation coefficient for details;
molar absorption coefficient or molar extinction coefficient, also called molar absorptivity, is the attenuation coefficient divided by molarity (and usually multiplied by ln(10), i.e., decadic); see Beer-Lambert law and molar absorptivity for details;
mass attenuation coefficient, also called mass extinction coefficient, is the attenuation coefficient divided by density; see mass attenuation coefficient for details;
absorption cross section and scattering cross section are both quantitatively related to the attenuation coefficient; see absorption cross section and scattering cross section for details;
The attenuation coefficient is also sometimes called opacity; see opacity (optics).
Penetration depth and skin depth
Penetration depth
A very similar approach uses the penetration depth:

$$I(z) = I_0\, e^{-z/\delta_{\mathrm{pen}}},$$

where δpen is the penetration depth.
Skin depth
The skin depth is defined so that the wave satisfies:

$$\mathbf{E}(z,t) = e^{-z/\delta_{\mathrm{skin}}}\, \operatorname{Re}\left[\mathbf{E}_0\, e^{i(kz - \omega t)}\right],$$

where δskin is the skin depth.
Physically, the penetration depth is the distance which the wave can travel before its intensity reduces by a factor of 1/e ≈ 0.37. The skin depth is the distance which the wave can travel before its amplitude reduces by that same factor.

The absorption coefficient is related to the penetration depth and skin depth by

$$\alpha = \frac{1}{\delta_{\mathrm{pen}}} = \frac{2}{\delta_{\mathrm{skin}}}.$$
Complex angular wavenumber and propagation constant
Complex angular wavenumber
Another way to incorporate attenuation is to use the complex angular wavenumber:

$$\mathbf{E}(z,t) = \operatorname{Re}\left[\mathbf{E}_0\, e^{i(\underline{k}z - \omega t)}\right],$$

where $\underline{k}$ is the complex angular wavenumber.

Then the intensity of the wave satisfies:

$$I(z) \propto \left|e^{i\underline{k}z}\right|^2 = e^{-2\operatorname{Im}(\underline{k})\,z},$$

i.e.

$$I(z) = I_0\, e^{-2\operatorname{Im}(\underline{k})\,z}.$$

Therefore, comparing this to the absorption coefficient approach,

$$\operatorname{Re}(\underline{k}) = k, \qquad \operatorname{Im}(\underline{k}) = \frac{\alpha}{2}.$$
In accordance with the ambiguity noted above, some authors use the complex conjugate definition:

$$\mathbf{E}(z,t) = \operatorname{Re}\left[\mathbf{E}_0\, e^{-i(\underline{k}z - \omega t)}\right], \qquad \operatorname{Im}(\underline{k}) = -\frac{\alpha}{2}.$$
Propagation constant
A closely related approach, especially common in the theory of transmission lines, uses the propagation constant:

$$\mathbf{E}(z,t) = \operatorname{Re}\left[\mathbf{E}_0\, e^{-\gamma z}\, e^{i\omega t}\right],$$

where γ is the propagation constant.

Then the intensity of the wave satisfies:

$$I(z) \propto \left|e^{-\gamma z}\right|^2 = e^{-2\operatorname{Re}(\gamma)\,z},$$

i.e.

$$I(z) = I_0\, e^{-2\operatorname{Re}(\gamma)\,z}.$$

Comparing the two equations, the propagation constant and the complex angular wavenumber are related by:

$$\gamma = i\underline{k}^*,$$

where the * denotes complex conjugation.

The real part $\operatorname{Re}(\gamma) = \alpha/2$ is also called the attenuation constant, sometimes denoted α.

The imaginary part $\operatorname{Im}(\gamma) = k$ is also called the phase constant, sometimes denoted β.

Unfortunately, the notation is not always consistent. For example, $\underline{k}$ is sometimes called "propagation constant" instead of γ, which swaps the real and imaginary parts.
Complex refractive index
Recall that in nonattenuating media, the refractive index and angular wavenumber are related by:

$$n = \frac{c}{v} = \frac{ck}{\omega},$$
where
n is the refractive index of the medium;
c is the speed of light in vacuum;
v is the speed of light in the medium.
A complex refractive index can therefore be defined in terms of the complex angular wavenumber defined above:

$$\underline{n} = \frac{c\underline{k}}{\omega},$$

where $\underline{n}$ is the complex refractive index of the medium.
In other words, the wave is required to satisfy

$$\mathbf{E}(z,t) = \operatorname{Re}\left[\mathbf{E}_0\, e^{i(\underline{n}\omega z/c - \omega t)}\right].$$
Then the intensity of the wave satisfies:

$$I(z) \propto e^{-2\omega\operatorname{Im}(\underline{n})\,z/c},$$

i.e.

$$I(z) = I_0\, e^{-2\omega\operatorname{Im}(\underline{n})\,z/c}.$$

Comparing to the preceding section, we have

$$\operatorname{Re}(\underline{n}) = \frac{ck}{\omega}, \qquad \operatorname{Im}(\underline{n}) = \frac{c\alpha}{2\omega}.$$
The real part $\operatorname{Re}(\underline{n})$ is often (ambiguously) called simply the refractive index.

The imaginary part $\operatorname{Im}(\underline{n})$ is called the extinction coefficient and denoted κ.
In accordance with the ambiguity noted above, some authors use the complex conjugate definition, where the (still positive) extinction coefficient is minus the imaginary part of .
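To illustrate the bookkeeping among these conventions, assume the physics convention $\underline{n} = n + i\kappa$; then $\alpha = 2\omega\kappa/c = 4\pi\kappa/\lambda_0$, $\delta_{\mathrm{pen}} = 1/\alpha$, and $\delta_{\mathrm{skin}} = 2/\alpha$. The function below is an illustrative sketch (names are made up for this example):

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def attenuation_from_index(n_complex, vacuum_wavelength):
    """Given a complex refractive index n + i*kappa and a vacuum wavelength
    in metres, return (alpha, delta_pen, delta_skin)."""
    kappa = n_complex.imag                              # extinction coefficient
    alpha = 4.0 * math.pi * kappa / vacuum_wavelength   # equals 2*omega*kappa/c
    return alpha, 1.0 / alpha, 2.0 / alpha
```

For instance, a weakly absorbing glass with $\underline{n} = 1.5 + 0.01i$ at a 500 nm vacuum wavelength gives a penetration depth of a few micrometres, and the skin depth is always exactly twice the penetration depth.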
Complex electric permittivity
In nonattenuating media, the electric permittivity and refractive index are related by:

$$n = c\sqrt{\mu\varepsilon}\ \text{(SI)}, \qquad n = \sqrt{\mu\varepsilon}\ \text{(cgs)},$$
where
μ is the magnetic permeability of the medium;
ε is the electric permittivity of the medium.
"SI" refers to the SI system of units, while "cgs" refers to Gaussian-cgs units.
In attenuating media, the same relation is used, but the permittivity is allowed to be a complex number, called complex electric permittivity:

$$\underline{n} = c\sqrt{\mu\underline{\varepsilon}}\ \text{(SI)}, \qquad \underline{n} = \sqrt{\mu\underline{\varepsilon}}\ \text{(cgs)},$$

where $\underline{\varepsilon}$ is the complex electric permittivity of the medium.

Squaring both sides and using the results of the previous section gives (in SI units, with $\underline{n} = n + i\kappa$):

$$\operatorname{Re}(\underline{\varepsilon}) = \frac{n^2 - \kappa^2}{c^2\mu}, \qquad \operatorname{Im}(\underline{\varepsilon}) = \frac{2n\kappa}{c^2\mu}.$$
AC conductivity
Another way to incorporate attenuation is through the electric conductivity, as follows.
One of the equations governing electromagnetic wave propagation is the Maxwell-Ampere law (in SI units):

$$\nabla \times \mathbf{H} = \mathbf{J} + \frac{\partial \mathbf{D}}{\partial t},$$

where $\mathbf{D}$ is the displacement field.
Plugging in Ohm's law, $\mathbf{J} = \sigma\mathbf{E}$, and the definition of (real) permittivity, $\mathbf{D} = \varepsilon\mathbf{E}$, gives:

$$\nabla \times \mathbf{H} = \sigma\mathbf{E} + \varepsilon\frac{\partial \mathbf{E}}{\partial t},$$

where σ is the (real, but frequency-dependent) electrical conductivity, called AC conductivity.
With sinusoidal time dependence on all quantities, i.e.

$$\mathbf{H} = \operatorname{Re}\left[\mathbf{H}_0\, e^{-i\omega t}\right], \qquad \mathbf{E} = \operatorname{Re}\left[\mathbf{E}_0\, e^{-i\omega t}\right],$$

the result is

$$\nabla \times \mathbf{H}_0 = -i\omega\left(\varepsilon + \frac{i\sigma}{\omega}\right)\mathbf{E}_0.$$
If the current were not included explicitly (through Ohm's law), but only implicitly (through a complex permittivity), the quantity in parentheses would be simply the complex electric permittivity. Therefore,

$$\underline{\varepsilon} = \varepsilon + \frac{i\sigma}{\omega}.$$
Comparing to the previous section, the AC conductivity satisfies

$$\sigma = \omega \operatorname{Im}(\underline{\varepsilon}).$$
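Under the $e^{-i\omega t}$ convention used above, the relations $\underline{\varepsilon} = \varepsilon + i\sigma/\omega$ and $\sigma = \omega\operatorname{Im}(\underline{\varepsilon})$ invert each other, which a two-line sketch makes explicit (function names are illustrative):

```python
def complex_permittivity(eps_real, sigma, omega):
    """Build eps_underline = eps + i*sigma/omega (physics e^{-i omega t} convention)."""
    return complex(eps_real, sigma / omega)

def ac_conductivity(eps_c, omega):
    """Recover sigma = omega * Im(eps_underline)."""
    return omega * eps_c.imag
```

Starting from any real permittivity and conductivity, composing the two functions returns the original conductivity, confirming the pair of definitions is self-consistent.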
Notes
References
Electromagnetic radiation
Scattering, absorption and radiative transfer (optics)
Toxic equivalency factor

Toxic equivalency factor (TEF) expresses the toxicity of dioxins, furans and PCBs in terms of the most toxic form of dioxin, 2,3,7,8-TCDD. The toxicity of the individual congeners may vary by orders of magnitude.
With the TEFs, the toxicity of a mixture of dioxins and dioxin-like compounds can be expressed in a single number – the toxic equivalency (TEQ). It is a single figure resulting from the product of the concentration and individual TEF values of each congener.
The TEF/TEQ concept has been developed to facilitate risk assessment and regulatory control. While the initial and current set of TEFs only apply to dioxins and dioxin-like chemicals (DLCs), the concept can theoretically be applied to any group of chemicals satisfying the extensive similarity criteria used with dioxins, primarily that the main mechanism of action is shared across the group. Thus far, only the DLCs have had such a high degree of evidence of toxicological similarity.
There have been several systems over the years in operation, such as the International Toxic Equivalents for dioxins and furans only, represented as I-TEQDF, as well as several country-specific TEFs. The present World Health Organization scheme, represented as WHO-TEQDFP, which includes PCBs is now universally accepted.
Chemical mixtures and additivity
Humans and wildlife are rarely exposed to solitary contaminants, but rather to complex mixtures of potentially harmful compounds. Dioxins and DLCs are no exception. This is important to consider when assessing toxicity because the effects of chemicals in a mixture are often different from when acting alone. These differences can take place on the chemical level, where the properties of the compounds themselves change due to the interaction, creating a new dose at the target tissue and a quantitatively different effect. They may also act together (simple similar action) or independently on the organism at the receptor during uptake, when transported throughout the body, or during metabolism, to produce a joint effect. Joint effects are described as being additive (using dose, response/risk, or measured effect), synergistic, or antagonistic. A dose-additive response occurs when the mixture effect is determined by the sum of the component chemical doses, each weighted by its relative toxic potency. A risk-additive response occurs when the mixture response is the sum of component risks, based on the probability law of independent events. An effect-additive mixture response occurs when the combined effect of exposure to a chemical mixture is equal to the sum of the separate component chemical effects, e.g., incremental changes in relative liver weight. Synergism occurs when the combined effect of chemicals together is greater than the additivity prediction based on their separate effects. Antagonism describes where the combined effect is less than the additive prediction. Clearly it is important to identify which kind of additivity is being used. These effects reflect the underlying modes of action and mechanisms of toxicity of the chemicals.
Additivity is an important concept here because the TEF method operates under the assumption that the assessed contaminants are dose-additive in mixtures. Because dioxins and DLCs act similarly at the AhR, their individual quantities in a mixture can be added together as proportional values, i.e. TEQs, to assess the total potency. This notion is fairly well supported by research. Some interactions have been observed and some uncertainties remain, including application to other than oral intake.
TEF
Exposure to environmental media containing 2,3,7,8-TCDD and other dioxins and dioxin-like compounds can be harmful to humans as well as to wildlife. These chemicals are resistant to metabolism and biomagnify up the food chain. Toxic and biological effects of these compounds are mediated through the aryl hydrocarbon receptor (AhR). Oftentimes results of human activity leads to instances of these chemicals as mixtures of DLCs in the environment. The TEF approach has also been used to assess the toxicity of other chemicals including PAHs and xenoestrogens.
The TEF approach uses an underlying assumption of additivity associated with these chemicals that takes into account chemical structure and behavior. For each chemical the model uses comparative measures from individual toxicity assays, known as relative effect potency (REP), to assign a single scaling factor known as the TEF.
TCDD
2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) is the reference chemical to which the toxicity of other dioxins and DLCs are compared. TCDD is the most toxic DLC known. Other dioxins and DLCs are assigned a scaling factor, or TEF, in comparison to TCDD. TCDD has a TEF of 1.0. Sometimes PCB 126 is also used as a reference chemical, with a TEF of 0.1.
Determination of TEF
TEFs are determined using a database of REPs that meet WHO established criteria, using different biological models or endpoints and are considered estimates with an order of magnitude of uncertainty. The characteristics necessary for inclusion of a compound in the WHO's TEF approach include:
Structural similarity to polychlorinated dibenzo-p-dioxins or polychlorinated dibenzofurans
Capacity to bind to the aryl hydrocarbon receptor (AhR)
Capacity to elicit AhR-mediated biochemical and toxic responses
Persistence and accumulation in the food chain
All viable REPs for a chemical are compiled into a distribution, and the TEF is selected based on half order of magnitude increments on a logarithmic scale. The TEF is typically selected from the 75th percentile of the REP distribution in order to be protective of health.
In vivo and in vitro studies
REP distributions are not weighted to give more importance to certain types of studies. Current focus of REPs is on in vivo studies rather than in vitro. This is because all types of in vivo studies (acute, subchronic, etc.) and different endpoints have been combined, and associated REP distributions are shown as a single box plot.
TEQ
Toxic Equivalents (TEQs) report the toxicity-weighted concentration of mixtures of PCDDs, PCDFs, and PCBs. The reported value provides toxicity information about the mixture of chemicals and is more meaningful to toxicologists than reporting the total concentration. To obtain TEQs, the concentration of each chemical in a mixture is multiplied by its TEF and is then summed with all other chemicals to report the total toxicity-weighted concentration. TEQs are then used for risk characterization and management purposes, such as prioritizing areas of cleanup.
Calculation
The toxic equivalency of a mixture is defined by the sum of the concentrations of individual compounds (Ci) multiplied by their relative toxicity (TEF):
TEQ = Σ[Ci × TEFi]
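The summation is straightforward to apply in practice. The sketch below uses a small illustrative subset of WHO 2005 mammalian TEF values (the numbers shown are the published values for these four congeners, but any real assessment should use the complete, current WHO tables):

```python
# Illustrative subset of WHO 2005 mammalian TEFs.
WHO_TEF = {
    "2,3,7,8-TCDD": 1.0,
    "1,2,3,7,8-PeCDD": 1.0,
    "2,3,4,7,8-PeCDF": 0.3,
    "PCB 126": 0.1,
}

def teq(concentrations):
    """TEQ = sum(C_i * TEF_i). All concentrations share one unit (e.g. pg/g);
    the result is a TCDD-equivalent concentration in that same unit."""
    return sum(c * WHO_TEF[name] for name, c in concentrations.items())
```

For example, a sample containing 2.0 pg/g of 2,3,7,8-TCDD and 10.0 pg/g of PCB 126 has a TEQ of 2.0 × 1.0 + 10.0 × 0.1 = 3.0 pg TEQ/g.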
Applications
Risk assessment
Risk assessment is the process by which one estimates the probability of some adverse effect, such as that of a contaminant in the environment. Environmental risk assessments are conducted to help protect human health and the environment and are often used to assist in meeting regulations such as those stipulated by CERCLA in the United States. Risk assessments may take place retroactively, i.e., when assessing the contamination hazard at a superfund site, or predictively, such as when planning waste discharges.
The complex nature of chemical mixtures in the environment presents a challenge to risk assessment. The TEF approach was developed to help assess the toxicity of DLCs and other environmental contaminants with additive effects and is currently endorsed by the World Health Organization.
Human health
Human exposure to dioxins and DLCs is a cause for public and regulatory concern. Health concerns include endocrine, developmental, immune and carcinogenic effects. The route of exposure is primarily through the ingestion of animal products such as meat, dairy, fish, and human breast milk. However, humans are also exposed to high levels of "natural dioxins" in cooked foods and vegetables. The human diet accounts for over 95% of the total uptake of TEQ.
Risks in humans are typically calculated from known ingestion of contaminants or from blood or adipose tissue samples. However, human intake data is limited, and calculations from blood and tissue are not well supported. This presents a limitation to the TEF application in risk assessment to humans.
Fish and wildlife
DLC exposure to wildlife results from various sources, including the atmospheric deposition of emissions (e.g. waste incineration) over terrestrial and aquatic habitats and contamination from waste effluents. Contaminants then bioaccumulate up the food chain. The WHO has derived TEFs for fish, bird, and mammal species; however, differences among taxa for some compounds are orders of magnitude apart. Compared to mammals, fish are less responsive to mono-ortho PCBs.
Limitations
The TEF approach to DLC risk assessment operates under certain assumptions, which carry varying degrees of uncertainty. These assumptions include:
Individual compounds all act through the same biologic pathway
Individual effects are dose-additive
Dose-response curves are similarly shaped
Individual compounds are similarly distributed throughout the body
TEFs are assumed to be equivalent for all effects, all exposure scenarios, and all species, although this may not be the reality. The TEF method only accounts for toxicity effects related to the AhR mechanism; however, some DLC toxicity may be mediated through other processes. Dose-additivity may not be applicable to all DLCs and exposure scenarios, particularly those involving low doses. Interactions with other chemicals that may induce antagonistic effects are not considered, and those may be species-specific. In terms of human health risk assessments, estimates of relative potency from animal studies are assumed to be predictive of toxicity in humans, although there are species-specific differences in the AhR. Nevertheless, in vivo mixture studies have shown that the WHO 1998 TEF values predicted mixture toxicity within a factor of two or less.
A probabilistic approach may provide an advantage in the determination of TEFs because it would better describe the level of uncertainty present in a TEF value.
The use of TEF values to assess abiotic matrices such as soil, sediment, and water is problematic because TEF values are primarily calculated from oral intake studies.
History and development
TEFs and guidance on their use have been under development since the 1980s. As the science progresses, new research informs the criteria for assigning TEFs. The World Health Organization has convened expert panels to reach a global consensus on assigning TEFs in light of new data. Individual countries recommend their own TEF values, typically endorsing the WHO global consensus TEFs.
Other compounds for potential inclusion
Based on mechanistic considerations, PCB 37, PBDDs, PBDFs, PXCDDs, PXCDFs, PCNs, PBNs, and PBBs could be included in the TEF concept. However, most of these compounds lack human exposure data. Thus, TEF values for these compounds are in the process of review.
See also
Dioxins and dioxin-like compounds
Sources
TRI Dioxin and Dioxin-like Compounds Toxic Equivalency Reporting Rule – Proposed Rule (US EPA). Archived at WebCite on 2 October 2012.
Concentration indicators
Environmental toxicology
Equivalent units | Toxic equivalency factor | Mathematics,Environmental_science | 2,367 |
22,834 | https://en.wikipedia.org/wiki/Ozone%20layer | The ozone layer or ozone shield is a region of Earth's stratosphere that absorbs most of the Sun's ultraviolet radiation. It contains a high concentration of ozone (O3) in relation to other parts of the atmosphere, although still small in relation to other gases in the stratosphere. The ozone layer contains less than 10 parts per million of ozone, while the average ozone concentration in Earth's atmosphere as a whole is about 0.3 parts per million. The ozone layer is mainly found in the lower portion of the stratosphere, from approximately above Earth, although its thickness varies seasonally and geographically.
The ozone layer was discovered in 1913 by French physicists Charles Fabry and Henri Buisson. Measurements of the sun showed that the radiation sent out from its surface and reaching the ground on Earth is usually consistent with the spectrum of a black body with a temperature in the range of , except that there was no radiation below a wavelength of about 310 nm at the ultraviolet end of the spectrum. It was deduced that the missing radiation was being absorbed by something in the atmosphere. Eventually the spectrum of the missing radiation was matched to only one known chemical, ozone. Its properties were explored in detail by the British meteorologist G. M. B. Dobson, who developed a simple spectrophotometer (the Dobsonmeter) that could be used to measure stratospheric ozone from the ground. Between 1928 and 1958, Dobson established a worldwide network of ozone monitoring stations, which continue to operate to this day. The "Dobson Unit" (DU), a convenient measure of the amount of ozone overhead, is named in his honor.
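The Dobson Unit lends itself to a quick back-of-the-envelope conversion; the sketch below assumes its standard definition (1 DU corresponds to a 0.01 mm layer of pure ozone at standard temperature and pressure, about 2.687 × 10^16 molecules per cm²):

```python
MOLECULES_PER_CM2_PER_DU = 2.687e16  # molecules of O3 per cm^2 in one Dobson Unit

def du_to_column(du):
    """Return (equivalent layer thickness in mm at STP, molecules per cm^2)."""
    return du * 0.01, du * MOLECULES_PER_CM2_PER_DU

# A typical mid-latitude ozone column of 300 DU:
thickness_mm, molecules = du_to_column(300)
print(thickness_mm)  # 3.0 -- the entire ozone column compressed to a ~3 mm sheet
```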
The ozone layer absorbs 97 to 99 percent of the Sun's medium-frequency ultraviolet light (from about 200 nm to 315 nm wavelength), which otherwise would potentially damage exposed life forms near the surface.
In 1985, atmospheric research revealed that the ozone layer was being depleted by chemicals released by industry, mainly chlorofluorocarbons (CFCs). Concerns that increased UV radiation due to ozone depletion threatened life on Earth, including increased skin cancer in humans and other ecological problems, led to bans on the chemicals, and the latest evidence is that ozone depletion has slowed or stopped. The United Nations General Assembly has designated September 16 as the International Day for the Preservation of the Ozone Layer.
Venus also has a thin ozone layer at an altitude of 100 kilometers above the planet's surface.
Sources
The photochemical mechanisms that give rise to the ozone layer were discovered by the British physicist Sydney Chapman in 1930. Ozone in the Earth's stratosphere is created by ultraviolet light striking ordinary oxygen molecules containing two oxygen atoms (O2), splitting them into individual oxygen atoms (atomic oxygen); the atomic oxygen then combines with unbroken O2 to create ozone, O3. The ozone molecule is unstable (although, in the stratosphere, long-lived) and when ultraviolet light hits ozone it splits into a molecule of O2 and an individual atom of oxygen, a continuing process called the ozone–oxygen cycle. Chemically, this can be described as:
O2 + hν(UV) → 2 O
O + O2 ⇌ O3
About 90 percent of the ozone in the atmosphere is contained in the stratosphere. Ozone concentrations are greatest between about , where they range from about 2 to 8 parts per million. If all of the ozone were compressed to the pressure of the air at sea level, it would be only thick.
Ultraviolet light
Although the concentration of the ozone in the ozone layer is very small, it is vitally important to life because it absorbs biologically harmful ultraviolet (UV) radiation coming from the Sun. Extremely short or vacuum UV (10–100 nm) is screened out by nitrogen. UV radiation capable of penetrating nitrogen is divided into three categories, based on its wavelength; these are referred to as UV-A (400–315 nm), UV-B (315–280 nm), and UV-C (280–100 nm).
UV-C, which is very harmful to all living things, is entirely screened out by a combination of dioxygen (< 200 nm) and ozone (> about 200 nm) by around altitude. UV-B radiation can be harmful to the skin and is the main cause of sunburn; excessive exposure can also cause cataracts, immune system suppression, and genetic damage, resulting in problems such as skin cancer. The ozone layer (which absorbs from about 200 nm to 310 nm with a maximal absorption at about 250 nm) is very effective at screening out UV-B; for radiation with a wavelength of 290 nm, the intensity at the top of the atmosphere is 350 million times stronger than at the Earth's surface. Nevertheless, some UV-B, particularly at its longest wavelengths, reaches the surface, and is important for the skin's production of vitamin D in mammals.
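The huge intensity ratio quoted above follows from Beer–Lambert attenuation through the ozone column. The sketch below reproduces the order of magnitude; the 290 nm absorption cross-section used here is an illustrative assumption, not a measured reference value.

```python
import math

sigma = 2.1e-18             # assumed O3 cross-section near 290 nm, cm^2 per molecule
column = 350 * 2.687e16     # molecules/cm^2 for a 350 DU ozone column

# Beer-Lambert law: I_surface = I_top * exp(-sigma * column)
attenuation = math.exp(-sigma * column)
print(1.0 / attenuation)    # ~4e8 -- hundreds of millions, matching the scale in the text
```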
Ozone is transparent to most UV-A, so most of this longer-wavelength UV radiation reaches the surface, and it constitutes most of the UV reaching the Earth. This type of UV radiation is significantly less harmful to DNA, although it may still potentially cause physical damage, premature aging of the skin, indirect genetic damage, and skin cancer.
Distribution in the stratosphere
The thickness of the ozone layer varies worldwide and is generally thinner near the equator and thicker near the poles. Thickness refers to how much ozone is in a column over a given area and varies from season to season. These variations are due to atmospheric circulation patterns and solar intensity.
The majority of ozone is produced over the tropics and is transported towards the poles by stratospheric wind patterns. In the northern hemisphere these patterns, known as the Brewer–Dobson circulation, make the ozone layer thickest in the spring and thinnest in the fall. Ozone production in the tropics begins when circulation lifts ozone-poor air out of the troposphere and into the stratosphere, where solar UV photolyzes oxygen molecules and turns them into ozone. The ozone-rich air is then carried to higher latitudes and drops into lower layers of the atmosphere.
Research has found that ozone levels in the United States are highest in the spring months of April and May and lowest in October. While the total amount of ozone increases moving from the tropics to higher latitudes, the concentrations are greater in high northern latitudes than in high southern latitudes, with spring ozone columns in high northern latitudes occasionally exceeding 600 DU and averaging 450 DU, whereas 400 DU constituted a usual maximum in the Antarctic before anthropogenic ozone depletion. This difference occurred naturally because of the weaker polar vortex and stronger Brewer–Dobson circulation in the northern hemisphere, owing to that hemisphere's large mountain ranges and greater contrasts between land and ocean temperatures. The difference between high northern and southern latitudes has increased since the 1970s due to the ozone hole phenomenon. The highest amounts of ozone are found over the Arctic during the spring months of March and April, while the lowest amounts over the Antarctic occur during September and October.
Depletion
The ozone layer can be depleted by free radical catalysts, including nitric oxide (NO), nitrous oxide (N2O), hydroxyl (OH), atomic chlorine (Cl), and atomic bromine (Br). While there are natural sources for all of these species, the concentrations of chlorine and bromine increased markedly in recent decades because of the release of large quantities of man-made organohalogen compounds, especially chlorofluorocarbons (CFCs) and bromofluorocarbons. These highly stable compounds are capable of surviving the rise to the stratosphere, where Cl and Br radicals are liberated by the action of ultraviolet light. Each radical is then free to initiate and catalyze a chain reaction capable of breaking down over 100,000 ozone molecules. By 2009, nitrous oxide was the largest ozone-depleting substance (ODS) emitted through human activities.
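The chlorine chain reaction mentioned above is a catalytic cycle: the Cl radical consumed in the first step is regenerated in the second, so a single atom can destroy many ozone molecules before it is removed from the stratosphere:

Cl + O3 → ClO + O2
ClO + O → Cl + O2
net: O3 + O → 2 O2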
The breakdown of ozone in the stratosphere results in reduced absorption of ultraviolet radiation. Consequently, unabsorbed and dangerous ultraviolet radiation is able to reach the Earth's surface at a higher intensity. Ozone levels have dropped by a worldwide average of about 4 percent since the late 1970s. For approximately 5 percent of the Earth's surface, around the north and south poles, much larger seasonal declines have been seen, and are described as "ozone holes". "Ozone holes" are actually patches in the ozone layer in which the ozone is thinner. The thinnest parts of the ozone are at the polar points of Earth's axis. The discovery of the annual depletion of ozone above the Antarctic was first announced by Joe Farman, Brian Gardiner and Jonathan Shanklin, in a paper which appeared in Nature on May 16, 1985.
Regulation attempts have included, but have not been limited to, the Clean Air Act implemented by the United States Environmental Protection Agency. The Clean Air Act introduced the requirement of National Ambient Air Quality Standards (NAAQS), with ozone pollution being one of six criteria pollutants. This regulation has proven to be effective, since counties, cities, and tribal regions must abide by these standards, and the EPA also provides assistance for each region to regulate contaminants. Effective presentation of information has also proven to be important in order to educate the general population about the existence and regulation of ozone depletion and contaminants. A scientific paper by Sheldon Ungar explores how information about ozone depletion, climate change, and related topics was communicated to the public. The ozone case was communicated to lay persons "with easy-to-understand bridging metaphors derived from the popular culture" and related to "immediate risks with everyday relevance". The specific metaphors used in the discussion (ozone shield, ozone hole) proved quite useful and, compared to global climate change, the ozone case was seen much more as a "hot issue" posing imminent risk. Lay people were cautious about a depletion of the ozone layer and the risks of skin cancer.
Satellites burning up upon re-entry into Earth's atmosphere produce aluminum oxide (Al2O3) nanoparticles that endure in the atmosphere for decades. Estimates for 2022 alone were ~17 metric tons (~30 kg of nanoparticles per ~250 kg satellite). Increasing populations of satellite constellations can eventually lead to significant ozone depletion.
"Bad" ozone can cause adverse respiratory effects (difficulty breathing) and is a proven aggravator of respiratory illnesses such as asthma, COPD, and emphysema. That is why many countries have put in place regulations to improve "good" ozone and prevent the increase of "bad" ozone in urban or residential areas. In terms of ozone protection (the preservation of "good" ozone), the European Union has strict guidelines on what products are allowed to be bought, distributed, or used in specific areas. With effective regulation, the ozone layer is expected to heal over time.
In 1978, the United States, Canada and Norway enacted bans on CFC-containing aerosol sprays that damage the ozone layer but the European Community rejected a similar proposal. In the U.S., chlorofluorocarbons continued to be used in other applications, such as refrigeration and industrial cleaning, until after the discovery of the Antarctic ozone hole in 1985. After negotiation of an international treaty (the Montreal Protocol), CFC production was capped at 1986 levels with commitments to long-term reductions. This allowed for a ten-year phase-in for developing countries (identified in Article 5 of the protocol). Since then, the treaty was amended to ban CFC production after 1995 in developed countries, and later in developing countries. All of the world's 197 countries have signed the treaty. Beginning January 1, 1996, only recycled or stockpiled CFCs were available for use in developed countries like the US. The production phaseout was possible because of efforts to ensure that there would be substitute chemicals and technologies for all ODS uses.
On August 2, 2003, scientists announced that the global depletion of the ozone layer might be slowing because of the international regulation of ozone-depleting substances. In a study organized by the American Geophysical Union, three satellites and three ground stations confirmed that the upper-atmosphere ozone-depletion rate slowed significantly over the previous decade. Some breakdown was expected to continue because of ODSs used by nations which have not banned them, and because of gases already in the stratosphere. Some ODSs, including CFCs, have very long atmospheric lifetimes ranging from 50 to over 100 years. It has been estimated that the ozone layer will recover to 1980 levels near the middle of the 21st century. A gradual trend toward "healing" was reported in 2016.
Compounds containing C–H bonds (such as hydrochlorofluorocarbons, or HCFCs) have been designed to replace CFCs in certain applications. These replacement compounds are more reactive and less likely to survive long enough in the atmosphere to reach the stratosphere where they could affect the ozone layer. While being less damaging than CFCs, HCFCs can have a negative impact on the ozone layer, so they are also being phased out. These in turn are being replaced by hydrofluorocarbons (HFCs) and other compounds that do not destroy stratospheric ozone at all.
The residual effects of CFCs accumulating within the atmosphere lead to a concentration gradient between the atmosphere and the ocean. This organohalogen compound is able to dissolve into the ocean's surface waters and is able to act as a time-dependent tracer. This tracer helps scientists study ocean circulation by tracing biological, physical and chemical pathways.
Implications for astronomy
As ozone in the atmosphere prevents most energetic ultraviolet radiation reaching the surface of the Earth, astronomical data in these wavelengths have to be gathered from satellites orbiting above the atmosphere and ozone layer. Most of the light from young hot stars is in the ultraviolet and so study of these wavelengths is important for studying the origins of galaxies. The Galaxy Evolution Explorer, GALEX, is an orbiting ultraviolet space telescope launched on April 28, 2003, which operated until early 2012.
See also
Cambrian explosion
Nuclear winter
Oxygen
United Nations Environment Programme
Short-lived climate pollutants
References
Further reading
Science
Ritchie, Hannah, "What We Learned from Acid Rain: By working together, the nations of the world can solve climate change", Scientific American, vol. 330, no. 1 (January 2024), pp. 75–76. "[C]ountries will act only if they know others are willing to do the same. With acid rain, they did act collectively.... We did something similar to restore Earth's protective ozone layer.... [T]he cost of technology really matters.... In the past decade the price of solar energy has fallen by more than 90 percent and that of wind energy by more than 70 percent. Battery costs have tumbled by 98 percent since 1990, bringing the price of electric cars down with them....[T]he stance of elected officials matters more than their party affiliation.... Change can happen – but not on its own. We need to drive it." (p. 76.)
United Nations Environment Programme (2010). Environmental Effects of Ozone Depletion and its Interactions with Climate Change: 2010 Assessment. Nairobi: UNEP.
Policy
(Ambassador Benedick was the Chief U.S. Negotiator at the meetings that resulted in the Montreal Protocol.)
External links
Stratospheric ozone: an electronic textbook
Ozone Layer Info
The CAMS stratospheric ozone service delivers maps, datasets and validation reports about the past and current state of the ozone layer.
Layer
Ultraviolet radiation | Ozone layer | Physics,Chemistry | 3,293 |
18,558,539 | https://en.wikipedia.org/wiki/Nitrocellulose%20slide | A nitrocellulose slide (or nitrocellulose film slide) is a glass microscope slide that is coated with nitrocellulose that is used to bind biological material, often protein, for colorimetric and fluorescence detection assays. For this purpose, a nitrocellulose slide is generally considered to be superior to glass, because it binds a great deal more protein, and protects the tertiary structure of the protein (and other biological material, i.e.: cells). Typically, nitrocellulose slides have a thin, opaque film of nitrocellulose on a standard 25mm × 75 mm glass microscope slide. The film is extremely sensitive to contact, and to foreign material; contact causes deformation and deposition of material, especially liquids.
A nitrocellulose slide is different from a nitrocellulose membrane, which usually filters protein from solution (e.g., in physicians' office pregnancy tests), though it serves a similar goal: to detect the presence and/or concentration level of certain biological material.
Microarrays
Nitrocellulose slides are used mainly in proteomics to do protein microarrays with automated systems that print the slides and record results. Microarrays of cell analytes, arrays of cell lysate, antibody microarrays, tissue printing, immunoarrays, etc. are also possible with the slide.
Nitrocellulose fluorescence
Due to their high surface roughness, conventional white nitrocellulose films scatter and reflect large amounts of excitation and emission light during the fluorescence detection in the microarray scanner. In addition, nitrocellulose exhibits a natural autofluorescence at the detection wavelengths commonly used. Both these factors lead to a high background fluorescent signal from these membrane slides. To overcome this problem, a new process has been developed to generate black membranes that absorb the scattered light, significantly reducing the background auto-fluorescence and thus offering a very low and homogenous auto-fluorescence to achieve a significantly improved dynamic range. These slides are commercially available through Schott AG. Nevertheless, conventional white nitrocellulose films continue to be the dominant surface for many protein microarray applications because the claims above have not proved relevant to end user requirements. Regardless, nitrocellulose slide manufacturers like Grace Bio-Labs continue to develop new nitrocellulose surfaces to further optimize their use in protein microarrays.
A method for protein quantitation on nitrocellulose coated glass slides uses near-IR fluorescent detection with quantum dots. Traditional porous nitrocellulose signal to noise is limited by auto-fluorescence of the nitrocellulose at the respective required wavelengths of excitation and emission for standard organic fluorescent detection probes. Near IR detection probes are excited and read at emission wavelengths outside the range of nitrocellulose fluorescence.
References
Microscopy
Biochemistry methods | Nitrocellulose slide | Chemistry,Biology | 637 |
66,226,078 | https://en.wikipedia.org/wiki/N-%28n-Butyl%29thiophosphoric%20triamide | N-(n-Butyl)thiophosphoric triamide (NBPT) is the organophosphorus compound with the formula SP(NH2)2(NHC4H9). It is an amide of thiophosphoric acid. A white solid, NBPT is an "enhanced efficiency fertilizer", intended to limit the release of nitrogen-containing gases following fertilization. Regarding its chemical structure, the molecule features tetrahedral phosphorus bonded to sulfur and three amido groups.
Use
NBPT functions as an inhibitor of the enzyme urease. Urease, pervasive in soil microorganisms, converts urea into ammonia, which is susceptible to volatilization if produced faster than it can be utilized by plants. Approximately 0.5% by weight NBPT is mixed with the urea.
See also
Phenyl phosphorodiamidate, another urease inhibitor
References
Thiophosphoryl compounds
Soil improvers
Fertilizers | N-(n-Butyl)thiophosphoric triamide | Chemistry | 217 |
6,064,125 | https://en.wikipedia.org/wiki/Perispirit | In Spiritism, perispirit or perisprit is the subtle body that is used by the spirit to connect with the perceptions created by the brain. The term is found among the extensive terminology originally devised by Allan Kardec in his books about Spiritism. Its first use was in a commentary (by Kardec) to the answer given by the spirits to the 93rd question of The Spirits Book:
Is the spirit, properly so called, without a covering, or is it, as some declare, surrounded by a substance of some kind?
"The spirit is enveloped in a substance which would appear to you as mere vapor, but which, nevertheless, appears very gross to us, though it is sufficiently vaporous to allow the spirit to float in the atmosphere, and to transport himself through space at pleasure."
As the germ of a fruit is surrounded by the perisperm so the spirit, properly so called, is surrounded by an envelope which, by analogy, may be designated as the perispirit.
Kardec felt compelled to develop the notion further, especially by giving a "scientific" foundation to his theory. He studied the properties of what were then called "fluids" (electricity, magnetism, heat) and broadened the research towards those he termed "psychic" or "spiritual fluids". Both terms, especially the former, have stuck and are still used (or abused) to this day.
Properties
In Kardec's later conception, found in The Book on Mediums, he described the perispirit (then assumed as "technical term") in terms of a "fluidic body" with the following properties:
It is made of the "Universal Cosmic Fluid", which in different densities and states, is the source of all matter;
It enclosed the spirit proper;
It gave the spirit an appearance drawn from his previous life and his current state, serving as a shape by which spirits saw each other;
It sends forth "fluidic" emanations that can affect those around;
Being "subtle" and semi-material, it was able to act as a bond between the physical body (material) and the spirit (immaterial);
It allowed the spirit to act over matter other than that of its body, to some extent;
It is constantly under change, as the spirit progresses and may eventually be harmed, even destroyed.
It won't be necessary any more when all spirits attain perfection.
Importance for mediumship
The perispirit plays a key role in the phenomenon of mediumship, which actually involves the interaction of the perispirit of the medium and that of a disembodied spirit.
When invited to our plane of existence by a medium, "spirits who inhabit worlds of higher degree than ours ... are obliged to clothe themselves with" a garment composed of perispirit.
"The most elevated spirits, when they come to visit us, assume a terrestrial perispirit, which they retain during their stay among us".
"According to Kardec, it is through the perispirit that disincarnate spirits ... can move objects." (Thus, the perispirit is responsible for poltergeist manifestations.)
New-Age conception
Due to syncretism, some variations of spiritism accept the perispirit as an actual "body" possessing power centres, defined more or less in the same way that Theosophy and Yoga define the Chakras, thus making the concept of perispirit similar to that of an astral body, a concept that was unknown to Kardec.
According to this orientalizing view, the perispirit had the function of modelling the physical body ("soma" [in Greek; "deha" in Sanskrit]) after the design determined by the karma, with each chakra linking itself to a gland and to the nervous system. This perispirit would use the chakras to command the body and to receive sensorial impressions from it.
Garment of soul in Gnosticism
In Mandaic soteriology, the soul of the dead, upon entering the House of Life, "receives a garment and a wreath." (Here, the "garment" = perispirit; and the "wreath" = halo.)
The "metaphor of soul as garment" is a commonplace in Russian mysticism.
Notes
References
Allan Kardec (translated by Anna Blackwell): The Spirits' Book.
Carlos S. Alvarado : "Human Radiations". Journal of the Society for Psychical Research. Vol. 70.3; No. 884. http://www.healthsystem.virginia.edu/internet/personalitystudies/publicationslinks/alvarado-human-radiations-jspr-2006.pdf
Spiritism
Vitalism | Perispirit | Biology | 1,003 |
49,212,014 | https://en.wikipedia.org/wiki/Tricholoma%20moserianum | Tricholoma moserianum is a European mushroom of the agaric genus Tricholoma. It was formally described in 1989 by Marcel Bon. The specific epithet honours Austrian mycologist Meinhard Moser.
See also
List of Tricholoma species
References
moserianum
Fungi described in 1990
Fungi of Europe
Fungus species | Tricholoma moserianum | Biology | 71 |
8,131,122 | https://en.wikipedia.org/wiki/Network%20management%20application | In the network management model, a network management application (NMA) is the software that sits on the network management station (NMS) and retrieves data from management agents (MAs) for the purpose of monitoring and controlling various devices on the network. It is defined by the ISO/OSI network management model and its subset of protocols, namely Simple Network Management Protocol (SNMP) and Common Management Information Protocol (CMIP).
References
Network management | Network management application | Technology,Engineering | 94 |
5,794,294 | https://en.wikipedia.org/wiki/Socioecology | Socioecology is the scientific study of how social structure and organization are influenced by an organism's environment. Socioecology is primarily related to anthropology, geography, sociology, and ecology. Specifically, the term is used in human ecology, the study of the interaction between humans and their environment. Socioecological models of human health examine the interaction of many factors, ranging from narrowest (individual behaviors) to broadest (federal policies). The factors of socioecological models consist of individual behaviors, sociodemographic factors (race, education, socioeconomic status), interpersonal factors (romantic, family, and coworker relationships), community factors (physical and social environment), and societal factors (local, state, and federal policies.
References
External links
Socioecology Research Today (free online)
Environmental social science | Socioecology | Environmental_science | 169 |
24,508,674 | https://en.wikipedia.org/wiki/Gymnopilus%20pseudofulgens | Gymnopilus pseudofulgens is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus pseudofulgens at Index Fungorum
pseudofulgens
Fungus species | Gymnopilus pseudofulgens | Biology | 52 |
14,853,095 | https://en.wikipedia.org/wiki/High%20potential%20iron%E2%80%93sulfur%20protein | High potential iron-sulfur proteins (HIPIP) are a class of iron-sulfur proteins. They are ferredoxins that participate in electron transfer in photosynthetic bacteria as well as in Paracoccus denitrificans.
Structure
The HiPIPs are small proteins, typically containing 63 to 85 amino acid residues. The sequences show significant variation. As shown in the following schematic representation the iron-sulfur cluster is bound by four conserved cysteine residues.
                   [4Fe-4S cluster]
                   | |       |     |
xxxxxxxxxxxxxxxxxxxCxCxxxxxxxCxxxxxCxxxx
C: conserved cysteine residue involved in the binding of the 4Fe-4S core.
[Fe4S4] clusters
The [Fe4S4] clusters are abundant cofactors of metalloproteins. They participate in electron-transfer sequences. The core structure for the [Fe4S4] cluster is a cube with alternating Fe and S vertices. These clusters exist in two oxidation states with a small structural change. Two families of [Fe4S4] clusters are known: the ferredoxin (Fd) family and the high-potential iron–sulfur protein (HiPIP) family. Both HiPIP and Fd share the same resting state: [Fe4S4]2+, which has the same geometric and spectroscopic features in both families. Differences arise in the active state: HiPIP forms by oxidation to [Fe4S4]3+, and Fd forms by reduction to [Fe4S4]+.
[Fe4S4]3+ (for HiPIP) ⇌(oxidation) [Fe4S4]2+ (resting state) ⇌(reduction) [Fe4S4]+ (for Fd)
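The formal iron oxidation states behind these three cluster charges follow from simple electron bookkeeping; the sketch below assumes each sulfide counts as S2−, so the four Fe atoms must jointly carry the cluster charge plus 8:

```python
def mean_fe_oxidation_state(cluster_charge):
    """Formal average Fe oxidation state in [Fe4S4]^(n+), counting each S as 2-."""
    return (cluster_charge + 4 * 2) / 4

for charge, label in [(3, "HiPIP oxidized"), (2, "resting state"), (1, "Fd reduced")]:
    print(label, mean_fe_oxidation_state(charge))
# HiPIP oxidized 2.75  (3 Fe(III) + 1 Fe(II))
# resting state  2.5   (2 Fe(III) + 2 Fe(II))
# Fd reduced     2.25  (1 Fe(III) + 3 Fe(II))
```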
The different oxidation states are explained by the proteins that bind the [Fe4S4] cluster. Analysis of crystallographic data suggests that HiPIP is capable of preserving its higher oxidation state by forming fewer hydrogen bonds with water. The characteristic fold of the protein wraps the [Fe4S4] cluster in a hydrophobic core, which can form only about five conserved H-bonds to the cluster ligands from the backbone. In contrast, the proteins associated with the Fds allow these clusters to contact solvent, resulting in 8 protein H-bonding interactions. The protein binds Fd via a conserved Cys-X-X-Cys-X-X-Cys motif (X stands for any amino acid). Also, the unique protein structure and dipolar interactions from the peptide and intermolecular water help shield the [Fe4S4]3+ cluster from attack by random outside electron donors, protecting it from hydrolysis.
Synthetic analogues
HiPIP analogues can be synthesized by ligand exchange reactions of [Fe4S4{N(SiMe3)2}4]− with 4 equiv of thiols (HSR) as follows:
[Fe4S4{N(SiMe3)2}4]− + 4RSH → [Fe4S4(SR)4]− + 4 HN(SiMe3)2
The precursor cluster [Fe4S4{N(SiMe3)2}4]− can be synthesized by a one-pot reaction of FeCl3, NaN(SiMe3)2, and NaSH. The synthesis of HiPIP analogues helps clarify the factors responsible for the varied redox behavior of HiPIPs.
Biochemical reactions
HiPIPs take part in many oxidative reactions in living organisms and are especially associated with photosynthetic anaerobic bacteria such as Chromatium and Ectothiorhodospira. HiPIPs are periplasmic proteins in photosynthetic bacteria. They act as electron shuttles in the cyclic electron flow between the photosynthetic reaction center and the cytochrome bc1 complex. Other oxidation reactions involving HiPIPs include catalyzing Fe(II) oxidation, serving as an electron donor to reductases, and acting as an electron acceptor for some thiosulfate-oxidizing enzymes.
References
External links
- High potential iron-sulfur proteins in PROSITE
Further reading
Protein families
Peripheral membrane proteins | High potential iron–sulfur protein | Biology | 919 |
50,761,050 | https://en.wikipedia.org/wiki/Kit%20Parker | Kevin Kit Parker is a lieutenant colonel in the United States Army Reserve and the Tarr Family Professor of Bioengineering and Applied Physics at Harvard University. His research includes cardiac cell biology and tissue engineering, traumatic brain injury, and biological applications of micro- and nanotechnologies. Additional work in his laboratory has included fashion design, marine biology, and the application of counterinsurgency methods to countering transnational organized crime.
Early life and education
Parker attended Boston University's College of Engineering and graduated in 1989. He earned a Master of Science degree in 1993 and a doctoral degree in applied physics in 1998 from Vanderbilt University.
Military career
Parker is a paratrooper who has served in the United States Army since 1992. After the September 11 attacks, he served two tours of duty in Afghanistan.
In addition to his combat tours, Parker conducted two missions into Afghanistan as part of the Gray Team in 2011.
Civilian career
Initially, Parker's research at Harvard focused on heart muscle cells. He turned to traumatic brain injury in 2005 after realizing that an Army friend of his, injured in an IED blast in Iraq that year, was suffering from an undiagnosed medical condition rather than a psychological problem.
Other research of Parker's includes designing camouflage using skin cells of cuttlefish and the use of a cotton candy machine to make dressings for wounds.
Parker served on the Defense Science Research Council for nearly a decade and on the Defense Science Board Task Force on Autonomy, and has consulted for other US government agencies as well as the medical device and pharmaceutical industries.
In 2011, Parker headed Harvard's committee for reintroducing ROTC at the university.
In July 2016, it was announced that The Disease Biophysics Group at Harvard, led by Kit Parker, created a tissue-engineered soft-robotic ray that swims using wave-like fin motions, and turns according to externally applied light cues.
C3 course controversy
In January 2021, students at the Harvard School of Engineering and Applied Sciences created a petition objecting to Parker's course on Counter-Criminal Continuum policing, or C3 policing. Titled "Data Fusion in Complex Systems: A Case Study," the course promised to engage graduate student researchers to analyze the efficacy of C3 techniques in Springfield, Massachusetts.
The petition objected to the lack of research into the potential harms of C3 policing, particularly the ethical implications for marginalized communities. The Dean of the Engineering School soon announced the class was canceled, and committed to reviewing the process of vetting class offerings.
Awards
Bronze Star
Army Commendation Medal with V device
Combat Infantryman Badge
References
External links
Kit Parker at Harvard
Biological engineering
United States Army officers
American physicists
Boston University College of Engineering alumni
Vanderbilt University alumni
Living people
Harvard University faculty
Year of birth missing (living people) | Kit Parker | Engineering,Biology | 567 |
53,341,985 | https://en.wikipedia.org/wiki/Bickley%E2%80%93Naylor%20functions | In physics, engineering, and applied mathematics, the Bickley–Naylor functions are a sequence of special functions arising in formulas for thermal radiation intensities in hot enclosures. The solutions are often quite complicated unless the problem is essentially one-dimensional (such as the radiation field in a thin layer of gas between two parallel rectangular plates). These functions have practical applications in several engineering problems related to transport of thermal or neutron, radiation in systems with special symmetries (e.g. spherical or axial symmetry). W. G. Bickley was a British mathematician born in 1893.
Definition
The nth Bickley−Naylor function is defined by

Ki_n(x) = ∫_0^{π/2} e^(−x / cos θ) cos^(n−1) θ dθ,

and it is classified as one of the generalized exponential integral functions.
All of the functions Ki_n for positive integer n are monotonically decreasing functions of x, because e^(−x) is a decreasing function and 1/cos θ is a positive increasing function for 0 ≤ θ < π/2.
Properties
The integral defining the function generally cannot be evaluated analytically, but it can be approximated to a desired accuracy with Riemann sums or other quadrature methods, taking the limit as a → 0 in the interval of integration [a, π/2].
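As an illustrative sketch of such a quadrature (the function name, step count, and tolerances below are my own, not from the source), the defining integral can be approximated with a midpoint rule, which naturally avoids the θ = π/2 endpoint where 1/cos θ diverges:

```python
import math

def bickley_ki(n, x, steps=100000):
    """Midpoint-rule approximation of
    Ki_n(x) = integral over [0, pi/2] of exp(-x / cos(theta)) * cos(theta)**(n-1).
    """
    h = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps):
        theta = (i + 0.5) * h          # midpoint of the i-th subinterval
        c = math.cos(theta)
        # exp(-x / c) underflows harmlessly to 0.0 as theta -> pi/2,
        # so the open (midpoint) rule handles the endpoint singularity.
        total += math.exp(-x / c) * c ** (n - 1)
    return total * h
```

For example, Ki_1(0) = π/2, and the computed values decrease monotonically in x, matching the monotonicity property noted above.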
Alternative ways to define the function include the following integral forms of the Bickley–Naylor function:

Ki_n(x) = ∫_0^∞ e^(−x cosh t) / cosh^n t dt,
Ki_n(x) = ∫_x^∞ Ki_{n−1}(t) dt,

where K_0 is the modified Bessel function of the zeroth order. Also by definition we have Ki_0(x) = K_0(x).
Series expansions
The series expansions of the first and second order Bickley functions are given by:
where γ is the Euler–Mascheroni constant and H_n is the nth harmonic number.
Recurrence relation
The Bickley functions also satisfy the following recurrence relation:
where Ki_0(x) = K_0(x).
Asymptotic expansions
The asymptotic expansions of Bickley functions are given as
for x → ∞.
Successive differentiation
Differentiating with respect to x gives

d/dx Ki_n(x) = −Ki_{n−1}(x).

Successive differentiation yields

d^m/dx^m Ki_n(x) = (−1)^m Ki_{n−m}(x).
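Taking the derivative relation to be d/dx Ki_n(x) = −Ki_{n−1}(x) (the standard identity for these functions, reconstructed here since the original formula was lost), it can be spot-checked numerically with a central difference. The sketch below is illustrative code, not from the source:

```python
import math

def ki(n, x, steps=100000):
    # Midpoint-rule evaluation of Ki_n(x) over theta in [0, pi/2].
    h = (math.pi / 2) / steps
    return sum(
        math.exp(-x / math.cos((i + 0.5) * h)) * math.cos((i + 0.5) * h) ** (n - 1)
        for i in range(steps)
    ) * h

# Central-difference estimate of d/dx Ki_2 at x = 1; it should equal -Ki_1(1).
eps = 1e-3
numeric_derivative = (ki(2, 1.0 + eps) - ki(2, 1.0 - eps)) / (2 * eps)
```

The central-difference estimate agrees with −Ki_1(1) to well within the quadrature tolerance, consistent with the differentiation rule.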
The values of these functions for different values of the argument x were often listed in tables of special functions in the era when numerical calculation of integrals was slow. A table listing approximate values of the first three functions Ki_n is shown below.
Computer code
Computer code in Fortran is made available by Amos.
See also
Exponential integral
References
Special functions | Bickley–Naylor functions | Mathematics | 426 |
12,506,366 | https://en.wikipedia.org/wiki/Empty%20%28magazine%29 | Empty was a cult Australian creative magazine published in the early 21st century. It was concerned largely with printed design work, photography, illustration and film, created for the professional creative community.
The magazine was published by Sydney-based Design is Kinky studio, curators of the Semi-Permanent design festival, a fixture in design culture's global landscape, which occurs annually in Australia.
The magazine served largely as a gallery of artwork, both domestic and international. It also featured cultural commentary and interviews with artists, animators, other magazines, and so on. Interview subjects included Mark Andrews, head of story on The Incredibles (cover story, issue 2, late 2004) and Dan Houser of Rockstar Games (issue 3, early-2005).
Empty was launched in April 2004 and was published on an irregular schedule, though usually bimonthly. It featured little to no advertising.
The editor was Andrew Johnstone, creator of Empty and the above-mentioned Design is Kinky and Semi-Permanent.
The magazine enjoyed newsstand distribution, but was distributed only within Australia.
Staff
Editor Andrew Johnstone
Art Direction Design is Kinky Studio
References
External links
Design is Kinky website
2004 establishments in Australia
Film magazines published in Australia
Cultural magazines
Design magazines
Magazines established in 2004 | Empty (magazine) | Engineering | 254 |
58,507,434 | https://en.wikipedia.org/wiki/Aspergillus%20cibarius | Aspergillus cibarius is a species of fungus in the genus Aspergillus. It is from the Aspergillus section. The species was first described in 2012. It has been reported to produce asperflavin, auroglaucin, bisanthrons, dihydroauroglaucin, echinulins, emodin, erythroglaucin, flavoglaucin, neoechinulins physcion, tetracyclic, and tetrahydroauroglaucin.
Growth and morphology
A. cibarius has been cultivated on both Czapek yeast extract agar (CYA) plates and yeast extract sucrose agar (YES) plates. The growth morphology of the colonies can be seen in the pictures below.
References
cibarius
Fungi described in 2012
Fungus species | Aspergillus cibarius | Biology | 181 |
29,987,643 | https://en.wikipedia.org/wiki/Omphalina%20pyxidata | Omphalina pyxidata is a species of fungus in the family Tricholomataceae, and the type species of the genus Omphalina. It is found in North America and Europe.
References
External links
Tricholomataceae
Fungus species
Taxa named by Jean Baptiste François Pierre Bulliard | Omphalina pyxidata | Biology | 62 |
580,852 | https://en.wikipedia.org/wiki/Mononuclear%20phagocyte%20system | In immunology, the mononuclear phagocyte system or mononuclear phagocytic system (MPS) also known as the macrophage system is a part of the immune system that consists of the phagocytic cells located in reticular connective tissue. The cells are primarily monocytes and macrophages, and they accumulate in lymph nodes and the spleen. The Kupffer cells of the liver and tissue histiocytes are also part of the MPS. The mononuclear phagocyte system and the monocyte macrophage system refer to two different entities, often mistakenly understood as one.
"Reticuloendothelial system" is an older term for the mononuclear phagocyte system, but it is used less commonly now, as it is understood that most endothelial cells are not macrophages.
The mononuclear phagocyte system is also a somewhat dated concept trying to combine a broad range of cells, and should be used with caution.
Cell types and locations
The spleen is the second largest unit of the mononuclear phagocyte system. The monocyte is formed in the bone marrow and transported by the blood; it migrates into the tissues, where it transforms into a histiocyte or a macrophage.
Macrophages are diffusely scattered in the connective tissue and in liver (Kupffer cells), spleen and lymph nodes (sinus histiocytes), lungs (alveolar macrophages), and central nervous system (microglia). The half-life of blood monocytes is about 1 day, whereas the life span of tissue macrophages is several months or years. The mononuclear phagocyte system is part of both humoral and cell-mediated immunity. The mononuclear phagocyte system has an important role in defense against microorganisms, including mycobacteria, fungi, bacteria, protozoa, and viruses. Macrophages remove senescent erythrocytes, leukocytes, and megakaryocytes by phagocytosis and digestion.
Functions
Formation of new red blood cells (RBCs) and white blood cells (WBCs).
Destruction of senescent RBCs.
Formation of plasma proteins.
Formation of bile pigments.
Storage of iron. In the liver, Kupffer cells store excess iron from catabolism of heme from the breakdown of red blood cells. In bone marrow and spleen, iron is stored in MPS cells mostly as ferritin; in iron overload states, most of the iron is stored as hemosiderin.
Clearance of heparin via heparinases
Hematopoiesis
The various cell types of the mononuclear phagocyte system are all part of the myeloid lineage, derived from the CFU-GEMM (the common precursor of granulocytes, erythrocytes, monocytes, and megakaryocytes).
References
External links
Immune system | Mononuclear phagocyte system | Biology | 635 |
16,781,434 | https://en.wikipedia.org/wiki/Kappa%20Coronae%20Borealis%20b | Kappa Coronae Borealis b is an extrasolar planet approximately 98 light-years away in the constellation of Corona Borealis. This planet was discovered by Johnson et al., who used the radial velocity method to detect wobbling of the star caused by a planet move around by its tug of gravity. It was first discovered in September 2007 and was published in November.
The planet has a minimum mass of 1.8 Jupiter masses, or about 570 Earth masses; only the minimum mass is known, since the orbital inclination is undetermined. It orbits at a distance of 2.7 astronomical units, or about 400 gigameters, and takes 1,208 days, or 3.307 years, to complete one orbit around Kappa Coronae Borealis.
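As a sanity check, the quoted period follows from Kepler's third law given the semi-major axis and a host-star mass of roughly 1.8 solar masses (the stellar mass is an assumption introduced here for illustration; it is not stated in this article). In solar units, P[yr] = sqrt(a[AU]³ / M[M☉]):

```python
import math

# Kepler's third law in solar units: P^2 = a^3 / M,
# with P in years, a in AU, and M in solar masses.
a_au = 2.7        # semi-major axis, from the article
m_star = 1.8      # assumed stellar mass in solar masses (NOT from the article)
period_years = math.sqrt(a_au ** 3 / m_star)
period_days = period_years * 365.25

# period_years comes out near 3.31 and period_days near 1208,
# consistent with the quoted 3.307 years / 1,208 days.
```

The close agreement suggests the article's distance and period figures are mutually consistent for a star of about that mass.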
See also
HD 16175 b
HD 167042 b
Rho Coronae Borealis b
References
Corona Borealis
Exoplanets discovered in 2007
Giant planets
Exoplanets detected by radial velocity
es:Kappa Coronae Borealis#Sistema planetario | Kappa Coronae Borealis b | Astronomy | 194 |
11,066,006 | https://en.wikipedia.org/wiki/A%20Certain%20Magical%20Index | is a Japanese light novel series written by Kazuma Kamachi and illustrated by Kiyotaka Haimura, which has been published by ASCII Media Works under their Dengeki Bunko imprint since April 2004 in a total of three separate series. The first ran from April 2004 to October 2010, the second from March 2011 to July 2019, and the third from February 2020 to present.
The plot is set in a world where humans called espers possess supernatural abilities. The light novels focus on Toma Kamijo, a young high school student in Academy City with the ability to cancel other espers' powers, as he encounters an English nun named Index. His ability, which allows him to negate other powers by touch, and his relationship with Index prove dangerous, drawing the attention of sorcerers and espers who want to discover the secrets behind him, Index, and the city.
A manga adaptation by Chuya Kogino began serialization in Monthly Shōnen Gangan in April 2007. J.C.Staff produced two 24-episode anime series between 2008 and 2011. An animated film was released in February 2013. A 26-episode third season aired between 2018 and 2019. Several spin-offs and other adaptations have also been made, including several video games.
After being rejected for the Dengeki Novel Prize, Kamachi was contacted by Kazuma Miki, an editor at ASCII Media Works who had him write several test novels. He got the chance to write a full series and decided to create it with the concept of exploring the rules of magic, rather than just having it exist, and the inclusion of science to oppose its elements. The resulting series has seen success with both critics and audiences, with critics praising the action and characters.
Synopsis
Setting
A Certain Magical Index is set in a world where supernatural abilities are a reality. Individuals who possess special powers acquired via science are called Espers. Those Espers who gain their abilities without the aid of special scientific instruments, whether at birth or otherwise, are referred to as Gemstones. Others, known as Sorcerers, gain their powers upon mastering the power of magic, either by obtaining knowledge from different mythologies or by using mystical artifacts, although the existence of sorcerers is a secret to the public. While Sorcerers align themselves with different beliefs, Espers are aligned with scientific institutions. This leads to a power struggle between the magic and science factions for control of the world.
Plot
The story is set in Academy City, a technologically advanced independent city-state located in the west of Tokyo, Japan, which is known for its educational and research institutions. Toma Kamijo is a student in Academy City whose right hand bears a power called the Imagine Breaker, which can negate any supernatural power but also negates his own luck, much to his chagrin. One day, Toma meets a young English girl named Index, a nun from Necessarius, a secret magic branch of the Church of England, whose mind has been implanted with the Index Librorum Prohibitorum – 103,000 forbidden magical books the Church stores in secret locations. His encounter with her leads him to meet others from the secret worlds of science and magic. Toma's unusual power places him at the center of conflicts between the Sorcerers and Espers who plan to unravel the secrets behind Academy City, Index, and Toma's special power.
Besides its manga adaptation, the series also has four spin-offs focusing on other characters. One of them is A Certain Scientific Railgun, which focuses on Mikoto Misaka, an Electromaster and the third most powerful Esper in Academy City. The second, A Certain Scientific Accelerator, focuses on Accelerator, a teenager capable of controlling vectors and the most powerful Esper in Academy City. The third, A Certain Scientific Dark Matter, deals with the second most powerful Esper in Academy City named Teitoku Kakine and his past. The fourth, A Certain Scientific Mental Out, follows the fifth Level 5 Esper and most powerful psychological psychic named Misaki Shokuhō in her election campaign for the next president of Tokiwadai Middle School's student body.
Development
Kazuma Kamachi began work on A Certain Magical Index when he received a call from Kazuma Miki, a light novel editor, after his submission was rejected at the 9th Dengeki Game Novel Prize (now the Dengeki Novel Prize) in 2002. Miki had him write multiple test novels, including a story about a young nun and a boy with a mysterious arm, which became the basis for the light novel.
In his essay titled "If It's Interesting Anything Goes: 600 Million Copies Printed—The Day in the Life of a Certain Editor", Miki stated that the light novel was originally called but he chose the current title because the word "Index" left a "strong personality in the sense and direction" upon his reading of the manuscript. Kamachi revealed that Index's name was derived from the kanji he found from looking in the encyclopedia since he wanted a name that would "stand out for the character the story was centered around". He avoided using difficult kanji for Toma Kamijo's name to help readers able to read it but he kept the meaning behind the character's epithet ("The One Who Purifies God and Slays Demons") a secret for now.
When it came the time to write the story, Kamachi investigated the terms "sorcerer" and "real" online for him to build the world of magic and the magicians' existence in his light novel. He also realized that most other Dengeki Bunko novels had magic as a main theme; however, none of them actually invested much into its inner-workings. As such, he decided to use this as the premise for the series. In setting up the world of science, Kamachi "needed some kind of power to oppose" the magic side. This helped him to create the idea of Toma Kamijo's known power, the Imagine Breaker, and set the stage for the world-building of Academy City "where science, espers, and magic would gather".
Upon the release of the first volume of the light novel, Miki stated that "(to be blunt) it sold like crazy. Shortly after the official release date, we had to do a reprint half the size of the original printing. It was on a Monday. I still remember it now. It was quite an achievement for an unknown newcomer". After the success of the first volume, the second volume was completed in 17 days. Subsequent volumes were also completed quickly, with the story for the ninth volume completed before the release of the fifth volume.
Media
Light novels
A Certain Magical Index is a light novel series written by Kamachi and illustrated by Kiyotaka Haimura. ASCII Media Works published 25 volumes between April 10, 2004, and August 10, 2011, under their Dengeki Bunko imprint; 22 comprise the main story while the other three are short story collections. On May 4, 2021, all volumes were free to read for one day in the Dengeki Novekomi app developed by Kadokawa Corporation. Yen Press licensed the series in North America in April 2014, which began releasing under their Yen On imprint in November, and its omnibus edition in November 2022.
A sequel series, titled New Testament: A Certain Magical Index, began publication on March 10, 2011, and concluded on July 10, 2019, with the publication of its 23rd volume. On May 4, 2021, these volumes also began to be published in the Dengeki Novekomi app. A third light novel series, titled Genesis Testament: A Certain Magical Index, began publication on February 7, 2020. Yen Press has acquired New Testament for English publication.
Kamachi wrote short novels exclusively for Haimura's three artbook collections. The first short novel, titled A Certain Magical Index SS: Love Letter Competition, was included in the release of Kiyotaka Haimura Artbook rainbow spectrum:colors on February 28, 2011. The second short novel, titled New Testament: A Certain Magical Index SS, was included in the release of Kiyotaka Haimura Artbook 2 rainbow spectrum:notes on September 9, 2014. The third short novel, titled Genesis Testament: A Certain Magical Index SS, was included in the release of Kiyotaka Haimura Artbook 3 CROSS on May 9, 2020.
The bonus novels that Kamachi wrote for the series' Blu-ray/DVD releases are compiled into two volumes titled to commemorate the 15th anniversary of his debut, which were published on June 10 and August 7, 2020.
A spin-off novel series written by Kamachi and illustrated by Nilitsu focusing on the women belonging to Academy City's dark side, titled A Certain ITEM of Dark Side, was announced in January 2023. As of August 10, 2023, a total of two volumes have been released in Japan.
Manga
The series has been adapted into two manga series. The one based on the novels is illustrated by Chuya Kogino and started serialization in the May 2007 issue of Square Enix's Monthly Shōnen Gangan. The first tankōbon volume was released in Japan on November 10, 2007. As of November 12, 2024, 31 volumes have been published. Yen Press has licensed the series in North America and has been publishing the manga since May 19, 2015. The manga is also licensed in Italy by Star Comics.
The other manga adaptation based on A Certain Magical Index: The Movie – The Miracle of Endymion is illustrated by Ryōsuke Asakura and was serialized from February 12 to October 12, 2013. Square Enix published the first volume on August 27 and the second volume on October 22.
Spin-offs
A side-story manga series illustrated by Motoi Fuyukawa, titled A Certain Scientific Railgun, started serialization in the April 2007 issue of Dengeki Daioh. As of March 2023, 18 volumes have been released. Its spin-off, titled A Certain Scientific Railgun: Astral Buddy, was serialized in Dengeki Daioh from April 27, 2017, to July 27, 2020. It was published in four volumes. A second side-story manga series illustrated by Arata Yamaji, titled A Certain Scientific Accelerator, was serialized in Dengeki Daioh from December 27, 2013, to July 27, 2020. It was published in twelve volumes. A third side-story manga series illustrated by Nankyoku Kisaragi, titled A Certain Scientific Dark Matter, was serialized in ASCII Media Works' magazine Dengeki Daioh from August 27, 2019, to March 26, 2020 (the same day the compiled volume was published). A fourth side-story manga series illustrated by Yasuhito Nogi, titled A Certain Scientific Mental Out, began serialization on the Comic Newtype website on July 27, 2021. As of January 10, 2024, three volumes have been published.
A yonkoma manga spin-off illustrated by Mijin Kouka, titled , was serialized in Square Enix's Monthly Shōnen Gangan magazine from September 12, 2013, to May 12, 2016. A total of five tankōbon volumes had been published in Japan from February 22, 2014, to May 21, 2016. In March 2023, Kamachi revealed the manga adaptation of A Certain ITEM of Dark Side by Strelka which is serialized in Dengeki Daioh magazine starting October 26 of that year.
Anime
A 24-episode anime adaptation of A Certain Magical Index was produced by J.C.Staff and directed by Hiroshi Nishikiori, which was aired in Japan from October 4, 2008, to March 19, 2009. The anime was collected into eight Blu-ray and DVD sets, which were released from January 23 to August 21, 2009.
A second season titled A Certain Magical Index II began airing in Japan from October 8, 2010, to April 1, 2011 and was also streamed on Nico Nico Douga. The second season's eight volumes of Blu-ray/DVD sets were released from January 26 to September 22, 2011.
An animated film titled A Certain Magical Index: The Movie – The Miracle of Endymion was released in Japan on February 23, 2013. It is based on an original story written by Kamachi and features the main characters from both the Index and Railgun anime series, along with new ones designed by Haimura. The Blu-ray/DVD set of the film, released on August 28, includes a bonus anime titled A Certain Magical Index-tan: The Movie – The Miracle of Endymion... Happened, or Maybe Not.
The 26-episode third season titled A Certain Magical Index III aired in Japan from October 5, 2018, to April 5, 2019. The third season's eight volumes of Blu-ray/DVD sets were released from December 26, 2018 to July 31, 2019, with episodes 6 and 7 of the bonus short anime parody titled A Certain Magical Index-tan, which depicted Index in her chibi form, included in the first and fifth limited edition releases. The third season was originally planned to be a reboot but it was later decided to be a sequel instead. The three seasons were released on Hulu in Japan on March 24, 2022.
In North America, Funimation (which now goes by the name of Crunchyroll LLC) has licensed the series for home video and streaming. An English language dub began streaming on their website in September 2012 and was released on DVD on December 11. The first season aired in North America on the Funimation Channel on January 21, 2013. It can also been seen on the Crunchyroll streaming service after the Funimation brand was unified with the former in 2022.
The series was released in Australia by a partnership between Universal Pictures Home Entertainment and Sony Pictures. Funimation has also licensed the film in North America and released it at theaters in the United States on January 12, 2015. Animatsu Entertainment released the series in the United Kingdom. In Southeast Asia, Muse Communication licensed the series and broadcast it through i-Fun Anime Channel and their YouTube channel.
Music
Maiko Iuchi of I've Sound is in charge of music for the series' three seasons. The first opening theme music, used in episodes 1–16 of the first season of A Certain Magical Index, is "PSI-Missing", while the second one, used in episode 17 onwards, is "Masterpiece", both performed by Mami Kawada. The first ending theme music, used in episodes 1–19, is , while the second one, used in episode 20 onwards, is , both performed by Iku.
The first opening theme music of A Certain Magical Index II is "No Buts!", while the second one, introduced in episode 17, is "See Visions", both performed by Kawada. The first ending theme music, used in episodes 1–13, is "Magic∞World", while the second one, used in episode 14 onwards, is , both performed by Kurosaki.
The ending theme music of A Certain Magical Index: The Movie – The Miracle of Endymion is "Fixed Star" by Kawada and the single was released on February 20, 2013. The first opening theme music of A Certain Magical Index III is "Gravitation", while the second one is "Roar", both performed by Kurosaki. The first ending theme music is , while the second one is , both performed by Yuka Iguchi.
Video games
A 3D fighting game titled A Certain Magical Index was developed by Shade for the PlayStation Portable (PSP) and published by ASCII Media Works on January 27, 2011. Heroz developed a social card game for Mobage titled A Certain Magical Index: Struggle Battle, which was published by ASCII Media Works on December 25, 2012. The game was updated to A Certain Magical Index: Struggle Battle II, but was later announced at the end of its service on March 30, 2018.
Guyzware and Namco Bandai Games announced on June 8, 2012, a collaboration project for a game adaptation of the series, which was revealed to be a crossover visual novel game for PSP between A Certain Magical Index and A Certain Scientific Railgun franchises titled A Certain Magical and Scientific Ensemble. The game was released on February 21, 2013. Heroz also developed an action puzzle game titled A Certain Magical and Scientific Puzzdex, which was published by ASCII Media Works in 2014. NetEase Games developed a massively multiplayer online role-playing game based on the series with supervision from Kadokawa Corporation, titled A Certain Magical Index: Genuine Mobile Game, which was released to the Chinese market in 2017.
On January 4, 2019, Square Enix released a teaser trailer that announced their game titled A Certain Magical Index: Imaginary Fest, which was released on July 4. Fuji Shoji, a Japanese company known for their pachinko and pachislot products, released a teaser trailer on August 24, 2020, for the series' pachinko, which was later launched in November. Sun Electronics adapted it into a mobile game and launched it on December 17, 2020.
Other media
A radio drama was broadcast in Dengeki Taishō, narrating the story of an encounter with the mysterious self-proclaimed "former" sorcerer by Toma Kamijo and Index in a family restaurant after Mikoto Misaka decided to go back due to urgent business. It was later released as a drama CD, which was included in the mail-in order of the 48th volume of Dengeki hp. The drama CD, which contains a new story about Misaka and Kuroko Shirai with their "urgent business" and a duel request by a Level 3 psychic girl from Tokiwadai Middle School, became available for purchase in December 2007.
Geneon Universal Entertainment (now NBCUniversal Entertainment Japan) released four audio dramas for the first season of Index under the title A Certain Magical Index Archives from March 25 to August 21, 2009. The same company released another four audio dramas for the second season from May 25 to August 24, 2011. A drama CD for A Certain Magical Index III was released as a bonus for customers who purchased the limited edition volume sets of the series via Amazon's Japanese website.
Several characters from A Certain Magical Index cross over with other characters created by Kamachi in his light novel The Circumstance Leading to a Simple Killer Princess' Marriage Was a Certain Magical Heavy Zashiki Warashi, which was published on February 10, 2015. As a collaboration with Sega's Virtual On video game franchise, Kamachi wrote a crossover light novel titled A Certain Magical Virtual-On, with mecha illustrations designed by Hajime Katoki, which was released on May 10, 2016. One of Kamachi's light novel works, , crosses over with A Certain Magical Index under the title , which was released in May 2020.
A manga adaptation of The Circumstance Leading to a Simple Killer Princess' Marriage Was a Certain Magical Heavy Zashiki Warashi was serialized on Monthly Shōnen Gangan from February 12 to October 10, 2015. The first tankōbon volume was published on November 21, 2015, and the final volume on December 22. A manga adaptation of A Certain Magical Virtual-On began publication in ASCII Media Works' Monthly Denplay Comic magazine from March 10, 2018, to June 26, 2019, with a total of three tankōbon volumes.
The series is featured in Dengeki Gakuen RPG: Cross of Venus for the Nintendo DS, with Index appearing as a supporting character. The series was also adapted into Bushiroad's Weiß Schwarz collectible card game, which was released on April 24, 2010. Index also makes a cameo appearance in the Oreimo PSP game. Sega's Dengeki Bunko: Fighting Climax brings Mikoto Misaka as a playable character, while Toma Kamijo and Accelerator are assist characters. Sega and Dengeki Bunko later collaborated to develop A Certain Magical Virtual-On for the PlayStation 4 and PlayStation Vita, which was released on February 15, 2018. Toma, Mikoto, Accelerator, Kuroko Shirai, and Kazari Uiharu appear as playable characters in Dengeki Bunko: Crossing Void, a 2018 mobile game developed by 91Act and Sega.
Reception
Awards
The light novel series has consistently ranked in the top ten light novels in Takarajimasha's guidebook Kono Light Novel ga Sugoi!. Notably, the series ranked first in 2011, while also ranking in the top three in 2012, 2013, 2014, and 2017. In 2020, the series was inducted into the hall of fame, thus barred from ranking in future years. Kamachi, Haimura, and several of the series' characters have also ranked in the guidebook, notably with Mikoto Misaka winning the award for best female character nine times in ten years.
In Kadokawa Shoten's Light Novel Award contest held in 2007, A Certain Magical Index was a runner-up in the action category. The series also ranked in Kadokawa Light Novel Expo 2020's top light novels in the infinite passion category.
Sales
In May 2010, it was reported that A Certain Magical Index became Dengeki Bunko's number one bestseller and it became the first Dengeki Bunko series to sell over 10 million copies. Later that year, it became the fifth best-selling light novel in Japan, beating other popular series such as Full Metal Panic! and Haruhi Suzumiya. It was reported in October 2014 that the entire franchise, including the light novels and manga, had sold over 28 million copies. It was reported in August 2017 that the light novels have sold over 16.35 million copies. In July 2018, the series was reported to have sold over 30 million copies. It was reported with the release of Sorcery Hacker >> Divulge the Magic Vulnerability that the physical sales of the series had reached 18 million copies. As of May 2021, it was reported that the light novel, manga, and spin-off series reached 31 million copies.
Critical reception
Matthew Warner from The Fandom Post rated the first volume of the light novel an 'A', calling it a "fantastic start". Sean Gaffney from A Case Suitable for Treatment also praised it, calling it a "solid beginning", while noting that it can be a bit slow at times. Theron Martin from Anime News Network also praised the concept, as well as the balanced handling of Toma Kamijo, while criticizing the illustrations.
Richard Gutierrez from The Fandom Post praised the premise of the manga, but criticized the execution due to the lack of background it provides. Leroy Douresseaux from Comic Book Bin praised the volume he reviewed, stating the art by Chuya Kogino fits the series perfectly. However, Erkael from Manga News was more critical, specifically for the artwork, but he did praise the story and concept.
Chris Beveridge from The Fandom Post praised the anime adaptation, calling it a "fun series" and "pretty engaging". Ian Wolf from Anime UK News also praised the series, specifically for the action, while calling the music "[just] okay". Like Beveridge and Wolf, Carl Kimlinger from Anime News Network praised the series' characters and action, while criticizing it for being a bit generic at times. Like Kimlinger, Theron Martin from the same website also praised the action and characters, while criticizing the series for feeling preachy at times. André Van Renssen from Active Anime called the series "a decent show", comparing it to Shakugan no Shana for its action, though they also criticized the series for being too violent at times.
Notes
References
External links
2004 Japanese novels
2007 manga
2008 anime television series debuts
2011 Japanese novels
2016 Japanese novels
Action anime and manga
ASCII Media Works manga
Book series introduced in 2004
Dengeki Bunko
Dengeki Daioh
AT-X (TV network) original programming
Fiction books about psychic powers
Funimation
Gangan Comics manga
Fiction about genetic engineering
J.C.Staff
Kadokawa Dwango franchises
Light novels
Muse Communication
NBCUniversal Entertainment Japan
Anime and manga set in schools
Science fantasy anime and manga
Shōnen manga
Square Enix franchises
Television shows based on light novels
Warner Entertainment Japan franchises
Works published under a pseudonym
Yen Press titles
In quantum mechanics, a quantum speed limit (QSL) is a limitation on the minimum time for a quantum system to evolve between two distinguishable (orthogonal) states. QSL theorems are closely related to time-energy uncertainty relations. In 1945, Leonid Mandelstam and Igor Tamm derived a time-energy uncertainty relation that bounds the speed of evolution in terms of the energy dispersion. Over half a century later, Norman Margolus and Lev Levitin showed that the speed of evolution cannot exceed the mean energy, a result known as the Margolus–Levitin theorem. Realistic physical systems in contact with an environment are known as open quantum systems and their evolution is also subject to QSL. Quite remarkably, it was shown that environmental effects, such as non-Markovian dynamics, can speed up quantum processes, which was verified in a cavity QED experiment.
QSLs have been used to explore the limits of computation and complexity. In 2017, QSLs were studied in a quantum oscillator at high temperature. In 2018, it was shown that QSLs are not restricted to the quantum domain and that similar bounds hold in classical systems. In 2021, both the Mandelstam–Tamm and the Margolus–Levitin QSL bounds were concurrently tested in a single experiment, which indicated there are "two different regimes: one where the Mandelstam-Tamm limit constrains the evolution at all times, and a second where a crossover to the Margolus-Levitin limit occurs at longer times."
In quantum sensing, QSLs impose fundamental constraints on the maximum achievable time resolution of quantum sensors. These limits stem from the requirement that quantum states must evolve to orthogonal states to extract precise information. For example, in applications like Ramsey interferometry, the QSL determines the minimum time required for phase accumulation during control sequences, directly impacting the sensor's temporal resolution and sensitivity.
Preliminary definitions
The speed limit theorems can be stated for pure states, and for mixed states; they take a simpler form for pure states. An arbitrary pure state can be written as a linear combination of energy eigenstates:

$|\psi\rangle = \sum_n c_n |E_n\rangle .$
The task is to provide a lower bound for the time interval $\delta t$ required for the initial state to evolve into a state orthogonal to it. The time evolution of a pure state is given by the Schrödinger equation, whose solution is

$|\psi(t)\rangle = \sum_n c_n \, e^{-i E_n t/\hbar} \, |E_n\rangle .$
Orthogonality is obtained when

$\langle \psi(0) | \psi(t) \rangle = 0,$

and the minimum time interval $\delta t$ required to achieve this condition is called the orthogonalization interval or orthogonalization time.
Mandelstam–Tamm limit
For pure states, the Mandelstam–Tamm theorem states that the minimum time $\delta t$ required for a state to evolve into an orthogonal state is bounded below:

$\delta t \ge \frac{\pi \hbar}{2 \sigma_H},$

where

$\sigma_H^2 = \langle \psi | \hat{H}^2 | \psi \rangle - \left( \langle \psi | \hat{H} | \psi \rangle \right)^2$

is the variance of the system's energy and $\hat{H}$ is the Hamiltonian operator. The quantum evolution is independent of the particular Hamiltonian used to transport the quantum system along a given curve in the projective Hilbert space; the distance along this curve is measured by the Fubini–Study metric. This is sometimes called the quantum angle, as it can be understood as the arccos of the inner product of the initial and final states.
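As a numerical sanity check (a sketch, not from the source: ħ is set to 1 and the two eigenenergies are arbitrary illustrative values), the Mandelstam–Tamm bound can be compared against a direct simulation of an equal superposition of two energy eigenstates — the kind of state that saturates it:

```python
import numpy as np

hbar = 1.0                               # natural units (assumption)
E = np.array([0.0, 2.0])                 # two eigenenergies, ground state at 0
p = np.array([0.5, 0.5])                 # equal superposition: |c_n|^2

mean_E  = p @ E
sigma_H = np.sqrt(p @ E**2 - mean_E**2)  # energy uncertainty of the state

mt_bound = np.pi * hbar / (2 * sigma_H)  # Mandelstam–Tamm lower bound

# overlap |<psi(0)|psi(t)>| = |sum_n |c_n|^2 exp(-i E_n t / hbar)|
ts = np.linspace(0.0, 3.0, 30001)
overlap = np.abs(np.exp(-1j * np.outer(ts, E) / hbar) @ p)
t_orth = ts[np.argmax(overlap < 1e-4)]   # first time the overlap vanishes

print(mt_bound, t_orth)                  # both ≈ 1.571 — the bound is saturated
```

For this state the simulated orthogonalization time coincides with πħ/(2σ_H), illustrating that the Mandelstam–Tamm inequality is tight for an equal two-level superposition.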
For mixed states
The Mandelstam–Tamm limit can also be stated for mixed states and for time-varying Hamiltonians. In this case, the Bures metric must be employed in place of the Fubini–Study metric. A mixed state can be understood as a sum over pure states, weighted by classical probabilities; likewise, the Bures metric is a weighted sum of the Fubini–Study metric. For a time-varying Hamiltonian $\hat{H}(t)$ and time-varying density matrix $\rho(t),$ the variance of the energy is given by

$\sigma^2(t) = \operatorname{tr}\!\left( \rho(t) \hat{H}^2(t) \right) - \left[ \operatorname{tr}\!\left( \rho(t) \hat{H}(t) \right) \right]^2 .$

The Mandelstam–Tamm limit then takes the form

$\int_0^{\tau} \frac{\sigma(t)}{\hbar} \, dt \;\ge\; D\!\left( \rho(0), \rho(\tau) \right),$

where $D(\rho(0), \rho(\tau))$ is the Bures distance between the starting and ending states. The Bures distance is geodesic, giving the shortest possible distance of any continuous curve connecting two points, with $\sigma(t)\,dt/\hbar$ understood as an infinitesimal path length along a curve parametrized by $t.$ Equivalently, the time $\tau$ taken to evolve from $\rho(0)$ to $\rho(\tau)$ is bounded as

$\tau \ge \frac{\hbar \, D(\rho(0), \rho(\tau))}{\overline{\sigma}},$

where

$\overline{\sigma} = \frac{1}{\tau} \int_0^{\tau} \sigma(t) \, dt$

is the time-averaged uncertainty in energy. For a pure state evolving under a time-varying Hamiltonian, the time taken to evolve from one pure state to another pure state orthogonal to it is bounded as

$\tau \ge \frac{\pi \hbar}{2 \overline{\sigma}} .$

This follows, as for a pure state, one has the density matrix $\rho(t) = |\psi(t)\rangle \langle \psi(t)| .$ The quantum angle (Fubini–Study distance) is then $D = \arccos \left| \langle \psi(0) | \psi(\tau) \rangle \right|,$ and so $D = \pi/2$ when the initial and final states are orthogonal.
Margolus–Levitin limit
For the case of a pure state, Margolus and Levitin obtain a different limit, that

$\delta t \ge \frac{\pi \hbar}{2 \langle E \rangle},$

where $\langle E \rangle$ is the average energy,

$\langle E \rangle = \langle \psi | \hat{H} | \psi \rangle = \sum_n |c_n|^2 E_n .$

This form applies when the Hamiltonian is not time-dependent, and the ground-state energy is defined to be zero.
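Which of the two pure-state bounds dominates depends on the state. The sketch below (illustrative probabilities and energies, ħ = 1; not from the source) shows a low-mean-energy state for which the Margolus–Levitin bound is the tighter constraint, even though the energy spread is large:

```python
import numpy as np

hbar = 1.0                                # natural units (assumption)
p = np.array([0.9, 0.1])                  # occupation probabilities |c_n|^2
E = np.array([0.0, 10.0])                 # eigenenergies, ground state at 0

mean_E  = p @ E                           # 1.0
sigma_H = np.sqrt(p @ E**2 - mean_E**2)   # 3.0

mt = np.pi * hbar / (2 * sigma_H)         # Mandelstam–Tamm: ≈ 0.524
ml = np.pi * hbar / (2 * mean_E)          # Margolus–Levitin: ≈ 1.571

# Any evolution to an orthogonal state needs t >= max(mt, ml);
# here the Margolus–Levitin bound is the binding one.
print(mt, ml)
```

This mirrors the experimentally observed picture of two regimes, with one bound or the other constraining the evolution depending on the state and timescale.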
For time-varying states
The Margolus–Levitin theorem can also be generalized to the case where the Hamiltonian varies with time, and the system is described by a mixed state. In this form, it is given by

$\tau \ge \frac{\pi \hbar}{2 \overline{E}}, \qquad \overline{E} = \frac{1}{\tau} \int_0^{\tau} \operatorname{tr}\!\left( \rho(t) \hat{H}(t) \right) dt,$

with the ground state defined so that it has energy zero at all times.
This provides a result for time-varying states. Although it also provides a bound for mixed states, that bound can be so loose as to be uninformative. The Margolus–Levitin theorem has not yet been experimentally established in time-dependent quantum systems, whose Hamiltonians are driven by arbitrary time-dependent parameters, except for the adiabatic case.
Dual Margolus–Levitin limit
In addition to the original Margolus–Levitin limit, a dual bound exists for quantum systems with a bounded energy spectrum. This dual bound, also known as the Ness–Alberti–Sagi limit or the Ness limit, depends on the difference between the state's mean energy and the energy of the highest occupied eigenstate. In bounded systems, the minimum time required for a state to evolve to an orthogonal state is bounded by

$\delta t \ge \frac{\pi \hbar}{2 \left( E_{\max} - \langle E \rangle \right)},$

where $E_{\max}$ is the energy of the highest occupied eigenstate and $\langle E \rangle$ is the mean energy of the state. This bound complements the original Margolus–Levitin limit and the Mandelstam–Tamm limit, forming a trio of constraints on quantum evolution speed.
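For a state with a bounded discrete spectrum, all three pure-state bounds can be evaluated side by side. A sketch with arbitrary illustrative values (ħ = 1; not from the source):

```python
import numpy as np

hbar = 1.0                                    # natural units (assumption)
p = np.array([0.2, 0.5, 0.3])                 # occupation probabilities
E = np.array([0.0, 4.0, 6.0])                 # eigenenergies, ground state at 0

mean_E  = p @ E                               # 3.8
sigma_H = np.sqrt(p @ E**2 - mean_E**2)       # ≈ 2.088
E_max   = E[-1]                               # highest occupied level

mt   = np.pi * hbar / (2 * sigma_H)           # Mandelstam–Tamm:      ≈ 0.752
ml   = np.pi * hbar / (2 * mean_E)            # Margolus–Levitin:     ≈ 0.413
ness = np.pi * hbar / (2 * (E_max - mean_E))  # dual (Ness) bound:    ≈ 0.714

# The orthogonalization time must exceed the largest of the three.
print(mt, ml, ness)
```

Note that for this state the dual bound is stronger than the original Margolus–Levitin bound, showing that the three constraints are genuinely complementary.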
Levitin–Toffoli limit
A 2009 result by Lev B. Levitin and Tommaso Toffoli states that the precise bound for the Mandelstam–Tamm theorem is attained only for a qubit state. This is a two-level state in an equal superposition

$|\psi\rangle = \frac{1}{\sqrt{2}} \left( |E_0\rangle + |E_1\rangle \right)$

for energy eigenstates $|E_0\rangle$ and $|E_1\rangle$. The states $|E_0\rangle$ and $|E_1\rangle$ are unique up to degeneracy of the energy level and an arbitrary phase factor. This result is sharp, in that this state also attains the Margolus–Levitin bound: with the ground-state energy set to zero, $\langle E \rangle = \sigma_H,$ and so the two bounds coincide. This result establishes that the combined limit

$\delta t \ge \max\!\left( \frac{\pi \hbar}{2 \sigma_H},\; \frac{\pi \hbar}{2 \langle E \rangle} \right)$

is strict.
Levitin and Toffoli also provide a bound on the average energy in terms of the maximum energy eigenvalue appearing in the state's expansion. (This is the quarter-pinched sphere theorem in disguise, transported to complex projective space.) This yields a corresponding bound on the orthogonalization time in terms of the maximum energy alone. The strict lower bound is again attained for the equal-superposition qubit state.
Bremermann's limit
The quantum speed limit bounds establish an upper bound on the rate at which computation can be performed. Computational machinery is constructed out of physical matter that follows quantum mechanics, and each operation, if it is to be unambiguous, must be a transition of the system from one state to an orthogonal state. Suppose the computing machinery is a physical system evolving under a Hamiltonian that does not change with time. Then, according to the Margolus–Levitin theorem, the number of operations per unit time per unit energy is bounded above by

$\frac{2}{\pi \hbar} \approx 6 \times 10^{33}\ \text{operations per second per joule} .$

This establishes a strict upper limit on the number of calculations that can be performed by physical matter. The processing rate of all forms of computation cannot be higher than about 6 × 10³³ operations per second per joule of energy. This includes "classical" computers, since even classical computers are still made of matter that follows quantum mechanics.
This bound is not merely a fanciful limit: it has practical ramifications for quantum-resistant cryptography. For a computer operating at this limit, a brute-force search to break a 128-bit encryption key requires only modest resources. Brute-forcing a 256-bit key requires planetary-scale computers, while a brute-force search of 512-bit keys is effectively unattainable within the lifetime of the universe, even if galactic-sized computers were applied to the problem.
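The scale of these claims can be checked with back-of-the-envelope arithmetic. The sketch below assumes an idealized computer running at the Margolus–Levitin rate, powered by converting a given mass entirely to energy (`brute_force_seconds` is a hypothetical helper for illustration, not an established formula):

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant, J*s
c    = 2.99792458e8         # speed of light, m/s

rate_per_joule = 2 / (math.pi * hbar)   # ops per second per joule, ≈ 6e33

def brute_force_seconds(key_bits, mass_kg):
    """Time to enumerate 2**key_bits keys with mass_kg of mass-energy
    driving an ideal computer at the Margolus–Levitin rate."""
    energy = mass_kg * c**2
    return 2**key_bits / (rate_per_joule * energy)

print(brute_force_seconds(128, 1.0))      # ≈ 6e-13 s with just 1 kg
print(brute_force_seconds(256, 5.97e24))  # ≈ 36 s even using an Earth mass
print(brute_force_seconds(512, 5.97e24))  # ≈ 4e78 s — far beyond the age of the universe
```

The jump from trivial (128-bit) to planetary (256-bit) to hopeless (512-bit) follows directly from the exponential growth of the keyspace against a linearly energy-limited operation rate.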
The Bekenstein bound limits the amount of information that can be stored within a volume of space. The maximal rate of change of information within that volume of space is given by the quantum speed limit. This product of limits is sometimes called the Bremermann–Bekenstein limit; it is saturated by Hawking radiation. That is, Hawking radiation is emitted at the maximal allowed rate set by these bounds.
References
Further reading
Quantum mechanics
Mathematical physics
Harrison Mixbus is a digital audio workstation (DAW) released in 2009, compatible with Microsoft Windows, Mac OS X, and Linux. It is built on the open-source DAW "Ardour" but includes additional features developed by Harrison Audio Consoles, such as analog-modeled EQ, compression, summing on channel strips, and a master bus with limiter and loudness monitoring tools.
Features of Mixbus
Mixbus has the features of Ardour, with additional functionality from proprietary DSP, replicating the workflow, signal path, and sound of a Harrison console.
Each channel strip in Mixbus features an analog-modeled 3-band EQ (including a high-pass filter), compression (with 3 compressor types), panning, and summing.
It includes 8 stereo mixbuses featuring tone controls, tape saturation, and compression (including a sidechain compressor).
The master bus is similar to the mix buses but adds a limiter, a K-14 meter for loudness monitoring, and a stereo correlation meter.
Mixbus started as an audio-only workstation. In earlier versions, it also depended on the JACK audio server as its backend. Since version 3, Mixbus supports both audio and MIDI tracks, and it no longer depends on JACK, although JACK can still be used as one of its audio backends.
See also
List of MIDI editors and sequencers
References
Audio editing software for Linux
Linux
Digital audio editors for Linux
Digital audio recording
Digital audio workstation software
Linux software
MacOS audio editors
The Society of Chemical Industry (SCI) is a learned society set up in 1881 "to further the application of chemistry and related sciences for the public benefit".
Offices
The society's headquarters is in Belgrave Square, London. There are semi-independent branches in the United States, Canada and Australia.
Aims
The society aims to accelerate the rate of scientific innovations being commercialised by industry to benefit society. It does this through promoting collaborations between scientists and industrialists, running technical and innovation conferences, building communities across academia and industry and publishing scientific content through its journals and digital platforms.
It also promotes science education.
History
On 21 November 1879, Lancashire chemist John Hargreaves canvassed a meeting of chemists and managers in Widnes, St Helens and Runcorn to consider the formation of a chemical society. Modelled on the successful Tyne Chemical Society already operating in Newcastle, the newly proposed South Lancashire Chemical Society held its first meeting on 29 January 1880 in Liverpool, with the eminent industrial chemist and soda manufacturer Ludwig Mond presiding.
It was quickly decided that the society should not be limited to just the local region and the title 'the Society of Chemical Industry’ was finally settled upon at a meeting in London on 4 April 1881, as being 'more inclusive'. Held at the offices of the Chemical Society, now the headquarters of the Royal Society of Chemistry, in Burlington House, this meeting was presided over by Henry Roscoe, appointed first president of SCI, and attended by Eustace Carey, Ludwig Mond, FA Abel, Lowthian Bell, William H Perkin, Walter Weldon, Edward Rider Cook, Thomas Tyrer and George E Davis; all prominent scientists, industrialists and MPs of the time.
The society grew rapidly, launching international and regional sections. In 1881 Ivan Levinstein was a founder of the Manchester Section of the Society of Chemical Industry, later following Sir Henry Roscoe as chair of the Section. Levinstein also served as president of the Society of Chemical Industry between 1901 and 1903.
Prominent early members included William Lever, George Matthey, Ludwig Mond, Henry Armstrong, Leo Baekeland, Rudolph Messel, Charles Tennant, Richard Seligman, Ferdinand Hurter and Marie Stopes.
Membership
The original subscription fee was steep for the time: one guinea, which would be equivalent to nearly £400 today. Four grades of membership were agreed at the time: member, associate, student and honorary, with most appointments made on the basis of a review of their 'eligibility' by the SCI council. Despite the high fee, by the time of the first official meeting of the Society of Chemical Industry in June 1881, it had attracted over 300 members.
Incorporation
An Extraordinary General Meeting was held on 27 March 1906, under the direction of president Edward Divers and secretary C. G. Cresswell, to discuss a motion to apply for incorporation under a royal charter. The resolution was formally proposed by Sir (Thomas) Boverton Redwood. After some discussion, the motion was unanimously supported.
The society was formally incorporated, by Royal Charter, as of 17 June 1907, and its bylaws were published in the Journal of the Society of Chemical Industry. By that time, it had expanded to include a number of satellite chapters, including Canada, New South Wales, New York and New England as well as locations within Great Britain.
Headquarters
The first headquarters of the newly fledged Society of Chemical Industry was established in 1881 at Palace Chambers, Bridge Street, Westminster, London. After a series of changes of address, the society finally moved to its fifth and present location at 14/15 – and initially 16 – Belgrave Square in 1955. Owned by the Duke of Westminster, along with the rest of Belgravia, the building was and still is part of the Grosvenor Estate and had been commandeered by the Ministry of Defence during World War II. The former Nazi commander Rudolf Hess is believed to have been interrogated in the building after he flew to Britain in 1941.
Activities and events
SCI organises over 100 conferences and events per year which are focused on cutting edge scientific and special interest subjects. These are primarily organised through SCI member-led technical, international and regional interest groups.
SCI runs free Public Evening Lectures, both at its headquarters as well as online, through its SCITalks! programme.
The society has an extensive awards programmes designed to raise awareness of the benefits of the practical application of chemistry and related sciences across scientific disciplines and industrial sectors. The SCI also confers scholarships and travel bursaries to student members, and celebrates accomplished scientists, educators and business people through a number of international awards, medals, and lectureships.
International groups
International groups include:
Society of Chemical Industry (American Section)
Journals
The society publishes a number of peer-reviewed scientific journals in conjunction with John Wiley & Sons:
Biofuels, Bioproducts and Biorefining
Energy Science & Engineering
Greenhouse Gases: Science and Technology
Journal of Chemical Technology and Biotechnology
Journal of the Science of Food and Agriculture
Polymer International
Chemistry & Industry
SCI also publishes the well-established magazine Chemistry & Industry (C&I).
Chemistry & Industry was launched by the society in 1923. From 1923 it has documented the advancements in chemistry and related science and the inventions being developed by large companies and start ups. It covers a diverse set of technologies and application areas and it is widely read across the community and is circulated internationally.
Awards and honours
The society has an extensive awards and honours programme.
The Honours programme was established in 1996 and is designed to raise awareness of the benefits of the practical application of chemistry and related sciences across scientific disciplines and industrial sectors and to celebrate accomplished scientists, inventors and entrepreneurs through a number of international awards, medals, and lectureships.
The most prestigious honours are the Society Medals, of which there are around 12, and these recognise those who exhibit leadership in promoting the objectives and values of the society. The Society Medals are awarded to persons who have made significant contributions in the field of chemical sciences, innovation and entrepreneurship.
References
External links
1881 establishments in the United Kingdom
Chemical engineering organizations
Chemical industry in the United Kingdom
International organisations based in London
Learned societies of the United Kingdom
Organisations based in the City of Westminster
Scientific organizations established in 1881
Scientific organisations based in the United Kingdom
In 1944, Walter Baade categorized groups of stars within the Milky Way into stellar populations.
In the abstract of the article by Baade, he recognizes that Jan Oort originally conceived this type of classification in 1926.
Baade observed that bluer stars were strongly associated with the spiral arms, and yellow stars dominated near the central galactic bulge and within globular star clusters. Two main divisions were defined as population I and population II, with another newer, hypothetical division called population III added in 1978.
Among the population types, significant differences were found with their individual observed stellar spectra. These were later shown to be very important and were possibly related to star formation, observed kinematics, stellar age, and even galaxy evolution in both spiral and elliptical galaxies. These three simple population classes usefully divided stars by their chemical composition or metallicity.
By definition, each population group shows the trend where lower metal content indicates higher age of stars. Hence, the first stars in the universe (very low metal content) were deemed population III, old stars (low metallicity) as population II, and recent stars (high metallicity) as population I. The Sun is considered population I, a recent star with a relatively high 1.4% metallicity. Note that astrophysics nomenclature considers any element heavier than helium to be a "metal", including chemical non-metals such as oxygen.
Stellar development
Observation of stellar spectra has revealed that stars older than the Sun have fewer heavy elements compared with the Sun. This immediately suggests that metallicity has evolved through the generations of stars by the process of stellar nucleosynthesis.
Formation of the first stars
Under current cosmological models, all matter created in the Big Bang was mostly hydrogen (75%) and helium (25%), with only a very tiny fraction consisting of other light elements such as lithium and beryllium. When the universe had cooled sufficiently, the first stars were born as population III stars, without any contaminating heavier metals. This is postulated to have affected their structure so that their stellar masses became hundreds of times more than that of the Sun. In turn, these massive stars also evolved very quickly, and their nucleosynthetic processes created the first 26 elements (up to iron in the periodic table).
Many theoretical stellar models show that most high-mass population III stars rapidly exhausted their fuel and likely exploded in extremely energetic pair-instability supernovae. Those explosions would have thoroughly dispersed their material, ejecting metals into the interstellar medium (ISM), to be incorporated into the later generations of stars. Their destruction suggests that no galactic high-mass population III stars should be observable. However, some population III stars might be seen in high-redshift galaxies whose light originated during the earlier history of the universe. Scientists have found evidence of an extremely small ultra metal-poor star, slightly smaller than the Sun, found in a binary system of the spiral arms in the Milky Way. The discovery opens up the possibility of observing even older stars.
Stars too massive to produce pair-instability supernovae would have likely collapsed into black holes through a process known as photodisintegration. Here some matter may have escaped during this process in the form of relativistic jets, and this could have distributed the first metals into the universe.
Formation of the observed stars
The oldest stars observed thus far, known as population II, have very low metallicities; as subsequent generations of stars were born, they became more metal-enriched, as the gaseous clouds from which they formed received the metal-rich dust manufactured by previous generations of stars from population III.
As those population II stars died, they returned metal-enriched material to the interstellar medium via planetary nebulae and supernovae, enriching further the nebulae, out of which the newer stars formed. These youngest stars, including the Sun, therefore have the highest metal content, and are known as population I stars.
Chemical classification by Walter Baade
Population I stars
Population I stars are young stars with the highest metallicity out of all three populations and are more commonly found in the spiral arms of the Milky Way galaxy. The Sun is considered as an intermediate population I star, while the sun-like Arae is much richer in metals. (The term "metal rich star" is used to describe stars with a significantly higher metallicity than the Sun; higher than can be explained by measurement error.)
Population I stars usually have regular elliptical orbits of the Galactic Center, with a low relative velocity. It was earlier hypothesized that the high metallicity of population I stars makes them more likely to possess planetary systems than the other two populations, because planets, particularly terrestrial planets, are thought to be formed by the accretion of metals. However, observations of the Kepler Space Telescope data have found smaller planets around stars with a range of metallicities, while only larger, potential gas giant planets are concentrated around stars with relatively higher metallicity – a finding that has implications for theories of gas-giant formation. Between the intermediate population I and the population II stars comes the intermediate disc population.
Population II stars
Population II, or metal-poor, stars are those with relatively little of the elements heavier than helium. These objects were formed during an earlier time of the universe. Intermediate population II stars are common in the bulge near the centre of the Milky Way, whereas population II stars found in the galactic halo are older and thus more metal-deficient. Globular clusters also contain high numbers of population II stars.
A characteristic of population II stars is that despite their lower overall metallicity, they often have a higher ratio of "alpha elements" (elements produced by the alpha process, like oxygen and neon) relative to iron (Fe) as compared with population I stars; current theory suggests that this is the result of type II supernovas being more important contributors to the interstellar medium at the time of their formation, whereas type Ia supernova metal-enrichment came at a later stage in the universe's development.
Scientists have targeted these oldest stars in several different surveys, including the HK objective-prism survey of Timothy C. Beers et al. and the Hamburg-ESO survey of Norbert Christlieb et al., originally started for faint quasars. Thus far, they have uncovered and studied in detail about ten ultra-metal-poor (UMP) stars (such as Sneden's Star, Cayrel's Star, BD +17° 3248) and three of the oldest stars known to date: HE 0107-5240, HE 1327-2326 and HE 1523-0901. Caffau's star was identified as the most metal-poor star yet when it was found in 2012 using Sloan Digital Sky Survey data. However, in February 2014 the discovery of an even lower-metallicity star was announced, SMSS J031300.36-670839.3 located with the aid of SkyMapper astronomical survey data. Less extreme in their metal deficiency, but nearer and brighter and hence longer known, are HD 122563 (a red giant) and HD 140283 (a subgiant).
Population III stars
Population III stars are a hypothetical population of extremely massive, luminous and hot stars with virtually no "metals", except possibly for intermixing ejecta from other nearby, early population III supernovae. The term was first introduced by Neville J. Woolf in 1965. Such stars are likely to have existed in the very early universe (i.e., at high redshift) and may have started the production of chemical elements heavier than hydrogen, which are needed for the later formation of planets and life as we know it.
The existence of population III stars is inferred from physical cosmology, but they have not yet been observed directly. Indirect evidence for their existence has been found in a gravitationally lensed galaxy in a very distant part of the universe. Their existence may account for the fact that heavy elements – which could not have been created in the Big Bang – are observed in quasar emission spectra. They are also thought to be components of faint blue galaxies. These stars likely triggered the universe's period of reionization, a major phase transition of the hydrogen gas composing most of the interstellar medium. Observations of the galaxy UDFy-38135539 suggest that it may have played a role in this reionization process. The European Southern Observatory discovered a bright pocket of early population stars in the very bright galaxy Cosmos Redshift 7 from the reionization period around 800 million years after the Big Bang. The rest of the galaxy has some later redder population II stars. Some theories hold that there were two generations of population III stars.
Current theory is divided on whether the first stars were very massive or not. One possibility is that these stars were much larger than current stars: several hundred solar masses, and possibly up to 1,000 solar masses. Such stars would be very short-lived and last only 2–5 million years. Such large stars may have been possible due to the lack of heavy elements and a much warmer interstellar medium from the Big Bang. Conversely, theories proposed in 2009 and 2011 suggest that the first star groups might have consisted of a massive star surrounded by several smaller stars. The smaller stars, if they remained in the birth cluster, would accumulate more gas and could not survive to the present day, but a 2017 study concluded that if a star of 0.8 solar masses or less was ejected from its birth cluster before it accumulated more mass, it could survive to the present day, possibly even in our Milky Way galaxy.
Analysis of data on extremely low-metallicity population II stars such as HE 0107-5240, which are thought to contain the metals produced by population III stars, suggests that these metal-free stars had masses of 20–130 solar masses. On the other hand, analysis of globular clusters associated with elliptical galaxies suggests pair-instability supernovae, which are typically associated with very massive stars, were responsible for their metallic composition. This also explains why there have been no low-mass stars with zero metallicity observed, despite models constructed for smaller population III stars. Clusters containing zero-metallicity red dwarfs or brown dwarfs (possibly created by pair-instability supernovae) have been proposed as dark matter candidates, but searches for these types of MACHOs through gravitational microlensing have produced negative results.
Population III stars are considered seeds of black holes in the early universe. Unlike high-mass black hole seeds, such as direct collapse black holes, they would have produced light ones. If they could have grown to larger than expected masses, then they could have been quasi-stars, other hypothetical seeds of heavy black holes which would have existed in the early development of the Universe before hydrogen and helium were contaminated by heavier elements.
Detection of population III stars is a goal of NASA's James Webb Space Telescope.
On 8 December 2022, astronomers reported the possible detection of Population III stars, in a high-redshift galaxy called RX J2129–z8He II.
See also
Lists of astronomical objects
Lists of stars
Peekaboo Galaxy
Notes
References
Further reading
Physical cosmological concepts
Population | Stellar population | Physics,Astronomy | 2,307 |
5,100,937 | https://en.wikipedia.org/wiki/Cavity%20wall | A cavity wall is a type of wall that has an airspace between the outer face and the inner, usually structural, construction. The skins typically are masonry, such as brick or cinder block. Masonry is an absorbent material that can retain rainwater or condensation. One function of the cavity is to drain water through weep holes at the base of the wall system or above windows. The weep holes provide a drainage path through the cavity that allows accumulated water an outlet to the exterior of the structure. Usually, weep holes are created by leaving out mortar at the vertical joints between bricks at regular intervals, by the insertion of tubes, or by inserting an absorbent wicking material into the joint. Weep holes are placed wherever a cavity is interrupted by a horizontal element, such as door or window lintels, masonry bearing angles, or slabs. A cavity wall with masonry as both inner and outer vertical elements is more commonly referred to as a double wythe masonry wall.
History
Cavity walls were first used in Greco-Roman buildings, but fell out of use until the 19th century, when they were reintroduced in the United Kingdom, gaining widespread use in the 1920s. In the 20th century metal ties came into use to bind the layers together. Initially cavity widths were narrow and were primarily implemented to reduce the passage of moisture into the interior of the building. The introduction of insulation into the cavity became standard in the 1970s and then required by most building codes in the 1990s.
Advantages
Resist wind driven rain
Thermal break provided by slow moving air films and airgap
Enables use of insulation in the cavity if the cavity is designed for it
Tie types
A tie in a cavity wall is used to secure the internal and external walls, typically using metal tie straps or truss-like assemblies of welded wire that link the masonry thicknesses together.
Components
A cavity wall is composed of two masonry walls separated by an air space. The outer wall is made of brick and faces the outside of the building structure. The inner wall may be constructed of masonry units such as concrete block, structural clay, brick or reinforced concrete. These two walls are fastened together with metal ties or bonding blocks. The ties strengthen the cavity wall.
The water barrier is a water-resistant membrane, either applied to the inner side of the cavity as a film or as a troweled or sprayed liquid.
The flashing component is important. Its main purpose is to direct water out of the cavity. Metal flashing usually extends from the interior wall through the outer wall and a weep hole with a downward curve allows the water to drain. Flashing systems in cavity walls are typically located close to the base of the wall, so that it will collect the water that goes down the wall.
Weep holes are drainage holes left in the exterior wall of the cavity wall, to provide an exit way for water in the cavity.
Expansion and control joints do not have to be aligned in cavity walls.
In modern cavity wall construction, cavity insulation is typically added. This construction makes it possible to add a continuous insulation layer between the two wythes and, vertically, through the slabs, which minimizes thermal bridges. However, industry recommendations, often mandated by building codes, typically require that a cavity wall maintain at least a drainage space free of masonry elements or insulation.
Insulation
Cavity wall insulation is used to reduce heat loss through a cavity wall by filling a portion of the air space with material that inhibits heat transfer.
During construction of new buildings, cavities are often partially filled with rigid insulation panels placed between the two components of the wall.
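The effect of filling part of the cavity with insulation can be illustrated with a simple U-value calculation: the overall heat-transfer coefficient of a layered wall is the reciprocal of the sum of its layer thermal resistances. The sketch below uses illustrative layer thicknesses, conductivities and standard surface/cavity resistances (assumed values for illustration, not figures from this article):

```python
# U-value of a layered wall: U = 1 / sum(R_i), where R = thickness / conductivity
# for a solid layer. All layer values below are illustrative assumptions.

def u_value(resistances):
    """Overall heat-transfer coefficient in W/(m^2.K) from layer resistances in m^2.K/W."""
    return 1.0 / sum(resistances)

R_surfaces = 0.04 + 0.13          # external + internal surface film resistances (typical)
R_brick    = 0.102 / 0.77         # 102 mm brick outer leaf
R_block    = 0.100 / 0.18         # 100 mm lightweight block inner leaf
R_cavity   = 0.18                 # unfilled air cavity (typical design value)
R_board    = 0.050 / 0.035        # 50 mm rigid insulation board in the cavity

u_empty     = u_value([R_surfaces, R_brick, R_cavity, R_block])
u_insulated = u_value([R_surfaces, R_brick, R_cavity, R_block, R_board])
print(f"uninsulated: {u_empty:.2f} W/m2K, insulated: {u_insulated:.2f} W/m2K")
```

With these assumed layers, adding the board cuts the U-value by more than half, which is the basic rationale for cavity insulation.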
United Kingdom
In the United Kingdom, grants from the government and from energy companies are widely available to help with the cost of cavity wall insulation. The Affordable Warmth Objective (HHCRO) provides help for low income and vulnerable households to improve the energy efficiency of their properties and reduce heating bills.
Government led research led to the development of advice on the installation of insulation, the definition of local rain exposure zones, and the creation of British Standard BS 8104 in 1992 that sets out the calculation procedure for assessing exposure of walls to wind driven rain to guide the installation of insulation.
In a significant number of properties, insulation installed under successive UK government-backed schemes was installed incorrectly or was unsuitable for the property. Incorrectly installed cavity wall insulation (CWI) causes water to seep into a property's walls, causing structural problems and damp patches that may also develop into mould. In some cases, the damp and mould resulting from CWI can cause health problems or exacerbate existing conditions, particularly respiratory conditions. This has led to the formation of the Cavity Wall Insulation Victims Alliance (CWIVA). On 3 February 2015 the CWIVA took the debate to the Houses of Parliament, discussing the cavity wall insulation industry.
Issues
Breathing performance: early cavity wall buildings exchange moisture readily with the indoor and outdoor environment. Materials used for repairs must be selected with care so as not to impair this breathing performance.
Cavity wall insulation installed in older buildings can create problems with moisture retention.
Thermal mass: cavity walls are thick walls, which help stabilize the interior environment of a building better than thinner modern walls.
Environmental influences: the orientation or design of a building may affect the performance of its different façades. Some walls may receive more rainwater and wind than others, depending on their orientation or on the protection afforded to some faces.
Moisture is one of the main factors in the weathering of materials.
References
External links
Whittemore, H. L. (1939). Structural properties of a concrete-block cavity-wall construction sponsored by the National Concrete Masonry Association. Washington, D.C.: U.S. Dept. of Commerce, National Bureau of Standards .
Whittemore, H. L. (1939). Structural properties of a reinforced-brick wall construction and a brick-tile cavity-wall construction sponsored by the Structural Clay Products Institute. Washington, D.C.: U.S. Dept. of Commerce, National Bureau of Standards .
Brick Cavity Walls: A Performance Analysis Based on Measurements and Simulations. Journal of Building Physics. October 2007 v31: p95-124 Article must be purchased.
Cavity wall, Energy Saving Trust
Cavity wall insulation (CWI): consumer guide to issues arising from installations, 14 October 2019, Department for Business, Energy & Industrial Strategy
Cavity Wall Insulation Victims Alliance
Masonry
Construction
Types of wall | Cavity wall | Engineering | 1,281 |
12,096,743 | https://en.wikipedia.org/wiki/LiveStation | Livestation was a platform for distributing live television and radio broadcasts over a data network. It was originally developed by Skinkers Ltd. and is now an independent company called Livestation Ltd. The service was originally based on peer-to-peer technology acquired from Microsoft Research. Between mid-June 2013 and mid-July Livestation was unavailable to some subscribers due to technical issues.
In late 2016, the service closed down without notice.
Overview
Livestation aggregated international news channels online and offered them in several ways:
Free to watch: Some channels could be watched for free on the Livestation website or on their desktop player, a freely downloadable video application that presented all the channels through one interface.
Premium service: Some of the free channels were also available on a subscription basis, in both higher (800 kbit/s) and lower (256 kbit/s) quality, delivered via an international content distribution network for higher reliability.
Mobile: Livestation launched BBC World News on the iPhone in 16 European countries and Al Jazeera English globally. The apps were available in the iOS AppStore and streamed the live TV channel 24/7 on both Wi-Fi and 3G connections.
Livestation broadcast streams were encoded in VC-1 format (Livestation did not ultimately use the peer-to-peer technology). Playback controls were overlaid on top of the video stream. Unlike services such as Joost, which offered video-on-demand channels, Livestation streamed live broadcasts.
Livestation provided a website, mobile website and native applications for the iOS, Android, Nokia and Blackberry handsets. Early models of Samsung TV were also supported. They also provided desktop software available for Windows, Mac (including PowerPC) and Linux. The cross-platform compatibility of the desktop software was facilitated by the Qt framework. Social networking features were later added that include the ability to chat with other viewers and also find out what others are watching through a user-generated rating system. You could search and select the available channels either from the website or from within the software.
Livestation's revenue grew in the first quarter of 2011 by 1,047 percent, resulting in the first profitable quarter in its history.
Between mid-June and mid-July 2013, Livestation suffered a prolonged series of technical issues and was unavailable to some users.
In early 2015, Livestation re-branded their entire site changing what channels were offered and bringing in an interactive feature. Some stations on the app were not on the main site and vice versa.
Available channels
As of 2016, stations available until closure and former live TV news channels in the global offering (which came with a default installation) included:
ABS-CBN News Channel
Al Aan TV
Al-Alam News Network
Al Arabiya
Al Jazeera
Al Jazeera English
Al Jazeera Mubasher
Al Mayadeen
Al Nabaa TV
BBC Arabic
BBC Persian
BBC World News
BBC World Service Radio
CNBC
CNBC Arabiya (EMEA)
Bloomberg TV
BBC News Channel
CCTV News
CNC World
CNN International
C-SPAN
Democratic Voice of Burma
Deutsche Welle TV and radio
eNCA
Euronews
Espreso TV
Fox News Radio
France24
HispanTV
i24news
Kurdast News
Libya TV
NASA TV
NHK World News
One News
Press TV
RFI Afrique and Monde.
Reuters TV
Russia Today
SAMAA TV
Sky News Arabia
Sky News International
TeleSUR
United Nations Television
UNHCR TV
VOA Persian
As of 2016, the Livestation site is closed.
See also
IPTV
Internet Television
TVUnetworks
References
External links
Official website
Live Station Status
Internet television streaming services
Internet properties established in 2008
Internet properties disestablished in 2016
Defunct companies based in London
Defunct video on demand services
Microsoft Research
Television technology | LiveStation | Technology | 747 |
41,047,821 | https://en.wikipedia.org/wiki/Reynolds%20stress%20equation%20model | Reynolds stress equation model (RSM), also referred to as second moment closures are the most complete classical turbulence model. In these models, the eddy-viscosity hypothesis is avoided and the individual components of the Reynolds stress tensor are directly computed. These models use the exact Reynolds stress transport equation for their formulation. They account for the directional effects of the Reynolds stresses and the complex interactions in turbulent flows. Reynolds stress models offer significantly better accuracy than eddy-viscosity based turbulence models, while being computationally cheaper than Direct Numerical Simulations (DNS) and Large Eddy Simulations.
Shortcomings of Eddy-viscosity based models
Eddy-viscosity based models like the k-epsilon and the k-omega models have significant shortcomings in complex, real-life turbulent flows. For instance, in flows with streamline curvature, flow separation, zones of recirculating flow, or flows influenced by mean rotational effects, the performance of these models is unsatisfactory.
Such one- and two-equation based closures cannot account for the return to isotropy of turbulence, observed in decaying turbulent flows. Eddy-viscosity based models cannot replicate the behaviour of turbulent flows in the Rapid Distortion limit, where the turbulent flow essentially behaves as an elastic medium (instead of viscous).
Reynolds Stress Transport Equation
Reynolds Stress equation models rely on the Reynolds Stress Transport equation. The equation for the transport of kinematic Reynolds stress is
Rate of change of + Transport of by convection = Transport of by diffusion + Rate of production of + Transport of due to turbulent pressure-strain interactions + Transport of due to rotation + Rate of dissipation of .
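In symbols (one common notation, with the Reynolds stress written as an overbarred velocity correlation; sign and grouping conventions vary between texts), this balance reads:

```latex
\underbrace{\frac{\partial \overline{u_i' u_j'}}{\partial t}}_{\text{rate of change}}
+ \underbrace{\overline{U}_k \, \frac{\partial \overline{u_i' u_j'}}{\partial x_k}}_{\text{convection}}
= D_{ij} + P_{ij} + \Pi_{ij} + \Omega_{ij} - \varepsilon_{ij}
```

where $D_{ij}$, $P_{ij}$, $\Pi_{ij}$, $\Omega_{ij}$ and $\varepsilon_{ij}$ denote the diffusion, production, pressure-strain, rotation and dissipation terms respectively.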
The six partial differential equations above represent six independent Reynolds stresses. While the production term is closed and does not require modelling, the other terms, such as the pressure-strain correlation and the dissipation, are unclosed and require closure models.
Production term
The production term that is used in CFD computations with Reynolds stress transport equations is P_ij = −( (u_i'u_k') ∂U_j/∂x_k + (u_j'u_k') ∂U_i/∂x_k ).
Physically, the Production term represents the action of the mean velocity gradients working against the Reynolds stresses. This accounts for the transfer of kinetic energy from the mean flow to the fluctuating velocity field. It is responsible for sustaining the turbulence in the flow through this transfer of energy from the large scale mean motions to the small scale fluctuating motions.
This is the only term that is closed in the Reynolds Stress Transport Equations. It requires no models for its direct evaluation. All other terms in the Reynolds Stress Transport Equations are unclosed and require closure models for their evaluation.
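As a quick numerical sanity check of these statements — a sketch with made-up illustrative tensors, not data from any simulation — the production tensor can be evaluated directly from the mean velocity gradient and the Reynolds stresses:

```python
import numpy as np

# Illustrative (made-up) kinematic Reynolds stress tensor R_ij = <u_i' u_j'>
R = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.5, 0.2],
              [0.1, 0.2, 1.0]])

# Illustrative mean velocity gradient, gradU[i, k] = dU_i/dx_k
gradU = np.array([[0.1,  0.3, 0.0],
                  [0.0, -0.1, 0.2],
                  [0.1,  0.0, 0.0]])

def production(R, gradU):
    """P_ij = -(R_ik dU_j/dx_k + R_jk dU_i/dx_k): closed, no modelling needed."""
    return -(np.einsum('ik,jk->ij', R, gradU) + np.einsum('jk,ik->ij', R, gradU))

P = production(R, gradU)
# P is symmetric by construction, and half its trace is the production of
# turbulent kinetic energy, P_k = -R_ik dU_i/dx_k: the energy drained from
# the mean flow into the fluctuations.
Pk = 0.5 * np.trace(P)
```

The symmetry of P and the trace identity for the turbulent-kinetic-energy production hold for any stress tensor and gradient, which makes them useful checks in an implementation.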
Rapid Pressure-Strain Correlation term
The rapid pressure-strain correlation term redistributes energy among the Reynolds stresses components. This is dependent on the mean velocity gradient and rotation of the co-ordinate axes. Physically, this arises due to the interaction among the fluctuating velocity field and the mean velocity gradient field. The simplest linear form of the model expression is
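One standard way of writing this linear form, sketched here with generic coefficients C_2 to C_4 (each named model supplies its own calibration; this is the generic structure, not any one model), is:

```latex
\frac{\Pi^{(r)}_{ij}}{2k} = C_2\, S_{ij}
+ C_3 \left( b_{ik} S_{jk} + b_{jk} S_{ik} - \tfrac{2}{3}\, b_{mn} S_{mn}\, \delta_{ij} \right)
+ C_4 \left( b_{ik} W_{jk} + b_{jk} W_{ik} \right)
```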
Here b_ij is the Reynolds stress anisotropy tensor, S_ij is the rate of strain tensor for the mean velocity field and W_ij is the rate of rotation tensor for the mean velocity field. By convention, C_2, C_3 and C_4 denote the coefficients of the rapid pressure strain correlation model. There are many different models for the rapid pressure strain correlation term that are used in simulations. These include the Launder-Reece-Rodi model, the Speziale-Sarkar-Gatski model, the Hallbäck-Johansson model, and the Mishra-Girimaji model, besides others.
Slow Pressure-Strain Correlation term
The slow pressure-strain correlation term redistributes energy among the Reynolds stresses. This is responsible for the return to isotropy of decaying turbulence where it redistributes energy to reduce the anisotropy in the Reynolds stresses. Physically, this term is due to the self-interactions amongst the fluctuating field. The model expression for this term is given as
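A representative sketch is the classical Rotta form, with C_1 the Rotta constant, k the turbulent kinetic energy, ε its dissipation rate and b_ij the stress anisotropy:

```latex
\Pi^{(s)}_{ij} = -C_1\, \frac{\varepsilon}{k}
\left( \overline{u_i' u_j'} - \tfrac{2}{3}\, k\, \delta_{ij} \right)
= -2\, C_1\, \varepsilon\, b_{ij}
```

The term vanishes when the stresses are isotropic and otherwise drives them toward isotropy, which is the return-to-isotropy behaviour described above.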
There are many different models for the slow pressure strain correlation term that are used in simulations. These include the Rotta model and the Speziale-Sarkar model, besides others.
Dissipation term
The traditional modelling of the dissipation rate tensor assumes that the small dissipative eddies are isotropic. In this model the dissipation only affects the normal Reynolds stresses.
ε_ij = (2/3) ε δ_ij, or equivalently e_ij = 0,
where ε is the dissipation rate of turbulent kinetic energy, δ_ij = 1 when i = j and 0 when i ≠ j, and e_ij is the dissipation rate anisotropy, defined as e_ij = ε_ij/ε − (2/3) δ_ij.
However, as has been shown by e.g. Rogallo, Schumann & Patterson, Uberoi, Lee & Reynolds, and Groth, Hallbäck & Johansson, there exist many situations where this simple model of the dissipation rate tensor is insufficient due to the fact that even the small dissipative eddies are anisotropic. To account for this anisotropy in the dissipation rate tensor Rotta proposed a linear model relating the anisotropy of the dissipation rate stress tensor to the anisotropy of the stress tensor.
or
where .
The parameter is assumed to be a function of the turbulent Reynolds number, the mean strain rate etc. Physical considerations imply that it should tend to zero when the turbulent Reynolds number tends to infinity and to unity when the turbulent Reynolds number tends to zero. However, the strong realizability condition implies that it should be identically equal to 1.
Based on extensive physical and numerical (DNS and EDQNM) experiments in combination with a strong adherence to fundamental physical and mathematical limitations and boundary conditions Groth, Hallbäck and Johansson proposed an improved model for the dissipation rate tensor.
where is the second invariant of the tensor and is a parameter that, in principle, could depend on the turbulent Reynolds number, the mean strain rate parameter etc.
However, Groth, Hallbäck and Johansson used rapid distortion theory to evaluate the limiting value of the parameter, which turns out to be 3/4. Using this value, the model was tested in DNS simulations of four different homogeneous turbulent flows. Even though the parameters in the cubic dissipation rate model were fixed through the use of realizability and RDT prior to the comparisons with the DNS data, the agreement between model and data was very good in all four cases.
The main difference between this model and the linear one is that each component of is influenced by the complete anisotropic state. The benefit of this cubic model is apparent from the case of an irrotational plane strain in which the streamwise component of is close to zero for moderate strain rates whereas the corresponding component of is not. Such a behaviour cannot be described by a linear model.
Diffusion term
The modelling of the diffusion term is based on the assumption that the rate of transport of Reynolds stresses by diffusion is proportional to the gradients of Reynolds stresses. This is an application of the concept of the gradient diffusion hypothesis to modeling the effect of spatial redistribution of the Reynolds stresses due to the fluctuating velocity field. The simplest form that is followed by commercial CFD codes is
D_ij = ∂/∂x_k ( (ν_t/σ_k) ∂(u_i'u_j')/∂x_k ),
where ν_t = C_μ k²/ε is the turbulent (eddy) viscosity and σ_k is a turbulent Prandtl number for the Reynolds stresses, with typical values σ_k ≈ 0.82–1.0 and C_μ = 0.09.
Rotational term
The rotational term is given as
Ω_ij = −2ω_k ( (u_j'u_m') e_ikm + (u_i'u_m') e_jkm ),
where ω_k is the rotation vector and e_ijk is the permutation symbol: e_ijk = 1 if i, j, k are in cyclic order and all different, −1 if i, j, k are in anti-cyclic order and all different, and 0 if any two indices are the same.
Advantages of RSM
1) Unlike the k-ε model which uses an isotropic eddy viscosity, RSM solves all components of the turbulent transport.
2) It is the most general of all turbulence models and works reasonably well for a large number of engineering flows.
3) It requires only the initial and/or boundary conditions to be supplied.
4) Since the production terms need not be modeled, it can selectively damp the stresses due to buoyancy, curvature effects etc.
See also
Reynolds Stress
Isotropy
Turbulence Modeling
Eddy
k-epsilon turbulence model
Mixing length model
References
Bibliography
"Turbulent Flows", S. B. Pope, Cambridge University Press (2000).
"Modelling Turbulence in Engineering and the Environment: Second-Moment Routes to Closure", Kemal Hanjalić and Brian Launder, Cambridge University Press (2011).
Turbulence
Turbulence models | Reynolds stress equation model | Chemistry | 1,686 |
2,903,095 | https://en.wikipedia.org/wiki/Sigma%20Aurigae | Sigma Aurigae, Latinized from σ Aurigae, is a giant star in the northern constellation of Auriga. It is faintly visible to the naked eye with an apparent visual magnitude of 4.99. With an annual parallax shift of 6.21 mas, it is approximately distant from the Earth. This is an evolved giant star with a stellar classification of K4 III.
Sigma Aurigae has a 12th magnitude companion at an angular separation of 8 arcseconds, as well as two fainter companions at 28 and 35" respectively. All are background objects, stars much further away than Sigma itself.
Sigma Aurigae, along with λ Aur and μ Aur, were Kazwini's Al Ḣibāʽ (ألحباع), the Tent. According to the catalogue of stars in the Technical Memorandum 33-507 – A Reduced Star Catalog Containing 537 Named Stars, Al Ḣibāʽ were the title for three stars: λ Aur as Al Ḣibāʽ I, μ Aur as Al Ḣibāʽ II and σ Aur as Al Ḣibāʽ III.
References
External links
HR 1773
CCDM J05247+3723
Image Sigma Aurigae
035186
Double stars
025292
Aurigae, Sigma
Auriga
K-type giants
Aurigae, 21
1773
Durchmusterung objects | Sigma Aurigae | Astronomy | 282 |
4,777,875 | https://en.wikipedia.org/wiki/Clifford%20theory | In mathematics, Clifford theory, introduced by , describes the relation between representations of a group and those of a normal subgroup.
Alfred H. Clifford
Alfred H. Clifford proved the following result on the restriction of finite-dimensional irreducible representations from a group G to a normal subgroup N of finite index:
Clifford's theorem
Theorem. Let π: G → GL(n,K) be an irreducible representation with K a field. Then the restriction of π to N breaks up into a direct sum of irreducible representations of N of equal dimensions. These irreducible representations of N lie in one orbit for the action of G by conjugation on the equivalence classes of irreducible representations of N. In particular the number of pairwise nonisomorphic summands is no greater than the index of N in G.
Clifford's theorem yields information about the restriction of a complex irreducible character of a finite group G to a normal subgroup N. If μ is a complex character of N, then for a fixed element g of G, another character, μ(g), of N may be constructed by setting μ(g)(n) = μ(gng−1) for all n in N. The character μ(g) is irreducible if and only if μ is. Clifford's theorem states that if χ is a complex irreducible character of G, and μ is an irreducible character of N with ⟨χN, μ⟩ ≠ 0, then
χN = e(μ(g1) + μ(g2) + ... + μ(gt)),
where e and t are positive integers, and each gi is an element of G. The integers e and t both divide the index [G:N]. The integer t is the index of a subgroup of G, containing N, known as the inertial subgroup of μ. This is
IG(μ) = {g ∈ G : μ(g) = μ}
and is often denoted by IG(μ).
The elements gi may be taken to be representatives of all the right cosets of the subgroup IG(μ) in G.
In fact, the integer e divides the index [IG(μ) : N], though the proof of this fact requires some use of Schur's theory of projective representations.
Proof of Clifford's theorem
The proof of Clifford's theorem is best explained in terms of modules (and the module-theoretic version works for irreducible modular representations). Let K be a field, V be an irreducible K[G]-module, VN be its restriction to N and U be an irreducible K[N]-submodule of VN. For each g in G and n in N, the equality holds, since N was a normal subgroup of G. Therefore, g.U is an irreducible K[N]-submodule of VN, and is a K[G]-submodule of V, hence must be all of V by irreducibility. Now VN is expressed as a sum of irreducible submodules, and this expression may be refined to a direct sum. The proof of the character-theoretic statement of the theorem may now be completed in the case K = C. Let χ be the character of G afforded by V and μ be the character of N afforded by U. For each g in G, the C[N]-submodule g.U affords the character μ(g) and . The respective equalities follow because χ is a class-function of G and N is a normal subgroup. The integer e appearing in the statement of the theorem is this common multiplicity.
Corollary of Clifford's theorem
A corollary of Clifford's theorem, which is often exploited, is that the irreducible character χ appearing in the theorem is induced from an irreducible character of the inertial subgroup IG(μ). If, for example, the irreducible character χ is primitive (that is, χ is not induced from any proper subgroup of G), then G = IG(μ) and χN = eμ. A case where this property of primitive characters is used particularly frequently is when N is Abelian and χ is faithful (that is, its kernel contains just the identity element). In that case, μ is linear, N is represented by scalar matrices in any representation affording character χ and N is thus contained in the center of G. For example, if G is the symmetric group S4, then G has a faithful complex irreducible character χ of degree 3. There is an Abelian normal subgroup N of order 4 (a Klein 4-subgroup) which is not contained in the center of G. Hence χ is induced from a character of a proper subgroup of G containing N. The only possibility is that χ is induced from a linear character of a Sylow 2-subgroup of G.
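The S4 discussion above has an even smaller cousin that can be checked numerically: take G = S3 in its 2-dimensional irreducible representation and N = A3 (cyclic of order 3). The restriction to N splits into two distinct linear characters that conjugation by a transposition swaps, so they form a single G-orbit with e = 1 and t = 2 = [G : IG(μ)]. A NumPy sketch of this check (the matrices below are one standard choice of the representation, introduced here for illustration):

```python
import numpy as np

theta = 2 * np.pi / 3
r = np.array([[np.cos(theta), -np.sin(theta)],     # generator of N = A3:
              [np.sin(theta),  np.cos(theta)]])    # a rotation by 120 degrees
s = np.array([[1.0, 0.0],
              [0.0, -1.0]])                        # a transposition in S3 (a reflection)

# Conjugating r by s inverts it, so conjugation by s swaps the two
# eigencharacters of N: they lie in a single S3-orbit.
assert np.allclose(s @ r @ np.linalg.inv(s), r.T)  # r^{-1} = r^T for a rotation

# Restricted to N, the 2-dim irreducible decomposes as mu + mu^(s):
# the eigenvalues of r are the two primitive cube roots of unity,
# i.e. two distinct linear characters (e = 1, t = 2).
evals = np.linalg.eigvals(r)
chi_r = np.trace(r)   # character value chi(r) = mu(r) + mu^(s)(r) = -1
```

Here e·t·μ(1) = 1·2·1 = 2 = χ(1), and both e and t divide [G:N] = ... well, t = 2 = [G:N] while e = 1, consistent with the divisibility statements of the theorem.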
Further developments
Clifford's theorem has led to a branch of representation theory in its own right, now known as Clifford theory. This is particularly relevant to the representation theory of finite solvable groups, where normal subgroups usually abound. For more general finite groups, Clifford theory often allows representation-theoretic questions to be reduced to questions about groups that are close (in a sense which can be made precise) to being simple.
found a more precise version of this result for the restriction of irreducible unitary representations of locally compact groups to closed normal subgroups in what has become known as the "Mackey machine" or "Mackey normal subgroup analysis".
References
Representation theory | Clifford theory | Mathematics | 1,096 |
251,002 | https://en.wikipedia.org/wiki/Red%20River%20Floodway | The Red River Floodway () is an artificial flood control waterway in Western Canada. It is a long channel which, during flood periods, takes part of the Red River's flow around the city of Winnipeg, Manitoba to the east and discharges it back into the Red River below the dam at Lockport. It can carry floodwater at a rate of up to , expanded in the 2000s from its original channel capacity of .
The Floodway was pejoratively nicknamed Duff's Ditch by opponents of its construction, after Premier Duff Roblin, whose Progressive Conservative government initiated the project, partly in response to the disastrous 1950 Red River flood. It was completed on time and under budget. Subsequent events have vindicated the plan, and the nickname has become an affectionate one. Since its completion in 1968, the Floodway is estimated to have prevented over $40 billion in cumulative flood damage. It was designated a National Historic Site of Canada in 2000, as the floodway is an outstanding engineering achievement in terms of both function and impact.
From south to north, the Floodway passes through the extreme southeastern part of Winnipeg and the rural municipalities of Ritchot, Springfield, East St. Paul, and St. Clements.
History
Following the submission of the Royal Commission report, Manitobans were strongly divided as to whether the province could afford the capital costs of a mammoth engineering project that would benefit primarily Winnipeg. The project was championed by Dufferin (Duff) Roblin, the Leader of the Opposition and head of the Manitoba Progressive Conservative Party, but it was vehemently denounced by opponents as a monumental, and potentially ruinous, waste of money. Indeed, the projected Red River Floodway was derisively referred to as “Duff’s Folly” and “Duff’s Ditch”, and decried as “approximating the building of the pyramids of Egypt in terms of usefulness.” The construction of the floodway and Assiniboine River works would entail a capital cost of over $72 million, amortized over fifty years at 4% interest, at a time when the province had a population of only 900,000 and an annual net provincial revenue of about $74 million. Following the formation of a new provincial government in June 1958, Duff Roblin, the newly elected Premier of Manitoba, continued to promote the floodway, and managed to secure a commitment from the federal government of Prime Minister John Diefenbaker to pay up to 60% of the construction costs.
Construction of the Floodway started on November 27, 1962, and finished in March 1968. The construction was a major undertaking with of earth excavated—more than what was moved for the Suez Canal.
At the time, the project was the second largest earth-moving project in the world – next only to the construction of the Panama Canal. The total cost at the time was $63 million, equivalent to approximately $505 million today.
Design
The Floodway protection system includes more than just the channel to the east of the city, but also the dikes along the river through Winnipeg and the West Dike extending to the southwest from the floodway inlet. Primarily as a result of the Floodway, the city suffered little flood damage. After the 1997 flood, a 2004 re-assessment of the floodway and its channel capacity indicated that could be passed through the floodway during a major flood, but this is considered above the design capacity as it would submerge bridges, and the decision was made to further expand the floodway.
Although the term "floodway gates" is used for the control structure, this is a misnomer as the gates are actually on the Red River as it enters the city and not on the floodway channel. When Red River flows exceed what can safely be handled by the river channel within the city, the gates begin to close by rising up out of the river bed, to the degree needed, restricting water flow into the city to manageable amounts. The resulting upstream back-up of the Red River then flows into the adjacent floodway entrance, diverting the excess flow that could not be safely handled by the river channel within the city. Under flood conditions, even when the floodway is in operation, the Red River within the city will still carry greater than normal amounts of water and some local flood mitigation measures still may be required within the city. The rise in river levels upstream of the gates when in operation needs to be contained by a diking system.
The West Dike which extends to near the village of Brunkild is the limiting factor on the volume of water that can be diverted around the city, as a result of the extremely low grades in the area. This dike was urgently extended by 42 km from its previous western terminus near Domain MB in 1997 to prevent flood water from doing an end run around the original dike. In 2003, the province announced plans to expand the Floodway, increasing its flow capacity from . It was decided to widen the Floodway as opposed to deepening it because of the soil and ground conditions in the area. Many underground aquifers in the area are used for drinking water for rural residents and the aquifers could potentially be contaminated if the floodway were deeper. There is also potential for pressures to increase in the aquifers, causing a "blowout" to occur, where water would flow from the aquifers in the ground to the surface and reduce the capacity of the Floodway. Officials decided widening the floodway would be the best option despite the lower hydraulic capacity that would result.
Flow rates
Below are the peak flow rates recorded on the Red River Floodway since it was completed in 1968.
1997 Red River Flood
The 1997 flood was a 100-year flood. It came close to overwhelming Winnipeg's existing flood protection system. At the time, the Winnipeg Floodway was designed to protect against a flow of , but the 1997 flow was . To compensate, the province broke operational rules for the Floodway, as defined in legislation, during the night of April 30 / May 1, to prevent waters in Winnipeg from rising above the designed limit of above the "James Avenue datum", but causing additional flooding upriver. Winnipeg Mayor Susan Thompson, announcing that the design limit had been reached, misinterpreted this as good news that the flooding had peaked. City sand-bagging stopped, and national reporters left the city, but the water continued to rise inside and outside of the city until the peak late on May 3 / early on May 4. The city officials have said that the peak occurred on May 1; scientific reports record a peak on May 3/4.
Expansion
Since the 1997 flood resulted in water levels that took the existing floodway to the limits of its capacity, various levels of government commissioned engineering studies for a major increase in flood protection for the City of Winnipeg. Work began in late 2005 under a provincial collective bargaining agreement and has included modifications to rail and road crossings as well as transmission line spans, upgrades to inlet control structures and fire protection, increased elevation of existing dikes (including the Brunkild dike), and the widening of the entire floodway channel. The NDP government set aside a portion of the construction budget for aboriginal construction firms. The Red River Floodway Expansion was completed in late 2010 at a final cost of more than $665,000,000. Since the completion of the expansion, the capacity of the floodway has increased to per second, the estimated level of a 1-in-700 year flood event. (Using the flow rates of Niagara Falls as a standard of comparison, this is more than double its average of 1,833 cubic metres per second and about a third over its maximum.) The expanded floodway now protects over 140,000 homes and over 8,000 businesses, and will prevent more than $12 billion in damage to the provincial economy in the event of a 1-in-700 year flood.
The NDP government was criticized by Conservative Brian Pallister, then the Member of Parliament, for requiring workers in construction companies working on the floodway to unionize. Pallister, MP for the Portage—Lisgar constituency and future Manitoba premier, told parliament, "the Manitoba NDP government is planning to proceed with a plan to force every worker on the Red River floodway expansion to unionize, despite the fact that 95% of Manitoba's construction companies are not unionized."
The diversion of flood water has been criticized for shifting the impact of flooding from urban Winnipeg to rural communities such as Emerson, Morris, and St. Adolphe. In 1997 these towns and the surrounding farm buildings and lands ended up with the bulk of the flood water in order to save Winnipeg from flood damage. In 2011, the Manitoba government intentionally diverted water from the Assiniboine River to save Winnipeg, which ended up flooding communities around Lake Manitoba. The communities of Pinaymootang, Lake St. Martin, Little Saskatchewan and Dauphin River were severely impacted, as were the surrounding farmland and cottages.
Considerations in the United States
The city of Fargo, North Dakota faces very similar flooding challenges to Winnipeg due to its similar topography and position upstream on the Red River. In 2008, the US Army Corps of Engineers began a feasibility study of flood mitigation techniques for the area. During this study, the city faced catastrophic flooding, catapulting the project into public consciousness. In 2010, the US Federal government agreed to work with the city, its smaller sister city of Moorhead, Minnesota, as well as Cass and Clay counties to begin the formal planning process. The Federal government additionally pledged significant financial support for the project. The result was the Fargo-Moorhead Area Diversion Project, which is currently under construction as of 2024.
See also
Portage Diversion (Assiniboine River Floodway)
Shellmouth Reservoir
Notes
External links
Manitoba Floodway Authority
A Review of the Red River Floodway Operating Rules - Manitoba Conservation
Flood control works
CBC Video Archives: Duff's Ditch is completed
Manitoba Historical Society: “Duff’s Ditch”: The Origins, Construction, and Impact of the Red River Floodway
Red River of the North
Buildings and structures in Manitoba
Geography of Winnipeg
Flood control projects
Flood control in Canada
Macro-engineering
National Historic Sites in Manitoba | Red River Floodway | Engineering | 2,075 |
23,580,901 | https://en.wikipedia.org/wiki/C4H9NO2 | The molecular formula C4H9NO2 (molar mass: 103.12 g/mol) may refer to:
α-Aminobutyric acid
β-Aminobutyric acid
γ-Aminobutyric acid (GABA)
2-Aminoisobutyric acid
3-Aminoisobutyric acid
Nitroisobutane
n-Nitrobutane
Butyl nitrite
Dimethylglycine
Isobutyl nitrite
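The shared molar mass can be checked directly from standard atomic weights; a minimal sketch in Python (the atomic-weight values below are standard IUPAC figures supplied for this sketch, not taken from the page itself):

```python
# Check of the molar mass 103.12 g/mol for C4H9NO2, using standard
# IUPAC atomic weights.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(composition):
    """Sum atomic weights over an {element: count} composition."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in composition.items())

mass = molar_mass({"C": 4, "H": 9, "N": 1, "O": 2})
print(round(mass, 2))  # 103.12
```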
Molecular formulas | C4H9NO2 | Physics,Chemistry | 95 |
372,548 | https://en.wikipedia.org/wiki/Motor%20nerve | A motor nerve, or efferent nerve, is a nerve that contains exclusively efferent nerve fibers and transmits motor signals from the central nervous system (CNS) to the muscles of the body. This is different from the motor neuron, which includes a cell body and branching of dendrites, while the nerve is made up of a bundle of axons. Motor nerves act as efferent nerves which carry information out from the CNS to muscles, as opposed to afferent nerves (also called sensory nerves), which transfer signals from sensory receptors in the periphery to the CNS. Efferent nerves can also connect to glands or other organs/issues instead of muscles (and so motor nerves are not equivalent to efferent nerves). The vast majority of nerves contain both sensory and motor fibers and are therefore called mixed nerves.
Structure and function
Motor nerve fibers transmit signals from the CNS to peripheral neurons of proximal muscle tissue. Motor nerve axon terminals innervate skeletal and smooth muscle, as they are heavily involved in muscle control. Motor nerves tend to be rich in acetylcholine vesicles because the motor nerve, a bundle of motor axons, delivers the motor signals that drive movement and motor control. Synaptic vesicles reside in the axon terminals of the motor nerve bundles. A high calcium concentration outside of presynaptic motor nerves increases the size of end-plate potentials (EPPs).
Protective tissues
Within motor nerves, each axon is wrapped by the endoneurium, which is a layer of connective tissue that surrounds the myelin sheath. Bundles of axons are called fascicles, which are wrapped in perineurium. All of the fascicles wrapped in the perineurium are wound together and wrapped by a final layer of connective tissue known as the epineurium. These protective tissues defend nerves from injury and pathogens and help to maintain nerve function. Layers of connective tissue maintain the rate at which nerves conduct action potentials.
Spinal cord exit
Most motor pathways originate in the motor cortex of the brain. Signals run down the brainstem and spinal cord ipsilaterally, on the same side, and exit the spinal cord at the ventral horn of the spinal cord on either side. Motor nerves communicate with the muscle cells they innervate through motor neurons once they exit the spinal cord.
Motor nerve types
Motor nerves can vary based on the subtype of motor neuron they are associated with.
Alpha
Alpha motor neurons target extrafusal muscle fibers. The motor nerves associated with these neurons innervate extrafusal fibers and are responsible for muscle contraction. These nerve fibers have the largest diameter of the motor neurons and the highest conduction velocity of the three types.
Beta
Beta motor neurons innervate intrafusal fibers of muscle spindles. These nerves are responsible for signaling slow twitch muscle fibers.
Gamma
Gamma motor neurons, unlike alpha motor neurons, are not directly involved in muscle contraction. The nerves associated with these neurons do not send signals that directly adjust the shortening or lengthening of muscle fibers. However, these nerves are important in keeping muscle spindles taut.
Neurodegeneration
Motor neural degeneration is the progressive weakening of neural tissues and connections in the nervous system. Muscles begin to weaken as there are no longer any motor nerves or pathways that allow for muscle innervation. Motor neuron diseases can be viral, genetic, or a result of environmental factors. The exact causes remain unclear; however, many experts believe that toxic and environmental factors play a large role.
Neuroregeneration
Neuroregeneration is hindered by many factors, both internal and external. Nerves have a weak regenerative ability, and new nerve cells cannot simply be made. The outside environment can also play a role in nerve regeneration. Neural stem cells (NSCs), however, are able to differentiate into many different types of nerve cells. This is one way that nerves can "repair" themselves. NSC transplant into damaged areas usually leads to the cells differentiating into astrocytes, which assist the surrounding neurons. Schwann cells have the ability to regenerate, but their capacity to repair nerve cells declines over time and with the distance of the Schwann cells from the site of damage.
See also
Sensory nerve
Afferent nerve fiber
Efferent nerve fiber
Sensory neuron
Motor neuron (efferent neuron)
References
Nervous system | Motor nerve | Biology | 911 |
31,933,730 | https://en.wikipedia.org/wiki/E.%20S.%20Russell | Edward Stuart Russell OBE FLS (25 March 1887 – 24 August 1954) was a Scottish biologist and philosopher of biology.
Russell was born near Glasgow. He studied at Greenock Academy and later at Glasgow University under Sir Graham Kerr, and worked with J. Arthur Thomson after he graduated. He was influenced by his friend Patrick Geddes and, in his zoological studies, sought to find holistic principles. He also believed in Lamarckian heritability. He was involved in fishery research, working on research vessels and publishing on the biology of cephalopods and quantitative methods for gathering fishery data. He also worked as a Scottish fisheries expert, as Inspector of Fisheries, and as an advisor to HM Government. He was the first editor of the Journal du Conseil (now ICES Journal of Marine Science). He was an honorary lecturer on animal behaviour at University College London for about fifteen years. He was elected President of the Zoology section of the British Association in 1934. From 1940 to 1942, he served as the President of the Linnean Society. He died at Hastings, East Sussex, from heart failure at the age of 67.
Russell favored holism and organicism. He was a critic of the modern synthesis and presented his own evolutionary theory uniting developmental biology with heredity but opposing Mendelian inheritance. He was influenced by Karl Ernst von Baer and Johann Wolfgang von Goethe. He saw teleology as inherent in the organism.
Books
Form and Function: A Contribution to the History of Animal Morphology (1916)
The Study of Living Things: Prolegomena to a Functional Biology (1924)
The Interpretation of Development and Heredity: A Study in Biological Method (1930)
The Behavior of Animals (1934)
The Directiveness of Organic Activities (1945)
The Diversity of Animals: An Evolutionary Study (1962)
References
External links
Reviews
1887 births
1954 deaths
Officers of the Order of the British Empire
Presidents of the Linnean Society of London
Theoretical biologists
Alumni of the University of Glasgow
Lamarckism
Non-Darwinian evolution
20th-century Scottish zoologists
Philosophers of biology | E. S. Russell | Biology | 418 |
47,719,483 | https://en.wikipedia.org/wiki/NGC%20526 | NGC 526 is a pair of interacting lenticular galaxies in the constellation of Sculptor. Both the constituents are classified as S0 lenticular galaxies. This pair was first discovered by John Herschel on September 1, 1834. Dreyer, the compiler of the catalogue described the galaxy as "faint, small, a little extended, the preceding of 2", the other object being NGC 527.
See also
List of NGC objects (1–1000)
References
External links
SEDS
Lenticular galaxies
Sculptor (constellation)
0526
05120
Discoveries by John Herschel
Astronomical objects discovered in 1834 | NGC 526 | Astronomy | 119 |
37,067,540 | https://en.wikipedia.org/wiki/Iota%20Gruis | Iota Gruis, Latinized from ι Gruis, is a binary star system in the southern constellation of Grus. It has an apparent visual magnitude of 3.90, which is bright enough to be seen with the naked eye at night. The distance to this system, as determined using an annual parallax shift of 17.80 mas as seen from the Earth, is about 183 light years.
This is a single-lined spectroscopic binary with an orbital period of and an eccentricity of 0.66. The yellow-hued primary component is an evolved K-type giant star with a stellar classification of K1 III. It is an X-ray emitter with a flux of .
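The quoted distance follows from the parallax by the standard relation d(pc) = 1000 / p(mas), then converting parsecs to light-years; a quick sketch (the conversion factor is the standard parsec-to-light-year value, nothing here is specific to this catalogue entry):

```python
# Distance from annual parallax: d[pc] = 1000 / p[mas], then convert
# parsecs to light-years (1 pc = 3.26156 ly, the standard factor).
def parallax_to_light_years(parallax_mas):
    distance_pc = 1000.0 / parallax_mas
    return distance_pc * 3.26156

print(round(parallax_to_light_years(17.80)))  # 183, matching the article
```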
References
K-type giants
Spectroscopic binaries
Grus (constellation)
Gruis, Iota
218670
114421
8820
Durchmusterung objects | Iota Gruis | Astronomy | 179 |
5,987,236 | https://en.wikipedia.org/wiki/Multimedia%20search | Multimedia search enables information search using queries in multiple data types including text and other multimedia formats.
Multimedia search can be implemented through multimodal search interfaces, i.e., interfaces that allow users to submit search queries not only as textual requests, but also through other media.
We can distinguish two methodologies in multimedia search:
Metadata search: the search is made on the layers of metadata.
Query by example: The interaction consists in submitting a piece of information (e.g., a video, an image, or a piece of audio) for the purpose of finding similar multimedia items.
Metadata search
The search is performed on the metadata layers, which contain information about the content of a multimedia file. Metadata search is simpler, faster, and more effective because, instead of working with complex material such as audio, video, or images, it searches using text.
There are three processes which should be done in this method:
Summarization of media content (feature extraction). The result of feature extraction is a description.
Filtering of media descriptions (for example, elimination of Redundancy)
Categorization of media descriptions into classes.
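The metadata approach can be sketched as a plain text search over per-item descriptions; the file names and descriptions below are invented for illustration:

```python
# Minimal sketch of metadata search: media items carry textual
# descriptions (the result of feature extraction), and queries match
# on those descriptions rather than on the raw media.
from collections import defaultdict

catalog = {
    "clip01.mp4": "sunset over beach, waves, orange sky",
    "clip02.mp4": "city traffic at night, car lights",
    "img03.jpg":  "beach volleyball, sand, summer",
}

# Inverted index: word -> set of media whose metadata mention it.
index = defaultdict(set)
for name, description in catalog.items():
    for word in description.replace(",", " ").split():
        index[word].add(name)

def metadata_search(query):
    """Return media whose metadata contain every query word."""
    words = query.lower().split()
    results = [index.get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

print(sorted(metadata_search("beach")))  # ['clip01.mp4', 'img03.jpg']
```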
Query by example
In query by example, the element used to search is itself multimedia content (an image, audio, or video); in other words, the query is a media item. Audiovisual indexing is often used, and it is necessary to choose the criteria that will be used for creating the metadata. The search process can be divided into three parts:
Generate descriptors for the media which we are going to use as query and the descriptors for the media in our database.
Compare descriptors of the query and our database’s media.
List the media sorted by maximum coincidence.
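The three steps above can be sketched as follows; the descriptor values, item names, and the choice of cosine similarity are illustrative assumptions, not a prescribed method:

```python
# Sketch of query by example: each media item is reduced to a numeric
# descriptor (here a toy 4-bin colour histogram), the query descriptor
# is compared against every stored descriptor, and media are listed by
# maximum coincidence (similarity).
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

database = {                      # media item -> precomputed descriptor
    "sunset.jpg": [0.7, 0.2, 0.05, 0.05],
    "forest.jpg": [0.1, 0.6, 0.25, 0.05],
    "beach.jpg":  [0.6, 0.25, 0.1, 0.05],
}

def query_by_example(query_descriptor, db):
    """Rank database media by descriptor similarity to the query media."""
    scored = [(cosine_similarity(query_descriptor, d), name)
              for name, d in db.items()]
    return [name for score, name in sorted(scored, reverse=True)]

ranking = query_by_example([0.65, 0.2, 0.1, 0.05], database)
print(ranking[0])  # most similar item first
```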
Multimedia search engine
There are two main search families, depending on the content:
Visual search engine
Audio search engine
Visual search engine
Inside this family we can distinguish two topics: image search and video search
Image search: Although simple metadata search is usually employed, indexing methods relying on query by example are increasingly being used to make the results of user queries more accurate. QR codes are one example.
Video search: Videos can be searched by simple metadata or by complex metadata generated through indexing. The audio contained in videos is usually scanned by audio search engines.
Audio search engine
There are different methods of audio searching:
Voice search engine: Allows the user to search using speech instead of text. It uses speech recognition algorithms. An example of this technology is Google Voice Search.
Music search engine: Most applications that search music work on simple metadata (artist, track name, album…), but there are also music recognition programs, for example Shazam or SoundHound.
See also
Journal of Multimedia
List of search engines
Multimedia
Multimedia information retrieval
Search engine indexing
Streaming media
Video search engine
References
Information retrieval genres
Multimedia | Multimedia search | Technology | 581 |
1,331,633 | https://en.wikipedia.org/wiki/Controlled%20low%20strength%20material | Controlled low strength material, abbreviated CLSM, also known as flowable fill, is a type of weak, runny concrete mix used in construction for non-structural purposes such as backfill or road bases.
Description
CLSM consists of a mixture of Portland cement, water, aggregate, and sometimes fly ash. Unlike ordinary concrete, CLSM has much lower strength. The strength of CLSM is less than , while ordinary concrete has strengths exceeding . As a result, CLSM is not suitable for supporting buildings, bridges, or other structures. Instead, it is primarily used as a replacement for compacted backfill. It also flows much better than ordinary concrete, having the consistency of a milkshake. The first known use of CLSM was in 1964. CLSM is typically a ready-mix concrete, rather than a soil cement (a low-strength cement made using local soil), and is similar to a slurry.
Transportation
CLSM as a highway construction material is becoming more widespread throughout the United States. Data received from questionnaires sent by the Pennsylvania Department of Transportation (PennDOT) in 1991 and the Transportation Research Board (TRB) in 1992 indicated that approximately 30 states had some experience with the use of flowable fill, and at least 24 states have a specification for flowable fill.
Most state transportation agencies have used flowable fill mainly as a trench backfill for storm drainage and utility lines on street and highway projects. Flowable fill has also been used to backfill abutments and retaining walls, fill abandoned pipelines and utility vaults, cavities, and settled areas, and help to convert abandoned bridges into culverts. The most frequent use of flowable fill is reported in the states of Minnesota, Maryland, Michigan, Iowa, and Indiana.
Although most states have somewhat limited experience to date with flowable fill, nearly all states that have used the material have thus far indicated satisfactory performance with little or no problems. Several states have noted that metal or plastic pipes tend to float unless anchored, and some states have reported some resistance to the use of the material by contractors or engineers. Since flowable fill is normally a comparatively low-strength material, there are no strict quality requirements for fly ash used in flowable fill or controlled low strength material mixtures. Fly ash is well suited for use in flowable fill mixtures. Its fine particle sizing (nonplastic silt) and spherical particle shape enhance mix flowability. Its relatively low dry unit weight (usually in the 890 to 1300 kg/m³ (55 to 80 lb/ft³) range) assists in producing a relatively lightweight fill, and its pozzolanic or cementitious properties provide for lower cement requirements than would normally be required to achieve equivalent strengths.
Fly ash flowable fill mixes
There are two basic types of flowable fill mixes that contain fly ash: high fly ash content mixes and low fly ash content mixes.
There are no specific requirements for the types of fly ash that may be used in flowable fill mixtures. "Low lime" or Class F fly ash is well suited for use in high fly ash content mixes, but can also be used in low fly ash content mixes. "High lime" or Class C fly ash, because it is usually self-cementing, is almost always used only in low fly ash content flowable fill mixes. There is also a flowable fill product in which both Class F and Class C fly ash are used in varying mix proportions.
References
Road construction
Cement
External links
Flowable fill resources page of the National Ready Mixed Concrete Association (US) | Controlled low strength material | Engineering | 729 |
15,995,094 | https://en.wikipedia.org/wiki/Bhutanese%20art | Bhutanese art (Dzongkha: འབྲུག་པའི་སྒྱུ་རྩལ) is similar to Tibetan art. Both are based upon Vajrayana Buddhism and its pantheon of teachers and divine beings.
The major orders of Buddhism in Bhutan are the Drukpa Lineage and the Nyingma. The former is a branch of the Kagyu school and is known for paintings documenting the lineage of Buddhist masters and the 70 Je Khenpo (leaders of the Bhutanese monastic establishment). The Nyingma school is known for images of Padmasambhava ("Guru Rinpoche"), who is credited with introducing Buddhism into Bhutan in the 7th century. According to legend, Padmasambhava hid sacred treasures for future Buddhist masters, especially Pema Lingpa, to find. Tertöns are also frequent subjects of Nyingma art.
Each divine being is assigned special shapes, colors, and/or identifying objects, such as lotus, conch-shell, thunderbolt, and begging bowl. All sacred images are made to exact specifications that have remained remarkably unchanged for centuries.
Bhutanese art is particularly rich in bronzes of different kinds that are collectively known by the name Kham-so (made in Kham) even though they are made in Bhutan because the technique of making them was originally imported from that region of Tibet. Wall paintings and sculptures, in these regions, are formulated on the principal ageless ideals of Buddhist art forms. Even though their emphasis on detail is derived from Tibetan models, their origins can be discerned easily, despite the profusely embroidered garments and glittering ornaments with which these figures are lavishly covered. In the grotesque world of demons, the artists apparently had a greater freedom of action than when modeling images of divine beings.
The arts and crafts of Bhutan that represent the exclusive "spirit and identity of the Himalayan kingdom" are defined as the art of Zorig Chosum, which means the “thirteen arts and crafts of Bhutan”; the thirteen crafts include carpentry, painting, paper making, blacksmithing, weaving, and sculpting, among others. The Institute of Zorig Chosum in Thimphu is the premier institution of traditional arts and crafts, set up by the Government of Bhutan with the sole objective of preserving the rich culture and tradition of Bhutan and training students in all traditional art forms; there is a similar institution at Trashi Yangtse in eastern Bhutan. Bhutanese rural life is also displayed in the Folk Heritage Museum in Thimphu. There is also a Voluntary Artists Studio in Thimphu to encourage and promote the art forms among the youth of Thimphu. The thirteen arts and crafts of Bhutan and the institutions established in Thimphu to promote these art forms are:
Traditional Bhutanese arts
In Bhutan, the traditional arts are known as zorig chusum (zo = the ability to make; rig = science or craft; chusum = thirteen). These practices have been gradually developed through the centuries, often passed down through families with long-standing relations to a particular craft. These traditional crafts represent hundreds of years of knowledge and ability that has been passed down through generations.
The great 15th century tertön, Pema Lingpa is traditionally credited with introducing the arts into Bhutan. In 1680, Ngawang Namgyal, the Zhabdrung Rinpoche, ordered the establishment of the school for instruction in the thirteen traditional arts. Although the skills existed much earlier, it is believed that the zorig chusum was first formally categorized during the rule of Gyalse Tenzin Rabgye (1680-1694), the 4th Druk Desi (secular ruler). The thirteen traditional arts are:
Dezo - Paper Making: Handmade paper made mainly from the Daphne plant and gum from a creeper root.
Dozo - Stonework: Stone arts used in the construction of stone pools and the outer walls of dzongs, gompas, stupas and some other buildings.
Garzo - Blacksmithing: The manufacture of iron goods, such as farm tools, knives, swords, and utensils.
Jinzo - Clay arts: The making of religious statues and ritual objects, pottery and the construction of buildings using mortar, plaster, and rammed earth.
Lhazo - Painting: From the images on thangkas, walls paintings, and statues to the decorations on furniture and window-frames.
Lugzo - Bronze casting: Production of bronze roof-crests, statues, bells, and ritual instruments, in addition to jewelry and household items using sand casting and lost-wax casting. Larger statues are made by repoussé.
Parzo - Wood, slate, and stone carving: In wood, slate or stone, for making such items as printing blocks for religious texts, masks, furniture, altars, and the slate images adorning many shrines and altars.
Shagzo - Woodturning: Making a variety of bowls, plates, cups, and other containers.
Shingzo - Woodworking: Employed in the construction of dzongs and gompas.
Thagzo - Weaving: The production of some of the most intricately woven fabrics produced in Asia.
Trözo - Silver- and gold-smithing: Working in gold, silver, and copper to make jewelry, ritual objects, and utilitarian household items.
Tshazo - Cane and bamboo work: The production of such varied items as bows and arrows, baskets, drinks containers, utensils, musical instruments, fences, and mats.
Tshemazo – Needlework: Working with needle and thread to make clothes, boots, or the most intricate of appliqué thangkas.
Characteristics of Bhutanese arts
Articles for everyday use are still fashioned today as they were centuries ago. Traditional artisanship is handed down from generation to generation. Bhutan's artisans are skilled workers in metals, wood and slate carving, and clay sculpture. Artifacts made of wood include bowls and dishes, some lined with silver. Elegant yet strong woven bamboo baskets, mats, hats, and quivers find both functional and decorative usage. Handmade paper is prepared from tree bark by a process passed down the ages.
Each region has its specialties: raw silk comes from eastern Bhutan, brocade from Lhuntshi (Kurtoe), woolen goods from Bumthang, bamboo wares from Kheng, woodwork from Tashi Yangtse, gold and silver work from Thimphu, and yak-hair products from the north or the Black Mountains.
Most Bhutanese art objects are produced for use of the Bhutanese themselves. Except for goldsmiths, silversmiths, and painters, artisans are peasants who produce these articles and fabrics in their spare time, with the surplus production being sold. Most products, particularly fabrics, are relatively expensive. In the highest qualities, every step of production is performed by hand, from dyeing hanks of thread or hacking down bamboo in the forest, to weaving or braiding the final product.
The time spent in producing handicrafts is considerable and can involve as much as two years for some woven textiles. At the same time, many modern innovations are also used for less expensive items, especially modern dyes and yarns; Bhutan must be one of the few places where hand-woven polyester garments can be bought.
Products
Textiles
Bhutanese textiles are a unique art form inspired by nature, made in the form of clothing, crafts and different types of pots in an eye-catching blend of colour, texture, pattern and composition. This art form is witnessed all over Bhutan, and in Thimphu in the daily life of its people. It is also a significant cultural exchange garment that is gifted to mark occasions of birth and death, auspicious functions such as weddings and professional achievements, and in greeting dignitaries. Each region has its own special designs of textiles, either made of vegetable-dyed wool known as yathra or pure silk called Kishuthara. It is the women, belonging to a small community, who weave these textiles as a household handicrafts heritage.
Paintings
Most Bhutanese art, including painting, known as lhazo, is invariably religion-centric. It is made by artists without inscribing their names on it. The paintings encompass various types, including the traditional thangkas, which are scroll paintings of Buddhist iconography made in “highly stylised and strict geometric proportions” with mineral paints. Most houses in Bhutan have religious and other symbolic motifs painted inside and also on the external walls.
Sculptures
The art of making religious sculptures is unique in Bhutan and hence very popular in the Himalayan region. The basic material used for making the sculptures is clay, which is known as jinzob. The clay statues of Buddhist religious icons, made by well-known artists of Bhutan, embellish various monasteries in Bhutan. This art form of sculpture is taught to students by professional artists at the Institute of Zorig Chosum in Thimphu.
Paper making
Handmade paper known as deysho is in popular usage in Bhutan and it is durable and insect resistant. The basic material used is the bark of the Daphne plant. This paper is used for printing religious texts; traditional books are printed on this paper. It is also used for packaging gifts. Apart from handmade paper, paper factories in Bhutan also produce ornamental art paper with designs of flower petals, and leaves, and other materials. For use on special occasions, vegetable dyed paper is also made.
Wood carving
Wood carving, known as parzo, is a specialised and ancient art form that is significantly blended with modern buildings in resurgent Bhutan. Carved wood blocks are used for printing religious prayer flags that are seen all over Bhutan in front of monasteries, on hill ridges and other religious places. Carving is also done on slate and stone. The wood that is used for carving is seasoned for at least one year prior to carving.
Sword making
The art of sword making falls under the tradition of garzo (or blacksmithing), an art form that is used to make all metal implements such as swords, knives, chains, darts and so forth. Ceremonial swords are made and gifted to people who are honoured for their achievements. These swords are to be sported by men on all special occasions. Children, wear a traditional short knife known as the dudzom. Terton Pema Lingpa, a religious treasure hunter from central Bhutan, was the most famous sword maker in Bhutan.
Boot Making
It is not uncommon to see Bhutan’s traditional boots made of cloth. The cloth is hand stitched, embroidered and appliquéd with Bhutanese motifs. They are worn on ceremonial occasions (mandatory); the colours used on the boot denote the rank and status of the person wearing it. In the pecking order, Ministers wear orange, senior officials wear red and the common people wear white boots. This art form has been revived at the Institute of Zorig Chosum in Thimphu. Women also wear boots but of shorter length reaching just above the ankle.
Bamboo Craft
Bamboo craft, made with cane and bamboo, is known as thazo. It is practised in rural communities in many regions of Bhutan. A few special items of this art form are the belo and the bangchung, the popular Bhutanese “Tupperware” basket made in various sizes. Baskets of varying sizes are used in homes, for travel on horseback, and as flasks for the local drink called arra.
Bow and Arrow Making
To meet the growing demand for bow and arrow used in the national sport of archery, bamboo bows and arrows are made by craftsmen using specific types of bamboo and mountain reeds. The bamboo used are selected during particular seasons, shaped to size and skilfully made into the bow and arrow. Thimphu has the Changlimithang Stadium & Archery Ground where Archery is a special sport.
Jewellery
Intricate jewellery with motifs, made of silver and gold, is much sought after by the women of Bhutan. The traditional jewellery made in Bhutan includes heavy bracelets, komas or fasteners attached to the kira (the traditional dress of Bhutanese women), loop earrings set with turquoise, and necklaces inlaid with gemstones such as antique turquoise, coral beads and the zhi stone. The zhi stone is considered a prized possession as it is said to have “protective powers”; this stone has black and white spiral designs called “eyes”. The zhi is also said to be an agate made into beads.
Institutions
National Institute of Zorig Chusum
The National Institute of Zorig Chusum is the centre for Bhutanese art education. Painting is the main theme of the institute, which provides 4–6 years of training in Bhutanese traditional art forms. The curricula cover a comprehensive course of drawing, painting, wood carving, embroidery, and carving of statues. Images of the Buddha are a popular subject painted here.
Handicrafts emporiums
There is a large government run emporium close to the National Institute of Zorig Chusum, which deals with exquisite handicrafts, traditional arts and jewelry; gho and kira, the national dress of Bhutanese men and women, are available in this emporium. The town has many other privately owned emporiums which deal with thangkas, paintings, masks, brassware, antique jewellery, painted lama tables known as choektse, drums, Tibetan violins and so forth; Zangma Handicrafts Emporium, in particular, sells handicrafts made in the Institute of Zorig Chusum.
Folk Heritage Museum
The Folk Heritage Museum in Kawajangsa, Thimphu, is built on the lines of a traditional Bhutanese farmhouse and furnished with vintage furniture more than 100 years old. It is a three-storied structure with rammed mud walls, wooden doors and windows, and a roof covered with slates. It reveals much about Bhutanese rural life.
Voluntary Artists Studio
Located in an unassuming building, the Voluntary Artists Studio aims to encourage traditional and contemporary art forms among the youth of Thimphu who are keen to learn them. The artworks of these young artists are also available for sale in the 'Art Shop Gallery' of the studio.
National Textile Museum
The National Textile Museum in Thimphu displays various Bhutanese textiles that are extensive and rich in traditional culture. It also exhibits colourful and rare kiras and ghos (traditional Bhutanese dress, kira for women and gho for men).
Exhibitions
The Honolulu Museum of Art spent several years developing and curating The Dragon’s Gift: The Sacred Arts of Bhutan exhibition. The February–May 2008 exhibition in Honolulu will travel in 2008 and 2009 to locations around the world, including the Rubin Museum of Art (New York City), the Asian Art Museum (San Francisco), the Guimet Museum (Paris), the Museum of East Asian Art (Cologne, Germany), and the Museum Rietberg Zürich (Switzerland).
Selected examples of Bhutanese art
See also
Phallus paintings in Bhutan
Buddhism in Bhutan
Dzong architecture
Music of Bhutan
Vajrayana Buddhism
Eastern art history
References
Bartholomew, Terese Tse, The Art of Bhutan, Orientations, Vol. 39, No. 1, Jan./Feb. 2008, 38-44.
Bartholomew, Terese Tse, John Johnston and Stephen Little, The Dragon's Gift, the Sacred Arts of Bhutan, Chicago, Serindia Publications, 2008.
Johnston, John, "The Buddhist Art of Bhutan", Arts of Asia, Vol. 38, No. 6, Nov./Dec. 2008, 58-68.
Mehra, Girish N., Bhutan, Land of the Peaceful Dragon, Delhi, Vikas Publishing House, 1974.
Singh, Madanjeet, Himalayan Art, wall-painting and sculpture in Ladakh, Lahaul and Spiti, the Siwalik Ranges, Nepal, Sikkim, and Bhutan, New York, Macmillan, 1971.
External links
Art and the youth of Bhutan
Manuel Valencia Contemporary artist with clear Buthanese inspiration
Textile arts
Religious objects
Art by country
Buddhism in Bhutan | Bhutanese art | Physics | 3,325 |
65,691,099 | https://en.wikipedia.org/wiki/Turin%20Polytechnic%20University%20in%20Tashkent | Turin Polytechnic University in Tashkent (Uzbek: Toshkent shahridagi Turin politexnika universiteti (TTPU)) is a non-profit public higher education institution in Uzbekistan. Turin Polytechnic University in Tashkent was established in 2009 in a partnership with Politecnico di Torino, Italy. TTPU's main objective is to prepare specialists for the automotive, mechanical engineering, electrical industries and companies in the field of civil engineering and construction, and the power industry, in accordance with the educational programs adopted in collaboration with Politecnico di Torino, Italy.
TTPU has five departments: Department of Natural-Mathematical Sciences, Department of Humanitarian-Economy Sciences, Department of Control and Computer Engineering, Department of Civil Engineering and Architecture, and Department of Mechanical and Aerospace Engineering. TTPU is a teaching and research university.
History
The official foundation date of the university is April 27, 2009, when the decree of the President of the Republic of Uzbekistan No. PP-1106 “On the organization of Turin Polytechnic University in Tashkent” was issued and from that date the university began its activity as a higher educational institution in accordance with the Educational Standards of the Republic of Uzbekistan.
In summer of 2009, the first 200 students were admitted for bachelor's degree program and the new university building with academic and administrative buildings and a modern campus was commissioned.
TTPU was established from the collaboration among Polytechnic University of Turin, UZAVTOSANOAT (the leading car manufacturer in Uzbekistan), and the Uzbek Ministry of Higher Education.
The Cooperation Agreement and Double Degree Agreement were signed in 2009 with the Politecnico di Torino (Italy), which developed three higher-education curricula in engineering (BS and MS) in Uzbekistan in accordance with the Italian higher-education system, recognized both by the Ministry of Higher and Secondary Education of the Republic of Uzbekistan and under Italian legislation.
Expansion and growth
In May 2010, an academic lyceum was established under the university to prepare students in the hard sciences, and its building, with a capacity of 450 students, was constructed and commissioned by September 2011. In the same year, a Metrology Center (in cooperation with the Italian company Hexagon Metrology S.p.A.), a Mechatronics Center (with the support of General Motors Powertrain JSC and the German company Festo) and a CAD/CAM/CAE Center were established at the university.
In 2014, the MAN training center was organized in cooperation with MAN Truck & Bus and JV MAN Auto-UZBEKISTAN LLC JV.
In 2015, admission for the master's degree program in the specialty direction of “Mechatronics” was organized in the university. In 2016, the university became one of the first higher education institutions in the field of technology to receive a certificate of ISO 9001: 2008 International Quality Standard for services in the field of education.
In 2019, the undergraduate program for obtaining a double degree diploma “2+2” was organized in cooperation with the Andijan Machine building Institute.
In November 2020, an undergraduate program with a double degree diploma “2+2” was developed by Turin Polytechnic University in Tashkent in cooperation with Pittsburg State University, Kansas, United States of America.
Campus
The campus is located in Tashkent, Uzbekistan, with modern educational and administrative buildings, conference halls, a library, a sports complex, research centers, a residence hall, dormitories for professors and a large soccer stadium. The campus is under 24/7 security watch and has an Information Resource Center and a cafeteria.
Moreover, the university territory includes academic and administrative buildings, a specialized laboratory, a technopark and a metrology center. The Academic Lyceum, Mechatronics Center, MAN Academy, CAD/CAM/CAE Center and CLAAS Center also operate under the authority of the university.
Education
The period of study is 4 years for a bachelor's degree and 2 years for a master's degree. Students are taught in English with the involvement of professors and teachers of the Polytechnic University of Turin (Politecnico di Torino, Italy).
Turin Polytechnic University in Tashkent offers the following courses:
Bachelor's degree program in Mechanical and Aerospace Engineering
Bachelor's degree program in Information Technology and Automation Systems in Industry (ICT)
Bachelor's degree program in Industrial and Civil Engineering and Architecture
Master's degree program in Mechatronic Engineering
PhD
Preparatory Programs
Short-term internships
The core engineering courses are mainly taught by Italian professors and local professors who were educated in Italy, Japan, South Korea and the United States.
TTPU's Bachelor's and master's degree programs are based on POLITO academic program and are offered at Turin Polytechnic University in Tashkent with a “mixed” approach. That is, some courses are delivered by POLITO faculty members; others, by TTPU faculty members previously trained by teachers from Polytechnic University of Turin.
In accordance with the signed Agreement between the Universities for the awarding of diplomas, graduates receive an Italian diploma of Turin Polytechnic University (Politecnico di Torino).
Activities
TTPU organizes many activities and closely cooperates with major local and foreign companies across Uzbekistan. Moreover, it runs several international projects on education and development.
Scientific activity
TTPU runs many fundamental, innovative and practical projects and conducts educational, methodological and research work under foreign grants. Students actively participate in international science competitions. The number and quality of scientific articles have increased; great attention is paid to the publication of scientific collections and monographs, as well as the patenting and implementation of scientific developments. In particular, the publication of articles by doctoral students working on dissertations, including in prestigious foreign journals indexed in Web of Science and Scopus, is gaining momentum.
Turin Polytechnic University in Tashkent was awarded the Scopus Award-2018 in the nomination “The best scientists of the year” (Dilshod Tulaganov) and the Scopus Award-2019 in the nomination “The impact of the year.”
Sport activities
TTPU's sports teams regularly participate in soccer, basketball, volleyball, table tennis, wrestling, chess, athletics and swimming competitions, including against teams from other universities. Some competitions take place in the university's sports complex and stadium.
Juventus Academy in Tashkent
A football academy "Juventus Academy in Tashkent," which is the official branch of Juventus football Academy of Italy, was established at TTPU's campus in 2019.
Partners
TTPU closely cooperates with European, American and Asian higher education institutions and companies and with more than 40 universities from more than 19 countries. Moreover, the university has developed many international projects funded by the European Union's Erasmus + capacity building program.
See also
TEAM University Tashkent
Tashkent State Technical University
Tashkent Institute of Irrigation and Melioration
Tashkent Financial Institute
Moscow State University in Tashkent named M.V Lomonosov
Tashkent Automobile and Road Construction Institute
Tashkent State University of Economics
Tashkent State Agrarian University
Tashkent State University of Law
Tashkent University of Information Technologies
University of World Economy and Diplomacy
Universities in the United Kingdom
Education in England
Education in Uzbekistan
Tashkent
References
Universities in Uzbekistan
Tashkent
Science and technology in Uzbekistan
Education in Tashkent
Buildings and structures in Tashkent
Tashkent
Educational institutions established in 2009
Uzbekistan | Turin Polytechnic University in Tashkent | Engineering | 1,524 |
37,944,047 | https://en.wikipedia.org/wiki/Kids%20Ocean%20Day%20HK | Kids Ocean Day HK was organised by Ocean Recovery Alliance to celebrate Kids Ocean Day in Hong Kong.
Foundation
The Malibu Foundation, a California-based non-profit organisation, started Kids Ocean Day to connect children to the ocean and beaches, and to foster understanding of the environmental issues they face. The first Kids Ocean Day Hong Kong was celebrated on 9 November 2012. Over 800 students, teachers and volunteers met at Repulse Bay and helped create a piece of aerial artwork featuring a Chinese white dolphin, organised by aerial artist John Quigley of Spectral Q. The design was based on 9-year-old Leung Man-Hin's artwork, which won the drawing competition for the event.
Goal
To raise awareness, understanding and appreciation among Hong Kong youth about the state of the ocean and the health of its ecosystem.
Events
Picture Drawing Competition
Hong Kong Kids Ocean Film Festival
Ocean Education Program for Schools
Beach Education Class
Human Aerial Art Project
References
External links
Ocean Recovery Alliance
Kids Ocean Week on Facebook
Short Video Kids Ocean Day HK
Extended Video Kids Ocean Day HK
Malibu Foundation
Spectral Q
National CleanUp Day
Ocean pollution
Pollution | Kids Ocean Day HK | Chemistry,Environmental_science | 217 |
196,983 | https://en.wikipedia.org/wiki/Swallowing | Swallowing, also called deglutition or inglutition in scientific contexts, is the process in the body of a human that allows for a substance to pass from the mouth, to the pharynx, and into the esophagus, while shutting the epiglottis. Swallowing is an important part of eating and drinking. If the process fails and the material (such as food, drink, or medicine) goes through the trachea, then choking or pulmonary aspiration can occur. In the human body the automatic temporary closing of the epiglottis is controlled by the swallowing reflex.
The portion of food, drink, or other material that will move through the neck in one swallow is called a bolus.
In colloquial English, the term "swallowing" is also used to describe the action of taking in a large mouthful of food without any biting.
In humans
Swallowing comes so easily to most people that the process rarely prompts much thought. However, from the viewpoints of physiology, of speech–language pathology, and of health care for people with difficulty in swallowing (dysphagia), it is an interesting topic with extensive scientific literature.
Coordination and control
Eating and swallowing are complex neuromuscular activities consisting essentially of three phases, an oral, pharyngeal and esophageal phase. Each phase is controlled by a different neurological mechanism. The oral phase, which is entirely voluntary, is mainly controlled by the medial temporal lobes and limbic system of the cerebral cortex with contributions from the motor cortex and other cortical areas. The pharyngeal swallow is started by the oral phase and subsequently is coordinated by the swallowing center on the medulla oblongata and pons. The reflex is initiated by touch receptors in the pharynx as a bolus of food is pushed to the back of the mouth by the tongue, or by stimulation of the palate (palatal reflex).
Swallowing is a complex mechanism using both skeletal muscle (tongue) and smooth muscles of the pharynx and esophagus. The autonomic nervous system (ANS) coordinates this process in the pharyngeal and esophageal phases.
Phases
Oral phase
Prior to the following stages of the oral phase, the mandible depresses and the lips abduct to allow food or liquid to enter the oral cavity. Upon entering the oral cavity, the mandible elevates and the lips adduct to assist in oral containment of the food and liquid. The following stages describe the normal and necessary actions to form the bolus, which is defined as the state of the food in which it is ready to be swallowed.
1) Moistening
Food is moistened by saliva from the salivary glands (parasympathetic).
2) Mastication
Food is mechanically broken down by the action of the teeth controlled by the muscles of mastication (V3) acting on the temporomandibular joint. This results in a bolus which is moved from one side of the oral cavity to the other by the tongue. Buccinator (VII) helps to contain the food against the occlusal surfaces of the teeth. The bolus is ready for swallowing when it is held together by saliva (largely mucus), sensed by the lingual nerve of the tongue (VII—chorda tympani and IX—lesser petrosal) (V3). Any food that is too dry to form a bolus will not be swallowed.
3) Trough formation
A trough is then formed at the back of the tongue by the intrinsic muscles (XII). The trough obliterates against the hard palate from front to back, forcing the bolus to the back of the tongue.
The intrinsic muscles of the tongue (XII) contract to make a trough (a longitudinal concave fold) at the back of the tongue. The tongue is then elevated to the roof of the mouth (by the mylohyoid (mylohyoid nerve—V3), genioglossus, styloglossus and hyoglossus (the rest XII)) such that the tongue slopes downwards posteriorly. The contraction of the genioglossus and styloglossus (both XII) also contributes to the formation of the central trough.
4) Movement of the bolus posteriorly
At the end of the oral preparatory phase, the food bolus has been formed and is ready to be propelled posteriorly into the pharynx. In order for anterior to posterior transit of the bolus to occur, orbicularis oris contracts and adducts the lips to form a tight seal of the oral cavity. Next, the superior longitudinal muscle elevates the apex of the tongue to make contact with the hard palate and the bolus is propelled to the posterior portion of the oral cavity. Once the bolus reaches the palatoglossal arch of the oropharynx, the pharyngeal phase, which is reflex and involuntary, then begins. Receptors initiating this reflex are proprioceptive (afferent limb of reflex is IX and efferent limb is the pharyngeal plexus- IX and X). They are scattered over the base of the tongue, the palatoglossal and palatopharyngeal arches, the tonsillar fossa, uvula and posterior pharyngeal wall. Stimuli from the receptors of this phase then provoke the pharyngeal phase. In fact, it has been shown that the swallowing reflex can be initiated entirely by peripheral stimulation of the internal branch of the superior laryngeal nerve. This phase is voluntary and involves important cranial nerves: V (trigeminal), VII (facial) and XII (hypoglossal).
Pharyngeal phase
For the pharyngeal phase to work properly all other egress from the pharynx must be occluded—this includes the nasopharynx and the larynx. When the pharyngeal phase begins, other activities such as chewing, breathing, coughing and vomiting are concomitantly inhibited.
5) Closure of the nasopharynx
The soft palate is tensed by tensor palatini (Vc), and then elevated by levator palatini (pharyngeal plexus—IX, X) to close the nasopharynx. There is also the simultaneous approximation of the walls of the pharynx to the posterior free border of the soft palate, which is carried out by the palatopharyngeus (pharyngeal plexus—IX, X) and the upper part of the superior constrictor (pharyngeal plexus—IX, X).
6) The pharynx prepares to receive the bolus
The pharynx is pulled upwards and forwards by the suprahyoid and longitudinal pharyngeal muscles – stylopharyngeus (IX), salpingopharyngeus (pharyngeal plexus—IX, X) and palatopharyngeus (pharyngeal plexus—IX, X) to receive the bolus. The palatopharyngeal folds on each side of the pharynx are brought close together through the superior constrictor muscles, so that only a small bolus can pass.
7) Opening of the auditory tube
The actions of the levator palatini (pharyngeal plexus—IX, X), tensor palatini (Vc) and salpingopharyngeus (pharyngeal plexus—IX, X) in the closure of the nasopharynx and elevation of the pharynx opens the auditory tube, which equalises the pressure between the nasopharynx and the middle ear. This does not contribute to swallowing, but happens as a consequence of it.
8) Closure of the oropharynx
The oropharynx is kept closed by palatoglossus (pharyngeal plexus—IX, X), the intrinsic muscles of tongue (XII) and styloglossus (XII).
9) Laryngeal closure
The primary laryngopharyngeal protective mechanism to prevent aspiration during swallowing is the closure of the true vocal folds. The adduction of the vocal cords is effected by the contraction of the lateral cricoarytenoids and the oblique and transverse arytenoids (all recurrent laryngeal nerve of vagus). Since the true vocal folds adduct during the swallow, a finite period of apnea (swallowing apnea) must necessarily take place with each swallow. When relating swallowing to respiration, it has been demonstrated that swallowing occurs most often during expiration; even at full expiration, a fine air jet is expired, probably to clear the upper larynx of food remnants or liquid. The clinical significance of this finding is that patients with a baseline of compromised lung function will, over a period of time, develop respiratory distress as a meal progresses.
Subsequently, false vocal fold adduction, adduction of the aryepiglottic folds and retroversion of the epiglottis take place. The aryepiglotticus (recurrent laryngeal nerve of vagus) contracts, causing the arytenoids to appose each other (closes the laryngeal aditus by bringing the aryepiglottic folds together), and draws the epiglottis down to bring its lower half into contact with arytenoids, thus closing the aditus. Retroversion of the epiglottis, while not the primary mechanism of protecting the airway from laryngeal penetration and aspiration, acts to anatomically direct the food bolus laterally towards the piriform fossa.
Additionally, the larynx is pulled up with the pharynx under the tongue by stylopharyngeus (IX), salpingopharyngeus (pharyngeal plexus—IX, X), palatopharyngeus (pharyngeal plexus—IX, X) and inferior constrictor (pharyngeal plexus—IX, X). This phase is passively controlled reflexively and involves cranial nerves V, X (vagus), XI (accessory) and XII (hypoglossal). The respiratory center of the medulla is directly inhibited by the swallowing center for the very brief time that it takes to swallow. This means that it is briefly impossible to breathe during this phase of swallowing and the moment where breathing is prevented is known as deglutition apnea.
10) Hyoid elevation
The hyoid is elevated by digastric (V & VII) and stylohyoid (VII), lifting the pharynx and larynx up even further.
11) Bolus transits pharynx
The bolus moves down towards the esophagus by pharyngeal peristalsis which takes place by sequential contraction of the superior, middle and inferior pharyngeal constrictor muscles (pharyngeal plexus—IX, X). The lower part of the inferior constrictor (cricopharyngeus) is normally closed and only opens for the advancing bolus. Gravity plays only a small part in the upright position—in fact, it is possible to swallow solid food even when standing on one's head. The velocity through the pharynx depends on a number of factors such as viscosity and volume of the bolus. In one study, bolus velocity in healthy adults was measured to be approximately 30–40 cm/s.
Esophageal phase
12) Esophageal peristalsis
Like the pharyngeal phase of swallowing, the esophageal phase of swallowing is under involuntary neuromuscular control. However, propagation of the food bolus is significantly slower than in the pharynx. The bolus enters the esophagus and is propelled downwards first by striated muscle (recurrent laryngeal, X) then by the smooth muscle (X) at a rate of 3–5 cm/s. The upper esophageal sphincter relaxes to let food pass, after which various striated constrictor muscles of the pharynx as well as peristalsis and relaxation of the lower esophageal sphincter sequentially push the bolus of food through the esophagus into the stomach.
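As a rough illustration of the velocities quoted above, transit times can be estimated from typical adult segment lengths. The lengths below (about 13 cm for the pharynx and 25 cm for the esophagus) are assumed typical values, not figures from this article; this is a back-of-the-envelope sketch only:

```python
# Rough bolus transit-time estimate from the velocities quoted above.
# The segment lengths are assumed typical adult values (not from this text).
pharynx_len_cm, esophagus_len_cm = 13.0, 25.0   # assumed typical lengths
pharynx_v, esophagus_v = 35.0, 4.0              # mid-range velocities, cm/s

t_pharynx = pharynx_len_cm / pharynx_v          # time through the pharynx
t_esophagus = esophagus_len_cm / esophagus_v    # time through the esophagus

print(round(t_pharynx, 2), round(t_esophagus, 2))  # ≈ 0.37 s and 6.25 s
```

The two-orders-of-magnitude difference in speed is why pharyngeal transit is nearly instantaneous while esophageal transit takes several seconds.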
13) Relaxation phase
Finally, the larynx and pharynx move down from the hyoid to their relaxed positions, mostly by elastic recoil.
Swallowing therefore depends on coordinated interplay between many various muscles, and although the initial part of swallowing is under voluntary control, once the deglutition process is started, it is quite hard to stop it.
Clinical significance
Swallowing becomes a great concern for the elderly since strokes and Alzheimer's disease can interfere with the autonomic nervous system. Speech pathologists commonly diagnose and treat this condition since the speech process uses the same neuromuscular structures as swallowing. Diagnostic procedures commonly performed by a speech pathologist to evaluate dysphagia include Fiberoptic Endoscopic Evaluation of Swallowing and Modified Barium Swallow Study. Occupational Therapists may also offer swallowing rehabilitation services as well as prescribing modified feeding techniques and utensils. Consultation with a dietician is essential, in order to ensure that the individual with dysphagia is able to consume sufficient calories and nutrients to maintain health. In terminally ill patients, a failure of the reflex to swallow leads to a build-up of mucus or saliva in the throat and airways, producing a noise known as a death rattle (not to be confused with agonal respiration, which is an abnormal pattern of breathing due to cerebral ischemia or hypoxia).
Abnormalities of the pharynx and/or oral cavity may lead to oropharyngeal dysphagia. Abnormalities of the esophagus may lead to esophageal dysphagia.
The failure of the lower esophagus sphincter to respond properly to swallowing is called achalasia.
M-Type Swallowing
With practice, people can learn to swallow fluidly without closing the mouth by merely manipulating the tongue and jaw to drive fluids or foods down the esophagus. With a continuous motion, an individual forgoes breathing and prioritizes the swallowed matter. This intermediate level of muscle manipulation is similar to the techniques used by sword swallowers.
In non-mammal animals
In many birds, the esophagus is largely a mere gravity chute, and in such events as a seagull swallowing a fish or a stork swallowing a frog, swallowing consists largely of the bird lifting its head with its beak pointing up and guiding the prey with tongue and jaws so that the prey slides inside and down.
In fish, the tongue is largely bony and much less mobile and getting the food to the back of the pharynx is helped by pumping water in its mouth and out of its gills.
In snakes, the work of swallowing is done by raking with the lower jaw until the prey is far enough back to be helped down by body undulations.
See also
Dysphagia
Occlusion
Speech and language pathology
References
External links
Overview at nature.com
Anatomy and physiology of swallowing at dysphagia.com
Swallowing animation (flash) at hopkins-gi.org
[Article on French Wikipedia] See : "déglutition atypique" = unfunctional or pathological swallowing.
Normal Swallowing and Dysphagia: Pediatric Population
Reflexes
Physiology
Articles containing video clips | Swallowing | Biology | 3,295 |
63,644,269 | https://en.wikipedia.org/wiki/Samarium%28II%29%20fluoride | Samarium(II) fluoride is one of fluorides of samarium with a chemical formula SmF2. The compound crystalizes in the fluorite structure, and is significantly nonstoichiometric. Along with europium(II) fluoride and ytterbium(II) fluoride, it is one of three known rare earth difluorides, the rest are unstable.
Preparation
Samarium(II) fluoride can be prepared by using samarium metal or hydrogen gas to reduce samarium(III) fluoride:

2 SmF3 + Sm → 3 SmF2
2 SmF3 + H2 → 2 SmF2 + 2 HF
Properties
Samarium(II) fluoride is a purple to black solid. It crystallizes in the cubic calcium fluoride (fluorite) structure type (space group Fm3m, No. 225, with a = 587.7 pm).
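As a consistency check on the crystallographic data, the X-ray density implied by the fluorite cell can be computed from the lattice parameter, assuming the ideal stoichiometric composition and the standard 4 formula units per cubic fluorite cell (the molar masses are standard tabulated values; this is a sketch, not a measured density):

```python
# X-ray density of SmF2 from the cubic fluorite cell (Z = 4 formula units).
N_A = 6.02214076e23            # Avogadro constant, 1/mol
M_SM, M_F = 150.36, 18.998     # molar masses of Sm and F, g/mol
M = M_SM + 2 * M_F             # molar mass of SmF2, g/mol

a_cm = 587.7e-12 * 100.0       # lattice parameter: 587.7 pm -> cm
Z = 4                          # formula units per fluorite unit cell

density = Z * M / (N_A * a_cm**3)   # g/cm^3
print(round(density, 2))            # ≈ 6.16 g/cm^3
```

A value around 6 g/cm^3 is in the expected range for a dense lanthanide fluoride.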
References
Samarium(II) compounds
Fluorides
Lanthanide halides
Fluorite crystal structure | Samarium(II) fluoride | Chemistry | 188 |
23,149,538 | https://en.wikipedia.org/wiki/Ministry%20of%20Energy%20and%20Mines%20%28Peru%29 | The Ministry of Energy and Mines (Spanish: Ministerio de Energía y Minas, MINEM), is the government ministry responsible for the energetic and mining sectors of Peru. Additionally, it is charged with overseeing the equal distribution of energy throughout the country. , the minister of energy is .
Objectives
1. To promote the proportionate, efficient, and competitive use and development of energy resources in the context of decentralization and regional development, prioritizing private investment and meeting demand, as well as the use of alternative energy in the process of rural electrification.
2. To promote the development of the mining subsector, to stimulate private investment and legal stability, and to encourage fair exploitation and the adoption of clean-energy technologies in small-scale mining, in the context of the process of regional decentralization.
3. To promote the protection of the environment with respect to energy and mining corporations, and to encourage good relations among private entities, consumers, and civil society.
4. To bring about and develop planning for the sector and its institutions, as well as the efficient and effective administration of resources.
See also
Council of Ministers of Peru
Government of Peru
External links
Official Website of the Ministry of Energy and Mines of Peru
Peru | Ministry of Energy and Mines (Peru) | Engineering | 247 |
23,882,970 | https://en.wikipedia.org/wiki/Adenosinergic | Adenosinergic means "working on adenosine".
An adenosinergic agent (or drug) is a chemical which functions to directly modulate the adenosine system in the body or brain. Examples include adenosine receptor agonists, adenosine receptor antagonists (such as caffeine), and adenosine reuptake inhibitors.
See also
Adrenergic
Cannabinoidergic
Cholinergic
Dopaminergic
GABAergic
Glycinergic
Histaminergic
Melatonergic
Monoaminergic
Opioidergic
Serotonergic
References
Neurochemistry
Neurotransmitters | Adenosinergic | Chemistry,Biology | 137 |
937,664 | https://en.wikipedia.org/wiki/Chebyshev%27s%20sum%20inequality | In mathematics, Chebyshev's sum inequality, named after Pafnuty Chebyshev, states that if
and
then
Similarly, if
and
then
Proof
Consider the sum

\[ S = \sum_{j=1}^n \sum_{k=1}^n (a_j - a_k)(b_j - b_k). \]

The two sequences are non-increasing, therefore \( a_j - a_k \) and \( b_j - b_k \) have the same sign for any \( j, k \). Hence \( S \geq 0 \).

Opening the brackets, we deduce:

\[ 0 \leq 2n \sum_{j=1}^n a_j b_j - 2 \left( \sum_{j=1}^n a_j \right) \left( \sum_{k=1}^n b_k \right), \]

hence

\[ \frac{1}{n} \sum_{j=1}^n a_j b_j \geq \left( \frac{1}{n} \sum_{j=1}^n a_j \right) \left( \frac{1}{n} \sum_{k=1}^n b_k \right). \]

An alternative proof is simply obtained with the rearrangement inequality, writing that

\[ \sum_{i=1}^n a_i \sum_{j=1}^n b_j = \sum_{k=0}^{n-1} \sum_{i=1}^n a_i b_{i+k} \leq \sum_{k=0}^{n-1} \sum_{i=1}^n a_i b_i = n \sum_{i=1}^n a_i b_i, \]

where the index \( i + k \) is taken modulo \( n \), and each inner sum is at most \( \sum_i a_i b_i \) by the rearrangement inequality, since the sequences are similarly ordered.
Continuous version
There is also a continuous version of Chebyshev's sum inequality:
If f and g are real-valued, integrable functions over [a, b], both non-increasing or both non-decreasing, then

\[ \frac{1}{b-a} \int_a^b f(x) g(x) \, dx \geq \left( \frac{1}{b-a} \int_a^b f(x) \, dx \right) \left( \frac{1}{b-a} \int_a^b g(x) \, dx \right), \]

with the inequality reversed if one is non-increasing and the other is non-decreasing.
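The discrete inequality is easy to sanity-check numerically. The Python sketch below (the helper name `chebyshev_gap` is ours, introduced only for this illustration) verifies that the gap between the mean of the products and the product of the means is non-negative for similarly ordered sequences and non-positive for oppositely ordered ones:

```python
import random

def chebyshev_gap(a, b):
    """Return mean(a_k * b_k) - mean(a) * mean(b) for equal-length sequences."""
    n = len(a)
    return sum(x * y for x, y in zip(a, b)) / n - (sum(a) / n) * (sum(b) / n)

# Both sequences sorted non-increasing: the gap must be >= 0.
a = sorted((random.random() for _ in range(100)), reverse=True)
b = sorted((random.random() for _ in range(100)), reverse=True)
assert chebyshev_gap(a, b) >= 0

# One non-increasing, one non-decreasing: the inequality reverses.
assert chebyshev_gap(a, sorted(b)) <= 0
```

The same check mirrors the continuous version if the sums are replaced by Riemann sums of monotone functions on a grid.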
See also
Hardy–Littlewood inequality
Rearrangement inequality
Notes
Inequalities
Sequences and series | Chebyshev's sum inequality | Mathematics | 169 |
24,156,287 | https://en.wikipedia.org/wiki/C19H17ClN2O4 | {{DISPLAYTITLE:C19H17ClN2O4}}
The molecular formula C19H17ClN2O4 may refer to:
Glafenine, a nonsteroidal anti-inflammatory drug, with risk of anaphylaxis and acute kidney failure
Oxametacin, a non-steroidal anti-inflammatory drug | C19H17ClN2O4 | Chemistry | 77 |
633,692 | https://en.wikipedia.org/wiki/Silver%20Streak%20%28film%29 | Silver Streak is a 1976 American thriller comedy film about a murder on a Los Angeles-to-Chicago train journey. It was directed by Arthur Hiller, written by Colin Higgins, and stars Gene Wilder, Jill Clayburgh, and Richard Pryor, with Patrick McGoohan, Ned Beatty, Clifton James, Ray Walston, Scatman Crothers, and Richard Kiel in supporting roles. The film score is by Henry Mancini. This film marked the first pairing of Wilder and Pryor, who were later paired in three other films.
The film is primarily set on a train called Silver Streak. A passenger accidentally finds out about the murder of an art historian and about efforts to discredit the victim's book. A shady art dealer is profiting from forged works of Rembrandt and is willing to kill in order to maintain secrecy about his crimes.
The film was released on December 8, 1976 by 20th Century Fox, and it received positive reviews from critics as well as earning $51.1 million against a budget between $5.5 million and $6.5 million.
Plot
Aboard the Silver Streak train to Chicago, book editor George Caldwell meets salesman Bob Sweet and Hilly Burns, secretary to Rembrandt historian Professor Schreiner. Hilly and George share an instant attraction and she invites him to her cabin. There, he sees Schreiner's body fall from the train's roof outside her window. Hilly believes George is mistaken, so he goes to investigate Schreiner's cabin, where he encounters Whiney and Reace, who are searching Schreiner's belongings. After Whiney implies that Hilly is in trouble, the burly Reace throws George off the train. Concerned about Hilly, George follows the train tracks until he meets a farmer, who flies George in her biplane to a station ahead of the Silver Streak where he can reboard.
Once aboard the train again, George sees Hilly with art dealer Roger Devereau and assumes they are romantically involved. He confronts Devereau, who explains that Whiney and Reace are in his employ and their confrontation was a misunderstanding. Devereau also introduces George to a seemingly alive Schreiner (in actuality his other employee, Johnson, in disguise).
Convinced he was wrong, and upset at Hilly's presumed relationship with Devereau, George gets drunk and explains the situation to Sweet, who reveals himself to be an undercover FBI agent named Stevens. He explains that the FBI has been investigating Devereau, a ruthless criminal known publicly as a professional art appraiser. Stevens believes Devereau wants Schreiner's Rembrandt letters, which could expose Devereau for authenticating forged paintings as genuine Rembrandts. George realizes the letters are hidden inside Schreiner's book, and he shows them to Stevens.
Reace interrupts and attempts to assassinate George but inadvertently kills Stevens. Reace pursues George to the train's roof, where George kills him with a harpoon gun. George falls from the roof of the moving train and again finds himself on foot.
George seeks help from a local sheriff in Dodge City, Kansas, who finds George's story unbelievable. The sheriff then gets a phone call about Stevens' murder and believes that George is the suspect. George escapes the inept sheriff and steals a patrol car, unaware that arrested car thief Grover T. Muldoon is in the back. George and Grover work together to catch up to the train at Kansas City so George can save Hilly.
With police searching for George, Grover disguises George as a black man using shoe polish so they can reboard the train. Devereau captures George, recovers the Rembrandt letters and later burns them. Devereau tells George and Hilly he plans to frame them for Schreiner's murder and then make their deaths look like murder–suicide.
Grover poses as a steward and rescues George and Hilly but, after a shootout with Devereau's men, Grover and George are forced to jump from the train to escape. They are promptly arrested and taken to a train station where they meet Chief Donaldson, who turns out to be Stevens' former partner. George tries to explain that he didn't kill Stevens; Donaldson tells George that he and the police knew all along that Devereau and his men, rather than George, were the ones who killed Stevens. The story in the news about Stevens' murder by Devereau was actually planted by Donaldson and the police. Donaldson also sent the police to the sheriff's office in Dodge City to arrest George so that they could protect him from Devereau.
As George and Grover amicably part ways, Donaldson has the train stopped and surrounded by police before evacuating the passengers. A firefight erupts, Whiney is wounded, and George, alongside a returning Grover, boards the train to kill Johnson and rescue Hilly. Devereau seizes the train controls, setting it to run at full speed without a driver, and throws Whiney from the train. Donaldson provides supporting fire from a helicopter; George distracts Devereau, allowing Donaldson to mortally wound him before Devereau is beheaded by an oncoming boxcar train.
Unable to stop the driverless Silver Streak, George and a porter uncouple the train cars from the engine to trigger their brakes, saving the remaining passengers. But the runaway engine crashes into Chicago's Central Station, destroying everything in its path. George, Hilly and Grover survey the damaged engine as Grover drives away in a stolen car. George and Hilly bid him goodbye and leave to begin their new relationship.
Cast
Gene Wilder as George Caldwell
Jill Clayburgh as Hildegarde "Hilly" Burns
Richard Pryor as Grover T. Muldoon
Patrick McGoohan as Roger Devereau
Ned Beatty as FBI Agent Bob Stevens / Bob Sweet
Clifton James as Sheriff Oliver Chauncey
Gordon Hurst as Deputy "Moose"
Ray Walston as Edgar Whiney
Scatman Crothers as Porter Ralston
Len Birman as FBI Agent Donaldson
Lucille Benson as Rita Babtree
Stefan Gierasch as Professor Arthur Schreiner / Johnson
Valerie Curtin as Plain Jane
Richard Kiel as Reace
Fred Willard as Jerry Jarvis
Ed McNamara as Benny
Henry Beckman as Conventioneer
Harvey Atkin as Conventioneer
Robert Culp as FBI Agent (uncredited)
J.A. Preston as The Waiter (uncredited)
Production
The film was based on an original screenplay by Colin Higgins, who at the time was best known for writing Harold and Maude. He wrote Silver Streak "because I had always wanted to get on a train and meet some blonde. It never happened, so I wrote a script."
Higgins wrote Silver Streak for the producers of The Devil's Daughter, a TV film he had written. Both they and Higgins wanted to get into television. The script was sent out to auction. It was set on an Amtrak train and Paramount was interested, but wanted Amtrak to give its approval. Alan Ladd Jr. and Frank Yablans at 20th Century Fox didn't want to wait and bought the script for a then-record $400,000. Ladd said "It was like the old Laurel and Hardy comedies. The hero is Laurel, he falls off the train, stumbles about, makes a fool of himself, but still gets the pretty girl. Audiences have identified with that since Buster Keaton."
Colin Higgins wanted George Segal for the hero – the character's name is George – but Fox preferred Gene Wilder. Ladd reasoned that Wilder was "younger, more identifiable for the younger audience. And he's so average, so ordinary, and he gets caught up in all these crazy adventures." (Wilder was actually older than Segal.)
Colin Higgins claimed the producers did not want Richard Pryor cast because Pryor had recently walked off The Bingo Long Traveling All-Stars & Motor Kings; he said the producer at one stage considered casting another black actor as a backup. However, Pryor was very professional during the shoot.
Release
The film had over 400 previews around the United States starting November 28, 1976 in New York City. It had its premiere at Tower East Theater in New York on Tuesday, December 7, 1976 and opened in New York City the following day. It opened in Los Angeles on Friday, December 10 before opening nationwide in an additional 350 theaters on December 22.
Reception
The film grossed over $51 million at the box office and was praised by critics, including Roger Ebert. It maintains a 76% approval rating at Rotten Tomatoes from 25 reviews. Ruth Batchelor of the Los Angeles Free Press described it as a "fabulous, funny, suspenseful, wonderful, marvelous, sexy, fantastic trip on a train, with the most lovable group of characters ever assembled." Gene Siskel of the Chicago Tribune, however, called the film "a needlessly convoluted mystery yarn, which calls everyone's identity into question except Wilder's." Siskel, who gave the film just two stars, added that "the story isn't easy to follow" and that "I'm still not sure whether Clayburgh's character, secretary to Devereaux, was in on the hustle from the beginning." (Hilly Burns was actually Professor Schreiner's secretary, not Devereaux's.)
Awards and honors
Academy Award nomination: Best Sound (Donald O. Mitchell, Douglas O. Williams, Richard Tyler, and Harold M. Etherington)
Nomination: Golden Globe Award for Best Actor – Motion Picture Musical or Comedy — Gene Wilder
Writers Guild of America nomination: Best Comedy Written Directly for the Screen – Colin Higgins
The film was chosen for the Royal Film Performance in 1977.
In 2000, American Film Institute included the film in AFI's 100 Years...100 Laughs – #95.
Score and soundtrack
Though the film dates to 1976, Henry Mancini's score went unreleased on a soundtrack album until 2002, when Intrada Records issued a compilation that became one of the year's best-selling special releases.
References
External links
Silver Streak on Soundtrack.net
Making of Silver Streak (1976) – Pre-release promotional "Making Of" documentary about the film.
Complete copy of script
1976 films
1970s English-language films
1976 action comedy films
1970s American films
1970s buddy comedy films
1970s comedy mystery films
1970s comedy thriller films
20th Century Fox films
American action comedy films
American buddy comedy films
American comedy mystery films
American comedy thriller films
English-language action comedy films
English-language buddy comedy films
English-language comedy mystery films
Fictional trains
Films shot in Calgary
Films shot in Toronto
Films directed by Arthur Hiller
Films set on trains
Films scored by Henry Mancini
Films with screenplays by Colin Higgins
Films about the Federal Bureau of Investigation
English-language comedy thriller films | Silver Streak (film) | Technology | 2,209 |
60,141,647 | https://en.wikipedia.org/wiki/Word-representable%20graph | In the mathematical field of graph theory, a word-representable graph is a graph that can be characterized by a word (or sequence) whose entries alternate in a prescribed way. In particular, if the vertex set of the graph is V, one should be able to choose a word w over the alphabet V such that letters a and b alternate in w if and only if the pair ab is an edge in the graph. (Letters a and b alternate in w if, after removing from w all letters but the copies of a and b, one obtains a word abab... or a word baba....) For example, the cycle graph labeled by a, b, c and d in clockwise direction is word-representable because it can be represented by abdacdbc: the pairs ab, bc, cd and ad alternate, but the pairs ac and bd do not.
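The alternation condition in this definition is easy to check mechanically. Below is a minimal Python sketch (the helper names `alternate` and `represents` are mine, not standard terminology):

```python
from itertools import combinations

def alternate(word, a, b):
    """True if letters a and b alternate in word (abab... or baba...)."""
    seq = [ch for ch in word if ch in (a, b)]
    return all(x != y for x, y in zip(seq, seq[1:]))

def represents(word, vertices, edges):
    """True if `word` is a word-representant of the graph (vertices, edges)."""
    E = {frozenset(e) for e in edges}
    return all(alternate(word, a, b) == (frozenset((a, b)) in E)
               for a, b in combinations(vertices, 2))

# The 4-cycle a-b-c-d from the example above, represented by "abdacdbc":
print(represents("abdacdbc", "abcd",
                 [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]))  # True
```

Running the check confirms that ab, bc, cd and ad alternate in abdacdbc while ac and bd do not.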
The word w is G's word-representant, and one says that w represents G. The smallest (by the number of vertices) non-word-representable graph is the wheel graph W5, which is the only non-word-representable graph on 6 vertices.
The definition of a word-representable graph works both in labelled and unlabelled cases since any labelling of a graph is equivalent to any other labelling. Also, the class of word-representable graphs is hereditary. Word-representable graphs generalise several important classes of graphs such as circle graphs, 3-colorable graphs and comparability graphs. Various generalisations of the theory of word-representable graphs accommodate representation of any graph.
History
Word-representable graphs were introduced by Sergey Kitaev in 2004 based on joint research with Steven Seif on the Perkins semigroup, which has played an important role in semigroup theory since 1960. The first systematic study of word-representable graphs was undertaken in a 2008 paper by Kitaev and Artem Pyatkin, starting the development of the theory. One of the key contributors to the area is Magnús M. Halldórsson. To date, more than 35 papers have been written on the subject, and the core of the book by Sergey Kitaev and Vadim Lozin is devoted to the theory of word-representable graphs. A quick way to get familiar with the area is to read one of the survey articles.
Motivation to study the graphs
Word-representable graphs are relevant to various fields, which motivates their study. These fields are algebra, graph theory, computer science, combinatorics on words, and scheduling. Word-representable graphs are especially important in graph theory, since they generalise several important classes of graphs, e.g. circle graphs, 3-colorable graphs and comparability graphs.
Early results
It has been shown that a graph G is word-representable if and only if it is k-representable for some k, that is, G can be represented by a word having k copies of each letter. Moreover, if a graph is k-representable then it is also (k + 1)-representable. Thus, the notion of the representation number of a graph, defined as the minimum k such that the graph is k-representable, is well-defined. Non-word-representable graphs have representation number ∞. Graphs with representation number 1 are precisely the complete graphs, while graphs with representation number 2 are precisely the non-complete circle graphs. In particular, forests (except for single trees on at most 2 vertices), ladder graphs and cycle graphs have representation number 2. No classification of graphs with representation number 3 is known. However, there are examples of such graphs, e.g. the Petersen graph and prisms. Moreover, the 3-subdivision of any graph is 3-representable. In particular, for every graph G there exists a 3-representable graph H that contains G as a minor.
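For very small graphs the representation number can be found by exhaustive search over k-uniform words. The Python sketch below is illustrative only: its running time is exponential, and the function names are mine.

```python
from itertools import combinations, permutations

def alternate(word, a, b):
    """True if letters a and b alternate in word."""
    seq = [ch for ch in word if ch in (a, b)]
    return all(x != y for x, y in zip(seq, seq[1:]))

def represents(word, vertices, edges):
    """True if `word` is a word-representant of the graph (vertices, edges)."""
    E = {frozenset(e) for e in edges}
    return all(alternate(word, a, b) == (frozenset((a, b)) in E)
               for a, b in combinations(vertices, 2))

def representation_number(vertices, edges, max_k=3):
    """Smallest k <= max_k such that some k-uniform word represents the graph,
    or None if no such k exists within the search bound."""
    for k in range(1, max_k + 1):
        for word in set(permutations(list(vertices) * k)):
            if represents(word, vertices, edges):
                return k
    return None

# Complete graphs have representation number 1; the non-complete path a-b-c needs 2:
print(representation_number("abc", [("a", "b"), ("b", "c"), ("a", "c")]))  # 1
print(representation_number("abc", [("a", "b"), ("b", "c")]))              # 2
```

A 1-uniform word is a single permutation, in which every pair of letters trivially alternates, which is why exactly the complete graphs have representation number 1.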
A graph G is permutationally representable if it can be represented by a word of the form p1p2...pk, where each pi is a permutation; one can also say that G is permutationally k-representable. A graph is permutationally representable iff it is a comparability graph. If a graph is word-representable, then the neighbourhood of each vertex is permutationally representable (i.e. is a comparability graph). The converse of the last statement is not true. However, the fact that the neighbourhood of each vertex is a comparability graph implies that the Maximum Clique problem is polynomially solvable on word-representable graphs.
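Concretely, in a concatenation of permutations two letters alternate exactly when they occur in the same relative order in every permutation, which is what ties permutational representability to comparability graphs. A small Python check of this reformulation (function names are mine):

```python
from itertools import combinations

def perm_word_represents(perms, vertices, edges):
    """True if the concatenation p1 p2 ... pk of the given permutations of the
    vertex set represents the graph: a and b alternate in the concatenation
    iff they appear in the same relative order in every permutation."""
    E = {frozenset(e) for e in edges}
    def same_order(a, b):
        before = [p.index(a) < p.index(b) for p in perms]
        return all(before) or not any(before)
    return all(same_order(a, b) == (frozenset((a, b)) in E)
               for a, b in combinations(vertices, 2))

# The path a-b-c is a comparability graph; two permutations suffice,
# since a and c occur in opposite orders in "acb" and "cab":
print(perm_word_represents(["acb", "cab"], "abc", [("a", "b"), ("b", "c")]))  # True
```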
Semi-transitive orientations
Semi-transitive orientations provide a powerful tool to study word-representable graphs. A directed graph is semi-transitively oriented iff it is acyclic and for any directed path u1→u2→ ...→ut, t ≥ 2, either there is no edge from u1 to ut or all edges ui → uj exist for 1 ≤ i < j ≤ t. A key theorem in the theory of word-representable graphs states that a graph is word-representable iff it admits a semi-transitive orientation. As a corollary to the proof of the key theorem, one obtains an upper bound on the representation number: each non-complete word-representable graph G is 2(n − κ(G))-representable, where κ(G) is the size of a maximum clique in G. As an immediate corollary of the last statement, the recognition problem of word-representability is in NP. In 2014, Vincent Limouzy observed that it is an NP-complete problem to recognise whether a given graph is word-representable. Another important corollary to the key theorem is that any 3-colorable graph is word-representable. The last fact implies that many classical graph problems are NP-hard on word-representable graphs.
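The definition of a semi-transitive orientation can be verified by brute force on small digraphs: check acyclicity and look for a "shortcut", i.e. a directed path whose endpoints are joined by an arc while some transitive arc between inner vertices is missing. A sketch (exponential-time; function names are mine):

```python
from itertools import combinations

def is_semi_transitive(vertices, arcs):
    """True if the digraph is acyclic and, for every directed path
    u1->...->ut with the arc u1->ut present, all arcs ui->uj (i < j)
    are present too (i.e. there is no shortcut)."""
    A = set(arcs)

    def paths_from(path):
        # Yields all directed paths extending `path`; raises on a cycle.
        yield path
        for w in vertices:
            if (path[-1], w) in A:
                if w in path:
                    raise ValueError("digraph has a directed cycle")
                yield from paths_from(path + [w])

    try:
        for v in vertices:
            for p in paths_from([v]):
                if len(p) >= 3 and (p[0], p[-1]) in A:
                    if not all((p[i], p[j]) in A
                               for i, j in combinations(range(len(p)), 2)):
                        return False  # shortcut found
    except ValueError:
        return False  # not acyclic
    return True

# A transitive orientation of the triangle is semi-transitive;
# a->b->c->d with the arc a->d but no a->c is a shortcut, hence not:
print(is_semi_transitive("abc", {("a", "b"), ("b", "c"), ("a", "c")}))              # True
print(is_semi_transitive("abcd", {("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")}))  # False
```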
Overview of selected results
Non-word-representable graphs
Wheel graphs W2n+1, for n ≥ 2, are not word-representable, and W5 is the minimum (by the number of vertices) non-word-representable graph. Taking any non-comparability graph and adding an apex (a vertex connected to every other vertex), we obtain a non-word-representable graph, which yields infinitely many non-word-representable graphs. Any graph produced in this way will necessarily have a triangle (a cycle of length 3) and a vertex of degree at least 5. Non-word-representable graphs of maximum degree 4 exist, and non-word-representable triangle-free graphs exist. Regular non-word-representable graphs also exist. Non-isomorphic non-word-representable connected graphs on at most eight vertices were first enumerated by Heman Z.Q. Chen. His calculations were later extended, showing that the numbers of non-isomorphic non-word-representable connected graphs on 5–11 vertices are given, respectively, by 0, 1, 25, 929, 54957, 4880093, 650856040. This is the sequence A290814 in the On-Line Encyclopedia of Integer Sequences (OEIS).
Operations on graphs and word-representability
Operations preserving word-representability are removing a vertex, replacing a vertex with a module, Cartesian product, rooted product, subdivision of a graph, connecting two graphs by an edge and gluing two graphs at a vertex. Operations that do not necessarily preserve word-representability are taking the complement, taking the line graph, edge contraction, gluing two graphs in a clique of size 2 or more, tensor product, lexicographic product and strong product. Edge deletion, edge addition and edge lifting with respect to word-representability (equivalently, semi-transitive orientability) have also been studied.
Graphs with high representation number
While each non-complete word-representable graph G is 2(n − κ(G))-representable, where κ(G) is the size of a maximal clique in G, the highest known representation number is floor(n/2) given by crown graphs with an all-adjacent vertex. Interestingly, such graphs are not the only graphs that require long representations. Crown graphs themselves are shown to require long (possibly longest) representations among bipartite graphs.
Computational complexity
Known computational complexities for problems on word-representable graphs are summarised in the literature.
Representation of planar graphs
Triangle-free planar graphs are word-representable. A K4-free near-triangulation is 3-colourable if and only if it is word-representable; this result generalises earlier studies. Word-representability of face subdivisions of triangular grid graphs and word-representability of triangulations of grid-covered cylinder graphs have also been studied.
Representation of split graphs
Word-representation of split graphs has been studied. In particular, there is a characterisation in terms of forbidden induced subgraphs of word-representable split graphs in which the vertices of the independent set have degree at most 2, or the clique has size 4, while a computational characterisation of word-representable split graphs with a clique of size 5 has also been given. Also, necessary and sufficient conditions for an orientation of a split graph to be semi-transitive are known; threshold graphs have been shown to be word-representable; and split graphs have been used to show that gluing two word-representable graphs in a clique of size at least 2 may, or may not, result in a word-representable graph, which solved a long-standing open problem.
Graphs representable by pattern avoiding words
A graph is p-representable if it can be represented by a word avoiding a pattern p. For example, 132-representable graphs are those that can be represented by words w1w2...wn where there are no indices 1 ≤ a < b < c ≤ n such that wa < wc < wb. It has been shown that any 132-representable graph is necessarily a circle graph, and that any tree and any cycle graph, as well as any graph on at most 5 vertices, are 132-representable. It has also been shown that not all circle graphs are 132-representable, and that 123-representable graphs are likewise a proper subclass of the class of circle graphs.
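The 132 condition on a word over an ordered alphabet can be tested directly by brute force; a small Python sketch (the function name is mine):

```python
from itertools import combinations

def avoids_132(word):
    """True if there are no positions a < b < c with
    word[a] < word[c] < word[b], i.e. the word avoids the pattern 132."""
    return not any(word[a] < word[c] < word[b]
                   for a, b, c in combinations(range(len(word)), 3))

print(avoids_132([1, 2, 1, 2]))  # True: no occurrence of 132
print(avoids_132([1, 3, 2]))     # False: the whole word is a 132 pattern
```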
Generalisations
A number of generalisations of the notion of a word-representable graph are based on the observation by Jeff Remmel that non-edges are defined by occurrences of the pattern 11 (two consecutive equal letters) in a word representing a graph, while edges are defined by avoidance of this pattern. For example, instead of the pattern 11, one can use the pattern 112, or the pattern 1212, or any other binary pattern where the assumption that the alphabet is ordered can be made, so that a letter in a word corresponding to 1 in the pattern is less than a letter corresponding to 2 in the pattern. Letting u be an ordered binary pattern, we thus get the notion of a u-representable graph. So, word-representable graphs are just the class of 11-representable graphs. Intriguingly, any graph can be u-represented assuming u is of length at least 3.
Another way to generalise the notion of a word-representable graph, again suggested by Remmel, is to introduce a "degree of tolerance" k for occurrences of a pattern p defining edges/non-edges. That is, we can say that if there are up to k occurrences of p formed by letters a and b, then there is an edge between a and b. This gives the notion of a k-p-representable graph, and k-11-representable graphs have been studied. Note that 0-11-representable graphs are precisely word-representable graphs. The key results are that any graph is 2-11-representable and that word-representable graphs are a proper subclass of 1-11-representable graphs. Whether or not every graph is 1-11-representable is a challenging open problem.
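The "degree of tolerance" can be made concrete: restrict the word to two letters and count adjacent equal pairs (occurrences of the pattern 11). A Python sketch of this edge rule (function names are mine):

```python
from itertools import combinations

def occurrences_11(word, a, b):
    """Occurrences of the pattern 11 (two adjacent equal letters) in the
    restriction of word to the letters a and b."""
    seq = [ch for ch in word if ch in (a, b)]
    return sum(x == y for x, y in zip(seq, seq[1:]))

def k11_represents(word, vertices, edges, k):
    """ab is an edge iff the restriction of word to {a, b} contains at most
    k occurrences of 11; with k = 0 this is ordinary word-representability."""
    E = {frozenset(e) for e in edges}
    return all((occurrences_11(word, a, b) <= k) == (frozenset((a, b)) in E)
               for a, b in combinations(vertices, 2))

print(occurrences_11("aabb", "a", "b"))               # 2
print(k11_represents("aabb", "ab", [("a", "b")], 2))  # True
```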
For yet another type of relevant generalisation, Hans Zantema suggested the notion of a k-semi-transitive orientation refining the notion of a semi-transitive orientation. The idea here is restricting ourselves to considering only directed paths of length not exceeding k while allowing violations of semi-transitivity on longer paths.
Open problems
Open problems on word-representable graphs include:
Characterise (non-)word-representable planar graphs.
Characterise word-representable near-triangulations containing the complete graph K4 (such a characterisation is known for K4-free planar graphs).
Classify graphs with representation number 3. (See the literature for the state of the art in this direction.)
Is the line graph of a non-word-representable graph always non-word-representable?
Are there any graphs on n vertices whose representation requires more than floor(n/2) copies of each letter?
Is it true that, out of all bipartite graphs, crown graphs require the longest word-representants? (See the literature for relevant discussion.)
Characterise word-representable graphs in terms of (induced) forbidden subgraphs.
Which (hard) problems on graphs can be translated to words representing them and solved on words (efficiently)?
Literature
The list of publications on the representation of graphs by words includes, but is not limited to:
Ö. Akgün, I.P. Gent, S. Kitaev, H. Zantema. Solving computational problems in the theory of word-representable graphs. Journal of Integer Sequences 22 (2019), Article 19.2.5.
P. Akrobotu, S. Kitaev, and Z. Masárová. On word-representability of polyomino triangulations. Siberian Adv. Math. 25 (2015), 1−10.
B. Broere. Word representable graphs, 2018. Master thesis at Radboud University, Nijmegen.
B. Broere and H. Zantema. "The k-cube is k-representable," J. Autom., Lang., and Combin. 24 (2019) 1, 3-12.
J. N. Chen and S. Kitaev. On the 12-representability of induced subgraphs of a grid graph, Discussiones Mathematicae Graph Theory, to appear
T. Z. Q. Chen, S. Kitaev, and A. Saito. Representing split graphs by words, arXiv:1909.09471
T. Z. Q. Chen, S. Kitaev, and B. Y. Sun. Word-representability of face subdivisions of triangular grid graphs, Graphs and Combin. 32(5) (2016), 1749−1761.
T. Z. Q. Chen, S. Kitaev, and B. Y. Sun. Word-representability of triangulations of grid-covered cylinder graphs, Discr. Appl. Math. 213 (2016), 60−70.
G.-S. Cheon, J. Kim, M. Kim, and S. Kitaev. Word-representability of Toeplitz graphs, Discr. Appl. Math., to appear.
G.-S. Cheon, J. Kim, M. Kim, and A. Pyatkin. On k-11-representable graphs. J. Combin. 10 (2019) 3, 491−513.
I. Choi, J. Kim, and M. Kim. On operations preserving semi-transitive orient ability of graphs, Journal of Combinatorial Optimization 37 (2019) 4, 1351−1366.
A. Collins, S. Kitaev, and V. Lozin. New results on word-representable graphs, Discr. Appl. Math. 216 (2017), 136−141.
A. Daigavane, M. Singh, B.K. George. 2-uniform words: cycle graphs, and an algorithm to verify specific word-representations of graphs. arXiv:1806.04673 (2018).
M. Gaetz and C. Ji. Enumeration and extensions of word-representable graphs. Lecture Notes in Computer Science 11682 (2019) 180−192. In R. Mercas, D. Reidenbach (Eds) Combinatorics on Words. WORDS 2019.
M. Gaetz and C. Ji. Enumeration and Extensions of Word-representants, arXiv:1909.00019.
A. Gao, S. Kitaev, and P. Zhang. On 132-representable graphs. Australasian J. Combin. 69 (2017), 105−118.
M. E. Glen. Colourability and word-representability of near-triangulations, Pure Mathematics and Applications, 28(1), 2019, 70−76.
M. E. Glen. On word-representability of polyomino triangulations & crown graphs, 2019. PhD thesis, University of Strathclyde.
M. E. Glen and S. Kitaev. Word-Representability of Triangulations of Rectangular Polyomino with a Single Domino Tile, J. Combin.Math. Combin. Comput. 100, 131−144, 2017.
M. E. Glen, S. Kitaev, and A. Pyatkin. On the representation number of a crown graph, Discr. Appl. Math. 244, 2018, 89−93.
M.M. Halldórsson, S. Kitaev, A. Pyatkin On representable graphs, semi-transitive orientations, and the representation numbers, arXiv:0810.0310 (2008).
M.M. Halldórsson, S. Kitaev, A. Pyatkin (2010) Graphs capturing alternations in words. In: Y. Gao, H. Lu, S. Seki, S. Yu (eds), Developments in Language Theory. DLT 2010. Lecture Notes Comp. Sci. 6224, Springer, 436−437.
M.M. Halldórsson, S. Kitaev, A. Pyatkin (2011) Alternation graphs. In: P. Kolman, J. Kratochvíl (eds), Graph-Theoretic Concepts in Computer Science. WG 2011. Lecture Notes Comp. Sci. 6986, Springer, 191−202.
M. Halldórsson, S. Kitaev and A. Pyatkin. Semi-transitive orientations and word-representable graphs, Discr. Appl. Math. 201 (2016), 164−171.
M. Jones, S. Kitaev, A. Pyatkin, and J. Remmel. Representing Graphs via Pattern Avoiding Words, Electron. J. Combin. 22 (2), Res. Pap. P2.53, 1−20 (2015).
S. Kitaev. On graphs with representation number 3, J. Autom., Lang. and Combin. 18 (2013), 97−112.
S. Kitaev. A comprehensive introduction to the theory of word-representable graphs. In: É. Charlier, J. Leroy, M. Rigo (eds), Developments in Language Theory. DLT 2017. Lecture Notes Comp. Sci. 10396, Springer, 36−67.
S. Kitaev. Existence of u-representation of graphs, Journal of Graph Theory 85 (2017) 3, 661−668.
S. Kitaev, Y. Long, J. Ma, H. Wu. Word-representability of split graphs, arXiv:1709.09725 (2017).
S. Kitaev and V. Lozin. Words and Graphs, Springer, 2015. .
S. Kitaev and A. Pyatkin. On representable graphs, J. Autom., Lang. and Combin. 13 (2008), 45−54.
S. Kitaev and A. Pyatkin. Word-representable graphs: a Survey, Journal of Applied and Industrial Mathematics 12(2) (2018) 278−296.
S. Kitaev and A. Pyatkin. On semi-transitive orientability of triangle-free graphs, arXiv:2003.06204v1.
S. Kitaev and A. Saito. On semi-transitive orientability of Kneser graphs and their complements, Discrete Math., to appear.
S. Kitaev, P. Salimov, C. Severs, and H. Úlfarsson (2011) On the representability of line graphs. In: G. Mauri, A. Leporati (eds), Developments in Language Theory. DLT 2011. Lecture Notes Comp. Sci. 6795, Springer, 478−479.
S. Kitaev and S. Seif. Word problem of the Perkins semigroup via directed acyclic graphs, Order 25 (2008), 177−194.
E. Leloup. Graphes représentables par mot. Master Thesis, University of Liège, 2019
Mandelshtam. On graphs representable by pattern-avoiding words, Discussiones Mathematicae Graph Theory 39 (2019) 375−389.
С. В. Китаев, А. В. Пяткин. Графы, представимые в виде слов. Обзор результатов, Дискретн. анализ и исслед. опер., 2018, том 25,номер 2, 19−53.
Software
Software to study word-representable graphs can be found here:
M. E. Glen. Software to deal with word-representable graphs, 2017. Available at https://personal.cis.strath.ac.uk/sergey.kitaev/word-representable-graphs.html.
H. Zantema. Software REPRNR to compute the representation number of a graph, 2018. Available at https://www.win.tue.nl/~hzantema/reprnr.html.
References
Graph families
NP-complete problems | Word-representable graph | Mathematics | 4,802 |
16,350,616 | https://en.wikipedia.org/wiki/Archaeocin | Archaeocin is the name given to a new type of potentially useful antibiotic that is derived from the Archaea group of organisms. Eight archaeocins have been partially or fully characterized, but hundreds of archaeocins are believed to exist, especially within the haloarchaea. Production of these archaeal proteinaceous antimicrobials is a nearly universal feature of the rod-shaped haloarchaea.
The prevalence of archaeocins from other members of this domain is unknown simply because no one has looked for them. The discovery of new archaeocins hinges on recovery and cultivation of archaeal organisms from the environment. For example, samples from a novel hypersaline field site, Wilson Hot Springs in the Fish Springs National Wildlife Refuge in eastern Utah, recovered 350 halophilic organisms; preliminary analysis of 75 isolates showed that 48 were archaeal and 27 were bacterial.
Halocins
Halocins are classified as either peptide (≤ 10 kDa; 'microhalocins') or protein (> 10 kDa) antibiotics produced by members of the archaeal family Halobacteriaceae. To date, all of the known halocin genes are encoded on megaplasmids (> 100 kbp) and possess typical haloarchaeal TATA and BRE promoter regions. Halocin transcripts are leaderless, and the translated preproteins or preproproteins are most likely exported using the twin-arginine translocation (Tat) pathway, as the Tat signal motif (two adjacent arginine residues) is present within the amino terminus. Halocin genes are almost universally expressed at the transition between the exponential and stationary phases of growth; the only exception is halocin H1, which is induced during exponential phase. In contrast to the microhalocins described below, the larger halocin proteins are heat-labile and typically obligately halophilic, losing activity (or having it reduced) when desalted.
Microhalocins, peptide halocins
Currently, five peptide halocins have been partially or completely characterized at the protein and/or genetic levels: HalS8, HalR1, HalC8, HalH7, and HalU1. These antimicrobial peptides range from ~3 to 7.4 kDa in molecular mass, consisting of 36 to 76 amino acid residues. Two of the microhalocins (HalS8 and HalC8) are produced by proteolytic cleavage from a larger preproprotein by an unknown mechanism. Microhalocins are hydrophobic peptides that remain active even if desalted and/or stored at 4 °C and are fairly insensitive to heat and organic solvents. The first microhalocin to be characterized was HalS8, produced by the uncharacterized haloarchaeon S8a isolated from the Great Salt Lake, UT, USA.
Protein halocins
Two can be classified as protein halocins: HalH1 and HalH4; the molecular masses of the remaining halocins have yet to be elucidated. Halocin H1 is produced by Hfx. mediterranei M2a (formerly strain Xia3), isolated from a solar saltern near Alicante, Spain. It is a 31 kDa protein that is heat-labile, loses activity when desalted, and exhibits a broad range of inhibition within the haloarchaea. Halocin H1 has yet to be characterized at the protein and genetic levels. In contrast, HalH4, produced by Hfx. mediterranei R4 (ATCC 33500), also isolated from a solar saltern near Alicante, Spain was the first halocin discovered. The molecular mass of the mature HalH4 protein is 34.9 kDa (359 amino acids), processed from a preprotein of 39.6 kDa; the mechanism for processing is unknown. Halocin H4 is an archaeolytic halocin and adsorbs to sensitive Hbt. salinarum cells where it may be disrupting membrane permeability.
Sulfolobicins
The archaeocins produced by Sulfolobus are entirely different from halocins, since their activity is predominantly associated with the cells and not the supernatant. To date, the spectrum of sulfolobicin activity appears to be restricted to other members of the Sulfolobales: the sulfolobicin inhibited S. solfataricus P1, S. shibatae B12, and six nonproducing strains of S. islandicus. Activity appears to be archaeocidal but not archaeolytic. Two genes involved in sulfolobicin production have been identified in S. acidocaldarius and S. tokodaii. The sulfolobicins appear to represent a novel class of antimicrobial proteins.
See also
Archaea
References
Antibiotics
Archaea biology | Archaeocin | Biology | 1,030 |
24,863,994 | https://en.wikipedia.org/wiki/Wipe%20test%20counter | A wipe test counter is a device used to measure for possible radioactive contamination in a variety of environments. When using radioactive materials it is necessary to test for accidental contamination, whether from use of liquid unsealed sources or to check for leaking sealed sources. A swab or small absorbent smear can be used to "wipe" an area; the wipe is then placed into a test tube and counted, typically using a gamma counter. Testing for leaks in this manner is a method described in the ISO 9978 standard.
Equipment
Survey instruments may be used to detect surface contamination without requiring wiping; however, this requires careful calibration and technique to ensure adequate sensitivity is achieved.
A gamma counter is a typical choice for measuring wipe samples for radioactivity as it allows multiple tests to be counted in a largely automated way. These systems detect radiation using a scintillator and photomultiplier tube and may allow the energy spectrum of a sample to be recorded, which can be used to identify the contaminant.
Use of a gamma camera has also been proposed, where collimators are removed to improve sensitivity.
Regulation
Wipe testing is typically a requirement of licences to hold radioactive materials. In the United States the Nuclear Regulatory Commission requires wipe testing of sealed sources "periodically" using equipment sensitive down to 185 becquerels (Bq). In the United Kingdom the Health and Safety Executive guidance for the Ionising Radiations Regulations 1999 requires wipe testing (usually every two years), and it is also likely to be a requirement of Environment Agency permits. In Australia licence conditions may require adherence to Australian standard AS 2243.4 and ISO 9978 for wipe testing of sealed sources.
References
Radiation health effects
Particle detectors | Wipe test counter | Chemistry,Materials_science,Technology,Engineering | 345 |
3,951,393 | https://en.wikipedia.org/wiki/98P/Takamizawa | 98P/Takamizawa is a periodic comet in the Solar System.
On 29 June 2188 the comet will pass about from Earth.
References
External links
98P/Takamizawa – Seiichi Yoshida @ aerith.net
98P at Kronk's Cometography
Periodic comets
0098
098P
19840730 | 98P/Takamizawa | Astronomy | 72 |