In the context of the periodic table, a nonmetal is a chemical element that mostly lacks distinctive metallic properties. Nonmetals range from colorless gases like hydrogen to shiny crystals like iodine. Physically, they are usually lighter (less dense) than the metallic elements and are often poor conductors of heat and electricity. Chemically, nonmetals have relatively high electronegativity; they usually attract electrons in a chemical bond with another element, and their oxides tend to be acidic. Seventeen elements are widely recognized as nonmetals. Additionally, some or all of six borderline elements (metalloids) are sometimes counted as nonmetals. The two lightest nonmetals, hydrogen and helium, together make up about 98% of the mass of the observable universe. Five nonmetallic elements—hydrogen, carbon, nitrogen, oxygen, and silicon—make up the bulk of Earth's atmosphere, biosphere, crust and oceans. Industrial uses of nonmetals include electronics, energy storage, agriculture, and chemical production. Most nonmetallic elements were identified in the 18th and 19th centuries. While a distinction between metals and other minerals had existed since antiquity, a basic classification of chemical elements as metallic or nonmetallic emerged only in the late 18th century. Since then about twenty properties have been suggested as criteria for distinguishing nonmetals from metals.

Definition and applicable elements

Unless otherwise noted, this article describes the stable form of an element at standard temperature and pressure (STP). Nonmetallic chemical elements are often described as lacking properties common to metals, namely shininess, pliability, good thermal and electrical conductivity, and a general capacity to form basic oxides. There is no widely accepted precise definition; any list of nonmetals is open to debate and revision. The elements included depend on the properties regarded as most representative of nonmetallic or metallic character. Fourteen elements are almost always recognized as nonmetals: hydrogen, nitrogen, oxygen, and sulfur; the halogens fluorine, chlorine, bromine, and iodine; and the noble gases helium, neon, argon, krypton, xenon, and radon. Three more (carbon, phosphorus, and selenium) are commonly classed as nonmetals, but some sources list them as "metalloids", a term which refers to elements regarded as intermediate between metals and nonmetals. One or more of the six elements most commonly recognized as metalloids (boron, silicon, germanium, arsenic, antimony, and tellurium) are sometimes instead counted as nonmetals. About 15–20% of the 118 known elements are thus classified as nonmetals.

General properties

Physical

Nonmetals vary greatly in appearance, being colorless, colored or shiny. For the colorless nonmetals (hydrogen, nitrogen, oxygen, and the noble gases), no absorption of light happens in the visible part of the spectrum, and all visible light is transmitted. The colored nonmetals (sulfur, fluorine, chlorine, bromine) absorb some colors (wavelengths) and transmit the complementary or opposite colors. For example, chlorine's "familiar yellow-green colour ... is due to a broad region of absorption in the violet and blue regions of the spectrum". The shininess of boron, graphite (carbon), silicon, black phosphorus, germanium, arsenic, selenium, antimony, tellurium, and iodine is a result of varying degrees of metallic conduction, in which the electrons can reflect incoming visible light. About half of the nonmetallic elements are gases under standard temperature and pressure; most of the rest are solids. Bromine, the only liquid, is usually topped by a layer of its reddish-brown fumes.
The gaseous and liquid nonmetals have very low densities, melting and boiling points, and are poor conductors of heat and electricity. The solid nonmetals have low densities and low mechanical strength (being either hard and brittle, or soft and crumbly), and a wide range of electrical conductivity. This diversity of form stems from variability in internal structures and bonding arrangements. Covalent nonmetals existing as discrete atoms (like xenon) or as small molecules (like oxygen, sulfur, and bromine) have low melting and boiling points; many are gases at room temperature, as they are held together by weak London dispersion forces acting between their atoms or molecules, although the molecules themselves have strong covalent bonds. In contrast, nonmetals that form extended structures, such as long chains of selenium atoms, sheets of carbon atoms in graphite, or three-dimensional lattices of silicon atoms, have higher melting and boiling points, and are all solids, as it takes more energy to overcome their stronger bonding. Nonmetals closer to the left or bottom of the periodic table (and so closer to the metals) often have metallic interactions between their molecules, chains, or layers; this occurs in boron, carbon, phosphorus, arsenic, selenium, antimony, tellurium and iodine. Covalently bonded nonmetals often share only the electrons required to achieve a noble gas electron configuration. For example, nitrogen forms diatomic molecules featuring a triple bond between the two atoms, both of which thereby attain the configuration of the noble gas neon. Antimony's larger atomic size prevents triple bonding, resulting in buckled layers in which each antimony atom is singly bonded to three other nearby atoms. Good electrical conductivity occurs where there is metallic bonding; in nonmetals, however, the outer electrons are usually localized rather than metallic. Good electrical and thermal conductivity associated with metallic electrons is seen in carbon (as graphite, along its planes), arsenic, and antimony. Good thermal conductivity occurs in boron, silicon, phosphorus, and germanium; such conductivity is transmitted through vibrations of the crystalline lattices of these elements. Moderate electrical conductivity is observed in the semiconductors boron, silicon, phosphorus, germanium, selenium, tellurium, and iodine. Many of the nonmetallic elements are hard and brittle: dislocations cannot readily move, so these elements tend to undergo brittle fracture rather than deforming. Some do deform: white phosphorus is soft as wax, pliable, and can be cut with a knife at room temperature; plastic sulfur is likewise deformable; and selenium can be drawn into wires from its molten state. Graphite is a standard solid lubricant because dislocations move very easily in its basal planes.

Allotropes

Over half of the nonmetallic elements exhibit a range of less stable allotropic forms, each with distinct physical properties. For example, carbon, whose most stable form is graphite, can manifest as diamond, buckminsterfullerene, and amorphous and paracrystalline variations. Allotropes also occur for nitrogen, oxygen, phosphorus, sulfur, selenium and iodine.

Chemical

Nonmetals have relatively high values of electronegativity, and their oxides are usually acidic. Exceptions may occur if a nonmetal is not very electronegative, or if its oxidation state is low, or both. These non-acidic oxides of nonmetals may be amphoteric (like water, H2O) or neutral (like nitrous oxide, N2O), but never basic.
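The oxide rule above can be summarized as a small lookup. The following Python fragment is a minimal illustrative sketch, not from the article: the example oxides and their acid-base assignments are standard textbook values, and the check at the end simply restates the "never basic" claim.

```python
# Illustrative only: example nonmetal oxides and their usual acid-base
# character (standard textbook assignments, not taken from this article).
NONMETAL_OXIDES = {
    "CO2": "acidic",      # dissolves in water to give carbonic acid
    "SO3": "acidic",      # dissolves in water to give sulfuric acid
    "N2O5": "acidic",     # dissolves in water to give nitric acid
    "H2O": "amphoteric",  # can act as either an acid or a base
    "N2O": "neutral",     # nitrogen in a low oxidation state
    "CO": "neutral",      # carbon in a low oxidation state
}

# Nonmetal oxides may be acidic, amphoteric, or neutral, but never basic:
assert "basic" not in NONMETAL_OXIDES.values()
```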
Nonmetals tend to gain electrons during chemical reactions, in contrast to metals, which tend to donate electrons. This behavior is related to the stability of the electron configurations of the noble gases, which have complete outer shells, as summarized by the duet and octet rules of thumb and more correctly explained in terms of valence bond theory. Nonmetals typically exhibit higher ionization energies, electron affinities, and standard electrode potentials than metals. Generally, the higher these values are (including electronegativity), the more nonmetallic the element tends to be. For example, the chemically very active nonmetals fluorine, chlorine, bromine, and iodine have an average electronegativity of 3.19—a figure higher than that of any metallic element. The chemical distinction between metals and nonmetals is connected to the attractive force between the positive nuclear charge of an individual atom and its negatively charged outer electrons. From left to right across each period of the periodic table, the nuclear charge (the number of protons in the atomic nucleus) increases. There is a corresponding reduction in atomic radius as the increased nuclear charge draws the outer electrons closer to the nuclear core. In chemical bonding, nonmetals tend to gain electrons due to their higher nuclear charge, resulting in negatively charged ions. The number of compounds formed by nonmetals is vast. The first 10 places in a "top 20" table of elements most frequently encountered in 895,501,834 compounds, as listed in the Chemical Abstracts Service register for November 2, 2021, were occupied by nonmetals. Hydrogen, carbon, oxygen, and nitrogen collectively appeared in most (80%) of those compounds. Silicon, a metalloid, ranked 11th. The highest-rated metal, with an occurrence frequency of 0.14%, was iron, in 12th place. A few examples of nonmetal compounds are: boric acid (H3BO3), used in ceramic glazes; selenocysteine (C3H7NO2Se), the 21st amino acid of life; phosphorus sesquisulfide (P4S3), found in strike-anywhere matches; and teflon ((C2F4)n), used to create non-stick coatings for pans and other cookware.

Complications

Adding complexity to the chemistry of the nonmetals are anomalies occurring in the first row of each periodic table block; non-uniform periodic trends; higher oxidation states; multiple bond formation; and property overlaps with metals.

First row anomaly

Starting with hydrogen, the first row anomaly primarily arises from the electron configurations of the elements concerned. Hydrogen is notable for its diverse bonding behaviors. It most commonly forms covalent bonds, but it can also lose its single electron in an aqueous solution, leaving behind a bare proton with tremendous polarizing power. Consequently, this proton can attach itself to the lone electron pair of an oxygen atom in a water molecule, laying the foundation for acid-base chemistry. Moreover, a hydrogen atom in a molecule can form a second, albeit weaker, bond with an atom or group of atoms in another molecule. Such bonding "helps give snowflakes their hexagonal symmetry, binds DNA into a double helix; shapes the three-dimensional forms of proteins; and even raises water's boiling point high enough to make a decent cup of tea." Hydrogen and helium, as well as boron through neon, have unusually small atomic radii.
This phenomenon arises because the 1s and 2p subshells lack inner analogues (there is no zero shell and no 1p subshell) and therefore experience weaker electron-electron repulsion effects than the 3p, 4p, and 5p subshells of heavier elements. As a result, ionization energies and electronegativities among these elements are higher than the periodic trends would otherwise suggest. The compact atomic radii of carbon, nitrogen, and oxygen facilitate the formation of double or triple bonds. While it would normally be expected, on electron configuration consistency grounds, that hydrogen and helium would be placed atop the s-block elements, the significant first row anomaly shown by these two elements justifies alternative placements. Hydrogen is occasionally positioned above fluorine, in group 17, rather than above lithium in group 1. Helium is almost always placed above neon, in group 18, rather than above beryllium in group 2.

Secondary periodicity

An alternation in certain periodic trends, sometimes referred to as secondary periodicity, becomes evident when descending groups 13 to 15, and to a lesser extent, groups 16 and 17. Immediately after the first row of d-block metals, from scandium to zinc, the 3d electrons in the p-block elements—specifically, gallium (a metal), germanium, arsenic, selenium, and bromine—prove less effective at shielding the increasing positive nuclear charge. The Soviet chemist Shchukarev gives two more tangible examples: "The toxicity of some arsenic compounds, and the absence of this property in analogous compounds of phosphorus [P] and antimony [Sb]; and the ability of selenic acid [H2SeO4] to bring metallic gold [Au] into solution, and the absence of this property in sulfuric [H2SO4] and telluric [H6TeO6] acids."

Higher oxidation states

(Roman numerals such as III, V and VIII denote oxidation states.)

Some nonmetallic elements exhibit oxidation states that deviate from those predicted by the octet rule, which typically results in an oxidation state of −3 in group 15, −2 in group 16, −1 in group 17, and 0 in group 18. Examples include ammonia NH3, hydrogen sulfide H2S, hydrogen fluoride HF, and elemental xenon Xe. Meanwhile, the maximum possible oxidation state increases from +5 in group 15 to +8 in group 18. The +5 oxidation state is observable from period 2 onward, in compounds such as nitric acid HNO3 (nitrogen in the +5 state) and phosphorus pentafluoride PF5. Higher oxidation states in later groups emerge from period 3 onward, as seen in sulfur hexafluoride SF6, iodine heptafluoride IF7, and xenon(VIII) tetroxide XeO4. For the heavier nonmetals, larger atomic radii and lower electronegativity values enable the formation of compounds with higher oxidation numbers, supporting higher bulk coordination numbers.

Multiple bond formation

Period 2 nonmetals, particularly carbon, nitrogen, and oxygen, show a propensity to form multiple bonds. The compounds formed by these elements often exhibit unique stoichiometries and structures, as seen in the various nitrogen oxides, which are not commonly found in elements from later periods.

Property overlaps

While certain elements have traditionally been classified as nonmetals and others as metals, some overlapping of properties occurs. Writing early in the twentieth century, by which time the era of modern chemistry had been well established, Humphrey observed that:
... these two groups, however, are not marked off perfectly sharply from each other; some nonmetals resemble metals in certain of their properties, and some metals approximate in some ways to the non-metals.

Examples of metal-like properties occurring in nonmetallic elements include:
Silicon has an electronegativity (1.9) comparable with metals such as cobalt (1.88), copper (1.9), nickel (1.91) and silver (1.93);
The electrical conductivity of graphite exceeds that of some metals;
Selenium can be drawn into a wire;
Radon is the most metallic of the noble gases and begins to show some cationic behavior, which is unusual for a nonmetal; and
In extreme conditions, just over half of the nonmetallic elements can form homopolyatomic cations.

Examples of nonmetal-like properties occurring in metals include:
Tungsten displays some nonmetallic properties, sometimes being brittle, having a high electronegativity, forming only anions in aqueous solution, and forming predominantly acidic oxides.
Gold, the "king of metals", has the highest electrode potential among metals, suggesting a preference for gaining rather than losing electrons. Gold's ionization energy is one of the highest among metals, and its electron affinity and electronegativity are high, with the latter exceeding that of some nonmetals. It forms the Au− auride anion and exhibits a tendency to bond to itself, behaviors which are unexpected for metals. In aurides (MAu, where M = Li–Cs), gold's behavior is similar to that of a halogen. Gold's nuclear charge is also large enough that its electrons must be treated with relativistic effects included, which changes some of its properties.

A relatively recent development involves certain compounds of heavier p-block elements, such as silicon, phosphorus, germanium, arsenic and antimony, exhibiting behaviors typically associated with transition metal complexes. This is linked to a small energy gap between their filled and empty molecular orbitals, which are the regions in a molecule where electrons reside and where they can be available for chemical reactions. In such compounds, this allows for unusual reactivity with small molecules like hydrogen (H2), ammonia (NH3), and ethylene (C2H4), a characteristic previously observed primarily in transition metal compounds. These reactions may open new avenues in catalytic applications.

Types

Nonmetal classification schemes vary widely, with some accommodating as few as two subtypes and others identifying up to seven. For example, the periodic table in the Encyclopaedia Britannica recognizes noble gases, halogens, and other nonmetals, and splits the elements commonly recognized as metalloids between "other metals" and "other nonmetals". On the other hand, seven of the twelve color categories on the Royal Society of Chemistry periodic table include nonmetals. Starting from the right side of the periodic table, three types of nonmetals can be recognized: the relatively inert noble gases—helium, neon, argon, krypton, xenon, radon; the notably reactive halogen nonmetals—fluorine, chlorine, bromine, iodine; and the mixed-reactivity "unclassified nonmetals", a set with no widely used collective name—hydrogen, carbon, nitrogen, oxygen, phosphorus, sulfur, selenium. (The descriptive phrase "unclassified nonmetals" is used here for convenience.) The elements in a fourth set are sometimes recognized as nonmetals: the generally unreactive metalloids, sometimes considered a third category distinct from metals and nonmetals—boron, silicon, germanium, arsenic, antimony, tellurium.
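For reference, the grouping just described can be restated directly. The Python fragment below merely transcribes the article's four sets as element symbols and checks that together they cover the 23 nonmetallic elements under discussion; it introduces no information beyond the lists above.

```python
# The four nonmetal types enumerated above, as sets of element symbols.
# "Unclassified nonmetals" is the article's own convenience label.
NOBLE_GASES = {"He", "Ne", "Ar", "Kr", "Xe", "Rn"}
HALOGEN_NONMETALS = {"F", "Cl", "Br", "I"}
UNCLASSIFIED_NONMETALS = {"H", "C", "N", "O", "P", "S", "Se"}
METALLOIDS = {"B", "Si", "Ge", "As", "Sb", "Te"}  # sometimes counted as nonmetals

# The four sets are disjoint and total 23 elements:
ALL_TYPES = (NOBLE_GASES, HALOGEN_NONMETALS, UNCLASSIFIED_NONMETALS, METALLOIDS)
assert sum(len(s) for s in ALL_TYPES) == len(set().union(*ALL_TYPES)) == 23
```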
The boundaries between these types are not sharp. Carbon, phosphorus, selenium, and iodine border the metalloids and show some metallic character, as does hydrogen. The greatest discrepancy between authors occurs in the metalloid "frontier territory". Some consider metalloids distinct from both metals and nonmetals, while others classify them as nonmetals. Some categorize certain metalloids as metals (e.g., arsenic and antimony, due to their similarities to heavy metals). Metalloids resemble the elements universally considered "nonmetals" in having relatively low densities, high electronegativity, and similar chemical behavior.

Noble gases

Six nonmetals are classified as noble gases: helium, neon, argon, krypton, xenon, and the radioactive radon. In conventional periodic tables they occupy the rightmost column. They are called noble gases due to their exceptionally low chemical reactivity. These elements exhibit similar properties, being colorless, odorless, and nonflammable. Due to their closed outer electron shells, noble gases have weak interatomic forces of attraction, leading to exceptionally low melting and boiling points. As a consequence, they all exist as gases under standard conditions, even those with atomic masses surpassing those of many normally solid elements. Chemically, the noble gases exhibit relatively high ionization energies, negligible or negative electron affinities, and high to very high electronegativities. The number of compounds formed by noble gases is in the hundreds and continues to expand, with most of these compounds involving the combination of oxygen or fluorine with krypton, xenon, or radon.

Halogen nonmetals

While the halogen nonmetals are notably reactive and corrosive elements, they can also be found in everyday compounds like toothpaste (NaF); common table salt (NaCl); swimming pool disinfectant (NaBr); and food supplements (KI). The term "halogen" itself means "salt former". Chemically, the halogen nonmetals exhibit high ionization energies, electron affinities, and electronegativity values, and are mostly relatively strong oxidizing agents. These characteristics contribute to their corrosive nature. All four elements tend to form primarily ionic compounds with metals, in contrast to the remaining nonmetals (except oxygen), which tend to form primarily covalent compounds with metals. The highly reactive and strongly electronegative nature of the halogen nonmetals epitomizes nonmetallic character.

Unclassified nonmetals

Hydrogen behaves in some respects like a metallic element and in others like a nonmetal. Like a metallic element, it can form a solvated cation in aqueous solution, and it can substitute for alkali metals in compounds such as the chlorides (NaCl cf. HCl) and nitrates (KNO3 cf. HNO3), and in certain alkali metal complexes. Like a nonmetal, it needs only one more electron to attain the electron configuration of the noble gas helium; it attains this configuration by forming a covalent or ionic bond or, if it has initially given up its electron, by attaching itself to a lone pair of electrons. Some or all of the unclassified nonmetals share several properties. Being generally less reactive than the halogens, most of them can occur naturally in the environment. They have significant roles in biology and geochemistry. Collectively, their physical and chemical characteristics can be described as "moderately non-metallic". Sometimes they have corrosive aspects: carbon corrosion can occur in fuel cells, and untreated selenium in soils can lead to the formation of corrosive hydrogen selenide gas.
Additionally, when combined with metals, the unclassified nonmetals can form interstitial or refractory compounds, owing to their relatively small atomic radii and sufficiently low ionization energies. They also exhibit a tendency to bond to themselves, particularly in solid compounds. Diagonal periodic table relationships among these nonmetals likewise mirror similar relationships among the metalloids.

Abundance, extraction, and uses

Abundance

The abundance of elements in the universe results from nuclear physics processes like nucleosynthesis and radioactive decay. The volatile noble gases are less abundant in the Earth's atmosphere than would be expected from their abundance due to cosmic nucleosynthesis; explaining this difference is an important problem in planetary science. Even within that challenge, the nonmetal xenon is unexpectedly depleted. A possible explanation comes from theoretical models of the high pressures in the Earth's core, which suggest there may be around 10¹³ tons of xenon there, in the form of stable XeFe3 and XeNi3 intermetallic compounds. Five nonmetals—hydrogen, carbon, nitrogen, oxygen, and silicon—form the bulk of the directly observable structure of the Earth: about 73% of the crust, 93% of the biomass, 96% of the hydrosphere, and over 99% of the atmosphere, as shown in the accompanying table. Silicon and oxygen form highly stable tetrahedral structures, known as silicates. Here, "the powerful bond that unites the oxygen and silicon ions is the cement that holds the Earth's crust together." In the biomass, the relative abundance of the first four nonmetals (and, marginally, phosphorus, sulfur, and selenium) is attributed to a combination of relatively small atomic size and sufficient spare electrons. These two properties enable them to bind to one another and to "some other elements, to produce a molecular soup sufficient to build a self-replicating system."

Extraction

Nine of the 23 nonmetallic elements are gases, or form compounds that are gases, and are extracted from natural gas or liquid air. These elements include hydrogen, helium, nitrogen, oxygen, neon, sulfur, argon, krypton, and xenon. For example, nitrogen and oxygen are extracted from air through fractional distillation of liquid air, a method that capitalizes on their different boiling points to separate them efficiently (see the sketch after the source list below). Sulfur was formerly extracted using the Frasch process, which involved injecting superheated water into underground deposits to melt the sulfur, which was then pumped to the surface; this technique leveraged sulfur's low melting point relative to other geological materials. Sulfur is now obtained by reacting the hydrogen sulfide in natural gas with oxygen; water is formed, leaving the sulfur behind. In summary, the nonmetallic elements are extracted from the following sources:
Gases (3): hydrogen, from methane; helium, from natural gas; sulfur, from hydrogen sulfide in natural gas.
Liquids (9): nitrogen, oxygen, neon, argon, krypton and xenon, from liquid air; chlorine, bromine and iodine, from brine.
Solids (12): boron, from borates; carbon, occurring naturally as graphite; silicon, from silica; phosphorus, from phosphates; iodine, from sodium iodate; radon, as a decay product of uranium ores; fluorine, from fluorite; germanium, arsenic, selenium, antimony and tellurium, from sulfides.
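As a rough illustration of why fractional distillation of liquid air works, the sketch below sorts the air-derived gases mentioned above by boiling point. The boiling points are standard reference values at atmospheric pressure, not figures from this article, and the code is a toy model rather than a process description.

```python
# Boiling points in kelvin at 1 atm (standard reference values; assumed
# here for illustration, not taken from the article).
BOILING_POINTS_K = {
    "Ne": 27.1, "N2": 77.4, "Ar": 87.3,
    "O2": 90.2, "Kr": 119.9, "Xe": 165.1,
}

# Slowly warming liquid air releases the most volatile component first,
# so each gas can be drawn off in turn:
for gas, bp in sorted(BOILING_POINTS_K.items(), key=lambda kv: kv[1]):
    print(f"{gas} boils off at about {bp} K")
```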
Uses

Uses of the nonmetallic elements are broadly categorized as domestic, industrial, attenuative (lubricating, retarding, insulating or cooling), and agricultural. Many have domestic and industrial applications in household accoutrements; medicine and pharmaceuticals; and lasers and lighting. They are components of mineral acids, and are prevalent in plug-in hybrid vehicles and smartphones. A significant number have attenuative and agricultural applications: they are used in lubricants, flame retardants and fire extinguishers; they can serve as inert air replacements; and they are used in cryogenics and as refrigerants. Their significance extends to agriculture through their use in fertilizers. Additionally, a smaller number of nonmetallic elements find specialized uses in explosives and welding gases.

Taxonomical history

Background

Around 340 BCE, in Book III of his treatise Meteorology, the ancient Greek philosopher Aristotle categorized substances found within the Earth into metals and "fossiles". The latter category included various minerals such as realgar, ochre, ruddle, sulfur, cinnabar, and other substances that he referred to as "stones which cannot be melted". Until the Middle Ages the classification of minerals remained largely unchanged, albeit with varying terminology. In the fourteenth century, the English alchemist Richardus Anglicus expanded upon the classification of minerals in his work Correctorium Alchemiae. In this text, he proposed the existence of two primary types of minerals. The first category, which he referred to as "major minerals", included well-known metals such as gold, silver, copper, tin, lead, and iron. The second category, labeled "minor minerals", encompassed substances like salts, atramenta (iron sulfate), alums, vitriol, arsenic, orpiment, sulfur, and similar substances that were not metallic bodies. The term "nonmetallic" dates back to at least the 16th century. In his 1566 medical treatise, the French physician Loys de L'Aunay distinguished substances from plant sources based on whether they originated from metallic or non-metallic soils. Later, the French chemist Nicolas Lémery discussed metallic and nonmetallic minerals in his Universal Treatise on Simple Drugs, Arranged Alphabetically, published in 1699. In his writings, he contemplated whether the substance "cadmia" belonged to the first category, akin to cobaltum (cobaltite), or the second, exemplified by what was then known as calamine—a mixed ore containing zinc carbonate and silicate.

Organization of elements by types

Just as the ancients distinguished metals from other minerals, similar distinctions developed as the modern idea of chemical elements emerged in the late 1700s. The French chemist Antoine Lavoisier published the first modern list of chemical elements in his revolutionary 1789 Traité élémentaire de chimie. The 33 elements known to Lavoisier were categorized into four distinct groups: gases; metallic substances; nonmetallic substances that form acids when oxidized; and earths (heat-resistant oxides). Lavoisier's work gained widespread recognition and was republished in twenty-three editions across six languages within its first seventeen years, significantly advancing the understanding of chemistry in Europe and America. In 1802 the term "metalloids" was introduced for elements with the physical properties of metals but the chemical properties of nonmetals.
However, in 1811, the Swedish chemist Berzelius used the term "metalloids" to describe all nonmetallic elements, noting their ability to form negatively charged ions with oxygen in aqueous solutions. Thus, in 1864, the "Manual of Metalloids" divided all elements into either metals or metalloids, with the latter group including the elements now called nonmetals. Reviews of the book indicated that the term "metalloids" was still endorsed by leading authorities, but there were reservations about its appropriateness. While Berzelius' terminology gained significant acceptance, it later faced criticism from some who found it counterintuitive, misapplied, or even invalid; the idea of instead designating elements like arsenic as metalloids had been considered. By as early as 1866, some authors began preferring the term "nonmetal" over "metalloid" to describe nonmetallic elements. In 1875, Kemshead observed that elements were categorized into two groups: non-metals (or metalloids) and metals. He noted that the term "non-metal", despite its compound nature, was more precise and had become universally accepted as the nomenclature of choice.

Development of types

In 1844, Alphonse Dupasquier, a French doctor, pharmacist, and chemist, established a basic taxonomy of nonmetals to aid in their study. He wrote: They will be divided into four groups or sections, as in the following:
Organogens—oxygen, nitrogen, hydrogen, carbon
Sulphuroids—sulfur, selenium, phosphorus
Chloroides—fluorine, chlorine, bromine, iodine
Boroids—boron, silicon.
Dupasquier's quartet parallels the modern nonmetal types. The organogens and sulphuroids are akin to the unclassified nonmetals. The chloroides were later called halogens. The boroids eventually evolved into the metalloids, with this classification beginning from as early as 1864. The then-unknown noble gases were recognized as a distinct nonmetal group after being discovered in the late 1800s. Dupasquier's taxonomy was noted for its natural basis. That said, it was a significant departure from other contemporary classifications, since it grouped together oxygen, nitrogen, hydrogen, and carbon. In 1828 and 1859, the French chemist Dumas classified nonmetals as (1) hydrogen; (2) fluorine to iodine; (3) oxygen to sulfur; (4) nitrogen to arsenic; and (5) carbon, boron and silicon, thereby anticipating the vertical groupings of Mendeleev's 1871 periodic table. Dumas' five classes fall into modern groups 1, 17, 16, 15, and 14 to 13 respectively.

Suggested distinguishing criteria

Much of the early analysis was phenomenological, and a variety of physical, chemical, and atomic properties have been suggested for distinguishing metals from nonmetals (or other bodies). A comprehensive early set of characteristics was stated by the Rev. Thaddeus Mason Harris in the 1803 Minor Encyclopedia: METAL, in natural history and chemistry, the name of a class of simple bodies; of which it is observed, that they possess a lustre; that they are opaque; that they are fusible, or may be melted; that their specific gravity is greater than that of any other bodies yet discovered; that they are better conductors of electricity, than any other body; that they are malleable, or capable of being extended and flattened by the hammer; and that they are ductile or tenacious, that is, capable of being drawn out into threads or wires.
Some criteria did not last long. For instance, after the British chemist and inventor Humphry Davy isolated sodium and potassium in 1807, their low densities contrasted with their metallic appearance, so density became a tenuous criterion, even though the metallic status of these elements was firmly established by their chemical properties. Johnson takes a similar approach to Harris, distinguishing between metals and nonmetals on the basis of their physical states, electrical conductivity, mechanical properties, and the acid-base nature of their oxides:
gaseous elements are nonmetals (hydrogen, nitrogen, oxygen, fluorine, chlorine and the noble gases);
liquids (mercury, bromine) are either metallic or nonmetallic: mercury, as a good conductor, is a metal; bromine, with its poor conductivity, is a nonmetal;
solids are either ductile and malleable, hard and brittle, or soft and crumbly:
a. ductile and malleable elements are metals;
b. hard and brittle elements include boron, silicon and germanium, which are semiconductors and therefore not metals; and
c. soft and crumbly elements include carbon, phosphorus, sulfur, arsenic, antimony, tellurium and iodine, which have acidic oxides indicative of nonmetallic character.
Several authors have noted that nonmetals generally have low densities and high electronegativity. The accompanying table, using thresholds of 7 g/cm3 for density and 1.9 for electronegativity (revised Pauling), shows that all nonmetals have low density and high electronegativity. In contrast, all metals have either high density or low electronegativity (or both). Goldwhite and Spielman added that "... lighter elements tend to be more electronegative than heavier ones." The average electronegativity of the elements in the table with densities less than 7 g/cm3 (metals and nonmetals) is 1.97, compared to 1.66 for the metals with densities of more than 7 g/cm3. There is not full agreement about the use of such phenomenological properties. Emsley pointed out the complexity of the task, asserting that no single property alone can unequivocally assign elements to either the metal or nonmetal category. Some authors divide elements into metals, metalloids, and nonmetals, but Oderberg disagrees, arguing that by the principles of categorization, anything not classified as a metal should be considered a nonmetal. Kneen and colleagues proposed that the classification of nonmetals can be achieved by establishing a single criterion for metallicity. They acknowledged that various plausible classifications exist, and emphasized that while these classifications may differ to some extent, they would generally agree on the categorization of nonmetals. They describe electrical conductivity as the key property, arguing that this is the most common approach. One of the most commonly recognized properties used is the temperature coefficient of resistivity, the effect of heating on electrical resistance and conductivity. As temperature rises, the conductivity of metals decreases while that of nonmetals increases. However, plutonium, carbon, arsenic, and antimony appear to defy the norm. When plutonium (a metal) is heated within a temperature range of −175 to +125 °C, its conductivity increases. Similarly, despite its common classification as a nonmetallic element, carbon (as graphite) is a semimetal and experiences a decrease in electrical conductivity when heated. Arsenic and antimony, which are occasionally classified as nonmetallic elements, are also semimetals and show behavior similar to that of carbon.
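The two-threshold heuristic lends itself to a few lines of code. In the Python sketch below, the 7 g/cm3 and 1.9 thresholds are the ones cited above; the sample densities and electronegativities are standard reference values rather than figures from the article, and the function name is purely illustrative. The final check verifies the halogen electronegativity average (3.19) quoted earlier.

```python
# Heuristic from the text: nonmetals have low density AND high
# electronegativity; metals have high density OR low electronegativity.
# Sample values are standard reference data (assumed for illustration).
SAMPLES = {  # symbol: (density in g/cm^3, revised Pauling electronegativity)
    "S": (2.07, 2.58), "I": (4.93, 2.66), "Si": (2.33, 1.90),
    "Na": (0.97, 0.93), "Fe": (7.87, 1.83), "Au": (19.3, 2.54),
}

def looks_nonmetallic(density, electronegativity):
    """Indicative only; no single property settles the question."""
    return density < 7.0 and electronegativity >= 1.9

for symbol, (rho, chi) in SAMPLES.items():
    label = "nonmetal-like" if looks_nonmetallic(rho, chi) else "metal-like"
    print(f"{symbol}: {label}")

# The halogen electronegativity average quoted earlier checks out:
halogens = [3.98, 3.16, 2.96, 2.66]  # F, Cl, Br, I
assert round(sum(halogens) / len(halogens), 2) == 3.19
```

Note that silicon lands exactly on the 1.9 electronegativity threshold, echoing the article's point that the metalloids occupy contested frontier territory.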
Comparison of selected properties

The two tables in this section list some of the properties of five types of elements (noble gases, halogen nonmetals, unclassified nonmetals, metalloids and, for comparison, metals) based on their most stable forms at standard temperature and pressure. The dashed lines around the columns for metalloids signify that the treatment of these elements as a distinct type can vary depending on the author or classification scheme in use.

Physical properties by element type

Physical properties are listed in loose order of ease of their determination.

Chemical properties by element type

Chemical properties are listed from general characteristics to more specific details.
† Hydrogen can also form alloy-like hydrides.
‡ The labels low, moderate, high, and very high are arbitrarily based on the value spans listed in the table.

See also

CHON (carbon, hydrogen, oxygen, nitrogen)
List of nonmetal monographs
Metallization pressure
Nonmetal (astrophysics)
Period 1 elements (hydrogen & helium)
Properties of nonmetals (and metalloids) by group

Notes

References

Citations

Bibliography

Abbott D 1966, An Introduction to the Periodic Table, J. M. Dent & Sons, London Addison WE 1964, The Allotropy of the Elements, Oldbourne Press, London Atkins PA et al. 2006, Shriver & Atkins' Inorganic Chemistry, 4th ed., Oxford University Press, Oxford, Aylward G and Findlay T 2008, SI Chemical Data, 6th ed., John Wiley & Sons Australia, Milton, Bache AD 1832, "An essay on chemical nomenclature, prefixed to the treatise on chemistry; by J. J. Berzelius", American Journal of Science, vol. 22, pp. 248–277 Baker PS et al. 1962, Chemistry and You, Lyons and Carnahan, Chicago Barton AFM 2021, States of Matter, States of Mind, CRC Press, Boca Raton, Beach FC (ed.) 1911, The Americana: A universal reference library, vol. XIII, Mel–New, Metalloid, Scientific American Compiling Department, New York Beard A, Battenberg, C & Sutker BJ 2021, "Flame retardants", in Ullmann's Encyclopedia of Industrial Chemistry, Beiser A 1987, Concepts of modern physics, 4th ed., McGraw-Hill, New York, Benner SA, Ricardo A & Carrigan MA 2018, "Is there a common chemical model for life in the universe?", in Cleland CE & Bedau MA (eds.), The Nature of Life: Classical and Contemporary Perspectives from Philosophy and Science, Cambridge University Press, Cambridge, Benzhen et al. 2020, Metals and non-metals in the periodic table, Philosophical Transactions of the Royal Society A, vol. 378, 20200213 Berger LI 1997, Semiconductor Materials, CRC Press, Boca Raton, Bertomeu-Sánchez JR, Garcia-Belmar A & Bensaude-Vincent B 2002, "Looking for an order of things: Textbooks and chemical classifications in nineteenth century France", Ambix, vol. 49, no. 3, Berzelius JJ 1811, 'Essai sur la nomenclature chimique', Journal de Physique, de Chimie, d'Histoire Naturelle, vol. LXXIII, pp. 253‒286 Bhuwalka et al. 2021, "Characterizing the changes in material use due to vehicle electrification", Environmental Science & Technology vol. 55, no. 14, Bogoroditskii NP & Pasynkov VV 1967, Radio and Electronic Materials, Iliffe Books, London Bohlmann R 1992, "Synthesis of halides", in Winterfeldt E (ed.), Heteroatom manipulation, Pergamon Press, Oxford, Boreskov GK 2003, Heterogeneous Catalysis, Nova Science, New York, Brady JE & Senese F 2009, Chemistry: The study of Matter and its Changes, 5th ed., John Wiley & Sons, New York, Brande WT 1821, A Manual of Chemistry, vol.
II, John Murray, London Brandt HG & Weiler H, 2000, "Welding and cutting", in Ullmann's Encyclopedia of Industrial Chemistry, Brannt WT 1919, Metal Worker's Handy-book of Receipts and Processes, HC Baird & Company, Philadelphia Brown TL et al. 2014, Chemistry: The Central Science, 3rd ed., Pearson Australia: Sydney, Burford N, Passmore J & Sanders JCP 1989, "The preparation, structure, and energetics of homopolyatomic cations of groups 16 (the chalcogens) and 17 (the halogens)", in Liebman JF & Greenberg A (eds.), From atoms to polymers: isoelectronic analogies, VCH, New York, Bynum WF, Browne J & Porter R 1981 (eds), Dictionary of the History of Science, Princeton University Press, Princeton, Cahn RW & Haasen P, Physical Metallurgy: Vol. 1, 4th ed., Elsevier Science, Amsterdam, Cao C et al. 2021, "Understanding periodic and non-periodic chemistry in periodic tables", Frontiers in Chemistry, vol. 8, no. 813, Carapella SC 1968, "Arsenic" in Hampel CA (ed.), The Encyclopedia of the Chemical Elements, Reinhold, New York Carmalt CJ & Norman NC 1998, "Arsenic, antimony and bismuth: Some general properties and aspects of periodicity", in Norman NC (ed.), Chemistry of Arsenic, Antimony and Bismuth, Blackie Academic & Professional, London, pp. 1–38, Carrasco et al. 2023, "Antimonene: a tuneable post-graphene material for advanced applications in optoelectronics, catalysis, energy and biomedicine", Chemical Society Reviews, vol. 52, no. 4, p. 1288–1330, Challoner J 2014, The Elements: The New Guide to the Building Blocks of our Universe, Carlton Publishing Group, Chambers E 1743, in "Metal", Cyclopedia: Or an Universal Dictionary of Arts and Sciences (etc.), vol. 2, D Midwinter, London Chambers C & Holliday AK 1982, Inorganic Chemistry, Butterworth & Co., London, Chandra X-ray Observatory 2018, Abundance Pie Chart, accessed 26 October 2023 Chapin FS, Matson PA & Vitousek PM 2011, Earth's climate system, in Principles of Terrestrial Ecosystem Ecology, Springer, New York, Charlier J-C, Gonze X, Michenaud J-P 1994, "First-principles study of the stacking effect on the electronic properties of graphite(s)", Carbon, vol. 32, no. 2, pp. 289–99, Chedd G 1969, Half-way elements: The technology of metalloids, Double Day, Garden City, NY Chemical Abstracts Service 2021, CAS REGISTRY database as of November 2, Case #01271182 Chen K 1990, Industrial Power Distribution and Illuminating Systems, Marcel Dekker, New York, Chung DD 1987, "Review of exfoliated graphite", Journal of Materials Science, vol. 22, Clugston MJ & Flemming R 2000, Advanced Chemistry, Oxford University Press, Oxford, Cockell C 2019, The Equations of Life: How Physics Shapes Evolution, Atlantic Books, London, Cook CG 1923, Chemistry in Everyday Life: With Laboratory Manual, D Appleton, New York Cotton A et al. 1999, Advanced Inorganic Chemistry, 6th ed., Wiley, New York, Cousins DM, Davidson MG & García-Vivó D 2013, "Unprecedented participation of a four-coordinate hydrogen atom in the cubane core of lithium and sodium phenolates", Chemical Communications, vol. 
49, Cox PA 1997, The Elements: Their Origins, Abundance, and Distribution, Oxford University Press, Oxford, Cox T 2004, Inorganic Chemistry, 2nd ed., BIOS Scientific Publishers, London, Crawford FH 1968, Introduction to the Science of Physics, Harcourt, Brace & World, New York Cressey D 2010, "Chemists re-define hydrogen bond" , Nature newsblog, accessed August 23, 2017 Crichton R 2012, Biological Inorganic Chemistry: A New Introduction to Molecular Structure and Function, 2nd ed., Elsevier, Amsterdam, Criswell B 2007, "Mistake of having students be Mendeleev for just a day", Journal of Chemical Education, vol. 84, no. 7, pp. 1140–1144, Crow JM 2013, Main group renaissance, Chemistry World, 31 May, accessed 26 December 2023 Csele M 2016, Lasers, in Ullmann's Encyclopedia of Industrial Chemistry, Dalton L 2019, "Argon reacts with nickel under pressure-cooker conditions", Chemical & Engineering News, accessed November 6, 2019 de Clave E 1651, Nouvelle Lumière philosophique des vrais principes et élémens de nature, et qualité d'iceux, contre l'opinion commune, Olivier de Varennes, Paris Daniel PL & Rapp RA 1976, "Halogen corrosion of metals", in Fontana MG & Staehle RW (eds.), Advances in Corrosion Science and Technology, Springer, Boston, de L'Aunay L 1566, Responce au discours de maistre Iacques Grevin, docteur de Paris, qu'il a escript contre le livre de maistre Loys de l'Aunay, medecin en la Rochelle, touchant la faculté de l'antimoine (Response to the Speech of Master Jacques Grévin,... Which He Wrote Against the Book of Master Loys de L'Aunay,... Touching the Faculty of Antimony), De l'Imprimerie de Barthelemi Berton, La Rochelle Davis et al. 2006, "Atomic iodine lasers", in Endo M & Walter RF (eds) 2006, Gas Lasers, CRC Press, Boca Raton, Florida, DeKock RL & Gray HB 1989, Chemical structure and bonding, University Science Books, Mill Valley, CA, Dejonghe L 1998, "Zinc–lead deposits of Belgium", Ore Geology Reviews, vol. 12, no. 5, 329–354, Desai PD, James HM & Ho CY 1984, "Electrical resistivity of aluminum and manganese", Journal of Physical and Chemical Reference Data, vol. 13, no. 4, Donohue J 1982, The Structures of the Elements, Robert E. Krieger, Malabar, Florida, Dorsey MG 2023, Holding Their Breath: How the Allies Confronted the Threat of Chemical Warfare in World War II, Cornell University Press, Ithaca, New York, pp. 12–13, Douglade J, Mercier R 1982, Structure cristalline et covalence des liaisons dans le sulfate d’arsenic(III), As2(SO4)3, Acta Crystallographica Section B, vol. 38, no, 3, 720–723, Du Y, Ouyang C, Shi S & Lei M 2010, Ab initio studies on atomic and electronic structures of black phosphorus, Journal of Applied Physics, vol. 107, no. 9, pp. 093718–1–4, Duffus JH 2002, " 'Heavy metals'—A meaningless term?", Pure and Applied Chemistry, vol. 74, no. 5, pp. 793–807, Dumas JBA 1828, Traité de Chimie Appliquée aux Arts, Béchet Jeune, Paris Dumas JBA 1859, Mémoire sur les Équivalents des Corps Simples, Mallet-Bachelier, Paris Dupasquier A 1844, Traité élémentaire de chimie industrielle, Charles Savy Juene, Lyon Eagleson M 1994, Concise Encyclopedia Chemistry, Walter de Gruyter, Berlin, Earl B & Wilford D 2021, Cambridge O Level Chemistry, Hodder Education, London, Edwards PP 2000, "What, why and when is a metal?", in Hall N (ed.), The New Chemistry, Cambridge University, Cambridge, pp. 85–114, Edwards PP et al. 2010, "... a metal conducts and a non-metal doesn't", Philosophical Transactions of the Royal Society A, 2010, vol, 368, no. 
1914, Edwards PP & Sienko MJ 1983, "On the occurrence of metallic character in the periodic table of the elements", Journal of Chemical Education, vol. 60, no. 9, , Elliot A 1929, "The absorption band spectrum of chlorine", Proceedings of the Royal Society A, vol. 123, no. 792, pp. 629–644, Emsley J 1971, The Inorganic Chemistry of the Non-metals, Methuen Educational, London, Emsley J 2011, Nature's Building Blocks: An A–Z Guide to the Elements, Oxford University Press, Oxford, Encyclopaedia Britannica, 2021, Periodic table, accessed September 21, 2021 Engesser TA & Krossing I 2013, "Recent advances in the syntheses of homopolyatomic cations of the non metallic elements , , , , , , and ", Coordination Chemistry Reviews, vol. 257, nos. 5–6, pp. 946–955, Erman P & Simon P 1808, "Third report of Prof. Erman and State Architect Simon on their joint experiments", Annalen der Physik, vol. 28, no. 3, pp. 347–367 Evans RC 1966, An Introduction to Crystal Chemistry, 2nd ed., Cambridge University, Cambridge Faraday M 1853, The Subject Matter of a Course of Six Lectures on the Non-metallic Elements, (arranged by John Scoffern), Longman, Brown, Green, and Longmans, London Field JE (ed.) 1979, The Properties of Diamond, Academic Press, London, Florez et al. 2022, "From the gas phase to the solid state: The chemical bonding in the superheavy element flerovium", The Journal of Chemical Physics, vol. 157, 064304, Fortescue JAC 2012, Environmental Geochemistry: A Holistic Approach, Springer-Verlag, New York, Fox M 2010, Optical Properties of Solids, 2nd ed., Oxford University Press, New York, Fraps GS 1913, Principles of Agricultural Chemistry, The Chemical Publishing Company, Easton, PA Fraústo da Silva JJR & Williams RJP 2001, The Biological Chemistry of the Elements: The Inorganic Chemistry of Life, 2nd ed., Oxford University Press, Oxford, Gaffney J & Marley N 2017, General Chemistry for Engineers, Elsevier, Amsterdam, Ganguly A 2012, Fundamentals of Inorganic Chemistry, 2nd ed., Dorling Kindersley (India), New Delhi Gargaud M et al. (eds.) 2006, Lectures in Astrobiology, vol. 1, part 1: The Early Earth and Other Cosmic Habitats for Life, Springer, Berlin, Gatti M, Tokatly IV & Rubio A, 2010, Sodium: a charge-transfer insulator at high pressures, Physical Review Letters, vol. 104, no. 21, Georgievskii VI 1982, Mineral compositions of bodies and tissues of animals, in Georgievskii VI, Annenkov BN & Samokhin VT (eds), Mineral Nutrition of Animals: Studies in the Agricultural and Food Sciences, Butterworths, London, Gillespie RJ, Robinson EA 1959, The sulfuric acid solvent system, in Emeléus HJ, Sharpe AG (eds), Advances in Inorganic Chemistry and Radiochemistry, vol. 1, pp. 386–424, Academic Press, New York Gillham EJ 1956, A semi-conducting antimony bolometer, Journal of Scientific Instruments, vol. 33, no. 9, Glinka N 1960, General chemistry, Sobolev D (trans.), Foreign Languages Publishing House, Moscow Godfrin H & Lauter HJ 1995, "Experimental properties of 3He adsorbed on graphite", in Halperin WP (ed.), Progress in Low Temperature Physics, volume 14, Elsevier Science B.V., Amsterdam, Godovikov AA & Nenasheva N 2020, Structural-chemical Systematics of Minerals, 3rd ed., Springer, Cham, Switzerland, Goldsmith RH 1982, "Metalloids", Journal of Chemical Education, vol. 59, no. 6, pp. 526–527, Goldwhite H & Spielman JR 1984, College Chemistry, Harcourt Brace Jovanovich, San Diego, Goodrich BG 1844, A Glance at the Physical Sciences, Bradbury, Soden & Co., Boston Gresham et al. 
2015, Lubrication and lubricants, in Kirk-Othmer Encyclopedia of Chemical Technology, John Wiley & Sons, , accessed Jun 3, 2024 Grondzik WT et al. 2010, Mechanical and Electrical Equipment for Buildings, 11th ed., John Wiley & Sons, Hoboken, Government of Canada 2015, Periodic table of the elements, accessed August 30, 2015 Graves Jr JL 2022, A Voice in the Wilderness: A Pioneering Biologist Explains How Evolution Can Help Us Solve Our Biggest Problems, Basic Books, New York, , Green D 2012, The Elements, Scholastic, Southam, Warwickshire, Greenberg A 2007, From alchemy to chemistry in picture and story, John Wiley & Sons, Hoboken, NJ, 978-0-471-75154-0 Greenwood NN 2001, Main group element chemistry at the millennium, Journal of the Chemical Society, Dalton Transactions, no. 14, pp. 2055–66, Greenwood NN & Earnshaw A 2002, Chemistry of the Elements, 2nd ed., Butterworth-Heinemann, Grochala W 2018, "On the position of helium and neon in the Periodic Table of Elements", Foundations of Chemistry, vol. 20, pp. 191–207, Hall RA 2021, Pop Goes the Decade: The 2000s, ABC-CLIO, Santa Barbara, California, Haller EE 2006, "Germanium: From its discovery to SiGe devices", Materials Science in Semiconductor Processing, vol. 9, nos 4–5, accessed 9 October 2013 Hampel CA & Hawley GG 1976, Glossary of Chemical Terms, Van Nostrand Reinhold, New York, Hanley JJ & Koga KT 2018, "Halogens in terrestrial and cosmic geochemical systems: Abundances, geochemical behaviors, and analytical methods" in The Role of Halogens in Terrestrial and Extraterrestrial Geochemical Processes: Surface, Crust, and Mantle, Harlov DE & Aranovich L (eds.), Springer, Cham, Harbison RD, Bourgeois MM & Johnson GT 2015, Hamilton and Hardy's Industrial Toxicology, 6th ed., John Wiley & Sons, Hoboken, Hare RA & Bache F 1836, Compendium of the Course of Chemical Instruction in the Medical Department of the University of Pennsylvania, 3rd ed., JG Auner, Philadelphia Harris TM 1803, The Minor Encyclopedia, vol. III, West & Greenleaf, Boston Hein M & Arena S 2011, Foundations of College Chemistry, 13th ed., John Wiley & Sons, Hoboken, New Jersey, Hengeveld R & Fedonkin MA 2007, "Bootstrapping the energy flow in the beginning of life", Acta Biotheoretica, vol. 55, Herman ZS 1999, "The nature of the chemical bond in metals, alloys, and intermetallic compounds, according to Linus Pauling", in Maksić, ZB, Orville-Thomas WJ (eds.), 1999, Pauling's Legacy: Modern Modelling of the Chemical Bond, Elsevier, Amsterdam, Hermann A, Hoffmann R & Ashcroft NW 2013, "Condensed astatine: Monatomic and metallic", Physical Review Letters, vol. 111, Hérold A 2006, "An arrangement of the chemical elements in several classes inside the periodic table according to their common properties", Comptes Rendus Chimie, vol. 9, no. 1, Herzfeld K 1927, "On atomic properties which make an element a metal", Physical Review, vol. 29, no. 5, Hill G, Holman J & Hulme PG 2017, Chemistry in Context, 7th ed., Oxford University Press, Oxford, Hoefer F 1845, Nomenclature et Classifications Chimiques, J.-B. Baillière, Paris Holderness A & Berry M 1979, Advanced Level Inorganic Chemistry, 3rd ed., Heinemann Educational Books, London, Horvath AL 1973, "Critical temperature of elements and the periodic system", Journal of Chemical Education, vol. 50, no. 
https://en.wikipedia.org/wiki/Kuwanon%20G
Kuwanon G is an antimicrobial bombesin receptor antagonist isolated from white mulberry (Morus alba).
https://en.wikipedia.org/wiki/Diurnal%20motion
In astronomy, diurnal motion is the apparent motion of celestial objects (e.g. the Sun and stars) around Earth, or more precisely around the two celestial poles, over the course of one day. It is caused by Earth's rotation around its axis, so almost every star appears to follow a circular arc path, called the diurnal circle, often depicted in star trail photography. The time for one complete rotation is 23 hours, 56 minutes, and 4.09 seconds – one sidereal day. The first experimental demonstration of this motion was conducted by Léon Foucault. Because Earth orbits the Sun once a year, the sidereal time at any given place and time will gain about four minutes against local civil time, every 24 hours, until, after a year has passed, one additional sidereal "day" has elapsed compared to the number of solar days that have gone by.

Relative direction
The relative directions of diurnal motion in the Northern Celestial Hemisphere are as follows:
Facing north, below Polaris: rightward, or eastward
Facing north, above Polaris: leftward, or westward
Facing south: rightward, or westward
Thus, northern circumpolar stars move counterclockwise around Polaris, the north pole star. At the North Pole, the cardinal directions do not apply to diurnal motion. Within the circumpolar circle, all the stars move simply rightward or, looking directly overhead, counterclockwise around the zenith, where Polaris is. Southern Celestial Hemisphere observers are to replace north with south, left with right, and Polaris with Sigma Octantis, sometimes called the south pole star. The circumpolar stars move clockwise around Sigma Octantis. East and west are not interchanged. As seen from the Equator, the two celestial poles are on the horizon due north and south, and the motion is counterclockwise (i.e. leftward) around Polaris and clockwise (i.e. rightward) around Sigma Octantis. All motion is westward, except for the two fixed points.

Apparent speed
The daily arc path of an object on the celestial sphere, including the possible part below the horizon, has a length proportional to the cosine of the declination. Thus, the speed of the diurnal motion of a celestial object equals this cosine times 15° per hour, 15 arcminutes per minute, or 15 arcseconds per second (a short worked calculation appears at the end of this article). Over a given period of time, the angular distance travelled by an object along or near the celestial equator may be compared to the angular diameter of one of the following objects:
up to one Sun or Moon diameter (about 0.5° or 30') every 2 minutes
up to one diameter of the planet Venus in inferior conjunction (about 1' or 60") about every 4 seconds
2,000 diameters of the largest stars per second
Star trail and time-lapse photography capture diurnal motion blur. The apparent motion of stars near the celestial pole seems slower than that of stars closer to the celestial equator. Conversely, following the diurnal motion with the camera to eliminate its arcing effect on a long exposure can best be done with an equatorial mount, which requires adjusting the right ascension only; a telescope may have a sidereal motor drive to do that automatically.

See also
Direction determination
Position of the Sun

External links
Timelapse video of a 5 hour diurnal motion - YouTube
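The cosine rule above lends itself to a short worked calculation. The following minimal Python sketch (the function name and sample declinations are our own illustration, not part of the article) reproduces the figure quoted above of one Sun or Moon diameter every 2 minutes for an object on the celestial equator; conveniently, 1 degree per hour equals exactly 1 arcsecond per second of time, so the same number reads in both units.

import math

def diurnal_speed(declination_deg):
    # Apparent diurnal angular speed in degrees per hour.
    # An object's diurnal circle shrinks with the cosine of its
    # declination, so the speed is 15 deg/h times cos(declination).
    return 15.0 * math.cos(math.radians(declination_deg))

# Time for an equatorial object to move one Sun/Moon diameter (~0.5 deg):
diameter_deg = 0.5
minutes = diameter_deg / diurnal_speed(0.0) * 60.0
print(f"{minutes:.0f} minutes")  # ~2 minutes, as stated above

for dec in (0.0, 45.0, 80.0):
    print(f"dec {dec:4.1f} deg -> {diurnal_speed(dec):5.2f} deg/h (= arcsec/s)")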
https://en.wikipedia.org/wiki/Map-coloring%20games
Several map-coloring games are studied in combinatorial game theory. The general idea is that we are given a map with regions drawn in, but with some regions left uncolored. Two players, Left and Right, take turns coloring in one uncolored region per turn, subject to various constraints, as in the map-coloring problem. The move constraints and the winning condition are features of the particular game.

Some players find it easier to color vertices of the dual graph, as in the Four color theorem. In this method of play, the regions are represented by small circles, and the circles for neighboring regions are linked by line segments or curves. The advantages of this method are that only a small area need be marked on a turn, and that the representation usually takes up less space on the paper or screen. The first advantage is less important when playing with a computer interface instead of pencil and paper. It is also possible to play with Go stones or checkers.

Move constraints
An inherent constraint in each game is the set of colors available to the players in coloring regions. If Left and Right have the same colors available to them, the game is impartial; otherwise the game is partisan. The set of colors could also depend on the state of the game; for instance it could be required that the color used be different from the color used on the previous move.

The map-based constraints on a move are usually based on the region to be colored and its neighbors, whereas in the map-coloring problem, regions are considered to be neighbors when they meet along a boundary longer than a single point. The classical map-coloring problem requires that no two neighboring regions be given the same color. The classical move constraint enforces this by prohibiting coloring a region with the same color as one of its neighbors. The anticlassical constraint prohibits coloring a region with a color that differs from the color of one of its neighbors.

Another kind of constraint is entailment, in which each move after the first must color a neighbor of the region colored on the previous move. Anti-entailment is another possible constraint. Other sorts of constraints are possible, such as requiring regions that are neighbors of neighbors to use different or identical colors. This concept can be considered as applying to regions at graph distance two, and can be generalized to greater distances.

Winning conditions
The winner is usually the last player to move. This is called the normal play convention. The misère play convention considers the last player to move to lose the game. Other winning and losing conditions are possible, such as counting territory, as in Go.

Monochrome and variants
These games, which appeared in (Silverman, 1971), all use the classical move constraint. In the impartial game "Monochrome" there is only one color available, so every move removes the colored region and its neighbors from play. In "Bichrome" both players have a choice of two colors, subject to the classical constraint. Both players choose from the same two colors, so the game is impartial. "Trichrome" extends this to three colors. The constraint can be extended to any fixed number of colors, yielding further games. As Silverman mentions, although the Four color theorem shows that any planar map can be colored with four colors, it does not apply to maps in which some of the colors have been filled in, so adding more than four colors may have an effect on the games.
Col and Snort
In "Col" there are two colors subject to the classical constraint, but Left is only allowed to color regions blue, while Right is only allowed to color them red. Thus this is a partisan game, because different moves become available to Left and Right in the course of play. "Snort" uses a similar partisan assignment of two colors, but with the anticlassical constraint: neighboring regions are not allowed to be given different colors. Coloring the regions is explained as assigning fields to bulls and cows, where neighboring fields may not contain cattle of the opposite sex, lest they be distracted from their grazing. These games were presented and analyzed in (Conway, 1976). The names are mnemonic for the difference in constraints (classical map coloring versus animal noises), but Conway also attributes them to his colleagues Colin Vout and Simon Norton. (A small move-generation sketch for both games follows at the end of this article.)

Other games
The impartial game "Contact" (Silverman, 1971) uses a single color with the entailment constraint: all moves after the first color a neighbor of the most recently colored region. Silverman also provides an example of "Misère Contact". The concept of a map-coloring game may be extended to cover games such as Angels and Devils, where the rules for coloring are somewhat different in flavor.
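To make the Col and Snort move constraints concrete, here is a minimal Python sketch, assuming a map represented as an adjacency list; the data structure and function names are our own illustration, not taken from the sources cited above. Under the normal play convention, a player with no legal move loses.

# Map as an adjacency list: region -> set of neighboring regions.
# A partial coloring maps already-colored regions to "blue" or "red";
# uncolored regions are simply absent from the dictionary.

def legal_moves_col(adj, coloring, color):
    # Col (classical constraint): a region may receive `color`
    # only if no neighbor already has that same color.
    return [r for r in adj
            if r not in coloring
            and all(coloring.get(n) != color for n in adj[r])]

def legal_moves_snort(adj, coloring, color):
    # Snort (anticlassical constraint): a region may receive `color`
    # only if no neighbor has a *different* color.
    return [r for r in adj
            if r not in coloring
            and all(coloring.get(n) in (None, color) for n in adj[r])]

# A four-region map: a touches b and c; c also touches d.
adj = {"a": {"b", "c"}, "b": {"a"}, "c": {"a", "d"}, "d": {"c"}}
coloring = {"b": "red"}   # Right has colored region b

print(legal_moves_col(adj, coloring, "blue"))    # ['a', 'c', 'd']
print(legal_moves_snort(adj, coloring, "blue"))  # ['c', 'd'] -- region a touches red b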
https://en.wikipedia.org/wiki/Haim%20Aviv
Haim Aviv (born Haim Greenspan in Romania, 1940) was an Israeli scientist who specialized in the field of molecular biology. Aviv is considered to have played a fundamental role in shaping the biotechnology industry in Israel, in which he was widely involved from the late 1970s until his death.

Birth and education
Aviv was born in Arad, Romania, in 1940, and migrated to Israel at the age of ten. He was raised in the city of Rehovot, where he resided until his death in 2021. In his early twenties he developed an interest in the field of agriculture and planned a future in that field, almost with no connection to science. After completing an M.Sc. in the Faculty of Agriculture at the Hebrew University of Jerusalem, he began his doctorate studies at the Weizmann Institute of Science in Rehovot. At first Aviv was interested in the study of plant biology, but he soon found himself fascinated with molecular biology. In 1970 Aviv completed his doctorate, which focused on protein synthesis, and was offered a postdoctoral fellowship in Dr. Philip Leder's lab at the National Institutes of Health (NIH) in the US. In 1973 he returned to Israel, to the Weizmann Institute of Science, as a senior scientist and was then appointed associate professor with tenure. In his spare time, Aviv studied Judaism and Jewish history, especially of the Holocaust. He also collected ancient Judaica books.

Academic work
During his work at NIH, Aviv focused on the research of molecular processes related to differentiation, and on the synthesis control of globins and immunoglobulins, under the guidance of Dr. Philip Leder. This work was published in a series of papers, some of which were among the most cited in the field of molecular biology for decades. Aviv developed a method for the purification of messenger RNA (mRNA) which enabled research into control mechanisms of the translation of RNA to protein, and studied the role of mRNA in the differentiation of cells and tissues. In cooperation with Edward Scolnick's lab he performed research which first made it possible to synthesize complementary DNA (cDNA), a method which was later widely used in the research of gene control mechanisms. After his return to Israel, he continued to study the field of mRNA differentiation and synthesis. During the late 1970s, Aviv entered the field of recombinant DNA (genetic engineering) and applied research, and produced a microorganism (E. coli) containing the gene for bovine growth hormone. The patented method he developed is used for the industrial production of growth hormone, in order to increase milk production by cows. Bovine growth hormone is in wide use in today's dairy industry.

Advancement of Israeli biotechnology industry
In 1980, Aviv initiated and started "Biotechnology General Corp.", the first Israeli biotechnology company focusing on recombinant DNA (genetic engineering), which exists to this day. The foundation of the company was encouraged by Ephraim Katzir, formerly the president of Israel, and was supported by a group of investors led by Fred Adler, one of the leading venture capitalists. After he retired from "Biotechnology General Ltd.", Aviv founded "Diatech Ltd.", which specialized in medical diagnostic tools, and "Pharmos Corp.", in which he served as chairman and CEO.
Pharmos initially focused on the development of drugs for the treatment of eye disease (Lotemax), but was mostly known for its later work to develop an innovative drug invented by Professor Raphael Mechoulam of the Hebrew University of Jerusalem (Dexanabinol), a synthetic cannabinoid to treat head injury victims. Despite promising results in Phase 2 clinical trials, the large-scale Phase 3 clinical trials did not meet expectations. The lack of success in the trials provoked a negative response from the company's investors and the Israeli media. Aviv left his position at Pharmos in 2007. However, these clinical trials remain to this day the largest trials performed in the field of pharmaceutical treatment of head injury. Serious traumatic head injuries remain a medical challenge.

In 2000, Aviv founded "Predix Corp.", which focused on the development of pharmaceuticals using advanced computerization and three-dimensional algorithms invented by Dr. Oren Becker. As of 2011, Aviv focused his work on the company "Herbamed Ltd.", as chairman and major shareholder. Herbamed is an Israeli company which develops health-supporting food products (functional foods) and nutraceuticals, which are based on scientific research and clinical evidence. The products are marketed as snack bars and beverages under the brand Nutravida. The functional food segment is a rapidly growing field, which is expected to have a fundamental effect on population well-being.

Besides his position at Herbamed, Aviv also served on the board of directors of Yeda Research and Development Company Ltd., the commercial arm of the Weizmann Institute of Science, of Ben-Gurion University of the Negev, and of several companies in the field of drug development and medical devices. He chaired and was a member of the Israel National Committee for Biotechnology and other advisory committees on this subject. Among his major recommendations was the establishment of dedicated investment funds with the aid of government funding, a recommendation which was implemented with the foundation of two large-scale funds that include government aid.

External links
An editorial on the subject of "Bridging the Gap between Food and Health" written by Aviv in 2010
Website of Herbamed Company
https://en.wikipedia.org/wiki/Control-Lyapunov%20function
In control theory, a control-Lyapunov function (CLF) is an extension of the idea of Lyapunov function to systems with control inputs. The ordinary Lyapunov function is used to test whether a dynamical system is (Lyapunov) stable or (more restrictively) asymptotically stable. Lyapunov stability means that if the system starts in a state $x \ne 0$ in some domain D, then the state will remain in D for all time. For asymptotic stability, the state is also required to converge to $x = 0$. A control-Lyapunov function is used to test whether a system is asymptotically stabilizable, that is, whether for any state x there exists a control $u(x, t)$ such that the system can be brought to the zero state asymptotically by applying the control u. The theory and application of control-Lyapunov functions were developed by Zvi Artstein and Eduardo D. Sontag in the 1980s and 1990s.

Definition
Consider an autonomous dynamical system with inputs

$$\dot{x} = f(x, u) \qquad (1)$$

where $x \in \mathbb{R}^n$ is the state vector and $u \in \mathbb{R}^m$ is the control vector. Suppose our goal is to drive the system to an equilibrium $x_* \in \mathbb{R}^n$ from every initial state in some domain $D \subset \mathbb{R}^n$. Without loss of generality, suppose the equilibrium is at $x_* = 0$ (for an equilibrium $x_* \ne 0$, it can be translated to the origin by a change of variables).

Definition. A control-Lyapunov function (CLF) is a function $V : D \to \mathbb{R}$ that is continuously differentiable, positive-definite (that is, $V(x)$ is positive for all $x \in D$ except at $x = 0$ where it is zero), and such that for all $x \ne 0$ there exists $u$ such that

$$\langle \nabla V(x), f(x, u) \rangle < 0,$$

where $\langle \cdot, \cdot \rangle$ denotes the inner product on $\mathbb{R}^n$.

The last condition is the key condition; in words it says that for each state x we can find a control u that will reduce the "energy" V. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is, to bring the system to a stop. This is made rigorous by Artstein's theorem.

Some results apply only to control-affine systems, i.e., control systems of the following form:

$$\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x)\, u_i \qquad (2)$$

where $f : \mathbb{R}^n \to \mathbb{R}^n$ and $g_i : \mathbb{R}^n \to \mathbb{R}^n$ for $i = 1, \dots, m$.

Theorems
Eduardo Sontag showed that for a given control system, there exists a continuous CLF if and only if the origin is asymptotically stabilizable. It was later shown by Francis H. Clarke, Yuri Ledyaev, Eduardo Sontag, and A.I. Subbotin that every asymptotically controllable system can be stabilized by a (generally discontinuous) feedback. Artstein proved that the dynamical system (1) has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback u(x).

Constructing the Stabilizing Input
It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably. For the control-affine system (2), Sontag's formula (or Sontag's universal formula) gives the feedback law directly in terms of the derivatives of the CLF. In the special case of a single-input system $\dot{x} = f(x) + g(x)\,u$, Sontag's formula is written as

$$u = k(x) = \begin{cases} -\dfrac{L_f V(x) + \sqrt{\left(L_f V(x)\right)^2 + \left(L_g V(x)\right)^4}}{L_g V(x)} & \text{if } L_g V(x) \ne 0 \\[1ex] 0 & \text{if } L_g V(x) = 0 \end{cases}$$

where $L_f V(x) = \langle \nabla V(x), f(x) \rangle$ and $L_g V(x) = \langle \nabla V(x), g(x) \rangle$ are the Lie derivatives of $V$ along $f$ and $g$, respectively. For the general nonlinear system (1), the input can be found by solving a static non-linear programming problem

$$u^*(x) = \underset{u}{\arg\min}\; \langle \nabla V(x), f(x, u) \rangle$$

for each state x.

Example
Here is a characteristic example of applying a Lyapunov candidate function to a control problem. Consider the non-linear system, a mass-spring-damper system with spring hardening and position-dependent mass, described by

$$m(1 + q^2)\,\ddot{q} + b\,\dot{q} + K_0\,q + K_1\,q^3 = u.$$

Now given the desired state, $q_d$, and actual state, $q$, with error, $e = q_d - q$, define a function $r$ as

$$r = \dot{e} + \alpha e.$$

A control-Lyapunov candidate is then

$$V = \frac{1}{2} r^2,$$

which is positive for all $r \ne 0$.
Now taking the time derivative of $V$:

$$\dot{V} = r\,\dot{r} = r\,(\ddot{e} + \alpha \dot{e}).$$

The goal is to get the time derivative to be

$$\dot{V} = -\kappa V,$$

which is globally exponentially stable if $V$ is globally positive definite (which it is). Hence we want the rightmost bracket of $\dot{V}$, namely $(\ddot{e} + \alpha \dot{e})$, to fulfill the requirement

$$\ddot{e} + \alpha \dot{e} = -\frac{\kappa}{2}\, r,$$

which upon substitution of the dynamics, $\ddot{q} = \dfrac{u - b\,\dot{q} - K_0\,q - K_1\,q^3}{m(1 + q^2)}$, gives

$$\ddot{q}_d - \frac{u - b\,\dot{q} - K_0\,q - K_1\,q^3}{m(1 + q^2)} + \alpha \dot{e} = -\frac{\kappa}{2}\, r.$$

Solving for $u$ yields the control law

$$u = m(1 + q^2)\left(\ddot{q}_d + \alpha \dot{e} + \frac{\kappa}{2}\, r\right) + b\,\dot{q} + K_0\,q + K_1\,q^3,$$

with $\kappa$ and $\alpha$, both greater than zero, as tunable parameters. This control law will guarantee global exponential stability, since upon substitution into the time derivative it yields, as expected,

$$\dot{V} = -\kappa V,$$

which is a linear first-order differential equation with solution

$$V(t) = V(0)\, e^{-\kappa t}.$$

Hence the error and error rate, remembering that $r = \dot{e} + \alpha e$, exponentially decay to zero. If you wish to tune a particular response from this, it is necessary to substitute back into the solution derived for $V$ and solve for $e$. This is left as an exercise for the reader, but the first few steps of the solution are:

$$\frac{1}{2} r^2 = V(0)\, e^{-\kappa t} \;\Rightarrow\; r = \sqrt{2 V(0)}\; e^{-\kappa t / 2} \;\Rightarrow\; \dot{e} + \alpha e = \sqrt{2 V(0)}\; e^{-\kappa t / 2},$$

which can then be solved using any linear differential equation methods.

See also
Artstein's theorem
Lyapunov optimization
Drift plus penalty
Stability theory
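As a numerical check on the derivation above, here is a minimal Python sketch; the plant parameters, gains, and function names are illustrative assumptions rather than values from the article. It integrates the closed-loop mass-spring-damper system under the derived control law with a constant setpoint and confirms that $V = \frac{1}{2} r^2$ decays like $V(0)\,e^{-\kappa t}$.

import math

# Illustrative plant parameters and gains (assumptions, not from the article)
m, b, K0, K1 = 1.0, 0.5, 2.0, 1.0
alpha, kappa = 2.0, 4.0      # tunable parameters, both > 0
qd = 1.0                     # constant setpoint, so its derivatives vanish

def control(q, qdot):
    # u = m(1+q^2)(q_d'' + alpha*e' + (kappa/2) r) + b q' + K0 q + K1 q^3
    e, edot = qd - q, -qdot
    r = edot + alpha * e
    return m * (1 + q**2) * (alpha * edot + 0.5 * kappa * r) \
        + b * qdot + K0 * q + K1 * q**3

def accel(q, qdot, u):
    # Plant dynamics: m(1+q^2) q'' + b q' + K0 q + K1 q^3 = u
    return (u - b * qdot - K0 * q - K1 * q**3) / (m * (1 + q**2))

q, qdot, dt = 0.0, 0.0, 1e-4
V0 = 0.5 * (alpha * qd) ** 2          # V at t = 0 (starting from rest at q = 0)
for step in range(int(2.0 / dt) + 1):
    t = step * dt
    r = -qdot + alpha * (qd - q)      # r = e' + alpha*e, with e' = -q' here
    if step % int(0.5 / dt) == 0:     # print every 0.5 s
        print(f"t={t:4.2f}  V={0.5*r*r:.6f}  V(0)exp(-kt)={V0*math.exp(-kappa*t):.6f}")
    a = accel(q, qdot, control(q, qdot))
    q, qdot = q + dt * qdot, qdot + dt * a

The two printed columns should agree to within the integration error, confirming the exponential decay of the Lyapunov function under the derived feedback.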
https://en.wikipedia.org/wiki/Actin%20nucleation%20core
An actin nucleation core is a protein trimer with three actin monomers. It is called a nucleation core because it leads to the energetically favorable elongation reaction once a tetramer is formed from a trimer. Actin protein dimers and trimers are energetically unfavorable. Actin nucleators such as the Arp2/3 complex and proteins of the formin family are most frequently involved in this process. Actin nucleation factors start the polymerization of actin within cells. Many distinct proteins that can mediate the de novo nucleation of filaments directly interact with actin and promote it. This gives protrusive membrane formations their initial impetus. These entities may take the form of pseudopodia, invadopodia, or non-apoptotic membrane blebs.

Mechanism
The unfavorable kinetics of actin oligomer production prevent spontaneous actin polymerization. Once an actin nucleus has been created, the connection of the monomers happens swiftly, with the plus end growing considerably more quickly than the minus end. Actin's ATPase activity rises sharply after insertion into the filament. The filament becomes less stable as a result of spontaneous ATP hydrolysis and phosphate dissociation, making it more vulnerable to the effects of severing proteins such as those in the actin depolymerizing factor (ADF)/cofilin family. The kinetic barrier that prohibits spontaneous actin polymerization thus gives the cell a versatile tool for temporally and spatially controlling the assembly of de novo actin filaments: monomer-binding proteins limit the availability of subunits for filament production, while severing proteins, such as those in the destrin and cofilin families, regulate filament deconstruction.

Direct actin nucleation in response to external cues allows actin nucleators to swiftly and successfully initiate new actin filaments. These proteins serve as the targets of numerous intracellular signaling cascades. Most significantly, members of the Rho-GTPase family, including CDC42, are essential for controlling actin turnover and coordinating the control of actin-nucleating activities.

Additional application
Blocking or knocking out Arp2/3 in immature dendritic cells (iDCs) is sufficient to mimic the behavior of mature, LPS-treated dendritic cells (LPS-DCs) in terms of migration and macropinocytosis, suggesting that Arp2/3 expression or activity is downregulated as a result of LPS-induced DC maturation. Arp2/3 expression levels were unaffected by LPS treatment of DCs; however, it is likely that mature DCs exhibited reduced actin-nucleation activity. LPS-DCs and iDCs both require mDia1-dependent actin nucleation for locomotion, while iDCs link antigen intake to cell motility using Arp2/3-dependent actin nucleation. In response to LPS sensing, Arp2/3-dependent actin nucleation at the cell front is significantly reduced, which allows mature DCs to adopt a quick and directional migratory mode. Inhibition of Arp2/3 increased the speed and decreased the accumulation of F-actin at the front of iDCs. As a result of the absence of Arp2/3-dependent actin at the cell front, LPS-DCs migrate more quickly than iDCs. Arpc2KO iDCs saw a similar increase in cell velocity and moved as swiftly as LPS-DCs. Additionally, in under-agarose migration studies, Arpc2KO DCs migrated substantially more swiftly. This was unrelated to DC development.
In contrast to protrusion-based locomotion, the Arp2/3-dependent pool of F-actin present at the front of iDCs limits their migration.
https://en.wikipedia.org/wiki/Purdue%20University%20School%20of%20Aeronautics%20and%20Astronautics
The Purdue University School of Aeronautics and Astronautics is Purdue University's school of aerospace engineering, contained within the Purdue University College of Engineering. The school offers B.S., M.S., and Ph.D. degrees in aeronautical and astronautical engineering. It also provides distance graduate education, including an online M.S. in Engineering with a concentration in Aeronautics and Astronautics, and a distance Ph.D. Its main office and some of its labs are located in the Neil Armstrong Hall of Engineering. As of 2010, the school has awarded an estimated 6% of B.S. degrees and 7% of Ph.D.s in aerospace engineering in the United States.

History
Aeronautical engineering education and research at Purdue dates back to the early 1920s, when aeronautical engineering courses were offered as part of a senior aeronautical option in the mechanical engineering program. By the 1930s the course offerings in aeronautical engineering had expanded to eight, with many courses taught at the Purdue Airport, the world's first university-owned airport, which opened in 1934. A formal four-year curriculum in aeronautical engineering was developed by World War II, and in 1942 Mechanical Engineering became the "School of Mechanical and Aeronautical Engineering." The school was officially established as a separate degree program on July 1, 1945. Graduate education at the school began with a master's degree program in aeronautical engineering in 1946. The Ph.D. program was approved for aerodynamics and propulsion in 1948, followed by the structures area in the early 1950s. Purdue's first Ph.D. in aeronautical engineering was awarded to R. L. Duncan in 1950 for his work with Professor Maurice Zucrow on the performance of gas turbines. The school's present name was adopted in 1973. Purdue students have built and restored several aircraft as part of the program. The sole Curtiss P-6 Hawk was restored by students and resides at the National Museum of the United States Air Force. In 1971 students restored a Ryan PT-22 Recruit and completed a homebuilt Schreder HP-14 glider.

Facilities
Purdue is home to the largest aerospace propulsion laboratory in the world. Maurice J. Zucrow Laboratories spreads across 24 acres and includes research facilities in combustion, turbines and compressors, energetic materials, hypersonics, aerodynamics, and fluid mechanics. Purdue is also home to the largest indoor motion-capture space in the world, the Purdue UAS Research and Test Facility (PURT). PURT has 600,000 cubic feet of open space, which is large enough that fixed-wing drone aircraft can safely fly inside. Motion-capture cameras and systems, common in special effects for filmmaking, are used in drone research to verify unmanned systems' navigation algorithms: the positioning data calculated by the drone can be compared to the drone's actual position as determined by the motion-capture system. The motion-capture system at PURT provides accuracy to within one millimeter. In 2023, the Purdue Applied Research Institute opened its Hypersonics and Applied Research Facility, which has a Mach 8 quiet wind tunnel, designed to closely simulate hypersonic flight, and a hypersonic pulse shock tunnel that uses shock waves of high-temperature air to simulate various flight scenarios (donated by Northrop Grumman from its hypersonics facility in Ronkonkoma, New York).

Notable alumni
Many of its graduates have gone on to become astronauts or other prominent members of the aerospace and defense industry.
Purdue University has graduated 24 astronauts, more than any other public institution, and 15 of those hold degrees from the aerospace department. The only non-military institution to graduate more astronauts is the Massachusetts Institute of Technology. One-third of all of NASA's crewed space flights have had at least one Purdue graduate aboard, and two of the six American astronauts to fly on the Russian space station Mir held Purdue degrees.

Astronauts with Purdue aerospace degrees
Neil A. Armstrong, B.S. in Aeronautical Engineering, 1955
Sirisha Bandla, B.S. in Aeronautical and Astronautical Engineering, 2011
John E. Blaha, M.S. in Astronautics, 1966
Roy D. Bridges, Jr., M.S. in Astronautics, 1966
Mark N. Brown, B.S. in Aeronautical and Astronautical Engineering, 1973
John H. Casper, M.S. in Astronautics, 1967
Roger B. Chaffee, B.S. in Aeronautical Engineering, 1957
Richard O. Covey, M.S. in Aeronautics and Astronautics, 1969
Guy S. Gardner, M.S. in Aeronautics and Astronautics, 1970
Henry Charles Gordon, B.S. in Aeronautical and Astronautical Engineering, 1950
Gregory J. Harbaugh, B.S. in Aeronautical and Astronautical Engineering, 1978
Beth Moses, B.S. and M.S. in Aeronautical and Astronautical Engineering, 1992 and 1994
Loral O'Hara, M.S. in Aeronautical and Astronautical Engineering, 2009
Gary E. Payton, M.S. in Aeronautics and Astronautics, 1972
Mark L. Polansky, B.S., M.S. in Aeronautical and Astronautical Engineering, 1978
Audrey Powers, B.S. in Aeronautical and Astronautical Engineering, 1999
Loren J. Shriver, M.S. in Astronautics, 1968
Charles D. Walker, B.S. in Aeronautical and Astronautical Engineering, 1971

Aerospace engineers and inventors
Paul Bevilaqua, principal inventor of the lift fan engine for the Joint Strike Fighter F-35B
Gene Porter Bridwell, seventh director of NASA Marshall Space Flight Center
William H. Gerstenmaier, Associate Administrator for Space Operations for NASA
John L. Hudson, Program Director for the Joint Strike Fighter
John H. McMasters
Jordi Puig-Suari, co-inventor of the CubeSat
Daniel Raymer, a widely recognized expert in aircraft conceptual design

Business executives
Mike Moses, President of Virgin Galactic

Others
John H. Griffith, Bell X-1 test pilot
Dennis Epple, American economist

Notable faculty
Daniel Dumbacher, 2014–2017
Amelia Earhart, 1935–1937
Thomas N. Farris, 1986–2009
Kathleen Howell
Georgios Lianis, 1959–1978
James Longuski
Sergey Macheret
R. Byron Pipes
Shu Shien-Siu, 1968–1979
David A. Spencer, 2016–2020
David Wolf
Karl Dawson Wood, 1937–1944
Henry T. Yang, 1969–1994
Maurice Zucrow, 1946–1953

Student organizations
The School of Aeronautics & Astronautics is also home to many student organizations that engage their members in a wide array of social, outreach, engineering, and service activities.

Aero Assist
Aero Assist is a student organization at Purdue University that caters to graduate students within the School of Aeronautics & Astronautics. A committee of 10 graduate students organizes several activities beneficial to graduate students, such as the Research Symposium Series, the Graduate Mentor Program, and recreational and leisure activities.

Aeronautical and Astronautical Engineering Student Advisory Council
AAESAC serves to facilitate interactions and the relationship between faculty and the student body, to advise the administration on issues and concerns of students pertaining to the department, and generally strives to improve the school in hopes of enhancing the educational experience.
American Institute of Aeronautics & Astronautics
AIAA is the leading professional society for the field of aerospace engineering. The Purdue chapter works to support the institute's main objective, which is to advance the arts, sciences, and technologies pertaining to the aerospace field.

Amateur Student and Teacher Rocketry Organization
A.S.T.R.O. is focused not only on research into solid-fuel rocketry but also on interacting with the community.

Purdue Space Day
Organized by university students, Purdue Space Day (PSD) is an annual educational outreach program which provides school students in grades 3-8 the opportunity to learn about science, technology, engineering, and math (STEM) by participating in three age-appropriate activity sessions throughout the day.

Sigma Gamma Tau
SGT is the American honor society for aerospace engineering students. It was founded at Purdue University on February 28, 1953. It seeks to identify and recognize achievement and excellence in the aerospace field.

Students for the Exploration and Development of Space
SEDS is a prominent student-run international grass-roots movement dedicated to space advocacy. The Purdue chapter, known as the Purdue Space Program, oversees five rocketry teams and a satellite team, and promotes science outreach at local elementary schools and science centers, as well as participating in space conferences such as Space Vision, NewSpace, and ISDC. Beginning in 2020, the Purdue Space Program began hosting the Midwest Rocketry Forum, a podcast focusing on various stories in the space industry. Guests have included Purdue alumni, YouTube personalities, and United Launch Alliance CEO Tory Bruno. The chapter formerly hosted the Spring Space Forum, an event in which prominent members of industry, academia, and other space-related fields were invited to discuss a relevant issue. In the summer of 2022, the team became the first collegiate organization to fly a liquid methane / liquid oxygen rocket engine. The Boomie Zoomie B launch vehicle became the first liquid oxygen/liquid methane rocket to launch twice within 48 hours in June 2022, and launched for a third time with a revised valving system less than a month later.

Women in Aerospace
The purpose of Women in Aerospace is to provide undergraduate women in the aerospace engineering program with educational, social, and professional opportunities. WIA seeks to raise awareness of the gender disparity in aerospace engineering and encourages its members to learn more about how to create inclusive environments.

External links
Official website
https://en.wikipedia.org/wiki/Human%20impact%20on%20marine%20life
Human activities affect marine life and marine habitats through overfishing, habitat loss, the introduction of invasive species, ocean pollution, ocean acidification and ocean warming. These impact marine ecosystems and food webs and may result in consequences as yet unrecognised for the biodiversity and continuation of marine life forms.

The ocean can be described as the world's largest ecosystem, and it is home to many species of marine life. Human-driven pressures such as global warming, ocean acidification, and pollution affect marine life and its habitats. For the past 50 years, more than 90 percent of the additional heat from human-caused global warming has been absorbed by the ocean. This results in rising ocean temperatures, which, together with ocean acidification, is harmful to many fish species and causes damage to habitats such as coral. Corals produce materials such as carbonate rock and calcareous sediment, creating a unique and valuable ecosystem that not only provides food and shelter for marine creatures but also has many benefits for humans. Ocean acidification caused by rising levels of carbon dioxide contributes to coral bleaching and lowers rates of calcification, affecting coral growth. Marine plastic pollution is a further human-caused threat to marine life. According to the IPCC (2019), since 1950 "many marine species across various groups have undergone shifts in geographical range and seasonal activities in response to ocean warming, sea ice change and biogeochemical changes, such as oxygen loss, to their habitats." It has been estimated only 13% of the ocean area remains as wilderness, mostly in open ocean areas rather than along the coast.

Overfishing
Overfishing is occurring in one third of world fish stocks, according to a 2018 report by the Food and Agriculture Organization of the United Nations. In addition, industry observers believe illegal, unreported and unregulated fishing occurs in most fisheries, and accounts for up to 30% of total catches in some important fisheries. In a phenomenon called fishing down the food web, the mean trophic level of world fisheries has declined because of overfishing of high trophic level fish.

Habitat loss
Coastal ecosystems are being particularly damaged by humans. Significant habitat loss is occurring particularly in seagrass meadows, mangrove forests and coral reefs, all of which are in global decline due to human disturbances.

Coral reefs are among the more productive and diverse ecosystems on the planet, but one-fifth of them have been lost in recent years due to anthropogenic disturbances. Coral reefs are microbially driven ecosystems that rely on marine microorganisms to retain and recycle nutrients in order to thrive in oligotrophic waters. However, these same microorganisms can also trigger feedback loops that intensify declines in coral reefs, with cascading effects across biogeochemical cycles and marine food webs. A better understanding of the complex microbial interactions within coral reefs is needed if reef conservation is to have a chance of success in the future.

Seagrass meadows have declined during recent decades. Seagrass ecosystem services, currently worth about US$1.9 trillion per year, include nutrient cycling, the provision of food and habitats for many marine animals, including the endangered dugongs, manatees and green turtles, and major facilitation of coral reef fish.
One-fifth of the world's mangrove forests have also been lost since 1980. The most pressing threat to kelp forests may be the overfishing of coastal ecosystems, which by removing higher trophic levels facilitates their shift to depauperate urchin barrens.

Invasive species
An invasive species is a species not native to a particular location which can spread to a degree that causes damage to the environment, human economy or human health. In 2008, Molnar et al. documented the pathways of hundreds of marine invasive species and found shipping was the dominant mechanism for the transfer of invasive species in the ocean. The two main maritime mechanisms of transporting marine organisms to other ocean environments are via hull fouling and the transfer of ballast water. Ballast water taken up at sea and released in port is a major source of unwanted exotic marine life. The invasive freshwater zebra mussels, native to the Black, Caspian, and Azov seas, were probably transported to the Great Lakes via ballast water from a transoceanic vessel. Meinesz believes that one of the worst cases of a single invasive species causing harm to an ecosystem can be attributed to a seemingly harmless jellyfish. Mnemiopsis leidyi, a species of comb jellyfish that has spread so that it now inhabits estuaries in many parts of the world, was first introduced in 1982, and is thought to have been transported to the Black Sea in a ship's ballast water. The population of the jellyfish grew exponentially and, by 1988, it was wreaking havoc upon the local fishing industry. "The anchovy catch fell from 204,000 tons in 1984 to 200 tons in 1993; sprat from 24,600 tons in 1984 to 12,000 tons in 1993; horse mackerel from 4,000 tons in 1984 to zero in 1993." Now that the jellyfish have exhausted the zooplankton, including fish larvae, their numbers have fallen dramatically, yet they continue to maintain a stranglehold on the ecosystem. Invasive species can take over once-occupied areas, facilitate the spread of new diseases, introduce new genetic material, alter underwater seascapes, and jeopardize the ability of native species to obtain food. Invasive species are responsible for about $138 billion annually in lost revenue and management costs in the US alone.

Marine pollution

Nutrient pollution
Nutrient pollution is a primary cause of eutrophication of surface waters, in which excess nutrients, usually nitrates or phosphates, stimulate algae growth. This algae then dies, sinks, and is decomposed by bacteria in the water. This decomposition process consumes oxygen, depleting the supply for other marine life and creating what is referred to as a "dead zone". Dead zones are hypoxic, meaning the water has very low levels of dissolved oxygen. This kills off marine life or forces it to leave the area, giving such zones their name. Hypoxic zones or dead zones can occur naturally, but nutrient pollution from human activity has turned this natural process into an environmental problem.

There are five main sources of nutrient pollution. The most common source of nutrient runoff is municipal sewage. This sewage can reach waterways through storm water, leaks, or direct dumping of human sewage into bodies of water. The next biggest sources come from agricultural practices. Chemical fertilizers used in farming can seep into ground water or be washed away in rainwater, entering waterways and introducing excess nitrogen and phosphorus to these environments.
Livestock waste can also enter waterways and introduce excess nutrients. Nutrient pollution from animal manure is most intense from industrial animal agriculture operations, in which hundreds or thousands of animals are raised in one concentrated area. Stormwater drainage is another source of nutrient pollution. Nutrients and fertilizers from residential properties and impervious surfaces can be picked up in stormwater, which then runs into nearby rivers and streams that eventually lead to the ocean. The fifth main source of nutrient runoff is aquaculture, in which aquatic organisms are cultivated under controlled conditions. The excrement, excess food, and other organic wastes created by these operations introduce excess nutrients into the surrounding water.

Toxic chemicals
Toxic chemicals can adhere to tiny particles which are then taken up by plankton and benthic animals, most of which are either deposit feeders or filter feeders. In this way, toxins are concentrated upward within ocean food chains. Many particles combine chemically in a manner which depletes oxygen, causing estuaries to become anoxic. Pesticides and toxic metals are similarly incorporated into marine food webs, harming the biological health of marine life. Many animal feeds have a high fish meal or fish hydrolysate content. In this way, marine toxins are transferred back to farmed land animals, and then to humans.

Phytoplankton concentrations have increased over the last century in coastal waters, and more recently have declined in the open ocean. Increases in nutrient runoff from land may explain the rise in coastal phytoplankton, while warming surface temperatures in the open ocean may have strengthened stratification in the water column, reducing the flow of nutrients from the deep that open ocean phytoplankton find useful.

Plastic pollution
Over 300 million tons of plastic are produced every year, half of which is used in single-use products like cups, bags, and packaging. At least 14 million tons of plastic enter the oceans every year. It is impossible to know for sure, but it is estimated that about 150 million metric tons of plastic exist in our oceans. Plastic pollution makes up 80% of all marine debris, from surface waters to deep-sea sediments. Because plastics are light, much of this pollution is seen in and around the ocean surface, but plastic trash and particles are now found in most marine and terrestrial habitats, including the deep sea, the Great Lakes, coral reefs, beaches, rivers, and estuaries.

The most eye-catching evidence of the ocean plastic problem is the garbage patches that accumulate in gyre regions. A gyre is a circular ocean current formed by the Earth's wind patterns and the forces created by the rotation of the planet. There are five main ocean gyres: the North and South Pacific Subtropical Gyres, the North and South Atlantic Subtropical Gyres, and the Indian Ocean Subtropical Gyre. There are significant garbage patches in each of these.

Larger plastic waste can be ingested by marine species, filling their stomachs and leading them to believe they are full when in fact they have taken in nothing of nutritional value. This can cause seabirds, whales, fish, and turtles to die of starvation with plastic-filled stomachs. Marine species can also be suffocated by or entangled in plastic garbage.

The biggest threat of ocean plastic pollution comes from microplastics. These are small fragments of plastic debris, some of which were produced to be this small, such as microbeads.
Other microplastics come from the weathering of larger plastic waste. Once larger pieces of plastic waste enter the ocean, or any waterway, sunlight exposure, temperature, humidity, waves, and wind begin to break the plastic down into pieces smaller than five millimeters long. Plastics can also be broken down by smaller organisms that eat plastic debris, breaking it down into small pieces, and either excrete these microplastics or spit them out. In lab tests, it was found that amphipods of the species Orchestia gammarellus could quickly devour pieces of plastic bags, shredding a single bag into 1.75 million microscopic fragments. Although the plastic is broken down, it is still an artificial material that does not biodegrade. It is estimated that approximately 90% of the plastics in the pelagic marine environment are microplastics. These microplastics are frequently consumed by marine organisms at the base of the food chain, like plankton and fish larvae, which leads to a concentration of ingested plastic up the food chain. Plastics are produced with toxic chemicals which then enter the marine food chain, including the fish that some humans eat.

Noise pollution

There is a natural soundscape to the ocean that organisms have evolved around for tens of thousands of years. However, human activity has disrupted this soundscape, largely drowning out sounds organisms depend on for mating, warding off predators, and travel. Ship and boat propellers and engines, industrial fishing, coastal construction, oil drilling, seismic surveys, warfare, sea-bed mining and sonar-based navigation have all introduced noise pollution to ocean environments. Shipping alone has contributed an estimated 32-fold increase in low-frequency noise along major shipping routes in the past 50 years, driving marine animals away from vital breeding and feeding grounds. Sound is the sensory cue that travels farthest through the ocean, and anthropogenic noise pollution disrupts organisms' ability to use it. This creates stress that can affect the organisms' overall health, disrupting their behavior, physiology, and reproduction, and even causing mortality. Sound blasts from seismic surveys can damage the ears of marine animals and cause serious injury. Noise pollution is especially damaging for marine mammals that rely on echolocation, such as whales and dolphins. These animals use echolocation to communicate, navigate, feed, and find mates, but excess sound interferes with their ability to use echolocation and, therefore, to perform these vital tasks.

Mining

The prospect of deep sea mining has led to concerns from scientists and environmental groups over the impacts on fragile deep sea ecosystems and wider impacts on the ocean's biological pump.

Human induced disease

Rapid change to ocean environments allows disease to flourish. Disease-causing microbes can change and adapt to new ocean conditions much more quickly than other marine life, giving them an advantage in ocean ecosystems. This group of organisms includes viruses, bacteria, fungi, and protozoans. While these pathogenic organisms can quickly adapt, other marine life is weakened by rapid changes to its environment. In addition, microbes are becoming more abundant due to aquaculture, the farming of aquatic life, and human waste polluting the ocean. These practices introduce new pathogens and excess nutrients into the ocean, further encouraging the survival of microbes.
Some of these microbes have wide host ranges and are referred to as multi-host pathogens, meaning that the pathogen can infect, multiply in, and be transmitted between different, unrelated species. Multi-host pathogens are especially dangerous because they can infect many organisms but may not be deadly to all of them. This means the microbes can persist in more resistant species and use those organisms as vessels for continuously infecting a susceptible species. In this case, the pathogen can completely wipe out the susceptible species while maintaining a supply of host organisms.

Climate change

In marine environments, microbial primary production contributes substantially to carbon sequestration. Marine microorganisms also recycle nutrients for use in the marine food web and in the process release carbon dioxide to the atmosphere. Microbial biomass and other organic matter (remnants of plants and animals) are converted to fossil fuels over millions of years. By contrast, burning of fossil fuels liberates greenhouse gases in a small fraction of that time. As a result, the carbon cycle is out of balance, and atmospheric carbon dioxide levels will continue to rise as long as fossil fuels continue to be burnt.

Ocean warming

Most heat energy from global warming goes into the ocean, not into the atmosphere or warming up the land. Scientists realized over 30 years ago that the ocean was a key fingerprint of human impact on climate change and that "the best opportunity for major improvement in our understanding of climate sensitivity is probably monitoring of internal ocean temperature". Marine organisms are moving to cooler parts of the ocean as global warming proceeds. For example, a group of 105 marine fish and invertebrate species was monitored along the US Northeast coast and in the eastern Bering Sea. During the period from 1982 to 2015, the average center of biomass for the group shifted northward about 10 miles as well as moving about 20 feet deeper.

There is evidence that increasing ocean temperatures are taking a toll on marine ecosystems. For example, a study on phytoplankton changes in the Indian Ocean indicates a decline of up to 20% in marine phytoplankton during the past six decades. During summer, the western Indian Ocean is home to one of the largest concentrations of marine phytoplankton blooms in the world. Increased warming in the Indian Ocean enhances ocean stratification, which prevents nutrient mixing in the euphotic zone where ample light is available for photosynthesis. Thus, primary production is constrained and the region's entire food web is disrupted. If rapid warming continues, the Indian Ocean could transform into an ecological desert and cease being productive.

The Antarctic oscillation (also called the Southern Annular Mode) is a belt of westerly winds or low pressure surrounding Antarctica which moves north or south according to which phase it is in. In its positive phase, the westerly wind belt that drives the Antarctic Circumpolar Current intensifies and contracts towards Antarctica, while in its negative phase the belt moves towards the Equator. Winds associated with the Antarctic oscillation cause oceanic upwelling of warm circumpolar deep water along the Antarctic continental shelf. This has been linked to ice shelf basal melt, representing a possible wind-driven mechanism that could destabilize large portions of the Antarctic Ice Sheet. The Antarctic oscillation is currently in the most extreme positive phase that has occurred for over a thousand years.
Recently this positive phase has been further intensifying, which has been attributed to increasing greenhouse gas levels and later stratospheric ozone depletion. These large-scale alterations in the physical environment are "driving change through all levels of Antarctic marine food webs". Ocean warming is also changing the distribution of Antarctic krill. Antarctic krill is the keystone species of the Antarctic ecosystem beyond the coastal shelf, and is an important food source for marine mammals and birds.

The IPCC (2019) says marine organisms are being affected globally by ocean warming, with direct impacts on human communities, fisheries, and food production. It is likely there will be a 15% decrease in the number of marine animals and a decrease of 21% to 24% in fisheries catches by the end of the 21st century because of climate change. A 2020 study reports that by 2050 global warming could be spreading in the deep ocean seven times faster than it is now, even if emissions of greenhouse gases are cut. Warming in mesopelagic and deeper layers could have major consequences for the deep ocean food web, since ocean species will need to move to stay at survival temperatures.

Rising sea levels

Coastal ecosystems are facing further changes because of rising sea levels. Some ecosystems can move inland with the high-water mark, but others are prevented from migrating by natural or artificial barriers. This coastal narrowing, called coastal squeeze if human-made barriers are involved, can result in the loss of habitats such as mudflats and marshes. Mangroves and tidal marshes adjust to rising sea levels by building vertically using accumulated sediment and organic matter. If sea level rise is too rapid, they will not be able to keep up and will instead be submerged. Coral, important for bird and fish life, also needs to grow vertically to remain close to the sea surface in order to get enough energy from sunlight. So far it has been able to keep up, but might not be able to do so in the future. These ecosystems protect against storm surges, waves and tsunamis; losing them makes the effects of sea level rise worse. Human activities, such as dam building, can prevent natural adaptation processes by restricting sediment supplies to wetlands, resulting in the loss of tidal marshes. When seawater moves inland, the coastal flooding can cause problems for existing terrestrial ecosystems, such as contaminating their soils. The Bramble Cay melomys is the first known land mammal to go extinct as a result of sea level rise.

Ocean circulation and salinity

Ocean salinity is a measure of how much dissolved salt is in the ocean. The salts come from erosion and transport of dissolved salts from the land. The surface salinity of the ocean is a key variable in the climate system when studying the global water cycle, ocean–atmosphere exchanges and ocean circulation, all vital components transporting heat, momentum, carbon and nutrients around the world. Cold water is denser than warm water, and salty water is denser than freshwater. This means the density of ocean water changes as its temperature and salinity change. These changes in density are the main source of the power that drives the ocean circulation. Surface ocean salinity measurements taken since the 1950s indicate an intensification of the global water cycle, with highly saline areas becoming more saline and low-salinity areas becoming less saline.
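The temperature–salinity–density relationship described above can be made concrete with a linearized equation of state. The sketch below is illustrative only and is not from the source: the reference values and expansion/contraction coefficients are rough, commonly quoted approximations assumed for this example, not the full TEOS-10 formulation that oceanographers actually use.

```python
# Minimal linear equation of state for seawater (illustrative only).
# Assumed approximate constants; real work uses the TEOS-10 standard.

RHO0 = 1027.0          # reference density (kg/m^3) at T0, S0
T0, S0 = 10.0, 35.0    # reference temperature (deg C) and salinity (g/kg)
ALPHA = 1.7e-4         # thermal expansion coefficient (1/deg C), approximate
BETA = 7.6e-4          # haline contraction coefficient (kg/g), approximate

def seawater_density(temp_c: float, salinity: float) -> float:
    """Approximate density: warmer water is lighter, saltier water is denser."""
    return RHO0 * (1.0 - ALPHA * (temp_c - T0) + BETA * (salinity - S0))

# Warm, fresher surface water vs. cold, salty deep water:
print(seawater_density(20.0, 34.0))  # ~1024.5 kg/m^3, tends to stay on top
print(seawater_density(2.0, 35.5))   # ~1028.8 kg/m^3, tends to sink
```

The density contrast between the two example water masses is what the text means by density changes powering the circulation: denser water sinks beneath lighter water.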
Ocean acidification

Ocean acidification is the increasing acidification of the oceans, caused mainly by the uptake of carbon dioxide from the atmosphere. The rise in atmospheric carbon dioxide due to the burning of fossil fuels leads to more carbon dioxide dissolving in the ocean. When carbon dioxide dissolves in water it forms carbonic acid, which dissociates into hydrogen and bicarbonate ions. This in turn increases the acidity of the ocean and makes survival increasingly harder for microorganisms, shellfish and other marine organisms that depend on calcium carbonate to form their shells. Increasing acidity also has the potential for other harm to marine organisms, such as depressing metabolic rates and immune responses in some organisms, and causing coral bleaching. Ocean acidity has increased by 26% since the beginning of the industrial era (a 26% rise in hydrogen ion concentration corresponds to a pH drop of about 0.1). Ocean acidification has been compared to anthropogenic climate change and called the "evil twin of global warming" and "the other CO2 problem".

Ocean deoxygenation

Ocean deoxygenation is an additional stressor on marine life. Ocean deoxygenation is the expansion of oxygen minimum zones in the oceans as a consequence of burning fossil fuels. The change has been fairly rapid and poses a threat to fish and other types of marine life, as well as to people who depend on marine life for nutrition or livelihood. Ocean deoxygenation has implications for ocean productivity, nutrient cycling, carbon cycling, and marine habitats. Ocean warming exacerbates ocean deoxygenation and further stresses marine organisms, limiting nutrient availability by increasing ocean stratification through density and solubility effects while at the same time increasing metabolic demand. According to the IPCC 2019 Special Report on the Ocean and Cryosphere in a Changing Climate, the viability of species is being disrupted throughout the ocean food web due to changes in ocean chemistry. As the ocean warms, mixing between water layers decreases, resulting in less oxygen and fewer nutrients being available for marine life.

Polar ice sheets

Until recently, ice sheets were viewed as inert components of the carbon cycle and largely disregarded in global models. Research in the past decade has transformed this view, demonstrating the existence of uniquely adapted microbial communities, high rates of biogeochemical/physical weathering in ice sheets, and storage and cycling of organic carbon in excess of 100 billion tonnes, as well as nutrients.

Biogeochemical

Human activities impact the marine nitrogen cycle in several ways. Bioavailable nitrogen (Nb) is introduced into marine ecosystems by runoff or atmospheric deposition, causing eutrophication, the formation of dead zones and the expansion of oxygen minimum zones (OMZs). The release of nitrogen oxides (N2O, NO) from anthropogenic activities and oxygen-depleted zones causes stratospheric ozone depletion, leading to higher UV-B exposure, which damages marine life, and to acid rain and ocean warming. Ocean warming causes water stratification, deoxygenation, and the formation of dead zones. Dead zones and OMZs are hotspots for anammox and denitrification, causing nitrogen loss (N2 and N2O). Elevated atmospheric carbon dioxide acidifies seawater, decreasing pH-dependent N-cycling processes such as nitrification and enhancing N2 fixation.

Calcium carbonates

Aragonite is a form of calcium carbonate many marine animals use to build carbonate skeletons and shells.
The lower the aragonite saturation level, the more difficult it is for organisms to build and maintain their skeletons and shells. The aragonite saturation level of ocean surface waters declined between 1880 and 2012. To pick one example, pteropods are a group of widely distributed swimming sea snails. To create shells, pteropods require aragonite, which is produced from carbonate ions and dissolved calcium. Pteropods are severely affected because increasing acidification levels have steadily decreased the amount of water supersaturated with carbonate, which is needed for aragonite creation. When the shell of a pteropod was immersed in water with the pH level the ocean is projected to reach by the year 2100, the shell almost completely dissolved within six weeks. Likewise corals, coralline algae, coccolithophores, foraminifera, and shellfish generally all experience reduced calcification or enhanced dissolution as an effect of ocean acidification.

Pteropods and brittle stars together form the base of the Arctic food webs, and both are seriously damaged by acidification. Pteropod shells dissolve with increasing acidification, and brittle stars lose muscle mass when re-growing appendages. Additionally, the brittle star's eggs die within a few days when exposed to the conditions expected to result from Arctic acidification. Acidification threatens to destroy Arctic food webs from the base up. Arctic waters are changing rapidly and are well advanced in the process of becoming undersaturated with aragonite. Arctic food webs are considered simple, meaning there are few steps in the food chain from small organisms to larger predators. For example, pteropods are "a key prey item of a number of higher predators – larger plankton, fish, seabirds, whales".

Silicates

The rise in agriculture of the past 400 years has increased the exposure of rocks and soils, which has resulted in increased rates of silicate weathering. In turn, the leaching of amorphous silica stocks from soils has also increased, delivering higher concentrations of dissolved silica in rivers. Conversely, increased damming has led to a reduction in silica supply to the ocean due to uptake by freshwater diatoms behind dams. The dominance of non-siliceous phytoplankton due to anthropogenic nitrogen and phosphorus loading, together with enhanced silica dissolution in warmer waters, has the potential to limit silicon export to ocean sediment in the future. In 2019 a group of scientists suggested acidification is reducing diatom silica production in the Southern Ocean.

Carbon

As the technical and political challenges of land-based carbon dioxide removal approaches become more apparent, the oceans may be the new "blue" frontier for carbon drawdown strategies in climate governance. Marine environments are the blue frontier of a strategy for novel carbon sinks in post-Paris climate governance, ranging from nature-based ecosystem management to industrial-scale technological interventions in the Earth system. Marine carbon dioxide removal approaches are diverse, although several resemble key terrestrial carbon dioxide removal proposals.
Ocean alkalinisation (adding silicate minerals such as olivine to coastal seawater to increase carbon dioxide uptake through chemical reactions) is enhanced weathering; blue carbon (enhancing natural biological drawdown from coastal vegetation) is marine reforestation; and cultivation of marine biomass (i.e., seaweed) for coupling with consequent carbon capture and storage is the marine variant of bioenergy with carbon capture and storage. Wetlands, coasts, and the open ocean are being conceived of and developed as managed carbon removal-and-storage sites, with practices expanded from the use of soils and forests.

Effect of multiple stressors

If more than one stressor is present, the effects can be amplified. While the full implications of elevated CO2 for marine ecosystems are still being documented, there is a substantial body of research showing that a combination of ocean acidification and elevated ocean temperature, driven mainly by CO2 and other greenhouse gas emissions, has a compounded effect on marine life and the ocean environment. This effect far exceeds the individual harmful impact of either. In addition, ocean warming exacerbates ocean deoxygenation, which is an additional stressor on marine organisms, by increasing ocean stratification, through density and solubility effects, thus limiting nutrients while at the same time increasing metabolic demand.

The direction and magnitude of the effects of ocean acidification, warming and deoxygenation on the ocean have been quantified by meta-analyses and further tested by mesocosm studies. The mesocosm studies simulated the interaction of these stressors and found a catastrophic effect on the marine food web: the increased consumption caused by thermal stress more than negates any gain passed from primary producers to herbivores through more available carbon dioxide.

Drivers of change

Changes in marine ecosystem dynamics are influenced by socioeconomic activities (for example, fishing, pollution) and human-induced biophysical change (for example, temperature, ocean acidification), which can interact and severely impact marine ecosystem dynamics and the ecosystem services they generate for society. Understanding these direct, or proximate, interactions is an important step towards sustainable use of marine ecosystems. However, proximate interactions are embedded in a much broader socioeconomic context where, for example, the economy, through trade and finance, human migration and technological advances, operates and interacts at a global scale, influencing proximate relationships.

In 2024 a study was released on the impact of fishing and non-fishing vessels on the coastal waters of the ocean, where 75% of industrial activity occurs. According to the study: "A third of fish stocks are operated beyond biologically sustainable levels and an estimated 30–50% of critical marine habitats have been lost owing to human industrialization". It mentions that besides traditional impacts like fishing, maritime trade and oil extraction, there are newly emerging ones like mining, aquaculture and offshore wind turbines. It used satellite data to monitor the vessels. It found that 72–76% of fishing ships and 21–30% of energy and transport ships are "missing from public tracking systems".
When this data was added to previously existing information about publicly tracked ships, it led to several discoveries:
The study found a significant increase in offshore wind turbines, which had already surpassed oil platforms in number by 2021.
Fishing increased only a little in recent years and may begin to decline because fisheries are exhausted.

It concluded that "transport and energy vessel traffic may continue to expand, following trends in global trade and the rapid development of renewable energy infrastructure. In this scenario, changes to marine ecosystems brought by infrastructure and vessel traffic may rival fishing in impact".

Shifting baselines

Shifting baselines arise in research on marine ecosystems because changes must be measured against some previous reference point (baseline), which in turn may represent significant changes from an even earlier state of the ecosystem. For example, radically depleted fisheries have been evaluated by researchers who used the state of the fishery at the start of their careers as the baseline, rather than the fishery in its unexploited or untouched state. Areas that swarmed with a particular species hundreds of years ago may have experienced long-term decline, but it is the level of a few decades previously that is used as the reference point for current populations. In this way, large declines in ecosystems or species over long periods of time were, and are, masked. There is a loss of perception of change that occurs when each generation redefines what is natural or untouched.

See also
Effects of climate change on biomes
Deep-sea exploration
Garbage patch
Great Pacific Garbage Patch
Marine heatwave
Oceanic carbon cycle
Offshore construction
Ocean exploration
Special Report on the Ocean and Cryosphere in a Changing Climate
Sustainable Development Goal 14
World Oceans Day
4-Hydroxybenzyl isothiocyanate
4-Hydroxybenzyl isothiocyanate is a naturally occurring isothiocyanate. It is formed as a degradation product of sinalbin from white mustard and contributes to the pungent taste of mustard seeds.

Occurrence

4-Hydroxybenzyl isothiocyanate occurs as a degradation product of sinalbin or glucosinalbin in white mustard. Sinalbin, a mustard oil glycoside, is broken down by myrosinase, releasing the isothiocyanate. The isothiocyanate further decomposes into hydroxybenzyl alcohols with the release of thiocyanates. In the presence of a nitrile-specifier protein, the less toxic 4-hydroxyphenylacetonitrile is formed from the mustard oil glycoside instead. The cabbage butterfly exploits this mechanism to avoid the toxic effects of the isothiocyanate. Similar to other isothiocyanates found in cruciferous vegetables, this compound contributes to the pungent flavor of mustard.

Production

Similar to its natural formation, 4-hydroxybenzyl isothiocyanate can be synthesized by reacting sinalbin with myrosinase.
Sacubitrilat
Sacubitrilat (INN; or LBQ657) is the active metabolite of the antihypertensive drug sacubitril, which is used in the treatment of heart failure.
Narcotic Drugs Act
The Narcotic Drugs Act (Betäubungsmittelgesetz, or BtMG) is the controlled substances law of Germany. In common with the Misuse of Drugs Act 1971 of the United Kingdom and the Controlled Substances Acts of the US and Canada, it is a consolidation of prior regulation and an implementation of treaty obligations under the Single Convention on Narcotic Drugs, the Convention on Psychotropic Substances and other treaties. The BtMG updated the German Opium Law of 1929 and mirrors the Swiss BtMG and the Austrian Suchtmittelgesetz. The German Narcotics Act was re-announced on 1 March 1994. The last change to the law was the legalization of cannabis in Germany on 1 April 2024. Since then, the handling of this drug has been subject to the German cannabis control bill.

See also
Drug policy of Germany
Drugs controlled by the German Narcotic Drugs Act
Substellar object
A substellar object, sometimes called a substar, is an astronomical object whose mass is smaller than the smallest mass at which hydrogen fusion can be sustained (approximately 0.08 solar masses). This definition includes brown dwarfs and former stars similar to EF Eridani B, and can also include objects of planetary mass, regardless of their formation mechanism and whether or not they are associated with a primary star.

Assuming that a substellar object has a composition similar to the Sun's and at least the mass of Jupiter (approximately 0.001 solar masses), its radius will be comparable to that of Jupiter (approximately 0.1 solar radii) regardless of the mass of the substellar object (brown dwarfs are less than 75 Jupiter masses). This is because the center of such a substellar object at the top of the mass range (just below the hydrogen-burning limit) is quite degenerate, with a density of ≈10³ g/cm³, but this degeneracy lessens with decreasing mass until, at the mass of Jupiter, a substellar object has a central density less than 10 g/cm³. The density decrease balances the mass decrease, keeping the radius approximately constant.

Substellar objects like brown dwarfs do not have enough mass to fuse hydrogen and helium, and hence do not undergo the usual stellar evolution that limits the lifetime of stars. A substellar object with a mass just below the hydrogen-fusing limit may ignite hydrogen fusion temporarily at its center. Although this will provide some energy, it will not be enough to overcome the object's ongoing gravitational contraction. Likewise, although an object with mass above approximately 0.013 solar masses will be able to fuse deuterium for a time, this source of energy will be exhausted in approximately 1–100 million years. Apart from these sources, the radiation of an isolated substellar object comes only from the release of its gravitational potential energy, which causes it to gradually cool and shrink. A substellar object in orbit around a star will shrink more slowly, as it is kept warm by the star, evolving towards an equilibrium state where it emits as much energy as it receives from the star.

Substellar objects are cool enough to have water vapor in their atmospheres. Infrared spectroscopy can detect the distinctive color of water in gas-giant-sized substellar objects, even if they are not in orbit around a star.

Classification

William Duncan MacMillan proposed in 1918 the classification of substellar objects into three categories based on their density and phase state: solid, transitional and dark (non-stellar) gaseous. Solid objects include Earth, smaller terrestrial planets and moons; Uranus and Neptune (as well as the later-recognized mini-Neptune and super-Earth planets) are transitional objects between solid and gaseous. Saturn, Jupiter and large gas giant planets are in a fully "gaseous" state.

Substellar companion

A substellar object may be a companion of a star, such as an exoplanet or brown dwarf orbiting a star. Objects as low as 8–23 Jupiter masses have been called substellar companions. Objects orbiting a star are often called planets below 13 Jupiter masses and brown dwarfs above that. Companions at that planet-brown dwarf borderline have been called Super-Jupiters, such as the one around the star Kappa Andromedae. Nevertheless, objects as small as 8 Jupiter masses have been called brown dwarfs.
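The mass thresholds quoted in this article (roughly 13 Jupiter masses for deuterium fusion, and roughly 75 Jupiter masses, about 0.08 solar masses, for sustained hydrogen fusion) can be summarized in a toy classifier. This sketch is illustrative only: as the article notes, real classification is fuzzy at the boundaries and can also weigh formation history and composition, which the code ignores.

```python
# Toy classification by mass alone, using the thresholds quoted in
# this article. Illustrative, not a definitive astronomical scheme.

def classify_by_mass(mass_mjup: float) -> str:
    """Classify an object by its mass in Jupiter masses."""
    if mass_mjup < 13:
        return "planetary-mass object (below the deuterium-fusion limit)"
    elif mass_mjup < 75:
        return "brown dwarf (deuterium fusion only; substellar)"
    else:
        return "star (sustained hydrogen fusion)"

for m in (1, 20, 80):
    print(f"{m} M_Jup -> {classify_by_mass(m)}")
```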
See also
Brown dwarf
List of planet types
Planet
Red dwarf
Sub-brown dwarf

External links
Keck views star and companion
Modular construction
Modular construction is a construction technique which involves the prefabrication of 2D panels or 3D volumetric structures in off-site factories and their transportation to construction sites for assembly. This process has the potential to be superior to traditional building in terms of both time and cost, with claimed time savings of between 20 and 50 percent compared with traditional building techniques. It is estimated that by 2030, modular construction could deliver US$22 billion in annual cost savings for the US and European construction industry, helping fill the US$1.6 trillion productivity gap. The current need for standardized, repeatable prefabricated 3D volumetric housing units and designs for student accommodation, affordable housing and hotels is driving demand for modular construction.

Advantages

In a 2018 Practice Note, the NEC states that the benefits obtained from off-site construction mainly relate to the creation of components in a factory setting, protected from the weather and using manufacturing techniques such as assembly lines with dedicated and specialist equipment. Through the use of appropriate technology, modular construction can:
increase the speed of construction by increasing the speed of manufacture of the component parts,
reduce waste,
increase economies of scale,
improve quality, leading to a reduction in the whole-life costs of assets,
reduce environmental impact such as dust and noise, and
reduce accidents and ill health by reducing the amount of construction work taking place at the site.

Disadvantages

In contrast to the benefits mentioned earlier, modular construction presents two significant obstacles:
Logistical challenges: The transportation of completed modules to the construction site demands meticulous organization and synchronization, often incurring substantial costs.
Constraints on size: The manufacturing and transportation procedures may place limitations on the dimensions of individual modular components. This can impact the room sizes in the building, potentially influencing the overall architectural design.

Time

Modular construction has consistently been at least 20 percent faster than traditional on-site builds. Currently, the design process of modular construction projects tends to take longer than that of traditional building. This is because modular construction is a fairly new technology and not many architects and engineers have experience working with it; in short, the industry has not yet learned how to work this way. Design firms are expected to develop module libraries which would assist in automating this process. These module libraries would hold various pre-designed 2D panels and 3D structures which could be digitally assembled to create standardized structures.

The foundations of a structure are a crucial part of its rigidity. Their magnitude and complexity vary with the size and overall weight of the structure. Because a prefabricated structure weighs less than a traditionally built house, the foundations it needs are smaller and faster to build.

Off-site manufacturing is the pinnacle of modular construction. The ability to coordinate and repeat activities in a factory, along with the increased use of automation, results in much faster manufacturing times than on-site building. A large time saver is the ability to work in parallel on the foundation of a structure and on the manufacturing of the structure itself, as the sketch below illustrates. This would be impossible with traditional construction.
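A minimal sketch of why this parallel workflow saves time, assuming hypothetical durations invented for illustration (the source quotes only the overall 20–50 percent range, not these numbers):

```python
# Hypothetical durations in weeks. In the traditional workflow the
# structure can only be built after the foundation; in the modular
# workflow the modules are manufactured off-site while the foundation
# is poured, so only the longer of the two paths matters.

foundation = 8     # on-site foundation work
structure = 20     # traditional on-site build of the structure
modules = 14       # off-site manufacture of equivalent modules
assembly = 3       # on-site assembly of the delivered modules

traditional = foundation + structure           # strictly sequential
modular = max(foundation, modules) + assembly  # foundation overlaps manufacture

print(f"traditional: {traditional} weeks")              # 28
print(f"modular:     {modular} weeks")                  # 17
print(f"saving:      {1 - modular / traditional:.0%}")  # ~39%, inside the claimed 20-50% range
```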
The on-site construction phase is radically simplified. The assembly of prefabricated components is as simple as assembling the 3D modules and connecting the services to the main site connections. A team of five workers can assemble up to six 3D modules, or the equivalent of 270 square meters of finished floor area, in a single work day.

Production algorithms

Since specialized technology is required to manufacture the components of modular construction, the prefabricated parts of modular buildings are produced by modular factories. To optimize time, modular factories consider the specifications and resources of the project and adapt a scheduling algorithm to fulfill the needs of each unique project. However, current scheduling methods assume the quantity of resources will never reach zero, and therefore represent an unrealistic work cycle. A modular factory handling a single project at any given point is rare, and would produce low returns. Hyun and Lee's research proposes a Genetic Algorithm (GA) scheduling model which takes into consideration various projects' characteristics and shares resources. The production sequence of this algorithm is largely determined by which modules need to be transported to which site and the dates they should arrive. After considering the variables of production, transportation and on-site assembly, the objective function is expressed in terms of Si, the number of units stocked per day, Pi, the number of units produced per day, and Ei, the number of units installed per day. Production algorithms are continuously being developed to further accelerate the production of modular construction buildings, enlarging the time-saving gap with traditional construction methods.

Cost

Modular construction can yield savings of up to 20 percent of the total project cost. However, there is also a risk of it increasing the cost by 10 percent. This occurs when the savings in the labor area of construction are outweighed by the increased costs of logistics and materials. The prefabrication of components used in modular construction has a higher logistics cost than traditional building. Since the panels or 3D structures have to be manufactured in a factory and transported to the construction site, new variables which alter the flow of construction are introduced.

Transportation

Transportation of fabricated components is naturally more expensive than that of raw materials. For one, even a number of 2D panels stacked together are much harder to transport than the raw cement, wood or other material used to build them. Panels run a high risk of suffering minor or major damage when being transported by land. If a panel were damaged, it would likely have to be replaced entirely. The factory would need to temporarily stop production of other panels to replace this one, increasing the overall manufacturing hours and therefore the cost. On top of the manufacturing hours, the transportation hours would also increase, adding yet another cost. Regardless, the transportation of 2D panels is still a good alternative to on-site construction. Transportation reaches its peak cost when shipping 3D volumetric structures. While 1 m² of 2D floor space takes approximately US$8 to transport 250 km, its equivalent in 3D floor space takes US$45, as the sketch below illustrates. Adding to this the replacement cost if the structure gets damaged during transport creates a large cost increase.
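A back-of-the-envelope comparison using the per-250-km rates quoted above. The floor area, shipping distance, and the assumption that cost scales linearly with distance are illustrative choices, not figures from the source:

```python
# Quoted rates: ~US$8 per m^2 of floor space shipped 250 km as 2D
# panels, vs. ~US$45 per m^2 shipped as finished 3D volumetric modules.

COST_2D_PER_M2_250KM = 8.0
COST_3D_PER_M2_250KM = 45.0

def transport_cost(floor_m2: float, distance_km: float, rate_per_250km: float) -> float:
    """Scale the quoted per-250-km rate linearly with distance (an assumption)."""
    return floor_m2 * rate_per_250km * (distance_km / 250.0)

area, distance = 1000.0, 500.0  # example project: 1,000 m^2 shipped 500 km
print("2D panels: ", transport_cost(area, distance, COST_2D_PER_M2_250KM))  # US$16,000
print("3D modules:", transport_cost(area, distance, COST_3D_PER_M2_250KM))  # US$90,000
```

Even before accounting for damage and replacement, the roughly fivefold gap between the two rates dominates the logistics budget of volumetric projects.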
Construction

Assembling components in a factory off-site means that workers can use the repeatability of the structures, as well as automation, to facilitate the manufacturing process. By standardizing the overall design of structures, work which would usually require expensive workers with specific skills (e.g. mechanical, electrical and plumbing) can be completed by low-cost manufacturers, decreasing the total salary cost. As very little manufacturing occurs on-site, up to 80% of traditional labor activity can be moved off-site to the module factory. This means fewer sub-contractors are needed, further decreasing the overall salary cost. Overall, the larger the labor-intensive portion of a project, the larger the savings will be if modular construction is used. Projects such as student accommodation, hotels and affordable housing are great candidates for modular construction: the repeatability of their structures leads to faster manufacturing times and therefore lower overall cost. Meanwhile, if the project is (for example) a modern beach house with highly irregular wall spaces and ceilings, traditional construction methods may be preferable. As the industry continues to adapt and grow, these repeatable designs could one day be modified and adapted to fit all kinds of structures at decreased costs.

Safety

Construction is considered to be one of the most dangerous industries. Workers fall from heights, objects are dropped, muscles are strained and environmental hazards abound. Modular construction confines all manufacturing activities to a clean, ground-level space with fewer workers needed. It is estimated that reportable accidents are reduced by over 80% relative to site-intensive construction. In a survey about safety management in the construction industry conducted by McGraw Hill Construction in 2013, 50% of the construction industry believed that prefabrication was safer than traditional on-site building, while only 4% said that prefabrication or modular construction had a negative impact on safety performance. Of the general and specialty contractors surveyed, 78% and 59% respectively said that the largest safety impact came from performing complex tasks at ground level. According to the CDC, falling is the leading cause of work-related fatalities in construction, making up more than one in every three deaths in the industry. Reducing the heights at which workers need to perform tasks reduces the fatality risk they experience, greatly increasing the overall safety of the industry. Also, 69% of the general contractors as well as 69% of the specialty contractors mentioned that the reduced number of workers performing different tasks at the off-site factory also improved construction site safety. Overall, modular construction is safer for the following reasons:
Stable work location
Tasks are performed in ample spaces
Ground-level assembly
Cover from harsh weather
Better monitoring of unsafe activities
30 to 50 percent reduction in time spent at the construction site
Fewer personnel on-site

Modular construction is still not considered an entirely safe alternative. However, it does reduce accidents and fatalities by a significant amount, especially in the manufacturing process of a project. 48.1% of all accidents during on-site construction were fall-related, while only 9.1% of the accidents at manufacturing plants were from falls.
Manufacturing plant workers were more likely to be struck by an object or equipment (37.1%), while fracture and amputation had the same injury-type frequency at 27.3%. Nevertheless, as the construction industry continues to adapt and move towards more sustainable construction methods like prefabricated modular construction, it is expected that the overall number of accidents at construction sites will decrease. The use of modular construction methods is encouraged by proponents of Prevention through Design techniques in construction. It is included as a recommended hazard control for construction projects in the "PtD - Architectural Design and Construction Education Module" published by the National Institute for Occupational Safety and Health.

Sustainability

Modular construction is a great alternative to traditional construction when looking at the amount of waste each method produces. When constructing a high-rise building in Wolverhampton, 824 modules were used, and during this process about 5% of the total weight of the construction was produced as waste. Compared with traditional methods' 10–13% average waste, the difference may not seem like much when talking about small structures; however, for a 100,000 ft² building it is a significant percentage. Also, the number of on-site deliveries decreased by up to 70%. The deliveries are instead moved to the modular factory, where more material can be received. On-site noise pollution is greatly reduced as well. By moving the manufacturing process to an off-site factory, usually located outside the city, neighboring buildings are not impacted as they would be with the traditional building process.

Modular construction systems

Open-source and commercial hardware components used in modular construction include open beams, bit beams, maker beams, grid beams, contraptors, OpenStructures components, etc. Space frame systems (such as Mero, Unistrut, Delta Structures, etc.) also tend to be modular in design. Other materials used in construction which are interlocking, and thus reusable and modular in nature, include interlocking bricks.
Kelvin water dropper
The Kelvin water dropper, invented by Scottish scientist William Thomson (Lord Kelvin) in 1867, is a type of electrostatic generator. Kelvin referred to the device as his water-dropping condenser. The apparatus is variously called the Kelvin hydroelectric generator, the Kelvin electrostatic generator, or Lord Kelvin's thunderstorm. The device uses falling water to generate voltage differences by electrostatic induction between interconnected, oppositely charged systems. This eventually leads to an electric arc discharging in the form of a spark. It is used in physics education to demonstrate the principles of electrostatics.

Description

In a typical setup, a reservoir of water or other conducting liquid is connected to two hoses that release two falling streams of drops, which land in two buckets or containers. Each stream passes (without touching) through a metal ring or open cylinder which is electrically connected to the opposite receiving container: the left ring is connected to the right bucket, while the right ring is connected to the left bucket. The containers must be electrically insulated from each other and from electrical ground. Similarly, the rings must be electrically isolated from each other and their environment. The streams must break into separate droplets before reaching the containers. Typically, the containers are made of metal and the rings are connected to them by wires. The simple construction makes this device popular in physics education as a laboratory experiment for students.

Principles of operation

A small initial difference in electric charge between the two buckets, which always exists because the buckets are insulated from each other, is necessary to begin the charging process. Suppose, therefore, that the right bucket has a small positive charge. Now the left ring also has some positive charge, because it is connected to the right bucket. The charge on the left ring attracts negative charges in the water (ions) into the left-hand stream by Coulomb electrostatic attraction. When a drop breaks off the end of the left-hand stream, it carries negative charge with it. When the negatively charged water drop falls into its bucket (the left one), it gives that bucket and the attached ring (the right one) a negative charge.

Once the right ring has a negative charge, it similarly attracts positive charge into the right-hand stream. When drops break off the end of that stream, they carry positive charge to the positively charged bucket, making that bucket even more positively charged. Thus positive charges are attracted to the right-hand stream by the ring, and positive charge drips into the positively charged right bucket; negative charges are attracted to the left-hand stream, and negative charge drips into the negatively charged left bucket. This process of charge separation occurring in the water is called electrostatic induction. The higher the charge that accumulates in each bucket, the higher the electrical potential on the rings, and the more effective this process of electrostatic induction is. During the induction process, there is an electric current that flows in the form of positive or negative ions in the water of the supply lines. This is separate from the bulk flow of water that falls through the rings and breaks into droplets on the way to the containers.
For example, as water approaches the negatively charged ring on the right, any free electrons in the water can easily flee toward the left, against the flow of water.

Eventually, when both buckets have become highly charged, several different effects may be seen. An electric spark may briefly arc between the two buckets or rings, decreasing the charge on each bucket. If there is a steady stream of water through the rings, and if the streams are not perfectly centered in the rings, one can observe the deflection of the streams prior to each spark due to the electrostatic (Coulomb) attraction of opposite charges. As charging increases, a smooth and steady stream may fan out due to self-repulsion of the net charges in the stream. If the water flow is set such that it breaks into droplets in the vicinity of the rings, the drops may be attracted to the rings enough to touch them and deposit their charge on the oppositely charged rings, which decreases the charge on that side of the system. In that case, the buckets will also start to electrostatically repel the droplets falling towards them, and may fling the droplets away. Each of these effects limits the voltage that can be reached by the device. The voltages reached by this device can be in the range of kilovolts, but the amounts of charge are small, so there is no more danger to persons than that of the static electrical discharges produced by shuffling feet on a carpet, for example.

The opposite charges which build up on the buckets represent electrical potential energy, as shown by the energy released as light and heat when a spark passes between them. This energy comes from the gravitational potential energy released when the water falls. The charged falling water drops do work against the opposing electric field of the like-charged containers, which exerts an upward force on them, converting gravitational potential energy into electrical potential energy plus kinetic energy of motion. The kinetic energy is wasted as heat when the water drops land in the buckets, so when considered as an electric power generator the Kelvin machine is very inefficient. However, the principle of operation is the same as with other forms of hydroelectric power. As always, energy is conserved.

Details

If the buckets are metal conductors, the built-up charge resides on the outside of the metal, not in the water. This is part of the electrical induction process, and is an example of the related Faraday ice pail effect. Also, the idea of bringing small amounts of charge into the center of a large metal object carrying a large net charge, as happens in Kelvin's water dropper, relies on the same physics as the operation of a Van de Graaff generator. The discussion above is in terms of charged droplets falling, but the inductive charging effects occur while the water stream is continuous: the flow and separation of charge occur already as the streams of water approach the rings, so that when the water passes through the rings there is already net charge on the water. When drops form, some net charge is trapped on each drop as gravity pulls it toward the like-charged container. When the containers are metal, the wires may be attached to the metal. Otherwise, the container-end of each wire must dip into the water. In the latter case, the charge resides on the surface of the water, not on the outside of the containers. The apparatus can be extended to more than two streams of droplets.
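The positive feedback described above can be caricatured in a few lines of code: each drop carries a charge roughly proportional to the charge already stored on the opposite bucket, so the stored charge grows exponentially until losses or a spark intervene. This is a toy model, not from the source; the gain, leakage, and spark-threshold constants are arbitrary illustrative values, not measurements of a real device.

```python
# Toy model of the Kelvin water dropper's positive feedback.
# All constants are invented for illustration.

q = 1e-12               # tiny initial charge imbalance (coulombs)
gain = 0.05             # fraction of bucket charge induced onto each drop (assumed)
leak = 0.01             # fractional charge lost per drop interval (assumed)
spark_threshold = 1e-7  # charge at which a spark discharges the buckets (assumed)

for drop in range(1, 1001):
    q *= 1 + gain - leak  # induction gain minus leakage, applied per drop
    if q >= spark_threshold:
        print(f"spark after ~{drop} drops")
        break
```

With these numbers the charge grows by about 4% per drop and crosses the spark threshold after roughly 300 drops, mirroring how a vanishingly small initial imbalance is enough to start the device.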
In 2013, a combined group from the University of Twente (the Netherlands) constructed a microfluidic version of the Kelvin water dropper, which yields electrical voltages able to charge, deform and break water droplets of micrometric size using pneumatic force instead of gravity. A year later, they developed another version of the microfluidic Kelvin water dropper, using a microscale liquid jet (which then broke into microdroplets) shot onto a metal target, which yielded a maximum efficiency of 48%.

Historical background

In De Magnete, published in 1600, William Gilbert included studies of static electricity produced by amber and its interaction with water. He observed the formation of conical structures on water which are now commonly called Taylor cones. Other early studies noting the interaction of static electricity with water and reported in the English language include:

Francis Hauksbee, "Physico-Mechanical Experiments on Various Subjects" (1719)
William Watson, "Experiments and Observations Tending To Illustrate The Nature and Properties of Electricity" (1746)
John Theophilus Desaguliers, "A Dissertation concerning Electricity", Innys and Longman, London (1742)
Joseph Priestley, "The History and Present State Of Electricity with Original Experiments", Volumes I, II, and III (1767)
James Ferguson, "An Introduction to Electricity", W. Strahan and T. Cadell, London (1770)
George Adams, "An Essay on Electricity", London (1785)
Tiberius Cavallo, "A Complete Treatise On Electricity in Theory and Practice with Original Experiments", Volumes I and II (1795)
John Cuthbertson, "Practical Electricity", J. Callow, London (1807)
George John Singer, "Elements of Electricity and Electro-chemistry", Longman, Hurst, Rees, Orme, and Brown, Paternoster Row (1814)
George W. Francis, "Electrostatic Experiments" (1844)
Henry Minchin Noad, "A Manual of Electricity", in two volumes (1857)

By the 1840s it could be demonstrated that streams of water carry electric charge, that streams carrying like charge repel each other, and that streams carrying unlike charge attract each other. It could also be demonstrated that physical charge separation, that is, separation of charge into different regions, could be induced in a body of water by a static electric field. Lord Kelvin used this foundation of accumulated knowledge to create, in 1859, an apparatus involving the interaction of a stream of water with the Earth's static electric field to cause charge separation, with subsequent measurement of charge, in order to make atmospheric electricity measurements.

Experimental studies

Investigations of the Kelvin electrostatic generator under various controlled conditions showed that it operated with tap water, distilled water (non-deionised) and a saturated solution of NaCl. It was also found that the generator worked well even if the two liquid streams originated from different electrically insulated reservoirs. A model was proposed in which the electric charge results from the separation of the positive aqueous hydrogen ion and the negative aqueous hydroxyl ion as the water droplets form.

External links

Kelvin Water dropper – MIT OCW
YouTube ("Reinhard Schumacher") – Kelvin Water Dropper: Implementation and Explanation
YouTube ("RimstarOrg") – Kelvin Water Dropper and How it Works
YouTube ("Veritasium") – Sparks from Falling Water: Kelvin's Thunderstorm
Detailed description of device and how to build your own Kelvin water dropper
Lego Kelvin water dropper
History of the center of the Universe
The center of the Universe is a concept that lacks a coherent definition in modern astronomy; according to standard cosmological theories on the shape of the universe, it has no distinct spatial center. Historically, different people have suggested various locations as the center of the Universe. Many mythological cosmologies included an axis mundi, the central axis of a flat Earth that connects the Earth, heavens, and other realms together. In 4th-century-BC Greece, philosophers developed the geocentric model, based on astronomical observation; this model proposed that the center of the Universe lies at the center of a spherical, stationary Earth, around which the Sun, Moon, planets, and stars rotate. With the development of the heliocentric model by Nicolaus Copernicus in the 16th century, the Sun was believed to be the center of the Universe, with the planets (including Earth) and stars orbiting it. In the early 20th century, the discovery of other galaxies and the development of the Big Bang theory led to cosmological models of a homogeneous, isotropic Universe, which lacks a distinct spatial center: the expansion of space began everywhere at once from a shared starting point in time, the Big Bang.

Outside astronomy

In religion and mythology, the axis mundi (also cosmic axis, world axis, world pillar, columna cerului, center of the world) is a point described as the center of the world, the connection between it and Heaven, or both. Mount Hermon was regarded as the axis mundi in Canaanite tradition, from where the sons of God are introduced descending in 1 Enoch (1En6:6). The ancient Greeks regarded several sites as places of Earth's omphalos (navel) stone, notably the oracle at Delphi, while still maintaining a belief in a cosmic world tree and in Mount Olympus as the abode of the gods. Judaism has the Temple Mount and Mount Sinai, Christianity has the Mount of Olives and Calvary, and Islam has Mecca, said to be the place on Earth that was created first, as well as the Temple Mount (Dome of the Rock). In Shinto, the Ise Shrine is the omphalos. In addition to the Kun Lun Mountains, where it is believed the peach tree of immortality is located, Chinese folk religion recognizes four other specific mountains as pillars of the world.

Sacred places constitute world centers (omphalos), with the altar or place of prayer as the axis. Altars, incense sticks, candles and torches form the axis by sending a column of smoke, and prayer, toward heaven. The architecture of sacred places often reflects this role. "Every temple or palace--and by extension, every sacred city or royal residence--is a Sacred Mountain, thus becoming a Centre." The stupa of Hinduism, and later Buddhism, reflects Mount Meru. Cathedrals are laid out in the form of a cross, with the vertical bar representing the union of Earth and heaven and the horizontal bars representing the union of people with one another, with the altar at the intersection. Pagoda structures in Asian temples take the form of a stairway linking Earth and heaven. A steeple in a church or a minaret in a mosque also serves as a connection of Earth and heaven. Structures such as the maypole, derived from the Saxons' Irminsul, and the totem pole among indigenous peoples of the Americas also represent world axes. The calumet, or sacred pipe, represents a column of smoke (the soul) rising from a world center.
A mandala creates a world center within the boundaries of its two-dimensional space analogous to that created in three-dimensional space by a shrine. In medieval times some Christians thought of Jerusalem as the center of the world (Latin: umbilicus mundi, Greek: omphalos), and it was so represented in the so-called T and O maps. Byzantine hymns speak of the Cross being "planted in the center of the earth."

Center of a flat Earth

The flat Earth model is a belief that the Earth's shape is a plane or disk covered by a firmament containing heavenly bodies. Most pre-scientific cultures have had conceptions of a flat Earth, including Greece until the classical period, the Bronze Age and Iron Age civilizations of the Near East until the Hellenistic period, India until the Gupta period (early centuries AD) and China until the 17th century. It was also typically held in the aboriginal cultures of the Americas, and a flat Earth domed by the firmament in the shape of an inverted bowl is common in pre-scientific societies. "Center" is well defined in a flat Earth model: a flat Earth would have a definite geographic center, and there would also be a unique point at the exact center of a spherical firmament (or a firmament that was a half-sphere).

Earth as the center of the Universe

The flat Earth model gave way to an understanding of a spherical Earth. Aristotle (384–322 BC) provided observational arguments supporting the idea of a spherical Earth: different stars are visible in different locations, travelers going south see southern constellations rise higher above the horizon, and the shadow of Earth on the Moon during a lunar eclipse is round, whereas spheres cast circular shadows and discs generally do not. This understanding was accompanied by models of the Universe that depicted the Sun, Moon, stars, and naked-eye planets circling the spherical Earth, including the noteworthy models of Aristotle (see Aristotelian physics) and Ptolemy. This geocentric model was the dominant model from the 4th century BC until the 17th century AD.

Sun as center of the Universe

Heliocentrism, or heliocentricism, is the astronomical model in which the Earth and planets revolve around a relatively stationary Sun at the center of the Solar System. The word comes from the Greek helios ("sun") and kentron ("center"). The notion that the Earth revolves around the Sun had been proposed as early as the 3rd century BC by Aristarchus of Samos, but received no support from most other ancient astronomers. Nicolaus Copernicus' major theory of a heliocentric model was published in De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres) in 1543, the year of his death, though he had formulated the theory several decades earlier. Copernicus' ideas were not immediately accepted, but they did begin a paradigm shift away from the Ptolemaic geocentric model to a heliocentric model. The Copernican Revolution, as this paradigm shift would come to be called, would last until Isaac Newton's work over a century later. Johannes Kepler published his first two laws of planetary motion in 1609, having found them by analyzing the astronomical observations of Tycho Brahe; Kepler's third law was published in 1619. The first law was: "The orbit of every planet is an ellipse with the Sun at one of the two foci." On 7 January 1610 Galileo used his telescope, with optics superior to what had been available before.
He described "three fixed stars, totally invisible by their smallness", all close to Jupiter and lying on a straight line through it. Observations on subsequent nights showed that the positions of these "stars" relative to Jupiter were changing in a way that would have been inexplicable if they had really been fixed stars. On 10 January Galileo noted that one of them had disappeared, an observation which he attributed to its being hidden behind Jupiter. Within a few days he concluded that they were orbiting Jupiter: Galileo stated that he had reached this conclusion on 11 January. He had discovered three of Jupiter's four largest satellites (moons); he discovered the fourth on 13 January. His observations of the satellites of Jupiter created a revolution in astronomy: a planet with smaller planets orbiting it did not conform to the principles of Aristotelian cosmology, which held that all heavenly bodies should circle the Earth. Many astronomers and philosophers initially refused to believe that Galileo could have discovered such a thing. By showing that other planets, like Earth, could have moons of their own that followed prescribed paths, and hence that orbital mechanics did not apply only to the Earth, planets, and Sun, Galileo had essentially shown that other planets might be "like Earth". Newton made clear his heliocentric view of the Solar System, developed in a somewhat modern way: already in the mid-1680s he recognised the "deviation of the Sun" from the centre of gravity of the Solar System. For Newton, it was not precisely the centre of the Sun or any other body that could be considered at rest, but rather "the common centre of gravity of the Earth, the Sun and all the Planets is to be esteem'd the Centre of the World", and this centre of gravity "either is at rest or moves uniformly forward in a right line" (Newton adopted the "at rest" alternative in view of common consent that the centre, wherever it was, was at rest). Milky Way's Galactic Center as center of the Universe Before the 1920s, it was generally believed that there were no galaxies other than the Milky Way (see for example The Great Debate). Thus, to astronomers of previous centuries, there was no distinction between a hypothetical center of the galaxy and a hypothetical center of the universe. In 1750 Thomas Wright, in his work An original theory or new hypothesis of the Universe, correctly speculated that the Milky Way might be a body of a huge number of stars held together by gravitational forces and rotating about a Galactic Center, akin to the Solar System but on a much larger scale. The resulting disk of stars is seen as a band on the sky from the Earth's perspective inside the disk. In a treatise in 1755, Immanuel Kant elaborated on Wright's idea about the structure of the Milky Way. In 1785, William Herschel proposed such a model based on observation and measurement, leading to scientific acceptance of galactocentrism, a form of heliocentrism with the Sun at the center of the Milky Way. The 19th-century astronomer Johann Heinrich von Mädler proposed the Central Sun Hypothesis, according to which the stars of the universe revolved around a point in the Pleiades. The nonexistence of a center of the Universe In 1917, Heber Doust Curtis observed a nova within what was then called the "Andromeda Nebula". A search of the photographic record turned up 11 more novae. Curtis noticed that these novae were drastically fainter than novae in the Milky Way. 
Based on this, Curtis was able to estimate that Andromeda was 500,000 light-years away. As a result, Curtis became a proponent of the so-called "island Universes" hypothesis, which held that objects previously believed to be spiral nebulae within the Milky Way were actually independent galaxies. In 1920, the Great Debate between Harlow Shapley and Curtis took place, concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the Universe. To support his claim that the Great Andromeda Nebula (M31) was an external galaxy, Curtis also noted the appearance within it of dark lanes resembling the dust clouds of the Milky Way, as well as its significant Doppler shift. In 1922 Ernst Öpik presented an elegant and simple astrophysical method to estimate the distance of M31. His result put the Andromeda Nebula far outside the Milky Way, at a distance of about 450,000 parsecs, or about 1,500,000 ly. Edwin Hubble settled the debate about whether other galaxies exist in 1925 when he identified extragalactic Cepheid variable stars for the first time on astronomical photos of M31. These were made using the 2.5-metre (100 in) Hooker telescope, and they enabled the distance of the Great Andromeda Nebula to be determined. His measurement demonstrated conclusively that this feature was not a cluster of stars and gas within our own galaxy, but an entirely separate galaxy located a significant distance from the Milky Way. This proved the existence of other galaxies. Expanding Universe Hubble also demonstrated that the redshift of other galaxies is approximately proportional to their distance from Earth (Hubble's law). This created the appearance that our own galaxy lay at the center of an expanding Universe; Hubble, however, rejected that interpretation on philosophical grounds. The redshift observations of Hubble, in which galaxies appear to be moving away from us at a rate proportional to their distance from us, are now understood to be a consequence of the expansion of the universe: all observers anywhere in the Universe will observe the same effect. Copernican and cosmological principles The Copernican principle, named after Nicolaus Copernicus, states that the Earth is not in a central, specially favored position. Hermann Bondi named the principle after Copernicus in the mid-20th century, although the principle itself dates back to the 16th–17th century paradigm shift away from the geocentric Ptolemaic system. The cosmological principle is an extension of the Copernican principle which states that the Universe is homogeneous (the same observational evidence is available to observers at different locations in the Universe) and isotropic (the same observational evidence is available by looking in any direction in the Universe). A homogeneous, isotropic Universe does not have a center. See also Center of the universe (disambiguation) Cosmic microwave background Earth's inner core Galactic Center Geographical centre of Earth Great Attractor Illustris project List of places referred to as the Center of the Universe Multiverse Religious cosmology Sun – the center of the Solar System Three-torus model of the universe Notes References Center of the Universe Physical cosmology Obsolete scientific theories
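The "no center" point made by Hubble's law can be illustrated with a minimal numerical sketch. The Hubble constant used below (70 (km/s)/Mpc) and the one-dimensional galaxy positions are assumptions chosen for illustration only, not measured values:

```python
# Minimal sketch of Hubble's law, v = H0 * d, with an assumed illustrative H0.
H0 = 70.0  # Hubble constant in (km/s)/Mpc (round value for illustration)

# Galaxies on a line, positions in Mpc; velocities as seen from the origin.
positions = [-2.0, -1.0, 0.0, 1.0, 3.0]
velocities = [H0 * x for x in positions]

# Re-express the same motions from the galaxy at x = 1.0: relative velocities
# still obey v_rel = H0 * x_rel, so that observer also appears "central".
x0, v0 = 1.0, H0 * 1.0
for x, v in zip(positions, velocities):
    x_rel, v_rel = x - x0, v - v0
    assert abs(v_rel - H0 * x_rel) < 1e-9  # the same law holds in the new frame
    print(f"x_rel = {x_rel:+.1f} Mpc  ->  v_rel = {v_rel:+7.1f} km/s")
```

Because the proportionality survives a change of origin, every observer measures the same Hubble flow; that is the quantitative sense in which the expansion has no privileged center.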
History of the center of the Universe
Physics,Astronomy
2,827
32,115,422
https://en.wikipedia.org/wiki/Geographical%20centre%20of%20Ireland
The Geographical Centre of Ireland, according to an investigation and calculation carried out by the official Irish government mapping agency, Ordnance Survey Ireland (OSI), published on the official OSI website on 24 February 2022, is near the village of Castletown Geoghegan, County Westmeath. The exact location lies at the Irish Transverse Mercator (ITM) coordinates 633015.166477, 744493.046768, at latitude 53.4494762 and longitude -7.5029786. It sits in the townland of Adamstown within a National Monuments Zone, on the location of an ancient graveyard and near the remains of Kilbride church. The investigation assumed a calculation that would take in the whole of the mainland island of Ireland but exclude the islands, of which there are approximately 8,000 mapped islands, outcrops, etc. The result is, however, based on current coastal data; any coastal erosion or accretion, past or future, would change the data and shift the exact location calculated. Various locations have been claimed in the past to be the geographical centre of Ireland using various methodologies (though sometimes without any updated references or supporting academic methodology). OSI has determined that the most appropriate methodology currently available is the one published in February 2022, which placed the location just outside Castletown Geoghegan. In Irish mythology the Hill of Uisneach, which is about 17.7 kilometres west of Mullingar and two kilometres from the village of Loughanavally, was generally considered to be the ceremonial and ancient spiritual centre of Ireland, though at times the Hill of Tara was also regarded in a similar manner. The Hill of Uisneach is the nearest of the alternative historical locations to the point calculated by the OSI, which lies approximately 5 kilometres south-east of it in a direct line. References Ordnance Survey Ireland website blog: https://osi.ie/blog/where-is-the-centre-of-ireland/ 24 February 2022 Geography of Ireland Ireland
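OSI's exact computational method is not reproduced here, but a common definition of a "geographical centre" of this kind is the area-weighted centroid of the digitised coastline polygon, computed in a projected coordinate system such as ITM. The sketch below shows that calculation with the shoelace formula; the square "coastline" is a made-up stand-in, not real OSI data:

```python
# Illustrative sketch: the area-weighted centroid of a simple polygon,
# computed with the shoelace formula. Coordinates are toy planar values.

def polygon_centroid(points):
    """Centroid (cx, cy) of a simple polygon given as [(x, y), ...]."""
    a = cx = cy = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # twice the signed area of each triangle
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6 * a), cy / (6 * a)

# Toy "coastline" (a square); its centroid is the middle of the square:
print(polygon_centroid([(0, 0), (4, 0), (4, 4), (0, 4)]))  # -> (2.0, 2.0)
```

A real computation would feed in the full mainland coastline (many thousands of vertices) and, as OSI did, exclude offshore islands before taking the centroid.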
Geographical centre of Ireland
Physics,Mathematics
439
37,195,454
https://en.wikipedia.org/wiki/Nu%20Hydri
ν Hydri, Latinized as Nu Hydri, is a single star in the southern circumpolar constellation of Hydrus. It is orange-hued and faintly visible to the naked eye with an apparent visual magnitude of 4.76. This object is located approximately 331 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +3 km/s. It is a member of the Ursa Major Moving Group, a set of stars that share a common motion through space. This is an aging giant star with a stellar classification of K3III. With the supply of hydrogen at its core exhausted, the star has expanded and cooled; at present it has 21 times the radius of the Sun. It is about 2.4 billion years old, with estimates of its mass ranging from 1.8 to 3.5 times the mass of the Sun. The star is radiating 184 times the Sun's luminosity from its swollen photosphere at an effective temperature of 4,612 K. References K-type giants Hydrus Hydri, Nu Durchmusterung objects 018293 013244 0872 Ursa Major moving group
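The quoted radius, temperature, and luminosity can be cross-checked against the Stefan–Boltzmann relation L/L☉ = (R/R☉)² (T/T☉)⁴. A minimal sketch, assuming the nominal solar effective temperature of 5,772 K and using the article's figures as inputs:

```python
# Cross-check of the quoted stellar parameters (values from the article).
T_SUN = 5772.0   # K, nominal solar effective temperature (assumed)
r_ratio = 21.0   # stellar radius in solar radii
t_eff = 4612.0   # K, effective temperature

# Stefan-Boltzmann scaling: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4
luminosity = r_ratio**2 * (t_eff / T_SUN)**4
print(f"L ~ {luminosity:.0f} Lsun")  # -> ~180 Lsun
```

The result, roughly 180 L☉, agrees with the quoted 184 L☉ to within the rounding of the inputs.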
Nu Hydri
Astronomy
247
25,833,466
https://en.wikipedia.org/wiki/Edward%20Brinton
Edward Brinton (January 12, 1924 – January 13, 2010) was a professor of oceanography and research biologist. His particular area of expertise was euphausiids, or krill, small shrimp-like creatures found in all the oceans of the world. Early life Brinton was born on January 12, 1924, in Richmond, Indiana, to a Quaker couple, Howard Brinton and Anna Shipley Cox Brinton. Much of his childhood was spent on the grounds of Mills College, where his mother was Dean of Faculty and his father was a professor. The family later moved to the Pendle Hill Quaker Center for Study and Contemplation, in Pennsylvania, where his father and mother became directors. Academic career Brinton attended high school at Westtown School in Chester County, Pennsylvania. He studied at Haverford College and graduated in 1949 with a bachelor's degree in biology. He enrolled at Scripps Institution of Oceanography as a graduate student in 1950 and was awarded a Ph.D. in 1957. He continued on as a research biologist in the Marine Life Research Group, part of the CalCOFI program. He soon turned his dissertation into a major publication, The Distribution of Pacific Euphausiids. In this large monograph, he laid out the major biogeographic provinces of the Pacific (and part of the Atlantic), large-scale patterns of pelagic diversity, and one of the most rational hypotheses for the mechanism of sympatric, oceanic speciation. In all of these studies, physical oceanography and circulation played a prominent part. His work has since been validated by others and continues, to this day, to form the basis for attempts to understand large-scale pelagic ecology and the role of the physics of water movement in the regulation of pelagic ecosystems. He also led studies of how climatic variations have driven the large variations in the California Current and its populations and communities. He described several new species and, in collaboration with Margaret Knight, worked out the complicated life histories of many euphausiid species. He received a formal tribute from the international GLOBEC program in 2009. He served as a major adviser and scientist for the State Department-sponsored Naga expeditions in the Gulf of Thailand and, later, as the curator of the UNESCO-sponsored Indian Ocean Biological Center in Cochin, India. He taught numerous students in both venues. His academic career continued at Scripps until his retirement in 1991. Family life Brinton met and married Desiree Ward in 1948. He had four children and was widowed in 1976. He remained unmarried until the time of his death. His primary residence was in La Jolla, California. He and his family lived in Bangkok, Thailand, for a year in 1960, and in Kerala, India, from 1965 to 1967. He died after a long illness on January 13, 2010. Publications Brinton, Edward. The Distribution of Pacific Euphausiids. Bulletin of the Scripps Institution of Oceanography, vol. 8, no. 2, 1962. Brinton, Edward. Variable Factors affecting the Apparent Range and Estimated Concentration of Euphausiids in the North Pacific. Pacific Science 16, no. 4 (October 1962): 374–408. Brinton, Edward: Euphausiids of Southeast Asian waters. Naga Report volume 4, part 5. La Jolla: University of California, Scripps Institution of Oceanography, 1975. Brinton, Edward: The oceanographic structure of the eastern Scotia Sea—III. Distributions of euphausiid species and their developmental stages in 1981 in relation to hydrography. Deep-Sea Research 1985;32:1153–1180. 
References External links Portrait of the Brinton family in the 1930s by Imogen Cunningham, photographer. (Edward at the far left) Recent GLOBEC Tribute-summer 2009 Brinton and Townsend Euphausiid Database Transpac Expedition Downwind Expedition 1924 births 2010 deaths Haverford College alumni University of California, San Diego faculty Oceanography American expatriates in India
Edward Brinton
Physics,Environmental_science
836
3,901,279
https://en.wikipedia.org/wiki/Sepharose
Sepharose is a tradename for a crosslinked, beaded form of agarose, a polysaccharide polymer material extracted from seaweed. Its brand name is a portmanteau of Separation, Pharmacia, and Agarose. A common application for the material is in chromatographic separations of biomolecules. Sepharose is a registered trademark of Cytiva (formerly GE Healthcare and, before that, Pharmacia, Pharmacia LKB Biotechnology, Pharmacia Biotech, Amersham Pharmacia Biotech, and Amersham Biosciences). Various grades and chemistries of Sepharose are available. Iodoacetyl functional groups can be added to selectively bind cysteine side chains, and this method is often used to immobilize peptides. Sepharose/agarose, combined with some form of activation chemistry, is also used to immobilize enzymes, antibodies and other proteins and peptides through covalent attachment to the resin. Common activation chemistries include cyanogen bromide (CNBr) activation and reductive amination of aldehydes to attach proteins to the agarose resin through lysine side chains. See also Gel permeation chromatography Superose Sephadex References Polysaccharides
Sepharose
Chemistry
287
70,303,561
https://en.wikipedia.org/wiki/Prize%20of%20the%20Verkhovna%20Rada%20of%20Ukraine
The Prize of the Verkhovna Rada of Ukraine, for young scientists in the field of basic and applied research and scientific and technological development, was established to promote domestic science and technology, to increase the participation of young scientists in interdisciplinary basic and applied research and scientific development, and to raise the prestige of the researcher. The award was established by the Verkhovna Rada of Ukraine in 2007. Every year since January 1, 2008, 20 Prizes of the Verkhovna Rada of Ukraine have been awarded to young scientists; the awardees also receive a sum of ₴20,000 each. Laureates Tetiana Ivanova Oleksandr Kolodiazhnyi References Awards of the Verkhovna Rada of Ukraine Ukrainian awards State Prizes of Ukraine Badges
Prize of the Verkhovna Rada of Ukraine
Mathematics,Technology
157
35,613,133
https://en.wikipedia.org/wiki/Josiphos%20ligands
A Josiphos ligand is a type of chiral diphosphine which has been modified to be substrate-specific; such ligands are widely used in asymmetric catalysis and enantioselective synthesis. History Modern enantioselective synthesis typically applies a well-chosen homogeneous catalyst for key steps. The ligands on these catalysts confer chirality. The Josiphos family of privileged ligands provides especially high yields in enantioselective synthesis. In the early 1990s, Antonio Togni, at the Ciba (now Novartis) Central Research Laboratories, began studying previously known ferrocenyl ligands for a Au(I)-catalyzed aldol reaction. Togni's team began considering diphosphine ligands, and technician Josi Puleo prepared the first ligands with secondary phosphines. The team applied Puleo's products in a Ru-catalyzed enamide hydrogenation; in a dramatic success, the reaction had e.e. >99% and a turnover frequency (TOF) of 0.3 s−1. The same ligand proved useful in production of (S)-metolachlor, the active ingredient in the most common herbicide in the United States. Its synthesis requires enantioselective hydrogenation of an imine; after introduction of the catalyst, the reaction proceeds with 100% conversion, a turnover number (TON) >7 million, and a turnover frequency >0.5 ms−1. This process is the largest-scale application of enantioselective hydrogenation, producing over 10 kilotons/year of the desired product with 79% e.e. Josiphos ligands also serve in non-enantioselective reactions: a Pd-catalyzed reaction of aryl chlorides and aryl vinyl tosylates with TON of 20,000 or higher, catalytic carbonylation, and Grignard and Negishi couplings. A variety of Josiphos ligands are commercially available under licence from Solvias. The (R-S) ligand and its enantiomer provide higher yields and enantioselectivities than the (R,R) diastereomer. The ferrocene scaffold has proved to be versatile. The consensus naming convention abbreviates an individual ligand as (R)-(S)-R2PF-PR'2: the substituent on the Cp ring is written in front of the F and the R group on the chiral center after the F. Reactions using Josiphos ligands Some reactions that are accomplished using M-Josiphos complexes as catalyst are listed below. Other reactions where Josiphos ligands can be used include hydrogenation of C=N, C=C and C=O bonds, catalyzed allylic substitution, hydrocarboxylation, Michael addition, allylic alkylation, Heck-type reactions, oxabicycle ring-opening, and allylamine isomerization. Hydroboration of styrene Conducted at −78 °C, this reaction gives e.e. values up to 92% and a TOF of 5-10 h−1. Hayashi's Rh-binap complex gives better yield. Hydroformylation of styrene This reaction gives up to 78% e.e. of the (R) product, but a low TON and TOF of 10-210 and 1-14 h−1, respectively. Reductive amination This is the preparation of (S)-metolachlor. Good yields and 100% conversion crucially require AcOH solvent. Hydrogenation of exocyclic methyl imine This key step in the synthesis of an HIV integrase inhibitor, Crixivan, is one of the few known homogeneous heteroarene hydrogenation reactions. Bulky R groups increase the catalyst's performance, with 97% e.e. and a TON and TOF of 1,000 and 8 min−1, respectively. Asymmetric synthesis of chromanoylpyridine derivatives This reaction, for an intermediate in the synthesis of an antihypertensive and anti-alopecic chromanoylpyridine derivative, exhibits high enantioselectivity but low activity. Modified Josiphos ligands Many variations of Josiphos ligands have been reported. 
One family is prepared from Ugi's amine. An important improvement on the initial syntheses has been the use of N(CH3)2 rather than acetate as the leaving group, although an acetic acid solvent gives better yields. Further reading References Coordination chemistry Phosphines Ferrocenes Ligands Diphosphines
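The figures of merit quoted throughout (e.e., TON, TOF) follow from simple definitions, encoded in the sketch below. The 89.5/10.5 enantiomer split is back-calculated to reproduce the 79% e.e. quoted for the metolachlor process and is illustrative, not a reported measurement:

```python
# Figures of merit for asymmetric catalysis (definitions only; inputs illustrative).

def enantiomeric_excess(major: float, minor: float) -> float:
    """e.e. (%) from the amounts (or fractions) of the two enantiomers."""
    return 100.0 * (major - minor) / (major + minor)

def turnover_frequency(ton: float, time_s: float) -> float:
    """Average TOF (s^-1): turnovers (mol product per mol catalyst) per second."""
    return ton / time_s

# An 89.5 : 10.5 enantiomer ratio corresponds to the 79% e.e. quoted above.
print(enantiomeric_excess(89.5, 10.5))  # -> 79.0

# Arithmetic check: at the enamide hydrogenation's quoted TOF of 0.3 s^-1,
# reaching a TON of 1,000 would take roughly an hour at that average rate.
print(1000 / 0.3 / 3600)  # -> ~0.93 hours
```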
Josiphos ligands
Chemistry
981
31,417,932
https://en.wikipedia.org/wiki/List%20of%20rail%20accidents%20%281990%E2%80%931999%29
This is a list of rail accidents from 1990 to 1999. 1990 January 4 – Pakistan – Sukkur rail disaster: A Multan–Karachi Bahauddin Zakaria Express collided head-on with an empty freight train at Sangi station, Sukkur, Sindh. The train was to pass through Sangi, but incorrectly set rail points directed it to a siding where the freight train was parked. 307 people were killed and another 700 injured in Pakistan's worst rail disaster. February 2 – West Germany – Rüsselsheim train disaster: Two S-Bahn commuter trains collided in Rüsselsheim, killing 17 and injuring 80. February 16 – Switzerland – A passenger train collided with a rail-mounted crane at Saxon, killing three people and injuring 12. March 7 – United States – 1990 Philadelphia subway accident: A bolt securing a traction motor on a SEPTA subway-elevated train failed, causing the train to derail. Three people were killed and 150 injured. May 6 – Australia – Cowan rail accident: The 3801 Limited special steam passenger train stalled while climbing a steep gradient from the Hawkesbury River to Cowan. It was then rear-ended by a CityRail inter-urban passenger train, killing six people. Sand applied to the rails interfered with track signals and gave the CityRail train a false clear indication. August 20 – Poland – Ursus rail crash: The Silesia passenger train, from Prague to Warsaw, telescoped the last car of a passenger train from Szklarska Poręba to Warsaw, killing 16 and injuring 43. December 12 – United States – Back Bay rail accident: Amtrak's Night Owl, the Northeast Corridor's overnight train, travelling at excessive speed around a curve inside a tunnel, derailed and rear-ended an MBTA commuter train in Back Bay station in Boston. 453 people were injured. December 20 – Taiwan – A Kaohsiung–Taipei express train collided at a level crossing at Lu Chu, Kaohsiung, with a bus carrying 51 farmers; the bus burst into flames, killing 25 people and injuring 32. 1991 January 7 – Hungary – Tram No. 1342 (a Ganz articulated tram) operated by BKV derailed and overturned in Budapest at the corner of Vajda Péter utca and Orczy út due to an unintentional point switching by the tram driver, killing three passengers and a pedestrian. May 3 – Australia – In Henty, New South Wales, a New South Wales XPT locomotive derailed due to track failures; six people were injured. May 14 – Japan – Shigaraki train disaster, Shigaraki, Shiga: A Shigaraki Kōgen Railway (SKR) passenger train collided head-on with a JR West passenger train after the SKR train passed a red signal, killing 42 people. A malfunctioning signal gave the JR West train a green indication when the approaching SKR train should have turned the signal to red. Confusion among SKR staff stemming from the signal problems prompted them to send the train against a red signal. July 15 – United States – Dunsmuir, California – California's largest hazardous chemical spill: A 19,000-gallon (72,000 L) tank car containing the pesticide/herbicide metam sodium derailed from a Southern Pacific freight train and tumbled off the bridge over the Sacramento River at the Cantara Loop before rupturing on the rocks below, spilling the car's entire load into the river. Virtually every aquatic organism on a 40-mile (64 km) stretch of river was killed. July 21 – United Kingdom – Glasgow: The Newton rail accident killed four and injured 22. Junction layout was cited as a contributing factor. 
July 29 – United States – Seacliff, California: A Southern Pacific freight train carrying hazardous chemicals derailed in the Ventura County coastal community of Seacliff. Four of the 14 railroad cars that derailed were carrying two types of chemicals: a half-strength aqueous hydrazine solution, used to make agricultural, metal plating, plastics and photo processing chemicals; and naphthalene, an industrial solvent for making other chemicals. July 31 – United States – Lugoff, South Carolina: The rear portion of Amtrak's Silver Star derailed on a former Seaboard Air Line route of the CSX railroad after a faulty switch split as the train passed over it, directing a coach into a hopper car standing on a siding and derailing equipment. Eight passengers died and 76 were injured. August 5 – Canada – Kinsella, Alberta: A CN intermodal train struck a truck carrying light crude oil at a marked highway crossing. The impact ignited the oil and significantly damaged the train, killing the three train crew members and the truck driver. August 28 – United States – New York City: 1991 Union Square derailment – Five people were killed and more than 200 injured after a #4 Lexington Avenue express train derailed over a switch just north of Union Square. Two subway cars broke open as they struck the steel tunnel support beams. The uninjured motorman, who was reported to have been handling the train erratically, was later found to be legally drunk. The accident was instrumental in imposing new federal rules for engineer certification and toxicology testing. September 6 – Republic of the Congo – More than 100 people were killed after a passenger train from Pointe Noire collided head-on with a goods train carrying timber from Brazzaville. October 16 – France – A freight train overran a closed signal after its engineer suffered a heart attack, and fouled the path of the Nice–Paris night train at Melun, killing 16 people. The deadman mechanism worked normally but failed to stop the train in time. This led to the adoption of the KVB automatic train control system. November 15 – Taiwan – 1991 Miaoli train collision – A northbound Tze-Chiang train ran past a stop signal and rear-ended a southbound Chukuang train, killing 30 people and injuring 112. December 7 – United Kingdom – Severn Tunnel rail accident – An InterCity 125 diesel multiple unit was rear-ended by a Class 155 Sprinter inside the Severn Tunnel, injuring 185 people. 1992 March 3 – Russia – Podsosenka train disaster: Yurmala passenger train No. 004 ran past a restrictive signal and collided head-on with oncoming freight train No. 3455 before catching fire, killing 43 people and injuring 108. March 12 – Sweden – 1992 Gothenburg tram accident: An unmanned tram rolled down a street in central Gothenburg at high speed and crashed into waiting passengers at a tram stop at Vasaplatsen, killing 13 people. April 29 – United States – At Bell King Road in Newport News, Virginia, a crossing without gates or warning lights, Amtrak's Colonial passenger train collided with a dump truck, killing the truck driver and injuring 54 passengers. June 30 – United States – Nemadji River train derailment: Near Superior, Wisconsin, a Burlington Northern freight train derailed on a trestle, spilling benzene into the Nemadji River and releasing a toxic vapor which killed wild animals and outside pets. August 8 – Switzerland – A train and a tram collided at Zurich, killing one person and injuring nine. 
August 12 – United States – Just outside Newport News, Virginia, Amtrak's Colonial passenger train, traveling at high speed, entered a switch that had moments before been opened by a pair of saboteurs, injuring dozens. The saboteurs, Coast Guardsmen Joseph Loomis and Raymond Bornman Jr., were sentenced to federal prison terms. October 10 – South Africa – A steam passenger train running during the Lady Grey Spring Festival derailed at high speed, killing the engine driver and 5 passengers. October 11 – China – Liaoning, Fushun – Two locomotives of the Shenyang Railway Bureau's Meihekou locomotive depot, coupled together to pull a freight train over the Shenyang–Jilin Railway, approached Shimenling railway station, where they were to stop for a passenger train from Tianjin railway station to Jilin railway station. Because one of the train's angle cocks was closed, the brakes failed. To avoid a collision, the freight train was directed into the safety siding at Shimenling railway station, where it derailed and overturned, killing four locomotive crew members. November 15 – Germany – 11 people died and 52 were injured after the wreckage of a derailed freight train was hit by an express train near Northeim. November 30 – Netherlands – Hoofddorp train accident: An Intercity train travelling from Amsterdam to Vlissingen derailed near Hoofddorp, killing five people and injuring 33. 1993 January 31 – Kenya – Along the Ngai Ndethya River near Mtito Andei, a Kenya Railways train traveling from Mombasa to Nairobi derailed due to flooding from the river, killing 140 people. March 28 – South Korea – A Mugunghwa-ho train in the vicinity of Gupo station in Busan rolled over due to subsidence under a section of track caused by nearby construction, killing 78 people and injuring 198, making it the worst rail accident in South Korea. September 22 – United States – Big Bayou Canot rail accident, near Mobile, Alabama: Barges being pushed by an off-course towboat collided with a bridge piling; the bridge shifted out of alignment, creating a kink in the rails of CSX's former Louisville & Nashville Gulf Coast line. Minutes later, Amtrak's Sunset Limited derailed at high speed on the misaligned track and plunged into the water, causing an enormous fuel spill and fire that killed 47 people in Amtrak's deadliest accident. November 2 – Indonesia – 20 people were killed and 200 injured after two commuter trains collided head-on in Ratujaya, near Depok. November 11 – United States – Near Kelso, Washington: A Union Pacific and a Burlington Northern freight train collided head-on after the Burlington Northern train failed to stop for a red signal, likely due to dense fog, killing the five crew members on board the two trains. In response, the two railroads implemented a new safety system called "Precision Train Control" (an ancestor of the later federally mandated Positive Train Control) on 750 miles (1,200 km) of UP and BN track. 1994 January 13 – United States – In Pasco County, Florida, a Barnum & Bailey circus train carrying circus members derailed due to a track failure and caught fire, killing two people. February 8 – Georgia – A head-on collision between two passenger trains killed three engine drivers. March 8 – Switzerland – A freight train derailed at Zurich; a tank wagon carrying petrol exploded, injuring three people. March 9 – South Africa – A commuter train derailed in a suburb of Durban, KwaZulu-Natal, killing at least 63 people and injuring 370 in one of South Africa's deadliest rail disasters. 
March 21 – Switzerland – The side of a passenger train from Brig to Romanshorn was ripped out by a crane wagon at Däniken, killing nine people. May 16 – United States – Amtrak's Silver Meteor passenger train, bound from New York to Florida, derailed near Selma, North Carolina, after hitting a cargo container jutting out from a passing freight train. The Amtrak engineer was killed and nearly 100 passengers were injured. June 25 – United Kingdom – Greenock rail crash, Scotland: Two people were killed after a train struck concrete blocks placed on the track by vandals. August 3 – United States – The Lake Shore Limited, operated by Amtrak, derailed on Conrail tracks in Batavia, New York. 108 passengers and 10 crew members were injured. Cars 8–12 of the consist fell down an embankment. September 22 – Angola – 1994 Tolunda rail disaster – Faulty brakes caused the derailment of a train in Tolunda. The train plunged into a ravine, killing 300 people. September 29 – Germany – Two passenger trains collided head-on near Bad Bramstedt, killing six people and injuring 67. October 15 – United Kingdom – Cowden rail crash – Two trains collided head-on in Cowden, Kent, after a driver ran through a red signal, killing five people and injuring 12. November 20 – Canada – VIA Rail train No. 66, travelling eastward at approximately 155 km/h (96 mph), struck a piece of rail intentionally placed on the track on the CN North America (CN) Kingston Subdivision, in Brighton, Ontario. The piece of rail punctured a locomotive fuel tank and severed electrical power cables, creating electrical arcing which ignited the leaking fuel. A fire erupted, and the trailing portion of the locomotive and the first two passenger cars behind it became engulfed in flames. 46 passengers were injured. December 2 – Hungary – Szajol – A train bound for Budapest derailed after passing through an incorrectly set switch at 110 km/h. It collided with a station building, killing 31 people and injuring 52. It was the second-worst rail incident in post-World War II Hungary. December 14 – United States – San Bernardino, California – A Santa Fe intermodal freight train rear-ended a parked Union Pacific coal train at the Cajon Pass due to a kink in the air hose that prevented the brakes from being applied; two crewmembers were injured when they were forced to jump from their runaway train going 50 mph (80 km/h). 1995 January 31 – United Kingdom – 1995 Ais Gill rail accident: A diesel multiple unit ran into a landslip at Ais Gill, Cumbria, and was derailed. Another diesel multiple unit then collided with it. One person was killed and 30 were injured. May 10 – South Africa – Vaal Reefs tragedy: A mine locomotive operating below ground fell into an elevator shaft. It struck the "detaching hook", separating the cable from the double-deck elevator car, which fell to the bottom of the shaft. All 104 miners on board were killed. It is history's worst elevator accident. May 26 – United States – Two southbound CSX freight trains collided near Flomaton, Alabama, on the former Louisville and Nashville Railroad, forcing the evacuation of residents. One of the derailed tank cars leaked vinyl chloride. June 16 – United States – Canadian Pacific 1278 boiler explosion: Gardners, Pennsylvania. A Gettysburg Railroad steam locomotive, Canadian Pacific 4-6-2 number 1278, suffered a catastrophic boiler explosion due to low water, seriously injuring three crew. 
The National Transportation Safety Board reported that an even more serious explosion had been averted by the fact that the locomotive was fitted with fusible plugs, a safety feature rarely found in North America but common in Europe. Major new regulation of steam locomotives followed. June 24 – Czech Republic – Krouna train accident: Four runaway carriages smashed into a passenger train. 19 people were killed; only four passengers survived. July 10 – United States – Canadian Pacific 2317: Dunmore, Pennsylvania. A Steamtown-owned steam locomotive, Canadian Pacific 4-6-2 number 2317, en route from Moscow to Scranton, struck and killed two young boys who were trying to pry one of their jammed ATVs from the tracks. July 11 – United Kingdom – Largs, Scotland – A train collided with a buffer stop and hit a ticket office, injuring four people. August 11 – Canada – 1995 Russell Hill subway accident, Toronto: A subway train collided with the stationary train ahead after a driver misinterpreted a signal and the automatic train stop failed. Three people were killed and 30 injured. August 20 – India – Firozabad rail disaster: A passenger train collided with another train that had stopped after running over a cow in Firozabad, killing 358 people. October 9 – United States – Palo Verde, Arizona, derailment: Unidentified saboteurs shifted a rail out of alignment after attaching a jumper circuit that kept the signalling circuit closed. Amtrak's Sunset Limited subsequently derailed, plunging four cars into a dry riverbed, killing one person and injuring 78, 12 of them seriously. October 24 – Indonesia – In Kadipaten, Tasikmalaya, two coupled trains lost their brakes, crashed, and fell into a ravine in the Trowek area (now around Cirahayu station), killing dozens and injuring hundreds more. October 25 – United States – 1995 Fox River Grove bus–train collision: A school bus caught between a railroad crossing and a red traffic light was hit by a Metra commuter train, killing seven students. October 28 – Azerbaijan – 1995 Baku Metro fire: A Baku Metro subway train caught fire between Ulduz and Nariman Narimanov stations during the evening rush hour. 289 people were killed and 265 injured. An electrical fault was blamed, but sabotage was not ruled out. The accident remains the deadliest subway disaster to date. December 12 – Germany – Garmisch-Partenkirchen train collision: In Garmisch, an ÖBB train crashed into a DB train after running a red signal, killing one person and injuring 47. December 21 – Egypt – A Cairo–Beni Suef passenger train and a Cairo–Aswan passenger train collided in dense fog at Badrasheen railroad station, killing 75 people and injuring 150. December 25 – Spain – Jaén: An express from Barcelona to Seville via Málaga derailed at a bridge over the deep Despeñaperros canyon. The locomotive came to rest in a near-vertical position leaning against the bridge, but remained coupled to the first car, suspending the car's forward end above the bridge. The two enginemen were killed. 1996 January 6 – United States – Shady Grove Metrorail station, Derwood, Maryland: A Washington Metro train overran the station platform at Shady Grove and collided with a stored train. The operator of the overrunning train was killed. January 14 – Australia – Hines Hill train collision: Two trains entered a passing loop from opposite directions after one of them passed a signal at danger. The engineer of one of the trains and one passenger were killed. 
February 1 – United States – Cajon Pass, San Bernardino County, California: An AT&SF freight train carrying hazardous materials derailed due to failed brakes in the steep pass, killing two crewmen, injuring the engineer, and shutting down Interstate 15 for several days due to a cloud of noxious fumes. February 9 – United States – 1996 Secaucus train collision: Two New Jersey Transit trains collided in the morning rush, killing three people. The cause was later determined to be the colorblindness of one of the engineers. February 14 – United States – St. Paul, Minnesota – A BNSF freight train slammed into two parked SOO Line locomotives and a parked Canadian Pacific freight train in the St. Paul railyard due to a kink in the air hose that prevented the brakes from being applied. 44 cars and six locomotives derailed, nine workers were injured, and an office building was destroyed. February 16 – United States – 1996 Maryland train collision: A MARC commuter train bound for Washington Union Station collided with outbound Amtrak train No. 29, the westbound Capitol Limited, after the MARC crew apparently forgot an approach signal and failed to reduce speed, killing three crew and eight passengers aboard the MARC train. Eight of the dead were killed by smoke and flames, possibly ignited by oil pot switch heaters. This led the FRA to institute the Delay in Block Rule, and it was also a major impetus for the Passenger Equipment Safety Standards regulation (49 CFR Part 238). March 4 – United States – Weyauwega, Wisconsin, derailment: A broken switch derailed a Wisconsin Central train carrying liquefied petroleum gas and propane. The town of Weyauwega, Wisconsin, was evacuated for 18 days; the fire burned for most of that period. March 8 – United Kingdom – 1996 Stafford rail crash: A freight train derailed due to an axle failure and was then struck by a Travelling Post Office train, killing one person and injuring 22. April 18 – India – Gorakhpur – A Gonda passenger train crashed into a stationary freight train at Domingarh station, killing at least 60. April 21 – Finland – Jokela rail accident: A passenger train operating in heavy fog derailed at Jokela after overspeeding through a slow-speed turnout. The locomotive driver and three passengers were killed, and 75 were injured. July 2 – Ukraine – 1996 Dniprodzerzhynsk tram accident: An overcrowded tram car derailed and crashed through a concrete wall after its brakes failed while going down a steep hill, killing 34 people and injuring over 100. August 8 – United Kingdom – Watford rail crash: An electric multiple unit overran a signal at danger and stopped foul of a junction before being hit head-on by another electric multiple unit, killing one person and injuring 69. September 16 – Switzerland – A passenger train collided with a locomotive at Courfaivre after passing a signal at danger, injuring 30 people. September 26 – Russia – A diesel locomotive struck a school bus at a level crossing between Bataysk and Salsk in Rostov Oblast, killing 19 people, including 18 children. November 18 – France / United Kingdom – 1996 Channel Tunnel fire: A fire occurred on board a Eurotunnel Shuttle train inside the Channel Tunnel, injuring 34 people. 1997 January 12 – Italy – A Pendolino train derailed due to excessive speed just before Piacenza station, killing eight people and injuring 29 others. 
March 3 – Pakistan – Five coaches of a Karachi-bound train from Peshawar overturned near Khanewal after the train's brakes failed and it was diverted onto a runaway catch track, killing 110 people and injuring 150. July 28 – India – 12 people died in a collision at Faridabad in the Delhi suburbs. August 9 – United States – Amtrak's Southwest Chief (train No. 4) derailed on an old wooden bridge during a severe thunderstorm, injuring 173 people. August 28 – China – Jilin, Changchun – A passenger train bound for Harbin railway station collided with a freight train and derailed, killing four people and injuring nine. September 8 – France – A passenger train collided with a fuel tanker on a level crossing at Port-Sainte-Foy, Dordogne, killing 13 people and injuring over 40. This remains France's worst-ever level crossing accident. September 19 – United Kingdom – Southall rail crash, London: An inter-city train failed to stop at a red signal due to driver distraction and collided with a freight train crossing its path, killing seven people and injuring 139. November 8 – Ireland – Westport train derailment: An Iarnród Éireann train travelling between Dublin Heuston and Westport derailed 2 miles west of Knockcroghery, County Roscommon, due to a broken rail that had been missed by a routine inspection. The train was travelling at between 50 and 60 miles per hour when it derailed at a level crossing. Four people were detained in hospital, none seriously injured. November 13 – Switzerland – Two passenger trains collided at Appenzell after one of them passed a signal at danger, injuring 17 people. December 9 – Germany – Hanover: A regional train collided with a freight train carrying petrol. Five of the wrecked tankers ignited and exploded. More than 90 people were injured. 1998 February 15 – Cameroon – Yaoundé train explosion: Spilt fuel oil from a tanker train crash ignited and exploded, killing more than 100 people. March 6 – Finland – Jyväskylä rail accident: An express passenger train derailed at Jyväskylä after overspeeding while passing over a slow-speed turnout. The locomotive driver and nine passengers were killed, and 94 people were injured. April 4 – India – Fatuha train crash: At least 11 people died near Patna (near Fatuha station) on the Howrah–Delhi main line after the Howrah–Danapur Express derailed. June 3 – Germany – Eschede train disaster: An InterCityExpress high-speed train derailed between Hanover and Hamburg and struck a bridge after a faulty wheel rim failed. The bridge collapsed as the third car hit its pylons, and the remaining cars and the rear power unit jackknifed into the pile. The first three carriages separated from each other and came to a halt at Eschede railway station, while the undamaged front power car continued for another two kilometres until its brakes were automatically applied. At least 101 people were killed in the world's worst-ever high-speed rail disaster. October 18 – Egypt – An Alexandria–Cairo passenger train crashed at Kafr el-Dawar station, Nile Delta, after its driver exceeded the speed limit, killing 47 people and injuring 104. November 26 – India – Khanna rail disaster: The Sealdah Express rammed into three derailed carriages of the Amritsar-bound Golden Temple Mail at Khanna station, on the outskirts of Ludhiana, Punjab; at least 212 people were killed. 
1999 March 15 – United States – 1999 Bourbonnais, Illinois, train crash: The Amtrak City of New Orleans slammed into a semi-trailer truck loaded with steel concrete reinforcing bar (rebar) at a grade crossing and derailed. An ensuing fire set one Superliner sleeper car ablaze. Eleven people were killed and over 100 were injured. It was subsequently determined that the truck driver had ignored the grade crossing signals and driven around the lowered gates. April 12 – Germany – 1999 Wuppertal Schwebebahn accident: In Wuppertal, workers doing overnight maintenance on the Schwebebahn forgot to remove a metal clamp from the elevated monorail track. The first train in the morning hit it, derailed, and crashed into the river below, killing 5 passengers and injuring 47. April 23 – Canada – VIA Rail train No. 74 encountered an unexpectedly reversed switch, crossed over to the south main track, and derailed at Thamesville, Ontario, colliding with stationary rail cars on an adjacent yard track. The locomotive and all four passenger cars derailed, and the two train crew members in the locomotive cab were killed. August 2 – India – Gaisal train disaster: Two express trains collided head-on, killing more than 285 people. August 18 – Australia – Zanthus train collision: In Zanthus, Western Australia, an Indian Pacific passenger train crashed into a National Rail freight train and derailed, injuring 21 people. October 5 – United Kingdom – Ladbroke Grove rail crash: A high-speed head-on collision between two trains occurred due to a signal passed at danger. The fuel tanks of one of the trains were destroyed and the contents were ignited by overhead power lines, causing a fireball, killing 31 people and injuring more than 520. November 1 – Switzerland – Two trains collided in Bern after one of them passed a signal at danger, killing two people. December 3 – Australia – Glenbrook rail accident, New South Wales: The "stop and proceed" rule at a red signal was applied with insufficient care (too much speed), leading to a collision that killed seven people. December 30 – Canada – Mont-Saint-Hilaire, Quebec: Several tank cars filled with gasoline and heating oil from CN freight train 703, travelling westward, derailed as it was passing CN freight train 306, travelling in the opposite direction on a parallel track. Train 306 hit the derailed cars, which exploded on impact, killing the engineer and the conductor of the 306 and starting a fire which burned 2.7 million liters of oil and forced the evacuation of 350 families within a 2-kilometer radius over the next four days. See also List of road accidents – includes level crossing accidents. List of British rail accidents List of Russian rail accidents Years in rail transport References Sources External links Railroad train wrecks 1907–2007 Rail accidents 1990-1999 20th-century railway accidents Rail accidents
List of rail accidents (1990–1999)
Technology
5,702
10,259,654
https://en.wikipedia.org/wiki/Long%20gallery
In architecture, a long gallery is a long, narrow room, often with a high ceiling. In Britain, long galleries were popular in Elizabethan and Jacobean houses. They were normally placed on the highest reception floor of English country houses, usually running along a side of the house, with windows on one side and at the ends giving views, and doors to other rooms on the other side. They served several purposes: they were used for entertaining guests, for taking exercise in the form of walking when the weather was inclement, for displaying art collections, especially portraits of the family and royalty, and for acting as a corridor. A long gallery has the appearance of a spacious corridor, but it was designed as a room to be used in its own right, not just as a means of passing from one room to another, though many served that purpose too. In the 16th century, the seemingly obvious concept of the corridor had not yet been introduced to British domestic architecture; rooms were entered from outside or by passing from one room to another. Later, long galleries were built, sometimes in a revivalist spirit, as at Harlaxton Manor, an extravagant early-Victorian house in Jacobean style, and sometimes to house a large art collection, as at Buckingham Palace, which has a long interior space lit from above, called the Picture Gallery. Examples Notable long galleries in the United Kingdom can be seen at: Althorp, Northamptonshire Apethorpe Hall, Northamptonshire Aston Hall, Birmingham Astley Hall, Chorley Blickling Hall, Norfolk Burghley House, near Stamford, Lincolnshire (converted into separate rooms in the late 17th century, such as the rooms known as the Queen Elizabeth I Bedroom and the Blue Silk Dressing Room) Broughton Castle, Oxfordshire Burton Agnes Hall, Yorkshire Burton Constable Hall, Yorkshire Castle Ashby House, Northamptonshire, now 18th-century in style Charlton House, London Croome Court, Worcestershire, Adam interior Haddon Hall, Derbyshire Ham House, London – compact and running from front to rear Hardwick Hall, Derbyshire – one of the largest Harewood House Harlaxton Manor Hatfield House, Hertfordshire Hever Castle, Kent Little Moreton Hall, Cheshire Longleat House, Wiltshire – the long gallery is now called the Saloon Lyme Park, Cheshire Montacute House, Somerset Osterley Park, London Parham Park, West Sussex Penshurst Place, Kent Powis Castle, Welshpool, Wales Scone Palace, Perthshire Sudbury Hall, Derbyshire Syon House, London Temple Newsam House, Yorkshire – Jacobean long gallery, later modified and now called the picture gallery Welbeck Abbey Windsor Castle – Elizabethan long gallery; later converted by William IV, along with adjacent rooms, to house the Royal Library References Further reading The 'Long Gallery': Its Origins, Development, Use and Decoration by Rosalys Coope in Architectural History, Vol. 29 (1986), pp. 43–72, 74–84. Architectural elements Rooms Architecture in England
Long gallery
Technology,Engineering
601
5,283,559
https://en.wikipedia.org/wiki/Landscape%20assessment
Landscape assessment is a sub-category of environmental impact assessment (EIA) concerned with quality assessment of the landscape. Landscape quality is assessed either as part of a strategic planning process or in connection with a specific development which will affect the landscape; these methods are sub-divided into area-based assessments and proposal-driven assessments, respectively. The term 'landscape assessment' can be used to mean either visual assessment or character assessment. Since landscape assessments are intended to help with the conservation and enhancement of environmental goods, it is usually necessary to have a fully geographical landscape assessment as a stage in the process of EIA and landscape planning. During the initial phases of a project, such as site selection and design concept, the landscape architect begins to identify areas of opportunity and features that may impose constraints. The architect prepares alternative options in order to compare their assessments and identifies the proposals which allow for the least adverse effects on the landscape or views. A landscape professional works with a design team to review potential effects as the team develops a sustainable proposal. Upon developing a design proposal, the landscape professional will identify and describe the landscape and visual effects that may occur and suggest mitigation measures to be taken in order to reduce negative effects and maximize benefits, if any. Landscape and visual impact assessment (LVIA) This process, which operates within the larger framework of environmental impact assessment, strives to ensure that the effects of any change are taken into account in the decision-making process of a project. It is essential that any possible change or development to the landscape or views around a project be evaluated throughout the planning and design phase. Landscape assessment is thus sub-divided into two types: visual assessment and character assessment. Visual assessment This looks at how changes in the landscape could alter the nature and extent of visual effects and qualities relating to locations and proposals, and how they affect specific individuals or groups of people. Guidance on the preparation of these assessments is given in the 3rd edition of the Guidelines for Landscape and Visual Impact Assessment, published by Routledge on behalf of the Landscape Institute & Institute of Environmental Management, 2013. Character assessment This includes assessment of the effect of a development or proposal on the character of the landscape. Typically the character of the landscape, resulting from a combination of aspects such as geology, hydrology, soils, ecology, settlement patterns, cultural history, scenic characteristics, land use, etc., has previously been set out in a Landscape Character Assessment. The landscape assessment, as part of LVIA, is the formal examination of how this character may be affected, typically in order to inform development management decisions. This is necessary because landscape character can be affected without a noticeable visual effect. Area-based assessment This assessment can be completed at the regional scale as well as at the district, city, or catchment scale. The process is used to determine a baseline and also to guide landscape management. It consists of three stages: landscape description, landscape characterisation, and landscape evaluation. 
Landscape description The first step in completing an area-based assessment is to compile data in order to identify components of the landscape within a project area. Components of a landscape range from landform, geology, soil, vegetation cover, drainage patterns, built development, land uses, infrastructure, and heritage sites to cultural meaning. This step in the assessment is not site-specific but is instead a general description of the landscape. Landscape Character Assessment (or landscape characterisation) This step refers to the process of identification, mapping, and description of landscape character areas and/or types. Landscape character areas are unique, named geographical areas defined by a combination of individual landscape components (and possibly types) that make one area different from another and that are recognised by the community. Landscape character types are more generic in nature and represent areas of shared characteristics. The characterisation of a landscape should begin to define the boundaries of the area being assessed. For more information see Landscape Character Assessment. Landscape evaluation The last step in an area-based assessment is the evaluation process. This is a critical phase because landscape evaluations are the driving force behind landscape design, planning and management, and development management. Here, the assessment should identify important landscapes or natural features and assign rankings and priorities to features that require management. Drawback The evaluation process is subjective and dependent on the person completing the assessment and on the extent of community involvement or reference to suitable evaluation criteria. Evaluations can therefore sometimes be controversial, particularly when they may limit development ambitions. The assessment should therefore always be completed by professionals who are trained to make accurate judgments of a landscape. Proposal-driven assessment Landscape quality can be assessed in connection with a specific development which will affect the landscape. Such assessment requires that a professional submit a development proposal. This approach serves to identify the potential effects on landscape values brought about by a particular proposal. The proposal is analyzed to evaluate the effects it may have on the landscape or the character of the landscape, as well as its effect on the composition of available views. In a proposal-driven assessment, the area involved should include the site of the project as well as its immediate surroundings. The assessment should produce a detailed description of any physical changes to the landscape, together with a description and analysis of the effect these changes will have. This process should evaluate the importance of character, landscape, and visual amenity. Ultimately, this approach is effective only when measures that can mitigate the effects of a given development proposal are identified. See also Landscape architecture Collective landscape Environmental Good External links GLVIA 3rd ed. 
2013 UK Landscape Character Network (includes directory of Landscape Character assessments available in the UK) UK Countryside Agency information on Landscape Character Assessment An Approach to Landscape Character Assessment 2014 Post-Graduate International Workshop on Landscape Quality Assessment and Spatial Planning Exploring Significance Interfaces Landscape Landscape architecture Environmental impact assessment
Landscape assessment
Engineering
1,179
63,829,771
https://en.wikipedia.org/wiki/Ivan%20S.%20Sokolnikoff
Ivan Stephan Sokolnikoff (1901, Chernigov Province, Russian Empire – 16 April 1976, Santa Monica) was a Russian-American applied mathematician who specialized in elasticity theory and wrote several mathematical textbooks for engineers and physicists. Biography Born to a wealthy family in Tsarist Russia, Ivan Sokolnikoff was educated by private tutors and at Anders Classical Gymnasium in Kiev. During the Russian Revolution, as a Tsarist naval officer, he was wounded in combat off the Kuril Islands. With the victory of the Reds, he became a refugee in China. There he worked for a subsidiary of an American electrical firm until 1922, when he immigrated to the United States, arriving in Seattle. That same year he matriculated at the University of Idaho, where he graduated with an electrical engineering degree in 1926. In 1930 he received his doctorate in mathematics from the University of Wisconsin–Madison. His doctoral dissertation On a Solution of Laplace's Equation with an Application to the Torsion Problem for a Polygon with Reentrant Angles was written under the supervision of Herman William March. In June 1931 Sokolnikoff married Elizabeth Thatcher Stafford. Between 1931 and 1941 they wrote five significant papers together, as well as the classic textbook Higher Mathematics for Engineers and Physicists. He joined the mathematics department of the University of Wisconsin–Madison as an instructor in 1927 and was promoted to full professor in 1941; he remained a member of the Wisconsin mathematics faculty until 1944. During World War II Sokolnikoff lived in New York and Washington and did research on ship gun fire control for the National Defense Research Committee. While Sokolnikoff was on the East Coast, Elizabeth Stafford Sokolnikoff taught mathematics and remained in Madison, Wisconsin. Along with mathematics professors William LeRoy Hart (1892–1984) of the University of Minnesota and William Thomas Reid (1907–1977) of the University of Chicago, he organized a pre-meteorology program in which a number of academic institutions trained meteorologists for the U.S. armed forces. In 1946 he became a mathematics professor at the University of California, Los Angeles (UCLA), where he retired as professor emeritus in 1965. In 1947 he divorced his first wife and married Ruth Lawyer in December of that year. Sokolnikoff was twice a visiting professor at Brown University. He was also twice a Guggenheim Fellow. His Guggenheim Fellowship for the academic year 1952–1953 was spent partly at Royal Holloway College, University of London, and partly at the Free University of Brussels. His Guggenheim Fellowship for the academic year 1959–1960 was spent at the Swiss Federal Institute of Technology in Zürich. For the academic year 1962–1963 he held a Fulbright lecturing fellowship at Ankara's Middle East Technical University. Upon his death he was survived by his widow and a daughter from his second marriage.
Selected publications Articles Books with Elizabeth Stafford Sokolnikoff: Higher Mathematics for Engineers and Physicists, McGraw Hill, 1934; 2nd edition 1941. Advanced Calculus, McGraw Hill, 1939. The Mathematical Theory of Elasticity, McGraw Hill, 1946; 2nd edition 1956. Tensor Analysis: Theory and Applications to Geometry and Mechanics of Continua, Wiley, 1951; 2nd edition 1964. with Raymond Redheffer: Mathematics of Physics and Modern Engineering, McGraw Hill, 1958; 2nd edition 1966. References 20th-century American mathematicians 20th-century Russian mathematicians Applied mathematicians Russian emigrants to the United States University of Idaho alumni University of Wisconsin–Madison College of Letters and Science alumni University of Wisconsin–Madison faculty University of California, Los Angeles faculty 1901 births 1976 deaths
Ivan S. Sokolnikoff
Mathematics
720
52,072,835
https://en.wikipedia.org/wiki/Rocker%20Shovel%20Loader
A Rocker Shovel Loader, sometimes simply referred to as a rocker shovel or mucker, is a type of mechanical loader used in underground mining. A rocker shovel is usually powered by compressed air or, in some cases, electricity. It is commonly mounted on steel wheels designed to run on narrow-gauge rails, with some later models using metal or rubber-tyred road wheels. The operator, standing on a raised platform to one side of the machine, operates the controls: one lever drives the machine along the tracks, and another raises and lowers the bucket. Once the bucket has been filled by driving the loader forwards into the pile of material, the rocker mechanism throws the contents over the top of the machine and into a wagon behind. Once full, the loaded wagon can be taken away and replaced with an empty one to allow loading to continue. On 28 May 1937, Edwin Burton Royle applied for a patent as inventor of the "loading machine"; US Patent No. 2,134,582 was issued on 25 October 1938 and assigned to the Eastern Iron Metals Company (later known as EIMCO). In 2000, the American Society of Mechanical Engineers added the EIMCO 12B Rocker Shovel Loader of 1938 to its List of Historic Mechanical Engineering Landmarks, as landmark number 212 of a total of 259 (as of 2015). In June 2012, an EIMCO 12B rocker shovel was featured in an episode of the American reality television series Auction Hunters, filmed in Littleton, Colorado; it was sold to a gold miner for $3,600. References External links Video of EIMCO 12B being demonstrated at Lea Bailey Light Railway Mining equipment
Rocker Shovel Loader
Engineering
343
44,153,629
https://en.wikipedia.org/wiki/Gun%20tunnel
A gun tunnel is an intermediate-altitude hypersonic wind tunnel that can be configured to produce hypersonic flows corresponding to altitudes of roughly 30 to 40 km. It uses a piston to compress the test gas approximately isentropically. The hypersonic facility at the Indian Institute of Science (IISc), Bangalore, India has a high-enthalpy gun tunnel, equipped with Schlieren imaging, that can deliver up to 8 megajoules of energy. Operating a piston-driven tunnel can be difficult because of reflected shocks; at the IISc facility an aluminium diaphragm is used when shocks are to be produced, and a paper diaphragm when shocks are to be avoided and the flow is to pass into the hypersonic test chamber. The driver pressure is significantly higher than ambient, on the order of 30 atmospheres. References Wind tunnels
Gun tunnel
Chemistry
138
1,666,646
https://en.wikipedia.org/wiki/Cyanuric%20acid
Cyanuric acid or 1,3,5-triazine-2,4,6-triol is a chemical compound with the formula (CNOH)3. Like many industrially useful chemicals, this triazine has many synonyms. This white, odorless solid finds use as a precursor or a component of bleaches, disinfectants, and herbicides. In 1997, worldwide production was 160,000 tonnes. Properties and synthesis Properties Cyanuric acid can be viewed as the cyclic trimer of the elusive chemical species cyanic acid, HOCN. The ring can readily interconvert between several structures via lactam–lactim tautomerism. Although the triol tautomer may have aromatic character, the keto form predominates in solution. The hydroxyl (-OH) groups assume phenolic character. Deprotonation with base affords a series of cyanurate salts: [C(O)NH]3 ⇌ [C(O)NH]2[C(O)N]− + H+ (pKa = 6.88) [C(O)NH]2[C(O)N]− ⇌ [C(O)NH][C(O)N]22− + H+ (pKa = 11.40) [C(O)NH][C(O)N]22− ⇌ [C(O)N]33− + H+ (pKa = 13.5) Cyanuric acid is noted for its strong interaction with melamine, forming insoluble melamine cyanurate. This interaction locks the cyanuric acid into the tri-keto tautomer. Melamine cyanurate is cited as an example of supramolecular chemistry. Synthesis Cyanuric acid (CYA) was first synthesized by Friedrich Wöhler in 1829 by the thermal decomposition of urea and uric acid. The current industrial route to CYA entails the thermal decomposition of urea, with release of ammonia. The conversion commences at approximately 175 °C: 3 H2N-CO-NH2 → [C(O)NH]3 + 3 NH3 CYA crystallizes from water as the dihydrate. Cyanuric acid can also be produced by hydrolysis of crude or waste melamine followed by crystallization. Acid waste streams from plants producing these materials contain cyanuric acid and, on occasion, dissolved amino-substituted triazines, namely ammeline, ammelide, and melamine. In one method, an ammonium sulfate solution is heated to the boil and treated with a stoichiometric amount of melamine, by which means the cyanuric acid present precipitates as the melamine–cyanuric acid complex. The various waste streams containing cyanuric acid and amino-substituted triazines may be combined for disposal, and during upset conditions undissolved cyanuric acid may be present in the waste streams. Intermediates and impurities Intermediates in the dehydration include isocyanic acid, biuret, and triuret: H2N-CO-NH2 → HNCO + NH3 H2N-CO-NH2 + HNCO → H2N-CO-NH-CO-NH2 H2N-CO-NH-CO-NH2 + HNCO → H2N-CO-NH-CO-NH-CO-NH2 As the temperature exceeds 190 °C, other reactions begin to dominate the process. Ammeline first appears below 225 °C; it is suspected also to arise from the decomposition of biuret, and it is produced at a lower rate than CYA or ammelide. 3 H2N-CO-NH-CO-NH2 → [C(O)]2(CNH2)(NH)2N + 2 NH3 + H2O Melamine, [C(NH2)N]3, forms between 325 and 350 °C, and only in very small quantities. N-substituted isocyanurates from isocyanates N-substituted isocyanurates can be synthesised by the trimerisation of isocyanates. This is utilised industrially in the formation of polyisocyanurates. Applications Cyanuric acid is used as a chlorine stabilizer/buffer in swimming pools. It binds to free chlorine and releases it slowly, extending the time needed to deplete each dose of sanitizer. A chemical equilibrium exists between the free acid together with free chlorine and the chlorinated form of the acid. Precursors to chlorinated cyanurates Cyanuric acid is mainly used as a precursor to N-chlorinated cyanurates, which are used to disinfect water.
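As a purely illustrative aside on the acid–base behavior described under Properties, the sketch below computes the mole fraction of each protonation state from the three pKa values quoted there (standard polyprotic-acid algebra; all names are invented for the example):

```python
# Speciation of cyanuric acid (written H3CY) from its three pKa values.
PKAS = [6.88, 11.40, 13.5]

def speciation(pH):
    """Mole fractions of H3CY, H2CY-, HCY2-, CY3- at the given pH."""
    h = 10.0 ** (-pH)
    k1, k2, k3 = (10.0 ** (-p) for p in PKAS)
    # Populations relative to the fully protonated acid:
    # 1 : K1/h : K1*K2/h^2 : K1*K2*K3/h^3
    terms = [1.0, k1 / h, k1 * k2 / h ** 2, k1 * k2 * k3 / h ** 3]
    total = sum(terms)
    return [t / total for t in terms]

for pH in (5.0, 7.5, 12.0):
    fracs = ", ".join(f"{x:.2f}" for x in speciation(pH))
    print(f"pH {pH}: [H3CY, H2CY-, HCY2-, CY3-] = {fracs}")
# At a typical pool pH of about 7.5 the singly deprotonated anion dominates.
```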
The dichloro derivative is prepared by direct chlorination: [C(O)NH]3 + 2 Cl2 + 2 NaOH → [C(O)NCl]2[C(O)NH] + 2 NaCl + 2 H2O This species is typically converted to its sodium salt, sodium dichloro-s-triazinetrione. Further chlorination gives trichloroisocyanuric acid, [C(O)NCl]3. These N-chloro compounds serve as disinfectants and algicides for swimming pool water. The aforementioned equilibrium stabilizes the chlorine in the pool and prevents it from being quickly consumed by sunlight. Precursors to crosslinking agents Because of its trifunctionality, CYA is a precursor to crosslinking agents, especially for polyurethane resins and polyisocyanurate thermoset plastics. The experimental antineoplastic drug teroxirone (triglycidyl isocyanurate) is formed by reacting cyanuric acid with 3 equivalents of epichlorohydrin. It works by cross-linking DNA. Analysis Testing for cyanuric acid concentration is commonly done with a turbidometric test, which uses a reagent, melamine, to precipitate the cyanuric acid. The relative turbidity of the reacted sample quantifies the CYA concentration. First referenced in 1957, the test works because melamine combines with the cyanuric acid in the water to form a fine, white precipitate of the insoluble complex melamine cyanurate that clouds the water in proportion to the amount of cyanuric acid present. More recently, a sensitive method has been developed for the analysis of cyanuric acid in urine. Animal feed The FDA permits a certain amount of cyanuric acid to be present in some non-protein nitrogen (NPN) additives used in animal feed and drinking water. Cyanuric acid has been used as an NPN source. For example, Archer Daniels Midland manufactures an NPN supplement for cattle, which contains biuret, triuret, cyanuric acid and urea. 2007 pet food recalls Cyanuric acid is implicated in the 2007 pet food recalls, the contamination and wide recall of many brands of cat and dog foods beginning in March 2007. Research has found evidence that cyanuric acid, a constituent of urine, together with melamine forms poorly soluble crystals which can cause kidney failure (see the Analysis section above). Safety Cyanuric acid is classified as "essentially nontoxic". The oral median lethal dose (LD50) is 7700 mg/kg in rats. However, when cyanuric acid is present together with melamine (which by itself is another low-toxicity substance), it forms an insoluble and rather nephrotoxic complex, as evidenced in dogs and cats during the 2007 pet food contamination and in children during the 2008 Chinese milk scandal. Natural occurrence An impure copper salt of the acid, with the formula Cu(C3N3O3H2)2(NH3)2, is currently the only known isocyanurate mineral, called joanneumite. It was found in a guano deposit in Chile and is very rare. References External links Oregon Veterinary Medical Association (OVMA) Pet Food Contamination Page – News and developments updated regularly Lactims Triazines Imides Isocyanuric acids
Cyanuric acid
Chemistry
1,731
16,213,903
https://en.wikipedia.org/wiki/Prestik
Prestik is a rubber-like temporary adhesive that is marketed in South Africa and manufactured by Bostik. It is water resistant and can be used at temperatures from −30 °C to 100 °C. It can be used to secure things in place, such as pieces of paper on walls or fridge doors. It is similar to Blu Tack. External links Bostik Prestik data sheet Manufacturer's Website Adhesives
Prestik
Physics
93
2,536,864
https://en.wikipedia.org/wiki/Dual%20graph
In the mathematical discipline of graph theory, the dual graph of a planar graph G is a graph that has a vertex for each face of G. The dual graph has an edge for each pair of faces in G that are separated from each other by an edge, and a self-loop when the same face appears on both sides of an edge. Thus, each edge e of G has a corresponding dual edge, whose endpoints are the dual vertices corresponding to the faces on either side of e. The definition of the dual depends on the choice of embedding of the graph G, so it is a property of plane graphs (graphs that are already embedded in the plane) rather than planar graphs (graphs that may be embedded but for which the embedding is not yet known). For planar graphs generally, there may be multiple dual graphs, depending on the choice of planar embedding of the graph. Historically, the first form of graph duality to be recognized was the association of the Platonic solids into pairs of dual polyhedra. Graph duality is a topological generalization of the geometric concepts of dual polyhedra and dual tessellations, and is in turn generalized combinatorially by the concept of a dual matroid. Variations of planar graph duality include a version of duality for directed graphs, and duality for graphs embedded onto non-planar two-dimensional surfaces. These notions of dual graphs should not be confused with a different notion, the edge-to-vertex dual or line graph of a graph. The term dual is used because the property of being a dual graph is symmetric, meaning that if H is a dual of a connected graph G, then G is a dual of H. When discussing the dual of a graph G, the graph G itself may be referred to as the "primal graph". Many other graph properties and structures may be translated into other natural properties and structures of the dual. For instance, cycles are dual to cuts, spanning trees are dual to the complements of spanning trees, and simple graphs (without parallel edges or self-loops) are dual to 3-edge-connected graphs. Graph duality can help explain the structure of mazes and of drainage basins. Dual graphs have also been applied in computer vision, computational geometry, mesh generation, and the design of integrated circuits. Examples Cycles and dipoles The unique planar embedding of a cycle graph divides the plane into only two regions, the inside and outside of the cycle, by the Jordan curve theorem. However, in an n-cycle, these two regions are separated from each other by n different edges. Therefore, the dual graph of the n-cycle is a multigraph with two vertices (dual to the regions), connected to each other by n dual edges. Such a graph is called a multiple edge, linkage, or sometimes a dipole graph. Conversely, the dual to an n-edge dipole graph is an n-cycle. Dual polyhedra According to Steinitz's theorem, every polyhedral graph (the graph formed by the vertices and edges of a three-dimensional convex polyhedron) must be planar and 3-vertex-connected, and every 3-vertex-connected planar graph comes from a convex polyhedron in this way. Every three-dimensional convex polyhedron has a dual polyhedron; the dual polyhedron has a vertex for every face of the original polyhedron, with two dual vertices adjacent whenever the corresponding two faces share an edge. Whenever two polyhedra are dual, their graphs are also dual. For instance the Platonic solids come in dual pairs, with the octahedron dual to the cube, the dodecahedron dual to the icosahedron, and the tetrahedron dual to itself.
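The face-based definition above translates directly into a short algorithm. The following self-contained sketch is illustrative only: it assumes the plane graph is supplied as a rotation system (the cyclic order of neighbors around each vertex in the embedding), and all names are invented for the example.

```python
# Build the dual of a plane graph given as a rotation system: for each vertex,
# the cyclic order of its neighbors in the embedding.  Faces are traced with
# the standard next-half-edge rule; each undirected primal edge then yields
# one dual edge joining the faces on its two sides (a self-loop when both
# sides are the same face).
def dual_graph(rotation):
    # next_at[v][u] = the neighbor that follows u in the rotation around v
    next_at = {
        v: {u: nbrs[(i + 1) % len(nbrs)] for i, u in enumerate(nbrs)}
        for v, nbrs in rotation.items()
    }
    face_of = {}  # half-edge (u, v) -> index of the face on one side of it
    faces = 0
    for u in rotation:
        for v in rotation[u]:
            if (u, v) in face_of:
                continue
            a, b = u, v
            while (a, b) not in face_of:   # walk around one face
                face_of[(a, b)] = faces
                a, b = b, next_at[b][a]
            faces += 1
    dual_edges = [
        (face_of[(u, v)], face_of[(v, u)])
        for u in rotation for v in rotation[u]
        if u < v  # assumes distinct, comparable vertex labels
    ]
    return faces, dual_edges

# A triangle: the dual is two vertices (inner and outer face) joined by
# three parallel edges, i.e. the 3-edge dipole described above.
print(dual_graph({1: [2, 3], 2: [3, 1], 3: [1, 2]}))
# (2, [(0, 1), (1, 0), (0, 1)])
```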
Polyhedron duality can also be extended to duality of higher dimensional polytopes, but this extension of geometric duality does not have clear connections to graph-theoretic duality. Self-dual graphs A plane graph is said to be self-dual if it is isomorphic to its dual graph. The wheel graphs provide an infinite family of self-dual graphs coming from self-dual polyhedra (the pyramids). However, there also exist self-dual graphs that are not polyhedral, such as the one shown. Two operations, adhesion and explosion, can be used to construct a self-dual graph containing a given planar graph; for instance, the self-dual graph shown can be constructed as the adhesion of a tetrahedron with its dual. It follows from Euler's formula that every self-dual graph with $n$ vertices has exactly $2n - 2$ edges. Every simple self-dual planar graph contains at least four vertices of degree three, and every self-dual embedding has at least four triangular faces. Properties Many natural and important concepts in graph theory correspond to other equally natural but different concepts in the dual graph. Because the dual of the dual of a connected plane graph is isomorphic to the primal graph, each of these pairings is bidirectional: if concept X in a planar graph corresponds to concept Y in the dual graph, then concept Y in a planar graph corresponds to concept X in the dual. Simple graphs versus multigraphs The dual of a simple graph need not be simple: it may have self-loops (an edge with both endpoints at the same vertex) or multiple edges connecting the same two vertices, as was already evident in the example of dipole multigraphs being dual to cycle graphs. As a special case of the cut-cycle duality discussed below, the bridges of a planar graph G are in one-to-one correspondence with the self-loops of the dual graph. For the same reason, a pair of parallel edges in a dual multigraph (that is, a length-2 cycle) corresponds to a 2-edge cutset in the primal graph (a pair of edges whose deletion disconnects the graph). Therefore, a planar graph is simple if and only if its dual has no 1- or 2-edge cutsets; that is, if it is 3-edge-connected. The simple planar graphs whose duals are simple are exactly the 3-edge-connected simple planar graphs. This class of graphs includes, but is not the same as, the class of 3-vertex-connected simple planar graphs. For instance, the self-dual graph shown in the figure is 3-edge-connected (and therefore its dual is simple) but is not 3-vertex-connected. Uniqueness Because the dual graph depends on a particular embedding, the dual graph of a planar graph is not unique, in the sense that the same planar graph can have non-isomorphic dual graphs. In the picture, the blue graphs are isomorphic but their dual red graphs are not. The upper red dual has a vertex with degree 6 (corresponding to the outer face of the blue graph) while in the lower red graph all degrees are less than 6. Hassler Whitney showed that if the graph is 3-connected then the embedding, and thus the dual graph, is unique. By Steinitz's theorem, these graphs are exactly the polyhedral graphs, the graphs of convex polyhedra. A planar graph is 3-vertex-connected if and only if its dual graph is 3-vertex-connected. Moreover, a planar biconnected graph has a unique embedding, and therefore also a unique dual, if and only if it is a subdivision of a 3-vertex-connected planar graph (a graph formed from a 3-vertex-connected planar graph by replacing some of its edges by paths).
For some planar graphs that are not 3-vertex-connected, such as the complete bipartite graph K2,4, the embedding is not unique, but all embeddings are isomorphic. When this happens, correspondingly, all dual graphs are isomorphic. Because different embeddings may lead to different dual graphs, testing whether one graph is a dual of another (without already knowing their embeddings) is a nontrivial algorithmic problem. For biconnected graphs, it can be solved in polynomial time by using the SPQR trees of the graphs to construct a canonical form for the equivalence relation of having a shared mutual dual. For instance, the two red graphs in the illustration are equivalent according to this relation. However, for planar graphs that are not biconnected, this relation is not an equivalence relation and the problem of testing mutual duality is NP-complete. Cuts and cycles A cutset in an arbitrary connected graph is a subset of edges defined from a partition of the vertices into two subsets, by including an edge in the subset when it has one endpoint on each side of the partition. Removing the edges of a cutset necessarily splits the graph into at least two connected components. A minimal cutset (also called a bond) is a cutset with the property that no proper subset of it is itself a cutset. A minimal cutset of a connected graph necessarily separates its graph into exactly two components, and consists of the set of edges that have one endpoint in each component. A simple cycle is a connected subgraph in which each vertex of the cycle is incident to exactly two edges of the cycle. In a connected planar graph G, every simple cycle of G corresponds to a minimal cutset in the dual of G, and vice versa. This can be seen as a form of the Jordan curve theorem: each simple cycle separates the faces of G into the faces in the interior of the cycle and the faces of the exterior of the cycle, and the duals of the cycle edges are exactly the edges that cross from the interior to the exterior. The girth of any planar graph (the size of its smallest cycle) equals the edge connectivity of its dual graph (the size of its smallest cutset). This duality extends from individual cutsets and cycles to vector spaces defined from them. The cycle space of a graph is defined as the family of all subgraphs that have even degree at each vertex; it can be viewed as a vector space over the two-element finite field, with the symmetric difference of two sets of edges acting as the vector addition operation in the vector space. Similarly, the cut space of a graph is defined as the family of all cutsets, with vector addition defined in the same way. Then the cycle space of any planar graph and the cut space of its dual graph are isomorphic as vector spaces. Thus, the rank of a planar graph (the dimension of its cut space) equals the cyclomatic number of its dual (the dimension of its cycle space) and vice versa. A cycle basis of a graph is a set of simple cycles that form a basis of the cycle space (every even-degree subgraph can be formed in exactly one way as a symmetric difference of some of these cycles). For edge-weighted planar graphs (with sufficiently general weights that no two cycles have the same weight) the minimum-weight cycle basis of the graph is dual to the Gomory–Hu tree of the dual graph, a collection of nested cuts that together include a minimum cut separating each pair of vertices in the graph.
Each cycle in the minimum-weight cycle basis has a set of edges that are dual to the edges of one of the cuts in the Gomory–Hu tree. When cycle weights may be tied, the minimum-weight cycle basis may not be unique, but in this case it is still true that the Gomory–Hu tree of the dual graph corresponds to one of the minimum-weight cycle bases of the graph. In directed planar graphs, simple directed cycles are dual to directed cuts (partitions of the vertices into two subsets such that all edges go in one direction, from one subset to the other). Strongly oriented planar graphs (graphs whose underlying undirected graph is connected, and in which every edge belongs to a cycle) are dual to directed acyclic graphs in which no edge belongs to a cycle. To put this another way, the strong orientations of a connected planar graph (assignments of directions to the edges of the graph that result in a strongly connected graph) are dual to acyclic orientations (assignments of directions that produce a directed acyclic graph). In the same way, dijoins (sets of edges that include an edge from each directed cut) are dual to feedback arc sets (sets of edges that include an edge from each cycle). Spanning trees A spanning tree may be defined as a set of edges that, together with all of the vertices of the graph, forms a connected and acyclic subgraph. But, by cut-cycle duality, if a set S of edges in a planar graph G is acyclic (has no cycles), then the set of edges dual to S has no cuts, from which it follows that the complementary set of dual edges (the duals of the edges that are not in S) forms a connected subgraph. Symmetrically, if S is connected, then the edges dual to the complement of S form an acyclic subgraph. Therefore, when S has both properties (it is connected and acyclic) the same is true for the complementary set in the dual graph. That is, each spanning tree of G is complementary to a spanning tree of the dual graph, and vice versa. Thus, the edges of any planar graph and its dual can together be partitioned (in multiple different ways) into two spanning trees, one in the primal and one in the dual, that together extend to all the vertices and faces of the graph but never cross each other. In particular, the minimum spanning tree of G is complementary to the maximum spanning tree of the dual graph. However, this does not work for shortest-path trees, even approximately: there exist planar graphs such that, for every pair of a spanning tree in the graph and a complementary spanning tree in the dual graph, at least one of the two trees has distances that are significantly longer than the distances in its graph. An example of this type of decomposition into interdigitating trees can be seen in some simple types of mazes, with a single entrance and no disconnected components of its walls. In this case both the maze walls and the space between the walls take the form of a mathematical tree. If the free space of the maze is partitioned into simple cells (such as the squares of a grid) then this system of cells can be viewed as an embedding of a planar graph, in which the tree structure of the walls forms a spanning tree of the graph and the tree structure of the free space forms a spanning tree of the dual graph. Similar pairs of interdigitating trees can also be seen in the tree-shaped pattern of streams and rivers within a drainage basin and the dual tree-shaped pattern of ridgelines separating the streams.
This partition of the edges and their duals into two trees leads to a simple proof of Euler's formula $V - E + F = 2$ for planar graphs with $V$ vertices, $E$ edges, and $F$ faces. Any spanning tree and its complementary dual spanning tree partition the edges into two subsets of $V - 1$ and $F - 1$ edges respectively, and adding the sizes of the two subsets gives the equation $E = (V - 1) + (F - 1)$, which may be rearranged to form Euler's formula. According to Duncan Sommerville, this proof of Euler's formula is due to K. G. C. von Staudt's Geometrie der Lage (Nürnberg, 1847). In nonplanar surface embeddings the set of dual edges complementary to a spanning tree is not a dual spanning tree. Instead this set of edges is the union of a dual spanning tree with a small set of extra edges whose number is determined by the genus of the surface on which the graph is embedded. The extra edges, in combination with paths in the spanning trees, can be used to generate the fundamental group of the surface. Additional properties Any counting formula involving vertices and faces that is valid for all planar graphs may be transformed by planar duality into an equivalent formula in which the roles of the vertices and faces have been swapped. Euler's formula, which is self-dual, is one example. Another, given by Harary, involves the handshaking lemma, according to which the sum of the degrees of the vertices of any graph equals twice the number of edges. In its dual form, this lemma states that in a plane graph, the sum of the numbers of sides of the faces of the graph equals twice the number of edges. The medial graph of a plane graph is isomorphic to the medial graph of its dual. Two planar graphs can have isomorphic medial graphs only if they are dual to each other. A planar graph with four or more vertices is maximal (no more edges can be added while preserving planarity) if and only if its dual graph is both 3-vertex-connected and 3-regular. A connected planar graph is Eulerian (has even degree at every vertex) if and only if its dual graph is bipartite. A Hamiltonian cycle in a planar graph G corresponds to a partition of the vertices of the dual graph into two subsets (the interior and exterior of the cycle) whose induced subgraphs are both trees. In particular, Barnette's conjecture on the Hamiltonicity of cubic bipartite polyhedral graphs is equivalent to the conjecture that every Eulerian maximal planar graph can be partitioned into two induced trees. If a planar graph G has Tutte polynomial $T_G(x, y)$, then the Tutte polynomial of its dual graph is obtained by swapping $x$ and $y$. For this reason, if some particular value of the Tutte polynomial provides information about certain types of structures in G, then swapping the arguments to the Tutte polynomial will give the corresponding information for the dual structures. For instance, the number of strong orientations is $T_G(0, 2)$ and the number of acyclic orientations is $T_G(2, 0)$. For bridgeless planar graphs, graph colorings with $k$ colors correspond to nowhere-zero flows modulo $k$ on the dual graph. For instance, the four color theorem (the existence of a 4-coloring for every planar graph) can be expressed equivalently as stating that the dual of every bridgeless planar graph has a nowhere-zero 4-flow. The number of $k$-colorings is counted (up to an easily computed factor) by the Tutte polynomial value $T_G(1 - k, 0)$ and dually the number of nowhere-zero $k$-flows is counted by $T_G(0, 1 - k)$.
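Returning to the spanning-tree argument above, the face count produced by the dual_graph sketch given under Examples lets Euler's formula be checked numerically. The rotation system below describes K4 drawn with one vertex inside the triangle formed by the other three; it is an illustrative example, not part of the original article.

```python
# Numerical check of Euler's formula V - E + F = 2, using dual_graph() from
# the earlier sketch.  K4, embedded with vertex 4 inside triangle 1-2-3.
rotation_k4 = {
    1: [3, 4, 2],
    2: [1, 4, 3],
    3: [2, 4, 1],
    4: [3, 2, 1],
}
V = len(rotation_k4)
E = sum(len(nbrs) for nbrs in rotation_k4.values()) // 2
F, dual_edges = dual_graph(rotation_k4)
print(V, E, F)                 # 4 6 4
assert V - E + F == 2          # Euler's formula holds
```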
An st-planar graph is a connected planar graph together with a bipolar orientation of that graph, an orientation that makes it acyclic with a single source and a single sink, both of which are required to be on the same face as each other. Such a graph may be made into a strongly connected graph by adding one more edge, from the sink back to the source, through the outer face. The dual of this augmented planar graph is itself the augmentation of another st-planar graph. Variations Directed graphs In a directed plane graph, the dual graph may be made directed as well, by orienting each dual edge by a 90° clockwise turn from the corresponding primal edge. Strictly speaking, this construction is not a duality of directed planar graphs, because starting from a graph G and taking the dual twice does not return to G itself, but instead constructs a graph isomorphic to the transpose graph of G, the graph formed from G by reversing all of its edges. Taking the dual four times returns to the original graph. Weak dual The weak dual of a plane graph is the subgraph of the dual graph whose vertices correspond to the bounded faces of the primal graph. A plane graph is outerplanar if and only if its weak dual is a forest. For any plane graph G, let G+ be the plane multigraph formed by adding a single new vertex v in the unbounded face of G, and connecting v to each vertex of the outer face (multiple times, if a vertex appears multiple times on the boundary of the outer face); then, G is the weak dual of the (plane) dual of G+. Infinite graphs and tessellations The concept of duality applies as well to infinite graphs embedded in the plane as it does to finite graphs. However, care is needed to avoid topological complications such as points of the plane that are neither part of an open region disjoint from the graph nor part of an edge or vertex of the graph. When all faces are bounded regions surrounded by a cycle of the graph, an infinite planar graph embedding can also be viewed as a tessellation of the plane, a covering of the plane by closed disks (the tiles of the tessellation) whose interiors (the faces of the embedding) are disjoint open disks. Planar duality gives rise to the notion of a dual tessellation, a tessellation formed by placing a vertex at the center of each tile and connecting the centers of adjacent tiles. The concept of a dual tessellation can also be applied to partitions of the plane into finitely many regions. It is closely related to but not quite the same as planar graph duality in this case. For instance, the Voronoi diagram of a finite set of point sites is a partition of the plane into polygons within which one site is closer than any other. The sites on the convex hull of the input give rise to unbounded Voronoi polygons, two of whose sides are infinite rays rather than finite line segments. The dual of this diagram is the Delaunay triangulation of the input, a planar graph that connects two sites by an edge whenever there exists a circle that contains those two sites and no other sites. The edges of the convex hull of the input are also edges of the Delaunay triangulation, but they correspond to rays rather than line segments of the Voronoi diagram.
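The Voronoi–Delaunay duality just described is easy to observe computationally. A minimal sketch, assuming SciPy is available: each row of Voronoi.ridge_points names the two input sites separated by one Voronoi ridge (edge or infinite ray), and for sites in general position those pairs coincide exactly with the edges of the Delaunay triangulation.

```python
# Check that Voronoi ridges and Delaunay edges pair up, site for site.
from itertools import combinations
import numpy as np
from scipy.spatial import Delaunay, Voronoi

points = np.random.default_rng(0).random((20, 2))

# Delaunay edges, read off the triangles of the triangulation.
delaunay_edges = {
    frozenset(pair)
    for simplex in Delaunay(points).simplices
    for pair in combinations(simplex, 2)
}

# Dual pairs of sites: one per Voronoi ridge.
voronoi_pairs = {frozenset(pair) for pair in Voronoi(points).ridge_points}

print(delaunay_edges == voronoi_pairs)   # True for points in general position
```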
This duality between Voronoi diagrams and Delaunay triangulations can be turned into a duality between finite graphs in either of two ways: by adding an artificial vertex at infinity to the Voronoi diagram, to serve as the other endpoint for all of its rays, or by treating the bounded part of the Voronoi diagram as the weak dual of the Delaunay triangulation. Although the Voronoi diagram and Delaunay triangulation are dual, their embedding in the plane may have additional crossings beyond the crossings of dual pairs of edges. Each vertex of the Delaunay triangulation is positioned within its corresponding face of the Voronoi diagram. Each vertex of the Voronoi diagram is positioned at the circumcenter of the corresponding triangle of the Delaunay triangulation, but this point may lie outside its triangle. Nonplanar embeddings The concept of duality can be extended to graph embeddings on two-dimensional manifolds other than the plane. The definition is the same: there is a dual vertex for each connected component of the complement of the graph in the manifold, and a dual edge for each graph edge connecting the two dual vertices on either side of the edge. In most applications of this concept, it is restricted to embeddings with the property that each face is a topological disk; this constraint generalizes the requirement for planar graphs that the graph be connected. With this constraint, the dual of any surface-embedded graph has a natural embedding on the same surface, such that the dual of the dual is isomorphic to, and isomorphically embedded in, the original graph. For instance, the complete graph K7 is a toroidal graph: it is not planar but can be embedded in a torus, with each face of the embedding being a triangle. This embedding has the Heawood graph as its dual graph. The same concept works equally well for non-orientable surfaces. For instance, K6 can be embedded in the projective plane with ten triangular faces as the hemi-icosahedron, whose dual is the Petersen graph embedded as the hemi-dodecahedron. Even planar graphs may have nonplanar embeddings, with duals derived from those embeddings that differ from their planar duals. For instance, the four Petrie polygons of a cube (hexagons formed by removing two opposite vertices of the cube) form the hexagonal faces of an embedding of the cube in a torus. The dual graph of this embedding has four vertices forming a complete graph with doubled edges. In the torus embedding of this dual graph, the six edges incident to each vertex, in cyclic order around that vertex, cycle twice through the three other vertices. In contrast to the situation in the plane, this embedding of the cube and its dual is not unique; the cube graph has several other torus embeddings, with different duals. Many of the equivalences between primal and dual graph properties of planar graphs fail to generalize to nonplanar duals, or require additional care in their generalization. Another operation on surface-embedded graphs is the Petrie dual, which uses the Petrie polygons of the embedding as the faces of a new embedding. Unlike the usual dual graph, it has the same vertices as the original graph, but generally lies on a different surface. Surface duality and Petrie duality are two of the six Wilson operations, and together generate the group of these operations. Matroids and algebraic duals An algebraic dual of a connected graph G is a graph G′ such that G and G′ have the same set of edges, any cycle of G is a cut of G′, and any cut of G is a cycle of G′.
Every planar graph has an algebraic dual, which is in general not unique (any dual defined by a plane embedding will do). The converse is actually true, as settled by Hassler Whitney in Whitney's planarity criterion: a connected graph G is planar if and only if it has an algebraic dual. The same fact can be expressed in the theory of matroids. If M is the graphic matroid of a graph G, then a graph G′ is an algebraic dual of G if and only if the graphic matroid of G′ is the dual matroid of M. Then Whitney's planarity criterion can be rephrased as stating that the dual matroid of a graphic matroid M is itself a graphic matroid if and only if the underlying graph G of M is planar. If G is planar, the dual matroid is the graphic matroid of the dual graph of G. In particular, all dual graphs, for all the different planar embeddings of G, have isomorphic graphic matroids. For nonplanar surface embeddings, unlike planar duals, the dual graph is not generally an algebraic dual of the primal graph. And for a non-planar graph G, the dual matroid of the graphic matroid of G is not itself a graphic matroid. However, it is still a matroid whose circuits correspond to the cuts in G, and in this sense can be thought of as a combinatorially generalized algebraic dual of G. The duality between Eulerian and bipartite planar graphs can be extended to binary matroids (which include the graphic matroids derived from planar graphs): a binary matroid is Eulerian if and only if its dual matroid is bipartite. The two dual concepts of girth and edge connectivity are unified in matroid theory by matroid girth: the girth of the graphic matroid of a planar graph is the same as the graph's girth, and the girth of the dual matroid (the graphic matroid of the dual graph) is the edge connectivity of the graph. Applications Along with its use in graph theory, the duality of planar graphs has applications in several other areas of mathematical and computational study. In geographic information systems, flow networks (such as the networks showing how water flows in a system of streams and rivers) are dual to cellular networks describing drainage divides. This duality can be explained by modeling the flow network as a spanning tree on a grid graph of an appropriate scale, and modeling the drainage divide as the complementary spanning tree of ridgelines on the dual grid graph. In computer vision, digital images are partitioned into small square pixels, each of which has its own color. The dual graph of this subdivision into squares has a vertex per pixel and an edge between pairs of pixels that share an edge; it is useful for applications including clustering of pixels into connected regions of similar colors. In computational geometry, the duality between Voronoi diagrams and Delaunay triangulations implies that any algorithm for constructing a Voronoi diagram can be immediately converted into an algorithm for the Delaunay triangulation, and vice versa. The same duality can also be used in finite element mesh generation. Lloyd's algorithm, a method based on Voronoi diagrams for moving a set of points on a surface to more evenly spaced positions, is commonly used as a way to smooth a finite element mesh described by the dual Delaunay triangulation. This method improves the mesh by making its triangles more uniformly sized and shaped. In the synthesis of CMOS circuits, the function to be synthesized is represented as a formula in Boolean algebra. Then this formula is translated into two series–parallel multigraphs.
These graphs can be interpreted as circuit diagrams in which the edges of the graphs represent transistors, gated by the inputs to the function. One circuit computes the function itself, and the other computes its complement. One of the two circuits is derived by converting the conjunctions and disjunctions of the formula into series and parallel compositions of graphs, respectively. The other circuit reverses this construction, converting the conjunctions and disjunctions of the formula into parallel and series compositions of graphs. These two circuits, augmented by an additional edge connecting the input of each circuit to its output, are planar dual graphs. History The duality of convex polyhedra was recognized by Johannes Kepler in his 1619 book Harmonices Mundi. Recognizable planar dual graphs, outside the context of polyhedra, appeared as early as 1725, in Pierre Varignon's posthumously published work, Nouvelle Méchanique ou Statique. This was even before Leonhard Euler's 1736 work on the Seven Bridges of Königsberg, which is often taken to be the first work on graph theory. Varignon analyzed the forces on static systems of struts by drawing a graph dual to the struts, with edge lengths proportional to the forces on the struts; this dual graph is a type of Cremona diagram. In connection with the four color theorem, the dual graphs of maps (subdivisions of the plane into regions) were mentioned by Alfred Kempe in 1879, and were extended to maps on non-planar surfaces in 1891. Duality as an operation on abstract planar graphs was introduced by Hassler Whitney in 1931. Notes External links Algebraic graph theory Topological graph theory Planar graphs Graph Graph operations
Dual graph
Mathematics
6,402
476,993
https://en.wikipedia.org/wiki/Thermoelectric%20materials
Thermoelectric materials show the thermoelectric effect in a strong or convenient form. The thermoelectric effect refers to phenomena by which either a temperature difference creates an electric potential or an electric current creates a temperature difference. These phenomena are known more specifically as the Seebeck effect (creating a voltage from a temperature difference), the Peltier effect (driving heat flow with an electric current), and the Thomson effect (reversible heating or cooling within a conductor when there is both an electric current and a temperature gradient). While all materials have a nonzero thermoelectric effect, in most materials it is too small to be useful. However, low-cost materials that have a sufficiently strong thermoelectric effect (and other required properties) are also considered for applications including power generation and refrigeration. The most commonly used thermoelectric material is based on bismuth telluride (Bi2Te3). Thermoelectric materials are used in thermoelectric systems for cooling or heating in niche applications, and are being studied as a way to generate electricity from waste heat. Research in the field is still driven by materials development, primarily in optimizing transport and thermoelectric properties. Thermoelectric figure of merit The usefulness of a material in thermoelectric systems is determined by the device efficiency. This is determined by the material's electrical conductivity (σ), thermal conductivity (κ), and Seebeck coefficient (S), which change with temperature (T). The maximum efficiency of the energy conversion process (for both power generation and cooling) at a given temperature point in the material is determined by the thermoelectric material's figure of merit $zT$, given by $zT = \frac{S^2 \sigma T}{\kappa}$. Device efficiency The efficiency of a thermoelectric device for electricity generation is given by $\eta$, defined as the ratio of the energy provided to the load to the heat energy absorbed at the hot junction. The maximum efficiency of a thermoelectric device is typically described in terms of its device figure of merit $Z\bar{T}$, where the maximum device efficiency is approximately given by $\eta_{\max} = \frac{T_h - T_c}{T_h} \cdot \frac{\sqrt{1 + Z\bar{T}} - 1}{\sqrt{1 + Z\bar{T}} + T_c/T_h}$, where $T_h$ is the fixed temperature at the hot junction, $T_c$ is the fixed temperature at the surface being cooled, and $\bar{T}$ is the mean of $T_h$ and $T_c$. This maximum efficiency equation is exact when the thermoelectric properties are temperature-independent. For a single thermoelectric leg the device efficiency can be calculated from the temperature-dependent properties S, κ and σ and the heat and electric current flow through the material. In an actual thermoelectric device, two materials are used (typically one n-type and one p-type) with metal interconnects. The maximum efficiency is then calculated from the efficiency of both legs and the electrical and thermal losses from the interconnects and surroundings. Ignoring these losses and the temperature dependencies of S, κ and σ, an inexact estimate for $Z\bar{T}$ is given by $Z\bar{T} = \frac{(S_p - S_n)^2 \bar{T}}{\left(\sqrt{\rho_n \kappa_n} + \sqrt{\rho_p \kappa_p}\right)^2}$, where $\rho$ is the electrical resistivity, and the properties are averaged over the temperature range; the subscripts n and p denote properties related to the n- and p-type semiconducting thermoelectric materials, respectively. Only when the n and p elements have the same and temperature-independent properties ($S_p = -S_n$) does $Z\bar{T} = zT$. Since thermoelectric devices are heat engines, their efficiency is limited by the Carnot efficiency $(T_h - T_c)/T_h$, the first factor in $\eta_{\max}$, while $Z\bar{T}$ and $zT$ determine the maximum reversibility of the thermodynamic process globally and locally, respectively.
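As a numerical illustration of the maximum-efficiency formula above, valid under the same assumption of temperature-independent properties (the temperatures and ZT values below are illustrative, not measured data):

```python
# Maximum generator efficiency as a function of the device figure of merit.
import math

def eta_max(zt_bar, t_hot, t_cold):
    """(Th - Tc)/Th * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th)."""
    carnot = (t_hot - t_cold) / t_hot
    root = math.sqrt(1.0 + zt_bar)
    return carnot * (root - 1.0) / (root + t_cold / t_hot)

t_hot, t_cold = 500.0, 300.0            # kelvin
for zt in (0.5, 1.0, 2.0):
    print(f"ZT = {zt}: eta_max = {eta_max(zt, t_hot, t_cold):.3f}")
# Even ZT = 2 recovers only about a third of the Carnot limit (0.4 here).
```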
Regardless, the coefficient of performance of current commercial thermoelectric refrigerators ranges from 0.3 to 0.6, one-sixth the value of traditional vapor-compression refrigerators. Power factor Often the thermoelectric power factor is reported for a thermoelectric material, given by $\mathrm{PF} = S^2 \sigma$, where S is the Seebeck coefficient and σ is the electrical conductivity. Although it is often claimed that TE devices with materials of higher power factor are able to 'generate' more energy (move more heat or extract more energy from a given temperature difference), this is only true for a thermoelectric device with fixed geometry and an unlimited heat source and cooling. If the geometry of the device is optimally designed for the specific application, the thermoelectric materials will operate at their peak efficiency, which is determined by their $zT$, not by their power factor. Aspects of materials choice For good efficiency, materials with high electrical conductivity, low thermal conductivity and a high Seebeck coefficient are needed. Electron state density: metals vs semiconductors The band structure of semiconductors offers better thermoelectric effects than the band structure of metals. In a semiconductor the Fermi energy lies below the conduction band, causing the state density to be asymmetric around the Fermi energy. Therefore, the average electron energy of the conduction band is higher than the Fermi energy, making the system conducive to charge motion into a lower energy state. By contrast, the Fermi energy lies in the conduction band in metals. This makes the state density symmetric about the Fermi energy, so that the average conduction electron energy is close to the Fermi energy, reducing the forces pushing for charge transport. Therefore, semiconductors are ideal thermoelectric materials. Conductivity In the efficiency equations above, thermal conductivity and electrical conductivity compete. The thermal conductivity κ in crystalline solids has mainly two components: $\kappa = \kappa_{\mathrm{electron}} + \kappa_{\mathrm{phonon}}$. According to the Wiedemann–Franz law, the higher the electrical conductivity, the higher $\kappa_{\mathrm{electron}}$ becomes. Thus in metals the ratio of thermal to electrical conductivity is about fixed, as the electron part dominates. In semiconductors, the phonon part is important and cannot be neglected; it reduces the efficiency. For good efficiency a low ratio of $\kappa_{\mathrm{phonon}} / \kappa_{\mathrm{electron}}$ is desired. Therefore, it is necessary to minimize $\kappa_{\mathrm{phonon}}$ and keep the electrical conductivity high. Thus semiconductors should be highly doped. G. A. Slack proposed that, in order to optimize the figure of merit, the phonons that are responsible for thermal conductivity must experience the material as a glass (experiencing a high degree of phonon scattering, which lowers the thermal conductivity) while electrons must experience it as a crystal (experiencing very little scattering, which maintains the electrical conductivity): this concept is called phonon glass electron crystal. The figure of merit can be improved through the independent adjustment of these properties. Quality factor The maximum $zT$ of a material is governed by the material's quality factor $B \propto \frac{N_v C_l}{m_I^* \Xi^2 \kappa_L} T$ (the omitted numerical prefactor is built from the Boltzmann constant $k_B$ and the reduced Planck constant $\hbar$), where $N_v$ is the number of degenerate valleys for the band, $C_l$ is the average longitudinal elastic modulus, $m_I^*$ is the inertial effective mass, $\Xi$ is the deformation potential coefficient, $\kappa_L$ is the lattice thermal conductivity, and $T$ is the temperature. The figure of merit, $zT$, depends on the doping concentration and the temperature of the material of interest.
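To make the competition between the terms concrete, the sketch below (round, illustrative values only, loosely of the order reported for good room-temperature materials) combines the power factor, a Wiedemann–Franz estimate of the electronic thermal conductivity, and the lattice term into zT:

```python
# zT = S^2 * sigma * T / (kappa_electron + kappa_lattice), with the electronic
# part estimated from the Wiedemann-Franz law: kappa_electron = L * sigma * T.
LORENZ = 2.44e-8      # W*ohm/K^2, the Sommerfeld value of the Lorenz number

def zt(seebeck, sigma, kappa_lattice, temperature):
    power_factor = seebeck ** 2 * sigma                # W/(m*K^2)
    kappa_e = LORENZ * sigma * temperature             # Wiedemann-Franz
    return power_factor * temperature / (kappa_e + kappa_lattice)

S, sigma, kappa_L, T = 200e-6, 1.0e5, 1.0, 300.0       # V/K, S/m, W/(m*K), K
print(f"zT = {zt(S, sigma, kappa_L, T):.2f}")          # zT = 0.69
# Halving the lattice conductivity alone raises zT, illustrating the
# "phonon glass, electron crystal" goal:
print(f"zT = {zt(S, sigma, kappa_L / 2, T):.2f}")      # zT = 0.97
```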
The material quality factor is useful because it allows for an intrinsic comparison of the possible efficiency of different materials. This relation shows that improving the electronic component of the quality factor, which primarily affects the Seebeck coefficient, will increase the quality factor of a material. A large density of states can be created by a large number of conducting bands ($N_v$) or by flat bands giving a high band effective mass ($m_b^*$); for isotropic materials the inertial effective mass coincides with the band effective mass. Therefore, it is desirable for thermoelectric materials to have high valley degeneracy in a very sharp band structure. Other complex features of the electronic structure are important. These can be partially quantified using an electronic fitness function. Materials of interest Strategies to improve thermoelectric performance include both advanced bulk materials and the use of low-dimensional systems. Such approaches to reduce lattice thermal conductivity fall under three general material types: (1) Alloys: create point defects, vacancies, or rattling structures (heavy-ion species with large vibrational amplitudes contained within partially filled structural sites) to scatter phonons within the unit cell crystal; (2) Complex crystals: separate the phonon glass from the electron crystal using approaches similar to those for superconductors (the region responsible for electron transport should be an electron crystal of a high-mobility semiconductor, while the phonon glass should ideally house disordered structures and dopants without disrupting the electron crystal, analogous to the charge reservoir in high-Tc superconductors); (3) Multiphase nanocomposites: scatter phonons at the interfaces of nanostructured materials, be they mixed composites or thin film superlattices. Materials under consideration for thermoelectric device applications include: Bismuth chalcogenides and their nanostructures Materials based on bismuth telluride (Bi2Te3) and related bismuth chalcogenides comprise some of the best performing room-temperature thermoelectrics, with a temperature-independent figure of merit, ZT, between 0.8 and 1.0. Nanostructuring these materials to produce a layered superlattice structure of alternating Bi2Te3 and Sb2Te3 layers produces a device within which there is good electrical conductivity but perpendicular to which thermal conductivity is poor. The result is an enhanced ZT (approximately 2.4 at room temperature for p-type). Note that this high value of ZT has not been independently confirmed, due to the complicated demands on the growth of such superlattices and device fabrication; however the material ZT values are consistent with the performance of hot-spot coolers made out of these materials and validated at Intel Labs. Bismuth telluride and its solid solutions are good thermoelectric materials at room temperature and therefore suitable for refrigeration applications around 300 K. The Czochralski method has been used to grow single crystalline bismuth telluride compounds. These compounds are usually obtained with directional solidification from melt or powder metallurgy processes. Materials produced with these methods have lower efficiency than single crystalline ones due to the random orientation of crystal grains, but their mechanical properties are superior and the sensitivity to structural defects and impurities is lower due to high optimal carrier concentration. The required carrier concentration is obtained by choosing a nonstoichiometric composition, which is achieved by introducing excess bismuth or tellurium atoms to the primary melt or by dopant impurities.
Some possible dopants are halogens and group IV and V atoms. Due to the small bandgap (0.16 eV), Bi2Te3 is partially degenerate and the corresponding Fermi level should be close to the conduction band minimum at room temperature. The size of the band gap means that Bi2Te3 has a high intrinsic carrier concentration. Therefore, minority carrier conduction cannot be neglected for small stoichiometric deviations. Use of telluride compounds is limited by the toxicity and rarity of tellurium. Lead tellurides Heremans et al. (2008) demonstrated that thallium-doped lead telluride (PbTe) achieves a ZT of 1.5 at 773 K. Later, Snyder et al. (2011) reported ZT~1.4 at 750 K in sodium-doped PbTe, and ZT~1.8 at 850 K in the sodium-doped PbTe1−xSex alloy. Snyder's group determined that both thallium and sodium alter the electronic structure of the crystal, increasing electronic conductivity. They also claim that selenium increases electric conductivity and reduces thermal conductivity. In 2012 another team used lead telluride to convert waste heat to electricity, reaching a ZT of 2.2, which they claimed was the highest yet reported. Inorganic clathrates Inorganic clathrates have the general formula AxByC46-y (type I) and AxByC136-y (type II), where B and C are group III and IV elements, respectively, which form the framework in which “guest” A atoms (alkali or alkaline earth metals) are encapsulated in two different polyhedra facing each other. The differences between types I and II come from the number and size of voids present in their unit cells. Transport properties depend on the framework's properties, but tuning is possible by changing the “guest” atoms. The most direct approach to synthesize and optimize the thermoelectric properties of semiconducting type I clathrates is substitutional doping, where some framework atoms are replaced with dopant atoms. In addition, powder metallurgical and crystal growth techniques have been used in clathrate synthesis. The structural and chemical properties of clathrates enable the optimization of their transport properties as a function of stoichiometry. The structure of type II materials allows a partial filling of the polyhedra, enabling better tuning of the electrical properties and therefore better control of the doping level. Partially filled variants can be synthesized as semiconducting or even insulating. Blake et al. have predicted ZT~0.5 at room temperature and ZT~1.7 at 800 K for optimized compositions. Kuznetsov et al. measured the electrical resistance and Seebeck coefficient of three different type I clathrates above room temperature and, by estimating the high-temperature thermal conductivity from published low-temperature data, obtained ZT~0.7 at 700 K for Ba8Ga16Ge30 and ZT~0.87 at 870 K for Ba8Ga16Si30. Compounds of Mg and group-14 elements Mg2BIV (BIV = Si, Ge, Sn) compounds and their solid solutions are good thermoelectric materials and their ZT values are comparable with those of established materials. The appropriate production methods are based on direct co-melting, but mechanical alloying has also been used. During synthesis, magnesium losses due to evaporation and segregation of components (especially for Mg2Sn) need to be taken into account. Directed crystallization methods can produce single crystals of Mg2Si, but these intrinsically have n-type conductivity, and doping, e.g. with Sn, Ga, Ag or Li, is required to produce the p-type material needed for an efficient thermoelectric device.
Solid solutions and doped compounds have to be annealed in order to produce homogeneous samples with the same properties throughout. At 800 K, Mg2Si0.55−xSn0.4Ge0.05Bix has been reported to have a figure of merit of about 1.4, the highest ever reported for these compounds. Skutterudite thermoelectrics Skutterudites have a chemical composition of LM4X12, where L is a rare-earth metal (an optional component), M is a transition metal, and X is a metalloid or pnictogen (group V element) such as phosphorus, antimony, or arsenic. These materials exhibit ZT>1.0 and can potentially be used in multistage thermoelectric devices. Unfilled, these materials contain voids, which can be filled with low-coordination ions (usually rare-earth elements) to reduce thermal conductivity by producing sources for lattice phonon scattering, without reducing electrical conductivity. It is also possible to reduce the thermal conductivity in skutterudites without filling these voids by using a special architecture containing nano- and micro-pores. NASA is developing a Multi-Mission Radioisotope Thermoelectric Generator in which the thermocouples would be made of skutterudite, which can function with a smaller temperature difference than the current tellurium designs. This would mean that an otherwise similar RTG would generate 25% more power at the beginning of a mission and at least 50% more after seventeen years. NASA hopes to use the design on the next New Frontiers mission. Oxide thermoelectrics Homologous oxide compounds (such as those of the form (SrTiO3)n(SrO)m—the Ruddlesden-Popper phases) have layered superlattice structures that make them promising candidates for use in high-temperature thermoelectric devices. These materials exhibit low thermal conductivity perpendicular to the layers while maintaining good electronic conductivity within the layers. Their ZT values can reach 2.4 for epitaxial films, and the enhanced thermal stability of such oxides, as compared with conventional high-ZT bismuth compounds, makes them superior high-temperature thermoelectrics. Interest in oxides as thermoelectric materials was reawakened in 1997 when a relatively high thermoelectric power was reported for NaCo2O4. In addition to their thermal stability, other advantages of oxides are their low toxicity and high oxidation resistance. Simultaneously controlling both the electronic and phonon systems may require nanostructured materials. Layered Ca3Co4O9 exhibited ZT values of 1.4–2.7 at 900 K. If the layers in a given material have the same stoichiometry, they will be stacked so that the same atoms are not positioned on top of each other, impeding phonon conductivity perpendicular to the layers. Oxide thermoelectrics have recently gained much attention, and the range of promising phases has increased drastically. Novel members of this family include ZnO, MnO2, and NbO2. Cation-substituted copper sulfide thermoelectrics All of the variables mentioned are included in the equation for the dimensionless figure of merit, zT = S2σT/κ. The goal of any thermoelectric experiment is to make the power factor, S2σ, larger while maintaining a small thermal conductivity. This is because electricity is produced through a temperature gradient, so materials that equilibrate heat very quickly are not useful. The two compounds detailed below were found to exhibit high-performing thermoelectric properties, as evidenced by the figures of merit reported in the respective manuscripts. 
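Before turning to those two compounds, the quantities just mentioned can be tied together in a short numerical sketch. This is only an illustration of the definitions zT = S2σT/κ and power factor S2σ; the input values are rough, order-of-magnitude numbers assumed for a good room-temperature thermoelectric, not data from the works discussed in this section.

```python
# Illustrative relationship between the transport quantities discussed
# above: power factor S^2*sigma, and figure of merit zT = S^2*sigma*T/kappa.
# Inputs are assumed order-of-magnitude values, not measured data.

S = 200e-6      # Seebeck coefficient, V/K
sigma = 1.0e5   # electrical conductivity, S/m
kappa = 1.5     # total thermal conductivity, W/(m K)
T = 300.0       # absolute temperature, K

power_factor = S**2 * sigma       # W/(m K^2)
zT = power_factor * T / kappa     # dimensionless

print(f"power factor = {power_factor * 1e3:.1f} mW/(m K^2)")  # -> 4.0
print(f"zT           = {zT:.2f}")                             # -> 0.80
```

With these assumed inputs the sketch lands on the zT ≈ 0.8 scale quoted earlier for the bismuth chalcogenides, which is why such values are often used as a reference point.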
Cuprokalininite (CuCr2S4) is a copper-dominant analogue of the mineral joegoldsteinite. It was recently found within metamorphic rocks in Slyudyanka, in the South Baikal region of Russia, and researchers have determined that Sb-doped cuprokalininite (Cu1-xSbxCr2S4) shows promise in renewable technology. Doping is the act of intentionally adding an impurity, usually to modify the electrochemical characteristics of the seed material. The introduction of antimony enhances the power factor by bringing in extra electrons, which increases the Seebeck coefficient, S, and reduces the magnetic moment (how likely the particles are to align with a magnetic field); it also distorts the crystal structure, which lowers the thermal conductivity, κ. Khan et al. (2017) determined the optimal Sb content (x = 0.3) in cuprokalininite, developing a device with a ZT value of 0.43. Bornite (Cu5FeS4) is a sulfide mineral named after an Austrian mineralogist, though it is much more common than the aforementioned cuprokalininite. This metal ore was found to demonstrate improved thermoelectric performance after undergoing cation exchange with iron. Cation exchange is the process of surrounding a parent crystal with an electrolyte complex so that the cations (positively charged ions) within the structure can be swapped out for those in solution without affecting the anion sublattice (the negatively charged crystal network). The result is crystals that possess a different composition yet an identical framework. In this way, scientists are granted extreme morphological control and uniformity when generating complicated heterostructures. Cation exchange was expected to improve the ZT value because its mechanics often bring about crystallographic defects, which cause phonons (the quanta of lattice heat) to scatter. According to the Debye-Callaway formalism, a model used to determine the lattice thermal conductivity, κL, the highly anharmonic behavior due to phonon scattering results in a large thermal resistance. Therefore, a greater defect density decreases the lattice thermal conductivity, making for a larger figure of merit. In conclusion, Long et al. reported that greater Cu deficiencies resulted in increases of up to 88% in the ZT value, with a maximum of 0.79. The composition of thermoelectric devices can vary dramatically depending on the temperature of the heat they must harvest; given that more than eighty percent of industrial waste heat falls within the range 373–575 K, chalcogenides and antimonides are better suited for thermoelectric conversion because they can utilize heat at lower temperatures. Sulfur is not only the cheapest and lightest chalcogenide; as a byproduct of petroleum processing, its current surpluses may also pose a threat to the environment, so consuming sulfur could help mitigate future damage. As for the metal, copper is an ideal seed particle for any kind of substitution method because of its high mobility and variable oxidation state, which let it balance or complement the charge of less flexible cations. Therefore, either the cuprokalininite or bornite minerals could prove ideal thermoelectric components. Half-Heusler alloys Half-Heusler (HH) alloys have great potential for high-temperature power generation applications. Examples of these alloys include NbFeSb, NbCoSn and VFeSb. They have a cubic MgAgAs-type structure formed by three interpenetrating face-centered-cubic (fcc) lattices. 
The ability to substitute any of these three sublattices opens the door to a wide variety of compounds. Various atomic substitutions are employed to reduce the thermal conductivity and enhance the electrical conductivity. Previously, ZT did not exceed 0.5 for p-type and 0.8 for n-type HH compounds. However, in the past few years, researchers have been able to achieve ZT≈1 for both n-type and p-type. Nano-sized grains are one approach to lowering the thermal conductivity via grain-boundary-assisted phonon scattering. Another approach is to utilize the principles of nanocomposites, in which certain combinations of metals are favored over others because of their atomic size difference. For instance, combining Hf and Ti is more effective than combining Hf and Zr when reduction of thermal conductivity is of concern, since the atomic size difference between Hf and Ti is larger than that between Hf and Zr. Flexible Thermoelectric Materials Electrically conducting organic materials Conducting polymers are of significant interest for flexible thermoelectric development. They are flexible, lightweight, geometrically versatile, and can be processed at scale, an important component for commercialization. However, the structural disorder of these materials often inhibits the electrical conductivity much more than the thermal conductivity, limiting their use so far. Some of the most common conducting polymers investigated for flexible thermoelectrics include poly(3,4-ethylenedioxythiophene) (PEDOT), polyanilines (PANIs), polythiophenes, polyacetylenes, polypyrrole, and polycarbazole. P-type PEDOT:PSS (polystyrene sulfonate) and PEDOT-Tos (tosylate) have been some of the most encouraging materials investigated. Organic, air-stable n-type thermoelectrics are often harder to synthesize because of their low electron affinity and likelihood of reacting with oxygen and water in the air. These materials often have a figure of merit that is still too low for commercial applications (~0.42 in PEDOT:PSS) because of their poor electrical conductivity. Hybrid Composites Hybrid composite thermoelectrics involve blending the previously discussed electrically conducting organic materials or other composite materials with other conductive materials in an effort to improve transport properties. The conductive materials most commonly added are carbon nanotubes and graphene, owing to their conductivities and mechanical properties. It has been shown that carbon nanotubes can increase the tensile strength of the polymer composite they are blended with; however, they can also reduce its flexibility. Furthermore, future study into the orientation and alignment of these added materials may allow for improved performance. The percolation threshold of CNTs is often especially low, well below 10%, due to their high aspect ratio. A low percolation threshold is desirable for both cost and flexibility purposes. Reduced graphene oxide (rGO), a graphene-related material, has also been used to enhance the figure of merit of thermoelectric materials. The addition of a rather low amount of graphene or rGO, around 1 wt%, mainly strengthens phonon scattering at the grain boundaries of these materials and increases the charge carrier concentration and mobility in chalcogenide-, skutterudite- and, particularly, metal-oxide-based composites. However, significant growth of ZT after the addition of graphene or rGO has been observed mainly in composites based on thermoelectric materials with low initial ZT. 
When the thermoelectric material is already nanostructured and possesses high electrical conductivity, such an addition does not enhance ZT significantly. Thus, a graphene or rGO additive works mainly as an optimizer of the intrinsic performance of thermoelectric materials. Hybrid thermoelectric composites also refer to polymer-inorganic thermoelectric composites. This is generally achieved through an inert polymer matrix that hosts the thermoelectric filler material. The matrix is generally nonconductive, so as not to short the current and to let the thermoelectric material dominate the electrical transport properties. One major benefit of this method is that the polymer matrix is generally highly disordered and random on many different length scales, meaning that the composite material can have a much lower thermal conductivity. The general procedure for synthesizing these materials involves dissolving the polymer in a solvent and dispersing the thermoelectric material throughout the mixture. Silicon-germanium alloys Bulk Si exhibits a low ZT of ~0.01 because of its high thermal conductivity. However, ZT can be as high as 0.6 in silicon nanowires, which retain the high electrical conductivity of doped Si but have reduced thermal conductivity due to elevated scattering of phonons on their extensive surfaces and their low cross-section. Combining Si and Ge also retains the high electrical conductivity of both components while reducing the thermal conductivity. The reduction originates from additional scattering due to the very different lattice (phonon) properties of Si and Ge. As a result, silicon-germanium alloys are currently the best thermoelectric materials around 1000 °C and are therefore used in some radioisotope thermoelectric generators (RTGs) (notably the MHW-RTG and GPHS-RTG) and some other high-temperature applications, such as waste heat recovery. The usability of silicon-germanium alloys is limited by their high price and moderate ZT values (p-SiGe ~0.7 and n-SiGe ~1.0); however, ZT can be increased to 1–2 in SiGe nanostructures owing to the reduction in thermal conductivity. Sodium cobaltate Experiments on crystals of sodium cobaltate, using X-ray and neutron scattering carried out at the European Synchrotron Radiation Facility (ESRF) and the Institut Laue-Langevin (ILL) in Grenoble, were able to suppress thermal conductivity by a factor of six compared with vacancy-free sodium cobaltate. The experiments agreed with corresponding density functional calculations. The technique involved large anharmonic displacements of sodium ions contained within the crystals. Amorphous materials In 2002, Nolas and Goldsmid suggested that systems in which the phonon mean free path is larger than the charge carrier mean free path can exhibit enhanced thermoelectric efficiency. This can be realized in amorphous thermoelectrics, which soon became the focus of many studies. This ground-breaking idea was realized in Cu-Ge-Te, NbO2, In-Ga-Zn-O, Zr-Ni-Sn, Si-Au, and Ti-Pb-V-O amorphous systems. It should be mentioned that modelling transport properties is challenging enough without breaking the long-range order, so the design of amorphous thermoelectrics is in its infancy. Naturally, amorphous thermoelectrics give rise to extensive phonon scattering, which is still a challenge for crystalline thermoelectrics. A bright future is expected for these materials. 
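A common thread running through the nanowire, alloying, and amorphous strategies above is the shortening of the phonon mean free path. A minimal sketch of that logic uses the textbook kinetic-theory ("gray", single mean-free-path) estimate κ = (1/3)·C·v·Λ; the silicon-like parameter values below are rough assumptions for illustration, and the model only indicates the trend, not quantitative nanowire data.

```python
# Gray kinetic-theory estimate of lattice thermal conductivity:
#   kappa = (1/3) * C * v * L
# C: volumetric heat capacity, v: average sound velocity,
# L: phonon mean free path. All inputs are rough, silicon-like numbers.

C = 1.7e6   # volumetric heat capacity, J/(m^3 K)
v = 6.4e3   # average sound velocity, m/s

def kappa(mfp):
    """Lattice thermal conductivity, W/(m K), for mean free path mfp in m."""
    return C * v * mfp / 3.0

for label, mfp in [("bulk-effective, ~40 nm", 40e-9),
                   ("nanowire-limited, ~20 nm", 20e-9),
                   ("amorphous-like, ~1 nm", 1e-9)]:
    print(f"{label}: kappa ~ {kappa(mfp):6.1f} W/(m K)")
```

Clamping the mean free path to a boundary scale (a nanowire diameter, a grain size, or an interatomic distance in the amorphous limit) cuts the estimate by one to two orders of magnitude, which is the qualitative effect the strategies in this section exploit.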
Functionally graded materials Functionally graded materials make it possible to improve the conversion efficiency of existing thermoelectrics. These materials have a non-uniform carrier concentration distribution and, in some cases, a non-uniform solid solution composition. In power generation applications the temperature difference can be several hundred degrees, so devices made from homogeneous materials have some part that operates at a temperature where ZT is substantially lower than its maximum value. This problem can be solved by using materials whose transport properties vary along their length, thus enabling substantial improvements to the operating efficiency over large temperature differences. This is possible with functionally graded materials, as they have a variable carrier concentration along the length of the material, optimized for operation over a specific temperature range. Nanomaterials and superlattices In addition to nanostructured/superlattice thin films, other nanostructured materials, including silicon nanowires, nanotubes, and quantum dots, show potential for improving thermoelectric properties. PbTe/PbSeTe quantum dot superlattice Another example of a superlattice is the PbTe/PbSeTe quantum dot superlattice, which provides an enhanced ZT (approximately 1.5 at room temperature), higher than the bulk ZT value of either PbTe or PbSeTe (approximately 0.5). Nanocrystal stability and thermal conductivity Not all nanocrystalline materials are stable, because the crystal size can grow at high temperatures, ruining the materials' desired characteristics. Nanocrystalline materials have many interfaces between crystals, which scatter phonons, so the thermal conductivity is reduced. Phonons are confined to a grain if their mean free path is larger than the grain size. Nanocrystalline transition metal silicides Nanocrystalline transition metal silicides are a promising material group for thermoelectric applications because they fulfill several criteria demanded from the commercial applications point of view. In some nanocrystalline transition metal silicides the power factor is higher than in the corresponding polycrystalline material, but the lack of reliable data on thermal conductivity prevents evaluation of their thermoelectric efficiency. Nanostructured skutterudites Skutterudite, a cobalt arsenide mineral with variable amounts of nickel and iron, can be produced artificially and is a candidate for better thermoelectric materials. One advantage of nanostructured skutterudites over normal skutterudites is their reduced thermal conductivity, caused by grain boundary scattering. ZT values of ~0.65 and >0.4 have been achieved with CoSb3-based samples; the former value was for material doped with 2.0 at.% Ni and 0.75 at.% Te at 680 K, and the latter for an Au-composite. Even greater performance improvements can be achieved by using composites and by controlling the grain size, the compaction conditions of polycrystalline samples, and the carrier concentration. Graphene Graphene is known for its high electrical conductivity and Seebeck coefficient at room temperature. However, from a thermoelectric perspective, its thermal conductivity is notably high, which in turn limits its ZT. Several approaches have been suggested to reduce the thermal conductivity of graphene without much altering its electrical conductivity. 
These include, but are not limited to, the following: Doping with carbon isotopes to form isotopic heterojunctions, such as that of 12C and 13C. The phonon frequency mismatch between the isotopes leads to scattering of the heat carriers (phonons). This approach has been shown to affect neither the power factor nor the electrical conductivity. Wrinkles and cracks in the graphene structure were shown to contribute to the reduction in thermal conductivity. Reported values of the thermal conductivity of suspended graphene of size 3.8 μm show a wide spread, from 1500 to 5000 W/(m·K). A recent study attributed this to the microstructural defects present in graphene, such as wrinkles and cracks, which can drop the thermal conductivity by 27%. These defects help scatter phonons. Introduction of defects with techniques such as oxygen plasma treatment. A more systematic way of introducing defects into the graphene structure is O2 plasma treatment. Ultimately, the graphene sample will contain prescribed holes, spaced and numbered according to the plasma intensity. Researchers were able to improve the ZT of graphene from 1 to 2.6 when the defect density was raised from 0.04 to 2.5 (this number is an index of defect density, usually understood by comparison with the corresponding value for untreated graphene, 0.04 in this case). Nevertheless, this technique would lower the electrical conductivity as well, although it can be kept unchanged if the plasma processing parameters are optimized. Functionalization of graphene by oxygen. The thermal behavior of graphene oxide has not been investigated as extensively as that of its counterpart, graphene. However, density functional theory (DFT) modelling has shown that adding oxygen into the graphene lattice greatly reduces its thermal conductivity through phonon scattering. Scattering of phonons results from both acoustic mismatch and reduced symmetry of the graphene structure after doping with oxygen. The reduction in thermal conductivity can easily exceed 50% with this approach. Superlattices and roughness Superlattices – nanostructured thermocouples – are considered good candidates for better thermoelectric device manufacturing, with a range of materials available for fabricating such structures. Their production is expensive for general use, owing to fabrication processes based on expensive thin-film growth methods. However, since the amount of thin-film material required for device fabrication with superlattices is so much less than that required for bulk thermoelectric materials (almost by a factor of 1/10,000), the long-term cost advantage is indeed favorable. This is particularly true given the limited availability of tellurium, for which competing solar applications are raising demand. Superlattice structures also allow the independent manipulation of transport parameters by adjusting the structure itself, enabling research toward a better understanding of thermoelectric phenomena at the nanoscale and the study of phonon-blocking, electron-transmitting structures – explaining the changes in electric field and conductivity due to the material's nanostructure. Many strategies exist to decrease the superlattice thermal conductivity, based on engineering of phonon transport. The thermal conductivity along the film plane and wire axis can be reduced by creating diffuse interface scattering and by reducing the interface separation distance, both of which are caused by interface roughness. 
Interface roughness can occur naturally or may be artificially induced. In nature, roughness is caused by the mixing of atoms of foreign elements. Artificial roughness can be created using various structure types, such as quantum dot interfaces and thin films on step-covered substrates. Problems in superlattices Reduced electrical conductivity: Reduced phonon-scattering interface structures often also exhibit a decrease in electrical conductivity. The thermal conductivity in the cross-plane direction of the lattice is usually very low, but depending on the type of superlattice, the thermoelectric coefficient may increase because of changes to the band structure. Low thermal conductivity in superlattices is usually due to strong interface scattering of phonons. Minibands are caused by the lack of quantum confinement within a well. The miniband structure depends on the superlattice period: with a very short period (~1 nm) the band structure approaches the alloy limit, and with a long period (≥ ~60 nm) minibands become so close to each other that they can be approximated by a continuum. Superlattice structure countermeasures: Countermeasures can be taken that practically eliminate the problem of decreased electrical conductivity in a reduced phonon-scattering interface. These measures include the proper choice of superlattice structure, taking advantage of miniband conduction across superlattices, and avoiding quantum confinement. It has been shown that because electrons and phonons have different wavelengths, it is possible to engineer the structure in such a way that phonons are scattered more diffusely at the interface than electrons. Phonon confinement countermeasures: Another approach to overcoming the decrease in electrical conductivity in reduced phonon-scattering structures is to increase phonon reflectivity and therefore decrease the thermal conductivity perpendicular to the interfaces. This can be achieved by increasing the mismatch between the materials in adjacent layers, including density, group velocity, specific heat, and the phonon spectrum. Interface roughness causes diffuse phonon scattering, which either increases or decreases the phonon reflectivity at the interfaces. A mismatch between bulk dispersion relations confines phonons, and the confinement becomes more favorable as the difference in dispersion increases. The amount of confinement is currently unknown, as only some models and experimental data exist. As with a previous method, the effects on the electrical conductivity have to be considered. Attempts to localize long-wavelength phonons by aperiodic superlattices or composite superlattices with different periodicities have been made. In addition, defects, especially dislocations, can be used to reduce thermal conductivity in low-dimensional systems. Parasitic heat: Parasitic heat conduction in the barrier layers could cause significant performance loss. It has been proposed, but not tested, that this can be overcome by choosing a correct distance between the quantum wells. The Seebeck coefficient can change its sign in superlattice nanowires owing to the existence of minigaps as the Fermi energy varies. This indicates that superlattices can be tailored to exhibit n- or p-type behavior by using the same dopants as are used for the corresponding bulk materials, by carefully controlling the Fermi energy or the dopant concentration. 
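One crude way to see why reducing the interface separation suppresses cross-plane conduction, as described above, is a series-resistance sketch in which each superlattice period contributes its intrinsic layer resistance plus an interfacial (Kapitza) resistance per interface. The parameter values below are assumptions for illustration only; coherent phonon effects, and the electrical side effects just discussed, are ignored.

```python
# Series-resistance picture of cross-plane heat flow in a superlattice:
# each period adds (period / kappa_layer) of intrinsic resistance plus
# one thermal boundary (Kapitza) resistance per interface.
# All numbers are illustrative assumptions.

kappa_layer = 1.5         # intrinsic layer thermal conductivity, W/(m K)
r_interface = 5e-9        # boundary resistance per interface, m^2 K/W
interfaces_per_period = 2

def kappa_cross_plane(period):
    """Effective cross-plane thermal conductivity for a period in m."""
    r_total = period / kappa_layer + interfaces_per_period * r_interface
    return period / r_total

for period in (100e-9, 20e-9, 5e-9):
    print(f"period {period * 1e9:5.0f} nm -> kappa_eff ~ "
          f"{kappa_cross_plane(period):.2f} W/(m K)")
```

Shrinking the period packs more interfaces into a given thickness, so the interfacial term dominates and the effective conductivity falls, which is the trend the roughness-engineering strategies above aim to amplify.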
With nanowire arrays, it is possible to exploit the semimetal-semiconductor transition due to quantum confinement and use materials that would normally not be good thermoelectric materials in bulk form. One such element is bismuth. The Seebeck effect could also be used to determine the carrier concentration and Fermi energy in nanowires. In quantum dot thermoelectrics, unconventional or nonband transport behavior (e.g. tunneling or hopping) is necessary to utilize their special electronic band structure in the transport direction. It is possible to achieve ZT>2 at elevated temperatures with quantum dot superlattices, but they are almost always unsuitable for mass production. However, in superlattices where quantum effects are not involved, with film thicknesses of only a few micrometers (μm) to about 15 μm, Bi2Te3/Sb2Te3 superlattice material has been made into high-performance microcoolers and other devices. The performance of hot-spot coolers is consistent with the reported ZT~2.4 of superlattice materials at 300 K. Nanocomposites are a promising material class for bulk thermoelectric devices, but several challenges have to be overcome to make them suitable for practical applications. It is not well understood why improved thermoelectric properties appear only in certain materials with specific fabrication processes. SrTe nanocrystals can be embedded in a bulk PbTe matrix so that the rocksalt lattices of both materials are completely aligned (endotaxy), with an optimal molar concentration for SrTe of only 2%. This can cause strong phonon scattering but does not affect charge transport. In such a case, ZT~1.7 can be achieved at 815 K for p-type material. Tin selenide In 2014, researchers at Northwestern University discovered that tin selenide (SnSe) has a ZT of 2.6 along the b axis of the unit cell. This was the highest value reported to date. This was attributed to an extremely low thermal conductivity found in the SnSe lattice. Specifically, SnSe demonstrated a lattice thermal conductivity of 0.23 W·m−1·K−1, much lower than previously reported values of 0.5 W·m−1·K−1 and greater. This material also exhibited a ZT of 2.3 along the c-axis and 0.8 along the a-axis. These results were obtained at a temperature of about 923 K. SnSe performance metrics were found to improve significantly at higher temperatures, owing to a structural change. Power factor, conductivity, and thermal conductivity all reach their optimal values at or above 750 K and appear to plateau at higher temperatures. However, other groups have not been able to reproduce the reported bulk thermal conductivity data. Although it exists at room temperature in an orthorhombic structure with space group Pnma, SnSe undergoes a transition to a structure with higher symmetry, space group Cmcm, at higher temperatures. This structure consists of Sn-Se planes stacked upward in the a-direction, which accounts for the poor performance out of plane (along the a-axis). Upon transitioning to the Cmcm structure, SnSe maintains its low thermal conductivity but exhibits higher carrier mobilities. One impediment to the further development of SnSe is its relatively low carrier concentration, approximately 1017 cm−3. Compounding this issue is the fact that SnSe has been reported to have low doping efficiency. However, such single-crystalline materials are difficult to make into useful devices because of their brittleness and the narrow range of temperatures over which ZT is reported to be high. 
In 2021, researchers announced a polycrystalline form of SnSe that was at once less brittle and featured a ZT of 3.1. Anderson localization Anderson localization is a quantum mechanical phenomenon in which charge carriers in a random potential are trapped in place (i.e., they occupy localized states, as opposed to the scattering states of carriers that can move freely). This localization prevents the charge carriers from moving, which inhibits their contribution to the thermal conductivity of a material; but because it also lowers the electrical conductivity, it was thought to reduce ZT and be detrimental for thermoelectric materials. In 2019, it was proposed that by localizing only the minority charge carriers in a doped semiconductor (i.e., holes in an n-doped semiconductor or electrons in a p-doped semiconductor), Anderson localization could increase ZT. The heat conductivity associated with the movement of the minority charge carriers would be reduced, while the electrical conductivity of the majority charge carriers would be unaffected. In 2020, researchers at Kyung Hee University demonstrated the use of Anderson localization in an n-type semiconductor to improve the thermoelectric properties of a material. They embedded nanoparticles of silver telluride (Ag2Te) in a lead telluride (PbTe) matrix. Ag2Te undergoes a phase transition around 407 K. Below this temperature, both holes and electrons are localized at the Ag2Te nanoparticles, while after the transition, holes are still localized but electrons can move freely in the material. The researchers were able to increase ZT from 1.5 to above 2.0 using this method. Production methods Production methods for these materials can be divided into powder-based and crystal-growth-based techniques. Powder-based techniques offer an excellent ability to control and maintain the desired carrier distribution, particle size, and composition. In crystal growth techniques, dopants are often mixed with the melt, but diffusion from the gaseous phase can also be used. In zone melting techniques, disks of different materials are stacked on top of one another and the materials mix when a traveling heater causes melting. In powder techniques, either different powders are mixed in varying ratios before melting, or they are layered in a stack before pressing and melting. There are applications, such as the cooling of electronic circuits, where thin films are required; therefore, thermoelectric materials can also be synthesized using physical vapor deposition techniques. Another reason to utilize these methods is to design these phases and provide guidance for bulk applications. 3D Printing Significant improvements in 3D printing have made it possible to prepare thermoelectric components via 3D printing. Thermoelectric products are made from special materials that absorb heat and create electricity. The requirement of fitting complex geometries into tightly constrained spaces makes 3D printing an ideal manufacturing technique. There are several benefits to the use of additive manufacturing in thermoelectric material production. Additive manufacturing allows for innovation in the design of these materials, facilitating intricate geometries that would not otherwise be possible with conventional manufacturing processes. It reduces the amount of wasted material during production and allows for faster production turnaround times by eliminating the need for tooling and prototype fabrication, which can be time-consuming and expensive. 
There are several major additive manufacturing technologies that have emerged as feasible methods for the production of thermoelectric materials, including continuous inkjet printing, dispenser printing, screen printing, stereolithography, and selective laser sintering. Each method has its own challenges and limitations, especially related to the material class and form that can be used. For example, selective laser sintering (SLS) can be used with metal and ceramic powders, stereolithography (SLA) must be used with curable resins containing solid particle dispersions of the thermoelectric material of choice, and inkjet printing must use inks, which are usually synthesized by dispersing inorganic powders in an organic solvent or by making a suspension. The motivation for producing thermoelectrics by means of additive manufacturing is a desire to improve the properties of these materials, namely increasing their thermoelectric figure of merit ZT and thereby improving their energy conversion efficiency. Research has been done proving the efficacy of, and investigating the material properties of, thermoelectric materials produced via additive manufacturing. An extrusion-based additive manufacturing method was used to successfully print bismuth telluride (Bi2Te3) with various geometries. This method utilized an all-inorganic viscoelastic ink synthesized using Sb2Te2 chalcogenidometallate ions as binders for Bi2Te3-based particles. The results of this method showed homogeneous thermoelectric properties throughout the material and a thermoelectric figure of merit ZT of 0.9 for p-type samples and 0.6 for n-type samples. The Seebeck coefficient of this material was also found to increase with increasing temperature up to around 200 °C. Groundbreaking research has also been done on the use of selective laser sintering (SLS) for the production of thermoelectric materials. Loose Bi2Te3 powders have been printed via SLS without the use of pre- or post-processing of the material, pre-forming of a substrate, or binder materials. The printed samples achieved 88% relative density (compared with a relative density of 92% in conventionally manufactured Bi2Te3). Scanning electron microscopy (SEM) imaging results showed adequate fusion between layers of deposited materials. Though pores existed within the melted region, this is a general issue with parts made by SLS, occurring as a result of gas bubbles trapped in the melted material during its rapid solidification. X-ray diffraction results showed that the crystal structure of the material was intact after laser melting. The Seebeck coefficient, figure of merit ZT, electrical and thermal conductivity, specific heat, and thermal diffusivity of the samples were also investigated at high temperatures up to 500 °C. Of particular interest is the ZT of these Bi2Te3 samples, which was found to decrease with increasing temperature up to around 300 °C, increase slightly between 300 and 400 °C, and then increase sharply with further increases in temperature. The highest achieved ZT value (for an n-type sample) was about 0.11. The bulk thermoelectric material properties of samples produced using SLS were comparable to the thermoelectric and electrical properties of materials produced using conventional manufacturing methods. This was the first time the SLS method had been used successfully for thermoelectric material production. 
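As a small aside on the figures quoted for the SLS samples, relative density is simply the ratio of the measured density to the theoretical (fully dense) density of the material. A minimal illustration follows; the measured value is hypothetical, and the ~7.7 g/cm3 theoretical density of Bi2Te3 is an approximate handbook number used only for the sketch.

```python
# Relative density = measured density / theoretical (fully dense) density.
# rho_measured is a hypothetical measurement; rho_theoretical is an
# approximate handbook value for Bi2Te3.

rho_theoretical = 7.7   # g/cm^3, fully dense Bi2Te3 (approximate)
rho_measured = 6.8      # g/cm^3, hypothetical Archimedes measurement

relative_density = 100.0 * rho_measured / rho_theoretical
print(f"relative density ~ {relative_density:.0f}%")  # ~88%, i.e. ~12% porosity
```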
Mechanical Properties Thermoelectric materials are commonly used in thermoelectric generators to convert thermal energy into electricity. Thermoelectric generators have the advantages of no moving parts and requiring no chemical reaction for energy conversion, which make them stand out from other sustainable energy resources such as wind turbines and solar cells. Nevertheless, the mechanical performance of thermoelectric generators may decay over time due to plastic, fatigue, and creep deformation as a result of being subjected to complex and time-varying thermomechanical stresses. Thermomechanical Stresses in Thermoelectric Devices Geometrical Effects In their research, Al-Merbati et al. found that the stress levels around the leg corners of thermoelectric devices were high and generally increased closer to the hot side. However, switching to a trapezoidal leg geometry reduced thermal stresses. Erturun et al. compared various leg geometries and discovered that rectangular-prism and cylindrical legs experienced the highest stresses. Studies have also shown that using thinner and longer legs can significantly relieve stress. Tachibana and Fang estimated the relationship between thermal stress, temperature difference, coefficient of thermal expansion, and module dimensions. They found that the thermal stress was proportional to LαΔT/h, where L, α, ΔT and h are the module thickness, coefficient of thermal expansion (CTE), temperature difference, and leg height, respectively. Effect of Boundary Conditions Clin et al. conducted finite-element analysis to replicate thermal stresses in a thermoelectric module and concluded that the thermal stresses depended on the mechanical boundary conditions on the module and on the CTE mismatch between the various components. The corners of the legs exhibited maximum stresses. In a separate investigation, Turenne et al. examined the distribution of stress in large freestanding thermoelectric modules and in modules rigidly fixed between two heat exchange surfaces. Although the boundary conditions significantly altered the stress distribution, the authors deduced that external compressive loading on the TE module created global compressive stresses. Effect of Thermal Fatigue Thermoelectric materials commonly contain different types of defects, such as dislocations, vacancies, secondary phases, and antisite defects. These defects can affect thermoelectric performance by evolving under service conditions. In 2019, Yun Zheng et al. studied the thermal fatigue of Bi2Te3-based materials and proposed that fatigue damage can be reduced by boosting the fracture toughness through the introduction of pores, microcracks, or inclusions, with an inextricable trade-off against fracture strength. Effect of Thermal Shocks Thermoelectric materials can undergo thermal shock loading through service temperature spikes and through soldering and metallizing processes. The thermoelectric leg can be coated with metal to form the required diffusion barrier (metallizing), and the metallized leg can be dipped in a molten alloy bath (soldering) to connect the leg to the interconnect. In a study conducted by Pelletier et al., thermoelectric disks were quenched for the purpose of thermal shock experiments. They found that quenching in a hot medium produced compressive stresses at the disks' surface, in contrast to the core, which developed tensile stress. Anisotropic materials and thin disks were reported to develop greater maximum stresses. 
They also observed fracturing of specimens during quenching from room temperature in a soldering bath. Effect of Tensile Stresses Thermal stresses in thermoelectric modules have been quantified and extensively studied over the years, but they are commonly reported as von Mises stresses. The von Mises stress defines a criterion for plastic yielding without carrying any information about the nature of the stress. For instance, in a study by Sakamoto et al., the mechanical stability of a Mg2Si-based structure was investigated that could utilize thermoelectric legs set at an angle to the electrical interconnects and substrates. Maximum tensile stresses were calculated and compared with the ultimate tensile strength of different materials. This approach can be misleading for brittle materials (such as ceramics), as they do not possess a well-defined tensile strength. Thermal Mismatch Stresses In 2018, Chen et al. investigated the cracking failure of Cu pillar bumps caused by electromigration under thermoelectric coupling load. They showed that under thermoelectric coupling load, the bump experiences severe Joule heating and current density, which can accumulate thermomechanical stress and drive microstructure evolution. They also pointed out that the difference in CTE between the materials in a flip-chip package causes thermal mismatch stress, which can later cause cavities to expand along the cathode into cracks. They further noted that thermal-electrical coupling can cause electromigration, microcracks, and delamination due to temperature and stress concentrations, which can make Cu pillar bumps fail. Phase-Transformation Stresses Phase transformation can occur in thermoelectric materials, as in many other energy materials. As pointed out by Al Malki et al., phase transformation can lead to a total plastic strain when internal mismatch stresses are biased by shear stress. The alpha phase of Ag2S transforms to a body-centered cubic phase; Liang et al. observed cracking when heating through this phase transformation at 407 K. Creep Deformation Creep deformation is a time-dependent mechanism in which strain accumulates as a material is subjected to external or internal stresses at a high homologous temperature, in excess of T/Tm = 0.5 (where Tm is the melting point in K). This phenomenon can emerge in thermoelectric devices after operating for a long time (i.e., months to years). Coarse-grained or monocrystalline structures have been shown to be desirable for creep resistance. Applications Refrigeration Thermoelectric materials can be used as refrigerators, called "thermoelectric coolers" or "Peltier coolers" after the Peltier effect that controls their operation. As a refrigeration technology, Peltier cooling is far less common than vapor-compression refrigeration. The main advantages of a Peltier cooler (compared with a vapor-compression refrigerator) are its lack of moving parts or refrigerant, and its small size and flexible shape (form factor). The main disadvantage of Peltier coolers is low efficiency. It is estimated that materials with ZT>3 (about 20–30% Carnot efficiency) would be required to replace traditional coolers in most applications. Today, Peltier coolers are used only in niche applications, especially at small scale, where efficiency is not important. Power generation Thermoelectric efficiency depends on the figure of merit, ZT. 
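The dependence of generator efficiency on ZT is usually written via the standard single-leg expression sketched below; the operating temperatures chosen here are illustrative, not tied to any particular device.

```python
import math

# Standard maximum efficiency of a thermoelectric generator leg with
# (temperature-averaged) figure of merit ZT between hot side Th and
# cold side Tc:
#   eta_max = (1 - Tc/Th) * (sqrt(1 + ZT) - 1) / (sqrt(1 + ZT) + Tc/Th)
# The first factor is the Carnot efficiency.

def eta_max(zt, th, tc):
    s = math.sqrt(1.0 + zt)
    return (1.0 - tc / th) * (s - 1.0) / (s + tc / th)

th, tc = 500.0, 300.0   # illustrative operating temperatures, K
for zt in (0.5, 1.0, 3.0, 10.0):
    print(f"ZT = {zt:4.1f}: eta_max = {100 * eta_max(zt, th, tc):4.1f}% "
          f"(Carnot {100 * (1 - tc / th):.0f}%)")
```

Even at ZT = 10, only a little over half of the Carnot efficiency is recovered at these temperatures, which puts the very high ZT values discussed next into perspective.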
There is no theoretical upper limit to ZT, and as ZT approaches infinity, the thermoelectric efficiency approaches the Carnot limit. However, until recently no known thermoelectrics had a ZT>3. In 2019, researchers reported a material with an approximate ZT between 5 and 6. As of 2010, thermoelectric generators serve application niches where efficiency and cost are less important than reliability, light weight, and small size. Internal combustion engines capture 20–25% of the energy released during fuel combustion. Increasing the conversion rate can increase mileage and provide more electricity for on-board controls and creature comforts (stability controls, telematics, navigation systems, electronic braking, etc.). It may be possible to shift the energy draw from the engine (in certain cases) to the electrical load in the car, e.g., electrical power steering or electrical coolant pump operation. Cogeneration power plants use the heat produced during electricity generation for alternative purposes; this is more profitable in industries with high amounts of waste energy. Thermoelectrics may find applications in such systems or in solar thermal energy generation. See also Batteryless radio Pyroelectric effect Thermionic converter References Bibliography External links TE Modules Application Tips and Hints The Seebeck Coefficient Materials for Thermoelectric Devices (4th chapter of Martin Wagner dissertation) New material breaks world record for turning heat into electricity Thermoelectricity Materials science Energy conversion
Thermoelectric materials
Physics,Materials_science,Engineering
12,009
9,646,527
https://en.wikipedia.org/wiki/Disodium%20phosphate
Disodium phosphate (DSP), or disodium hydrogen phosphate, or sodium phosphate dibasic, is an inorganic compound with the chemical formula Na2HPO4. It is one of several sodium phosphates. The salt is known in anhydrous form as well as hydrates Na2HPO4·nH2O, where n is 2, 7, 8, and 12. All are water-soluble white powders. The anhydrous salt is hygroscopic. The pH of a disodium hydrogen phosphate water solution is between 8.0 and 11.0, meaning it is moderately basic: HPO42− + H2O ⇌ H2PO4− + OH− Production and reactions It can be generated by neutralization of phosphoric acid with sodium hydroxide: H3PO4 + 2 NaOH → Na2HPO4 + 2 H2O Industrially, it is prepared in a two-step process by treating dicalcium phosphate with sodium bisulfate, which precipitates calcium sulfate: CaHPO4 + NaHSO4 → NaH2PO4 + CaSO4 In the second step, the resulting solution of monosodium phosphate is partially neutralized: NaH2PO4 + NaOH → Na2HPO4 + H2O Uses It is used in conjunction with trisodium phosphate in foods and in water softening treatment. In foods, it is used to adjust pH. Its presence prevents coagulation in the preparation of condensed milk. Similarly, it is used as an anti-caking additive in powdered products. It is used in desserts and puddings, e.g. Cream of Wheat to quicken cook time, and Jell-O Instant Pudding for thickening. In water treatment, it retards calcium scale formation. It is also found in some detergents and cleaning agents. Heating solid disodium phosphate gives the useful compound tetrasodium pyrophosphate: 2 Na2HPO4 → Na4P2O7 + H2O Laxative Monobasic and dibasic sodium phosphate are used as a saline laxative to treat constipation or to clean the bowel before a colonoscopy. References External links Sodium compounds Phosphates Edible thickening agents
Disodium phosphate
Chemistry
385
33,208,041
https://en.wikipedia.org/wiki/P-adic%20quantum%20mechanics
p-adic quantum mechanics is a collection of related research efforts in quantum physics that replace real numbers with p-adic numbers. Historically, this research was inspired by the discovery that the Veneziano amplitude of the open bosonic string, which is calculated using an integral over the real numbers, can be generalized to the p-adic numbers. This observation initiated the study of p-adic string theory. Another approach considers particles in a p-adic potential well, with the goal of finding solutions with smoothly varying complex-valued wave functions. Alternatively, one can consider particles in p-adic potential wells and seek p-adic valued wave functions, in which case the problem of the probabilistic interpretation of the p-adic valued wave function arises. As there does not exist a suitable p-adic Schrödinger equation, path integrals are employed instead. Some one-dimensional systems have been studied by means of the path integral formulation, including the free particle, the particle in a constant field, and the harmonic oscillator. References External links P-adic numbers Quantum mechanics String theory
P-adic quantum mechanics
Physics,Astronomy,Mathematics
233
23,586,028
https://en.wikipedia.org/wiki/Inertia%20damper
An inertia damper is a device that counters vibration using the effects of inertia and other forces and motion. The damper does not negate the forces but either absorbs or redirects them by other means. For example, a large and heavy suspended body may be used to absorb several short-duration large forces and to reapply those forces as a smaller force over a longer period. Real-world applications and devices Inertial compensators are used in simulators or rides, making them more realistic by creating artificial sensations of acceleration and other movement. The Disneyland ride “Star Tours: The Adventure Continues” is a fair example of this principle. There are many types of physical devices that can act as inertia dampers: Stockbridge damper - absorbs resonant wave motions in wires and support cables; seen on high-voltage power lines. Shock absorber - motion is redirected as heating of viscous oil forced through a restrictive passage. Inerter (mechanical networks) - a mechanical analog of an electrical capacitor. Rotary damper - rotary motion is dissipated as heat in a highly viscous fluid or gel. It may use a smooth-surfaced rotating cylinder and a smooth-surfaced stationary interior wall with fluid/gel between them. For more forceful motion absorption and higher surface area, a paddle wheel or toothed gear is used, with a similarly ribbed or studded stationary interior wall to more forcefully grip the fluid/gel. See also References Force Mass
Inertia damper
Physics,Mathematics
303
27,884,781
https://en.wikipedia.org/wiki/ISO%20657
ISO 657 (hot-rolled steel sections) is an ISO standard that specifies the tolerances for hot-finished circular, square and rectangular structural hollow sections and gives the dimensions and sectional properties for a range of standard sizes. This first edition as an International Standard constitutes a technical revision of ISO Recommendation R 657-1:1968. ISO 657 consists of 21 parts covering various shapes of sections. ISO 657-1 specifies the dimensions of hot-rolled equal-leg angles. Amendments ISO 657-2:1989 ISO 657-5:1976 ISO 657-11:1980 ISO 657-14:2000 Revisions ISO/DIS 12633-2 ISO 657-14:1982 ISO 657-15:1980 ISO 657-16:1980 ISO 657-18:1980 ISO 657-19:1980 ISO 657-21:1983 References ISO Catalogue in the ISO website
ISO 657
Technology
188
29,392,122
https://en.wikipedia.org/wiki/Computer%20Graphics%3A%20Principles%20and%20Practice
Computer Graphics: Principles and Practice is a textbook written by James D. Foley, Andries van Dam, Steven K. Feiner, John Hughes, Morgan McGuire, David F. Sklar, and Kurt Akeley and published by Addison–Wesley. First published in 1982 as Fundamentals of Interactive Computer Graphics, it is widely considered a classic standard reference book on the topic of computer graphics. It is sometimes known as the bible of computer graphics (due to its size). Editions First Edition The first edition, published in 1982 and titled Fundamentals of Interactive Computer Graphics, discussed the SGP library, which was based on ACM's SIGGRAPH CORE 1979 graphics standard, and focused on 2D vector graphics. Second Edition The second edition, published 1990, was completely rewritten and covered 2D and 3D raster and vector graphics, user interfaces, geometric modeling, anti-aliasing, advanced rendering algorithms and an introduction to animation. The SGP library was replaced by SRGP (Simple Raster Graphics Package), a library for 2D raster primitives and interaction handling, and SPHIGS (Simple PHIGS), a library for 3D primitives, which were specifically written for the book. Second Edition in C In the second edition in C, all examples were converted from Pascal to C. New implementations for the SRGP and SPHIGS graphics packages in C were also provided. Third Edition A third edition covering modern GPU architecture was released in July 2013. Examples in the third edition are written in C++, C#, WPF, GLSL, OpenGL, G3D, or pseudocode. Awards The book has won a Front Line Award (Hall of Fame) in 1998. References 1990 non-fiction books 1995 non-fiction books Engineering textbooks Computer books Computer science books Addison-Wesley books Computer graphics
Computer Graphics: Principles and Practice
Technology
370
1,636,584
https://en.wikipedia.org/wiki/Ilin%20Island%20cloudrunner
The Ilin Island cloudrunner (Crateromys paulus) is a cloud rat known from a single specimen purchased on Ilin Island in the Philippines. It is called siyang by the Taubuwid Mangyan. It is a fluffy-coated, bushy-tailed rat and may have emerged from tree hollows at night to feed on fruits and leaves. The specimen, collected on 4 April 1953, was presented to the National Museum of Natural History in Washington, D.C. The island's forests have been destroyed by human activity. The cloudrunner is among the 25 “most wanted lost” species that are the focus of Re:wild’s “Search for Lost Species” initiative. As there is no proof that the single specimen originated on Ilin Island, searches are now focussed on nearby Mindoro. Hopes that it may be rediscovered have prompted the IUCN to improve its status from possibly extinct (EX?) in 1994 to Critically Endangered (CR) in 1996, before the current Data Deficient (DD) listing from 2008. References Flannery, Tim F., and Peter Schouten. A gap in nature: discovering the world's extinct animals. Melbourne: Text Pub, 2001. Print. Musser, G. G. and M. D. Carleton. 2005. Superfamily Muroidea. pp. 894–1531 in Mammal Species of the World: a Taxonomic and Geographic Reference. D. E. Wilson and D. M. Reeder eds. Johns Hopkins University Press, Baltimore. Specific Crateromys Rodents of the Philippines Mammals described in 1981 Endemic fauna of the Philippines Fauna of Mindoro Species known from a single specimen
Ilin Island cloudrunner
Biology
337
14,368,827
https://en.wikipedia.org/wiki/Ideomotor%20apraxia
Ideomotor Apraxia, often IMA, is a neurological disorder characterized by the inability to correctly imitate hand gestures and voluntarily mime tool use, e.g. pretend to brush one's hair. The ability to spontaneously use tools, such as brushing one's hair in the morning without being instructed to do so, may remain intact, but is often lost. The general concept of apraxia and the classification of ideomotor apraxia were developed in Germany in the late 19th and early 20th centuries by the work of Hugo Liepmann, Adolph Kussmaul, Arnold Pick, Paul Flechsig, Hermann Munk, Carl Nothnagel, Theodor Meynert, and linguist Heymann Steinthal, among others. Ideomotor apraxia was classified as "ideo-kinetic apraxia" by Liepmann due to the apparent dissociation of the idea of the action with its execution. The classifications of the various subtypes are not well defined at present, however, owing to issues of diagnosis and pathophysiology. Ideomotor apraxia is hypothesized to result from a disruption of the system that relates stored tool use and gesture information with the state of the body to produce the proper motor output. This system is thought to be related to the areas of the brain most often seen to be damaged when ideomotor apraxia is present: the left parietal lobe and the premotor cortex. Little can be done at present to reverse the motor deficit seen in ideomotor apraxia, although the extent of dysfunction it induces is not entirely clear. Signs and Symptoms Ideomotor apraxia (IMA) impinges on one's ability to carry out common, familiar actions on command, such as waving goodbye. Persons with IMA exhibit a loss of ability to carry out motor movements, and may show errors in how they hold and move the tool in attempting the correct function. One of the defining symptoms of ideomotor apraxia is the inability to pantomime tool use. As an example, if a normal individual were handed a comb and instructed to pretend to brush his hair, he would grasp the comb properly and pass it through his hair. If this were repeated in a patient with ideomotor apraxia, the patient may move the comb in big circles around his head, hold it upside-down, or perhaps try and brush his teeth with it. The error may also be temporal in nature, such as brushing exceedingly slowly. The other characteristic symptom of ideomotor apraxia is the inability to imitate hand gestures, meaningless or meaningful, on request; a meaningless hand gesture is something like having someone make a ninety-degree angle with his thumb and placing it under his nose, with his hand in the plane of his face. This gesture has no meaning attached to it. In contrast, a meaningful gesture is something like saluting or waving goodbye. An important distinction here is that all of the above refer to actions that are consciously and voluntarily initiated. That is to say that a person is specifically asked to either imitate what someone else is doing or is given verbal instructions, such as "wave goodbye." People with ideomotor apraxia will know what they are supposed to do, e.g. they will know to wave goodbye and what their arm and hand should do to accomplish it, but will be unable to execute the motion correctly. This voluntary type of action is distinct from spontaneous actions. Ideomotor apraxia patients may still retain the ability to perform spontaneous motions; if someone they know leaves the room, for instance, they may be able to wave goodbye to that person, despite being unable to do so at request. 
The ability to perform this sort of spontaneous action is not always retained, however; some affected individuals lose this capability as well. The recognition of meaningful gestures, e.g. understanding what waving goodbye means when it is seen, seems to be unaffected by ideomotor apraxia. It has also been shown that individuals with ideomotor apraxia may have some deficits in general spontaneous movements. Apraxia patients appear to be unable to tap their fingers as quickly as a control group, with a lower maximum tapping rate correlated with more severe apraxia. It has also been demonstrated that apraxic patients are slower to point at a target light when they do not have sight of their hand, as compared with healthy patients under the same conditions. The two groups did not differ when they could see their hands. The speed and accuracy of grasping objects also appear unaffected by ideomotor apraxia. Patients with ideomotor apraxia appear to be much more reliant on visual input when conducting movements than nonapraxic individuals. Cause The most common cause of ideomotor apraxia is a unilateral ischemic lesion to the brain, which is damage to one hemisphere of the brain due to a disruption of the blood supply, as in a stroke. There are a variety of brain areas where lesions have been correlated with ideomotor apraxia. Initially it was believed that damage to the subcortical white matter tracts, the axons that extend down from the cell bodies in the cerebral cortex, was primarily responsible for this form of apraxia. Lesions to the basal ganglia may also be responsible, although there is considerable debate as to whether damage to the basal ganglia alone would be sufficient to induce apraxia. Lesions to these lower brain structures have not, however, been shown to be more prevalent in apraxic patients. In fact, these types of lesions are more common in nonapraxic patients. The lesions most associated with ideomotor apraxia are to the left parietal and premotor areas. Patients with lesions to the supplementary motor area have also presented with ideomotor apraxia. Lesions to the corpus callosum can also induce apraxia-like symptoms, with varying effects on the two hands, although this has not been thoroughly studied. In addition to ischemic lesions to the brain, ideomotor apraxia has also been seen in neurodegenerative disorders such as Parkinson's disease, Alzheimer's disease, Huntington's disease, corticobasal degeneration, and progressive supranuclear palsy. Pathophysiology The prevailing hypothesis for the pathophysiology of ideomotor apraxia is that the various brain lesions associated with the disorder somehow disrupt portions of the praxis system. The praxis system comprises the brain regions involved in taking in processed sensory input, accessing stored information about tools and gestures, and translating these into a motor output. Buxbaum et al. have proposed that the praxis system involves three distinct parts: stored gesture representations, stored tool knowledge, and a "dynamic body schema." The first two store information about the representation of gestures in the brain and the characteristic movements of tools. The body schema is a brain model of the body and its position in space. The praxis system relates the stored information about a movement type to how the dynamic, i.e. changing, body representation varies as the movement progresses. 
It is still not clear how this system maps onto the brain itself, although some research has given indications of possible locations for certain portions. The dynamic body schema has been suggested to be localized in the superior posterior parietal cortex. There is also evidence that the inferior parietal lobule may be the locus for storage of the characteristic movements of a tool. This area showed inverse activation to the cerebellum in a study of tool use and tool mime. If the connections between these areas become severed, the praxis system would be disrupted, possibly resulting in the symptoms observed in ideomotor apraxia. Diagnosis There is no one definitive test for ideomotor apraxia; several are used clinically to make an ideomotor apraxia diagnosis. The criteria for a diagnosis are not entirely consistent among clinicians, whether for apraxia in general or for distinguishing its subtypes. Almost all the tests laid out here that enable a diagnosis of ideomotor apraxia share a common feature: assessment of the ability to imitate gestures. A test developed by Georg Goldenberg uses imitation assessment of 10 gestures. The tester demonstrates the gesture to the patient and rates whether the gesture was correctly imitated. If the first attempt to imitate the gesture is unsuccessful, the gesture is presented a second time; a higher score is given for correct imitation on the first trial than for the second, and the lowest score is given for not correctly imitating the gesture. The gestures used here are all meaningless, such as placing the hand flat on the top of the head or flat outward with the fingers towards the ear. This test is specifically designed for ideomotor apraxia. The main variation from this is in the type and number of gestures used. One test uses twenty-four movements with three trials for each and a trial-based scoring system similar to the Goldenberg protocol. The gestures here are also copied by the patient from the tester and are divided into finger movements, e.g. making a scissor movement with the forefinger and middle finger, and hand and arm movements, e.g. doing a salute. This protocol combines meaningful and meaningless gestures. Another test uses five meaningful gestures, such as waving goodbye or scratching one's head, and five meaningless gestures. This test additionally differs in using a verbal command to initiate the movement and in distinguishing between accurate performance and inaccurate but recognizable performance. One test utilizes tools, including a hammer and a key, with both a verbal command to use the tools and the patient copying the tester's demonstrated use of the tools. These tests have been shown to be individually unreliable, with considerable variability between the diagnoses delivered by each. If a battery of tests is used, however, the reliability and validity may be improved. It is also highly advisable to include assessments of how the patient performs activities in daily life. One of the newer tests that has been developed may provide greater reliability without relying on a multitude of tests. It combines three types of tool use with imitation of gestures. The tool use section includes having the patient pantomime use with no tool present, with visual contact with the tool, and finally with tactile contact with the tool. This test screens for ideational and ideomotor apraxia, with the second portion aimed specifically at ideomotor apraxia. 
One study showed great potential for this test, but further studies are needed to reproduce these results before this can be said with confidence. This disorder often occurs with other degenerative neurological disorders such as Parkinson's disease and Alzheimer's disease. These comorbidities can make it difficult to pick out the specific features of ideomotor apraxia. The important point in distinguishing ideomotor apraxia is that basic motor control is intact; it is a high-level dysfunction involving tool use and gesturing. Additionally, clinicians must be careful to exclude aphasia as a possible diagnosis, as, in the tests involving verbal commands, an aphasic patient could fail to perform a task properly simply because they do not understand the directions. Management Given the complexity of the medical problems facing people with ideomotor apraxia, who are usually experiencing a multitude of other problems, it is difficult to ascertain the impact that it has on their ability to function independently. Deficits due to Parkinson's or Alzheimer's disease could very well be sufficient to mask or make irrelevant difficulties arising from the apraxia. Some studies have shown ideomotor apraxia to independently diminish the patient's ability to function on their own. The general consensus seems to be that ideomotor apraxia does have a negative impact on independence in that it can reduce an individual's ability to manipulate objects, as well as diminishing the capacity for mechanical problem solving, owing to the inability to access information about how the familiar parts of an unfamiliar system function. A small subset of patients has been known to spontaneously recover from apraxia; this is rare, however. One possible hope is the phenomenon of hemispheric shift, whereby functions normally performed by one hemisphere can shift to the other in the event that the first is damaged. This seems to require, however, that some portion of the function be associated with the other hemisphere to begin with. There is dispute over whether the right hemisphere of the cortex is involved at all in the praxis system, as some evidence from patients with severed corpus callosums indicates it may not be. Although there is little that can be done to substantially reverse the effects of ideomotor apraxia, occupational therapy can be effective in helping patients regain some functional control. Sharing the same approach used in treating ideational apraxia, this is achieved by breaking a daily task (e.g. combing hair) into separate components and teaching each distinct component individually. With ample repetition, proficiency in these movements can be acquired, and the components can eventually be combined into a single pattern of movement. References Further reading External links Apraxia An Intervention Guide for Occupational Therapists Neurological disorders Complications of stroke Motor control
Ideomotor apraxia
Biology
2,782
15,290,960
https://en.wikipedia.org/wiki/Mottle
Mottle is a pattern of irregular marks, spots, streaks, blotches or patches of different shades or colours. It is commonly used to describe the surface of plants or the skin of animals. In plants, mottling usually consists of yellowish spots, and is usually a sign of disease or malnutrition. Many plant viruses cause mottling, some examples being: Tobacco vein mottling virus Bean pod mottle virus Mottling is sometimes used to describe uneven, discolored patches on the skin of humans as a result of cutaneous ischemia (lowered blood flow to the surfaces of the skin) or Herpes zoster infections. The medical term for mottled skin is dyschromia. Although this is not always the case, mottling can occur in the dying patient and commonly indicates that the end of life is near. Mottling usually occurs in the extremities (lower first) and progresses upward as cardiac function declines and circulation throughout the body worsens. In animals, mottling may be a sign of disease, but may also be a hereditary trait, such as that seen with the champagne and leopard complex genes in horses. Mottling can also refer to discoloration in processed food, such as butter. In geology, mottled refers to a patchy or blotchy texture of alteration or interbedding, commonly found in limestone and commonly caused by bioturbation. Mottling can also refer to an undesirable defect which can occur with effect coatings, most obvious on light metallic finishes. The total color impression shows irregular areas of lightness variations. These "patches" are usually visually evaluated and described as a mottling effect. Some also feel that it reminds them of clouds. This effect is especially noticeable on large body panels. It can be caused by the coating formulation, as well as by variations in the application process. For example, disorientation of the metallic flakes or film thickness variations of the basecoat can lead to various mottle sizes, resulting in a non-uniform appearance. The visual perception of mottling is dependent on the viewing distance: large mottles can be seen in far-distance evaluation, while small mottles are more noticeable in close-up evaluation. The visual evaluation of mottling is very subjective, as it depends on the illumination conditions, the observing distance and the viewing angle. In graphics printing, mottling refers to an uneven coloration resulting from letterpress printing on textured papers, mainly in larger colored surfaces. Due to the uneven surface, and unlike in offset printing, not all fibers of the paper are evenly saturated with color. Measurement The irregular lightness variations caused by mottling can be objectively measured with specially made instruments. These instruments simulate visual evaluation under different observing angles and characterize clouds/mottles by their size and visibility. Small to large mottles are measured under three observing angles, and the scan length can usually be varied from 10 to 100 cm. The measurement results are independent of the color and curvature of the surface and thus can be considered objective. The specific measurement process for one such instrument is as follows. It first optically scans the surface and measures the lightness variations. The specimen is illuminated with a white-light LED at a 15° angle and the lightness is detected under three viewing angles to simulate visual evaluation under different observing conditions: 15°, 45° and 60°, measured from the specular reflection. 
The mottling meter is rolled across the surface for a defined distance of 10 to 100 cm and measures the lightness variations point by point. The measurement signal is divided via mathematical filter functions into six different size ranges, and a rating value is calculated for each angle and mottle size. The higher the value, the more visible the mottling effect. The measured values are displayed in a graph showing the mottle size on the x-axis and the rating value on the y-axis. Thus, target values for small and large mottle sizes can be established for paint batch approval as well as process control. Military Military battledress often uses a mottle pattern, such as Frog Skin and Flecktarn. References Plant pathogens and diseases Human skin color Color
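To make the band-splitting idea in the Measurement section concrete, here is a minimal illustrative sketch in Python. The instrument's actual filter functions are proprietary and not described in the article; the moving-average band-pass filters, band sizes, and scaling factor below are all assumptions chosen only to demonstrate the principle of computing one rating value per mottle-size range.

import numpy as np

def band_ratings(lightness, band_sizes=(2, 4, 8, 16, 32, 64), scale=100.0):
    """Split a 1D lightness signal into mottle-size bands and rate each band.

    band_sizes are filter window lengths in samples, from small to large
    mottles; higher ratings mean more visible mottling in that band.
    """
    ratings = {}
    previous = np.asarray(lightness, dtype=float)
    for size in band_sizes:
        kernel = np.ones(size) / size
        smoothed = np.convolve(previous, kernel, mode="same")
        band = previous - smoothed          # structure smaller than `size`
        ratings[size] = scale * band.std()  # visibility rating for this band
        previous = smoothed                 # pass the residue to larger bands
    return ratings

# Example: small-scale noise plus a long-wavelength "cloud" superimposed.
x = np.linspace(0, 10 * np.pi, 1000)
signal = 0.5 * np.sin(x) + 0.05 * np.random.default_rng(0).normal(size=1000)
print(band_ratings(signal))  # large bands dominate for this signal

A real instrument would additionally repeat this per viewing angle (15°, 45°, 60°) and calibrate the scale against visual assessments.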
Mottle
Biology
863
35,258,497
https://en.wikipedia.org/wiki/Telephone%20number%20%28mathematics%29
In mathematics, the telephone numbers or the involution numbers form a sequence of integers that count the ways $n$ people can be connected by person-to-person telephone calls. These numbers also describe the number of matchings (the Hosoya index) of a complete graph on $n$ vertices, the number of permutations on $n$ elements that are involutions, the sum of absolute values of coefficients of the Hermite polynomials, the number of standard Young tableaux with $n$ cells, and the sum of the degrees of the irreducible representations of the symmetric group. Involution numbers were first studied in 1800 by Heinrich August Rothe, who gave a recurrence equation by which they may be calculated, giving the values (starting from $n = 0$) 1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, ... Applications John Riordan provides the following explanation for these numbers: suppose that $n$ people subscribe to a telephone service that can connect any two of them by a call, but cannot make a single call connecting more than two people. How many different patterns of connection are possible? For instance, with three subscribers, there are three ways of forming a single telephone call, and one additional pattern in which no calls are being made, for a total of four patterns. For this reason, the numbers counting how many patterns are possible are sometimes called the telephone numbers. Every pattern of pairwise connections between people defines an involution, a permutation of the people that is its own inverse. In this permutation, each two people who call each other are swapped, and the people not involved in calls remain fixed in place. Conversely, every possible involution has the form of a set of pairwise swaps of this type. Therefore, the telephone numbers also count involutions. The problem of counting involutions was the original combinatorial enumeration problem studied by Rothe in 1800 and these numbers have also been called involution numbers. In graph theory, a subset of the edges of a graph that touches each vertex at most once is called a matching. Counting the matchings of a given graph is important in chemical graph theory, where the graphs model molecules and the number of matchings is the Hosoya index. The largest possible Hosoya index of an $n$-vertex graph is given by the complete graphs, for which any pattern of pairwise connections is possible; thus, the Hosoya index of a complete graph on $n$ vertices is the same as the $n$-th telephone number. A Ferrers diagram is a geometric shape formed by a collection of $n$ squares in the plane, grouped into a polyomino with a horizontal top edge, a vertical left edge, and a single monotonic chain of edges from top right to bottom left. A standard Young tableau is formed by placing the numbers from 1 to $n$ into these squares in such a way that the numbers increase from left to right and from top to bottom throughout the tableau. According to the Robinson–Schensted correspondence, permutations correspond one-for-one with ordered pairs of standard Young tableaux. Inverting a permutation corresponds to swapping the two tableaux, and so the self-inverse permutations correspond to single tableaux, paired with themselves. Thus, the telephone numbers also count the number of Young tableaux with $n$ squares. In representation theory, the Ferrers diagrams correspond to the irreducible representations of the symmetric group of permutations, and the Young tableaux with a given shape form a basis of the irreducible representation with that shape. Therefore, the telephone numbers give the sum of the degrees of the irreducible representations. 
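Rothe's recurrence (described below under Mathematical properties) makes these numbers easy to compute. The following short Python sketch is illustrative and not part of the original article; it computes $T(n)$ from the recurrence $T(n) = T(n-1) + (n-1)T(n-2)$ and cross-checks the involution interpretation by brute force for small $n$.

from itertools import permutations

def telephone(n: int) -> int:
    """Return T(n) via Rothe's recurrence T(n) = T(n-1) + (n-1)*T(n-2)."""
    a, b = 1, 1  # T(0), T(1)
    for k in range(2, n + 1):
        a, b = b, b + (k - 1) * a
    return b if n >= 1 else a

def count_involutions(n: int) -> int:
    """Brute-force count of self-inverse permutations of range(n)."""
    return sum(1 for p in permutations(range(n))
               if all(p[p[i]] == i for i in range(n)))

print([telephone(n) for n in range(10)])
# [1, 1, 2, 4, 10, 26, 76, 232, 764, 2620]
assert all(telephone(n) == count_involutions(n) for n in range(8))

The brute-force check is factorial-time and only feasible for small n; the recurrence itself runs in linear time.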
In the mathematics of chess, the telephone numbers count the number of ways to place $n$ rooks on an $n \times n$ chessboard in such a way that no two rooks attack each other (the so-called eight rooks puzzle), and in such a way that the configuration of the rooks is symmetric under a diagonal reflection of the board. Via the Pólya enumeration theorem, these numbers form one of the key components of a formula for the overall number of "essentially different" configurations of mutually non-attacking rooks, where two configurations are counted as essentially different if there is no symmetry of the board that takes one into the other. Mathematical properties Recurrence The telephone numbers satisfy the recurrence relation $T(n) = T(n-1) + (n-1)\,T(n-2)$, with $T(0) = T(1) = 1$, first published in 1800 by Heinrich August Rothe, by which they may easily be calculated. One way to explain this recurrence is to partition the $T(n)$ connection patterns of the $n$ subscribers to a telephone system into the patterns in which the first person is not calling anyone else, and the patterns in which the first person is making a call. There are $T(n-1)$ connection patterns in which the first person is disconnected, explaining the first term of the recurrence. If the first person is connected to someone, there are $n-1$ choices for that person, and $T(n-2)$ patterns of connection for the remaining $n-2$ people, explaining the second term of the recurrence. Summation formula and approximation The telephone numbers may be expressed exactly as a summation $T(n) = \sum_{k=0}^{\lfloor n/2 \rfloor} \binom{n}{2k} (2k-1)!!$. In each term of this sum, $k$ gives the number of matched pairs, the binomial coefficient $\binom{n}{2k}$ counts the number of ways of choosing the $2k$ elements to be matched, and the double factorial $(2k-1)!! = (2k-1)(2k-3)\cdots 1$ is the product of the odd integers up to its argument and counts the number of ways of completely matching the $2k$ selected elements. It follows from the summation formula and Stirling's approximation that, asymptotically, $T(n) \sim \left(\frac{n}{e}\right)^{n/2} \frac{e^{\sqrt{n}}}{(4e)^{1/4}}$. Generating function The exponential generating function of the telephone numbers is $\sum_{n=0}^{\infty} \frac{T(n)}{n!} x^n = \exp\!\left(\frac{x^2}{2} + x\right)$. In other words, the telephone numbers may be read off as the coefficients of the Taylor series of $\exp(x^2/2 + x)$ and, in particular, the $n$-th telephone number is the value at zero of the $n$-th derivative of this function. The exponential generating function can be derived in a number of ways; for example, taking the recurrence relation for $T(n)$ above, multiplying it by $x^{n-1}/(n-1)!$, and summing over $n \geq 1$ gives $G'(x) = (1 + x)\,G(x)$ for $G(x) = \sum_{n \geq 0} T(n) x^n / n!$. The general solution to this differential equation is $G(x) = C \exp(x^2/2 + x)$, and $G(0) = T(0) = 1$ shows that the constant of proportionality is 1. This function is closely related to the exponential generating function of the Hermite polynomials, which are the matching polynomials of the complete graphs. The sum of absolute values of the coefficients of the $n$-th (probabilist's) Hermite polynomial $\mathit{He}_n(x)$ is the $n$-th telephone number, and the telephone numbers can also be realized as certain special values of the Hermite polynomials: $T(n) = \mathit{He}_n(i)/i^n$. Prime factors For large values of $n$, the $n$-th telephone number is divisible by a large power of two, roughly $2^{n/4}$. More precisely, the 2-adic order (the number of factors of two in the prime factorization) of $T(4k)$ and of $T(4k+1)$ is $k$; for $T(4k+2)$ it is $k+1$, and for $T(4k+3)$ it is $k+2$. For any prime number $p$, one can test whether there exists a telephone number divisible by $p$ by computing the recurrence for the sequence of telephone numbers, modulo $p$, until either reaching zero or detecting a cycle. The primes that divide at least one telephone number include 2, 5, 13, 19, and 29. The odd primes in this sequence have been called inefficient. Each of them divides infinitely many telephone numbers. References Integer sequences Enumerative combinatorics Factorial and binomial topics Matching (graph theory) Permutations
Telephone number (mathematics)
Mathematics
1,441
40,351,890
https://en.wikipedia.org/wiki/Echitamidine
Echitamidine is an indole alkaloid isolated from Alstonia boonei. Its laboratory synthesis has been reported. References Indolizidines Tryptamine alkaloids Carbazoles Heterocyclic compounds with 5 rings Methyl esters Secondary alcohols
Echitamidine
Chemistry
59
33,863,235
https://en.wikipedia.org/wiki/Kozhikode%20Light%20Metro
Kozhikode Light Metro is a proposed Light Metro system for the city of Kozhikode, in India. In 2010, the State government explored the possibility of implementing a metro rail project for Kozhikode city and its suburbs. The proposal was to have a corridor connecting Meenchanda to the Kozhikode Medical College Hospital through the heart of the city. An inception report on the detailed feasibility study of implementing a Mass Rapid Transport System (MRTS) and Light Rail Transit System (LRTS) in the city was submitted by a Bangalore-based consultant, Wilber Smith. However, that project was scrapped and replaced by the Kozhikode Monorail project. The State Cabinet then decided to form a special purpose vehicle (SPV) to implement monorail projects in Kozhikode and Thiruvananthapuram, and administrative sanction was given in October 2012. The state government issued orders entrusting the Thiruvananthapuram Monorail project to the KMCL on 26 November 2012; it had handed over the Kozhikode Monorail project to the KMCL prior to that. On 12 June 2013, the State Cabinet gave clearance for an agreement to be signed between KMCL and DMRC that would make the latter the general consultant for the monorail projects in Kozhikode and Thiruvananthapuram. The DMRC was to receive a consultancy fee of 3.25% of the Rs. 55.81 billion cost (Rs. 35.90 billion for Thiruvananthapuram and Rs. 19.91 billion for Kozhikode). The agreement was signed on 19 June 2013. However, due to cost overruns and the cold response from bidders, the project was put on hold; Bombardier Transportation was the only bidder for the project. The project was later scrapped and a Light Metro was proposed instead. History The proposal The Union Urban Development Ministry decided to consider the proposal for a metro in Kozhikode after the success of the Delhi Metro and signed up with the Delhi Metro Rail Corporation for drawing up the detailed project report (DPR) of the Rs. 27.71 billion Kozhikode metro transport project, as a feasibility study for the introduction of suburban services in Kozhikode city. The Ministry decided to bear 50% of the cost of the preparation of the DPR for the city, which comes under the population cut-off bracket. The preliminary feasibility study had been carried out by the National Transportation Planning and Research Centre (NATPAC) in association with the Kerala Road Fund Board in December 2008. Based on this feasibility report, the Board entrusted Wilber Smith to conduct the study in June 2009. The NATPAC had already submitted a metro rail project covering a total distance of 32.6 km from Karipur to the Kozhikode Medical College. The cost of the project was estimated at Rs. 27.71 billion and it was expected to be completed within five years. The monorail project which replaced the metro rail project was estimated to cost Rs 1,991 crore; the lone bid received, from a Bombardier consortium, was almost double that estimate. The project was scrapped and the Light Metro was approved. Proposed route As per the proposal for the metro, it would start from Karipur Airport, touching Ramanattukara, Meenchanda, Mini-Bypass and Arayadathupalam, and culminate at the Medical College. An estimated 2,083,000 people would benefit from the new transportation system by 2031. The project, which could be partly finished within three years, was projected to be economically and technically feasible. 
However, in the detailed project report prepared by the Delhi Metro Rail Corporation, the alignment for the Kozhikode Monorail is retained for the Light Metro project. The funding The Union government was in favour of implementing the project with private participation, ruling out its own financial involvement. The Ministry of Urban Development and the Planning Commission were also against government investment in the project, and refused to accept it as a project in line with the Delhi Metro and Chennai Metro. The political rivalry between the earlier Left Front government in Kerala and the UPA government at the Centre was a major reason for these developments and the slowdown in the project. The change in government in Kerala changed that scenario, making the Kozhikode Metro one of the top priorities of the UDF government. But later, so as not to affect the Kochi Metro project, the Kerala cabinet under the Chief Ministership of Oommen Chandy decided to give clearance only for the Kozhikode Monorail project, replacing the metro rail project. The newly proposed Light Metro is to be implemented as a government initiative, with viability gap funding expected from the central and state governments. The remaining funds are expected to be sourced internally and externally from competent agencies. Proposal The project was proposed to cover a distance of 14.2 km with 15 stations, from Medical College Hostel to Meenchanda. The car depot was proposed to be located east of the Medical College Hostel station, on vacant land owned by the government. The monorail was proposed to be built in two phases: the first from Medical College to Mananchira and the second from Mananchira to Meenchantha. Of the land required for the project, about 80% was government-owned. Stations The Kozhikode monorail was proposed to have a total of 15 stations. Planned future expansion The government had planned to extend the monorail to Civil Station and West Hill. An additional stretch connecting Malaparamba and Civil Station would have been required. Rolling stock Each train will be made up of three coaches in the formation leading car / intermediate car / leading car. The length and width of the cars will be 18 m and 2.8 m respectively. The total length of the train will be approximately 59.94 m. Each train has a capacity of approximately 800 passengers. The metro is designed to carry 30,000 passengers per hour. See also Kochi Metro Urban rail transit in India References The Hindu article. Calicut Metro, AGRE International. Economic Times article. Delhi Metro. External links Kozhikode Kozhikode Authority Indian Institute of Management Transport in Kozhikode Siemens Mobility projects Underground rapid transit in India 2011 establishments in Kerala Standard gauge railways in India
Kozhikode Light Metro
Technology,Engineering
1,281
1,747,763
https://en.wikipedia.org/wiki/Multigrid%20method
In numerical analysis, a multigrid method (MG method) is an algorithm for solving differential equations using a hierarchy of discretizations. They are an example of a class of techniques called multiresolution methods, very useful in problems exhibiting multiple scales of behavior. For example, many basic relaxation methods exhibit different rates of convergence for short- and long-wavelength components, suggesting these different scales be treated differently, as in a Fourier analysis approach to multigrid. MG methods can be used as solvers as well as preconditioners. The main idea of multigrid is to accelerate the convergence of a basic iterative method (known as relaxation, which generally reduces short-wavelength error) by a global correction of the fine grid solution approximation from time to time, accomplished by solving a coarse problem. The coarse problem, while cheaper to solve, is similar to the fine grid problem in that it also has short- and long-wavelength errors. It can also be solved by a combination of relaxation and appeal to still coarser grids. This recursive process is repeated until a grid is reached where the cost of direct solution there is negligible compared to the cost of one relaxation sweep on the fine grid. This multigrid cycle typically reduces all error components by a fixed amount bounded well below one, independent of the fine grid mesh size. The typical application for multigrid is in the numerical solution of elliptic partial differential equations in two or more dimensions. Multigrid methods can be applied in combination with any of the common discretization techniques. For example, the finite element method may be recast as a multigrid method. In these cases, multigrid methods are among the fastest solution techniques known today. In contrast to other methods, multigrid methods are general in that they can treat arbitrary regions and boundary conditions. They do not depend on the separability of the equations or other special properties of the equation. They have also been widely used for more-complicated non-symmetric and nonlinear systems of equations, like the Lamé equations of elasticity or the Navier-Stokes equations. Algorithm There are many variations of multigrid algorithms, but the common features are that a hierarchy of discretizations (grids) is considered. The important steps are: Smoothing – reducing high frequency errors, for example using a few iterations of the Gauss–Seidel method. Residual Computation – computing residual error after the smoothing operation(s). Restriction – downsampling the residual error to a coarser grid. Interpolation or prolongation – interpolating a correction computed on a coarser grid into a finer grid. Correction – Adding prolongated coarser grid solution onto the finer grid. There are many choices of multigrid methods with varying trade-offs between speed of solving a single iteration and the rate of convergence with said iteration. The 3 main types are V-Cycle, F-Cycle, and W-Cycle. These differ in which and how many coarse-grain cycles are performed per fine iteration. The V-Cycle algorithm executes one coarse-grain V-Cycle. F-Cycle does a coarse-grain V-Cycle followed by a coarse-grain F-Cycle, while each W-Cycle performs two coarse-grain W-Cycles per iteration. For a discrete 2D problem, F-Cycle takes 83% more time to compute than a V-Cycle iteration while a W-Cycle iteration takes 125% more. 
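The five steps listed above can be made concrete with a minimal sketch before the timing comparison continues. The following Python code is an illustrative toy, not a prescription from this article: it applies V-Cycles to the 1D Poisson equation -u'' = f with homogeneous Dirichlet boundaries, using standard textbook choices (weighted-Jacobi smoothing, full-weighting restriction, linear interpolation).

import numpy as np

def smooth(u, f, h, sweeps=3, omega=2/3):
    """Weighted-Jacobi relaxation for -u'' = f; damps short-wavelength error."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
            u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full weighting onto a grid with half as many intervals."""
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    return rc

def prolong(ec, n_fine):
    """Linear interpolation of the coarse correction back to the fine grid."""
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h):
    n = len(u)
    if n <= 3:                     # coarsest grid: one interior point, solve exactly
        u[1:-1] = 0.5 * h * h * f[1:-1]
        return u
    u = smooth(u, f, h)                          # pre-smoothing
    rc = restrict(residual(u, f, h))             # residual + restriction
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid correction
    u += prolong(ec, n)                          # prolongation + correction
    return smooth(u, f, h)                       # post-smoothing

n = 129                                          # 2**7 + 1 points
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                 # exact solution: sin(pi*x)
u = np.zeros(n)
for _ in range(10):                              # a few V-Cycles suffice
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))     # approaches the O(h^2) discretization error

Each V-Cycle reduces all error components by a roughly mesh-independent factor, which is the defining property described above.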
If the problem is set up in a 3D domain, then an F-Cycle iteration and a W-Cycle iteration take about 64% and 75% more time, respectively, than a V-Cycle iteration, ignoring overheads. Typically, W-Cycle produces convergence similar to F-Cycle. However, in cases of convection-diffusion problems with high Péclet numbers, W-Cycle can show superiority in its rate of convergence per iteration over F-Cycle. The choices of smoothing operators are extremely diverse, as they include Krylov subspace methods and can be preconditioned. Any geometric multigrid cycle iteration is performed on a hierarchy of grids and hence it can be coded using recursion. Since the function calls itself with smaller sized (coarser) parameters, the coarsest grid is where the recursion stops. In cases where the system has a high condition number, the correction procedure is modified such that only a fraction of the prolongated coarser grid solution is added onto the finer grid. Computational cost This approach has the advantage over other methods that it often scales linearly with the number of discrete nodes used. In other words, it can solve these problems to a given accuracy in a number of operations that is proportional to the number of unknowns. Assume that one has a differential equation which can be solved approximately (with a given accuracy) on a grid $k$ with a given number of grid points $N_k$. Assume furthermore that a solution on any grid $k$ may be obtained with a given effort $K N_k$ from a solution on the coarser grid $k+1$. Here, $\rho = N_{k+1}/N_k < 1$ is the ratio of grid points on "neighboring" grids and is assumed to be constant throughout the grid hierarchy, and $K$ is some constant modeling the effort of computing the result for one grid point. The following recurrence relation is then obtained for the effort $W_k$ of obtaining the solution on grid $k$: $W_k = W_{k+1} + K N_k$. And in particular, we find for the finest grid $N_1$ that $W_1 = W_2 + K N_1$. Combining these two expressions (and using $N_k = \rho^{k-1} N_1$) gives $W_1 = K N_1 \sum_{p=0}^{n} \rho^p$. Using the geometric series, we then find (for finite $n$) $W_1 < K N_1 \frac{1}{1-\rho}$, that is, a solution may be obtained in $O(N)$ time. One exception to this $O(N)$ scaling is W-cycle multigrid used on a 1D problem, which results in $O(N \log N)$ complexity. Multigrid preconditioning A multigrid method with an intentionally reduced tolerance can be used as an efficient preconditioner for an external iterative solver. The solution may still be obtained in $O(N)$ time, as in the case where the multigrid method is used as a solver. Multigrid preconditioning is used in practice even for linear systems, typically with one cycle per iteration, e.g., in Hypre. Its main advantage versus a purely multigrid solver is particularly clear for nonlinear problems, e.g., eigenvalue problems. If the matrix of the original equation or an eigenvalue problem is symmetric positive definite (SPD), the preconditioner is commonly constructed to be SPD as well, so that the standard conjugate gradient (CG) iterative methods can still be used. Such imposed SPD constraints may complicate the construction of the preconditioner, e.g., requiring coordinated pre- and post-smoothing. However, preconditioned steepest descent and flexible CG methods for SPD linear systems and LOBPCG for symmetric eigenvalue problems are all shown to be robust if the preconditioner is not SPD. Bramble–Pasciak–Xu preconditioner Originally described in Xu's Ph.D. 
thesis and later published in Bramble–Pasciak–Xu, the BPX preconditioner is one of the two major multigrid approaches (the other is the classic multigrid algorithm such as the V-cycle) for solving large-scale algebraic systems that arise from the discretization of models in science and engineering described by partial differential equations. In view of the subspace correction framework, the BPX preconditioner is a parallel subspace correction method, whereas the classic V-cycle is a successive subspace correction method. The BPX preconditioner is known to be naturally more parallel and in some applications more robust than the classic V-cycle multigrid method. The method has been widely used by researchers and practitioners since 1990. Generalized multigrid methods Multigrid methods can be generalized in many different ways. They can be applied naturally in a time-stepping solution of parabolic partial differential equations, or they can be applied directly to time-dependent partial differential equations. Research on multilevel techniques for hyperbolic partial differential equations is underway. Multigrid methods can also be applied to integral equations, or to problems in statistical physics. Another set of multiresolution methods is based upon wavelets. These wavelet methods can be combined with multigrid methods. For example, one use of wavelets is to reformulate the finite element approach in terms of a multilevel method. Adaptive multigrid exhibits adaptive mesh refinement, that is, it adjusts the grid as the computation proceeds, in a manner dependent upon the computation itself. The idea is to increase resolution of the grid only in regions of the solution where it is needed. Algebraic multigrid (AMG) Practically important extensions of multigrid methods include techniques where no partial differential equation nor geometrical problem background is used to construct the multilevel hierarchy. Such algebraic multigrid methods (AMG) construct their hierarchy of operators directly from the system matrix. In classical AMG, the levels of the hierarchy are simply subsets of unknowns without any geometric interpretation. (More generally, coarse grid unknowns can be particular linear combinations of fine grid unknowns.) Thus, AMG methods become black-box solvers for certain classes of sparse matrices. AMG is regarded as advantageous mainly where geometric multigrid is too difficult to apply, but is often used simply because it avoids the coding necessary for a true multigrid implementation. While classical AMG was developed first, a related algebraic method is known as smoothed aggregation (SA). In an overview paper by Jinchao Xu and Ludmil Zikatanov, the "algebraic multigrid" methods are understood from an abstract point of view. They developed a unified framework from which existing algebraic multigrid methods can be derived coherently. Abstract theory about how to construct optimal coarse spaces as well as quasi-optimal spaces was derived. Also, they proved that, under appropriate assumptions, the abstract two-level AMG method converges uniformly with respect to the size of the linear system, the coefficient variation, and the anisotropy. Their abstract framework covers most existing AMG methods, such as classical AMG, energy-minimization AMG, unsmoothed and smoothed aggregation AMG, and spectral AMGe. Multigrid in time methods Multigrid methods have also been adopted for the solution of initial value problems. 
Of particular interest here are parallel-in-time multigrid methods: in contrast to classical Runge–Kutta or linear multistep methods, they can offer concurrency in the temporal direction. The well-known Parareal parallel-in-time integration method can also be reformulated as a two-level multigrid in time. Multigrid for nearly singular problems Nearly singular problems arise in a number of important physical and engineering applications. A simple but important example of nearly singular problems can be found in the displacement formulation of linear elasticity for nearly incompressible materials. Typically, the major difficulty in solving such nearly singular systems boils down to treating the nearly singular operator, given by $A_0 + \varepsilon A_1$, robustly with respect to the positive, but small, parameter $\varepsilon$. Here $A_0$ is a symmetric semidefinite operator with a large null space, while $A_1$ is a symmetric positive definite operator. There have been many attempts to design a robust and fast multigrid method for such nearly singular problems. A general guide has been provided as a design principle for achieving a convergence rate of the multigrid method that is independent of the parameters (e.g., the mesh size and physical parameters such as the Poisson's ratio that appear in the nearly singular operator): on each grid, the space decomposition on which the smoothing is applied has to be constructed so that the null space of the singular part of the nearly singular operator is included in the sum of the local null spaces, the intersections of the null space with the local spaces resulting from the space decomposition. Notes References G. P. Astrachancev (1971), An iterative method of solving elliptic net problems. USSR Comp. Math. Math. Phys. 11, 171–182. N. S. Bakhvalov (1966), On the convergence of a relaxation method with natural constraints on the elliptic operator. USSR Comp. Math. Math. Phys. 6, 101–13. Achi Brandt (April 1977), "Multi-Level Adaptive Solutions to Boundary-Value Problems", Mathematics of Computation, 31: 333–90. William L. Briggs, Van Emden Henson, and Steve F. McCormick (2000), A Multigrid Tutorial (2nd ed.), Philadelphia: Society for Industrial and Applied Mathematics. R. P. Fedorenko (1961), A relaxation method for solving elliptic difference equations. USSR Comput. Math. Math. Phys. 1, p. 1092. R. P. Fedorenko (1964), The speed of convergence of one iterative process. USSR Comput. Math. Math. Phys. 4, p. 227. External links Links to AMG presentations Numerical analysis Partial differential equations Wavelets
Multigrid method
Mathematics
2,696
1,572,108
https://en.wikipedia.org/wiki/The%20Red%20Balloon
The Red Balloon (Le Ballon rouge) is a 1956 French fantasy comedy-drama featurette written, produced, and directed by Albert Lamorisse. The thirty-four-minute short, which follows the adventures of a young boy who one day finds a sentient, mute, red balloon, was filmed in the Ménilmontant neighborhood of Paris. Lamorisse used his children as actors in the film. His son, Pascal, plays himself in the main role, and his daughter, Sabine, portrays a young girl. The film won numerous awards, including the Academy Award for Best Original Screenplay for Lamorisse and the Palme d'Or for short films at the 1956 Cannes Film Festival. It also became popular with children and educators. It is the only short film to win the Oscar for Best Original Screenplay. Plot The film follows Pascal (Pascal Lamorisse), a young boy who discovers a large, helium-filled red balloon on his way to school one morning. As he plays with it, he realizes it has a mind of its own. The balloon begins to follow him wherever he goes, never straying far, and sometimes floating outside his apartment window since his mother will not allow it inside. As Pascal and the balloon wander through the streets of Paris, they draw a lot of attention and envy from other children. At one point, the balloon enters his classroom, causing an uproar among his classmates. This alerts the principal, who locks Pascal in his office. Later, after being set free, Pascal and the balloon encounter a young girl (Sabine Lamorisse) with a blue balloon that also seems to have a mind of its own, just like his. One Sunday, Pascal is told to leave the balloon at home while he and his mother go to church. However, the balloon follows them through an open window and into the church, where a scolding beadle leads them out. As Pascal and the balloon continue to explore the neighborhood, a gang of older boys, envious of the balloon, steal it while Pascal is inside a bakery. He manages to retrieve it, but the boys eventually catch up to them after a chase through narrow alleys. They hold Pascal back as they bring the balloon down with slingshots and stones, and one of them finally destroys it by stomping on it. The film ends with all the other balloons in Paris coming to Pascal's aid, lifting him up, and taking him on a cluster balloon ride over the city. Themes The film, set in post-World War II Paris, features a dark and grey mise-en-scène that adds a somber tone to the setting and mood. In contrast, the bright red balloon serves as a symbol of hope and light throughout the film. The cluster balloon ride in the final scene can also be interpreted as a religious or spiritual metaphor. For example, when the balloon is destroyed, its "spirit" seems to live on through all the other balloons in the city, which some view as a metaphor for Christ. Themes of self-realization and loneliness are also present in the film. Additionally, the theme of innocence is a central focus, as the film shows how a cynical world is transformed into a magical one through the eyes of a child, highlighting the power of innocence and imagination. Author Myles P. Breen has identified thematic and stylistic elements in the film that reflect the qualities of poetry. Breen supports this view by quoting film theorist Christian Metz, who states, "In a poem, there is no story line, and nothing intrudes between the author and the reader." Breen categorizes the film as a "filmic poem," partly due to its loose, non-narrative structure. 
Production The film serves as a visual record of the Belleville and Ménilmontant areas of Paris, which had fallen into decay by the 1960s. This decline led the Parisian government to demolish much of the area as part of a slum-clearance effort. While some of the site was rebuilt with housing projects, the rest remained wasteland for 20 years. Many of the locations featured in the film no longer exist, including one of the bakeries, the school, the famous staircase located just beyond the equally famous café "Au Repos de la Montagne," the steep steps of passage Julien Lacroix where Pascal finds the balloon, and the empty lot where many of the battles take place. Today, the Parc de Belleville stands in that area. However, some locations remain intact, such as the apartment where Pascal lives with his mother at 15, rue du Transvaal, the Église Notre-Dame-de-la-Croix de Ménilmontant, and the Pyrénées-Ménilmontant bus stop at the intersection of rue des Pyrénées and rue de Ménilmontant. Lamorisse, a former auditor at the Institut des hautes études cinématographiques (IDHEC), employed a crew composed entirely of IDHEC graduates for the film. The main role of Pascal is played by Lamorisse's son, Pascal Lamorisse. French singer Renaud and his brother appear at the end of the film as twin brothers in red coats. They were cast in the roles through their uncle, Edmond Séchan, the film's director of photography. Release The film premiered and opened nationwide in France on 19 October 1956; it was released in the United Kingdom on 23 December 1956 (as the supporting film to the 1956 Royal Performance Film The Battle of the River Plate, which ensured it a wide distribution) and in the United States on 11 March 1957. The film has been featured in many festivals over the years, including the Wisconsin International Children's Film Festival; the Los Angeles Outfest Gay and Lesbian Film Festival; the Wisconsin Film Festival; and others. The film, in its American television premiere, was introduced by then-actor Ronald Reagan as an episode of the CBS anthology series General Electric Theater on 2 April 1961. The film is popular in elementary classrooms throughout the United States and Canada. A four-minute clip is on the rotating list of programming on Classic Arts Showcase. Reception Since its first release in 1956, the film has generally received overwhelmingly favorable reviews from critics. The film critic for The New York Times, Bosley Crowther, hailed the simple tale and praised director Lamorisse, writing: "Yet with the sensitive cooperation of his own beguiling son and with the gray-blue atmosphere of an old Paris quarter as the background for the shiny balloon, he has got here a tender, humorous drama of the ingenuousness of a child and, indeed, a poignant symbolization of dreams and the cruelty of those who puncture them." When the film was re-released in the United States in late 2006 by Janus Films, Entertainment Weekly magazine film critic Owen Gleiberman praised its direction and simple story line that reminded him of his youth, and wrote: "More than any other children's film, The Red Balloon turns me into a kid again whenever I see it...[to] see The Red Balloon is to laugh, and cry, at the impossible joy of being a child again." Film critic Brian Gibson wrote: "So far, this seems a post-Occupation France happy to forget the blood and death of Adolf Hitler's war a decade earlier. 
But soon people’s occasional, playful efforts to grab the floating, carefree balloon become grasping and destructive. In a gorgeous sequence, light streaming down alleys as children's shoes clack and clatter on the cobblestones, the balloon bouncing between the walls, Pascal is hunted down for his floating pet. Its ballooning sense of hope and freedom is deflated by a fierce, squabbling mass. Then, fortunately, it floats off, with the breeze of magic-realism, into a feeling of escape and peace, The Red Balloon taking hold of Pascal, lifting him out of this rigid, petty, earthbound life." In a review in The Washington Post, critic Philip Kennicott had a cynical view: "[The film takes] place in a world of lies. Innocent lies? Not necessarily. The Red Balloon may be the most seamless fusion of capitalism and Christianity ever put on film. A young boy invests in a red balloon the love of which places him on the outside of society. The balloon is hunted down and killed on a barren hilltop—think Calvary—by a mob of cruel boys. The ending, a bizarre emotional sucker punch, is straight out of the New Testament. Thus is investment rewarded—with Christian transcendence or, at least, an old-fashioned Assumption. This might be sweet. Or it might be a very cynical reduction of the primary impulse to religious faith." The review aggregator Rotten Tomatoes reported that 95% of critics gave the film a positive review, based on twenty reviews. The critical consensus reads: "The Red Balloon invests the simplest of narratives with spectacular visual inventiveness, making for a singularly wondrous portrait of innocence." Accolades Prix Louis Delluc: Prix Louis Delluc; Albert Lamorisse, 1956. Cannes Film Festival: Palme d'Or du court métrage/Golden Palm; Best Short Film, Albert Lamorisse, 1956. British Academy of Film and Television Arts: BAFTA Award; Special Award, France, 1957. Academy Awards: Oscar; Best Writing, Best Original Screenplay, Albert Lamorisse, 1957. National Board of Review: Top Foreign Films, 1957. Legacy In 1960, Lamorisse released a second film, Stowaway in the Sky, which also starred Pascal and was a spiritual successor to the film. Bob Godfrey's and Zlatko Grgic's 1979 animated film Dream Doll has a very similar plot and ending to the film, except instead of a boy being obsessed with a red balloon, the protagonist is a man obsessed with an inflatable nude woman. A stage adaptation by Anthony Clark was performed at the Royal National Theatre in 1996. Don Hertzfeldt's 1997 short film Billy's Balloon, which also showed at Cannes, is a parody of the film. The music video for "Son of Sam" by Elliott Smith, from his 2000 album Figure 8, is a direct homage to the film. Hou Hsiao-hsien's 2007 film Flight of the Red Balloon is a direct homage to the film. A boy with a bright red balloon is featured in the epilogue of Damien Chazelle's 2016 musical film La La Land. The Pascal and Sabine restaurant in Asbury Park, New Jersey is named in honor of the film. Guitarist Keith Calmes' album Follow the Red Balloon is named as an homage to the spirit of Pascal and Sabine. In The Simpsons episode "The Crepes of Wrath", Bart returns from France bearing gifts for his family; his gift to Maggie is a red balloon. The red balloon appears in three images (on pages 162 and 163) of Jacques Tardi's Du Rififi à Menilmontant (Casterman, 2024), in which private investigator Nestor Burma perambulates in the 20ème arrondissement during the Christmas season of 1957. This is an original story by Tardi. 
Merchandise Home media The film was first released on VHS by Embassy Home Entertainment in 1984. A laserdisc was later released by The Criterion Collection in 1986, produced by Criterion, Janus Films, and Voyager Press; included on it was Lamorisse's award-winning short White Mane (1953). A DVD version became available in 2008, and a Blu-ray version was released in the United Kingdom on January 18, 2010; it has since been confirmed as region-free. Book A tie-in book was first published by Doubleday Books (now Penguin Random House) in 1957, using black-and-white and color stills from the film, with added prose. It was highly acclaimed and was named a New York Times Best Illustrated Children's Book of the Year. Lamorisse was credited as its sole author. References External links The Red Balloon at Janus Films (official web site) The Red Balloon information site and DVD/Blu-ray review at DVD Beaver (includes images) Le Ballon rouge at Cinefeed 1950s French-language films 1950s fantasy comedy-drama films French fantasy comedy-drama films French comedy-drama short films Balloons 1950s children's fantasy films Films directed by Albert Lamorisse Films set in the 1950s Films shot in Paris Films whose writer won the Best Original Screenplay Academy Award Louis Delluc Prize winners Short Film Palme d'Or winners 1956 comedy-drama films 1956 films 1950s French films Films scored by Maurice Le Roux
The Red Balloon
Chemistry
2,593
22,696,986
https://en.wikipedia.org/wiki/Difluorocarbene
Difluorocarbene is the chemical compound with formula CF2. It has a short half-life: 0.5 and 20 ms in solution and in the gas phase, respectively. Although highly reactive, difluorocarbene is an intermediate in the production of tetrafluoroethylene, which is produced on an industrial scale as the precursor to Teflon (PTFE). Bonding in difluorocarbene In general, carbenes exist in either singlet or triplet states, which are often quite close in energy. Singlet carbenes have spin-paired electrons and a higher-energy empty 2p orbital. In a triplet carbene, one electron occupies the hybrid orbital and the other is promoted to the 2p orbital. For most carbenes, the triplet state is more stable than the corresponding singlet. In the case of fluorinated carbenes, however, the singlet is lower in energy than the triplet. The difference in energy between the singlet ground state and the first excited triplet state is 56.6 kcal per mol. In singlet difluorocarbene, the C-F bond length is measured as 1.300 Å and the F-C-F bond angle as 104.94° (almost tetrahedral). For the triplet state, on the other hand, the C-F bond length is measured as 1.320 Å and the F-C-F bond angle as 122.3° (slightly more, due to steric repulsion, than expected for an sp2 carbon). The reasoning for the difference between the two carbenes can be seen by considering the electron distribution in a singlet carbene and the orbitals available to π-electrons. The molecular orbitals are built from an empty p-orbital on the central carbon atom and two orbitals on the fluorine atoms. Four electrons need to find a place: the carbon orbital is empty, while the fluorine orbitals each carry two electrons, so the electrons fill the lower two orbitals of the MO set. The non-bonding electrons of the carbene now need to be placed either paired in the rather low-energy sp2 orbital on carbon or in the highest, anti-bonding level of the MO system. Clearly, in CF2 the singlet is the most favorable state. In the parent carbene, no π-MO system is present, so the two non-bonding electrons can be placed in the two non-bonding orbitals on the carbon atom. Here, avoiding the double negative charge in one orbital leads to a triplet carbene. Preparation Thermolysis of sodium chlorodifluoroacetate was the first route to difluorocarbene; the generation of difluorocarbene involves loss of carbon dioxide and chloride. Thermal decomposition of sodium chlorodifluoroacetate in the presence of triphenylphosphine and an aldehyde allows for a Wittig-like reaction; in this case, the phosphorus ylide Ph3P=CF2 is proposed as an intermediate. Alternatively, dehydrohalogenation of chlorodifluoromethane or bromodifluoromethane using alkoxides or alkyllithium reagents also produces difluorocarbene. A variant of this reaction uses ethylene oxide in conjunction with a catalytic amount of a quaternary ammonium halide at elevated temperature. At equilibrium a small amount of β-haloalkoxide is present, which acts as a base. This avoids an excess concentration of base that would destroy the carbene just formed. Thermolysis of hexafluoropropylene oxide at 190 °C gives difluorocarbene and trifluoroacetyl fluoride. This is an attractive method for the synthesis of difluorocyclopropanes, as hexafluoropropylene oxide is inexpensive and the byproduct trifluoroacetyl fluoride is a gas. Application Difluorocarbene is used to generate geminal difluorocyclopropanes. See also Dichlorocarbene References Fluorides Carbenes
Difluorocarbene
Chemistry
895
27,386,075
https://en.wikipedia.org/wiki/McCutcheon%20index
The McCutcheon index or chemotactic ratio is a numerical metric that quantifies the efficiency of movement. It is calculated as the ratio of the net displacement of a moving entity to the total length of the path it has traveled. The index acts as an evaluative measure of the directness of movement. A value close to 1 indicates that a moving entity performed its movement in a very direct manner, minimizing detours. On the other hand, a lower value indicates that the entity has achieved only a marginal net displacement, despite traveling a considerable distance. The index is used to evaluate movements of, for example, leukocytes, bacteria, or amoebae. It is named after Morton McCutcheon who introduced it to describe chemotaxis in leukocytes. References Biophysics
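Since the index is simply net displacement divided by total path length, it is straightforward to compute from a sampled trajectory. The following Python function is an illustrative sketch (the function name and the sample trajectories are invented for this example, not taken from the literature).

import math

def mccutcheon_index(points):
    """McCutcheon index of a trajectory given as a sequence of (x, y) positions."""
    if len(points) < 2:
        raise ValueError("need at least two points")
    path_length = sum(math.dist(points[i], points[i + 1])
                      for i in range(len(points) - 1))
    net_displacement = math.dist(points[0], points[-1])
    return net_displacement / path_length if path_length > 0 else 0.0

# A straight path scores 1.0; a detour scores lower.
print(mccutcheon_index([(0, 0), (1, 0), (2, 0)]))  # 1.0
print(mccutcheon_index([(0, 0), (1, 1), (2, 0)]))  # ~0.71

For real cell-tracking data, positions would come from microscopy at fixed time intervals, and the index would typically be averaged over a population of tracked cells.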
McCutcheon index
Physics,Biology
166
17,558,897
https://en.wikipedia.org/wiki/Ultra-low-voltage%20processor
Ultra-low-voltage processors (ULV processors) are a class of microprocessor that are deliberately underclocked to consume less power (typically 17 W or below), at the expense of performance. These processors are commonly used in subnotebooks, netbooks, ultraportables and embedded devices, where low heat dissipation and long battery life are required. Notable examples Intel Atom – Up to 2.0 GHz at 2.4 W (Z550) Intel Pentium M – Up to 1.3 GHz at 5 W (ULV 773) Intel Core 2 Solo – Up to 1.4 GHz at 5.5 W (SU3500) Intel Core Solo – Up to 1.3 GHz at 5.5 W (U1500) Intel Celeron M – Up to 1.2 GHz at 5.5 W (ULV 722) VIA Eden – Up to 1.5 GHz at 7.5 W VIA C7 – Up to 1.6 GHz at 8 W (C7-M ULV) VIA Nano – Up to 1.3 GHz at 8 W (U2250) AMD Athlon Neo – Up to 1 GHz at 8 W (Sempron 200U) AMD Geode – Up to 1 GHz at 9 W (NX 1500) Intel Core 2 Duo – Up to 1.3 GHz at 10 W (U7700) Intel Core i3/i5/i7 – Up to 1.5 GHz at 13 W (Core i7 3689Y) AMD A Series – Up to 3.2 GHz at 15 W (A10-7300P) See also Consumer Ultra-Low Voltage – a low power platform developed by Intel References Embedded systems Microprocessors
Ultra-low-voltage processor
Technology,Engineering
370
13,298,486
https://en.wikipedia.org/wiki/Potassium%20chlorochromate
Potassium chlorochromate is an inorganic compound with the formula KCrO3Cl. It is the potassium salt of the chlorochromate anion, [CrO3Cl]−. It is a water-soluble orange compound that is used occasionally for the oxidation of organic compounds. It is sometimes called Péligot's salt, in recognition of its discoverer Eugène-Melchior Péligot. Structure and synthesis Potassium chlorochromate was originally prepared by treating potassium dichromate with hydrochloric acid. An improved route involves the reaction of chromyl chloride and potassium chromate: K2CrO4 + CrO2Cl2 → 2KCrO3Cl The salt consists of the tetrahedral chlorochromate anion. The average Cr=O bond length is 159 pm, and the Cr-Cl distance is 219 pm. Reactions Although air-stable, its aqueous solutions undergo hydrolysis in the presence of strong acids. With concentrated hydrochloric acid, it converts to chromyl chloride, which in turn reacts with water to form chromic acid and additional hydrochloric acid. When treated with 18-crown-6, it forms the lipophilic salt [K(18-crown-6)]CrO3Cl. Péligot's salt can oxidize benzyl alcohol, a reaction which can be catalyzed by acid. A related salt, pyridinium chlorochromate, is more commonly used for this reaction. Safety Potassium chlorochromate is toxic upon ingestion, and may cause irritation, chemical burns, and even ulceration on contact with the skin or eyes. Like other hexavalent chromium compounds, it is also carcinogenic and mutagenic. References Oxidizing agents Chromates Potassium compounds
Potassium chlorochromate
Chemistry
387
19,561,724
https://en.wikipedia.org/wiki/SAP%20Enterprise%20Architecture%20Framework
The SAP Enterprise Architecture Framework (EAF) is a methodology and toolset by the German multinational software company SAP. It is based on The Open Group Architecture Framework (TOGAF). The TOGAF Architecture Development Method is a generic method for architecture development, which is designed to deal with most system and organizational requirements. It is usually tailored or extended to suit specific needs. See also Enterprise architecture framework SAP ERP References External links SAP Methodology for Accelerated Transformation to SOA SAP Enterprise Architecture Framework Unveiled:Aligning IT to the Business Enterprise architecture frameworks Enterprise Architecture Framework Service-oriented (business computing) Software architecture
SAP Enterprise Architecture Framework
Engineering
125
53,342,056
https://en.wikipedia.org/wiki/Promoting%20Women%20in%20Entrepreneurship%20Act
The Promoting Women in Entrepreneurship Act is a public law amendment to the Science and Engineering Equal Opportunities Act that authorizes the National Science Foundation to encourage its entrepreneurial programs to recruit and support women, extending their focus beyond the laboratory and into the commercial world.

Background

The Promoting Women in Entrepreneurship Act was introduced in the United States House of Representatives on January 4, 2017, by Representative Elizabeth Esty of Connecticut and signed into law by President Donald Trump on February 28, 2017.

According to the Bureau of Labor Statistics, women account for 47 percent of the workforce but make up only 25.6 percent of computer and mathematical occupations. In addition, only 15.4 percent of architecture and engineering jobs are filled by women. Congress also found that only 26 percent of women who earned STEM degrees actually worked in STEM-related jobs.

The president stated that the law “enables the National Science Foundation to support women inventors – of which there are many – researchers and scientists in bringing their discoveries to the business world, championing science and entrepreneurship and creating new ways to improve people’s lives.” Trump signed the bill in a room full of women including Representative Barbara Comstock, who introduced the Inspire Women Act, Senator Heidi Heitkamp, and First Lady Melania Trump. The bill was supported by both parties, with 36 Democrats and 8 Republicans signing as co-sponsors.

Impact

The bill was designed primarily to improve the programs in place at the National Science Foundation in order to encourage more women to enter the STEM fields. The Science and Engineering Equal Opportunities Act allocates funding for educational programs and for research in STEM fields, and this bill adds the ability for the National Science Foundation to allocate new funding towards incentivizing women to join its educational and entrepreneurial programs.

Little has been reported about the act's effects, and its expected results have yet to come to fruition. However, the act represents a trend within the Trump administration with regard to technology and women. The president said that this issue was "going to be addressed by my administration over the years with more and more of these bills coming out and address the barriers faced by female entrepreneurs and by those in STEM fields." Despite this, since the day the law was signed, the Trump administration has yet to give a statement regarding future legislation that would further help improve the numbers of women in science and technology.

See also

Timeline of women's legal rights in the United States (other than voting)

References

Women in science and technology
Acts of the 115th United States Congress
Promoting Women in Entrepreneurship Act
Technology
519
52,055,632
https://en.wikipedia.org/wiki/Hironaka%20decomposition
In mathematics, a Hironaka decomposition is a representation of an algebra over a field as a finitely generated free module over a polynomial subalgebra or a regular local ring. Such decompositions are named after Heisuke Hironaka, who used this in his unpublished master's thesis at Kyoto University.

Hironaka's criterion, sometimes called miracle flatness, states that a local ring $R$ that is a finitely generated module over a regular Noetherian local ring $S$ is Cohen–Macaulay if and only if it is a free module over $S$. There is a similar result for rings that are graded over a field rather than local.

Explicit decomposition of an invariant algebra

Let $V$ be a finite-dimensional vector space over an algebraically closed field of characteristic zero, $k$, carrying a representation of a group $G$, and consider the polynomial algebra on $V$, $k[V]$. The algebra $k[V]$ carries a grading with $k[V]_0 = k$, which is inherited by the invariant subalgebra $k[V]^G$.

A famous result of invariant theory, which provided the answer to Hilbert's fourteenth problem, is that if $G$ is a linearly reductive group and $V$ is a rational representation of $G$, then $k[V]^G$ is finitely generated. Another important result, due to Noether, is that any finitely generated graded algebra $R$ with $R_0 = k$ admits a (not necessarily unique) homogeneous system of parameters (HSOP). An HSOP (also termed a set of primary invariants) is a set of homogeneous polynomials $\theta_1, \dots, \theta_d$ which satisfy two properties:

The $\theta_i$ are algebraically independent.
The zero set of the $\theta_i$, $\{v \in V \mid \theta_i(v) = 0 \text{ for all } i\}$, coincides with the nullcone of $V$.

Importantly, this implies that the algebra can then be expressed as a finitely generated module over the subalgebra generated by the HSOP, $k[\theta_1, \dots, \theta_d]$. In particular, one may write
$$k[V]^G = \sum_k \eta_k \, k[\theta_1, \dots, \theta_d],$$
where the $\eta_k$ are called secondary invariants.

Now if $k[V]^G$ is Cohen–Macaulay, which is the case if $G$ is linearly reductive, then it is a free (and, as already stated, finitely generated) module over any HSOP. Thus, one in fact has a Hironaka decomposition
$$k[V]^G = \bigoplus_k \eta_k \, k[\theta_1, \dots, \theta_d].$$
In particular, each element in $k[V]^G$ can be written uniquely as $\sum_k \eta_k p_k$, where $p_k \in k[\theta_1, \dots, \theta_d]$, and the product of any two secondaries is uniquely given by $\eta_k \eta_l = \sum_m \eta_m p_{klm}$, where $p_{klm} \in k[\theta_1, \dots, \theta_d]$. This specifies the multiplication in $k[V]^G$ unambiguously. A worked example is given below.

See also

Rees decomposition
Stanley decomposition

References

Commutative algebra
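As a worked illustration of the decomposition described above (a standard textbook example, written here in the notation of this article):

% Let G = \mathbb{Z}/2 act on V = k^2 by v \mapsto -v, so the invariant
% ring is k[V]^G = k[x^2, xy, y^2]. An HSOP is \theta_1 = x^2,
% \theta_2 = y^2, with secondary invariants \eta_1 = 1 and \eta_2 = xy:
k[V]^G \;=\; k[\theta_1, \theta_2] \;\oplus\; xy\, k[\theta_1, \theta_2],
% a free module of rank 2; the product of the secondaries reduces via
\eta_2^2 \;=\; (xy)^2 \;=\; \theta_1\,\theta_2 .

Here the zero set of $\theta_1, \theta_2$ is the origin, which is the nullcone for this action, and every invariant is uniquely a sum $p_1(\theta_1,\theta_2) + xy\,p_2(\theta_1,\theta_2)$.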
Hironaka decomposition
Mathematics
470
45,314,746
https://en.wikipedia.org/wiki/Elysium%20Health
Elysium Health is an American manufacturer of dietary supplements based in New York City.

History

Elysium Health was founded in 2014 by Leonard Guarente, Dan Alminana, and Eric Marcotulli. In 2015, Elysium introduced its first product, Basis, which contains nicotinamide riboside and pterostilbene. In December 2016, Elysium received an investment of $20 million in Series B funding.

In 2019, Elysium introduced a test called Index that uses epigenetic analysis of saliva samples to estimate biological age. In June 2020, Elysium launched a supplement called Matter, which purports to maintain brain health and slow brain aging and atrophy. In October 2021, Elysium launched a supplement called Format, which is marketed for anti-aging and immune system support. In 2023, Elysium launched a daily supplement called Mosaic, which claims to prevent skin aging and protect collagen. In October 2024, Elysium introduced a daily supplement called Vision, intended to maintain macular health and promote eye longevity.

Litigation

Elysium originally bought the ingredients in Basis from ChromaDex, which, as of December 2016, sold the two ingredients to other supplement companies that also marketed products containing them. The two companies had an agreement under which Elysium Health did not have to acknowledge ChromaDex as the source of the ingredients; after Elysium recruited ChromaDex's vice president of business development and reportedly stopped paying ChromaDex, ChromaDex sued Elysium and the information became public.

In September 2018, Dartmouth College and ChromaDex sued Elysium for infringing on patents for nicotinamide riboside. In August 2020, W.R. Grace and Company also sued Elysium for infringing on their patents for crystalline nicotinamide riboside. In September 2021, the claims by Dartmouth and ChromaDex were dismissed by a U.S. district judge, essentially invalidating their patents. In February 2023, the United States Court of Appeals for the Federal Circuit affirmed the district court's judgment that these patent infringement claims are invalid under 35 U.S.C. § 101.

References

External links

Official site

Nutritional supplement companies of the United States
Life sciences industry
Companies based in New York City
American companies established in 2014
2014 establishments in New York City
Elysium Health
Biology
492
69,259,218
https://en.wikipedia.org/wiki/Consoling%20touch
Consoling touch is a pro-social behavior involving physical contact between a distressed individual and a caregiver. The physical contact, most commonly recognized in the form of a hand hold or embrace, is intended to comfort one or more of the participating individuals. Consoling touch is intended to provide consolation: to alleviate or lessen emotional or physical pain.

This type of social support has been observed across species and cultures. Studies have found little difference in the applications of consoling touch, with only minor differences in frequency of occurrence across cultures. These findings suggest a degree of universality. It remains unclear whether the relationship between social touch and interpersonal emotional bonds reflects biologically driven or culturally normative behavior. Evidence of consoling touch in non-human primates, who embrace one another following distressing events, suggests a biological basis.

Numerous studies of consoling touch in humans and animals reveal a consistent physiological response. An embrace from a friend, relative, or even a stranger can trigger the release of oxytocin, dopamine, and serotonin into the bloodstream. These neurotransmitters are associated with positive mood, numerous health benefits, and longevity. Cortisol, a stress hormone, also decreases. Studies have found that the degree of intimacy and the quality of the relationship between the consoler and the consoled mediate the physiological effects. In other words, while subjects experience reduced cortisol levels while holding the hand of a stranger, they exhibit a larger effect when receiving comfort from a trusted friend, and a larger effect still when holding the hand of a romantic partner in a high-quality relationship.

Contact and development

The importance of consoling touch was first explored by Harry Frederick Harlow (October 31, 1905 – December 6, 1981). From 1950 through 1970, Harlow conducted controversial research on rhesus monkeys, observing the maladaptation resulting from maternal separation and social isolation. Infant monkeys were separated from their biological mothers and given two inanimate surrogate mothers. Cheekily referred to as 'iron maidens', the two surrogates differed: the first was constructed of wire and contained a feeding mechanism, while the second contained no food and was constructed of rubber and soft terrycloth. In all variations of the paradigm, the infants spent significantly more time clinging to the cloth mother. Only when the monkeys were hungry did they leave the terrycloth, only to return to it after eating.

Monkeys accompanied by iron maidens behaved differently in novel environments than those in complete isolation. When chaperoned by a surrogate mother, monkeys explored new environments and retreated to the surrogate when startled, only to continue exploring thereafter. Monkeys put in novel environments without an iron maiden cowered in the nearest corner, too fearful to explore. Those raised in complete isolation developed markedly disturbed behavior such as pacing in cages, staring blankly, and self-mutilation. When introduced to other rhesus monkeys, those raised in isolation did not socialize, kept separate from the group, and refused to eat.

Harlow rehabilitated socially inept monkeys by enclosing them with a non-threatening, well-socialized other. Harlow observed these social pair interactions, calling the pair "the isolate and the therapist". Upon introduction, the isolate huddled in a corner. The therapist reacted by embracing the isolate.
With consoling touch and modeling of social interaction, isolates were indistinguishable from therapists after one year. Harlow concluded that social rehabilitation is possible, although there may be a critical period, much like that for language development in humans. The need for close, comforting physical contact became known as contact comfort. Contact comfort is believed to be the foundation of attachment and serves as the basis for consoling touch.

Extensive research has documented the importance of physical touch in human emotional and physical wellbeing. From a developmental perspective, touch plays a vital role in infants' physical and cortical growth, stress relief, and secure attachment formation. Nurturing touch is positively associated with children's neuronal development, thus shaping the trajectory of their behavioral and cognitive growth. Though no laboratory studies exist due to ethical considerations, data emphasizing the necessity of consoling touch were taken from orphanages where the caretaker-to-child ratio was 1:25. Children deficient in consoling touch during critical developmental stages had 20–30% less brain mass than children of similar age who received sufficient socialization.

In a 1997 study, Dawson et al. monitored the neural functioning of children born to depressed mothers. A key symptom of maternal depression is reduced social touch between mother and child. The electroencephalogram (EEG) results of children with depressed mothers showed markedly reduced activity in the left frontal lobe. The abnormality tracked the mother's condition throughout the three-year study. The children of mothers who managed to diminish their depressive symptoms before the child's first year later developed a more normal brain pattern. The likelihood of full neural recovery lessens as a child ages due to sensitive periods for brain development, the first year and a half being the most critical.

Pain mitigation

Physical

From a therapeutic perspective, consoling touch provides pain alleviation and facilitates healing. In a 1993 study of young adults undergoing chemotherapy, hand holding was rated a significantly effective coping strategy for ameliorating treatment-related pain. Overwhelmingly, patients preferred to hold the hand of a close relative or partner. Consoling touch functioned to reduce anxiety associated with impending treatments and served as a source of security. Patients' subjective experience of treatment-related pain was significantly reduced when they felt more secure, less tense, and had social support. Marshall Klaus's 1995 work demonstrated the power of social touch in labour and delivery. He found that women receiving consoling touch during delivery had shorter labour, reported less anxiety and physical pain, and had less need for caesarean section.

Numerous studies have explored pain mitigation and consoling touch between romantic partners. In experiments ranging from electric shocks to temperatures too hot to touch, holding the hand of a loved one decreases the perception of physical pain. In 2018, couples were brought into a lab and administered mild heat pain while undergoing EEG. Not only were pain ratings significantly reduced in the hand-holding condition, couples also exhibited what is known as brain-to-brain coupling, or neural synchrony. This means that neural firing, both topographically and temporally, matches that of another party. This mechanism is hypothesized to be an integral feature of empathy and shared experience.
Neural synchrony is most easily detected in couples during shared experiences, such as laughter. Importantly, neither social touch nor neural synchrony is an analgesic. Consoling touch can reduce pain perception, but not eliminate it entirely.

Emotional

Recent research has shown that consoling touch modulates emotional responses as well as physical ones. In a 2019 study, the neurobehavioral correlates of consoling touch were examined by showing participants photos of recently deceased relatives while undergoing functional magnetic resonance imaging (fMRI). Participant brain activity was monitored in two conditions: either in solitude, or while holding the hand of a significant other. Activation varied in several brain areas. Reduced reactivity was reported in the anterior cingulate cortex (ACC) and cerebellum in the hand-holding condition. The ACC has neural connections to both the limbic system, the emotional center, and the prefrontal cortex, known for higher cognitive function. The ACC's location, paired with numerous empirical studies, confirms its involvement in emotion and pain regulation. The cerebellum, located adjacent to the brainstem, is classically responsible for coordinating voluntary movements; however, recent work suggests it may play a role in determining emotional valence.

A similar fMRI experiment evaluated the neurological effects of viewing moderately disturbing images while holding the hand of a significant other. Connectivity between the anterior insula and the ACC decreased during partner touch. The anterior insula is known for emotional and olfactory appraisal, with an observed focus on disgust. The decreased connectivity between these two regions in the hand-holding condition suggests that consoling touch elicits a buffering effect.

Affective versus discriminative touch

Consoling touch has an emotional component that utilizes different neural networks and nerves than physical sensation processing alone. This distinction has been described by studies examining discriminative touch versus affective touch. Discriminative touch conveys information regarding pressure, vibration, or stretching of the skin. This kind of processing involves type A nerve fibers, which relay information very quickly to the brain's sensory regions. Affective touch, however, involves type C nerve fibers. Unmyelinated and slower-conducting, type C nerves communicate pain, temperature, and social touch signals. In humans, type C nerves respond most strongly to soft strokes from stimuli matching skin temperature. These afferent nerves also exhibit a tuning curve for caressing speed, peaking at the speed an individual finds most pleasant. The 'social touch hypothesis', coined by Håkan Olausson in 2010, proposes that C afferent nerve fibers are most sensitive to tactile stimuli occurring during close social interaction.

Patient G. L.

Much of the understanding of affective and discriminative touch can be attributed to a woman known as 'Patient G. L.'. Patient G. L. had Guillain–Barré syndrome, a rare autoimmune disorder wherein the immune system attacks the body's own motor and sensory neurons. Due to the condition, Patient G. L. lacked type A nerve fibers, while her type C fibers remained intact. Though the patient could not perceive physical contact, such as pressure on her skin, she still reported an emotional response to consoling touch. Further functional magnetic resonance imaging (fMRI) examination confirmed that the patient lacked activation in the somatosensory cortex during touch.
Because the somatosensory cortex is responsible for type A processing, a healthy control would exhibit activation of these areas. Instead, Patient G. L. showed heightened activation in the posterior insula. The posterior insula is not sensitive to visceral input, but is involved in recognition, intensity encoding, and reward assessment. Patient G. L. described being the recipient of social touch as "a faint, hard-to-place, pleasant sensation".

Olausson, a professor of clinical neuroscience, has compiled a great deal of work on the somatosensory system. He has identified not only cases like Patient G. L., but also her inverse: numerous adults without type C nerve fibers but with intact type A fibers have been identified and studied. fMRI data confirmed that these patients exhibited activation of the somatosensory cortex with no firing of the posterior insula. These findings are some of the first to confirm that C afferent nerve fibers convey emotional and social information involving the reward system, while type A fibers communicate tactile information within the somatosensory cortex.

Individual differences

Individuals vary in their preference for consoling touch. It is speculated that culture and upbringing are the greatest determinants. Going beyond environmental factors, there is a notable relationship between tactile experience and the autism spectrum. 96% of individuals on the spectrum report an altered, and largely heightened, sensitivity to tactile input. These variations in nerve processing manifest in different ways, be it wearing only very specific fabrics or avoiding rain because the sensation of drops on the skin is painful.

Kevin Pelphrey, a clinical neuroscientist at Yale, recently evaluated responses to social touch in non-autistic and autistic children. Children had their arms gently grazed with a paintbrush and had their palms touched by a caregiver while in an fMRI scanner. Non-autistic children exhibited the expected response: there was heightened activation of C afferent nerves and the posterior insula in the palm-touch condition, while type A nerves responded in the paintbrush condition. The children on the autism spectrum, however, exhibited a similar neuronal response in both conditions, with marked activation of the somatosensory cortex. The findings raise the possibility that individuals on the spectrum may not be extracting social information from touch. These findings are preliminary and cannot be used to assume individual preference or experience of social touch.

See also

Affective haptics
Social rejection

References

Behavior
Consoling touch
Biology
2,503
2,701,077
https://en.wikipedia.org/wiki/Dynamic-link%20library
A dynamic-link library (DLL) is a shared library in the Microsoft Windows or OS/2 operating system. A DLL can contain executable code (functions), data, and resources, in any combination.

File extensions

A DLL file often has the file extension .dll, but it can have any file extension. Developers can choose to use a file extension that describes the content of the file, such as .ocx for ActiveX controls and .drv for a legacy (16-bit) device driver. A DLL that contains only resources can be called a resource DLL. Examples include the icon library, sometimes having the extension .icl, and the font library, having the extensions .fon and .fot.

File format

The file format of a DLL is the same as for an executable (a.k.a. EXE), but different versions of Windows use different formats. 32-bit and 64-bit Windows versions use Portable Executable (PE), and 16-bit Windows versions use New Executable (NE). The main difference between a DLL and an EXE is that a DLL cannot be run directly, since the operating system requires an entry point to start execution. Windows provides a utility program (RUNDLL.EXE/RUNDLL32.EXE) to execute a function exposed by a DLL. Since they have the same format, an EXE can be used as a DLL: consuming code can load an EXE via the same mechanism as loading a DLL.

Background

The first versions of Microsoft Windows ran programs together in a single address space. Every program was meant to co-operate by yielding the CPU to other programs so that the graphical user interface (GUI) could multitask and be maximally responsive. All operating-system-level operations were provided by the underlying operating system, MS-DOS. All higher-level services were provided by Windows libraries, the original "dynamic-link libraries". The drawing API, the Graphics Device Interface (GDI), was implemented in a DLL called GDI.EXE, and the user interface in one called USER.EXE. These extra layers on top of DOS had to be shared across all running Windows programs, not just to enable Windows to work in a machine with less than a megabyte of RAM, but to enable the programs to co-operate with each other.

The code in GDI needed to translate drawing commands into operations on specific devices. On the display, it had to manipulate pixels in the frame buffer. When drawing to a printer, the API calls had to be transformed into requests to the printer. Although it would have been possible to provide hard-coded support for a limited set of devices (like the Color Graphics Adapter display or the HP LaserJet Printer Command Language), Microsoft chose a different approach: GDI would work by loading different pieces of code, called "device drivers", to work with different output devices.

The same architectural concept that allowed GDI to load different device drivers also allowed the Windows shell to load different Windows programs, and allowed these programs to invoke API calls from the shared USER and GDI libraries. That concept was "dynamic linking". In a conventional non-shared static library, sections of code are simply added to the calling program when its executable is built at the "linking" phase; if two programs call the same routine, the routine is included in both programs when they are linked. With dynamic linking, shared code is placed into a single, separate file. The programs that call this file are connected to it at run time, with the operating system (or, in the case of early versions of Windows, the OS extension) performing the binding.

For those early versions of Windows (1.0 to 3.11), the DLLs were the foundation for the entire GUI.
As such, display drivers were merely DLLs with a .DRV extension that provided custom implementations of the same drawing API through a unified device driver interface (DDI), and the drawing (GDI) and GUI (USER) APIs were merely the function calls exported by the GDI and USER system DLLs, which had the .EXE extension. This notion of building up the operating system from a collection of dynamically loaded libraries is a core concept of Windows that persists to this day. DLLs provide the standard benefits of shared libraries, such as modularity. Modularity allows changes to be made to code and data in a single self-contained DLL shared by several applications without any change to the applications themselves.

Another benefit of modularity is the use of generic interfaces for plug-ins. A single interface may be developed which allows old as well as new modules to be integrated seamlessly at run time into pre-existing applications, without any modification to the application itself. This concept of dynamic extensibility is taken to the extreme with the Component Object Model, the underpinnings of ActiveX.

In Windows 1.x, 2.x and 3.x, all Windows applications shared the same address space as well as the same memory. A DLL was only loaded once into this address space; from then on, all programs using the library accessed it. The library's data was shared across all the programs. This could be used as an indirect form of inter-process communication, or it could accidentally corrupt the different programs. With the introduction of 32-bit libraries in Windows 95, every process ran in its own address space. While the DLL code may be shared, the data is private except where shared data is explicitly requested by the library. That said, large swathes of Windows 95, Windows 98 and Windows Me were built from 16-bit libraries, which limited the performance of the Pentium Pro microprocessor when launched, and ultimately limited the stability and scalability of the DOS-based versions of Windows.

Limitations

Although the DLL technology is core to the Windows architecture, it has drawbacks.

DLL Hell

DLL hell describes the bad behavior of an application when the wrong version of a DLL is consumed. Mitigation strategies include:

.NET Framework
Virtualization-based solutions such as Microsoft Virtual PC and Microsoft Application Virtualization, because they offer isolation between applications
Side-by-side assembly

Shared memory space

The executable code of a DLL runs in the memory space of the calling process and with the same access permissions, which means there is little overhead in its use, but also that there is no protection for the calling program if the DLL has any sort of bug.

Features

Upgradability

The DLL technology allows an application to be modified without requiring consuming components to be re-compiled or re-linked. A DLL can be replaced so that the next time the application runs, it uses the new DLL version. To work correctly, the DLL changes must maintain backward compatibility. Even the operating system can be upgraded, since it is exposed to the applications via DLLs: system DLLs can be replaced so that the next time the applications run, they use the new system DLLs.

Memory management

In the Windows API, DLL files are organized into sections. Each section has its own set of attributes, such as being writable or read-only, executable (for code) or non-executable (for data), and so on.
The code in a DLL is usually shared among all the processes that use the DLL; that is, it occupies a single place in physical memory and does not take up space in the page file. Windows does not use position-independent code for its DLLs; instead, the code undergoes relocation as it is loaded, fixing addresses for all its entry points at locations which are free in the memory space of the first process to load the DLL. In older versions of Windows, in which all running processes occupied a single common address space, a single copy of the DLL's code would always be sufficient for all the processes. However, in newer versions of Windows, which use separate address spaces for each program, it is only possible to use the same relocated copy of the DLL in multiple programs if each program has the same virtual addresses free to accommodate the DLL's code. If some programs (or their combination of already-loaded DLLs) do not have those addresses free, then an additional physical copy of the DLL's code will need to be created, using a different set of relocated entry points. If the physical memory occupied by a code section is to be reclaimed, its contents are discarded, and later reloaded directly from the DLL file as necessary.

In contrast to code sections, the data sections of a DLL are usually private; that is, each process using the DLL has its own copy of all the DLL's data. Optionally, data sections can be made shared, allowing inter-process communication via this shared memory area. However, because user restrictions do not apply to the use of shared DLL memory, this creates a security hole; namely, one process can corrupt the shared data, which will likely cause all other sharing processes to behave undesirably. For example, a process running under a guest account can in this way corrupt another process running under a privileged account. This is an important reason to avoid the use of shared sections in DLLs.

If a DLL is compressed by certain executable packers (e.g. UPX), all of its code sections are marked as read and write, and will be unshared. Read-and-write code sections, much like private data sections, are private to each process. Thus DLLs with shared data sections should not be compressed if they are intended to be used simultaneously by multiple programs, since each program instance would have to carry its own copy of the DLL, resulting in increased memory consumption.

Import libraries

Like static libraries, import libraries for DLLs are noted by the .lib file extension. For example, kernel32.dll, the primary dynamic library for Windows's base functions such as file creation and memory management, is linked via kernel32.lib. The usual way to tell an import library from a proper static library is by size: the import library is much smaller, as it only contains symbols referring to the actual DLL, to be processed at link time. Both are nevertheless Unix ar format files.

Linking to dynamic libraries is usually handled by linking to an import library when building or linking to create an executable file. The created executable then contains an import address table (IAT) by which all DLL function calls are referenced (each referenced DLL function contains its own entry in the IAT). At run time, the IAT is filled with appropriate addresses that point directly to a function in the separately loaded DLL.

In Cygwin/MSYS and MinGW, import libraries are conventionally given the suffix .dll.a, combining both the Windows DLL suffix and the Unix ar suffix.
The file format is similar, but the symbols used to mark the imports are different (_head_foo_dll vs __IMPORT_DESCRIPTOR_foo). Although the GNU Binutils toolchain can generate import libraries and link to them, it is faster to link to the DLL directly. An experimental tool in MinGW called genlib can be used to generate import libraries with MSVC-style symbols.

Symbol resolution and binding

Each function exported by a DLL is identified by a numeric ordinal and optionally a name. Likewise, functions can be imported from a DLL either by ordinal or by name. The ordinal represents the position of the function's address pointer in the DLL Export Address table. It is common for internal functions to be exported by ordinal only. For most Windows API functions only the names are preserved across different Windows releases; the ordinals are subject to change. Thus, one cannot reliably import Windows API functions by their ordinals.

Importing functions by ordinal provides only slightly better performance than importing them by name: export tables of DLLs are sorted by name, so a binary search can be used to find a function. The index of the found name is then used to look up the ordinal in the Export Ordinal table. In 16-bit Windows, the name table was not sorted, so the name lookup overhead was much more noticeable.

It is also possible to bind an executable to a specific version of a DLL, that is, to resolve the addresses of imported functions at compile time. For bound imports, the linker saves the timestamp and checksum of the DLL to which the import is bound. At run time, Windows checks to see if the same version of the library is being used, and if so, Windows bypasses processing the imports. Otherwise, if the library is different from the one which was bound to, Windows processes the imports in the normal way.

Bound executables load somewhat faster if they are run in the same environment that they were compiled for, and take exactly the same time if they are run in a different environment, so there is no drawback to binding the imports. For example, all the standard Windows applications are bound to the system DLLs of their respective Windows release. A good opportunity to bind an application's imports to its target environment is during the application's installation. This keeps the libraries "bound" until the next OS update. It does, however, change the checksum of the executable, so it is not something that can be done with signed programs, or programs that are managed by a configuration management tool that uses checksums (such as MD5 checksums) to manage file versions. As more recent Windows versions have moved away from having fixed addresses for every loaded library (for security reasons), the opportunity and value of binding an executable is decreasing.

Explicit run-time linking

DLL files may be explicitly loaded at run time, a process referred to simply as run-time dynamic linking by Microsoft, by using the LoadLibrary (or LoadLibraryEx) API function. The GetProcAddress API function is used to look up exported symbols by name, and FreeLibrary is used to unload the DLL. These functions are analogous to dlopen, dlsym, and dlclose in the POSIX standard API. The procedure for explicit run-time linking is the same in any language that supports pointers to functions, since it depends on the Windows API rather than language constructs.
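As a brief illustrative sketch combining run-time linking with the two lookup mechanisms described under "Symbol resolution and binding", the following program resolves the same export once by name and once by ordinal. The Example.dll library and the assumption that AddNumbers is exported at ordinal 1 are hypothetical:

#include <windows.h>
#include <stdio.h>

typedef double (*AddFn)(double, double);

int main(void)
{
    // Load the hypothetical DLL at run time
    HMODULE lib = LoadLibrary(TEXT("Example.dll"));
    if (lib == NULL) {
        printf("ERROR: unable to load DLL\n");
        return 1;
    }

    // Look up the export by name ...
    AddFn byName = (AddFn) GetProcAddress(lib, "AddNumbers");

    // ... and by ordinal: GetProcAddress accepts an ordinal passed in
    // the low-order word of the name pointer (ordinal 1 is assumed here).
    AddFn byOrdinal = (AddFn) GetProcAddress(lib, MAKEINTRESOURCEA(1));

    if (byName != NULL && byOrdinal != NULL)
        printf("Same address resolved twice: %s\n",
               byName == byOrdinal ? "yes" : "no");

    FreeLibrary(lib);
    return 0;
}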
Delayed loading

Normally, an application that is linked against a DLL's import library will fail to start if the DLL cannot be found, because Windows will not run the application unless it can find all of the DLLs that the application may need. However, an application may be linked against an import library to allow delayed loading of the dynamic library. In this case, the operating system will not try to find or load the DLL when the application starts; instead, a stub is included in the application by the linker which will try to find and load the DLL through LoadLibrary and GetProcAddress when one of its functions is called. If the DLL cannot be found or loaded, or the called function does not exist, the application will generate an exception, which may be caught and handled appropriately. If the application does not handle the exception, it will be caught by the operating system, which will terminate the program with an error message. The delayed loading mechanism also provides notification hooks, allowing the application to perform additional processing or error handling when the DLL is loaded and/or any DLL function is called.

Compiler and language considerations

Delphi

In a source file, the keyword library is used instead of program. At the end of the file, the functions to be exported are listed in an exports clause. Delphi does not need LIB files to import functions from DLLs; to link to a DLL, the external keyword is used in the function declaration to signal the DLL name, followed by name to name the symbol (if different) or index to identify the ordinal index.

Microsoft Visual Basic

In Visual Basic (VB), only run-time linking is supported; but in addition to using the LoadLibrary and GetProcAddress API functions, declarations of imported functions are allowed. When importing DLL functions through declarations, VB will generate a run-time error if the DLL file cannot be found. The developer can catch the error and handle it appropriately.

When creating DLLs in VB, the IDE will only allow creation of ActiveX DLLs; however, methods have been created to allow the user to explicitly tell the linker to include a .DEF file which defines the ordinal position and name of each exported function. This allows the user to create a standard Windows DLL using Visual Basic (version 6 or lower) which can be referenced through a "Declare" statement.

C and C++

Microsoft Visual C++ (MSVC) provides several extensions to standard C++ which allow functions to be specified as imported or exported directly in the C++ code; these have been adopted by other Windows C and C++ compilers, including Windows versions of GCC. These extensions use the attribute __declspec before a function declaration. Note that when C functions are accessed from C++, they must also be declared as extern "C" in C++ code, to inform the compiler that the C linkage should be used. Besides specifying imported or exported functions using __declspec attributes, they may be listed in the IMPORTS or EXPORTS section of the DEF file used by the project. The DEF file is processed by the linker, rather than the compiler, and thus it is not specific to C++.

DLL compilation will produce both DLL and LIB files. The LIB file (import library) is used to link against a DLL at compile time; it is not necessary for run-time linking.
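For illustration, here is a minimal sketch of the library side assumed by this article's examples. The Example.dll name and the AddNumbers function are the article's running example; the build command shown is MSVC-specific and given only as one possibility:

// Example.c - a minimal sketch of a DLL source exporting AddNumbers.
// Building with MSVC, e.g. "cl /LD Example.c", produces both
// Example.dll and the import library Example.lib mentioned above.
#include <windows.h>

__declspec(dllexport) double AddNumbers(double a, double b)
{
    return a + b;
}

// An explicit DllMain entry point is optional; if it is omitted,
// the C runtime supplies a default one.
BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpvReserved)
{
    return TRUE;
}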
Unless the DLL is a Component Object Model (COM) server, the DLL file must be placed in one of the directories listed in the PATH environment variable, in the default system directory, or in the same directory as the program using it. COM server DLLs are registered using regsvr32.exe, which places the DLL's location and its globally unique ID (GUID) in the registry. Programs can then use the DLL by looking up its GUID in the registry to find its location, or create an instance of the COM object indirectly using its class identifier and interface identifier.

Programming examples

Using DLL imports

The following examples show how to use language-specific bindings to import symbols for linking against a DLL at compile time.

Delphi

{$APPTYPE CONSOLE}

program Example;

// import function that adds two numbers
function AddNumbers(a, b : Double): Double; StdCall; external 'Example.dll';

// main program
var
  R: Double;

begin
  R := AddNumbers(1, 2);
  Writeln('The result was: ', R);
end.

C

The file 'Example.lib' must be included in the project (assuming that Example.dll is generated) before static linking. The file 'Example.lib' is automatically generated by the compiler when compiling the DLL. Omitting this step would cause a linking error, as the linker would not know where to find the definition of AddNumbers. The DLL file 'Example.dll' may also have to be copied to the location where the .exe file is generated, so that the following code can find it at run time:

#include <windows.h>
#include <stdio.h>

// Import function that adds two numbers
extern "C" __declspec(dllimport) double AddNumbers(double a, double b);

int main(int argc, char *argv[])
{
    double result = AddNumbers(1, 2);
    printf("The result was: %f\n", result);
    return 0;
}

Using explicit run-time linking

The following examples show how to use the run-time loading and linking facilities using language-specific Windows API bindings.

Note that all four samples are vulnerable to DLL preloading attacks, since Example.dll can be resolved to a place unintended by the author (unless explicitly excluded, the application directory is searched before system library locations, and without the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SafeDllSearchMode or HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\CWDIllegalInDLLSearch registry settings, the current working directory is searched before the system library directories), and thus to a malicious version of the library. See the reference for Microsoft's guidance on safe library loading, which describes how to remove the application directory and the current working directory from the DLL search path.

Microsoft Visual Basic

Option Explicit
Declare Function AddNumbers Lib "Example.dll" _
    (ByVal a As Double, ByVal b As Double) As Double

Sub Main()
    Dim Result As Double
    Result = AddNumbers(1, 2)
    Debug.Print "The result was: " & Result
End Sub

Delphi

program Example;

{$APPTYPE CONSOLE}

uses Windows;

var
  AddNumbers: function(a, b: integer): Double; StdCall;
  LibHandle: HMODULE;

begin
  LibHandle := LoadLibrary('example.dll');
  if LibHandle <> 0 then
    AddNumbers := GetProcAddress(LibHandle, 'AddNumbers');
  if Assigned(AddNumbers) then
    Writeln('1 + 2 = ', AddNumbers(1, 2));
  Readln;
end.
C

#include <windows.h>
#include <stdio.h>

// DLL function signature
typedef double (*importFunction)(double, double);

int main(int argc, char **argv)
{
    importFunction addNumbers;
    double result;
    HINSTANCE hinstLib;

    // Load DLL file
    hinstLib = LoadLibrary(TEXT("Example.dll"));
    if (hinstLib == NULL) {
        printf("ERROR: unable to load DLL\n");
        return 1;
    }

    // Get function pointer
    addNumbers = (importFunction) GetProcAddress(hinstLib, "AddNumbers");
    if (addNumbers == NULL) {
        printf("ERROR: unable to find DLL function\n");
        FreeLibrary(hinstLib);
        return 1;
    }

    // Call function
    result = addNumbers(1, 3);

    // Unload DLL file
    FreeLibrary(hinstLib);

    // Display result
    printf("The result was: %f\n", result);

    return 0;
}

Python

The Python ctypes binding uses the POSIX API on POSIX systems.

import ctypes

my_dll = ctypes.cdll.LoadLibrary("Example.dll")

# The following "restype" method specification is needed to make
# Python understand what type is returned by the function.
my_dll.AddNumbers.restype = ctypes.c_double

p = my_dll.AddNumbers(ctypes.c_double(1.0), ctypes.c_double(2.0))

print("The result was:", p)

Component Object Model

The Component Object Model (COM) defines a binary standard to host the implementation of objects in DLL and EXE files. It provides mechanisms to locate and version those files, as well as a language-independent and machine-readable description of the interface. Hosting COM objects in a DLL is more lightweight and allows them to share resources with the client process. This allows COM objects to implement powerful back-ends for simple GUI front ends such as Visual Basic and ASP. They can also be programmed from scripting languages.

DLL hijacking

Due to a vulnerability commonly known as DLL hijacking, DLL spoofing, DLL preloading or binary planting, many programs will load and execute a malicious DLL contained in the same folder as a data file opened by these programs. The vulnerability was discovered by Georgi Guninski in 2000. In August 2010 it gained worldwide publicity after ACROS Security rediscovered it and many hundreds of programs were found vulnerable. Programs that are run from unsafe locations, i.e. user-writable folders like the Downloads or the Temp directory, are almost always susceptible to this vulnerability.

See also

Dependency Walker, a utility which displays exported and imported functions of DLL and EXE files
Dynamic library
Library (computing)
Linker (computing)
Loader (computing)
Moricons.dll
Object file
Shared library
Static library
DLL Hell

References

Hart, Johnson. Windows System Programming, Third Edition. Addison-Wesley, 2005.
Rector, Brent et al. Win32 Programming. Addison-Wesley Developers Press, 1997.

External links

dllexport, dllimport on MSDN
Dynamic-Link Libraries on MSDN
Dynamic-Link Library Security on MSDN
Dynamic-Link Library Search Order on MSDN
Microsoft Security Advisory: Insecure library loading could allow remote code execution
What is a DLL? on Microsoft support site
Dynamic-Link Library Functions on MSDN
Microsoft Portable Executable and Common Object File Format Specification (Microsoft specification for DLL files)
Carpet Bombing and Directory Poisoning
MS09-014: Addressing the Safari Carpet Bomb vulnerability
More information about the DLL Preloading remote attack vector
An update on the DLL-preloading remote attack vector
Load Library Safely

Computer file formats
Computer libraries
Windows administration
Articles with example C code
Dynamic-link library
Technology
5,434
49,498,411
https://en.wikipedia.org/wiki/Misuse%20of%20p-values
Misuse of p-values is common in scientific research and scientific education. p-values are often used or interpreted incorrectly; the American Statistical Association states that p-values can indicate how incompatible the data are with a specified statistical model. From a Neyman–Pearson hypothesis testing approach to statistical inferences, the data obtained by comparing the p-value to a significance level will yield one of two results: either the null hypothesis is rejected (which however does not prove that the null hypothesis is false), or the null hypothesis cannot be rejected at that significance level (which however does not prove that the null hypothesis is true). From a Fisherian statistical testing approach to statistical inferences, a low p-value means either that the null hypothesis is true and a highly improbable event has occurred, or that the null hypothesis is false.

Clarifications about p-values

The following list clarifies some issues that are commonly misunderstood regarding p-values:

1. The p-value is not the probability that the null hypothesis is true, or the probability that the alternative hypothesis is false. A p-value can indicate the degree of compatibility between a dataset and a particular hypothetical explanation (such as a null hypothesis). Specifically, the p-value can be taken as the probability of obtaining an effect that is at least as extreme as the observed effect, given that the null hypothesis is true. This should not be confused with the probability that the null hypothesis is true given the observed effect (see base rate fallacy). In fact, frequentist statistics does not attach probabilities to hypotheses.

2. The p-value is not the probability that the observed effects were produced by random chance alone. The p-value is computed under the assumption that a certain model, usually the null hypothesis, is true. This means that the p-value is a statement about the relation of the data to that hypothesis.

3. The 0.05 significance level is merely a convention. The 0.05 significance level (alpha level) is often used as the boundary between a statistically significant and a statistically non-significant p-value. However, this does not imply that there is generally a scientific reason to consider results on opposite sides of any threshold as qualitatively different.

4. The p-value does not indicate the size or importance of the observed effect. A small p-value can be observed for an effect that is not meaningful or important. In fact, the larger the sample size, the smaller the minimum effect needed to produce a statistically significant p-value (see effect size).

Issues 1 and 2 can be illustrated by analogy to the Prosecutor's Fallacy in their shared underlying 2×2 contingency table format, where the user's convenient 90° rotation of attention replaces the intended sample space with an illicit sample space. These p-value misuses are thus analogous to probability's Fallacy of the Transformed Conditional and in turn to categorical logic's Fallacy of Illicit Conversion.

Representing probabilities of hypotheses

A frequentist approach rejects the validity of representing probabilities of hypotheses: hypotheses are true or false, not something that can be represented with a probability. Bayesian statistics actively models the likelihood of hypotheses.
The p-value does not in itself allow reasoning about the probabilities of hypotheses, which requires multiple hypotheses or a range of hypotheses, with a prior distribution of likelihoods between them, in which case Bayesian statistics could be used. There, one uses a likelihood function for all possible values of the prior instead of the p-value for a single null hypothesis. The p-value describes a property of data when compared to a specific null hypothesis; it is not a property of the hypothesis itself. For the same reason, p-values do not give the probability that the data were produced by random chance alone.

Multiple comparisons problem

The multiple comparisons problem occurs when one considers a set of statistical inferences simultaneously or infers a subset of parameters selected based on the observed values. It is also known as the look-elsewhere effect. Errors in inference, including confidence intervals that fail to include their corresponding population parameters or hypothesis tests that incorrectly reject the null hypothesis, are more likely to occur when one considers the set as a whole. Several statistical techniques have been developed to prevent this from happening, allowing significance levels for single and multiple comparisons to be directly compared. These techniques generally require a higher significance threshold for individual comparisons, so as to compensate for the number of inferences being made.

The webcomic xkcd satirized misunderstandings of p-values by portraying scientists investigating the claim that eating jellybeans caused acne. After failing to find a significant (p < 0.05) correlation between eating jellybeans and acne, the scientists investigate 20 different colors of jellybeans individually, without adjusting for multiple comparisons. They find one color (green) nominally associated with acne (p < 0.05). The results are then reported by a newspaper as indicating that green jellybeans are linked to acne at a 95% confidence level, as if green were the only color tested. In fact, if 20 independent tests are conducted at the 0.05 significance level and all null hypotheses are true, there is a 64.2% chance of obtaining at least one false positive, and the expected number of false positives is 1 (i.e. 0.05 × 20).

In general, the family-wise error rate (FWER), the probability of obtaining at least one false positive, increases with the number of tests performed. The FWER when all null hypotheses are true, for m independent tests each conducted at significance level α, is 1 − (1 − α)^m; a short program demonstrating this calculation is given below.

See also

Estimation statistics
Replication crisis
Metascience
Misuse of statistics
Statcheck

References

Further reading

Statistical hypothesis testing
Probability fallacies
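To make the arithmetic above concrete, here is a small self-contained C program (illustrative only) that reproduces the 64.2% figure and the expected number of false positives for the 20-color jellybean scenario:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double alpha = 0.05; /* per-test significance level */
    const int m = 20;          /* number of independent tests (colors) */

    /* Family-wise error rate: probability of at least one false
       positive when all null hypotheses are true. */
    double fwer = 1.0 - pow(1.0 - alpha, (double) m);

    printf("FWER for %d tests at alpha = %.2f: %.3f\n", m, alpha, fwer);
    printf("Expected number of false positives: %.2f\n", alpha * m);
    return 0;
}

/* Prints:
   FWER for 20 tests at alpha = 0.05: 0.642
   Expected number of false positives: 1.00 */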
Misuse of p-values
Mathematics
1,207
54,709,463
https://en.wikipedia.org/wiki/Periodic%20counter-current%20chromatography
Periodic counter-current chromatography (PCC) is a method for running affinity chromatography in a quasi-continuous manner. Today, the process is mainly employed for the purification of antibodies in the biopharmaceutical industry as well as in research and development. When purifying antibodies, protein A is used as the affinity matrix. However, periodic counter-current processes can be applied to any affinity-type chromatography.

Basic principle

In conventional affinity chromatography, a single chromatography column is loaded with feed material only up to the point at which the target material (product) would no longer be retained by the affinity material. The resin with the adsorbed product on it is then washed to remove impurities. Finally, the pure product is eluted with a different buffer. Notably, if too much feed material is loaded onto the column, the product can break through and is consequently lost. It is therefore very important to only partially load the column to maximize the yield.

Periodic counter-current chromatography circumvents this problem by using more than one column. PCC processes can be run with any number of columns, starting from two. The following paragraph explains a two-column version of PCC; protocols with more columns rely on the same principles (see below).

In Step 1, the so-called sequential loading phase, columns 1 and 2 are interconnected. Column 1 is fully loaded with sample while its breakthrough is captured on column 2. In Step 2, column 1 is washed, eluted, cleaned and re-equilibrated while loading continues separately on column 2. In Step 3, after regeneration of column 1, the columns are again interconnected and column 2 is fully loaded while its breakthrough is captured on column 1. Finally, in Step 4, column 2 is washed, eluted, cleaned and re-equilibrated while loading continues independently on column 1. This cyclic process is repeated continuously.

Several variations of periodic counter-current chromatography with more than two columns exist. In these cases, additional columns are either placed within the feed stream during loading, which has the same effect as using longer columns, or kept in an unoccupied stand-by mode during loading. The stand-by mode offers additional assurance that the main process is not influenced by washing and cleaning protocols, albeit in practice this is rarely required. On the other hand, the underutilized columns reduce the theoretical maximum productivity of such processes. Generally, the advantages and disadvantages of different multi-column protocols are the subject of debate. However, compared to single-column batch processes, periodic counter-current processes undoubtedly provide significantly increased productivity.

Dynamic process control

On the time scale of continuous chromatography runs, it is fairly common to observe changes in important process parameters, such as column health, buffer quality, feed titer (concentration) or feed composition. Such changes result in an altered maximum column capacity relative to the amount of loaded feed material. In order to achieve a steady quality and yield for each process cycle, the timing of the individual process steps therefore has to be adjusted. Manual changes are in principle conceivable, but rather impractical.
More commonly, dynamic process control algorithms monitor the process parameters and automatically apply changes as needed. Two different operating modes for dynamic process controllers are in use today.

The first, called DeltaUV, monitors the difference between two signals from detectors situated before and after the first column. During initial loading, there is a large difference between the two signals, but it diminishes as the impurities make their way through the column. Once the column is fully saturated with impurities and only additional product is being held back, the difference between the signals reaches a constant value. As long as the product is completely captured on the column, the difference between the signals remains constant. As soon as some of the product breaks through the column (compare above), the difference diminishes. Thus, the timing and amount of product breakthrough can be determined.

The second possibility, called AutomAb, requires only the signal of a single detector situated behind the first column. During initial loading, the signal increases as more and more impurities make their way through the column. When the column is saturated with impurities, and as long as the product is completely captured on the column, the signal remains constant. As soon as some of the product breaks through the column (compare above), the signal increases again. Thus, the timing and amount of product breakthrough can again be determined.

Both approaches work equally well in theory. In practice, the requirement for two synchronized signals and the exposure of one detector to unpurified feed material make the DeltaUV approach less reliable than AutomAb. (A simplified sketch of the DeltaUV detection logic is given below.)

Commercial situation

As of 2017, GE Healthcare holds patents around three-column periodic counter-current chromatography; this technology is used in its Äkta PCC instrument. Likewise, ChromaCon holds patents for an optimized two-column version (CaptureSMB). CaptureSMB is used in ChromaCon's Contichrom CUBE and, under license, in YMC's Ecoprime Twin systems. Additional manufacturers of systems capable of periodic counter-current chromatography include Novasep and Pall.

References

Chromatography
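As an illustration of the DeltaUV detection logic described above, the following sketch classifies hypothetical detector readings. All numbers, thresholds and units are invented for illustration; this is not any vendor's actual control algorithm:

/* Illustrative sketch of DeltaUV-style monitoring: compare hypothetical
   UV traces (mAU) before and after the first column and flag the phase. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double pre[]  = {100, 100, 100, 100, 100, 100, 100, 100};
    double post[] = {  5,  40,  65,  70,  70,  70,  82,  93};
    const int n = 8;
    const double plateau = 30.0; /* assumed saturated-column difference */
    const double tol = 5.0;      /* deviation treated as breakthrough */

    for (int i = 0; i < n; i++) {
        double delta = pre[i] - post[i];
        if (delta > plateau + tol)
            printf("t=%d  delta=%5.1f  initial loading\n", i, delta);
        else if (fabs(delta - plateau) <= tol)
            printf("t=%d  delta=%5.1f  product fully captured\n", i, delta);
        else
            printf("t=%d  delta=%5.1f  product breakthrough -> switch columns\n",
                   i, delta);
    }
    return 0;
}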
Periodic counter-current chromatography
Chemistry
1,126
48,222,358
https://en.wikipedia.org/wiki/Sodium%20channel%20opener
A sodium channel opener is a type of drug which facilitates ion transmission through sodium channels. Examples include toxins such as aconitine, veratridine, batrachotoxin, robustoxin, palytoxin and ciguatoxins, as well as insecticides (DDT and pyrethroids), which activate voltage-gated sodium channels (VGSCs); and solnatide (AP301), which activates the epithelial sodium channel (ENaC). See also Sodium channel blocker References Ion channel openers Sodium channels
Sodium channel opener
Chemistry
118
13,565,605
https://en.wikipedia.org/wiki/Deptropine
Deptropine (Brontina) also known as dibenzheptropine, is an antihistamine with anticholinergic properties acting at the H1 receptor. It is usually marketed as the citrate salt. References H1 receptor antagonists Tropanes Dibenzocycloheptenes Ethers
Deptropine
Chemistry
69
12,281,773
https://en.wikipedia.org/wiki/Graph%20of%20groups
In geometric group theory, a graph of groups is an object consisting of a collection of groups indexed by the vertices and edges of a graph, together with a family of monomorphisms of the edge groups into the vertex groups. There is a unique group, called the fundamental group, canonically associated to each finite connected graph of groups. It admits an orientation-preserving action on a tree: the original graph of groups can be recovered from the quotient graph and the stabilizer subgroups. This theory, commonly referred to as Bass–Serre theory, is due to the work of Hyman Bass and Jean-Pierre Serre. Definition A graph of groups over a graph Y is an assignment to each vertex x of Y of a group G_x and to each edge e of Y of a group G_e, as well as monomorphisms φ_{e,0} and φ_{e,1} mapping G_e into the groups assigned to the vertices at its ends. Fundamental group Let T be a spanning tree for Y and define the fundamental group π1(G, Y, T) to be the group generated by the vertex groups G_x and elements g_e for each edge e of Y, with the following relations: g_ē = g_e^{-1} if ē is the edge e with the reverse orientation; g_e φ_{e,0}(a) g_e^{-1} = φ_{e,1}(a) for all a in G_e; and g_e = 1 if e is an edge in T. This definition is independent of the choice of T. The benefit of defining the fundamental groupoid of a graph of groups, as shown by Higgins (1976), is that it is defined independently of base point or tree. That work also proves a nice normal form for the elements of the fundamental groupoid, which includes the normal form theorems for a free product with amalgamation and for an HNN extension. Structure theorem Let G be the fundamental group corresponding to the spanning tree T. For every vertex x and edge e, the groups G_x and G_e can be identified with their images in G. It is possible to define a graph with vertex set and edge set the disjoint unions of all the coset spaces G/G_x and G/G_e respectively. This graph is a tree, called the universal covering tree, on which G acts. It admits the graph Y as fundamental domain. The graph of groups given by the stabilizer subgroups on the fundamental domain corresponds to the original graph of groups. Examples A graph of groups on a graph with one edge and two vertices corresponds to a free product with amalgamation. A graph of groups on a single vertex with a loop corresponds to an HNN extension (presentations for these two cases are written out below). Generalisations The simplest possible generalisation of a graph of groups is a 2-dimensional complex of groups. These are modeled on orbifolds arising from cocompact properly discontinuous actions of discrete groups on simplicial complexes that have the structure of CAT(0) spaces. The quotient of the simplicial complex has finite stabilizer groups attached to vertices, edges and triangles, together with monomorphisms for every inclusion of simplices. A complex of groups is said to be developable if it arises as the quotient of a CAT(0) simplicial complex. Developability is a non-positive curvature condition on the complex of groups: it can be verified locally by checking that all circuits occurring in the links of vertices have length at least six. Such complexes of groups originally arose in the theory of Bruhat–Tits buildings; their general definition and continued study have been inspired by the ideas of Gromov. See also Bass–Serre theory Right-angled Artin group References Higgins, P. J. (1976). "The fundamental groupoid of a graph of groups". Journal of the London Mathematical Society. Serre, Jean-Pierre. Trees. Translated by John Stillwell from "Arbres, amalgames, SL2", written with the collaboration of Hyman Bass, 3rd edition, Astérisque 46 (1983). See Chapter I.5. Geometric group theory
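For concreteness, the two cases from the Examples section can be written out explicitly. The LaTeX fragment below restates the standard presentations; the symbols A, B, C for the vertex and edge groups are generic placeholders.

```latex
% One edge e joining two vertices, with vertex groups A, B and edge
% group C embedded by monomorphisms \varphi_0 : C \to A, \varphi_1 : C \to B.
% The edge lies in the spanning tree, so g_e = 1 and the fundamental
% group is the amalgamated free product:
\[
  \pi_1 \;\cong\; A *_C B
  \;=\; \bigl\langle\, A,\, B \;\bigm|\; \varphi_0(c) = \varphi_1(c)
        \ \text{for all } c \in C \,\bigr\rangle .
\]
% A loop e at a single vertex with vertex group A and edge group C
% contributes a stable letter t = g_e, giving the HNN extension:
\[
  \pi_1 \;\cong\; A *_C
  \;=\; \bigl\langle\, A,\, t \;\bigm|\; t\,\varphi_0(c)\,t^{-1} = \varphi_1(c)
        \ \text{for all } c \in C \,\bigr\rangle .
\]
```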
Graph of groups
Physics
709
31,589,196
https://en.wikipedia.org/wiki/Formally%20smooth%20map
In algebraic geometry and commutative algebra, a ring homomorphism f : A → B is called formally smooth (from French: formellement lisse) if it satisfies the following infinitesimal lifting property: suppose B is given the structure of an A-algebra via the map f; then, given a commutative A-algebra C and a nilpotent ideal N ⊆ C, any A-algebra homomorphism B → C/N may be lifted to an A-algebra map B → C. If moreover any such lifting is unique, then f is said to be formally étale. Formally smooth maps were defined by Alexander Grothendieck in Éléments de géométrie algébrique IV. For finitely presented morphisms, formal smoothness is equivalent to the usual notion of smoothness. Examples Smooth morphisms All smooth morphisms are equivalent to morphisms locally of finite presentation which are formally smooth. Hence formal smoothness is a slight generalization of smooth morphisms. Non-example One method for detecting formal smoothness of a scheme is using the infinitesimal lifting criterion. For example, using the truncation morphism k[ε]/(ε³) → k[ε]/(ε²), the infinitesimal lifting criterion can be described with a commutative square in which a map to k[ε]/(ε²) is to be lifted along the truncation (a sketch of the diagram is given at the end of this entry). For example, if X = Spec(k[x,y]/(xy)), consider the tangent vector at the origin given by the ring morphism k[x,y]/(xy) → k[ε]/(ε²) sending x ↦ ε, y ↦ ε. Because ε² = 0 in k[ε]/(ε²), this is a valid morphism of commutative rings. Then, since a lifting of this morphism to k[ε]/(ε³) is of the form x ↦ ε + aε², y ↦ ε + bε², the product xy is sent to ε² + (a + b)ε³ = ε², which is non-zero, so there cannot be an infinitesimal lift; hence X is not formally smooth. This also proves this morphism is not smooth, by the equivalence between formally smooth morphisms locally of finite presentation and smooth morphisms. See also Dual number Smooth morphism Deformation theory References External links Formally smooth with smooth fibers, but not smooth https://mathoverflow.net/q/333596 Formally smooth but not smooth https://mathoverflow.net/q/195 Commutative algebra Algebraic geometry
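The lifting square from the non-example above can be typeset as follows; this is a sketch of the standard diagram, written with the notation introduced in this entry.

```latex
% Infinitesimal lifting square for X = Spec(k[x,y]/(xy)) against the
% truncation k[\varepsilon]/(\varepsilon^3) \to k[\varepsilon]/(\varepsilon^2).
% The lift along the right-hand truncation is the map that fails to exist.
\[
\begin{array}{ccc}
  k[x,y]/(xy) & \longrightarrow & k[\varepsilon]/(\varepsilon^2) \\
  \uparrow    &                 & \uparrow \\
  k           & \longrightarrow & k[\varepsilon]/(\varepsilon^3)
\end{array}
\qquad
x \mapsto \varepsilon + a\varepsilon^2,\;
y \mapsto \varepsilon + b\varepsilon^2
\;\Longrightarrow\;
xy \mapsto \varepsilon^2 \neq 0 .
\]
```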
Formally smooth map
Mathematics
411
31,082,169
https://en.wikipedia.org/wiki/Suillus%20acidus
Suillus acidus is an edible species of mushroom in the genus Suillus. The species was first described by Charles Horton Peck as Boletus acidus in 1905. References External links acidus Fungi of North America Edible fungi Fungi described in 1905 Fungus species
Suillus acidus
Biology
53
34,322,027
https://en.wikipedia.org/wiki/Voronezh%20radar
Voronezh radars are the current generation of Russian early-warning radar, providing long-distance monitoring of airspace against ballistic missile attack, as well as aircraft monitoring. The first radar, in Lekhtusi near St Petersburg, became operational in 2009. There is a plan to replace older radars with the Voronezh by 2020. Their common name follows the pattern of Soviet radars in being named after a river, the Voronezh. The previous generation of radars comprised the Daryal (after the Darial Gorge), Volga (after the Volga River) and Daugava (after the Daugava River); the generation before that comprised the Dnepr (Dnieper River) and Dnestr (Dniester River). The Voronezh radars are described as highly prefabricated, meaning that they have a set-up time of months rather than years and need fewer personnel than previous generations. They are also modular, so that a radar can be brought into (partial) operation while still incomplete. Russia has used the launch of these new radars to raise its concerns about US missile defence in Europe. At the launch of the Kaliningrad radar in November 2011, Russian President Dmitry Medvedev was quoted as saying "I expect that this step [the launch of the radar] will be seen by our partners as the first signal of our country's readiness to make an adequate response to the threats which the missile shield poses for our strategic nuclear forces." Types All types are phased array radars. Voronezh-M (77Ya6-M) works in the meter range of wavelengths (VHF) and was designed by RTI Mints. Voronezh-DM (77Ya6-DM) works in the decimeter range (UHF) and was designed by NPK NIIDAR. It has a range of up to 10,000 km and is capable of simultaneously tracking 500 objects. Its horizon range is 6,000 km and its vertical range 8,000 km; because of the radar horizon, such ranges apply only to targets at altitudes of several kilometres or more (a back-of-envelope horizon calculation is given below). Russia claims the radar can detect targets the size of a football at a distance of 8,000 km. Voronezh-VP (77Ya6-VP) works in the meter range (VHF) and was designed by RTI Mints. The only one built has 6 segments instead of the 3 of the Voronezh-M. A Voronezh-M is claimed to cost 2.85 billion rubles and a Voronezh-DM 4.3 billion rubles. This compares to the 5 billion ruble cost of a Dnepr and 19.8 billion rubles for a Daryal, at current prices. Voronezh systems are manufactured at the Saransk Television Plant. Their designers, Sergey Boev (RTI), Sergey Saprykin (NIIDAR), and Valeriy Karasev (RTI Mints), were jointly awarded the 2011 State Prize for Science and Technology for their work on the Voronezh. Installations The first radar, a Voronezh-M, was built in Lekhtusi near St Petersburg. It entered testing in 2005 and was declared "combat ready" in 2012. It is adjacent to the A.F. Mozhaysky Military-Space Academy, an officer training centre for the Aerospace Defence Forces. It is described as filling the early-warning gap caused by the closure of the radar station at Skrunda in Latvia in 1998, although the Volga radar in Hantsavichy, Belarus, has also been described as doing this, and as a UHF radar the Volga has a different resolution from the VHF Voronezh-M. The second radar is at Armavir in southern Russia, on the site of Baronovsky Airfield. It is a Voronezh-DM, a UHF radar, and was announced as replacing the coverage lost when the Dnestr radars in Sevastopol and Mukachevo, Ukraine, were closed in 2009. There are actually two radars at this site; the first covers the south-west and could replace the Ukrainian radars.
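The parenthetical remark about the radar horizon can be sanity-checked with the textbook line-of-sight formula d ≈ √(2Rh), where R is an effective Earth radius (commonly taken as 4/3 of the true radius to allow for atmospheric refraction). The Python sketch below is a back-of-envelope illustration, not a model of the Voronezh radar itself.

```python
# Back-of-envelope radar horizon: how high must a target be for a
# ground-based radar to see it at long range? Uses the common
# 4/3-Earth-radius approximation for refraction.
from math import sqrt

R_EFF = 4 / 3 * 6371e3   # effective Earth radius in metres

def horizon_km(height_m: float) -> float:
    """Distance in km to the radar horizon for a given target altitude."""
    return sqrt(2 * R_EFF * height_m) / 1e3

for h in (1e3, 10e3, 100e3):   # 1 km, 10 km, 100 km altitudes
    print(f"altitude {h/1e3:>5.0f} km -> horizon ~{horizon_km(h):.0f} km")
# Even at 100 km altitude the geometric horizon is only ~1300 km, so
# multi-thousand-km ranges indeed require high-altitude targets.
```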
The second radar faces south-east and could replace the Daryal radar in Gabala that closed at the end of 2012. The radar station at Armavir was damaged by a Ukrainian drone strike in 2024, during the Russo-Ukrainian War. The third radar is to the south of Pionersky in Kaliningrad, on the site of Dunayevka airfield. It is another UHF Voronezh-DM and is surrounded by countries that are now in NATO. There is only one radar here; it became fully operational in 2014. A radar was built at Mishelevka in Irkutsk on the site of the former, never operational, Daryal radar, which was demolished in 2011. The radar is a Voronezh-VP and is sited close to the former Daryal transmitter building. This radar covers the south and can replace one of the two Dnepr radars at that site. Another Voronezh-VP array, giving 240 degrees of coverage, was planned to be ready by 2014. It was planned to build a Voronezh-VP radar at Pechora in 2015 to replace the Daryal there. Similarly, a Voronezh-VP was planned for Olenegorsk in 2017 to replace the Dnepr/Daugava. As part of the public negotiations over the future of the Gabala Radar Station it had been suggested that the Daryal there could be replaced by a Voronezh-VP in 2017, although the station closed at the end of 2012 instead. Work started on the station at Barnaul in 2013; other announced locations are Omsk, Yeniseysk and Orenburg. On 20 December 2017, three new Voronezh radar stations entered service in Russia, increasing the total number of operational radars to 8 (Armavir Radar Station operates 2 radars). The radars are located in Krasnoyarsk Krai, Altai Krai and Orenburg Oblast. According to Russia's Ministry of Defence, construction of new radar stations near Vorkuta and Murmansk (Olenegorsk) would be completed in 2022. Locations References External links Voronezh-DM Armavir photos from Novosti Kosmonavtiki Russian Space Forces Russian military radars Air defence radar networks NIIDAR products Early warning systems Military equipment introduced in the 2000s
Voronezh radar
Technology
1,327
339,220
https://en.wikipedia.org/wiki/Alexei%20Abrikosov%20%28physicist%29
Alexei Alexeyevich Abrikosov (; June 25, 1928 – March 29, 2017) was a Soviet, Russian and American theoretical physicist whose main contributions are in the field of condensed matter physics. He was the co-recipient of the 2003 Nobel Prize in Physics, with Vitaly Ginzburg and Anthony James Leggett, for theories about how matter can behave at extremely low temperatures. Education and early life Abrikosov was born in Moscow, Russian SFSR, Soviet Union, on June 25, 1928, to a pair of physicians: Aleksey Abrikosov and Fani (née Wulf). His mother was Jewish. After graduating from high school in 1943, Abrikosov began studying energy technology. He graduated from Moscow State University in 1948. From 1948 to 1965, he worked at the Institute for Physical Problems of the USSR Academy of Sciences, where he received his Ph.D. in 1951 for the theory of thermal diffusion in plasmas, and then his Doctor of Physical and Mathematical Sciences (a "higher doctorate") degree in 1955 for a thesis on quantum electrodynamics at high energies. Abrikosov moved to the US in 1991 and lived there until his death in 2017, in Palo Alto, California. While in the US, Abrikosov was elected to the National Academy of Sciences in 2000 and, in 2001, as a foreign member of the Royal Society. Career From 1965 to 1988, he worked at the Landau Institute for Theoretical Physics (USSR Academy of Sciences). He was also a professor at Moscow State University from 1965. In addition, he held tenure at the Moscow Institute of Physics and Technology from 1972 to 1976, and at the Moscow Institute of Steel and Alloys from 1976 to 1991. He served as a full member of the USSR Academy of Sciences from 1987 to 1991. In 1991, he became a full member of the Russian Academy of Sciences. In two works, in 1952 and 1957, Abrikosov explained how magnetic flux can penetrate a class of superconductors. This class of materials is called type-II superconductors. The accompanying arrangement of magnetic flux lines is called the Abrikosov vortex lattice. Together with Lev Gor'kov and Igor Dzyaloshinskii, Abrikosov wrote an iconic book on theoretical solid-state physics, which has been used to train physicists in the field for decades. From 1991 until his retirement, he worked at Argonne National Laboratory in the U.S. state of Illinois. Abrikosov was an Argonne Distinguished Scientist at the Condensed Matter Theory Group in Argonne's Materials Science Division. When he received the Nobel Prize, his research was focused on the origins of magnetoresistance, a property by which some materials change their resistance to electrical flow under the influence of a magnetic field. Honours and awards Abrikosov was awarded the Lenin Prize in 1966, the Fritz London Memorial Prize in 1972, and the USSR State Prize in 1982. In 1989 he received the Landau Prize from the Academy of Sciences, Russia. Two years later, in 1991, Abrikosov was awarded the Sony Corporation's John Bardeen Award. The same year he was elected a Foreign Honorary Member of the American Academy of Arts and Sciences. He shared the 2003 Nobel Prize in Physics. He was also a member of the Royal Academy of London, a fellow of the American Physical Society, and in 2000 was elected to the prestigious National Academy of Sciences.
Other awards include: Member of the Academy of Sciences of the USSR (now Russian Academy of Sciences), 1964 Honorary Doctor of the University of Lausanne, 1975 Order of the Badge of Honour, 1975 Order of the Red Banner of Labour, 1988 Academician of the Academy of Sciences of the USSR (now Russian Academy of Sciences), 1987 Elected a Foreign Member of the Royal Society (ForMemRS) in 2001 Golden Plate Award of the American Academy of Achievement, 2004 Gold Medal of Vernadsky from National Academy of Sciences of Ukraine, 2015 Personal life Abrikosov was the son of the physicians Alexei Ivanovich Abrikosov (1875–1955) and his second wife, Fania Davidovna Woolf (1895–1965). Through his father, Abrikosov was the nephew of the martyred Catholic nun Anna Abrikosova (1882–1936). His sister was Maria Alekseevna Abrikósova (1929–1998), a physician. He married Svetlana Yuriyevna Bunkova and had three children. He died in California on March 29, 2017, at the age of 88. Books See also List of Jewish Nobel laureates References External links including the Nobel Lecture on December 8, 2003 Type II Superconductors and the Vortex Lattice M. R. Norman, "Aleksei A. Abrikosov", Biographical Memoirs of the National Academy of Sciences (2018) 1928 births 2017 deaths Nobel laureates in Physics American Nobel laureates Russian Nobel laureates Members of the United States National Academy of Sciences Foreign members of the Royal Society Jewish American physicists Full Members of the USSR Academy of Sciences Full Members of the Russian Academy of Sciences Moscow State University alumni Academic staff of Moscow State University Academic staff of the Moscow Institute of Physics and Technology Recipients of the Lenin Prize Recipients of the Order of the Red Banner of Labour Recipients of the USSR State Prize Jewish Russian physicists Soviet physicists Superconductivity Fellows of the American Academy of Arts and Sciences Fellows of the American Physical Society Theoretical physicists Soviet Jews 20th-century Russian physicists
Alexei Abrikosov (physicist)
Physics,Materials_science,Engineering
1,133
41,336,594
https://en.wikipedia.org/wiki/Monochromatic%20wavelength%20dispersive%20x-ray%20fluorescence
Monochromatic wavelength dispersive x-ray fluorescence (MWD XRF) is an enhanced version of conventional wavelength-dispersive X-ray spectroscopy (WDXRF) elemental analysis. The key difference is that MWD XRF uses a doubly curved crystal X-ray optic between the X-ray source and the sample, resulting in monochromatic excitation. This additional optic focuses a high-intensity X-ray beam onto a small spot without increasing the power of the X-ray source. An MWD XRF instrument is constructed from a low-power X-ray tube, a point-to-point focusing optic for excitation, a sample cell, a focusing optic that collects the fluorescence from the sample, and an X-ray detector. By using an optic between the X-ray source and the sample, a monochromatic beam free of bremsstrahlung excites the sample, eliciting the secondary fluorescence X-rays needed for elemental analysis. By restricting the band of wavelengths used for excitation, a much higher signal-to-background ratio is achieved. This type of excitation allows much lower limits of detection and faster reading times. References X-ray spectroscopy
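The link between background suppression and detection limits follows from counting statistics: a common rule of thumb estimates the limit of detection as three times the standard deviation of the background counts divided by the elemental sensitivity. The sketch below illustrates that scaling with invented numbers; it does not describe any particular MWD XRF instrument.

```python
# Counting-statistics illustration of why lowering the background
# improves XRF detection limits: LOD ~ 3 * sqrt(B) / m, where B is the
# background count and m the sensitivity (counts per unit concentration).
# All numbers below are invented for illustration.
from math import sqrt

def lod_ppm(background_counts: float, sensitivity: float) -> float:
    """Rough 3-sigma limit of detection in ppm."""
    return 3 * sqrt(background_counts) / sensitivity

m = 50.0                      # counts per ppm (same optics, same source)
print(lod_ppm(1.0e6, m))      # broadband excitation: high background
print(lod_ppm(1.0e4, m))      # monochromatic excitation: 100x less background
# A 100-fold background reduction improves the LOD by a factor of 10.
```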
Monochromatic wavelength dispersive x-ray fluorescence
Physics,Chemistry,Astronomy
257
49,711,392
https://en.wikipedia.org/wiki/List%20of%20cities%20by%20sunshine%20duration
The following is a list of cities by sunshine duration. Sunshine duration is a climatological indicator measuring the duration of sunshine in a given period (usually a day or a year) for a given location on Earth, typically expressed as a value averaged over several years. It is a general indicator of the cloudiness of a location, and thus differs from insolation, which measures the total energy delivered by sunlight over a given period. Sunshine duration is usually expressed in hours per year, or in (average) hours per day. The first measure indicates the general sunniness of a location compared with other places, while the latter allows comparison of sunshine in various seasons in the same location. Another often-used measure is the percentage ratio of recorded bright sunshine duration to daylight duration in the observed period. Africa Asia Europe North America South America Oceania See also List of cities by average temperature References Sunshine duration Sunshine duration, cities Climate and weather statistics
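The percentage measure mentioned above is a simple ratio of recorded sunshine to daylight hours; a minimal illustration with made-up numbers follows.

```python
# Percent-of-possible sunshine: recorded bright sunshine divided by
# daylight duration over the same period. The values below are invented.
def percent_possible_sunshine(sunshine_hours: float, daylight_hours: float) -> float:
    return 100.0 * sunshine_hours / daylight_hours

# e.g. a month with 210 h of recorded sunshine out of 420 h of daylight:
print(f"{percent_possible_sunshine(210, 420):.0f}%")  # -> 50%
```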
List of cities by sunshine duration
Physics
188
2,354,662
https://en.wikipedia.org/wiki/Polaroid%20SX-70
The SX-70 is a folding single lens reflex Land camera produced by the Polaroid Corporation from 1972 to 1981. The SX-70 helped popularize instant photography. History In 1948, Polaroid introduced its first consumer camera. The Land Camera Model 95 was the first camera to use instant film to quickly produce photographs without developing them in a laboratory. Although popular, the Model 95 and subsequent Land Cameras required complex procedures to take and produce good photographs. The photographic paper for each picture had to be manually removed from the camera and peeled open after 60 seconds to reveal the image, which then needed to be hand-coated with a chemical stabilizer for preservation. The picture required several minutes to dry, and the process could leave developing chemicals on the hands. The instructions for the Model 20 Swinger, introduced in 1965, warned that, if not followed, "you're headed for plenty of picture taking trouble". Pictures from the SX-70, by contrast, ejected automatically and developed quickly (fully within 10 minutes) without chemical residue. Polaroid founder Edwin H. Land announced the SX-70 at a company annual meeting in April 1972. On stage, he took a folded SX-70 out of his suit coat pocket and, in just ten seconds, produced five photographs, both actions impossible with previous Land Cameras. The company first sold the SX-70 in Miami, Florida in late 1972, and began selling it nationally in fall 1973. Although the high cost of $180 for the camera and $6.90 for each film pack of ten pictures limited demand, Polaroid sold 700,000 by mid-1974. In 1973–74, the Skylab 3 and 4 astronauts used an SX-70 to photograph a video display screen in order to compare the Sun's features from one orbit to the next. There were a variety of models, beginning in 1972 with the original SX-70, though all shared the same basic design. The first model had a plain focusing screen (the user was expected to be able to see the difference between in-focus and out-of-focus) because Dr. Land wanted to encourage photographers to think they were looking at the subject, rather than through a viewfinder. When many users complained that focusing was difficult, especially in dim light, a split-image rangefinder prism was added. This feature is standard on all later manual-focus models. The later Sonar OneStep (introduced in 1978) and SLR 680 models were equipped with a sonar autofocus system. This system greatly improved the user's ability to focus the camera, especially in dark environments, and could be turned off if manual focus was needed. The Sonar OneStep models were the first autofocus SLRs available to consumers. The later SLR 680/690 models updated the basic design of the Sonar OneStep to more modern standards by incorporating support for newer 600 film cartridges instead of SX-70 cartridges, and a built-in flash instead of the disposable "Flash Bar". Today they are the most evolved forms of the SX-70 and are highly sought after by Polaroid enthusiasts. Though expensive, the SX-70 was popular in the 1970s and retains a cult following today. Photographers such as Ansel Adams, Andy Warhol, Helmut Newton, and Walker Evans praised and used the SX-70. Helmut Newton used the camera for fashion shoots. Walker Evans began using the camera in 1973, when he was 70 years old.
Not until the $40 Model 1000 OneStep, which used SX-70 film, became the best-selling camera of the 1977 Christmas shopping season, however, did the technology become truly popular. More recently, the camera was the inspiration for the name of the Belfast alternative band SX-70. Design features The SX-70 included many sophisticated design elements. A collapsible SLR required a complex light path for the viewfinder, with three mirrors (including one Fresnel reflector) of unusual, aspheric shapes set at odd angles to create an erect image on the film and an erect aerial image for the viewfinder. Many mechanical parts were precision plastic moldings. The body was made of glass-filled polysulfone, a rigid plastic, which was plated with a thin layer of copper-nickel-chromium alloy to give a metallic appearance. Models 2 and 3 used ABS in either ebony or ivory color. The film pack contained a flat, 6-volt "PolaPulse" battery to power the camera electronics, drive motor, and flash. The original flash system, a disposable "Flash Bar" from General Electric consisting of 10 bulbs (five on each side, with the user rotating the bar halfway through), used logic circuits to detect and fire the next unused flash. Models Although various models were offered, all share the same basic design. All SX-70 models feature a folding body design, a 4-element 116 mm f/8 glass lens, and an automatic exposure system known as the Electric Eye. The cameras allow focusing as close as 10.4 inches (26.4 cm), and have a shutter speed range from 1/175 s to more than 10 seconds. The Model 3 departs from the other models in that it is not an SLR; instead it has a viewfinder cut into the mirror hood. A range of accessories could be used with SX-70 cameras, such as a close-up lens (1:1 at 5 inches), an electrical remote shutter release, a tripod mount and an Ever-Ready carrying case that hung from the neck and unfolded in concert with the camera. Much of the technology used in the folding SX-70 cameras was later used in the production of rigid "box" type SX-70 cameras, such as the Model 1000 OneStep, Pronto, Presto and The Button. These models, although also utilizing SX-70 film, are very different from the folding SLR SX-70s. Evolution MiNT Camera modifies SX-70s into its SLR670 series, which uses 600 film natively without needing an added neutral-density filter. New features include the Time Machine add-on for the SLR670-S and SLR670-X (manual shutter speeds from 1/2000 s to 1 s, as well as bulb mode) and 600 film (ISO 640) compatibility under the "Auto 600" mode. The SLR670 allows for additional shutter/brightness control. MiNT also produces flash bars and filter lenses intended for the SX-70. OpenSX70 is a project to replace the printed circuit board (PCB) on the SX-70 with a modern open-source design based on Arduino. A working prototype is available with some experimental 3D-printed enclosures. The new PCB also features an LED indicator and an audio jack for firing an external flash. Film When the Polaroid SX-70 was introduced in 1972, it used a film pack of 10 sheets, each measuring 3.5 × 4.25 in with a picture area of 3.125 × 3.125 in and a film speed of ASA 150. The film was a market success despite some problems with the batteries on early film packs. The original SX-70 film was improved once in the mid-1970s ("New Improved Faster Developing!")
and replaced in 1980 by the further advanced "SX-70 Time-Zero Supercolor" product, in which the layers in the film card were altered to allow a much faster development time (hence the "Time Zero"). It also had richer, brighter colors than the original 1972 product. There were also professional-market varieties of the SX-70 film, including 778 (a Time Zero equivalent) and the similar 708, a Time Zero film without a battery, intended for use in applications such as the "Face Place" photo booth and professional or laboratory film backs, where a battery is not needed. Time Zero remained the film manufactured up until 2005, though overseas-market and some last-run film packs were marked only as SX-70. A feature of the SX-70 film packs was a built-in battery to power the camera motors and exposure control, ensuring that a charged battery would always be available as long as film was in the camera. The "PolaPulse" battery was configured as a 6-volt thin flat battery and used zinc chloride chemistry to provide for the high pulse demand of the camera motors. Polaroid later released development kits to allow the PolaPulse battery to be used in non-photographic applications. In the 1980s, the company even produced small "600" AM/FM radios that would run on film packs in which the film cards had been exhausted but the battery still had enough power to be reused. The Polaroid 600 series film, introduced in 1981, has the same film format and cartridge as that of the SX-70 but features a higher film speed, ISO 640. The 2-stop difference in sensitivity can be compensated for in an SX-70 by using an ND filter or through circuit modifications that change the exposure time (the stop arithmetic is worked out below). MiNT Camera produces modified SX-70s that can take the higher-sensitivity film. Supply issues Polaroid SX-70 "Time-Zero" film was phased out of production in late 2005 to early 2006 (the timing differing by regional market). Small quantities of the film may still be acquired on e-auction sites, at a variety of prices given the film's expiration dates, though none of it is guaranteed to work. After Polaroid ceased manufacturing all instant film in 2008, the Impossible Project (now known as Polaroid B.V. or simply Polaroid) began formulating replacements using equipment acquired when the original manufacturing facilities closed. In 2017, the Impossible Project rebranded as Polaroid Originals, joined the Polaroid Corporation and began selling SX-70 film under the Polaroid brand. Polaroid produces lines of black-and-white and color film compatible with the SX-70, though its films use a different chemistry from original Polaroid film and have different characteristics such as lower color quality, longer development times, and higher sensitivity to outside influences including light and pressure. Polaroid also makes film for 600 and newer "I-Type" cameras. Image manipulation One feature of SX-70 integral print film is its ability to be manipulated while developing, and for some days afterward. Because the emulsion is gelatin-based and the Mylar covering does not allow water vapor to readily pass, the emulsion stays soft for several days, allowing it to be pressed and manipulated to produce effects somewhat like impressionist paintings. An example of this technique was used on the cover of Peter Gabriel's third self-titled album from 1980. Another example of emulsion manipulation was the cover of Loverboy's debut album, Loverboy.
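The "2-stop difference" between SX-70 film (ASA 150) and 600-series film (ISO 640) quoted in the Film section above is just the base-2 logarithm of the speed ratio. A quick check of the arithmetic:

```python
# The ISO-to-stops arithmetic behind the "2-stop difference" between
# SX-70 film (ASA 150) and 600-series film (ISO 640).
from math import log2

def stop_difference(iso_fast: float, iso_slow: float) -> float:
    """Exposure difference in photographic stops between two film speeds."""
    return log2(iso_fast / iso_slow)

print(round(stop_difference(640, 150), 2))  # ~2.09 stops
# An ND filter of roughly 2 stops (about 1/4 transmission) therefore
# lets 600 film be used in an unmodified SX-70.
```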
Greek-American artist Lucas Samaras created a series of self-portraits titled "Photo-Transformations" (1973–76) which employed extensive use of emulsion manipulation techniques. The 500, 600, and Spectra/Image materials do not use a gelatin-based emulsion, and cannot be manipulated this way. Manipulation of the photograph is best done about two minutes after the picture has fully developed. It will stay soft and workable for about 5–15 minutes. Some colors will be more difficult to work on (dark green), whereas others are workable for a long time (red). If the photograph is on a warm surface or slightly warmed in an oven, image manipulation is made easier. Design history Polaroid founder Edwin H. Land was the primary driver behind the project, and many engineers and designers made key contributions. The final styling of the product involved industrial designer Henry Dreyfuss and his firm. See also Instant camera James Gilbert Baker Mike Brodie Stefanie Schneider Lucas Samaras References External links The Land List Polaroid SX-70/Time-Zero Film Important Notice 2006 Polaroid SX-70 Sonar: Photos & Review Notification of Polaroid Instant Film Availability 2008 New film for Polaroid SX70 cameras from Polaroid Originals Polaroid SX70 pictures Polaroid SX70 onlinegallery - sx70.dk Polaroid Art Klaus Wolfer Polaroid cameras Instant cameras Single-lens reflex cameras Cameras introduced in 1972 Personal cameras and photography in space
Polaroid SX-70
Technology
2,550
1,015,161
https://en.wikipedia.org/wiki/Phecda
Phecda, also called Gamma Ursae Majoris (γ Ursae Majoris, abbreviated Gamma UMa, γ UMa), is a star in the constellation of Ursa Major. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. Based upon parallax measurements with the Hipparcos astrometry satellite, it is located at a distance of around 83 light-years from the Sun. It is more familiar to most observers in the northern hemisphere as the lower-left star forming the bowl of the Big Dipper, together with Alpha Ursae Majoris (Dubhe, upper-right), Beta Ursae Majoris (Merak, lower-right) and Delta Ursae Majoris (Megrez, upper-left). Along with four other stars in this well-known asterism, Phecda forms a loose association of stars known as the Ursa Major moving group. Like the other stars in the group, it is a main-sequence star, as the Sun is, although somewhat hotter, brighter and larger. Phecda is located in relatively close physical proximity to the prominent Mizar–Alcor star system; their estimated separation is far smaller than either's distance from the Sun. The star Merak also lies relatively close to Phecda. Nomenclature γ Ursae Majoris (Latinised to Gamma Ursae Majoris) is the star's Bayer designation. It bore the traditional names Phecda or Phad, derived from the Arabic phrase fakhth al-dubb ('thigh of the bear'). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Phecda for this star. To the Hindus this star was known as Pulastya, one of the seven rishis. In Chinese astronomy, the asterism equivalent to the Big Dipper is known as the Northern Dipper, and Gamma Ursae Majoris is counted among its stars. Properties Phecda is an Ae star, surrounded by an envelope of gas that adds emission lines to the star's spectrum; hence the 'e' suffix in the stellar classification of A0 Ve. It has about 2.4 times the mass of the Sun and is 333 million years old. It rotates rapidly, with a rotational velocity of 386 km/s at its equator, which causes it to have an oblate shape: its equatorial radius is noticeably larger than its polar radius. The effective temperature varies as well, from 6,750 K at the equator to 10,520 K at the poles. Phecda is also an astrometric binary: the companion star regularly perturbs the Ae-type primary star, causing the primary to wobble around the barycenter. From this, an orbital period of 20.5 years has been calculated. The secondary star is a K-type main-sequence star that is 0.79 times as massive as the Sun and considerably cooler than the primary. References A-type main-sequence stars Ursa Major moving group Phecda Ursae Majoris, Gamma Big Dipper Ursa Major Durchmusterung objects Ursae Majoris, 64 103287 058001 4554 Astrometric binaries
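Distances derived from Hipparcos parallaxes, as mentioned in the introduction, follow the simple relation d [parsecs] = 1 / p [arcseconds]. The sketch below shows the conversion; the parallax value used is illustrative rather than Phecda's catalogued figure.

```python
# Parallax-to-distance conversion as used for Hipparcos measurements:
# d [parsec] = 1 / p [arcsec]; 1 parsec = 3.2616 light-years.
PC_TO_LY = 3.2616

def distance_ly(parallax_mas: float) -> float:
    """Distance in light-years from a parallax in milliarcseconds."""
    return (1.0 / (parallax_mas / 1000.0)) * PC_TO_LY

# Illustrative value only (not the catalogued parallax of Phecda):
print(f"{distance_ly(39.0):.1f} ly")  # 39 mas -> ~83.6 ly
```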
Phecda
Astronomy
735
50,001,505
https://en.wikipedia.org/wiki/Photosynthetica
Photosynthetica is a quarterly peer-reviewed scientific journal covering research on photosynthesis. It was established in 1967 and is published by the Institute of Experimental Botany of the Academy of Sciences of the Czech Republic. The editor-in-chief is Helena Synkova (Academy of Sciences of the Czech Republic). Until 2019, the journal was published by Springer Science+Business Media. Abstracting and indexing The journal is abstracted and indexed in a number of bibliographic databases. According to the Journal Citation Reports, the journal has a 2019 impact factor of 2.562. References External links Biochemistry journals Botany journals English-language journals Quarterly journals Springer Science+Business Media academic journals Academic journals established in 1967 Czech Academy of Sciences 1967 establishments in Czechoslovakia
Photosynthetica
Chemistry
147
6,768,791
https://en.wikipedia.org/wiki/L%C3%A9vy%27s%20modulus%20of%20continuity%20theorem
Lévy's modulus of continuity theorem is a theorem that gives a result about the almost sure behaviour of an estimate of the modulus of continuity for the Wiener process, which is used to model what is known as Brownian motion. Lévy's modulus of continuity theorem is named after the French mathematician Paul Lévy. Statement of the result Let $W : [0,1] \to \mathbb{R}$ be a standard Wiener process. Then, almost surely, $\lim_{h \to 0^+} \sup_{0 \le t \le 1-h} \frac{|W(t+h) - W(t)|}{\sqrt{2h \log(1/h)}} = 1.$ In other words, the sample paths of Brownian motion have modulus of continuity $w(h) = c\sqrt{2h \log(1/h)}$ with probability one, for $c > 1$ and sufficiently small $h > 0$. See also Some properties of sample paths of the Wiener process References Paul Pierre Lévy, Théorie de l'addition des variables aléatoires. Gauthier-Villars, Paris (1937). Probability theorems
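The theorem can be illustrated numerically: simulate a discretized Brownian path and compare the largest increment over gaps of size h with Lévy's normalizer √(2h log(1/h)). The Python sketch below is a heuristic check, not a proof; convergence of the printed ratio to 1 is slow, and all grid parameters are arbitrary.

```python
# Heuristic Monte Carlo check of Lévy's modulus of continuity:
# for a Brownian path on [0,1], sup_t |W(t+h) - W(t)| should be close
# to sqrt(2 h log(1/h)) for small h. The ratio below is only roughly 1.
import math
import random

def brownian_path(n: int) -> list:
    """Standard Brownian motion sampled at n+1 equally spaced times."""
    w, path = 0.0, [0.0]
    for _ in range(n):
        w += random.gauss(0.0, math.sqrt(1.0 / n))
        path.append(w)
    return path

n = 2**18
path = brownian_path(n)
for k in (2**6, 2**9, 2**12):        # gap sizes h = k/n
    h = k / n
    sup_inc = max(abs(path[i + k] - path[i]) for i in range(n - k + 1))
    levy = math.sqrt(2 * h * math.log(1 / h))
    print(f"h={h:.2e}  sup|dW|/levy = {sup_inc / levy:.3f}")
```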
Lévy's modulus of continuity theorem
Mathematics
152
14,383,139
https://en.wikipedia.org/wiki/Allotropes%20of%20sulfur
The element sulfur exists as many allotropes. In number of allotropes, sulfur is second only to carbon. In addition to the allotropes, each allotrope often exists in polymorphs (different crystal structures of the same covalently bonded Sn molecules) delineated by Greek prefixes (α, β, etc.). Furthermore, because elemental sulfur has been an item of commerce for centuries, its various forms are given traditional names. Early workers identified some forms that have later proved to be single allotropes or mixtures of allotropes. Some forms have been named for their appearance, e.g. "mother of pearl sulfur", or alternatively named for a chemist who was pre-eminent in identifying them, e.g. "Muthmann's sulfur I" or "Engel's sulfur". The most commonly encountered form of sulfur is the orthorhombic polymorph of S8, which adopts a puckered ring – or "crown" – structure. Two other polymorphs are known, also with nearly identical molecular structures. In addition to S8, sulfur rings of 6, 7, 9–15, 18, and 20 atoms are known. At least five allotropes are uniquely formed at high pressures, two of which are metallic. The number of sulfur allotropes reflects the relatively strong S−S bond of 265 kJ/mol. Furthermore, unlike most elements, the allotropes of sulfur can be manipulated in solutions of organic solvents and can be analysed by HPLC. Phase diagram The pressure-temperature (P-T) phase diagram for sulfur is complex. The region labeled I (a solid region) is α-sulfur. High-pressure solid allotropes In a high-pressure study at ambient temperatures, four new solid forms, termed II, III, IV and V, have been characterized, where α-sulfur is form I. Solid forms II and III are polymeric, while IV and V are metallic (and are superconductive below 10 K and 17 K, respectively). Laser irradiation of solid samples produces three sulfur forms below 200–300 kbar (20–30 GPa). Solid cyclo allotrope preparation Two methods exist for the preparation of the cyclo-sulfur allotropes. One of the methods, which is most famous for preparing hexasulfur, is to treat hydrogen polysulfides with polysulfur dichloride: H2Sx + SyCl2 → cyclo-Sx+y + 2 HCl. A second strategy uses titanocene pentasulfide as a source of the S5 unit. This complex is easily made from polysulfide solutions: (C5H5)2TiCl2 + (NH4)2S5 → (C5H5)2TiS5 + 2 NH4Cl. Titanocene pentasulfide reacts with polysulfur chloride: (C5H5)2TiS5 + SnCl2 → cyclo-Sn+5 + (C5H5)2TiCl2. Solid cyclo-sulfur allotropes Cyclo-hexasulfur, cyclo-S6 This allotrope was first prepared by M. R. Engel in 1891 by treating thiosulfate with HCl. Cyclo-S6 is orange-red and forms a rhombohedral crystal. It is called ρ-sulfur, ε-sulfur, Engel's sulfur and Aten's sulfur. Another method of preparation involves the reaction of a polysulfane with sulfur monochloride: H2S4 + S2Cl2 → cyclo-S6 + 2 HCl (dilute solution in diethyl ether). The sulfur ring in cyclo-S6 has a "chair" conformation, reminiscent of the chair form of cyclohexane. All of the sulfur atoms are equivalent. Cyclo-heptasulfur, cyclo-S7 It is a bright yellow solid. Four (α-, β-, γ-, δ-) forms of cyclo-heptasulfur are known. Two forms (γ-, δ-) have been characterized. The cyclo-S7 ring has an unusual range of bond lengths, 199.3–218.1 pm. It is said to be the least stable of all the sulfur allotropes. Cyclo-octasulfur, cyclo-S8 Octasulfur contains puckered S8 rings and is known in three forms that differ only in the way the rings are packed in the crystal. α-Sulfur α-Sulfur is the form most commonly found in nature. When pure it has a greenish-yellow colour (traces of cyclo-S7 in commercially available samples make it appear yellower).
It is practically insoluble in water and is a good electrical insulator with poor thermal conductivity. It is quite soluble in carbon disulfide: 35.5 g/100 g of solvent at 25 °C. It has an orthorhombic crystal structure. α-Sulfur is the predominant form found in "flowers of sulfur", "roll sulfur" and "milk of sulfur". It contains puckered S8 rings, alternatively described as having a crown shape. The S–S bond lengths are all 203.7 pm and the S-S-S angles are 107.8°, with a dihedral angle of 98°. At 95.3 °C, α-sulfur converts to β-sulfur. β-Sulfur β-Sulfur is a yellow solid with a monoclinic crystal form and is less dense than α-sulfur. It is unusual because it is only stable above 95.3 °C; below this temperature it converts to α-sulfur. β-Sulfur can be prepared by crystallising at 100 °C and cooling rapidly to slow down the formation of α-sulfur. It has a melting point variously quoted as 119.6 °C and 119.8 °C, but as it decomposes to other forms at around this temperature, the observed melting point can vary. The 119 °C melting point has been termed the "ideal melting point" and the typical lower value (114.5 °C), at which decomposition occurs, the "natural melting point". γ-Sulfur γ-Sulfur was first prepared by F. W. Muthmann in 1890. It is sometimes called "nacreous sulfur" or "mother of pearl sulfur" because of its appearance. It crystallises in pale yellow monoclinic needles. It is the densest of the three S8 polymorphs. It can be prepared by slowly cooling molten sulfur that has been heated above 150 °C, or by chilling solutions of sulfur in carbon disulfide, ethyl alcohol or hydrocarbons. It is found in nature as the mineral rosickyite. It has been tested in carbon fiber-stabilized form as a cathode in lithium-sulfur (Li-S) batteries and was observed to stop the formation of polysulfides that compromise battery life. Cyclo-Sn (n = 9–15, 18, 20) These allotropes have been synthesised by various methods, for example by treating titanocene pentasulfide with a dichlorosulfane of suitable sulfur chain length: (C5H5)2TiS5 + Sn−5Cl2 → cyclo-Sn + (C5H5)2TiCl2, or alternatively by treating a dichlorosulfane with a polysulfane: SnCl2 + H2Sm → cyclo-Sn+m + 2 HCl. Cyclo-S12, cyclo-S18 and cyclo-S20 can also be prepared from cyclo-S8. With the exception of cyclo-S12, the rings contain S–S bond lengths and S-S-S bond angles that differ one from another. Cyclo-S12 is the second most stable cyclo-allotrope, after cyclo-S8. Its structure can be visualised as having sulfur atoms in three parallel planes: 3 in the top, 6 in the middle and 3 in the bottom. Two forms (α-, β-) of cyclo-S9 are known, one of which has been characterized. Two forms of cyclo-S18 are known, in which the conformation of the ring differs. To differentiate these structures, rather than using the normal crystallographic convention of α-, β-, etc., which in other cyclo-Sn compounds refer to different packings of essentially the same conformer, these two conformers have been termed endo- and exo-. Cyclo-S6·cyclo-S10 adduct This adduct is produced from a solution of cyclo-S6 and cyclo-S10 in carbon disulfide. It has a density midway between those of cyclo-S6 and cyclo-S10. The crystal consists of alternate layers of cyclo-S6 and cyclo-S10. This material is a rare example of an allotrope that contains molecules of different sizes. Catena sulfur forms The term "catena sulfur forms" refers to mixtures of sulfur allotropes that are high in catena (polymer chain) sulfur. The naming of the different forms is very confusing and care has to be taken to determine what is being described, because some names are used interchangeably.
Amorphous sulfur Amorphous sulfur is the quenched product of molten sulfur heated above the λ-transition at 160 °C, where polymerization yields catena sulfur molecules. (Above this temperature, the properties of the liquid melt change remarkably; for example, the viscosity increases more than 10,000-fold as the temperature rises through the transition.) As it anneals, solid amorphous sulfur changes from its initial glassy form to a plastic form, hence its other names of plastic sulfur and glassy or vitreous sulfur. The plastic form is also called χ-sulfur. Amorphous sulfur contains a complex mixture of catena-sulfur forms mixed with cyclo-forms. Insoluble sulfur Insoluble sulfur is obtained by washing quenched liquid sulfur with carbon disulfide (CS2). It is sometimes called polymeric sulfur, μ-S or ω-S. Fibrous (φ-) sulfur Fibrous (φ-) sulfur is a mixture of the allotropic ψ-form and γ-cyclo-S8. ω-Sulfur ω-Sulfur is a commercially available product prepared from amorphous sulfur that has not been stretched prior to extraction of the soluble forms with CS2. It is sometimes called "white sulfur of Das" or supersublimated sulfur. It is a mixture of ψ-sulfur and lamina sulfur. The composition depends on the exact method of production and the sample's history. One well-known commercial form is "Crystex". ω-Sulfur is used in the vulcanization of rubber. λ-Sulfur λ-Sulfur is molten sulfur just above the melting temperature. It is a mixture containing mostly cyclo-S8. Cooling λ-sulfur slowly gives predominantly β-sulfur. μ-Sulfur μ-Sulfur is the name applied to solid insoluble sulfur, and to the melt prior to quenching. π-Sulfur π-Sulfur is a dark-coloured liquid formed when λ-sulfur is left to stand molten. It contains a mixture of Sn rings. Biradical catena chains This term is applied to the biradical catena-sulfur chains in sulfur melts, or to the chains in the solid. Solid catena allotropes The production of pure forms of catena-sulfur has proved to be extremely difficult. Complicating factors include the purity of the starting material and the thermal history of the sample. ψ-Sulfur This form, also called fibrous sulfur or ω1-sulfur, has been well characterized. It has a density of 2.01 g·cm−3 (α-sulfur: 2.069 g·cm−3) and decomposes around its melting point of 104 °C. It consists of parallel helical sulfur chains. These chains have both left- and right-handed "twists" and a radius of 95 pm. The S–S bond length is 206.6 pm, the S-S-S bond angle is 106° and the dihedral angle is 85.3° (the comparable figures for α-sulfur are 203.7 pm, 107.8° and 98.3°). Lamina sulfur Lamina sulfur has not been well characterized, but is believed to consist of criss-crossed helices. It is also called χ-sulfur or ω2-sulfur. High-temperature gaseous allotropes Monatomic sulfur can be produced by photolysis of carbonyl sulfide. Disulfur, S2 Disulfur, S2, is the predominant species in sulfur vapour above 720 °C (a temperature above that shown in the phase diagram); at low pressure (1 mmHg) at 530 °C, it comprises 99% of the vapour. It is a triplet diradical (like dioxygen and sulfur monoxide), with an S−S bond length of 188.7 pm. The blue colour of burning sulfur is due to the emission of light by the S2 molecule produced in the flame. The S2 molecule has been trapped in the compound [S2I4][EF6]2 (E = As, Sb) for crystallographic measurements, produced by treating elemental sulfur with excess iodine in liquid sulfur dioxide. The [S2I4]2+ cation has an "open-book" structure, in which each [I2]+ ion donates the unpaired electron in its π* molecular orbital to a vacant orbital of the S2 molecule.
Trisulfur, S3 Trisulfur, S3, is found in sulfur vapour, comprising 10% of the vapour species at 440 °C and 10 mmHg. It is cherry red in colour, with a bent structure similar to that of ozone, O3. Tetrasulfur, S4 Tetrasulfur, S4, has been detected in the vapour phase but has not been well characterized. Diverse structures (e.g. chains, branched chains and rings) have been proposed. Theoretical calculations suggest a cyclic structure. Pentasulfur, S5 Pentasulfur, S5, has been detected in sulfur vapours but has not been isolated in pure form. List of allotropes and forms References Bibliography External links Amorphous solids
Allotropes of sulfur
Physics,Chemistry
2,769
907,931
https://en.wikipedia.org/wiki/Alaska%20Railroad
The Alaska Railroad is a Class II railroad that operates freight and passenger trains in the state of Alaska. The railroad's mainline runs between Seward on the southern coast and Fairbanks, near the center of the state. It passes through Anchorage and Denali National Park, to which 17% of visitors travel by train. The railroad's network includes sidings, rail yards and branch lines in addition to the main line between Seward and Fairbanks. The branch to Whittier conveys freight railcars interchanged with the contiguous United States via rail barges sailing between the Port of Whittier and Harbor Island in Seattle. Construction of the railroad started in 1903, when the Alaska Central Railroad built a line starting in Seward and extending north. The Alaska Central went bankrupt in 1907 and was reorganized as the Alaska Northern Railroad Company in 1911, which extended the line farther northward. On March 12, 1914, the U.S. Congress agreed to fund construction and operation of an all-weather railroad from Seward to Fairbanks and purchased the rail line from the financially struggling Alaska Northern. As the government started building the estimated $35 million railroad, it opened a construction town along Ship Creek, eventually giving rise to Anchorage, now the state's largest city. In 1917, the government purchased the narrow gauge Tanana Valley Railroad, mostly for its railyard in Fairbanks. The railroad was completed on July 15, 1923, when President Warren G. Harding traveled to Alaska to drive a ceremonial golden spike at Nenana. Ownership of the railroad passed from the federal government to the state of Alaska on January 6, 1985. In 2019, the company operated at a profit. History In 1903 a company called the Alaska Central Railroad began to build a rail line northward from Seward, near the southern tip of the Kenai Peninsula in Alaska. The company had built a stretch of track by 1909, when it went into receivership. This route carried passengers, freight and mail to the upper Turnagain Arm. From there, goods were taken by boat at high tide, and by dog team or pack train to Eklutna and the Matanuska-Susitna Valley. In 1909, another company, the Alaska Northern Railroad Company, bought the rail line and extended it farther northward. From the new end of track, goods were floated down the Turnagain Arm in small boats. The Alaska Northern Railroad went into receivership in 1914. At about this time, the United States government was planning a railroad route from Seward to the interior town of Fairbanks. President William Howard Taft authorized a commission to survey a route in 1912. The line would provide an all-weather route to the interior. In 1914, the government bought the Alaska Northern Railroad and moved its headquarters to Ship Creek, in what would later become Anchorage. The government began to extend the rail line northward. In 1917, the Tanana Valley Railroad in Fairbanks was heading into bankruptcy. It owned a small narrow-gauge line that serviced the towns of Fairbanks and the mining communities in the area, as well as the boat docks on the Tanana River near Fairbanks. The government bought the Tanana Valley Railroad, principally for its terminal facilities. The section between Fairbanks and Happy was converted to dual gauge to complete the line from Seward to Fairbanks.
The government extended the southern portion of the track to Nenana, and later converted the extension to standard gauge. The Alaska Railroad continued to operate the remaining TVRR narrow gauge line as the Chatanika Branch (the terminus was near the Yukon River) until decommissioning it in 1930. In 1923 the railroad built the Mears Memorial Bridge across the Tanana River at Nenana. This was the final link in the Alaska Railroad and, at the time, the second longest single-span steel railroad bridge in the country. U.S. President Warren G. Harding drove the golden spike that completed the railroad on July 15, 1923, on the north side of the bridge. The railroad was part of the US Department of the Interior. The Alaska Railroad's first diesel locomotive entered service in 1944. The railroad retired its last steam locomotive in 1966. In 1958, land for the future Clear Air Force Station was purchased. (Clear is south of Nenana.) A section of track was diverted, and later a spur was constructed to deliver coal to the station's power plant. The railroad was greatly affected by the Good Friday earthquake, which struck southern Alaska in 1964. The yard and trackage around Seward buckled, and the trackage along Turnagain Arm was damaged by floodwaters and landslides. It took several months to restore full service along the line. In 1967, the railroad was transferred to the Federal Railroad Administration, an agency within the newly created United States Department of Transportation. In 1975–76, an infusion of $15 million from the DOT enabled various capital improvements, including those to facilitate hauling materials for the Alaska Pipeline. On January 6, 1985, the state of Alaska bought the railroad from the U.S. government for $22.3 million, based on a valuation determined by the US Railway Association. The state immediately invested over $70 million in improvements and repairs that compensated for years of deferred maintenance. The purchase agreement prohibits the Alaska Railroad from paying dividends or otherwise returning capital to the state of Alaska, unlike the state's other quasi-corporations: the Alaska Permanent Fund, the Alaska Housing Finance Corporation, and the Alaska Industrial Development and Export Authority. Proposed expansion in Alaska Northern Rail Extension to Delta Junction An extension of the railroad from Fairbanks to Delta Junction over a bridge spanning the Tanana River was envisioned as early as 2009. The 2011 Alaska state budget provided $40 million in funding for the bridge, which would initially be only for vehicular use. The United States Department of Defense would provide another $100 million in funds, as the bridge and a subsequent rail line would provide year-round access to Fort Greely and the Joint Tanana Training Complex. The groundbreaking ceremony for the Tanana River Bridge took place on September 28, 2011, and the new bridge was opened (for military road traffic only) in 2014. Point MacKenzie Line On 21 November 2011, the Surface Transportation Board approved the construction of a new 25-mile (40 km) line between Port MacKenzie and the existing main line at Houston, Alaska. As of May 2023 this spur line had not been completed. Anchorage Vicinity Service A spur line was built to Ted Stevens International Airport in 2003, along with a depot officially named after Bill Sheffield. The line never received scheduled service, but cruise lines charter trains to convey passengers between ships and the airport.
The railroad currently leases the depot out for private events such as conferences, seminars, and corporate functions. There are plans to provide commuter rail service within the Anchorage metropolitan area (Anchorage to the Mat-Su Valley via Eagle River, and north Anchorage to south Anchorage); additional tracks would be necessary to accommodate the heavy freight traffic. Proposed connection to the contiguous 48 states In 2001 federal legislation, sponsored by Republican U.S. senator (and later Alaska governor) Frank Murkowski, formed a bilateral commission to study the feasibility of building a rail link between Canada and Alaska; Canada was asked to be part of the commission, but the Canadian federal government chose not to join the commission or commit funds for the study. However, the Yukon territorial government did show some interest. A June 2006 report by the commission recommended Carmacks, Yukon, as a hub, with three possibilities: A line could go northward to Delta Junction, Alaska (the Alaska Railroad's northern end-of-track). Another line could go from Carmacks to Hazelton, British Columbia (which is served by the CN), passing through Watson Lake, Yukon, and Dease Lake, British Columbia. The third line could go from Carmacks to either Haines or Skagway, Alaska, the latter path by way of Whitehorse, Yukon, the northern terminus of the narrow-gauge White Pass and Yukon Route Railroad. However, that railroad's trains currently reach only Carcross, Yukon, because service has not been completely restored following a 1982 embargo of the entire line. Following the demise of the ill-fated Keystone XL Pipeline project, the Alaska Canada Rail Link (ACRL) was rekindled as an alternative. In November 2015, the National Post reported that a link between the southern provinces and the Alaska Railroad was again being considered by the Canadian federal government, this time routing to Alberta. In this scenario, the route would originate at Delta Junction and use Carmacks as a hub, as in prior plans. The route would continue through Watson Lake, Yukon, en route to a stop at Fort Nelson, British Columbia. It would continue to Peace River, Alberta, with its southern terminus at Fort McMurray. The route was endorsed by the Assembly of First Nations. It was unclear whether this rail connection would ever be used for passenger service. On September 25, 2020, then-President Donald Trump announced he would issue a presidential permit to the Alaska-Alberta Railway Development Corporation (A2A Railway), which had an agreement with the Alaska Railroad to develop a joint operating plan for the rail connection to Canada. The proposed A2A Railway would have connected to the Alaska Railroad at North Pole, Alaska, and run through Yukon Territory to Fort Nelson, and from there to a terminus at Fort McMurray, Alberta. (The A2A Railway had also been negotiating with the Mat-Su Borough on an agreement to complete the Port MacKenzie Railway Extension.) Executives General managers under federal ownership Col. Frederick Mears, 1919–1923 (originally head of the railroad as chairman of the Alaska Engineering Commission) Col. James Gordon Steese, 1923 Lee H. Landis, 1923–1924 Noel W. Smith, 1924–1928 Col. Otto F. Ohlson, 1928–1945 Col. John P. Johnson, 1946–1953 Frank E. Kalbaugh, 1953–1955 Reginald N. Whitman, 1955–1956 John H. Lloyd, 1956–1958 Robert H. Anderson, 1958–1960 Donald J. Smith, 1960–1962 John E. Manley, 1962–1971 Walker S. Johnston, 1971–1975 William L. Dorcy, 1975–1979 Steven R.
Frank H. Jones, 1980–1985

Railroad Corporation Police
The Alaska Railroad Corporation has its own police force.

Presidents under state ownership
Frank Turpin, 1985–1991
Robert Hatfield Jr., 1991–1997
Bill Sheffield, 1997–2001
Patrick K. Gamble, 2001–2010
Christopher Aadnesen, 2010–2013
Bill O'Leary, 2013–present

Routes and tourism
The railroad is a major tourist attraction in the summer. Coach cars feature wide windows and domes. Private cars owned by the major cruise companies are towed behind the Alaska Railroad's own cars, and trips are included with various cruise packages.

Routes
The Denali Star runs from Anchorage to Fairbanks (approximately 12 hours one-way) and back, with stops in Talkeetna and Denali National Park, from which various flight and bus tours are available. The Denali Star operates only between May 15 and September 15. The trip takes 12 hours because the tracks wind through mountains and valleys, and the train often runs well below its top speed.

The Aurora Winter Train is available in the winter months (September 15 – May 15) on a reduced, weekend-only schedule (northbound on Saturday mornings, southbound on Sunday mornings) between Anchorage and Fairbanks on the same route as the Denali Star.

The Coastal Classic heads south from Anchorage along Turnagain Arm before turning toward the Kenai Peninsula, eventually reaching Seward. This trip takes around four and a half hours due to some slow trackage as the line winds its way over mountains.

The Glacier Discovery provides a short (2-hour) trip south from Anchorage to Whittier for a brief stop before reversing direction for a stop at Grandview, returning to Anchorage in the evening.

The Hurricane Turn provides rail service to people living between Talkeetna and the Hurricane area. This area has no roads, and the railroad provides the lifeline for residents who depend on the service to obtain food and supplies. The Hurricane Turn is one of the last flag-stop railway routes in the United States: passengers can board anywhere along the route by waving a large white flag or cloth.

The Grandview Cruise Train is a set of single-level passenger dome cars that the Alaska Railroad makes available for charter to cruise line operators for the transportation of their passengers exclusively, typically between May 15 and September 15. On alternate Mondays this train operates under charter to NCL Holdings between Anchorage International Airport and the Whittier NCL Depot, where it meets Norwegian Cruise Line vessels. On Thursdays and Fridays this train operates under charter to Royal Caribbean Group between Anchorage International Airport and the Dale R. Lindsey Alaska Railroad Intermodal Terminal in Seward, where it meets Royal Caribbean International, Celebrity Cruises, and Silversea Cruises vessels. On Sundays this train operates under charter to HAP Alaska-Yukon between the Anchorage Depot and the Whittier HAP Depot, where it meets Holland America Line vessels. On Saturdays and alternate Wednesdays this train operates under charter to HAP Alaska-Yukon between McKinley station, located 3.4 miles south of Talkeetna, and the Whittier HAP Depot, where it meets Princess Cruises vessels; this operation is known as the McKinley Express.
The Denali Express uses a set of bilevel passenger dome cars owned by Tour Alaska, a subsidiary of Carnival, and a single bilevel passenger dome car owned by the Alaska Railroad, with all cars operated under contract by the Alaska Railroad. This train operates Saturdays, Sundays, and alternate Wednesdays exclusively for Holland America Line and Princess Cruises passengers. The train operates between the Denali Park Depot and the Whittier HAP Depot, where it meets Holland America Line and Princess Cruises vessels.

The McKinley Explorer uses a set of bilevel passenger dome cars owned by Tour Alaska, a subsidiary of Carnival, operated under contract by the Alaska Railroad. This train operates daily and is available to all persons, whether cruise line passengers or not. The train operates between the Denali Park Depot and the Anchorage Depot.

The Wilderness Express uses a bilevel passenger dome car owned by Premier Alaska Tours, which is attached to the Denali Star train and operated by the Alaska Railroad. This service operates daily and is available to all persons, whether cruise line passengers or not. While the Wilderness Express is part of the same consist as the Denali Star, there is no passage between this car and the Denali Star cars. The train operates between the Fairbanks Depot and the Anchorage Depot.

Note that the spur affording access to Ted Stevens Anchorage International Airport is used during the summer season for cruise ship service only. It was activated temporarily during the Alaska Federation of Natives (AFN) 2006 convention to provide airport-to-hotel mass transit for delegates.

Rolling stock
By 1936, the company had rostered 27 steam locomotives, 16 railcars, 40 passenger cars and 858 freight cars.

Active
The Alaska Railroad currently rosters a total of 51 locomotives, two control cab units, and one DMU (self-propelled railcar):
28 EMD SD70MAC locomotives (12 equipped with head-end power for passenger service)
15 EMD GP40-2 locomotives
8 EMD GP38-2 locomotives
2 EMD F40PH control cab units
1 Colorado Railcar DMU

Retired
Budd Rail Diesel Car (RDC) (retired 2009; sold to TriMet in Oregon as spare equipment for its WES Commuter Rail service)
EMD MP15AC (retired 2009)
EMD GP49
EMD F7
EMD FP7 (two units, 1510 and 1512, sold to the Verde Canyon Railroad in Arizona for excursions)

Other
In 2011 the Alaska Railroad reacquired ARR 557, the last steam locomotive bought new by the railroad and the last steam locomotive used by the railroad, with the intent to refurbish and operate it on special excursions between Anchorage and Portage. A USATC S160 "2-8-0 Consolidation" engine built in 1944 by the Baldwin Locomotive Works, 557 was originally coal-fired but was converted to oil in 1955. It operated until 1964, when it was deemed surplus and sold as scrap. It was purchased by Monte Holm of Moses Lake, Washington, and displayed in his House of Poverty Museum. After Holm's death in 2006, Jim and Vic Jansen bought 557 from the museum and returned it to the Alaska Railroad on the condition that it be restored to operation and put into service. The locomotive was sold to the non-profit Engine 557 Restoration Company for "One Dollar ($1.00) and other good and valuable considerations", and the group has since invested 77 months and over 75,000 hours of volunteer time in the restoration and overhaul.

In popular culture
The Alaska Railroad was prominently featured in the movie Runaway Train. The Simpson family rides the Alaska Railroad in The Simpsons Movie. The railroad is mentioned in the 1995 film Balto.
The railroad is the subject of Railroad Alaska, a 2013 reality TV series on Destination America.

See also
Alaskan Engineering Commission, the federal agency which constructed the Alaska railways
Anton Anderson Memorial Tunnel
Transportation in North America
White Pass and Yukon Route

References

General references
Alaska Railroad
Surface Transportation Board, Alaska Railroad Corporation – Construction and Operation Exemption – Rail Line Between Eielson Air Force Base (North Pole) and Fort Greely (Delta Junction), AK, October 4, 2007

External links
Alaska Railroad – A current route map for the ARR
Reconnaissance Survey for the Alaska Railroad – University of Washington Digital Collection
Historic American Engineering Record (HAER) documentation

1914 establishments in Alaska Alaska Railroad Historic American Engineering Record in Alaska Kenai Mountains-Turnagain Arm National Heritage Area Passenger railroads in Alaska Transportation in Anchorage, Alaska Transportation in Denali Borough, Alaska Transportation in Fairbanks North Star Borough, Alaska Transportation in Kenai Peninsula Borough, Alaska Transportation in Matanuska-Susitna Borough, Alaska Transportation in Unorganized Borough, Alaska Yukon–Koyukuk Census Area, Alaska Historic Civil Engineering Landmarks Regional railroads in the United States
Alaska Railroad
Engineering
3,841
24,509,257
https://en.wikipedia.org/wiki/Gymnopilus%20parvisquamulosus
Gymnopilus parvisquamulosus is a species of mushroom-forming fungus in the family Hymenogastraceae.

Description
The cap is in diameter.

Habitat and distribution
Gymnopilus parvisquamulosus grows in groups on conifer logs. It has been found in California and Maine between June and August.

See also
List of Gymnopilus species

References

Fungi of North America Fungi described in 1969 Taxa named by Lexemuel Ray Hesler Fungus species
Gymnopilus parvisquamulosus
Biology
107
39,614,812
https://en.wikipedia.org/wiki/Dimethyltryptamine-N-oxide
Dimethyltryptamine-N-oxide (DMT-N-oxide) is a dimethyltryptamine metabolite.

References

Tryptamines Amine oxides
Dimethyltryptamine-N-oxide
Chemistry
41
327,940
https://en.wikipedia.org/wiki/Paleoethnobotany
Paleoethnobotany (also spelled palaeoethnobotany), or archaeobotany, is the study of past human-plant interactions through the recovery and analysis of ancient plant remains. Both terms are synonymous, though paleoethnobotany (from the Greek words palaios [παλαιός] meaning ancient, ethnos [έθνος] meaning race or ethnicity, and votano [βότανο] meaning plants) is generally used in North America and acknowledges the contribution that ethnographic studies have made towards our current understanding of ancient plant exploitation practices, while the term archaeobotany (from the Greek words archaios [αρχαίος] meaning ancient and votano) is preferred in Europe and emphasizes the discipline's role within archaeology.

As a field of study, paleoethnobotany is a subfield of environmental archaeology. It involves the investigation of both ancient environments and human activities related to those environments, as well as an understanding of how the two co-evolved. Plant remains recovered from ancient sediments within the landscape or at archaeological sites serve as the primary evidence for various research avenues within paleoethnobotany, such as the origins of plant domestication, the development of agriculture, paleoenvironmental reconstructions, subsistence strategies, paleodiets, economic structures, and more.

Paleoethnobotanical studies are divided into two categories: those concerning the Old World (Eurasia and Africa) and those that pertain to the New World (the Americas). While this division has an inherent geographical distinction to it, it also reflects the differences in the flora of the two separate areas. For example, maize only occurs in the New World, while olives only occur in the Old World. Within this broad division, paleoethnobotanists tend to further focus their studies on specific regions, such as the Near East or the Mediterranean, since regional differences in the types of recovered plant remains also exist.

Macrobotanical vs. microbotanical remains
Plant remains recovered from ancient sediments or archaeological sites are generally referred to as either ‘macrobotanicals’ or ‘microbotanicals.’ Macrobotanical remains are vegetative parts of plants, such as seeds, leaves, stems and chaff, as well as wood and charcoal, which can be observed either with the naked eye or with the use of a low-powered microscope. Microbotanical remains consist of microscopic parts or components of plants, such as pollen grains, phytoliths and starch granules, that require the use of a high-powered microscope in order to see them. The study of seeds, wood/charcoal, pollen, phytoliths and starches each requires separate training, as slightly different techniques are employed for their processing and analysis. Paleoethnobotanists generally specialize in the study of a single type of macrobotanical or microbotanical remain, though they are familiar with the study of other types and can sometimes even specialize in more than one.

History
The state of paleoethnobotany as a discipline today stems from a long history of development that spans more than two hundred years. Its current form is the product of steady progression in all aspects of the field, including methodology, analysis and research.

Initial work
The study of ancient plant remains began in the 19th century as a result of chance encounters with desiccated and waterlogged material at archaeological sites. In Europe, the first analyses of plant macrofossils were conducted by the botanist C.
Kunth (1826) on desiccated remains from Egyptian tombs and O. Heer (1866) on waterlogged specimens from lakeside villages in Switzerland, after which point archaeological plant remains became of interest and continued to be studied periodically in different European countries until the mid-20th century. In North America, the first analysis of plant remains occurred slightly later and did not generate the same interest in this type of archaeological evidence until the 1930s, when Gilmore (1931) and Jones (1936) analysed desiccated material from rock shelters in the American Southwest. All these early studies, in both Europe and North America, largely focused on the simple identification of the plant remains in order to produce a list of the recovered taxa.

Establishment of the field
During the 1950s and 1960s, paleoethnobotany gained significant recognition as a field of archaeological research with two significant events: the publication of the Star Carr excavations in the UK and the recovery of plant material from archaeological sites in the Near East. Both convinced the archaeological community of the importance of studying plant remains by demonstrating their potential contribution to the discipline; the former produced a detailed paleoenvironmental reconstruction that was integral to the archaeological interpretation of the site, and the latter yielded the first evidence for plant domestication, which allowed for a fuller understanding of the archaeological record. Thereafter, the recovery and analysis of plant remains received greater attention as a part of archaeological investigations. In 1968, the International Work Group for Palaeoethnobotany (IWGP) was founded.

Expansion and growth
With the rise of processual archaeology, the field of paleoethnobotany began to grow significantly. The implementation in the 1970s of a new recovery method, called flotation, allowed archaeologists to begin systematically searching for plant macrofossils at every type of archaeological site. As a result, there was a sudden influx of material for archaeobotanical study, as carbonized and mineralized plant remains were becoming readily recovered from archaeological contexts. Increased emphasis on scientific analyses also renewed interest in the study of plant microbotanicals, such as phytoliths (1970s) and starches (1980s), while later advances in computational technology during the 1990s facilitated the application of software programs as tools for quantitative analysis. The 1980s and 1990s also saw the publication of several seminal volumes about paleoethnobotany that demonstrated the sound theoretical framework in which the discipline operates. And finally, the popularization of post-processual archaeology in the 1990s helped broaden the range of research topics addressed by paleoethnobotanists, for example 'food-related gender roles'.

Current state of the field
Paleoethnobotany is a discipline that is ever evolving, even up to the present day. Since the 1990s, the field has continued to gain a better understanding of the processes responsible for creating plant assemblages in the archaeological record and to refine its analytical and methodological approaches accordingly. For example, current studies have become much more interdisciplinary, utilizing various lines of investigation in order to gain a fuller picture of past plant economies.
Research avenues also continue to explore new topics pertaining to ancient human-plant interactions, such as the potential use of plant remains in relation to their mnemonic or sensory properties. Interest in plant remains surged in the 2000s alongside the improvement of stable isotope analysis and its application to archaeology, including the potential to illuminate the intensity of agricultural labor, resilience, and long-term social and economic changes. Archaeobotany had not been used extensively in Australia until recently. In 2018 a study of the Karnatukul site in the Little Sandy Desert of Western Australia showed evidence of continuous human habitation for around 50,000 years, by analysing wattle and other plant items.

Modes of preservation
As organic matter, plant remains generally decay over time due to microbial activity. In order to be recovered in the archaeological record, therefore, plant material must be subject to specific environmental conditions or cultural contexts that prevent its natural degradation. Plant macrofossils recovered as paleoenvironmental or archaeological specimens result from four main modes of preservation:

Carbonized (charred): Plant remains can survive in the archaeological record when they have been converted into charcoal through exposure to fire under low-oxygen conditions. Charred organic material is more resistant to deterioration, since it is only susceptible to chemical breakdown, which takes a long time (Weiner 2010). Due to the essential use of fire for many anthropogenic activities, carbonized remains constitute the most common type of plant macrofossil recovered from archaeological sites. This mode of preservation, however, tends to be biased towards plant remains that come into direct contact with fire for cooking or fuel purposes, as well as those that are more robust, such as cereal grains and nut shells.

Waterlogged: Preservation of plant material can also occur when it is deposited in permanently wet, anoxic conditions, because the absence of oxygen prohibits microbial activity. This mode of preservation can occur in deep archaeological features, such as wells, and in lakebed or riverbed sediments adjacent to settlements. A wide range of plant remains are usually preserved as waterlogged material, including seeds, fruit stones, nutshells, leaves, straw and other vegetative matter.

Desiccated: Another mode by which plant material can be preserved is desiccation, which only occurs in very arid environments, such as deserts, where the absence of water limits the decomposition of organic matter. Desiccated plant remains are a rarer recovery, but an incredibly important source of archaeological information, since all types of plant remains can survive, even very delicate vegetative attributes, such as onion skins and crocus stigmas (saffron), as well as woven textiles, bunches of flowers and entire fruits.

Mineralized: Plant material can also preserve in the archaeological record when its soft organic tissues are completely replaced by inorganic minerals. There are two types of mineralization processes. The first, 'biomineralization,' occurs when certain plant remains, such as the fruits of Celtis sp. (hackberry) or nutlets of the Boraginaceae family, naturally produce increased amounts of calcium carbonate or silica throughout their growth, resulting in calcified or silicified specimens.
The second, 'replacement mineralization,' occurs when plant remains absorb precipitating minerals present in the sediment or organic matter in which they are buried. This mode of preservation by mineralization only occurs under specific depositional conditions, usually involving a high presence of phosphate. Mineralized plant remains, therefore, are most commonly recovered from middens and latrine pits – contexts which often yield plant remains that have passed through the digestive tract, such as spices, grape pips and fig seeds. The mineralization of plant material can also occur when remains are deposited alongside metal artefacts, especially those made of bronze or iron. In this circumstance, the soft organic tissues are replaced by the leaching of corrosion products that form over time on the metal objects.

In addition to the above-mentioned modes of preservation, plant remains can also be occasionally preserved in a frozen state or as impressions. The former occurs quite rarely, but a famous example comes from Ötzi, the roughly 5,300-year-old mummy found frozen in the Ötztal Alps, whose stomach contents revealed the plant and meat components of his last meal. The latter occurs more regularly, though plant impressions do not actually preserve the macrobotanical remains themselves, but rather their negative imprints in pliable materials like clay, mudbrick or plaster. Impressions often result from the deliberate employment of plant material for decorative or technological purposes (such as the use of leaves to create patterning on ceramics or the use of chaff as temper in the construction of mudbricks); however, they can also derive from accidental inclusions. Identification of plant impressions is achieved by creating a silicone cast of the imprints and studying them under the microscope.

Recovery methods
In order to study ancient plant macrobotanical material, paleoethnobotanists employ a variety of recovery strategies that involve different sampling and processing techniques, depending on the kind of research questions they are addressing, the type of plant macrofossils they are expecting to recover and the location from which they are taking samples.

Sampling
In general, there are four different types of sampling methods that can be used for the recovery of plant macrofossils from an archaeological site:
Full coverage sampling: involves taking at least one sample from all contexts and features
Judgement sampling: entails the sampling of only areas and features most likely to yield ancient plant remains, such as a hearth
Random sampling: consists of taking random samples either arbitrarily or via a grid system
Systematic sampling: involves taking samples at set intervals during excavation

Each sampling method has its own pros and cons, and for this reason paleoethnobotanists sometimes implement more than one sampling method at a single site. In general, systematic or full coverage sampling is always recommended whenever possible. The practicalities of excavation, however, and/or the type of archaeological site under investigation sometimes limit their use, and judgement sampling tends to occur more often than not. Aside from sampling methods, there are also different types of samples that can be collected, for which the standard, recommended sample size is ~20 L for dry sites and 1–5 L for waterlogged sites.
Point/Spot samples: consist of sediment collected only from a particular location
Pinch samples: consist of small amounts of sediment that are collected from across the whole context and combined in one bag
Column samples: consist of sediment collected from the different stratigraphic layers of a column of sediment that was deliberately left unexcavated

These different types of samples again serve different research aims. For example, point/spot samples can reveal the spatial differentiation of food-related activities, pinch samples are representative of all activities associated with a specific context, and column samples can show change or variation over time.

The sampling methods and types of samples used for the recovery of microbotanical remains (namely, pollen, phytoliths, and starches) follow virtually the same practices as outlined above, with only some minor differences. First, the required sample size is much smaller: ~50 g (a couple of tablespoons) of sediment for each type of microfossil analysis. Secondly, artefacts, such as stone tools and ceramics, can also be sampled for microbotanicals. And third, control samples from unexcavated areas in and around the site should always be collected for analytical purposes.

Processing
There are several different techniques for the processing of sediment samples. The technique a paleoethnobotanist chooses depends entirely upon the type of plant macrobotanical remains they expect to recover.

Dry screening involves pouring sediment samples through a nest of sieves, usually ranging from 5–0.5 mm. This processing technique is often employed as a means of recovering desiccated plant remains, since the use of water can weaken or damage this type of macrofossil and even accelerate its decomposition.

Wet screening is most often used for waterlogged contexts. It follows the same basic principle as dry screening, except that water is gently sprayed onto the sediment once it has been poured into the nest of sieves, in order to help it break up and pass down through the various mesh sizes.

The wash-over technique was developed in the UK as an effective way of processing waterlogged samples. The sediment is poured into a bucket with water and gently agitated by hand. When the sediment has effectively broken up and the organic matter is suspended, all the contents of the bucket, except for the heavy inorganic matter at the bottom, are carefully poured out onto a 300 μm mesh. The bucket is then emptied and the organic matter carefully rinsed from the mesh back into the bucket. More water is added before the contents are again poured out through a nest of sieves.

Flotation is the most common processing technique employed for the recovery of carbonized plant remains. It uses water as a mechanism for separating charred and organic material from the sediment matrix by capitalizing on their buoyancy properties. When a sediment sample is slowly added to agitated water, the stones, sand, shells and other heavy material within the sediment sink to the bottom (the heavy fraction or heavy residue), while the charred and organic material, which is less dense, floats to the surface (the light fraction or flot). This floating material can either be scooped off or spilled over into a fine-mesh sieve (usually ~300 μm). Both the heavy and light fractions are then left to dry before being examined for archaeological remains.
Plant macrofossils are mostly contained within the light fraction, though some denser specimens, such as pulses or mineralized grape endosperms, are also sometimes found in the heavy fraction. Thus, each fraction must be sorted to extract all plant material. A microscope is used in order to aid the sorting of the light fractions, while heavy fractions are sorted with the naked eye. Flotation can be undertaken manually with buckets or by machine assistance, which circulates the water through a series of tanks by means of a pump. Small-scale, manual flotation can also be used in the laboratory on waterlogged samples.

Microbotanical remains (namely, pollen, phytoliths and starches) require completely different processing procedures in order to extract specimens from the sediment matrix. These procedures can be quite expensive, as they involve various chemical solutions, and are always carried out in the laboratory.

Analysis
Analysis is the key step in paleoethnobotanical studies that makes the interpretation of ancient plant remains possible. The quality of identifications and the use of different quantification methods are essential factors that influence the depth and breadth of interpretative results.

Identification
Plant macrofossils are analyzed under a low-powered stereomicroscope. The morphological features of different specimens, such as size, shape and surface decoration, are compared with images of modern plant material in identification literature, such as seed atlases, as well as real examples of modern plant material from reference collections, in order to make identifications. Based on the type of macrofossils and their level of preservation, identifications are made to various taxonomic levels, mostly family, genus and species. These taxonomic levels reflect varying degrees of identification specificity: families comprise big groups of similar-type plants; genera make up smaller groups of more closely related plants within each family; and species consist of the different individual plants within each genus. Poor preservation, however, may require the creation of broader identification categories, such as ‘nutshell’ or ‘cereal grain’, while extremely good preservation and/or the application of analytical technology, such as Scanning Electron Microscopy (SEM) or morphometric analysis, may allow even more precise identification, down to subspecies or variety level.

Desiccated and waterlogged macrofossils often have a very similar appearance to modern plant material, since their modes of preservation do not directly affect the remains. As a result, fragile seed features, such as anthers or wings, and occasionally even color, can be preserved, allowing for very precise identifications of this material. The high temperatures involved in the carbonization of plant remains, however, can sometimes cause damage to, or the loss of, plant macrofossil features. The analysis of charred plant material, therefore, often includes several family- or genus-level identifications, as well as some specimen categories. Mineralized plant macrofossils can range in preservation from detailed copies to rough casts, depending on depositional conditions and the kind of replacing mineral. This type of macrofossil can easily be mistaken for stones by the untrained eye.

Microbotanical remains follow the same identification principles, but require a high-powered (greater magnification) microscope with transmitted or polarized lighting.
Starch and phytolith identifications are also subject to limitations, in terms of taxonomic specificity, based on the state of current reference material for comparison and considerable overlap in specimen morphologies.

Quantification
After identification, paleoethnobotanists provide absolute counts for all plant macrofossils recovered in each individual sample. These counts constitute the raw analytical data and serve as the basis for any further quantitative methods that may be applied. Initially, paleoethnobotanical studies mostly involved a qualitative assessment of the plant remains at an archaeological site (presence and absence), but the application of simple statistical methods (non-multivariate) followed shortly thereafter. The use of more complex statistics (multivariate), however, is a more recent development. In general, simple statistics allow for observations concerning specimen values across space and over time, while more complex statistics facilitate the recognition of patterning within an assemblage, as well as the presentation of large datasets. The application of different statistical techniques depends on the quantity of material available. Complex statistics require the recovery of a large number of specimens (usually around 150 from each sample involved in this type of quantitative analysis), whereas simple statistics can be applied regardless of the number of recovered specimens – though obviously, the more specimens, the more effective the results.

The quantification of microbotanical remains differs slightly from that of macrobotanical remains, mostly due to the high numbers of microbotanical specimens that are usually present in samples. As a result, relative/percentage occurrence sums are usually employed in the quantification of microbotanical remains instead of absolute taxa counts.

Research results
The work done in paleoethnobotany is constantly furthering our understanding of ancient plant exploitation practices. The results are disseminated in digital archives, archaeological excavation reports and at academic conferences, as well as in books and journals related to archaeology, anthropology, plant history, paleoecology, and the social sciences. In addition to the use of plants as food (paleodiet, subsistence strategies and agriculture), paleoethnobotany has illuminated many other ancient uses for plants, some examples of which are provided below:
Production of bread/pastry in the widest sense
Production of beverages
Extraction of oils and dyes
Agricultural regimes (irrigation, manuring, and sowing)
Economic practices (production, storage, and trade)
Building materials
Fuel
Symbolic use in ritual activities

See also

References

Bibliography
Twiss, K.C. 2019. The Archaeology of Food. Cambridge: Cambridge University Press. ISBN 9781108670159.
Gremillion, K.J. 1997. People, Plants, and Landscapes: Studies in Paleoethnobotany. Tuscaloosa: University of Alabama Press.
Miksicek, C.H. 1987. "Formation Processes of the Archaeobotanical Record." In M.B. Schiffer (ed.), Advances in Archaeological Method and Theory 10. New York: Academic Press, 211–247.
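To make the quantification step described above concrete, the following is a minimal sketch of the kind of tabulation involved: absolute specimen counts per sample, a simple ubiquity (presence/absence) measure, and the relative-percentage sums more typical of microbotanical work. All taxa names, counts, and sample labels here are invented purely for illustration.

```python
# Hypothetical tabulation of identified plant remains; taxa and counts are invented.
from collections import Counter

# Absolute counts of identified specimens in two flotation samples.
samples = {
    "sample_1": Counter({"Triticum (wheat)": 42, "Hordeum (barley)": 17, "nutshell": 5}),
    "sample_2": Counter({"Triticum (wheat)": 8, "Hordeum (barley)": 30}),
}

# Ubiquity: the share of samples in which a taxon occurs at all,
# a simple presence/absence measure often reported alongside raw counts.
all_taxa = set().union(*samples.values())
for taxon in sorted(all_taxa):
    present = sum(1 for counts in samples.values() if counts[taxon] > 0)
    print(f"{taxon}: ubiquity {present / len(samples):.0%}")

# Relative percentages within each sample, as typically used for
# microbotanical remains (e.g., pollen), where absolute counts are less informative.
for name, counts in samples.items():
    total = sum(counts.values())
    print(name, {taxon: f"{n / total:.1%}" for taxon, n in counts.items()})
```

Tabulations like this feed directly into the simple (non-multivariate) and more complex (multivariate) statistics discussed above.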
External links

International Associations
Association of Environmental Archaeology (AEA)
International Work Group for Palaeoethnobotany (IWGP)

Journals
Vegetation History and Archaeobotany, exclusively publishing archaeobotanical/palaeoethnobotanical research; the official publishing organ of the IWGP
Archaeological and Anthropological Sciences
Environmental Archaeology
Interdisciplinaria Archaeologica (IANSA)

Various knowledge resources
ArchBotLit, Kiel University
Digital Plant Atlas, Groningen University
Integrated Archaeobotanical Research Project (IAR), originally hosted at the University of Sheffield
Terry B. Ball, "Phytolith Literature Review"
Steve Archer, "About Phytoliths"
Alwynne B. Beaudoin, "The Dung File"

Anthropology Archaeological sub-disciplines Branches of botany Ethnobotany
Paleoethnobotany
Biology
4,791
37,578,426
https://en.wikipedia.org/wiki/Melcom
Melcom is a supermarket chain consisting of 65 shops spread all over Ghana. It was started in 1989 by the Indian magnate Bhagwan Khubchandani. His late father, Ramchand Khubchandani, had arrived in the then Gold Coast in 1929 as a 14-year-old to work as a store boy. The Melcom Group of Companies is a family business consisting of six separate entities: Melcom Limited, Century Industries Limited, Crownstar Electronic Industries Limited, Melcom Hospitality, Melcom Travels, and Melcom Care. Aside from holding an extensive retail market share with a network of more than 60 retail outlets spread all over Ghana (Melcom Limited), the Group is well diversified into other businesses. The Melcom Group is best known for its retail chain, Melcom Limited. As Ghana’s largest chain of retail department stores, Melcom offers thousands of products and hundreds of well-known brands.

History

Disasters in 2012
In 2012 Melcom suffered two major accidents. On 7 November 2012, Melcom's five-story shopping mall at Achimota, near Accra, collapsed, trapping many people inside. The NADMO organized a rescue mission; in all, 82 people, including 14 dead, were pulled out of the rubble. Melcom had been operating in the building, which it said it had rented, only since January 2012. The NADMO promised an investigation to find out the cause of the collapse. Reports suggested that the building had structural defects owing to a lack of adherence to building codes and the use of improper materials.

On December 22, 2012, Melcom suffered another major accident when its mall in Agona Swedru in the Central Region of Ghana was destroyed by a fire, which was attended to by the GNFRS. Because the fire occurred after the close of work, there were no casualties. However, adjoining warehouses stocked with items for Christmas were also burnt down.

References

Supermarkets of Africa Engineering failures Food and drink companies of Ghana Companies established in 1989 1989 establishments in Ghana
Melcom
Technology,Engineering
418
56,135,578
https://en.wikipedia.org/wiki/Microsoft%20Messaging
Messaging (also known as Microsoft Messaging and, more recently, Windows Operator Messages) is an instant messaging Universal Windows Platform app for Windows 8.0, Windows 10 and Windows 10 Mobile. The mobile version allows SMS, MMS and RCS messaging. The desktop version is restricted to showing SMS messages sent via Skype and billing SMS messages from an LTE operator. More recently, the app was refocused as an app for SMS data-plan notifications, in which the user's mobile operator sends messages about their data plan; this followed the app's messaging functionality being switched over to Skype. It was also partially renamed to Windows Operator Messages.

External links
Send a text message — Microsoft Support

References

Windows components Instant messaging
Microsoft Messaging
Technology
143
25,279,655
https://en.wikipedia.org/wiki/Carbide-derived%20carbon
Carbide-derived carbon (CDC), also known as tunable nanoporous carbon, is the common term for carbon materials derived from carbide precursors, such as binary carbides (e.g., SiC, TiC) or ternary carbides, also known as MAX phases (e.g., Ti2AlC, Ti3SiC2). CDCs have also been derived from polymer-derived ceramics such as Si-O-C or Ti-C, and from carbonitrides, such as Si-N-C. CDCs can occur in various structures, ranging from amorphous to crystalline carbon, from sp2- to sp3-bonded, and from highly porous to fully dense. Among others, the following carbon structures have been derived from carbide precursors: micro- and mesoporous carbon, amorphous carbon, carbon nanotubes, onion-like carbon, nanocrystalline diamond, graphene, and graphite. Among carbon materials, microporous CDCs exhibit some of the highest reported specific surface areas (up to more than 3000 m2/g). By varying the type of precursor and the CDC synthesis conditions, microporous and mesoporous structures with controllable average pore size and pore size distributions can be produced. Depending on the precursor and the synthesis conditions, the average pore size can be controlled with sub-angstrom accuracy. This ability to precisely tune the size and shape of pores makes CDCs attractive for the selective sorption and storage of liquids and gases (e.g., hydrogen, methane, CO2), while their high electrical conductivity and electrochemical stability allow these structures to be effectively implemented in electrical energy storage and capacitive water desalinization.

History
The production of SiCl4 by the high-temperature reaction of chlorine gas with silicon carbide was first patented in 1918 by Otis Hutchins, with the process further optimized for higher yields in 1956. The solid porous carbon product was initially regarded as a waste byproduct until its properties and potential applications were investigated in more detail in 1959 by Walter Mohun. Research was carried out in the 1960s–1980s, mostly by Russian scientists, on the synthesis of CDC via halogen treatment, while hydrothermal treatment was explored as an alternative route to derive CDCs in the 1990s. Most recently, research activities have centered on optimized CDC synthesis and nanoengineered CDC precursors.

Nomenclature
Historically, various terms have been used for CDC, such as "mineral carbon" or "nanoporous carbon". Later, a more adequate nomenclature introduced by Yury Gogotsi was adopted that clearly denotes the precursor. For example, CDC derived from silicon carbide has been referred to as SiC-CDC, Si-CDC, or SiCDC. Recently, it was recommended to adhere to a unified precursor-CDC nomenclature to reflect the chemical composition of the precursor (e.g., B4C-CDC, Ti3SiC2-CDC, W2C-CDC).

Synthesis
CDCs have been synthesized using several chemical and physical synthesis methods. Most commonly, dry chlorine treatment is used to selectively etch metal or metalloid atoms from the carbide precursor lattice. The term "chlorine treatment" is to be preferred over "chlorination", as the chlorinated product, metal chloride, is the discarded byproduct, and the carbon itself remains largely unreacted. This method is implemented for commercial production of CDC by Skeleton in Estonia and Carbon-Ukraine. Hydrothermal etching has also been used for the synthesis of SiC-CDC, which yielded a route to porous carbon films and nanodiamond synthesis.

Chlorine treatment
The most common method for producing porous carbide-derived carbons involves high-temperature etching with halogens, most commonly chlorine gas.
The following generic equation describes the reaction of a metal carbide with chlorine gas (M: Si, Ti, V; similar equations can be written for other CDC precursors):

MC (solid) + 2 Cl2 (gas) → MCl4 (gas) + C (solid)

Halogen treatment at temperatures between 200 and 1000 °C has been shown to yield mostly disordered porous carbons with a porosity between 50 and ~80 vol%, depending on the precursor. Temperatures above 1000 °C result in predominantly graphitic carbon and an observed shrinkage of the material due to graphitization. The linear growth rate of the solid carbon product phase suggests a reaction-driven kinetic mechanism, but the kinetics become diffusion-limited for thicker films or larger particles. A high mass-transport condition (high gas flow rates) facilitates the removal of the chloride and shifts the reaction equilibrium towards the CDC product. Chlorine treatment has successfully been employed for CDC synthesis from a variety of carbide precursors, including SiC, TiC, B4C, BaC2, CaC2, Cr3C2, Fe3C, Mo2C, Al4C3, Nb2C, SrC2, Ta2C, VC, WC, W2C, ZrC, ternary carbides such as Ti2AlC, Ti3AlC2, and Ti3SiC2, and carbonitrides such as Ti2AlC0.5N0.5.

Most produced CDCs exhibit a prevalence of micropores (< 2 nm) and mesopores (between 2 and 50 nm), with specific distributions affected by the carbide precursor and synthesis conditions. Hierarchical porosity can be achieved by using polymer-derived ceramics, with or without utilizing a templating method. Templating yields an ordered array of mesopores in addition to the disordered network of micropores. It has been shown that the initial crystal structure of the carbide is the primary factor affecting the CDC porosity, especially for low-temperature chlorine treatment. In general, a larger spacing between carbon atoms in the lattice correlates with an increase in the average pore diameter. As the synthesis temperature increases, the average pore diameter increases, while the pore size distribution becomes broader. The overall shape and size of the carbide precursor, however, is largely maintained, and CDC formation is usually referred to as a conformal process.

Vacuum decomposition
Metal or metalloid atoms from carbides can selectively be extracted at high temperatures (usually above 1200 °C) under vacuum. The underlying mechanism is incongruent decomposition of carbides, exploiting the high melting point of carbon compared to the corresponding carbide metals, which melt and eventually evaporate away, leaving the carbon behind.

Like halogen treatment, vacuum decomposition is a conformal process. The resulting carbon structures are, as a result of the higher temperatures, more ordered, and carbon nanotubes and graphene can be obtained. In particular, vertically aligned carbon nanotube films of high tube density have been reported for vacuum decomposition of SiC. The high tube density translates into a high elastic modulus and high buckling resistance, which is of particular interest for mechanical and tribological applications. While carbon nanotube formation occurs when trace amounts of oxygen are present, very high vacuum conditions (approaching 10⁻⁸–10⁻¹⁰ torr) result in the formation of graphene sheets. If the conditions are maintained, graphene transitions into bulk graphite.
In particular, by vacuum annealing silicon carbide single crystals (wafers) at 1200–1500 °C, metal/metalloid atoms are selectively removed and a film of one to three graphene layers (depending on the treatment time) is formed, through a conformal transformation of three layers of silicon carbide into one monolayer of graphene. Also, graphene formation occurs preferentially on the Si-face of 6H-SiC crystals, while nanotube growth is favored on the C-face of SiC.

Hydrothermal decomposition
The removal of metal atoms from carbides has been reported at high temperatures (300–1000 °C) and pressures (2–200 MPa). The following reactions are possible between metal carbides and water:

MC + x H2O → MOx + CH4
MC + (x+1) H2O → MOx + CO + (x+1) H2
MC + (x+2) H2O → MOx + CO2 + (x+2) H2
MC + x H2O → MOx + C + x H2

Only the last reaction yields solid carbon. The yield of carbon-containing gases increases with pressure (decreasing the solid carbon yield) and decreases with temperature (increasing the carbon yield). The ability to produce a usable porous carbon material is dependent on the solubility of the formed metal oxide (such as SiO2) in supercritical water. Hydrothermal carbon formation has been reported for SiC, TiC, WC, TaC, and NbC. The insolubility of metal oxides, for example TiO2, is a significant complication for certain metal carbides (e.g., Ti3SiC2).

Applications
One application of carbide-derived carbons is as an active material in electrodes for electric double-layer capacitors, which have become commonly known as supercapacitors or ultracapacitors. This is motivated by their good electrical conductivity combined with high surface area, large micropore volume, and pore size control, which make it possible to match the porosity metrics of the porous carbon electrode to a given electrolyte. In particular, when the pore size approaches the size of the (desolvated) ion in the electrolyte, there is a significant increase in the capacitance. The electrically conductive carbon material minimizes resistance losses in supercapacitor devices and enhances charge screening and confinement, maximizing the packing density and subsequent charge storage capacity of microporous CDC electrodes. CDC electrodes have been shown to yield a gravimetric capacitance of up to 190 F/g in aqueous electrolytes and 180 F/g in organic electrolytes. The highest capacitance values are observed for matching ion/pore systems, which allow high-density packing of ions in pores in superionic states. However, small pores, especially when combined with an overall large particle diameter, impose an additional diffusion limitation on the ion mobility during charge/discharge cycling. The prevalence of mesopores in the CDC structure allows for more ions to move past each other during charging and discharging, allowing for faster scan rates and improved rate-handling abilities. Conversely, by implementing nanoparticle carbide precursors, shorter pore channels allow for higher electrolyte mobility, resulting in faster charge/discharge rates and higher power densities.

Proposed applications

Gas storage and carbon dioxide capture
TiC-CDC activated with KOH or CO2 stores up to 21 wt.% of methane at 25 °C at high pressure. CDCs with subnanometer pores in the 0.50–0.88 nm diameter range have been shown to store up to 7.1 mol CO2/kg at 1 bar and 0 °C. CDCs also store up to 3 wt.% hydrogen at 60 bar and −196 °C, with additional increases possible as a result of chemical or physical activation of the CDC materials.
SiOC-CDC with large subnanometer pore volumes is able to store over 5.5 wt.% hydrogen at 60 bar and −196 °C, almost reaching the goal of the US Department of Energy of 6 wt.% storage density for automotive applications. Methane storage densities of over 21.5 wt.% can be achieved for this material under those conditions. In particular, a predominance of pores with subnanometer diameters and large pore volumes is instrumental in increasing storage densities.

Tribological coatings
CDC films obtained by vacuum annealing (ESK) or chlorine treatment of SiC ceramics yield a low friction coefficient. The friction coefficient of SiC, which is widely used in tribological applications for its high mechanical strength and hardness, can therefore decrease from ~0.7 to ~0.2 or less under dry conditions. It is important to note that graphite cannot operate in dry environments. The porous three-dimensional network of CDC allows for high ductility and an increased mechanical strength, minimizing fracture of the film under an applied force. These coatings find applications in dynamic seals. The friction properties can be further tailored with high-temperature hydrogen annealing and subsequent hydrogen termination of dangling bonds.

Protein adsorption
Carbide-derived carbons with a mesoporous structure remove large molecules from biofluids. Like other carbons, CDCs possess good biocompatibility. CDCs have been demonstrated to remove cytokines such as TNF-alpha, IL-6, and IL-1beta from blood plasma. These are the most common receptor-binding agents released into the body during a bacterial infection; they cause the primary inflammatory response during the attack and increase the potential lethality of sepsis, making their removal a very important concern. The rates and levels of removal of the above cytokines (85–100% removed within 30 minutes) are higher than those observed for comparable activated carbons.

Catalyst support
Pt nanoparticles can be introduced to the SiC/C interface during chlorine treatment (in the form of Pt3Cl3). The particles diffuse through the material to form Pt particle surfaces, which may serve as catalyst support layers. In particular, in addition to Pt, other noble elements such as gold can be deposited into the pores, with the resulting nanoparticle size controlled by the pore size and overall pore size distribution of the CDC substrate. Such gold or platinum nanoparticles can be smaller than 1 nm even without employing surface coatings. Au nanoparticles in different CDCs (TiC-CDC, Mo2C-CDC, B4C-CDC) catalyze the oxidation of carbon monoxide.

Capacitive deionization (CDI)
As the desalinization and purification of water is critical for obtaining deionized water for laboratory research, large-scale chemical synthesis in industry, and consumer applications, the use of porous materials for this application has received particular interest. Capacitive deionization operates in a fashion similar to a supercapacitor. As ion-containing water (electrolyte) is passed between two porous electrodes with an applied potential across the system, the corresponding ions assemble into a double layer in the pores of the two terminals, decreasing the ion content in the liquid exiting the purification device. Due to the ability of carbide-derived carbons to closely match the size of ions in the electrolyte, side-by-side comparisons of desalinization devices based on CDCs and activated carbon showed a significant efficiency increase in the 1.2–1.4 V range compared to activated carbon.
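As a rough, back-of-the-envelope illustration of what the gravimetric capacitances quoted earlier imply, the sketch below converts them into specific energy via E = ½CV². The voltage windows used (about 1.0 V for aqueous and about 2.7 V for organic electrolytes) are assumptions typical of those electrolyte classes rather than figures from the text, and the results refer to the active electrode material alone, not a packaged device.

```python
# Specific energy implied by a gravimetric capacitance: E = 1/2 * C * V^2.
# Capacitances (190 F/g aqueous, 180 F/g organic) come from the text above;
# the voltage windows are assumed, typical values for each electrolyte class.

def specific_energy_wh_per_kg(capacitance_f_per_g: float, voltage_v: float) -> float:
    """Specific energy in Wh/kg of active electrode material."""
    joules_per_gram = 0.5 * capacitance_f_per_g * voltage_v ** 2
    return joules_per_gram * 1000.0 / 3600.0  # J/g -> J/kg -> Wh/kg

print(f"aqueous (190 F/g, ~1.0 V): {specific_energy_wh_per_kg(190, 1.0):.0f} Wh/kg")  # ~26
print(f"organic (180 F/g, ~2.7 V): {specific_energy_wh_per_kg(180, 2.7):.0f} Wh/kg")  # ~182
```

The quadratic dependence on voltage is why organic electrolytes, despite slightly lower capacitance, are favored when energy density matters; actual device-level figures are substantially lower once electrolyte, separator, and packaging mass are included.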
Commercial production and applications
Having originated as a by-product of industrial metal chloride synthesis, CDC certainly has potential for large-scale production at moderate cost. Currently, only small companies engage in the production of carbide-derived carbons and their implementation in commercial products. For example, Skeleton, which is located in Tartu, Estonia, and Carbon-Ukraine, located in Kyiv, Ukraine, have a diverse product line of porous carbons for supercapacitors, gas storage, and filtration applications. In addition, numerous educational and research institutions worldwide are engaged in basic research on CDC structure and synthesis, or (indirectly) on their use in various high-end applications.

See also
Hydrogen storage
Hydrogen economy
Nanotechnology
Nanomaterials
Nanoengineering

References

External links
http://nano.materials.drexel.edu
http://skeletontech.com/
http://carbon.org.ua/

Allotropes of carbon Capacitors Nanomaterials
Carbide-derived carbon
Physics,Chemistry,Materials_science
3,323
2,018
https://en.wikipedia.org/wiki/A.%20J.%20Ayer
Sir Alfred Jules "Freddie" Ayer (29 October 1910 – 27 June 1989) was an English philosopher known for his promotion of logical positivism, particularly in his books Language, Truth, and Logic (1936) and The Problem of Knowledge (1956). Ayer was educated at Eton College and the University of Oxford, after which he studied the philosophy of logical positivism at the University of Vienna. From 1933 to 1940 he lectured on philosophy at Christ Church, Oxford. During the Second World War, Ayer served as an agent for the Special Operations Executive and MI6.

Ayer was Grote Professor of the Philosophy of Mind and Logic at University College London from 1946 until 1959, after which he returned to Oxford to become Wykeham Professor of Logic at New College. He was president of the Aristotelian Society from 1951 to 1952 and knighted in 1970. He was known for his advocacy of humanism, and was the second president of the British Humanist Association (now known as Humanists UK). Ayer was president of the Homosexual Law Reform Society for a time; he remarked, "as a notorious heterosexual I could never be accused of feathering my own nest."

Life
Ayer was born in St John's Wood, in north west London, to Jules Louis Cyprien Ayer and Reine (née Citroen), wealthy parents from continental Europe. His mother was from the Dutch-Jewish family that founded the Citroën car company in France; his father was a Swiss Calvinist financier who worked for the Rothschild family, including for their bank and as secretary to Alfred Rothschild.

Ayer was educated at Ascham St Vincent's School, a former boarding preparatory school for boys in the seaside town of Eastbourne in Sussex, where he started boarding at the relatively early age of seven for reasons to do with the First World War, and at Eton College, where he was a King's Scholar. At Eton, Ayer first became known for his characteristic bravado and precocity. Though primarily interested in his intellectual pursuits, he was very keen on sports, particularly rugby, and reputedly played the Eton Wall Game very well. In the final examinations at Eton, Ayer came second in his year, and first in classics. In his final year, as a member of Eton's senior council, he unsuccessfully campaigned for the abolition of corporal punishment at the school. He won a classics scholarship to Christ Church, Oxford, and graduated with a BA with first-class honours.

After graduating from Oxford, Ayer spent a year in Vienna, returned to England and published his first book, Language, Truth and Logic, in 1936. This, the first exposition in English of logical positivism as newly developed by the Vienna Circle, made Ayer, at age 26, the enfant terrible of British philosophy. As a newly famous intellectual, he played a prominent role in the Oxford by-election campaign of 1938. Ayer campaigned first for the Labour candidate Patrick Gordon Walker, and then for the joint Labour-Liberal "Independent Progressive" candidate Sandie Lindsay, who ran on an anti-appeasement platform against the Conservative candidate, Quintin Hogg, who ran as the appeasement candidate. The by-election, held on 27 October 1938, was quite close, with Hogg winning narrowly.

In the Second World War, Ayer served as an officer in the Welsh Guards, chiefly in intelligence (Special Operations Executive (SOE) and MI6). He was commissioned as a second lieutenant into the Welsh Guards from the Officer Cadet Training Unit on 21 September 1940. After the war, Ayer briefly returned to the University of Oxford, where he became a fellow and Dean of Wadham College.
He then taught philosophy at University College London from 1946 until 1959, during which time he started to appear on radio and television. He was an extrovert and social mixer who liked dancing and attending clubs in London and New York. He was also obsessed with sport: he had played rugby for Eton, and was a noted cricketer and a keen supporter of the Tottenham Hotspur football team, where he was for many years a season ticket holder. For an academic, Ayer was an unusually well-connected figure in his time, with close links to 'high society' and the establishment. Presiding over Oxford high tables, he was often described as charming, but he could also be intimidating.

Ayer was married four times to three women. His first marriage was from 1932 to 1941, to (Grace Isabel) Renée, with whom he had a son (allegedly the son of Ayer's friend and colleague Stuart Hampshire) and a daughter. Renée subsequently married Hampshire. In 1960, Ayer married Alberta Constance (Dee) Wells, with whom he had one son. That marriage was dissolved in 1983, and the same year Ayer married Vanessa Salmon, the former wife of the politician Nigel Lawson. She died in 1985, and in 1989 Ayer remarried Wells, who survived him. He also had a daughter with the Hollywood columnist Sheilah Graham Westbrook.

In 1950, Ayer attended the founding meeting of the Congress for Cultural Freedom in West Berlin, though he later said he went only because of the offer of a "free trip". He gave a speech on why John Stuart Mill's conceptions of liberty and freedom were still valid in the 20th century. Together with the historian Hugh Trevor-Roper, Ayer fought against Arthur Koestler and Franz Borkenau, arguing that they were far too dogmatic and extreme in their anti-communism, in fact proposing illiberal measures in the defence of liberty. Adding to the tension was the location of the congress in West Berlin, together with the fact that the Korean War began on 25 June 1950, the fourth day of the congress, giving a feeling that the world was on the brink of war.

From 1959 to his retirement in 1978, Ayer held the Wykeham Chair of Logic at Oxford. He was knighted in 1970. After his retirement, Ayer taught or lectured several times in the United States, including as a visiting professor at Bard College in 1987. At a party that same year held by the fashion designer Fernando Sanchez, Ayer confronted Mike Tyson, who was forcing himself upon the then little-known model Naomi Campbell. When Ayer demanded that Tyson stop, Tyson reportedly asked, "Do you know who the fuck I am? I'm the heavyweight champion of the world", to which Ayer replied, "And I am the former Wykeham Professor of Logic. We are both pre-eminent in our field. I suggest that we talk about this like rational men". Ayer and Tyson then began to talk, allowing Campbell to slip out.

Ayer was also involved in politics, including anti-Vietnam War activism, supporting the Labour Party (and later the Social Democratic Party), chairing the Campaign Against Racial Discrimination in Sport, and serving as president of the Homosexual Law Reform Society. In 1988, a year before his death, Ayer wrote an article titled "What I saw when I was dead", describing an unusual near-death experience after his heart stopped for four minutes as he choked on smoked salmon. Of the experience, he first said that it "slightly weakened my conviction that my genuine death ... will be the end of me, though I continue to hope that it will be."
A few weeks later, he revised this, saying, "what I should have said is that my experiences have weakened, not my belief that there is no life after death, but my inflexible attitude towards that belief". Ayer died on 27 June 1989. From 1980 to 1989 he lived at 51 York Street, Marylebone, where a memorial plaque was unveiled on 19 November 1995. Philosophical ideas In Language, Truth and Logic (1936), Ayer presents the verification principle as the only valid basis for philosophy. Unless logical or empirical verification is possible, statements like "God exists" or "charity is good" are not true or untrue but meaningless, and may thus be excluded or ignored. Religious language in particular is unverifiable and as such literally nonsense. He also criticises C. A. Mace's opinion that metaphysics is a form of intellectual poetry. The stance that a belief in God denotes no verifiable hypothesis is sometimes referred to as igtheism (for example, by Paul Kurtz). In later years, Ayer reiterated that he did not believe in God and began to call himself an atheist. He followed in the footsteps of Bertrand Russell by debating religion with the Jesuit scholar Frederick Copleston. Ayer's version of emotivism divides "the ordinary system of ethics" into four classes: "Propositions that express definitions of ethical terms, or judgements about the legitimacy or possibility of certain definitions" "Propositions describing the phenomena of moral experience, and their causes" "Exhortations to moral virtue" "Actual ethical judgements" He focuses on propositions of the first class (moral judgements), saying that those of the second class belong to science, those of the third are mere commands, and those of the fourth (which are considered normative ethics as opposed to meta-ethics) are too concrete for ethical philosophy. Ayer argues that moral judgements cannot be translated into non-ethical, empirical terms and thus cannot be verified; in this he agrees with ethical intuitionists. But he differs from intuitionists by discarding appeals to intuition of non-empirical moral truths as "worthless", since the intuition of one person often contradicts that of another. Instead, Ayer concludes that ethical concepts are "mere pseudo-concepts". Between 1945 and 1947, together with Russell and George Orwell, Ayer contributed a series of articles to Polemic, a short-lived British magazine of philosophy, psychology and aesthetics edited by the ex-Communist Humphrey Slater. Ayer was closely associated with the British humanist movement. He was an Honorary Associate of the Rationalist Press Association from 1947 until his death. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1963. In 1965, he became the first president of the Agnostics' Adoption Society and in the same year succeeded Julian Huxley as president of the British Humanist Association, a post he held until 1970. In 1968 he edited The Humanist Outlook, a collection of essays on the meaning of humanism. He was one of the signers of the Humanist Manifesto. Works Ayer is best known for popularising the verification principle, in particular through his presentation of it in Language, Truth, and Logic. The principle was at the time at the heart of the debates of the so-called Vienna Circle, which Ayer had visited as a young guest. Others, including the circle's leading light, Moritz Schlick, were already writing papers on the issue.
Ayer's formulation was that a sentence can be meaningful only if it has verifiable empirical import; otherwise, it is either "analytical" if tautologous or "metaphysical" (i.e. meaningless, or "literally senseless"). He started to work on the book at the age of 23 and it was published when he was 26. Ayer's philosophical ideas were deeply influenced by those of the Vienna Circle and David Hume. His clear, vibrant and polemical exposition of them makes Language, Truth and Logic essential reading on the tenets of logical empiricism; the book is regarded as a classic of 20th-century analytic philosophy and is widely read in philosophy courses around the world. In it, Ayer also proposes that the distinction between a conscious man and an unconscious machine resolves itself into a distinction between "different types of perceptible behaviour", an argument that anticipates the Turing test, published in 1950, for assessing a machine's capability to demonstrate intelligence. Ayer wrote two books on the philosopher Bertrand Russell, Russell and Moore: The Analytical Heritage (1971) and Russell (1972). He also wrote an introductory book on the philosophy of David Hume and a short biography of Voltaire. Ayer was a strong critic of the German philosopher Martin Heidegger. As a logical positivist, Ayer was in conflict with Heidegger's vast, overarching theories of existence. Ayer considered them completely unverifiable through empirical demonstration and logical analysis, and this sort of philosophy an unfortunate strain in modern thought. He considered Heidegger the worst example of such philosophy, which Ayer believed entirely useless. In Philosophy in the Twentieth Century, Ayer accuses Heidegger of "surprising ignorance" or "unscrupulous distortion" and "what can fairly be described as charlatanism." In 1972–73, Ayer gave the Gifford Lectures at the University of St Andrews, later published as The Central Questions of Philosophy. In the book's preface, he defends his selection to hold the lectureship on the basis that Lord Gifford wished to promote "natural theology, in the widest sense of that term", and that non-believers are allowed to give the lectures if they are "able reverent men, true thinkers, sincere lovers of and earnest inquirers after truth". He still believed in the viewpoint he shared with the logical positivists: that large parts of what was traditionally called philosophy (including metaphysics, theology and aesthetics) were not matters that could be judged true or false, and that it was thus meaningless to discuss them. In The Concept of a Person and Other Essays (1963), Ayer heavily criticised Wittgenstein's private language argument. Ayer's sense-data theory in Foundations of Empirical Knowledge was famously criticised by fellow Oxonian J. L. Austin in Sense and Sensibilia, a landmark 1950s work of ordinary language philosophy. Ayer responded in the essay "Has Austin Refuted the Sense-datum Theory?", which can be found in his Metaphysics and Common Sense (1969). Awards Ayer was awarded a knighthood as Knight Bachelor in the London Gazette on 1 January 1970. Collections In 2007, Ayer's biographer, Ben Rogers, deposited seven boxes of research material, accumulated while writing the biography, at University College London. The material was donated in collaboration with Ayer's family. Selected publications 1936, Language, Truth, and Logic, London: Gollancz; 2nd ed., with new introduction (1946). 1936, "Causation and free will", The Aryan Path.
1940, The Foundations of Empirical Knowledge, London: Macmillan. 1954, Philosophical Essays, London: Macmillan. (Essays on freedom, phenomenalism, basic propositions, utilitarianism, other minds, the past, ontology.) 1957, "The conception of probability as a logical relation", in S. Korner, ed., Observation and Interpretation in the Philosophy of Physics, New York: Dover Publications. 1956, The Problem of Knowledge, London: Macmillan. 1957, "Logical Positivism - A Debate" (with F. C. Copleston) in: Edwards, Paul, Pap, Arthur (eds.), A Modern Introduction to Philosophy; readings from classical and contemporary sources. 1963, The Concept of a Person and Other Essays, London: Macmillan. (Essays on truth, privacy and private languages, laws of nature, the concept of a person, probability.) 1967, "Has Austin Refuted the Sense-Datum Theory?" Synthese vol. XVIII, pp. 117–140. (Reprinted in Ayer 1969.) 1968, The Origins of Pragmatism, London: Macmillan. 1969, Metaphysics and Common Sense, London: Macmillan. (Essays on knowledge, man as a subject for science, chance, philosophy and politics, existentialism, metaphysics, and a reply to Austin on sense-data theory [Ayer 1967].) 1971, Russell and Moore: The Analytical Heritage, London: Macmillan. 1972, Probability and Evidence, London: Macmillan. 1972, Russell, London: Fontana Modern Masters. 1973, The Central Questions of Philosophy, London: Weidenfeld. 1977, Part of My Life, London: Collins. 1979, "Replies", in G. F. Macdonald, ed., Perception and Identity: Essays Presented to A. J. Ayer, With His Replies, London: Macmillan; Ithaca, N.Y.: Cornell University Press. 1980, Hume, Oxford: Oxford University Press. 1982, Philosophy in the Twentieth Century, London: Weidenfeld. 1984, Freedom and Morality and Other Essays, Oxford: Clarendon Press. 1984, More of My Life, London: Collins. 1986, Ludwig Wittgenstein, London: Penguin. 1986, Voltaire, New York: Random House. 1988, Thomas Paine, London: Secker & Warburg. 1990, The Meaning of Life and Other Essays, Weidenfeld & Nicolson. 1991, "A Defense of Empiricism" in: Griffiths, A. Phillips (ed.), A. J. Ayer: Memorial Essays (Royal Institute of Philosophy Supplements), Cambridge University Press. 1992, "Intellectual Autobiography" and Replies in: Lewis Edwin Hahn (ed.), The Philosophy of A.J. Ayer (The Library of Living Philosophers Volume XXI), Open Court Publishing Co. *For more complete publication details see "The Philosophical Works of A. J. Ayer" (1979) and "Bibliography of the writings of A.J. Ayer" (1992). See also A priori knowledge List of British philosophers References Footnotes Works cited Ayer, A.J. (1989). "That undiscovered country", New Humanist, Vol. 104 (1), May, pp. 10–13. Rogers, Ben (1999). A.J. Ayer: A Life. New York: Grove Press. (Chapter one and a review by Hilary Spurling, The New York Times, 24 December 2000.) Further reading Jim Holt, "Positive Thinking" (review of Karl Sigmund, Exact Thinking in Demented Times: The Vienna Circle and the Epic Quest for the Foundations of Science, Basic Books, 449 pp.), The New York Review of Books, vol. LXIV, no. 20 (21 December 2017), pp. 74–76. Ted Honderich, Ayer's Philosophy and its Greatness. Anthony Quinton, Alfred Jules Ayer. Proceedings of the British Academy, 94 (1996), pp. 255–282. Graham Macdonald, Alfred Jules Ayer, Stanford Encyclopedia of Philosophy, 7 May 2005.
External links "Logical Positivism" (video) Men of Ideas interview with Bryan Magee (1978) "Frege, Russell, and Modern Logic" (video) The Great Philosophers interview with Bryan Magee (1987) Ayer's Elizabeth Rathbone Lecture on Philosophy & Politics Ayer entry in the Stanford Encyclopedia of Philosophy A.J. Ayer: Out of time by Alex Callinicos Appearance on Desert Island Discs – 3 August 1984 Ayer (Rogers) Papers at University College London 1910 births 1989 deaths 20th-century atheists 20th-century English non-fiction writers Academics of University College London Alumni of Christ Church, Oxford Analytic philosophers Aristotelian philosophers Atheism in the United Kingdom Atheist philosophers Bard College faculty British Army personnel of World War II British people of Dutch-Jewish descent British people of Swiss descent British critics of religions British Special Operations Executive personnel Empiricists English atheists English humanists English logicians English people of Dutch-Jewish descent English people of Swiss descent 20th-century English philosophers British epistemologists Fellows of Christ Church, Oxford Fellows of the American Academy of Arts and Sciences Fellows of the British Academy Jewish atheists Jewish humanists Jewish philosophers Knights Bachelor Linguistic turn Logical positivism Logicians Ontologists People educated at Eton College People from St John's Wood British philosophers of culture British philosophers of education Philosophers of history British philosophers of language British philosophers of logic British philosophers of mind British philosophers of religion British philosophers of science Philosophers of technology Philosophy writers English political philosophers Presidents of the Aristotelian Society Presidents of Humanists UK Vienna Circle Welsh Guards officers Wykeham Professors of Logic English LGBTQ rights activists
A. J. Ayer
Mathematics
4,157
14,445,873
https://en.wikipedia.org/wiki/GPR125
Adhesion G-protein coupled receptor A3 (ADGRA3), also known as GPR125, is an adhesion GPCR that in humans is encoded by the ADGRA3 gene (previously GPR125). References Further reading G protein-coupled receptors
GPR125
Chemistry
58
52,007,956
https://en.wikipedia.org/wiki/NGC%20286
NGC 286 is a lenticular galaxy in the constellation Cetus. It was discovered on October 2, 1886 by Francis Leavenworth. References External links 0286 18861002 Cetus Lenticular galaxies Discoveries by Francis Leavenworth 003142
NGC 286
Astronomy
51
22,290,125
https://en.wikipedia.org/wiki/List%20of%20Greek%20and%20Roman%20architectural%20records
This list of ancient architectural records consists of record-making architectural achievements of the Greco-Roman world from c. 800 BC to 600 AD. Bridges The highest bridge above water or ground was the single-arched Pont d'Aël which carried irrigation water for Aosta across a deep Alpine gorge. The height of its deck over the torrent below measures 66 m. The largest bridge by span was Trajan's Bridge over the lower Danube. Its twenty-one timber arches spanned 50 m each from centreline to centreline. The largest pointed arch bridge by span was the Karamagara Bridge in Cappadocia with a clear span of 17 m. Constructed in the 5th or 6th century AD across a tributary of the Euphrates, the now submerged structure is one of the earliest known examples of pointed architecture in late antiquity, and may even be the oldest surviving pointed arch bridge. The largest rivers to be spanned by solid bridges were the Danube and the Rhine, the two largest European rivers west of the Eurasian Steppe. The lower Danube was crossed at least at two different crossing points (at Drobeta-Turnu Severin and at Corabia) and the middle and lower Rhine at four (at Mainz, at Neuwied, at Koblenz and at Cologne). For rivers with strong currents and to allow swift army movements, pontoon bridges were also routinely employed. Judging from the distinct lack of records of solid bridges spanning larger rivers elsewhere, the Roman feat appears to be unsurpassed anywhere in the world until well into the 19th century. The longest bridge, and one of the longest of all time, was Constantine's Bridge with an overall length of 2,437 m, 1,137 m of which crossed the Danube's riverbed. Pont Serme in southern France reached a length of 1,500 m, but may be better classified as an arcaded viaduct. The second longest bridge was thus the acclaimed Trajan's Bridge further upstream from Constantine's. Erected in 104–105 AD by the engineer Apollodorus of Damascus to facilitate the advance of Roman troops in the Dacian Wars, it featured twenty-one spans covering a total distance of between 1,070 and 1,100 m. The longest existing Roman bridge is the sixty-two span Puente Romano at Mérida, Spain (today 790 m). The total length of all aqueduct arch bridges of the Aqua Marcia to Rome, constructed from 144 to 140 BC, amounts to 10 km. The longest segmental arch bridge was the c. 1,100 m long Trajan's Bridge, whose wooden superstructure was supported by twenty concrete piers. The Bridge at Limyra in modern-day Turkey, consisting of twenty-six flat brick arches, features the greatest length of all extant masonry structures in this category (360 m). The tallest bridge was the Pont du Gard, which carried water across the Gard river to Nîmes, southern France. The 270 m long aqueduct bridge was constructed in three tiers which measure successively 20.5 m, 19.5 m and 7.4 m, adding up to a total height of 47.4 m above the water-level. When crossing deeper valleys, Roman hydraulic engineers preferred inverted siphons over bridges for reasons of relative economics; this is evident in the Gier aqueduct where seven out of nine siphons exceed the 45 m mark, reaching depths up to 123 m. The tallest road bridges were the monumental Alcántara Bridge, Spain, and the bridge at Narni, Italy, which rose above the stream-level c. 42 m and 30 m, respectively. The widest bridge was the Pergamon Bridge in Pergamon, Turkey.
The structure served as a substruction for a large court in front of the Serapis Temple, allowing the waters of the Selinus river to pass unrestricted underneath. At 193 m in width, the extant bridge is of such dimensions that it is frequently mistaken for a tunnel, although the whole structure was actually erected above ground. A similar design was also executed in the Nysa Bridge which straddled the local stream on a length of 100 m, supporting a forecourt of the city theatre. By comparison, the width of a normal, free-standing Roman bridge did not exceed 10 m. The bridge with the greatest load capacity – as far as can be determined from the limited research – was the Alcántara Bridge, the largest arch of which can support a load of 52 t, followed by the Ponte de Pedra (30 t), Puente Bibei (24 t) and Puente de Ponte do Lima (24 t) (all in Hispania). According to modern calculations, the Limyra Bridge, Asia Minor, can support a 30 t vehicle on one arch plus a load of 500 kp/m2 on the remaining surface of the arch. The load limit of Roman arch bridges was thus far in excess of the live loads imposed by ancient traffic. Ratio of clear span against rise, arch rib and pier thickness: The bridge with the flattest arches was Trajan's Bridge, with a span-to-rise ratio of about 7 to 1. It also held several other important architectural records (see below). A number of fully stone segmental arch bridges, scattered throughout the empire, featured ratios of between 6.4 and 3, such as the relatively unknown Bridge at Limyra, the Ponte San Lorenzo and the Alconétar Bridge. By comparison, the Florentine Ponte Vecchio, one of the earliest segmental arch bridges in the Middle Ages, features a ratio of 5.3 to 1. The bridge with the most slender arch was the Pont-Saint-Martin in the Alpine Aosta Valley. A favourable ratio of arch rib thickness to span is regarded as the single most important parameter in the design of stone arches. The arch rib of the Pont-Saint-Martin is only 1.03 m thick, which translates to a ratio of 1/34 or 1/30, depending on whether one assumes 35.64 m or 31.4 m as the value of its clear span. A statistical analysis of extant Roman bridges shows that ancient bridge builders preferred a ratio for rib thickness to span of 1/10 for smaller bridges, while they reduced this to as low as 1/20 for larger spans in order to relieve the arch of its own weight. The bridge with the most slender piers was the three-span Ponte San Lorenzo in Padua, Italy. A favourable ratio between pier thickness and span is considered a particularly important parameter in bridge building, since wide openings reduce stream velocities which tend to undermine the foundations and cause collapse. The approximately 1.70 m thick piers of the Ponte San Lorenzo are as slender as one-eighth of the span. In some Roman bridges, the ratio still reached one-fifth, but a common pier thickness was around one-third of the span. Having been completed sometime between 47 and 30 BC, the San Lorenzo Bridge also represents one of the earliest segmental arch bridges in the world with a span-to-rise ratio of 3.7 to 1. Canals The largest canal appears to be the Canal of the Pharaohs connecting the Mediterranean Sea and the Red Sea via the Nile. Opened by king Ptolemy II around 280 BC, the waterway branched off the Pelusiac arm of the river, running eastwards through the Wadi Tumalat to the Great Bitter Lake on a length of 55.6 km.
There, it turned sharply south following the modern course of the canal and discharged into the Red Sea after altogether 92.6 km. The canal was 10 m deep and 35 m wide, with its sea entrance secured by a lock. Under Trajan the Ptolemaic canal was restored and extended for about another 60 km to the south, where it now tapped the main branch of the Nile at Babylon. A particularly ambitious canal scheme which never came to fruition was Nero's Corinth Canal project, work on which was abandoned after his death. Columns Note: This section makes no distinction between columns composed of drums and monolithic shafts; for records concerning solely the latter, see monoliths. The tallest victory column in Constantinople was the Column of Theodosius, which no longer exists, with the height of its top above ground being c. 50 m. The Column of Arcadius, whose 10.5 m base alone survives, was c. 46.1 m high. The Column of Constantine may originally have been as high as 40 m above the pavement of the Forum. The height of the Column of Justinian is unclear, but it may have been even larger. Each of these monuments originally stood even higher, as all were further crowned with a colossal imperial statue several times life-size. The tallest victory column in Rome was the Column of Marcus Aurelius, Rome, with the height of its top above ground being c. 39.72 m. It thus exceeds its earlier model, Trajan's Column, by 4.65 m, chiefly due to its higher pedestal. The tallest monolithic column was Pompey's Pillar in Alexandria, which is 26.85 m high with its base and capital and whose monolithic column shaft measures 20.75 m. The statue of Diocletian atop "Pompey's" Pillar was itself approximately 7 m tall. The tallest Corinthian colonnade, a style which was particularly popular in Roman monumental construction, adorned the Temple of Jupiter at Baalbek, reaching a height of 19.82 m including base and capital; its shafts measure 16.64 m high. The next two tallest are those of the Temple of Mars Ultor in Rome and of the Athenian Olympieion, which are 17.74 m (14.76 m) and 16.83 m (14 m) high, respectively. These are followed by a group of three virtually identically high Corinthian orders in Rome: the Hadrianeum, the Temple of Apollo Sosianus and the Temple of Castor and Pollux, all of which are on the order of 14.8 m (12.4 m) in height. Dams The largest arch dam was the Glanum Dam in the French Provence. Since its remains were nearly obliterated by a 19th-century dam on the same spot, its reconstruction relies on prior documentation, according to which the Roman dam was 12 m high, 3.9 m wide and 18 m long at the crest. Being the earliest known arch dam, it remained unique in antiquity and beyond (aside from the Dara Dam whose dimensions are unknown). The largest arch-gravity dam was the Kasserine Dam in Tunisia, arguably the biggest Roman dam in North Africa at 150 m long, 10 m high and 7.3 m wide. However, despite its curved nature, it is uncertain whether the 2nd century AD dam structurally acted by arching action and not solely by its sheer weight; in this case it would be classified as a gravity dam, and considerably smaller structures in Turkey or the Spanish Puy Foradado Dam would move up in this category (see sortable List of Roman dams). The largest bridge dam was the Band-e Kaisar, which was erected by a Roman workforce on Sassanid territory in the 3rd century AD.
The approximately 500 m long structure, a novel combination of overflow dam and arcaded bridge, crossed Iran's most voluminous river on more than forty arches. The easternmost Roman civil engineering structure ever built, its dual-purpose design exerted a profound influence on Iranian dam building. The largest multiple arch buttress dam was the Esparragalejo Dam in Spain, whose 320 m long wall was supported on its air face alternately by buttresses and concave arches. Dated to the 1st century AD, the structure represents the first and, as it appears, only known dam of its type in ancient times. The longest buttress dam was the 632+ m long Consuegra Dam (3rd–4th century AD) in central Spain, which is still fairly well preserved. Instead of an earth embankment, its only 1.3 m thick retaining wall was supported on the downstream side by buttresses at regular intervals of 5 to 10 m. A large number of ancient buttress dams are concentrated in Spain, where they represent nearly one-third of all Roman dams found. The longest gravity dam, and longest dam overall, impounds Lake Homs in Syria. Built in 284 AD by emperor Diocletian for irrigation, the 2,000 m long and 7 m high masonry dam consists of a concrete core protected by basalt ashlar. The lake, 6 miles long by 2.5 miles wide, had a capacity of 90 million m3, making it the biggest Roman reservoir in the Near East and possibly the largest artificial lake constructed up to that time. Enlarged in the 1930s, it is still a landmark of Homs, which it continues to supply with water. Further notable dams in this category include the little-studied 900 m long Wadi Caam II dam at Leptis Magna and the Spanish dams at Alcantarilla and at Consuegra. The tallest dam belonged to the Subiaco Dams at the central Italian town of the same name. Constructed by Nero (54–68 AD) as an adjunct to his villa on the Aniene river, the three reservoirs were highly unusual in their time for serving recreational rather than utilitarian purposes. The biggest dam of the group is estimated to have reached a height of 50 m. It remained unsurpassed in the world until its accidental destruction in 1305 by two monks who removed cover stones from the top, with fatal results for the structure. Also quite tall structures were the Almonacid de la Cuba Dam (34 m), Cornalvo Dam (28 m) and Proserpina Dam (21.6 m), all of which are located in Spain and still of substantially Roman fabric. Domes The largest dome in the world for more than 1,700 years was the Pantheon in Rome. Its concrete dome spans an interior space of 43.45 m, which corresponds exactly to its height from floor to top. Its apex concludes with an 8.95 m wide oculus. The structure remained unsurpassed until 1881 and still holds the title of the largest unreinforced solid concrete dome in the world. The Pantheon has exercised an immense influence on Western dome construction to this day. The largest dome out of clay hollowware ever constructed is the caldarium of the Baths of Caracalla in Rome. The now ruined dome, completed in 216 AD, had an inner diameter of 35.08 m. For reduction of weight its shell was constructed of amphorae joined together, then a quite new method, which could do without time-consuming wooden centring. The largest half-domes were found in the Baths of Trajan in Rome, completed in 109 AD. Several exedrae integrated into the enclosure wall of the compound reached spans up to 30 m. The largest stone dome was that of the Western Thermae in Gerasa, Jordan, constructed around 150–175 AD.
The 15 m wide dome of the bath complex was also one of the earliest of its kind with a square ground plan. Fortifications The longest city walls were those of Classical Athens. Their extraordinary length was due to the construction of the famous Long Walls, which played a key role in the city's maritime strategy by providing it with secure access to the sea and offering the population of Attica a retreat zone in case of foreign invasions. On the eve of the Peloponnesian War (431–404 BC), Thucydides gave the length of the entire circuit as follows: 43 stades (7.6 km) for the city walls without the southwestern section covered by other walls, and 60 stades (10.6 km) for the circumference of the Peiraeus port. A corridor between these two was established by the northern Long Wall (40 stades or 7.1 km) and the Phaleric Wall (35 stades or 6.2 km). Assuming a value of 177.6 m for one Attic stade, the overall length of the walls of Athens thus measured about 31.6 km. The structure, consisting of sun-dried bricks built on a foundation of limestone blocks, was dismantled after Athens' defeat in 404 BC, but rebuilt a decade later. Syracuse, Rome (Aurelian Walls) and Constantinople (Walls of Constantinople) were also protected by very long circuit walls. Monoliths The largest monolith lifted by a single crane can be determined from the characteristic lewis iron holes (each of which points at the use of one crane) in the lifted stone block. By dividing its weight by their number, one arrives at a maximum lifting capacity of 7.5 to 8 t, as exemplified by a cornice block at Trajan's Forum and the architrave blocks of the Temple of Jupiter at Baalbek. Based on a detailed Roman relief of a construction crane, the engineer O'Connor calculates a slightly lower lifting capability of 6.2 t for such a treadwheel crane, on the assumption that it was powered by five men and used a three-pulley block. The largest monolith lifted by cranes was the 108 t corner cornice block of the Jupiter temple at Baalbek, followed by an architrave block weighing 63 t, both of which were raised to a height of about 19 m. The capital block of Trajan's Column, with a weight of 53.3 t, was even lifted to c. 34 m above the ground. As such enormous loads far exceeded the lifting capability of any single treadwheel crane, it is assumed that Roman engineers set up a four-masted lifting tower in the midst of which the stone blocks were vertically raised by means of capstans placed on the ground around it. The largest monoliths hewn were two giant building blocks in the quarry of Baalbek: an unnamed rectangular block which was only recently discovered measures c. 20 m × 4.45 m × 4.5 m, yielding a weight of 1,242 t. The similarly shaped Stone of the Pregnant Woman nearby weighs an estimated 1,000.12 t. Both limestone blocks were intended for the Roman temple district nearby, possibly as an addition to the trilithon, but were left for unknown reasons at their quarrying sites. The largest monoliths moved were those of the trilithon, a group of three monumental blocks in the podium of the Jupiter temple at Baalbek. The individual stones are 19.60 m, 19.30 m and 19.10 m long respectively, with a depth of 3.65 m and a height of 4.34 m. Weighing approximately 800 t on average, they were transported a distance of 800 m from the quarry and probably pulled by means of ropes and capstans into their final position. The supporting stone layer beneath features a number of blocks which are still in the order of 350 t.
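The roughly 800 t average quoted for the trilithon blocks can be cross-checked against their stated dimensions. The short Python sketch below does this; the limestone density of 2.6 t per cubic metre is an assumed typical value, not a figure from the source.

# Rough mass check for the Baalbek trilithon blocks, using the
# dimensions quoted above. The limestone density is an assumed
# typical value (about 2.6 t/m^3), not a figure from the source.
LIMESTONE_DENSITY = 2.6  # t per cubic metre (assumption)

def block_mass_t(length_m, depth_m, height_m, density=LIMESTONE_DENSITY):
    """Return the mass in metric tons of a rectangular stone block."""
    return length_m * depth_m * height_m * density

for length in (19.60, 19.30, 19.10):
    print(f"{length} m block: ~{block_mass_t(length, 3.65, 4.34):.0f} t")
# Each block comes out at roughly 790-810 t, consistent with the
# "approximately 800 t on average" given in the text.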
The various giant stones of Roman Baalbek rank high among the largest man-made monoliths in history. The largest monolithic columns were used by Roman builders, who preferred them over the stacked drums typical of classical Greek architecture. The logistics and technology involved in the transport and erection of extra-large single-piece columns were demanding: as a rule of thumb, the weight of column shafts in the length range between 40 and 60 Roman feet (c. 11.8 to 17.8 m) doubled with every ten feet, from c. 50 t to 100 t to 200 t. Despite this, forty and even fifty feet tall monolithic shafts can be found in a number of Roman buildings, but examples reaching sixty feet are in evidence only in two unfinished granite columns which still lie in the Roman quarry of Mons Claudianus, Egypt. One of the pair, which was discovered only in the 1930s, has an estimated weight of 207 t. All these dimensions, however, are surpassed by Pompey's Pillar, a free-standing victory column erected in Alexandria in 297 AD: measuring 20.46 m high with a diameter of 2.71 m at its base, the weight of its granite shaft has been put at 285 t. The largest monolithic dome crowned the early 6th century AD Mausoleum of Theodoric in Ravenna, then capital of the Ostrogothic kingdom. The weight of the single, 10.76 m wide roof slab has been calculated at 230 t. Obelisks The tallest obelisks are all located in Rome, adorning its inner-city squares. The Agonalis obelisk on Piazza Navona stands highest at 16.54 m without pedestal, followed by the Esquiline, Quirinale (both 14.7 m), Sallustiano (13.92 m) and the somewhat smaller Pinciano obelisk. Only some of them were inscribed with hieroglyphs, while others remained blank. These five obelisks of Roman date complement a group of eight ancient Egyptian obelisks which were carried on imperial order by obelisk carriers from the Nile to the Tiber, making Rome the city with the most ancient obelisks to this day. Roads The longest trackway was the Diolkos near Corinth, Greece, measuring between 6 and 8.5 km. The paved roadway allowed boats to be pulled across the Isthmus of Corinth, thus avoiding the long and dangerous sea trip around the Peloponnese peninsula. Working by the railway principle, with a gauge of around 160 cm between two parallel grooves cut into the limestone paving, it remained in regular and frequent service for at least 650 years. By comparison, the world's first overland wagonway, the Wollaton Wagonway of 1604, ran for c. 3 km. Roofs The largest post and lintel roof by span spanned the Parthenon in Athens. It measured 19.20 m between the cella walls, with an unsupported span of 11.05 m between the interior colonnades. Sicilian temples of the time featured slightly larger cross sections, but these may have been covered by truss roofs instead. The largest truss roof by span covered the Aula Regia (throne room) built for emperor Domitian (81–96 AD) on the Palatine Hill, Rome. The timber truss roof had a width of 31.67 m, slightly surpassing the postulated limit of 30 m for Roman roof constructions. Tie-beam trusses allowed for much larger spans than the older prop-and-lintel system and even concrete vaulting: nine out of the ten largest rectangular spaces in Roman architecture were bridged this way, the only exception being the groin vaulted Basilica of Maxentius. Tunnels The deepest tunnel was the Tunnels of Claudius, constructed in eleven years' time under emperor Claudius (41–54 AD).
Draining the Fucine Lake, the largest Italian inland water, 100 km east of Rome, it is widely deemed the most ambitious Roman tunnel project, as it stretched ancient technology to its limits. The 5,653 m long qanat tunnel, passing under Monte Salviano, features vertical shafts of up to 122 m in depth; even longer ones were run obliquely through the rock. After repairs under Trajan and Hadrian, the Claudius tunnel remained in use until the end of antiquity. Various attempts at restoration succeeded only in the late 19th century. The longest road tunnel was the Cocceius Tunnel near Naples, Italy, which connected Cumae with the base of the Roman fleet, Portus Julius. The 1,000 m long tunnel was part of an extensive underground network which facilitated troop movements between the various Roman facilities in the volcanic area. Built by the architect Cocceius Auctus, it featured paved access roads and well-built mouths. Other road tunnels include the Crypta Neapolitana to Pozzuoli (750 m long, 3–4 m wide and 3–5 m high), and the similarly sized Grotta di Seiano. The longest qanat was the 94 km long Gadara Aqueduct in northern Jordan. For hundreds of years this recently discovered structure provided water for Adraa, Abila and Gadara, three cities of the ancient Decapolis. Only 35 km long as the crow flies, its length was almost tripled by following closely the contours of the local topography, avoiding valleys and mountain ridges alike. The monumental work appears to have been carried out in seven stages of construction between 130 and 193 AD. The distance between the individual vertical shafts was on average 50 m. The project was probably initiated by Hadrian, who had granted privileges to the cities during a longer stay in the Decapolis. The aqueduct remained operational until the Byzantines lost control of the region after the Battle of Yarmuk in 636. The longest tunnel excavated from opposite ends was built around the end of the 6th century BC for draining and regulating Lake Nemi, Italy. Measuring 1,600 m, it was almost 600 m longer than the slightly older Tunnel of Eupalinos on the isle of Samos, the first tunnel in history to be excavated from two ends with a methodical approach. The Albano Tunnel, also in central Italy, reaches a length of 1,400 m. It was excavated no later than 397 BC and is still in service. Determining the tunnelling direction underground and coordinating the advance of the separate work parties required meticulous surveying and execution on the part of the ancient engineers. Vaulting The largest barrel vault by span covered the Temple of Venus and Roma, Rome. Built between 307 and 312 AD, the vaulted structure replaced the original timber truss roof from Hadrian's time. The largest groin vault by span roofed the 25.01 m wide main nave of the Basilica of Maxentius on the Forum Romanum, built in the early 4th century AD. Miscellaneous The greatest concentration of mechanical power was the Barbegal water mill complex in southern France, constructed in the early 2nd century AD. Sixteen overshot water wheels, fed by an arcaded aqueduct branch from the main conduit to Arles, produced an estimated 4.5 t of flour per 24 hours – an output sufficient to feed 12,500 people, the majority of the population of Arles. Water mill batteries are also known from Amida in Asia Minor, the Janiculum hill in Rome, and a number of other places throughout the empire. The longest spiral stair belonged to the 2nd century AD Trajan's Column in Rome.
Measuring 29.68 m in height, it surpassed its successor, the Column of Marcus Aurelius, by a mere 6 cm. Its treads were carved out of nineteen massive marble blocks, so that each drum comprised a half-turn of seven steps. The quality of the craftsmanship was such that the staircase was practically even, and the joints between the huge blocks fitted accurately. The design of Trajan's column had a profound influence on Roman construction technique, and the spiral stair became over time an established architectural element. The longest straight alignment was an 81.259 km long section of the Roman limes in Germany. The fortified line ran through hilly and densely wooded country in completely linear fashion, deviating in its entire length only once, for a distance of 1.6 km, to avoid a steep valley. The extraordinary accuracy of the alignment has been attributed to the groma, a surveying instrument which was used by the Romans to great effect in land division and road construction. See also Ancient Greek architecture Greek technology Ancient Roman architecture Roman technology Roman engineering References Sources External links Traianus – Technical investigation of Roman public works 600 Roman Aqueducts – with 40 described in detail List of ancient architectural records Architectural records Ancient architectural records Architecture records
List of Greek and Roman architectural records
Engineering
5,666
24,138
https://en.wikipedia.org/wiki/Proton%20decay
In particle physics, proton decay is a hypothetical form of particle decay in which the proton decays into lighter subatomic particles, such as a neutral pion and a positron. The proton decay hypothesis was first formulated by Andrei Sakharov in 1967. Despite significant experimental effort, proton decay has never been observed. If it does decay via a positron, the proton's half-life is constrained to be at least 1.67×10³⁴ years. According to the Standard Model, the proton, a type of baryon, is stable because baryon number (quark number) is conserved (under normal circumstances; see Chiral anomaly for an exception). Therefore, protons will not decay into other particles on their own, because they are the lightest (and therefore least energetic) baryon. Positron emission and electron capture—forms of radioactive decay in which a proton becomes a neutron—are not proton decay, since the proton interacts with other particles within the atom. Some beyond-the-Standard-Model grand unified theories (GUTs) explicitly break the baryon number symmetry, allowing protons to decay via the Higgs particle, magnetic monopoles, or new X bosons with a half-life of 10³¹ to 10³⁶ years. For comparison, the universe is roughly 10¹⁰ years old. To date, all attempts to observe new phenomena predicted by GUTs (like proton decay or the existence of magnetic monopoles) have failed. Quantum tunnelling may be one of the mechanisms of proton decay. Quantum gravity (via virtual black holes and Hawking radiation) may also provide a venue of proton decay at magnitudes or lifetimes well beyond the GUT scale decay range above, as well as extra dimensions in supersymmetry. There are theoretical methods of baryon violation other than proton decay, including interactions with changes of baryon and/or lepton number other than 1 (as required in proton decay). These include B and/or L violations of 2, 3, or other numbers, or B − L violation. Such examples include neutron oscillations and the electroweak sphaleron anomaly at high energies and temperatures, which can convert protons into antileptons or vice versa (a key factor in leptogenesis and non-GUT baryogenesis). Baryogenesis One of the outstanding problems in modern physics is the predominance of matter over antimatter in the universe. The universe, as a whole, seems to have a nonzero positive baryon number density – that is, there is more matter than antimatter. Since it is assumed in cosmology that the particles we see were created using the same physics we measure today, it would normally be expected that the overall baryon number should be zero, as matter and antimatter should have been created in equal amounts. This has led to a number of proposed mechanisms for symmetry breaking that favour the creation of normal matter (as opposed to antimatter) under certain conditions. This imbalance would have been exceptionally small, on the order of 1 in every 10¹⁰ particles a small fraction of a second after the Big Bang, but after most of the matter and antimatter annihilated, what was left over was all the baryonic matter in the current universe, along with a much greater number of photons. Most grand unified theories explicitly break the baryon number symmetry, which would account for this discrepancy, typically invoking reactions mediated by very massive X bosons or massive Higgs bosons.
The rate at which these events occur is governed largely by the mass of the intermediate X or Higgs particles, so by assuming these reactions are responsible for the majority of the baryon number seen today, a maximum mass can be calculated above which the rate would be too slow to explain the presence of matter today. These estimates predict that a large volume of material will occasionally exhibit a spontaneous proton decay. Experimental evidence Proton decay is one of the key predictions of the various grand unified theories (GUTs) proposed in the 1970s, another major one being the existence of magnetic monopoles. Both concepts have been the focus of major experimental physics efforts since the early 1980s. To date, all attempts to observe these events have failed; however, these experiments have been able to establish lower bounds on the half-life of the proton. Currently, the most precise results come from the Super-Kamiokande water Cherenkov radiation detector in Japan: a lower bound on the proton's half-life of 1.67×10³⁴ years via positron decay and, similarly, 7.8×10³³ years via antimuon decay, close to a supersymmetry (SUSY) prediction of 10³⁴–10³⁶ years. An upgraded version, Hyper-Kamiokande, will probably have a sensitivity 5–10 times better than Super-Kamiokande. Theoretical motivation Despite the lack of observational evidence for proton decay, some grand unification theories, such as the SU(5) Georgi–Glashow model and SO(10), along with their supersymmetric variants, require it. According to such theories, the proton has a half-life of about 10³¹–10³⁶ years and decays into a positron and a neutral pion that itself immediately decays into two gamma-ray photons: p⁺ → e⁺ + π⁰, followed by π⁰ → 2γ. Since a positron is an antilepton, this decay preserves B − L, which is conserved in most GUTs. Additional decay modes are available (e.g. p⁺ → μ⁺ + π⁰), both directly and when catalyzed via interaction with GUT-predicted magnetic monopoles. Though this process has not been observed experimentally, it is within the realm of experimental testability for future planned very large-scale detectors on the megaton scale. Such detectors include the Hyper-Kamiokande. Early grand unification theories (GUTs) such as the Georgi–Glashow model, which were the first consistent theories to suggest proton decay, postulated that the proton's half-life would be at least 10³¹ years. As further experiments and calculations were performed in the 1990s, it became clear that the proton half-life could not lie below 10³² years. Many books from that period refer to this figure for the possible decay time for baryonic matter. More recent findings have pushed the minimum proton half-life to at least 10³⁴–10³⁵ years, ruling out the simpler GUTs (including minimal SU(5) / Georgi–Glashow) and most non-SUSY models. The maximum upper limit on proton lifetime (if unstable) is calculated at 6×10³⁹ years, a bound applicable to SUSY models, with a maximum for (minimal) non-SUSY GUTs at 1.4×10³⁶ years. Although the phenomenon is referred to as "proton decay", the effect would also be seen in neutrons bound inside atomic nuclei. Free neutrons—those not inside an atomic nucleus—are already known to decay into protons (and an electron and an antineutrino) in a process called beta decay. Free neutrons have a half-life of about 10 minutes (roughly 610 s) due to the weak interaction. Neutrons bound inside a nucleus have an immensely longer half-life – apparently as great as that of the proton. Projected proton lifetimes The lifetime of the proton in vanilla SU(5) can be naively estimated as τ_p ∼ M_X⁴ / (α_GUT² m_p⁵), where M_X is the mass of the superheavy gauge boson, α_GUT the unified coupling, and m_p the proton mass.
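Plugging illustrative round numbers into this naive estimate shows the scaling; the parameter choices below (α_GUT ≈ 1/40, M_X ≈ 10¹⁵ GeV, m_p ≈ 1 GeV) are assumptions for the example rather than values from the text, and 1 GeV⁻¹ ≈ 6.58×10⁻²⁵ s is used to convert from natural units:

\tau_p \sim \frac{M_X^4}{\alpha_{\mathrm{GUT}}^2\, m_p^5}
       \approx \frac{(10^{15}\,\mathrm{GeV})^4}{(1/40)^2\,(1\,\mathrm{GeV})^5}
       \approx 1.6\times 10^{63}\ \mathrm{GeV}^{-1}
       \approx 1\times 10^{39}\ \mathrm{s}
       \approx 3\times 10^{31}\ \mathrm{yr}

Because the estimate scales as the fourth power of M_X, raising the unification scale by a factor of ten lengthens the predicted lifetime by four orders of magnitude, which is why such predictions span so many decades in years.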
Supersymmetric GUTs with reunification scales around 2×10¹⁶ GeV yield a lifetime of around 10³⁴ years, roughly the current experimental lower bound. Decay operators Dimension-6 proton decay operators The dimension-6 proton decay operators are four-fermion operators of the schematic form qqqℓ/Λ², where Λ is the cutoff scale for the Standard Model and q and ℓ stand for quark and lepton fields. All of these operators violate both baryon number (B) and lepton number (L) conservation but not the combination B − L. In GUT models, the exchange of an X or Y boson with mass M_X can lead to such operators suppressed by 1/M_X². The exchange of a triplet Higgs with mass M_T can likewise lead to operators suppressed by 1/M_T². See Doublet–triplet splitting problem. Dimension-5 proton decay operators In supersymmetric extensions (such as the MSSM), we can also have dimension-5 operators involving two fermions and two sfermions caused by the exchange of a tripletino of mass M_T. The sfermions will then exchange a gaugino or Higgsino or gravitino, leaving two fermions. The overall Feynman diagram has a loop (and other complications due to strong interaction physics). This decay rate is suppressed by 1/(M_T M_SUSY), where M_SUSY is the mass scale of the superpartners. Dimension-4 proton decay operators In the absence of matter parity, supersymmetric extensions of the Standard Model can give rise to proton decay suppressed only by the inverse square of the sdown quark mass. This is due to the dimension-4 operators QLDᶜ and UᶜDᶜDᶜ. With such suppression alone, the proton decay rate is far too fast unless the couplings are very small. See also Age of the universe B − L Virtual black hole Weak hypercharge X and Y bosons Iron star References Further reading External links Proton decay at Super-Kamiokande Pictorial history of the IMB experiment Proton Nuclear physics Physics beyond the Standard Model Grand Unified Theory Supersymmetric quantum field theory Hypothetical processes Ultimate fate of the universe 1967 in science ja:陽子#陽子の崩壊
Proton decay
Physics
1,866
55,532,583
https://en.wikipedia.org/wiki/Codewars
Codewars is an educational community for computer programming. On the platform, software developers train on programming challenges known as kata. These discrete programming exercises train a range of skills in a variety of programming languages, and are completed within an online integrated development environment. On Codewars, community and challenge progression are gamified, with users earning ranks and honor for completing kata, contributing kata, and writing quality solutions. The platform is owned and operated by Qualified, a technology company that provides a platform for assessing and training software engineering skills. History Founded by Nathan Doctor and Jake Hoffner in November 2012, the project initially began at a Startup Weekend competition that year, where it was prototyped. It was awarded first place in that competition, drawing the attention of engineers and funding interest from two of the judges, Paige Craig (angel investor) and Brian Lee (entrepreneur). After the first production iteration of the platform was built, it was launched to the Hacker News community, receiving significant attention for its challenge format and signing up approximately 10,000 users within that weekend. See also CodeFights CodinGame Competitive programming HackerRank External links AngelList profile (archived on 28 October 2020) Programming contests Computer programming American educational websites References
Codewars
Technology,Engineering
245
3,238,520
https://en.wikipedia.org/wiki/Microfabrication
Microfabrication is the process of fabricating miniature structures of micrometre scales and smaller. Historically, the earliest microfabrication processes were used for integrated circuit fabrication, also known as "semiconductor manufacturing" or "semiconductor device fabrication". In the last two decades, microelectromechanical systems (MEMS), microsystems (European usage), micromachines (Japanese terminology) and their subfields have re-used, adapted or extended microfabrication methods. These subfields include microfluidics/lab-on-a-chip, optical MEMS (also called MOEMS), RF MEMS, PowerMEMS, BioMEMS and their extension into the nanoscale (for example NEMS, for nano electro mechanical systems). The production of flat-panel displays and solar cells also uses similar techniques. Miniaturization of various devices presents challenges in many areas of science and engineering: physics, chemistry, materials science, computer science, ultra-precision engineering, fabrication processes, and equipment design. It is also giving rise to various kinds of interdisciplinary research. The major concepts and principles of microfabrication are microlithography, doping, thin films, etching, bonding, and polishing. Fields of use Microfabricated devices include: integrated circuits (“microchips”) (see semiconductor manufacturing) microelectromechanical systems (MEMS) and microoptoelectromechanical systems (MOEMS) microfluidic devices (ink jet print heads) solar cells flat panel displays (see AMLCD and thin-film transistors) sensors (microsensors) (biosensors, nanosensors) power MEMS, fuel cells, energy harvesters/scavengers Origins Microfabrication technologies originate from the microelectronics industry, and the devices are usually made on silicon wafers, even though glass, plastics and many other substrates are in use. Micromachining, semiconductor processing, microelectronic fabrication, semiconductor fabrication, MEMS fabrication and integrated circuit technology are terms used instead of microfabrication, but microfabrication is the broad general term. Traditional machining techniques such as electro-discharge machining, spark erosion machining, and laser drilling have been scaled from the millimeter size range to the micrometer range, but they do not share the main idea of microelectronics-originated microfabrication: replication and parallel fabrication of hundreds or millions of identical structures. This parallelism is present in various imprint, casting and moulding techniques which have successfully been applied in the microregime. For example, injection moulding of DVDs involves fabrication of submicrometer-sized spots on the disc. Processes Microfabrication is actually a collection of technologies which are utilized in making microdevices. Some of them have very old origins, not connected to manufacturing, like lithography or etching. Polishing was borrowed from optics manufacturing, and many of the vacuum techniques come from 19th century physics research. Electroplating is also a 19th-century technique adapted to produce micrometre scale structures, as are various stamping and embossing techniques. To fabricate a microdevice, many processes must be performed, one after the other, many of them repeatedly. These processes typically include depositing a film, patterning the film with the desired micro features, and removing (or etching) portions of the film.
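As an illustration of this repeated deposit–pattern–etch cycle, the short Python sketch below models a process flow as a list of steps and tallies the mask count; the step and layer names are hypothetical examples, not a real fabrication recipe.

# Illustrative model of a microfabrication process flow: each
# patterning step consumes one photomask, so the mask count is the
# number of distinct pattern layers. Step names are hypothetical.
process_flow = [
    ("deposit", "thermal oxide"),
    ("pattern", "active-area mask"),
    ("etch", "oxide etch"),
    ("deposit", "polysilicon"),
    ("pattern", "gate mask"),
    ("etch", "polysilicon etch"),
    ("deposit", "metal"),
    ("pattern", "metal mask"),
    ("etch", "metal etch"),
]

mask_count = sum(1 for kind, _ in process_flow if kind == "pattern")
print(f"{len(process_flow)} steps, {mask_count} mask layers")
# -> 9 steps, 3 mask layers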
Thin film metrology is typically used during each of these individual process steps, to ensure the film structure has the desired characteristics in terms of thickness (t), refractive index (n) and extinction coefficient (k), for suitable device behavior. For example, in memory chip fabrication there are some 30 lithography steps, 10 oxidation steps, 20 etching steps, 10 doping steps, and many other steps. The complexity of microfabrication processes can be described by their mask count. This is the number of different pattern layers that constitute the final device. Modern microprocessors are made with 30 masks, while a few masks suffice for a microfluidic device or a laser diode. Microfabrication resembles multiple exposure photography, with many patterns aligned to each other to create the final structure. Substrates Microfabricated devices are not generally freestanding devices but are usually formed over or in a thicker support substrate. For electronic applications, semiconducting substrates such as silicon wafers can be used. For optical devices or flat panel displays, transparent substrates such as glass or quartz are common. The substrate enables easy handling of the micro device through the many fabrication steps. Often many individual devices are made together on one substrate and then singulated into separate devices toward the end of fabrication. Deposition or growth Microfabricated devices are typically constructed using one or more thin films (see Thin film deposition). The purpose of these thin films depends upon the type of device. Electronic devices may have thin films which are conductors (metals), insulators (dielectrics) or semiconductors. Optical devices may have films which are reflective, transparent, light guiding or scattering. Films may also serve a chemical or mechanical purpose, as in MEMS applications. Examples of deposition techniques include: Thermal oxidation Local oxidation of silicon Chemical vapor deposition (CVD) APCVD LPCVD PECVD Physical vapor deposition (PVD) Sputtering Evaporative deposition Electron beam PVD Epitaxy Patterning It is often desirable to pattern a film into distinct features or to form openings (or vias) in some of the layers. These features are on the micrometer or nanometer scale, and the patterning technology is what defines microfabrication. This patterning technique typically uses a 'mask' to define portions of the film which will be removed. Examples of patterning techniques include: Photolithography Shadow masking Etching Etching is the removal of some portion of the thin film or substrate. The substrate is exposed to an etchant (such as an acid or plasma) which chemically or physically attacks the film until it is removed. Etching techniques include: Dry etching (plasma etching) such as reactive-ion etching (RIE) or deep reactive-ion etching (DRIE) Wet etching or chemical etching Microforming Microforming is a microfabrication process of microsystem or microelectromechanical system (MEMS) "parts or structures with at least two dimensions in the submillimeter range." It includes techniques such as microextrusion, microstamping, and microcutting. These and other microforming processes have been envisioned and researched since at least 1990, leading to the development of industrial- and experimental-grade manufacturing tools.
However, as Fu and Chan pointed out in a 2013 state-of-the-art technology review, several issues must still be resolved before the technology can be implemented more widely, including deformation load and defects, forming system stability, mechanical properties, and other size-related effects on the crystallite (grain) structure and boundaries: In microforming, the ratio of the total surface area of grain boundaries to the material volume decreases with the decrease of specimen size and the increase of grain size. This leads to the decrease of grain boundary strengthening effect. Surface grains have lesser constraints compared to internal grains. The change of flow stress with part geometry size is partly attributed to the change of volume fraction of surface grains. In addition, the anisotropic properties of each grain become significant with the decrease of workpiece size, which results in the inhomogeneous deformation, irregular formed geometry and the variation of deformation load. There is a critical need to establish the systematic knowledge of microforming to support the design of part, process, and tooling with the consideration of size effects. Other A wide variety of other processes for cleaning, planarizing, or modifying the chemical properties of microfabricated devices can also be performed. Some examples include: Doping by either thermal diffusion or ion implantation Chemical-mechanical planarization (CMP) Wafer cleaning, also known as "surface preparation" (see below) Wire bonding Cleanliness in wafer fabrication Microfabrication is carried out in cleanrooms, where the air has been filtered of particle contamination and temperature, humidity, vibrations and electrical disturbances are under stringent control. Smoke, dust, bacteria and cells are micrometers in size, and their presence will destroy the functionality of a microfabricated device. Cleanrooms provide passive cleanliness, but the wafers are also actively cleaned before every critical step. RCA-1 clean in ammonia-peroxide solution removes organic contamination and particles; RCA-2 cleaning in hydrochloric acid-peroxide mixture removes metallic impurities. Sulfuric acid-peroxide mixture (a.k.a. Piranha) removes organics. Hydrogen fluoride removes native oxide from the silicon surface. These are all wet cleaning steps in solutions. Dry cleaning methods include oxygen and argon plasma treatments to remove unwanted surface layers, or hydrogen bake at elevated temperature to remove native oxide before epitaxy. Pre-gate cleaning is the most critical cleaning step in CMOS fabrication: it ensures that the ca. 2 nm thick oxide of a MOS transistor can be grown in an orderly fashion. Oxidation and all high-temperature steps are very sensitive to contamination, and cleaning steps must precede high-temperature steps. Surface preparation is just a different viewpoint on the same steps: it is about leaving the wafer surface in a controlled and well-known state before processing begins. Wafers are contaminated by previous process steps (e.g. metals bombarded from chamber walls by energetic ions during ion implantation), or they may have gathered polymers from wafer boxes, and this contamination can vary with waiting time. Wafer cleaning and surface preparation work similarly to the machines in a bowling alley: first they remove all unwanted bits and pieces, and then they reconstruct the desired pattern so that the game can go on.
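The thickness, n and k values mentioned under thin film metrology above feed directly into simple optical checks. As a minimal sketch, the normal-incidence reflectance of a bare surface follows from the Fresnel formula; the silicon n and k values below are approximate textbook numbers assumed for illustration, not figures from this article.

# Normal-incidence reflectance at an air/material interface from the
# complex refractive index N = n - ik, using the Fresnel relation
# R = |(N_air - N_mat) / (N_air + N_mat)|^2. The n, k values are
# rough textbook numbers for silicon at 633 nm (an assumption).
def reflectance(n, k):
    n_air = complex(1.0, 0.0)
    n_mat = complex(n, -k)
    r = (n_air - n_mat) / (n_air + n_mat)
    return abs(r) ** 2

print(f"Bare Si at 633 nm: R ~ {reflectance(3.88, 0.02):.2f}")  # ~0.35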
See also 3D microfabrication Nanofabrication Semiconductor fabrication References Further reading Journals Journal of Microelectromechanical Systems (J.MEMS) Sensors and Actuators A: Physical Sensors and Actuators B: Chemical Journal of Micromechanics and Microengineering Lab on a Chip IEEE Transactions on Electron Devices Journal of Vacuum Science and Technology A: Vacuum, Surfaces, Films Journal of Vacuum Science and Technology B: Microelectronics and Nanometer Structures: Processing, Measurement, and Phenomena Books External links Videos and animations on microfabrication techniques and related applications. MicroManufacturing Conference. Semiconductor device fabrication Nanotechnology Microtechnology
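To make the mask-count idea from the article above concrete, here is a toy sketch (the step names and the abridged flow are purely illustrative assumptions, not a real process recipe): the mask count is simply the number of steps that expose a new pattern layer.

from dataclasses import dataclass

@dataclass
class ProcessStep:
    name: str        # e.g. "lithography", "etch", "implant"
    uses_mask: bool  # True if the step exposes a new pattern layer

# A hypothetical, heavily abridged flow; real memory or logic flows run to hundreds of steps.
flow = [
    ProcessStep("thermal oxidation", False),
    ProcessStep("lithography: active area", True),
    ProcessStep("etch: field oxide", False),
    ProcessStep("lithography: gate", True),
    ProcessStep("doping: source/drain implant", False),
    ProcessStep("lithography: contacts", True),
    ProcessStep("metallization", False),
]

mask_count = sum(step.uses_mask for step in flow)
print(f"{len(flow)} steps, mask count = {mask_count}")  # 7 steps, mask count = 3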
Microfabrication
Materials_science,Engineering
2,216
63,547
https://en.wikipedia.org/wiki/Pancreatitis
Pancreatitis is a condition characterized by inflammation of the pancreas. The pancreas is a large organ behind the stomach that produces digestive enzymes and a number of hormones. There are two main types: acute pancreatitis and chronic pancreatitis. Signs and symptoms of pancreatitis include pain in the upper abdomen, nausea and vomiting. The pain often goes into the back and is usually severe. In acute pancreatitis, a fever may occur; symptoms typically resolve in a few days. In chronic pancreatitis, weight loss, fatty stool, and diarrhea may occur. Complications may include infection, bleeding, diabetes mellitus, or problems with other organs. The two most common causes of acute pancreatitis are a gallstone blocking the common bile duct after the pancreatic duct has joined, and heavy alcohol use. Other causes include direct trauma, certain medications, infections such as mumps, and tumors. Chronic pancreatitis may develop as a result of acute pancreatitis. It is most commonly due to many years of heavy alcohol use. Other causes include high levels of blood fats, high blood calcium, some medications, and certain genetic disorders, such as cystic fibrosis, among others. Smoking increases the risk of both acute and chronic pancreatitis. Diagnosis of acute pancreatitis is based on a threefold increase of either amylase or lipase in the blood. In chronic pancreatitis, these tests may be normal. Medical imaging such as ultrasound and CT scan may also be useful. Acute pancreatitis is usually treated with intravenous fluids, pain medication, and sometimes antibiotics. Typically eating and drinking are disallowed, and a nasogastric tube is placed in the stomach. A procedure known as an endoscopic retrograde cholangiopancreatography (ERCP) may be done to examine the distal common bile duct and remove a gallstone if present. In those with gallstones the gallbladder is often also removed. In chronic pancreatitis, in addition to the above, temporary feeding through a nasogastric tube may be used to provide adequate nutrition. Long-term dietary changes and pancreatic enzyme replacement may be required. Occasionally, surgery is done to remove parts of the pancreas. Globally, in 2015 about 8.9 million cases of pancreatitis occurred. This resulted in 132,700 deaths, up from 83,000 deaths in 1990. Acute pancreatitis occurs in about 30 per 100,000 people a year. New cases of chronic pancreatitis develop in about 8 per 100,000 people a year and currently affect about 50 per 100,000 people in the United States. It is more common in men than women. Often chronic pancreatitis starts between the ages of 30 and 40 and is rare in children. Acute pancreatitis was first described on autopsy in 1882 while chronic pancreatitis was first described in 1946. Signs and symptoms The most common symptoms of pancreatitis are severe upper abdominal or left upper quadrant burning pain radiating to the back, nausea, and vomiting that is worse with eating. The physical examination will vary depending on severity and presence of internal bleeding. Blood pressure may be elevated by pain or decreased by dehydration or bleeding. Heart and respiratory rates are often elevated. The abdomen is usually tender but to a lesser degree than the pain itself. As is common in abdominal disease, bowel sounds may be reduced from reflex bowel paralysis. Fever or jaundice may be present. Chronic pancreatitis can lead to diabetes or pancreatic cancer. Unexplained weight loss may occur from a lack of pancreatic enzymes hindering digestion. 
Complications Early complications include shock, infection, systemic inflammatory response syndrome, low blood calcium, high blood glucose, and dehydration. Blood loss, dehydration, and fluid leaking into the abdominal cavity (ascites) can lead to kidney failure. Respiratory complications are often severe. Pleural effusion is usually present. Shallow breathing from pain can lead to lung collapse. Pancreatic enzymes may attack the lungs, causing inflammation. Severe inflammation can lead to intra-abdominal hypertension and abdominal compartment syndrome, further impairing renal and respiratory function and potentially requiring management with an open abdomen to relieve the pressure. Late complications include recurrent pancreatitis and the development of pancreatic pseudocysts—collections of pancreatic secretions that have been walled off by scar tissue. These may cause pain, become infected, rupture and bleed, block the bile duct and cause jaundice, or migrate around the abdomen. Acute necrotizing pancreatitis can lead to a pancreatic abscess, a collection of pus caused by necrosis, liquefaction, and infection. This happens in approximately 3% of cases, or almost 60% of cases involving more than two pseudocysts and gas in the pancreas. Causes About 80 percent of pancreatitis cases are caused by gallstones or alcohol. Choledocholithiasis (gallstones in the bile duct) is the single most common cause of acute pancreatitis, and alcoholism is the single most common cause of chronic pancreatitis. A serum triglyceride level greater than 1000 mg/dL (11.29 mmol/L, i.e. hyperlipidemia) is another cause. The mnemonic "GET SMASHED" is often used to help clinicians and medical students remember the common causes of pancreatitis: Gallstones, Ethanol, Trauma, Steroids, Mumps, Autoimmune, Scorpion sting, Hyperlipidemia, hypothermia or hyperparathyroidism, ERCP, Drugs (commonly azathioprine, valproic acid, liraglutide). Medications There are seven classes of medications associated with acute pancreatitis: statins, ACE inhibitors, oral contraceptives/hormone replacement therapy (HRT), diuretics, antiretroviral therapy, valproic acid, and oral hypoglycemic agents. The mechanisms by which these drugs cause pancreatitis are not known exactly, but it is possible that statins have a direct toxic effect on the pancreas or act through the long-term accumulation of toxic metabolites. Meanwhile, ACE inhibitors cause angioedema of the pancreas through the accumulation of bradykinin. Birth control pills and HRT cause arterial thrombosis of the pancreas through the accumulation of fat (hypertriglyceridemia). Diuretics such as furosemide have a direct toxic effect on the pancreas. Meanwhile, thiazide diuretics cause hypertriglyceridemia and hypercalcemia, where the latter is a risk factor for pancreatic stones. HIV infection itself can cause a person to be more likely to get pancreatitis. Meanwhile, antiretroviral drugs may cause metabolic disturbances such as hyperglycemia and hypercholesterolemia, which predispose to pancreatitis. Valproic acid may have a direct toxic effect on the pancreas. Various oral hypoglycemic agents are associated with pancreatitis including metformin, but glucagon-like peptide-1 mimetics such as exenatide are more strongly associated with pancreatitis by promoting inflammation in combination with a high-fat diet. Atypical antipsychotics such as clozapine, risperidone, and olanzapine can also cause pancreatitis. 
Infection A number of infectious agents have been recognized as causes of pancreatitis including: Viruses Coxsackie virus Cytomegalovirus Hepatitis B Herpes simplex virus Mumps Varicella-zoster virus Bacteria Legionella Leptospira Mycoplasma Salmonella Fungi Aspergillus Parasites Ascaris Cryptosporidium Toxoplasma Other Other common causes include trauma, autoimmune disease, high blood calcium, hypothermia, and endoscopic retrograde cholangiopancreatography (ERCP). Pancreas divisum is a common congenital malformation of the pancreas that may underlie some recurrent cases. Diabetes mellitus type 2 is associated with a 2.8-fold higher risk. Less common causes include pancreatic cancer, pancreatic duct stones, vasculitis (inflammation of the small blood vessels in the pancreas), and porphyria—particularly acute intermittent porphyria and erythropoietic protoporphyria. There is an inherited form that results in the activation of trypsinogen within the pancreas, leading to autodigestion. Involved genes may include trypsin 1, which codes for trypsinogen, SPINK1, which codes for a trypsin inhibitor, or cystic fibrosis transmembrane conductance regulator. Diagnosis The differential diagnosis for pancreatitis includes but is not limited to cholecystitis, choledocholithiasis, perforated peptic ulcer, bowel infarction, small bowel obstruction, hepatitis, and mesenteric ischemia. Diagnosis requires two of the following three criteria: Characteristic acute onset of epigastric or vague abdominal pain that may radiate to the back (see signs and symptoms above) Serum amylase or lipase levels ≥ 3 times the upper limit of normal An imaging study with characteristic changes. CT, MRI, abdominal ultrasound or endoscopic ultrasound can be used for diagnosis. Amylase and lipase are two enzymes produced by the pancreas. Elevations in lipase are generally considered a better indicator for pancreatitis, as lipase has greater specificity and a longer half-life. However, both enzymes can be elevated in other disease states. In chronic pancreatitis, the fecal pancreatic elastase-1 (FPE-1) test is a marker of exocrine pancreatic function. Additional tests that may be useful in evaluating chronic pancreatitis include hemoglobin A1C, immunoglobulin G4, rheumatoid factor, and anti-nuclear antibody. For imaging, abdominal ultrasound is convenient, simple, non-invasive, and inexpensive. It is more sensitive and specific for pancreatitis from gallstones than other imaging modalities. However, in 25–35% of patients the view of the pancreas can be obstructed by bowel gas, making it difficult to evaluate. A contrast-enhanced CT scan is usually performed more than 48 hours after the onset of pain to evaluate for pancreatic necrosis and extrapancreatic fluid as well as predict the severity of the disease. CT scanning earlier can be falsely reassuring. ERCP or an endoscopic ultrasound can also be used if a biliary cause for pancreatitis is suspected. Treatment The treatment of pancreatitis is supportive and depends on severity. Morphine is generally suitable for pain control. There are no clinical studies to suggest that morphine can aggravate or cause pancreatitis or cholecystitis. The treatment for acute pancreatitis will depend on whether the diagnosis is for the mild form of the condition, which causes no complications, or the severe form, which can cause serious complications. Mild acute pancreatitis Mild acute pancreatitis is usually treated successfully by admission to a general hospital ward. 
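The two-of-three diagnostic rule given above lends itself to a compact encoding. The following is a hedged illustrative sketch only (the thresholds are those stated in the text; the function and parameter names are assumptions, and this is not clinical software):

def meets_acute_pancreatitis_criteria(characteristic_pain: bool,
                                      enzyme_level: float,
                                      enzyme_upper_limit: float,
                                      characteristic_imaging: bool) -> bool:
    # Criterion 2: serum amylase or lipase at least 3 times the upper limit of normal
    criteria = [
        characteristic_pain,
        enzyme_level >= 3 * enzyme_upper_limit,
        characteristic_imaging,
    ]
    return sum(criteria) >= 2  # diagnosis requires two of the three criteria

# Example: typical epigastric pain and a lipase of 900 U/L against an upper limit of 160 U/L
print(meets_acute_pancreatitis_criteria(True, 900.0, 160.0, False))  # True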
Traditionally, people were not allowed to eat until the inflammation resolved, but more recent evidence suggests early feeding is safe and improves outcomes, and may result in an ability to leave the hospital sooner. Due to inflammation occurring in pancreatitis, proinflammatory cytokines secreted into the bloodstream can cause inflammation throughout the body, including the lungs, and this can manifest as ARDS. Because pancreatitis can cause lung injury and affect normal lung function, supplemental oxygen is occasionally delivered through breathing tubes that are connected via the nose (e.g., nasal cannulae) or via a mask. The tubes can then be removed after a few days once it is clear that the condition is improving. Dehydration may result during an episode of acute pancreatitis, so fluids will be provided intravenously. Opioids may be used for the pain. When the pancreatitis is due to gallstones, early gallbladder removal also appears to improve outcomes. Severe acute pancreatitis Severe pancreatitis can cause organ failure, necrosis, infected necrosis, pseudocyst, and abscess. If diagnosed with severe acute pancreatitis, people will need to be admitted to a high-dependency unit or intensive care unit. It is likely that the levels of fluids inside the body will have dropped significantly as the body diverts fluids and nutrients in an attempt to repair the pancreas. The drop in fluid levels can lead to a reduction in the volume of blood within the body, which is known as hypovolemic shock. Hypovolemic shock can be life-threatening as it can very quickly starve the body of the oxygen-rich blood that it needs to survive. To avoid going into hypovolemic shock, fluids will be administered intravenously. Oxygen will be supplied through tubes attached to the nose and ventilation equipment may be used to assist with breathing. Feeding tubes may be used to provide nutrients, combined with appropriate analgesia. As with mild pancreatitis, it will be necessary to treat the underlying cause—gallstones, discontinuing medications, cessation of alcohol, etc. If the cause is gallstones, it is likely that an ERCP procedure or removal of the gallbladder will be recommended. The gallbladder should be removed during the same hospital admission or within two weeks of pancreatitis onset so as to limit the risk of recurrent pancreatitis. If the cause of pancreatitis is alcohol, cessation of alcohol consumption and treatment for alcohol dependency may improve pancreatitis. Even if the underlying cause is not related to alcohol consumption, doctors recommend avoiding it for at least six months as this can cause further damage to the pancreas during the recovery process. Oral intake, especially fats, is generally restricted initially, but early enteral feeding within 48 hours has been shown to improve clinical outcomes. Fluids and electrolytes are replaced intravenously. Nutritional support is initiated via tube feeding to bypass the portion of the digestive tract most affected by secreted pancreatic enzymes if there is no improvement in the first 72–96 hours of treatment. Prognosis Severe acute pancreatitis has mortality rates around 2–9%, higher where necrosis of the pancreas has occurred. Several scoring systems are used to predict the severity of an attack of pancreatitis. They each combine demographic and laboratory data to estimate severity or probability of death. Examples include APACHE II, Ranson, BISAP, and Glasgow. 
The Modified Glasgow criteria suggest that a case be considered severe if at least three of the following are true: Age > 55 years Blood levels: PO2 oxygen < 60 mmHg or 7.9 kPa White blood cells > 15,000/μL Calcium < 2 mmol/L Blood urea nitrogen > 16 mmol/L Lactate dehydrogenase (LDH) > 600 IU/L Aspartate transaminase (AST) > 200 IU/L Albumin < 32 g/L Glucose > 10 mmol/L This can be remembered using the mnemonic PANCREAS: PO2 oxygen < 60 mmHg or 7.9 kPa Age > 55 Neutrophilia white blood cells > 15,000/μL Calcium < 2 mmol/L Renal function (BUN) > 16 mmol/L Enzymes lactate dehydrogenase (LDH) > 600 IU/L aspartate transaminase (AST) > 200 IU/L Albumin < 32 g/L Sugar glucose > 10 mmol/L The BISAP score (blood urea nitrogen level >25 mg/dL (8.9 mmol/L), impaired mental status, systemic inflammatory response syndrome, age over 60 years, pleural effusion) has been validated as similar to other prognostic scoring systems. Epidemiology Globally the incidence of acute pancreatitis is 5 to 35 cases per 100,000 people. The incidence of chronic pancreatitis is 4–8 per 100,000 with a prevalence of 26–42 cases per 100,000. In 2013 pancreatitis resulted in 123,000 deaths, up from 83,000 deaths in 1990. Costs In adults in the United Kingdom, the estimated average total direct and indirect costs of chronic pancreatitis are roughly £79,000 per person on an annual basis. Acute recurrent pancreatitis and chronic pancreatitis occur infrequently in children, but are associated with high healthcare costs due to substantial disease burden. Globally, the estimated average total cost of treatment for children with these conditions is approximately $40,500/person/year. Other animals Fatty foods may cause pancreatitis in dogs. See also Exocrine pancreatic insufficiency Chronic pancreatitis References External links GeneReviews/NCBI/NIH/UW entry on PRSS1-Related Hereditary Pancreatitis Abdominal pain Herpes simplex virus–associated diseases Inflammations Metabolic disorders Pancreas disorders Wikipedia emergency medicine articles ready to translate Wikipedia medicine articles ready to translate
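A hedged sketch of how the Modified Glasgow (PANCREAS) tally above could be encoded; the function name and argument layout are assumptions, the thresholds are those listed in the text, and this is an illustration rather than clinical software:

def modified_glasgow(pao2_kpa, age_years, wbc_per_ul, calcium_mmol_l,
                     urea_mmol_l, ldh_iu_l, ast_iu_l, albumin_g_l, glucose_mmol_l):
    # One boolean per PANCREAS criterion; three or more suggests a severe case.
    criteria = [
        pao2_kpa < 7.9,        # P: PO2
        age_years > 55,        # A: Age
        wbc_per_ul > 15000,    # N: Neutrophilia
        calcium_mmol_l < 2,    # C: Calcium
        urea_mmol_l > 16,      # R: Renal function (BUN)
        ldh_iu_l > 600 or ast_iu_l > 200,  # E: Enzymes
        albumin_g_l < 32,      # A: Albumin
        glucose_mmol_l > 10,   # S: Sugar
    ]
    score = sum(criteria)
    return score, score >= 3

print(modified_glasgow(7.0, 62, 18000, 1.8, 12, 450, 150, 36, 8))  # (4, True)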
Pancreatitis
Chemistry
3,691
69,325,608
https://en.wikipedia.org/wiki/Hexadecanal
Hexadecanal is an organic compound with the chemical formula C16H32O. In biology Hexadecanal is found in human skin, saliva, and feces. It has a calming effect on mice. A 2017 study found that non-autistic men demonstrate an increase in electrodermal activity when exposed to subliminal levels of hexadecanal while men with autism spectrum disorder do not. In 2021, inhalation of hexadecanal was found to reduce aggression in men but to trigger aggression in women. Hexadecanal is one of the most abundant substances emitted by human babies from their heads, which may be an evolutionary survival mechanism to induce mothers to defend the baby and fathers to not attack it. But it is not yet known whether the amount of hexadecanal emitted by humans is sufficient to affect other humans. References Alkanals
Hexadecanal
Chemistry
185
3,422,168
https://en.wikipedia.org/wiki/Traian%20Lalescu
Traian Lalescu (12 July 1882 – 15 June 1929) was a Romanian mathematician. His main focus was on integral equations and he contributed to work in the areas of functional equations, trigonometric series, mathematical physics, geometry, mechanics, algebra, and the history of mathematics. Life He was born in Bucharest. His father, also named Traian, was originally from Cornea, Caraș-Severin and worked as a superintendent at the Creditul Agricol Bank. Lalescu went to the Carol I High School in Craiova, continuing high school in Roman, and graduating from the Boarding High School in Iași. After entering the University of Iași, he completed his undergraduate studies in 1903 at the University of Bucharest. He earned his Ph.D. in Mathematics from the University of Paris in 1908. His dissertation, Sur les équations de Volterra, was written under the direction of Émile Picard. That same year, he presented his work at the International Congress of Mathematicians in Rome. In 1911, he published Introduction to the Theory of Integral Equations, the first book ever on the subject of integral equations. After returning to Romania in 1909, he first taught Mathematics at the Ion Maiorescu Gymnasium in Giurgiu. He then taught until 1912 at the Gheorghe Șincai High School and the Cantemir Vodă High School in Bucharest. From 1909 to 1910, he was a teaching assistant at the School of Bridges and Roads, in the department of graphic statics. A year later, he was appointed full-time professor of analytical geometry, succeeding Spiru Haret; he lectured at the School (which would later become the Polytechnic University of Bucharest) until his death. In 1916, he became the first president of Sportul Studențesc, the university's football club. Also that year, he was appointed tenured professor of algebra and number theory at the University of Bucharest, a position he held until his death. In 1920, Lalescu became a professor and the inaugural rector of the Polytechnic University of Timișoara; for a year, he would commute by train for 20 hours between Timișoara and Bucharest to teach his classes. In 1921, he founded the football club Politehnica Timișoara. His wife, Ecaterina, was a former student of his; they had four children—two sons and two daughters: Nicolae, Mariana, Florica, and Traian. She died in childbirth in 1921, at age 28. In 1920, Lalescu was elected to the Parliament of Romania as deputy for Orșova, and then re-elected twice as deputy for Caransebeș. He presented in parliament a well-received report on the budget project for 1925. In the fall of 1927, he contracted double pneumonia; in 1928, he went for a vacation in Nice and for treatment in Paris, but he succumbed to the disease the next year, at age 46. In 1991, he was elected posthumously honorary member of the Romanian Academy. The Lalescu sequence In 1900, Lalescu proposed the study of the sequence $L_n = \sqrt[n+1]{(n+1)!} - \sqrt[n]{n!}$. It turns out that the Lalescu sequence is decreasing and bounded below by 0, and thus is converging. Its limit is given by $\lim_{n\to\infty} L_n = 1/e$. Legacy There are several institutions bearing his name, including Colegiul Național de Informatică Traian Lalescu in Hunedoara and Liceul Teoretic Traian Lalescu in Reșița. There are also streets named after him in Craiova, Oradea, Reșița, and Timișoara. The National Mathematics Contest Traian Lalescu for undergraduate students is also named after him. 
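A quick numerical check (a sketch, not part of the article) of the convergence of the Lalescu sequence toward 1/e ≈ 0.36788:

import math

def lalescu(n: int) -> float:
    # L_n = (n+1)-th root of (n+1)! minus n-th root of n!
    return math.factorial(n + 1) ** (1 / (n + 1)) - math.factorial(n) ** (1 / n)

for n in (1, 10, 100):
    print(n, round(lalescu(n), 5))
# The printed values decrease toward 1/e ~ 0.36788 (L_1 = sqrt(2) - 1 ~ 0.41421)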
A statue of Lalescu, carved in 1930 by Cornel Medrea, is situated in front of the Faculty of Mechanical Engineering, in Timișoara and another statue of Lalescu is situated inside the University of Bucharest. Work T. Lalesco, Introduction à la théorie des équations intégrales. Avec une préface de É. Picard, Paris: A. Hermann et Fils, 1912. VII + 152 pp. JFM entry Traian Lalescu, Introducere la teoria ecuațiilor integrale, Editura Academiei Republicii Populare Romîne, 1956. 134 pp. (A reprint of the first edition [Bucharest, 1911], with a bibliography taken from the French translation [Paris, 1912]). References External links "Representative Figures of the Romanian Science and Technology" "Traian Lalescu", from Colegiul Național de Informatică Traian Lalescu, Hunedoara "Cine a fost Traian Lalescu?", from Liceul Teoretic Traian Lalescu, Reșița "Monumentul lui Traian Lalescu (1930)", at infotim.ro A Class of Applications of AM-GM Inequality (From a 2004 Putnam Competition Problem to Lalescu’s Sequence) by Wladimir G. Boskoff and Bogdan Suceava, Australian Math. Society Gazette, 33 (2006), No.1, 51-56. 1882 births 1929 deaths Scientists from Bucharest 20th-century Romanian mathematicians Mathematical analysts Romanian schoolteachers Romanian textbook writers Rectors of Politehnica University of Timișoara University and college founders Academic staff of the University of Bucharest Academic staff of the Politehnica University of Bucharest Carol I National College alumni Costache Negruzzi National College alumni Alexandru Ioan Cuza University alumni University of Bucharest alumni University of Paris alumni Members of the Chamber of Deputies (Romania) Romanian expatriates in France Deaths from pneumonia in Romania Members of the Romanian Academy elected posthumously
Traian Lalescu
Mathematics
1,165
27,945,084
https://en.wikipedia.org/wiki/Briolette
A briolette is a style of gemstone cut. It is an elongated pear shape, mostly symmetrical along its main axis, covered with angular facets, usually with a pointed end and no girdle. It is often drilled to hang as a bead. The name is also sometimes erroneously used for pendeloque cut gems. While the briolette is a symmetrical drop shape, the pendeloque cut is flatter and has two different sides: one with a large table facet and one with a point or ridge. The top of a briolette is attached to the piece of jewelry, usually by a hole drilled in the stone, while a pendeloque cut stone needs to be mounted in a prong setting. The briolette is one of the drop cuts for gemstones. The briolette cut is said to have been designed by Belgian Lodewyk van Bercken in 1476. This cut requires a more advanced technique than the round cuts, like the brilliant cut, and results in a much larger loss of the original stone's weight, making briolettes very rare and expensive. The cut is mostly used for stones with color, like sapphires and varieties of quartz. It is rarely used for diamonds. The style was popular during the Victorian era. See also Briolette of India Pendeloque cut References Gemstone cutting
Briolette
Engineering
283
20,034
https://en.wikipedia.org/wiki/Mutual%20recursion
In mathematics and computer science, mutual recursion is a form of recursion where two mathematical or computational objects, such as functions or datatypes, are defined in terms of each other. Mutual recursion is very common in functional programming and in some problem domains, such as recursive descent parsers, where the datatypes are naturally mutually recursive. Examples Datatypes The most important basic example of a datatype that can be defined by mutual recursion is a tree, which can be defined mutually recursively in terms of a forest (a list of trees). Symbolically: f: [t[1], ..., t[k]] t: v f A forest f consists of a list of trees, while a tree t consists of a pair of a value v and a forest f (its children). This definition is elegant and easy to work with abstractly (such as when proving theorems about properties of trees), as it expresses a tree in simple terms: a list of one type, and a pair of two types. Further, it matches many algorithms on trees, which consist of doing one thing with the value, and another thing with the children. This mutually recursive definition can be converted to a singly recursive definition by inlining the definition of a forest: t: v [t[1], ..., t[k]] A tree t consists of a pair of a value v and a list of trees (its children). This definition is more compact, but somewhat messier: a tree consists of a pair of one type and a list of another, which require disentangling to prove results about. In Standard ML, the tree and forest datatypes can be mutually recursively defined as follows, allowing empty trees: datatype 'a tree = Empty | Node of 'a * 'a forest and 'a forest = Nil | Cons of 'a tree * 'a forest Computer functions Just as algorithms on recursive datatypes can naturally be given by recursive functions, algorithms on mutually recursive data structures can be naturally given by mutually recursive functions. Common examples include algorithms on trees and recursive descent parsers. As with direct recursion, tail call optimization is necessary if the recursion depth is large or unbounded, such as using mutual recursion for multitasking. Note that tail call optimization in general (when the function called is not the same as the original function, unlike in tail-recursive calls) may be more difficult to implement than the special case of tail-recursive call optimization, and thus efficient implementation of mutual tail recursion may be absent from languages that only optimize tail-recursive calls. In languages such as Pascal that require declaration before use, mutually recursive functions require forward declaration, as a forward reference cannot be avoided when defining them. As with directly recursive functions, a wrapper function may be useful, with the mutually recursive functions defined as nested functions within its scope if this is supported. This is particularly useful for sharing state across a set of functions without having to pass parameters between them. Basic examples A standard example of mutual recursion, which is admittedly artificial, determines whether a non-negative number is even or odd by defining two separate functions that call each other, decrementing by 1 each time. In C: #include <stdbool.h> bool is_odd(unsigned int n); /* forward declaration, needed because is_even calls is_odd before its definition */ bool is_even(unsigned int n) { if (n == 0) return true; else return is_odd(n - 1); } bool is_odd(unsigned int n) { if (n == 0) return false; else return is_even(n - 1); } These functions are based on the observation that the question is 4 even? is equivalent to is 3 odd?, which is in turn equivalent to is 2 even?, and so on down to 0. 
This example is mutual single recursion, and could easily be replaced by iteration. In this example, the mutually recursive calls are tail calls, and tail call optimization would be necessary to execute in constant stack space. In C, this would take O(n) stack space, unless rewritten to use jumps instead of calls. This could be reduced to a single recursive function is_even. In that case, is_odd, which could be inlined, would call is_even, but is_even would only call itself. As a more general class of examples, an algorithm on a tree can be decomposed into its behavior on a value and its behavior on children, and can be split up into two mutually recursive functions, one specifying the behavior on a tree, calling the forest function for the forest of children, and one specifying the behavior on a forest, calling the tree function for the tree in the forest. In Python: def f_tree(tree) -> None: f_value(tree.value) f_forest(tree.children) def f_forest(forest) -> None: for tree in forest: f_tree(tree) In this case the tree function calls the forest function by single recursion, but the forest function calls the tree function by multiple recursion. Using the Standard ML datatype above, the size of a tree (number of nodes) can be computed via the following mutually recursive functions: fun size_tree Empty = 0 | size_tree (Node (_, f)) = 1 + size_forest f and size_forest Nil = 0 | size_forest (Cons (t, f')) = size_tree t + size_forest f' A more detailed example in Scheme, counting the leaves of a tree: (define (count-leaves tree) (if (leaf? tree) 1 (count-leaves-in-forest (children tree)))) (define (count-leaves-in-forest forest) (if (null? forest) 0 (+ (count-leaves (car forest)) (count-leaves-in-forest (cdr forest))))) These examples reduce easily to a single recursive function by inlining the forest function in the tree function, which is commonly done in practice: directly recursive functions that operate on trees sequentially process the value of the node and recurse on the children within one function, rather than dividing these into two separate functions. Advanced examples A more complicated example is given by recursive descent parsers, which can be naturally implemented by having one function for each production rule of a grammar, which then mutually recurse; this will in general be multiple recursion, as production rules generally combine multiple parts. This can also be done without mutual recursion, for example by still having separate functions for each production rule, but having them called by a single controller function, or by putting all the grammar in a single function. Mutual recursion can also implement a finite-state machine, with one function for each state, and single recursion in changing state; this requires tail call optimization if the number of state changes is large or unbounded. This can be used as a simple form of cooperative multitasking. A similar approach to multitasking is to instead use coroutines which call each other, where rather than terminating by calling another routine, one coroutine yields to another but does not terminate, and then resumes execution when it is yielded back to. This allows individual coroutines to hold state, without it needing to be passed by parameters or stored in shared variables. 
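As a sketch of the state-machine idea above (an illustration, not from the article): Python performs no tail call optimization, so instead of having the state functions call each other directly, each returns the next state, and a small trampoline loop drives the machine in constant stack space.

def state_a(steps: int):
    print("in state A")
    return None if steps == 0 else (state_b, steps - 1)

def state_b(steps: int):
    print("in state B")
    return None if steps == 0 else (state_a, steps - 1)

def run(state, steps: int) -> None:
    # Trampoline: keep invoking whatever (function, argument) pair comes back.
    thunk = (state, steps)
    while thunk is not None:
        func, n = thunk
        thunk = func(n)

run(state_a, 3)  # prints: in state A, in state B, in state A, in state B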
There are also some algorithms which naturally have two phases, such as minimax (min and max), which can be implemented by having each phase in a separate function with mutual recursion, though they can also be combined into a single function with direct recursion. Mathematical functions In mathematics, the Hofstadter Female and Male sequences are an example of a pair of integer sequences defined in a mutually recursive manner. Fractals can be computed (up to a given resolution) by recursive functions. This can sometimes be done more elegantly via mutually recursive functions; the Sierpiński curve is a good example. Prevalence Mutual recursion is very common in functional programming, and is often used for programs written in LISP, Scheme, ML, and similar programming languages. For example, Abelson and Sussman describe how a meta-circular evaluator can be used to implement LISP with an eval-apply cycle. In languages such as Prolog, mutual recursion is almost unavoidable. Some programming styles discourage mutual recursion, claiming that it can be confusing to distinguish the conditions which will return an answer from the conditions that would allow the code to run forever without producing an answer. Peter Norvig points to a design pattern which discourages the use entirely. Terminology Mutual recursion is also known as indirect recursion, by contrast with direct recursion, where a single function calls itself directly. This is simply a difference of emphasis, not a different notion: "indirect recursion" emphasises an individual function, while "mutual recursion" emphasises the set of functions, and does not single out an individual function. For example, if f calls itself, that is direct recursion. If instead f calls g and then g calls f, which in turn calls g again, from the point of view of f alone, f is indirectly recursing, while from the point of view of g alone, g is indirectly recursing, while from the point of view of both, f and g are mutually recursing on each other. Similarly a set of three or more functions that call each other can be called a set of mutually recursive functions. Conversion to direct recursion Mathematically, a set of mutually recursive functions is primitive recursive, which can be proven by course-of-values recursion, building a single function F that lists the values of the individual recursive functions in order, and rewriting the mutual recursion as a primitive recursion. Any mutual recursion between two procedures can be converted to direct recursion by inlining the code of one procedure into the other. If there is only one site where one procedure calls the other, this is straightforward, though if there are several it can involve code duplication. In terms of the call stack, two mutually recursive procedures yield a stack ABABAB..., and inlining B into A yields the direct recursion (AB)(AB)(AB)... Alternately, any number of procedures can be merged into a single procedure that takes as argument a variant record (or algebraic data type) representing the selection of a procedure and its arguments; the merged procedure then dispatches on its argument to execute the corresponding code and uses direct recursion to call self as appropriate. This can be seen as a limited application of defunctionalization. This translation may be useful when any of the mutually recursive procedures can be called by outside code, so there is no obvious case for inlining one procedure into the other. 
Such code then needs to be modified so that procedure calls are performed by bundling arguments into a variant record as described; alternately, wrapper procedures may be used for this task. See also Cycle detection (graph theory) Recursion (computer science) Circular dependency References External links Mutual recursion at Rosetta Code "Example demonstrating good use of mutual recursion", "Are there any example of Mutual recursion?", Stack Overflow Theory of computation Recursion
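A hedged Python sketch (not from the article) of the merging transformation described above, applied to the even/odd pair from the basic examples: the two procedures become one procedure that dispatches on a tag and recurses only on itself.

def parity(which: str, n: int) -> bool:
    # `which` plays the role of the variant-record tag selecting a procedure
    if which == "is_even":
        return True if n == 0 else parity("is_odd", n - 1)   # direct self-call
    if which == "is_odd":
        return False if n == 0 else parity("is_even", n - 1)
    raise ValueError(f"unknown tag: {which}")

print(parity("is_even", 4))  # True
print(parity("is_odd", 4))   # False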
Mutual recursion
Mathematics
2,437
9,766,295
https://en.wikipedia.org/wiki/Glass%20cutter
A glass cutter is a tool used to make a shallow score in one surface of a piece of glass (normally a flat one) that is to be broken in two pieces, for example to fit a window. The scoring makes a split in the surface of the glass which encourages the glass to break along the score. This is not to be confused with the tools used to make cut glass objects. Regular, annealed glass can be broken apart this way but not tempered glass, as the latter tends to shatter rather than breaking cleanly into two pieces. History In the Middle Ages, glass was cut with a heated and sharply pointed iron rod. The red hot point was drawn along the moistened surface of the glass causing it to snap apart. Fractures created in this way were not very accurate and the rough pieces had to be chipped or "grozed" down to more exact shapes with a hooked tool called a grozing iron. Between the 14th and 16th centuries, starting in Italy, a diamond-tipped cutter became prevalent which allowed for more precise cutting. In 1869, the wheel cutter was developed by Samuel Monce of Bristol, Connecticut, which remains the current standard tool for most glass cutting. Cutting process A glass cutter may use a diamond to create the split, but more commonly a small cutting wheel made of hardened steel or tungsten carbide 4–6 mm in diameter with a V-shaped profile called a "hone angle" is used. The greater the hone angle of the wheel, the sharper the angle of the V and the thicker the piece of glass it is designed to cut. The hone angle on most hand-held glass cutters is 120° to 140°, though wheels are made as near-flat as 154° or even 160° [180° would be flat like a roller] for cutting very thick glass. Their main drawback is that wheels with sharper hone angles will become dull more quickly than their more obtuse counterparts. Lubrication The effective cutting of glass also requires a small amount of oil (kerosene is often used) and some glass cutters contain a reservoir of this oil which both lubricates the wheel and prevents it from becoming too hot: as the wheel scores, friction between it and the glass surface briefly generates intense heat, and oil dissipates this efficiently. When properly lubricated, a steel wheel can give a long period of satisfactory service. However, tungsten carbide wheels have been proven to have a significantly longer life than steel wheels and offer greater and more reproducible penetration in scoring as well as easier opening of the scored glass. Cutting The cutter is then rolled firmly over the glass, producing a "score line" or "fissure," weakening the glass along this line. Pressure as light as 5 or 6 pounds may suffice with a 120 to 140 degree wheel on thin glass, while pressure as heavy as 20+ pounds may be needed with a 154 to 160 degree wheel on very thick glass. The well-scored pane is ready to be split. The glass may be further weakened by lightly tapping along the cut. Some glass cutters have a ball on one end for tapping the glass. Running pliers may then be used to "run" or "open" the split. Wheel diameter Glass cutters are manufactured with wheels of varying diameters. One of the most popular has a diameter of 5.5 mm. The ratio between the arc of the wheel and the pressure applied with the tool has an important bearing on the degree of penetration. Average hand pressure with this size wheel often gives good results. For a duller wheel on soft glass, a larger wheel (e.g., 6 mm) will require no change in hand pressure. 
A smaller wheel (3 mm) is appropriate for cutting patterns and curves since a smaller wheel can follow curved lines without dragging. General purpose glass is mostly made by the float glass process and is obtainable in thicknesses from 1.5 to 25 mm. Thin float glass tends to cut easily with a sharp cutter. Thicker glass such as 10 mm float glass is significantly more difficult to cut and break; glass with textured or patterned surfaces may demand specialized methods for scoring and opening the cuts. Large sheets Large sheets of glass are usually cut with a computer-assisted (CNC) semi-automatic glass cutting table. These sheets are then broken out by hand into the individual sheets of glass (also known as "lites" in the glass industry). See also Cutting Computer numerical control (CNC) Water jet cutter References Further reading Cut and install glass Glasschneider Test (in German) History (in French) History of glass cutters Glazier's tools Cutting tools Glass production
Glass cutter
Materials_science,Engineering
979
62,471,004
https://en.wikipedia.org/wiki/ArviZ
ArviZ is a Python package for exploratory analysis of Bayesian models. It is specifically designed to work with the output of probabilistic programming libraries like PyMC, Stan, and others by providing a set of tools for summarizing and visualizing the results of Bayesian inference in a convenient and informative way. ArviZ also provides a common data structure for manipulating and storing data commonly arising in Bayesian analysis, like posterior samples or observed data. ArviZ is an open source project, developed by the community as an affiliated project of NumFOCUS, and it has been used to help interpret inference problems in several scientific domains, including astronomy, neuroscience, physics and statistics. Etymology The ArviZ name is derived from reading "rvs" (the short form of random variates) as a word instead of spelling it out, combined with the particle "viz", usually used to abbreviate visualization. Exploratory analysis of Bayesian models When working with Bayesian models there are a series of related tasks that need to be addressed besides inference itself: Diagnosis of the quality of the inference, which is needed when using numerical methods such as Markov chain Monte Carlo techniques Model criticism, including evaluations of both model assumptions and model predictions Comparison of models, including model selection or model averaging Preparation of the results for a particular audience All these tasks are part of the exploratory analysis of Bayesian models approach, and successfully performing them is central to the iterative and interactive modeling process. These tasks require both numerical and visual summaries. Library features InferenceData object for Bayesian data manipulation. This object is based on xarray Plots using two alternative backends: matplotlib or bokeh Numerical summaries and diagnostics for Markov chain Monte Carlo methods Integration with established probabilistic programming languages, including PyStan (the Python interface of Stan), PyMC, Edward, and Pyro, and easy integration with novel or bespoke Bayesian analyses ArviZ is also available in Julia, using the ArviZ.jl interface See also Bambi is a high-level Bayesian model-building interface based on PyMC PyMC a probabilistic programming language written in Python Stan is a probabilistic programming language for statistical inference written in C++ References External links ArviZ web site Computational statistics Free Bayesian statistics software Monte Carlo software Numerical programming languages Probabilistic software
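A minimal usage sketch (assuming ArviZ and a plotting backend such as matplotlib are installed; "centered_eight" is one of the example datasets bundled with the library):

import arviz as az

# Load a bundled example InferenceData object
data = az.load_arviz_data("centered_eight")

print(az.summary(data))   # numerical summaries and MCMC diagnostics (ESS, r_hat)
az.plot_trace(data)       # visual sampling diagnostics per variable
az.plot_posterior(data)   # posterior distributions with credible intervals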
ArviZ
Mathematics
518
65,084,570
https://en.wikipedia.org/wiki/Austrian%20Resin%20Extraction
"Pecherei" is the common expression in southern Lower Austria for the practice of Resin Extraction from black pine trees (Evergreens). This profession centers around the extraction of tree resin, also known as "Pitch," that will ultimately be used in the production of further chemical products. Those who extract resin for a living are described as "Pecher" or "Resin Workers." In the year 2011, Pecherei was incorporated into the register of Intangible Cultural Heritage in Austria, which was drafted in the context of the UNESCO Convention for the Preservation of Intangible Culture. The most important tree for use in resin extraction is the black pine (Pinus nigra), which has the greatest resin content of all of the European coniferous trees, and it was even used as early as by the Romans for this very purpose. These trees are generally best tapped for their resin between the ages of 90 and 120 years old. In Lower Austria, the Austrian Black Pine is the predominant tree, and its resin is of particularly high-quality, thereby making the Austrian pitch one of the best in the world. History In the southern part of Lower Austria, most prominently in the Industrial Quarter and the Vienna Woods, Pecherei became an established practice probably as early as the 17th century. From the beginning of the 18th century, lords of local manors began to promote pitch extraction, which led to the emergence of Pitch Huts for resin processing. In fact, at this time, Pecherei and the trading business surrounding it became an important source of income for some members of the population. In the early decades of the 19th century, resin extraction experienced its first heyday, as an increase in demand led to increasing prices and production. However, throughout the 1960s, the industry gradually came to a standstill. The main reason for this was the fact that cheaper, comparable products were being imported from eastern-bloc countries (communist countries during the cold war) as well as from Turkey, Greece, and Portugal. Also, around this time there were numerous advances in technical chemistry that made resin less necessary for numerous products. Austrian Social Security Law still recognizes the Pecherei profession in the context of independent practitioners. This profession is defined as follows: "Self-employed Resin Workers are people who, without being employed on the basis of a service or apprenticeship relationship, pursue a seasonally recurring, monetarily gainful activity by extracting resin products in forests outside their home area, provided that they usually pursue this gainful activity without the help of non-family workers." Raw Materials and Processing The raw resin is light yellow in color. It is rich in organic hydrocarbons, but it has a low oxygen content and is nitrogen-free. Additionally, raw resin is largely a mix of terpene-derived substances, with many having acidic properties. The resin owes its spicy, aromatic smell to the abundant essential oils it contains. The resin flow within a tree differs based on the time of year and the weather, with warmth and humidity having beneficial effects. Between 3 and 4 kilograms of Pitch could be obtained from a single trunk in one year. So, in order for a Resin Worker to live modestly with his family, he had to extract resin from about 3000 trees. The workdays usually began before sunrise with the commute to the work area in the pine forest, and Resin Workers would often work 10 to 12 hours. 
The tree resin was melted from the raw resin balm in special huts through a distillation process, the so-called "Boiling Pitch". During this process, the impurities were first skimmed off or sieved before the oil and water were evaporated and collected in a collection container. The lighter turpentine oil floated to the top of the mixture during this process, and it was poured out. The "Boiling Pitch," now freed from water and turpentine oils, became a dark yellow, hard and brittle mass after cooling—this is known as "Rosin." The collected turpentine oil and the Rosin were primarily used in the paper, varnish, soap, wire, and shoe-polish industries. Seasonal Work and Working Methods The work of a Resin Worker varied based on the season. The most important work in the winter was the preparation of equipment, especially the making of pitch notches via the use of a special tool called a "Notch Planar." Pitch notches were wooden planks, which were inserted against the tree after bark removal (between the bare trunk and the remaining bark on the edges) to help direct resin flow. The most complex work took place in the spring, when the actual resin collection was done, and different methods were used. Pitch Container Variations and General Collection Methods "Grandl" or "Scrap" Method In the earliest method of resin extraction, the resin was collected near the base of the trunk in simple earthen pits smeared with clay. Because this led to resin contamination, the "Grandl" or Scrap Method was eventually developed. When using this method, the Resin Worker would create a recess—which was called a "Scrap"—out of the wood near the ground with a hoe. This Scrap became the new site of resin collection. Since this new resin container had to be smooth and clean, the Scrap was smoothed with a narrower, rounded hoe (called the "moon" or "scrap" hoe). The resulting wood chips from this process were removed via the use of a pointed stick—referred to as the "Rowisch"—which at the same time served as a counting tool: after each new scrap was cut, the Resin Worker would carve an indentation into the stick so that the number of trees that had been extracted was always known. With the "Adze," which later became the guild symbol of the Pecherei profession, and with a hoe, the Resin Worker subsequently removed the bark from the tree trunk. In order to be able to direct the resin flow into the resin-collection area (the Scrap), pitch notches had to be inserted across the trunk. De-barking, the oldest working method, was carried out about three times every two weeks from spring to early autumn. The Resin Worker removed the bark piece-by-piece with a special de-barking Adze down to the trunk so that the surface free from bark continued to grow and the resin flow was maintained. Depending on its size, a Scrap could hold between 0.25 and 0.35 kg of pitch. A tree worked in this way could provide pitch for 12 to 18 years of resin extraction. The Beer Mug Method In the inter-war period, the transition from the "Scrap" to the "Beer Mug" method began, in which pitch mugs were used for the resin collection. To do this, the bark of new pitch trees, called the "Heurigen", had to be trimmed from the ground up with a hoe. During this process, the bark was removed from about a third of the circumference of the trunk first with an axe and then with the Rintler (which is essentially a scraper) so that a V-shaped demarcation was created. 
Then, the Resin Worker had to create an elongated recess on the sides of the tree trunk to accommodate the pitch notches, chopping and pulling them in. Just below the narrowest point, an opening was hacked out to hold the pitch-collecting mug; a pitch nail was hammered in just below it, and finally the collecting mug with its lid was put in. The tree was now ready for resin extraction and, as described above, had to be de-barked at regular intervals. The trees that had been pitched for several years were processed in a similar way. General Description of Bark Removal Methods While the Scrap and Beer Mug methods were essentially the two approaches used in setting up a tree for resin extraction, some variation existed in terms of how the bark was removed and how the resin flow was managed. Some of these bark removal and resin flow methods are described below. Adze-based Bark Removal (de-barking): As mentioned in the collection methods described above, bark was originally removed via the use of an Adze or similar tools. This, however, was time-consuming and required much effort, as only small pieces of bark could be removed per hit with this device. The Planing Method Due to the strenuous nature of the Adze-based bark removal, the planing method was developed. Not only was it less strenuous; it also took less time. The working method for new pitch trees as well as those that had been worked on for several years remained the same as described above, only planing was used instead of the usual, Adze-based, de-barking. With the plane (a tool for smoothing surfaces), the Resin Worker cut a wide, flat chunk from the trunk with a single cut. When de-barking in earlier methods, this could only be achieved with many hits from the Adze. The Groove Method As with all processing methods, the upper section of tree bark had to be removed beforehand for the grooving method. Then, the Resin Worker removed a layer of bark several millimeters thick with a scraper. A precise cut was important. With this planing process, no contiguous surfaces were created, but rather V-shaped grooves within the trunk itself. This saved the Resin Worker from inserting the pitch notches, as the resin could flow through the grooves into the pitch mug. Although the Groove Method saved work and time by eliminating chopping, it was only used sporadically in southern Lower Austria, as the yield was up to 50% lower than that of the two other resin extraction approaches, namely adze-based de-barking and planing. A big problem with the grooving process was the clogging of the grooves with resin. Other Tools and Facilities The ladder was an indispensable tool for working on trees that had been pitched for several years. It was made from two thin, long pine trees that served as stiles and tough dogwood for the rungs. A professional extractor climbed up to 22 rungs of the ladder, which corresponds to a height of 6 m, several hundred times a day, worked the trunk and then slid down with the leather slip patches attached to the thighs and knees. According to old custom, a wooden Pecher hut was built in the middle of the forest. It resembled a wood chopper's hut and was mainly used for protection and refuge in bad weather. Inside there was usually a roughly timbered table and a bench. The Resin Worker ate here occasionally. Now and then, there was also a stove. The Pecher went home nearly every day; only in exceptional cases did he spend the night in the hut. 
A ladder area was set up so that the ladders needed to work on the trees of different heights did not always have to be taken home. Citations (Weblinks) Resins Lower Austria
Austrian Resin Extraction
Physics
2,243
15,544,038
https://en.wikipedia.org/wiki/Hamaker%20constant
In molecular physics, the Hamaker constant (denoted $A$; named for H. C. Hamaker) is a physical constant that can be defined for a van der Waals (vdW) body–body interaction: $A = \pi^2 C \rho_1 \rho_2$, where $\rho_1$ and $\rho_2$ are the number densities of the two interacting kinds of particles, and $C$ is the London coefficient in the particle–particle pair interaction. The magnitude of this constant reflects the strength of the vdW-force between two particles, or between a particle and a substrate. The Hamaker constant provides the means to determine the interaction parameter $C$ from the vdW pair potential $w(r) = -C/r^6$. Hamaker's method and the associated Hamaker constant ignore the influence of an intervening medium between the two particles of interaction. In 1956 Lifshitz developed a description of the vdW energy but with consideration of the dielectric properties of this intervening medium (often a continuous phase). The Van der Waals forces are effective only up to several hundred angstroms. When the interacting bodies are too far apart, the dispersion potential decays faster than $1/r^6$; this is called the retarded regime, and the result is a Casimir–Polder force. See also Hamaker theory Intermolecular forces van der Waals Forces References Physical chemistry Intermolecular forces
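A small numerical sketch (an illustration with assumed magnitudes, not values from the article): with a London coefficient on the order of 1e-77 J m^6 and number densities around 3e28 m^-3, the formula A = pi^2 C rho_1 rho_2 gives a Hamaker constant of order 1e-19 J, the scale typically quoted for condensed phases.

import math

def hamaker_constant(C: float, rho1: float, rho2: float) -> float:
    # A = pi^2 * C * rho1 * rho2, with C in J m^6, densities in m^-3, A in J
    return math.pi ** 2 * C * rho1 * rho2

A = hamaker_constant(1e-77, 3e28, 3e28)
print(f"A ~ {A:.2e} J")  # ~ 8.88e-20 J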
Hamaker constant
Physics,Chemistry,Materials_science,Engineering
258
330,017
https://en.wikipedia.org/wiki/Discretization
In applied mathematics, discretization is the process of transferring continuous functions, models, variables, and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Dichotomization is the special case of discretization in which the number of discrete classes is 2, which can approximate a continuous variable as a binary variable (creating a dichotomy for modeling purposes, as in binary classification). Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, discretization may also refer to modification of variable or category granularity, as when multiple discrete variables are aggregated or multiple discrete categories fused. Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand. The terms discretization and quantization often have the same denotation but not always identical connotations. (Specifically, the two terms share a semantic field.) The same is true of discretization error and quantization error. Mathematical methods relating to discretization include the Euler–Maruyama method and the zero-order hold. Discretization of linear state space models Discretization is also concerned with the transformation of continuous differential equations into discrete difference equations, suitable for numerical computing. The following continuous-time state space model $\dot{x}(t) = A x(t) + B u(t) + w(t)$, $y(t) = C x(t) + D u(t) + v(t)$, where $w(t)$ and $v(t)$ are continuous zero-mean white noise sources with power spectral densities $Q$ and $R$, can be discretized, assuming zero-order hold for the input $u$ and continuous integration for the noise $v$, to $x[k+1] = A_d x[k] + B_d u[k] + w[k]$, $y[k] = C_d x[k] + D_d u[k] + v[k]$, with covariances $Q_d$ and $R_d$, where $A_d = e^{AT}$, $B_d = \left( \int_0^T e^{A\tau} \, d\tau \right) B$, $C_d = C$, $D_d = D$, $Q_d = \int_0^T e^{A\tau} Q e^{A^\top \tau} \, d\tau$, $R_d = R \frac{1}{T}$, and $T$ is the sample time. If $A$ is nonsingular, $B_d = A^{-1}(A_d - I)B$. The equation for the discretized measurement noise is a consequence of the continuous measurement noise being defined with a power spectral density. A clever trick to compute $A_d$ and $B_d$ in one step is by utilizing the following property: $\exp\left( \begin{bmatrix} A & B \\ 0 & 0 \end{bmatrix} T \right) = \begin{bmatrix} A_d & B_d \\ 0 & I \end{bmatrix}$, where $A_d$ and $B_d$ are the discretized state-space matrices. Discretization of process noise Numerical evaluation of $Q_d$ is a bit trickier due to the matrix exponential integral. It can, however, be computed by first constructing a matrix $G = \begin{bmatrix} -A & Q \\ 0 & A^\top \end{bmatrix} T$, and computing the exponential of it, $F = e^G = \begin{bmatrix} \dots & A_d^{-1} Q_d \\ 0 & A_d^\top \end{bmatrix}$. The discretized process noise is then evaluated by multiplying the transpose of the lower-right partition of $F$ with the upper-right partition of $F$: $Q_d = \left( A_d^\top \right)^\top \left( A_d^{-1} Q_d \right)$. Derivation Starting with the continuous model $\dot{x}(t) = A x(t) + B u(t)$, we know that the matrix exponential satisfies $\frac{d}{dt} e^{At} = A e^{At} = e^{At} A$, and by premultiplying the model by $e^{-At}$ we get $e^{-At} \dot{x}(t) = e^{-At} A x(t) + e^{-At} B u(t)$, which we recognize as $\frac{d}{dt} \left( e^{-At} x(t) \right) = e^{-At} B u(t)$, and by integrating, $x(t) = e^{At} x(0) + \int_0^t e^{A(t - \tau)} B u(\tau) \, d\tau$, which is an analytical solution to the continuous model. Now we want to discretise the above expression. We assume that $u$ is constant during each timestep: with $x[k] \equiv x(kT)$, $x[k+1] = e^{AT} \left( e^{AkT} x(0) + \int_0^{kT} e^{A(kT - \tau)} B u(\tau) \, d\tau \right) + \int_{kT}^{(k+1)T} e^{A(kT + T - \tau)} B u(\tau) \, d\tau$. We recognize the bracketed expression as $x[k]$, and the second term can be simplified by substituting with the function $v(\tau) = kT + T - \tau$. Note that $d\tau = -dv$. We also assume that $u$ is constant during the integral, which in turn yields $x[k+1] = e^{AT} x[k] + \left( \int_0^T e^{Av} \, dv \right) B u[k]$, which is an exact solution to the discretization problem. When $A$ is singular, the latter expression can still be used by replacing $e^{AT}$ by its Taylor expansion, $e^{AT} = I + AT + \frac{1}{2!}(AT)^2 + \cdots$. This yields $B_d = \left( \int_0^T e^{A\tau} \, d\tau \right) B = \left( TI + \frac{1}{2!} A T^2 + \frac{1}{3!} A^2 T^3 + \cdots \right) B$, which is the form used in practice. Approximations Exact discretization may sometimes be intractable due to the heavy matrix exponential and integral operations involved. It is much easier to calculate an approximate discrete model, based on $e^{AT} \approx I + AT$ for small timesteps $T$. 
Approximations

Exact discretization may sometimes be intractable due to the heavy matrix exponential and integral operations involved. It is much easier to calculate an approximate discrete model, based on the approximation $e^{\mathbf{A}T} \approx \mathbf{I} + \mathbf{A}T$ for small timesteps $T$. The approximate solution then becomes:

$$\mathbf{x}[k+1] \approx \left( \mathbf{I} + \mathbf{A}T \right) \mathbf{x}[k] + T\mathbf{B}\mathbf{u}[k].$$

This is known as the Euler method, also called the forward Euler method. Other possible approximations are $e^{\mathbf{A}T} \approx \left( \mathbf{I} - \mathbf{A}T \right)^{-1}$, otherwise known as the backward Euler method, and $e^{\mathbf{A}T} \approx \left( \mathbf{I} + \tfrac{1}{2}\mathbf{A}T \right) \left( \mathbf{I} - \tfrac{1}{2}\mathbf{A}T \right)^{-1}$, which is known as the bilinear transform, or Tustin transform. Each of these approximations has different stability properties. The bilinear transform maps the open left half of the complex plane onto the interior of the unit circle, and therefore preserves both the stability and the instability of the continuous-time system.

Discretization of continuous features

In statistics and machine learning, discretization refers to the process of converting continuous features or variables to discretized or nominal features. This can be useful when creating probability mass functions.

Discretization of smooth functions

In generalized functions theory, discretization arises as a particular case of the convolution theorem on tempered distributions

$$\mathcal{F}\{ f * \operatorname{III} \} = \mathcal{F}\{ f \} \cdot \operatorname{III}$$
$$\mathcal{F}\{ \alpha \cdot \operatorname{III} \} = \mathcal{F}\{ \alpha \} * \operatorname{III}$$

where $\operatorname{III}$ is the Dirac comb, $\cdot \operatorname{III}$ is discretization, $* \operatorname{III}$ is periodization, $f$ is a rapidly decreasing tempered distribution (e.g. a Dirac delta function or any other compactly supported function), $\alpha$ is a smooth, slowly growing ordinary function (e.g. the function that is constantly $1$ or any other band-limited function) and $\mathcal{F}$ is the (unitary, ordinary frequency) Fourier transform. Functions which are not smooth can be made smooth using a mollifier prior to discretization. As an example, discretization of the function that is constantly $1$ yields the sequence $[\ldots, 1, 1, 1, \ldots]$ which, interpreted as the coefficients of a linear combination of Dirac delta functions, forms a Dirac comb. If additionally truncation is applied, one obtains finite sequences, e.g. $[1, 1, 1, 1]$. They are discrete in both time and frequency.

See also

Discrete event simulation Discrete space Discrete time and continuous time Finite difference method Finite volume method for unsteady flow Interpolation Smoothing Stochastic simulation Time-scale calculus

References Further reading External links

Discretization in Geometry and Dynamics: research on the discretization of differential geometry and dynamics

Numerical analysis Applied mathematics Functional analysis Iterative methods Control theory
Discretization
Mathematics
1,062
65,970,498
https://en.wikipedia.org/wiki/Marine%20coastal%20ecosystem
A marine coastal ecosystem is a marine ecosystem which occurs where the land meets the ocean. Worldwide, coastlines stretch for hundreds of thousands of kilometres. Coastal habitats extend to the margins of the continental shelves, occupying about 7 percent of the ocean surface area. Marine coastal ecosystems include many very different types of marine habitats, each with their own characteristics and species composition. They are characterized by high levels of biodiversity and productivity. For example, estuaries are areas where freshwater rivers meet the saltwater of the ocean, creating an environment that is home to a wide variety of species, including fish, shellfish, and birds. Salt marshes are coastal wetlands which thrive on low-energy shorelines in temperate and high-latitude areas, populated with salt-tolerant plants such as cordgrass and marsh elder that provide important nursery areas for many species of fish and shellfish. Mangrove forests survive in the intertidal zones of tropical or subtropical coasts, populated by salt-tolerant trees that protect habitat for many marine species, including crabs, shrimp, and fish. Further examples are coral reefs and seagrass meadows, which are both found in warm, shallow coastal waters. Coral reefs thrive in nutrient-poor waters on high-energy shorelines that are agitated by waves. They are underwater ecosystems made up of colonies of tiny animals called coral polyps. These polyps secrete hard calcium carbonate skeletons that build up over time, creating complex and diverse underwater structures. These structures function as some of the most biodiverse ecosystems on the planet, providing habitat and food for a huge range of marine organisms. Seagrass meadows can be adjacent to coral reefs. These meadows are underwater grasslands populated by marine flowering plants that provide nursery habitats and food sources for many fish species, crabs and sea turtles, as well as dugongs. In slightly deeper waters are kelp forests, underwater ecosystems found in cold, nutrient-rich waters, primarily in temperate regions. These are dominated by large brown algae called kelp, a type of seaweed that grows several meters tall, creating dense and complex underwater forests. Kelp forests provide important habitats for many fish species, sea otters and sea urchins. Directly and indirectly, marine coastal ecosystems provide vast arrays of ecosystem services for humans, such as cycling nutrients and elements, and purifying water by filtering pollutants. They sequester carbon as a cushion against climate change. They protect coasts by reducing the impacts of storms, reducing coastal erosion and moderating extreme events. They provide essential nurseries and fishing grounds for commercial fisheries. They provide recreational services and support tourism. These ecosystems are vulnerable to various anthropogenic and natural disturbances, such as pollution, overfishing, and coastal development, which have significant impacts on their ecological functioning and the services they provide. Climate change is impacting coastal ecosystems with sea level rises, ocean acidification, and increased storm frequency and intensity. When marine coastal ecosystems are damaged or destroyed, there can be serious consequences for the marine species that depend on them, as well as for the overall health of the ocean ecosystem. Some conservation efforts are underway to protect and restore marine coastal ecosystems, such as establishing marine protected areas and developing sustainable fishing practices.
Overview

The Earth has hundreds of thousands of kilometres of coastline. Coastal habitats extend to the margins of the continental shelves, occupying about 7 percent by area of the Earth's oceans. These coastal seas are highly productive systems, providing an array of ecosystem services to humankind, such as processing of nutrient effluents from land and climate regulation. However, coastal ecosystems are threatened by human-induced pressures such as climate change and eutrophication. In the coastal zone, the fluxes and transformations of nutrients and carbon sustaining coastal ecosystem functions and services are strongly regulated by benthic (that is, occurring at the seafloor) biological and chemical processes. Coastal systems also contribute to the regulation of climate and nutrient cycles, by efficiently processing anthropogenic emissions from land before they reach the ocean. The high value of these ecosystem services is obvious considering that a large proportion of the world population lives close to the coast. Currently, coastal seas around the world are undergoing major ecological changes driven by human-induced pressures, such as climate change, anthropogenic nutrient inputs, overfishing and the spread of invasive species. In many cases, the changes alter underlying ecological functions to such an extent that new states are achieved and baselines are shifted. In 2015, the United Nations established 17 Sustainable Development Goals with the aim of achieving certain targets by 2030. Their mission statement for their 14th goal, Life below water, is to "conserve and sustainably use the oceans, seas and marine resources for sustainable development". The United Nations has also declared 2021–2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems is not receiving appropriate attention.

Coastal habitats

Intertidal zone

Intertidal zones are the areas that are visible and exposed to air during low tide and covered up by saltwater during high tide. There are four physical divisions of the intertidal zone, each with its own distinct characteristics and wildlife: the spray zone, the high intertidal zone, the middle intertidal zone, and the low intertidal zone. The spray zone is a damp area that is usually only reached by ocean spray and is submerged only under high tides or storms. The high intertidal zone is submerged at high tide but remains dry for long periods between high tides. Due to the large variance of conditions possible in this region, it is inhabited by resilient wildlife that can withstand these changes, such as barnacles, marine snails, mussels and hermit crabs. Tides flow over the middle intertidal zone two times a day, and this zone has a larger variety of wildlife. The low intertidal zone is submerged nearly all the time except during the lowest tides, and life is more abundant here due to the protection that the water gives.

Estuaries

Estuaries occur where there is a noticeable change in salinity between saltwater and freshwater sources. This is typically found where rivers meet the ocean or sea. The wildlife found within estuaries is unique as the water in these areas is brackish, a mix of freshwater flowing to the ocean and salty seawater. Other types of estuaries also exist and have similar characteristics as traditional brackish estuaries. The Great Lakes are a prime example. There, river water mixes with lake water and creates freshwater estuaries.
Estuaries are extremely productive ecosystems that many human and animal species rely on for various activities. This can be seen in the fact that, of the 32 largest cities in the world, 22 are located on estuaries, as estuaries provide many environmental and economic benefits, such as crucial habitat for many species and serving as economic hubs for many coastal communities. Estuaries also provide essential ecosystem services such as water filtration, habitat protection, erosion control, gas regulation and nutrient cycling, as well as education, recreation and tourism opportunities.

Lagoons

Lagoons are areas that are separated from larger water by natural barriers such as coral reefs or sandbars. There are two types of lagoons, coastal and oceanic/atoll lagoons. A coastal lagoon is, as in the definition above, simply a body of water that is separated from the ocean by a barrier. An atoll lagoon is a circular coral reef or several coral islands that surround a lagoon. Atoll lagoons are often much deeper than coastal lagoons. Most lagoons are very shallow, meaning that they are greatly affected by changes in precipitation, evaporation and wind. This means that salinity and temperature vary widely in lagoons, and that they can have water that ranges from fresh to hypersaline. Lagoons can be found on coasts all over the world, on every continent except Antarctica, and are an extremely diverse habitat, home to a wide array of species including birds, fish, crabs, plankton and more. Lagoons are also important to the economy as they provide a wide array of ecosystem services in addition to being the home of so many different species. Some of these services include fisheries, nutrient cycling, flood protection, water filtration, and even human tradition.

Reefs

Coral reefs

Coral reefs are one of the most well-known marine ecosystems in the world, with the largest being the Great Barrier Reef. These reefs are composed of large coral colonies of a variety of species living together. The corals form multiple symbiotic relationships with the organisms around them. Coral reefs are being heavily affected by global warming and are one of the most vulnerable marine ecosystems. Marine heatwaves with high warming levels put coral reefs at risk of great decline, loss of their important structures, and exposure to a higher frequency of marine heatwaves.

Bivalve reefs

Bivalve reefs provide coastal protection through erosion control and shoreline stabilization, and modify the physical landscape by ecosystem engineering, thereby providing habitat for species by facilitative interactions with other habitats such as tidal flat benthic communities, seagrasses and marshes.

Vegetated

Vegetated coastal ecosystems occur throughout the world, as illustrated in the diagram on the right. Seagrass beds are found from cold polar waters to the tropics. Mangrove forests are confined to tropical and sub-tropical areas, while tidal marshes are found in all regions, but most commonly in temperate areas. Combined, these ecosystems cover about 50 million hectares and provide a diverse array of ecosystem services such as fishery production, coastline protection, pollution buffering, as well as high rates of carbon sequestration. Rapid loss of vegetated coastal ecosystems through land-use change has occurred for centuries, and has accelerated in recent decades.
Causes of habitat conversion vary globally and include conversion to aquaculture, agriculture, forest over-exploitation, industrial use, upstream dams, dredging, eutrophication of overlying waters, urban development, and conversion to open water due to accelerated sea-level rise and subsidence. Vegetated coastal ecosystems typically reside over organic-rich sediments that may be several meters deep and effectively lock up carbon due to low-oxygen conditions and other factors that inhibit decomposition at depth. These carbon stocks can exceed those of terrestrial ecosystems, including forests, by several times. When coastal habitats are degraded or converted to other land uses, the sediment carbon is destabilised or exposed to oxygen, and subsequent increased microbial activity releases large amounts of greenhouse gases to the atmosphere or water column. The potential economic impacts that come from releasing stored coastal blue carbon to the atmosphere are felt worldwide. Economic impacts of greenhouse gas emissions in general stem from associated increases in droughts, sea level, and frequency of extreme weather events.

Coastal wetlands

Coastal wetlands are among the most productive ecosystems on Earth and generate vital services that benefit human societies around the world. Sediment stabilization by wetlands such as salt marshes and mangroves serves to protect coastal communities from storm waves, flooding, and land erosion. Coastal wetlands also reduce pollution from human waste, remove excess nutrients from the water column, trap pollutants, and sequester carbon. Further, near-shore wetlands act as both essential nursery habitats and feeding grounds for game fish, supporting a diverse group of economically important species.

Mangrove forests

Mangroves are trees or shrubs that grow in low-oxygen soil near coastlines in tropical or subtropical latitudes. They are an extremely productive and complex ecosystem that connects the land and sea. Mangroves consist of species that are not necessarily related to each other and are often grouped for the characteristics they share rather than genetic similarity. Because of their proximity to the coast, they have all developed adaptations such as salt excretion and root aeration to live in salty, oxygen-depleted water. Mangroves can often be recognized by their dense tangle of roots that act to protect the coast by reducing erosion from storm surges, currents, waves, and tides. The mangrove ecosystem is also an important source of food for many species, as well as excellent at sequestering carbon dioxide from the atmosphere, with global mangrove carbon sequestration estimated at 34 million metric tons per year.

Salt marshes

Salt marshes are a transition from the ocean to the land, where fresh and saltwater mix. The soil in these marshes is often made up of mud and a layer of organic material called peat. Peat is characterized as waterlogged and root-filled decomposing plant matter that often causes low oxygen levels (hypoxia). These hypoxic conditions cause the growth of the bacteria that give salt marshes the sulfurous smell they are often known for. Salt marshes exist around the world and are needed for healthy ecosystems and a healthy economy. They are extremely productive ecosystems, providing essential services for more than 75 percent of fishery species and protecting shorelines from erosion and flooding. Salt marshes can be generally divided into the high marsh, the low marsh, and the upland border.
The low marsh is closer to the ocean and is flooded at nearly every tide except low tide. The high marsh is located between the low marsh and the upland border, and is usually flooded only when higher than usual tides are present. The upland border is the freshwater edge of the marsh and is usually located at elevations slightly higher than the high marsh. This region is usually only flooded under extreme weather conditions and experiences much less waterlogged conditions and salt stress than other areas of the marsh.

Seagrass meadows

Seagrasses form dense underwater meadows which are among the most productive ecosystems in the world. They provide habitats and food for a diversity of marine life comparable to coral reefs. This includes invertebrates like shrimp and crabs, fish such as cod and flatfish, marine mammals and birds. They provide refuges for endangered species such as seahorses, turtles, and dugongs. They function as nursery habitats for shrimps, scallops and many commercial fish species. Seagrass meadows provide coastal storm protection by the way their leaves absorb energy from waves as they hit the coast. They keep coastal waters healthy by absorbing bacteria and nutrients, and slow the speed of climate change by sequestering carbon dioxide into the sediment of the ocean floor. Seagrasses evolved from marine algae which colonized land and became land plants, and then returned to the ocean about 100 million years ago. However, today seagrass meadows are being damaged by human activities such as pollution from land runoff, fishing boats that drag dredges or trawls across the meadows uprooting the grass, and overfishing which unbalances the ecosystem. Seagrass meadows are currently being destroyed at a rate of about two football fields every hour.

Kelp forests

Kelp forests occur worldwide throughout temperate and polar coastal oceans. In 2007, kelp forests were also discovered in tropical waters near Ecuador. Physically formed by brown macroalgae, kelp forests provide a unique habitat for marine organisms and are a source for understanding many ecological processes. Over the last century, they have been the focus of extensive research, particularly in trophic ecology, and continue to provoke important ideas that are relevant beyond this unique ecosystem. For example, kelp forests can influence coastal oceanographic patterns and provide many ecosystem services. However, the influence of humans has often contributed to kelp forest degradation. Of particular concern are the effects of overfishing nearshore ecosystems, which can release herbivores from their normal population regulation and result in the overgrazing of kelp and other algae. This can rapidly result in transitions to barren landscapes where relatively few species persist. Already, due to the combined effects of overfishing and climate change, kelp forests have all but disappeared in many especially vulnerable places, such as Tasmania's east coast and the coast of Northern California. The implementation of marine protected areas is one management strategy useful for addressing such issues, since it may limit the impacts of fishing and buffer the ecosystem from additive effects of other environmental stressors.

Coastal ecology

Coastal food webs

Coastal waters include the waters in estuaries and over continental shelves. They occupy about 8 percent of the total ocean area and account for about half of all the ocean productivity. The key nutrients determining eutrophication are nitrogen in coastal waters and phosphorus in lakes.
Both are found in high concentrations in guano (seabird feces), which acts as a fertilizer for the surrounding ocean or an adjacent lake. Uric acid is the dominant nitrogen compound, and during its mineralization different nitrogen forms are produced. Ecosystems, even those with seemingly distinct borders, rarely function independently of other adjacent systems. Ecologists are increasingly recognizing the important effects that cross-ecosystem transport of energy and nutrients have on plant and animal populations and communities. A well known example of this is how seabirds concentrate marine-derived nutrients on breeding islands in the form of feces (guano), which contains ~15–20% nitrogen (N), as well as 10% phosphorus. These nutrients dramatically alter terrestrial ecosystem functioning and dynamics and can support increased primary and secondary productivity. However, although many studies have demonstrated nitrogen enrichment of terrestrial components due to guano deposition across various taxonomic groups, only a few have studied its feedback on marine ecosystems, and most of these studies were restricted to temperate regions and high nutrient waters. In the tropics, coral reefs can be found adjacent to islands with large populations of breeding seabirds, and could be potentially affected by local nutrient enrichment due to the transport of seabird-derived nutrients in surrounding waters. Studies on the influence of guano on tropical marine ecosystems suggest nitrogen from guano enriches seawater and reef primary producers. Reef building corals have essential nitrogen needs and, thriving in nutrient-poor tropical waters where nitrogen is a major limiting nutrient for primary productivity, they have developed specific adaptations for conserving this element. Their establishment and maintenance are partly due to their symbiosis with unicellular dinoflagellates, Symbiodinium spp. (zooxanthellae), that can take up and retain dissolved inorganic nitrogen (ammonium and nitrate) from the surrounding waters. These zooxanthellae can also recycle the animal wastes and subsequently transfer them back to the coral host as amino acids, ammonium or urea. Corals are also able to ingest nitrogen-rich sediment particles and plankton. Coastal eutrophication and excess nutrient supply can have strong impacts on corals, leading to a decrease in skeletal growth.

Coastal predators

Food web theory predicts that current global declines in marine predators could generate unwanted consequences for many marine ecosystems. In coastal plant communities, such as kelp, seagrass meadows, mangrove forests and salt marshes, several studies have documented the far-reaching effects of changing predator populations. Across coastal ecosystems, the loss of marine predators appears to negatively affect coastal plant communities and the ecosystem services they provide. The green world hypothesis predicts that loss of predator control on herbivores could result in runaway consumption that would eventually denude a landscape or seascape of vegetation. Since the inception of the green world hypothesis, ecologists have tried to understand the prevalence of indirect and alternating effects of predators on lower trophic levels (trophic cascades), and their overall impact on ecosystems. Multiple lines of evidence now suggest that top predators are key to the persistence of some ecosystems. With an estimated habitat loss greater than 50 percent, coastal plant communities are among the world's most endangered ecosystems.
As bleak as this number is, the predators that patrol coastal systems have fared far worse. Several predatory taxa, including species of marine mammals, elasmobranchs, and seabirds, have declined by 90 to 100 percent compared to historical populations. Predator declines pre-date habitat declines, suggesting alterations to predator populations may be a major driver of change for coastal systems. There is little doubt that collapsing marine predator populations results from overharvesting by humans. Localized declines and extinctions of coastal predators by humans began over 40,000 years ago with subsistence harvesting. However, for most large-bodied marine predators (toothed whales, large pelagic fish, sea birds, pinnipeds, and otters) the beginning of their sharp global declines occurred over the last century, coinciding with the expansion of coastal human populations and advances in industrial fishing. Following global declines in marine predators, evidence of trophic cascades in coastal ecosystems started to emerge, with the disturbing realisation that they affected more than just populations of lower trophic levels. Understanding the importance of predators in coastal plant communities has been bolstered by their documented ability to influence ecosystem services. Multiple examples have shown that changes to the strength or direction of predator effects on lower trophic levels can influence coastal erosion, carbon sequestration, and ecosystem resilience. The idea that the extirpation of predators can have far-reaching effects on the persistence of coastal plants and their ecosystem services has become a major motivation for their conservation in coastal systems.

Seascape ecology

Seascape ecology is the marine and coastal version of landscape ecology. It is currently emerging as an interdisciplinary and spatially explicit ecological science with relevance to marine management, biodiversity conservation, and restoration. Seascapes are complex ocean spaces, shaped by dynamic and interconnected patterns and processes operating across a range of spatial and temporal scales. Rapid advances in geospatial technologies and the proliferation of sensors, both above and below the ocean surface, have revealed intricate and scientifically intriguing ecological patterns and processes, some of which are the result of human activities. Despite progress in the collecting, mapping, and sharing of ocean data, the gap between technological advances and the ability to generate ecological insights for marine management and conservation practice remains substantial. For instance, fundamental gaps exist in the understanding of multidimensional spatial structure in the sea, and the implications for planetary health and human wellbeing. Deeper understanding of the multi-scale linkages between ecological structure, function, and change will better support the design of whole-system strategies for biodiversity preservation and reduce uncertainty around the consequences of human activity. For example, in the design and evaluation of marine protected areas (MPAs) and habitat restoration, it is important to understand the influence of spatial context, configuration, and connectivity, and to consider effects of scale.

Interactions between ecosystems

The diagram on the right shows the principal interactions between mangroves, seagrass, and coral reefs.
Coral reefs, seagrasses, and mangroves buffer habitats further inland from storms and wave damage, as well as participate in a tri-system exchange of mobile fish and invertebrates. Mangroves and seagrasses are critical in regulating sediment, freshwater, and nutrient flows to coral reefs. The diagram immediately below shows locations where mangroves, coral reefs, and seagrass beds exist within one km of each other. Buffered intersection between the three systems provides relative co-occurrence rates on a global scale. Regions where the systems strongly intersect include Central America (Belize), the Caribbean, the Red Sea, the Coral Triangle (particularly Malaysia), Madagascar, and the Great Barrier Reef. The diagram at the right graphically illustrates the ecosystem service synergies between mangroves, seagrasses, and coral reefs. The ecosystem services provided by intact reefs, seagrasses, and mangroves are both highly valuable and mutually enhance each other. Coastal protection (storm/wave attenuation) maintains the structure of adjacent ecosystems, and associated ecosystem services, in an offshore-to-onshore direction. Fisheries are characterized by migratory species, and therefore protecting fisheries in one ecosystem increases fish biomass in others. Tourism benefits from coastal protection and healthy fisheries from multiple ecosystems. In that diagram, within-ecosystem connections are not drawn, in order to better emphasise the synergies between systems.

Network ecology

To compound things, removal of biomass from the ocean occurs simultaneously with multiple other stressors associated with climate change that compromise the capacity of these socio-ecological systems to respond to perturbations. Besides sea surface temperature, climate change also affects many other physical–chemical characteristics of marine coastal waters (stratification, acidification, ventilation) as well as the wind regimes that control surface water productivity along the productive coastal upwelling ecosystems. Changes in the productivity of the oceans are reflected in changes of plankton biomass. Plankton contributes approximately half of the global primary production, supports marine food webs, influences biogeochemical processes in the ocean, and strongly affects commercial fisheries. Indeed, an overall decrease in marine plankton productivity is expected over global scales. Long-term increases and decreases in plankton productivity have already occurred over the past two decades along extensive regions of the Humboldt upwelling ecosystem off Chile, and are expected to propagate up the pelagic and benthic food webs. Network ecology has advanced understanding of ecosystems by providing a powerful framework to analyse biological communities. Previous studies used this framework to assess food web robustness against species extinctions, defined as the fraction of initial species that remain present in the ecosystem after a primary extinction. These studies showed the importance for food web persistence of highly connected species (independent of trophic position), basal species, and highly connected species that, at the same time, trophically support other highly connected species. Most of these studies used a static approach, which stems from network theory and analyzes the impacts of structural changes on food webs represented by nodes (species) and links (interactions) that connect nodes, but ignores interaction strengths and population dynamics of interacting species.
Other studies used a dynamic approach, which considers not only the structure and intensity of interactions in a food web, but also the changes in species biomasses through time and the indirect effects that these changes have on other species.

Coastal biogeochemistry

Globally, eutrophication is one of the major environmental problems in coastal ecosystems. Over the last century the annual riverine inputs of nitrogen and phosphorus to the oceans have increased from 19 to 37 megatonnes of nitrogen and from 2 to 4 megatonnes of phosphorus. Regionally, these increases were even more substantial, as observed in the United States, Europe and China. In the Baltic Sea, nitrogen and phosphorus loads increased by roughly a factor of three and six, respectively. The riverine nitrogen flux to the coastal waters of China has increased by an order of magnitude within thirty years, while phosphorus export tripled between 1970 and 2000. Efforts to mitigate eutrophication through nutrient load reductions are hampered by the effects of climate change. Changes in precipitation increase the runoff of N, P and carbon (C) from land, which together with warming and increased dissolution alter the coupled marine nutrient and carbon cycles. In contrast to the open ocean, where biogeochemical cycling is largely dominated by pelagic processes driven primarily by ocean circulation, in the coastal zone pelagic and benthic processes interact strongly and are driven by a complex and dynamic physical environment. Eutrophication in coastal areas leads to shifts toward rapidly growing opportunistic algae, and generally to a decline in benthic macrovegetation because of decreased light penetration, substrate change and more reducing sediments. Increased production and warming waters have caused expanding hypoxia at the seafloor with a consequent loss of benthic fauna. Hypoxic systems tend to lose many long-lived higher organisms, and biogeochemical cycles typically become dominated by benthic bacterial processes and rapid pelagic turnover. However, if hypoxia does not occur, benthic fauna tends to increase in biomass with eutrophication. Changes in benthic biota have far-reaching impacts on biogeochemical cycles in the coastal zone and beyond. In the illuminated zone, benthic microphytes and macrophytes mediate biogeochemical fluxes through primary production, nutrient storage and sediment stabilization, and act as a habitat and food source for a variety of animals, as shown in the diagram on the left above. Benthic animals contribute to biogeochemical transformations and fluxes between water and sediments both directly through their metabolism and indirectly by physically reworking the sediments and their porewaters and stimulating bacterial processes. Grazing on pelagic organic matter and biodeposition of feces and pseudofeces by suspension-feeding fauna increase organic matter sedimentation rates. In addition, nutrients and carbon are retained in biomass and transformed from organic to inorganic forms through metabolic processes. Bioturbation, including sediment reworking and burrow ventilation activities (bioirrigation), redistributes particles and solutes within the sediment and enhances sediment-water fluxes of solutes. Bioturbation can also enhance resuspension of particles, a phenomenon termed "bioresuspension". Together, all these processes affect physical and chemical conditions at the sediment-water interface, and strongly influence organic matter degradation.
When up-scaled to the ecosystem level, such modified conditions can significantly alter the functioning of coastal ecosystems and ultimately the role of the coastal zone in filtering and transforming nutrients and carbon.

Artisan fisheries

Artisanal fisheries use simple fishing gears and small vessels. Their activities tend to be confined to coastal areas. In general, top-down and bottom-up forces determine ecosystem functioning and dynamics. Fisheries as a top-down force can shorten and destabilise food webs, while effects driven by climate change can alter the bottom-up forces of primary productivity. Direct human impacts and the full suite of drivers of global change are the main cause of species extinctions in Anthropocene ecosystems, with detrimental consequences for ecosystem functioning and their services to human societies. The world fisheries crisis is among those consequences; it cuts across fishing strategies, oceanic regions and species, and includes countries that have little regulation and those that have implemented rights-based co-management strategies to reduce overharvesting. Chile has been one of the countries implementing Territorial Use Rights (TURFs) over an unprecedented geographic scale to manage the diverse coastal benthic resources using a co-management strategy. These TURFs are used for artisanal fisheries. Over 60 coastal benthic species are actively harvested by these artisanal fisheries, with species that are extracted from intertidal and shallow subtidal habitats. The Chilean TURFs system brought significant improvements in the sustainability of this complex socio-ecological system, helping to rebuild benthic fish stocks, improving fishers' perceptions of sustainability and increasing compliance, as well as showing positive ancillary effects on the conservation of biodiversity. However, the situation of most artisanal fisheries is still far from sustainable, and many fish stocks and coastal ecosystems show signs of overexploitation and ecosystem degradation, a consequence of the low levels of cooperation and low enforcement of TURF regulations, which lead to high levels of free-riding and illegal fishing. It is imperative to improve understanding of the effects of these multi-species artisanal fisheries, which simultaneously harvest species at all trophic levels, from kelp primary producers to top carnivores.

Remote sensing

Coastal zones are among the most populated areas on the planet. As the population continues to increase, economic development must expand to support human welfare. However, this development may damage the ability of the coastal environment to continue supporting human welfare for current and future generations. The management of complex coastal and marine social-ecological systems requires tools that provide frameworks with the capability of responding to current and emergent issues. Remote data collection technologies include satellite-based remote sensing, aerial remote sensing, unmanned aerial vehicles, unmanned surface vehicles, unmanned underwater vehicles, and static sensors. Frameworks have been developed that attempt to address and integrate these complex issues, such as the Millennium Ecosystem Assessment framework, which links drivers, ecosystem services, and human welfare. However, obtaining the environmental data that is necessary to use such frameworks is difficult, especially in countries where access to reliable data and their dissemination are limited or non-existent, and even thwarted.
Traditional techniques of point sampling and observation in the environment do deliver high information content, but they are expensive and often do not provide adequate spatial and temporal coverage, while remote sensing can provide cost-effective solutions, as well as data for locations where there is no or only limited information. Coastal observing systems are typically nationally funded and built around national priorities. As a result, there are presently significant differences between countries in terms of sustainability, observing capacity and technologies, as well as methods and research priorities. Ocean observing systems in coastal areas need to move toward integrated, multidisciplinary and multiscale systems, where heterogeneity can be exploited to deliver fit-for-purpose answers. Essential elements of such distributed observation systems are the use of machine-to-machine communication, data fusion and processing, applying recent technological developments for the Internet of Things (IoT) toward a common cyberinfrastructure. It has been argued that the standardisation that IoT brings to wireless sensing will revolutionise areas like this. Coastal areas are the most dynamic and productive parts of the oceans, which makes them a significant source of human resources and services. Coastal waters are located immediately in contact with human populations and exposed to anthropogenic disturbances, placing these resources and services under threat. These concerns explain why, in several coastal regions, a rapidly increasing number of observing systems have been implemented in the last decade. Expansion of coherent and sustained coastal observations has been fragmented and driven by national and regional policies, and is often undertaken through short-term research projects. This results in significant differences between countries both in terms of sustainability and observing technologies, methods and research priorities. Unlike the open ocean, where challenges are rather well-defined and stakeholders are fewer and well-identified, coastal processes are complex, acting on several spatial and temporal scales, with numerous and diversified users and stakeholders, often with conflicting interests. To adapt to such complexity, a coastal ocean observing system must be an integrated, multidisciplinary and multiscale system of systems.

Regime shifts

Marine ecosystems worldwide are affected by increasing natural and anthropogenic pressures and consequently undergo significant changes at unprecedented rates; such changes can be interpreted as regime shifts. Affected by these changes, ecosystems can reorganise and still maintain the same function, structure, and identity. However, under some circumstances, an ecosystem may undergo changes that modify the system's structure and function, and this process can be described as a shift to a new regime. Usually, a regime shift is triggered by large-scale climate-induced variations, intense fishing exploitation, or both. Criteria used to define regime shifts vary, and the changes that have to occur in order to consider that a system has undergone a regime shift are not well-defined. Normally, regime shifts are defined as high amplitude, low-frequency and often abrupt changes in species abundance and community composition that are observed at multiple trophic levels (TLs).
These changes are expected to occur on a large spatial scale and take place concurrently with physical changes in the climate system. Regime shifts have been described in several marine ecosystems including the Northern Benguela, the North Sea, and the Baltic Sea. In large upwelling ecosystems, it is common to observe decadal fluctuations in species abundance and their replacements. These fluctuations might be irreversible and might be an indicator of a new regime, as was the case in the Northern Benguela ecosystem. However, changes in upwelling systems might also be interpreted as fluctuations within the limits of natural variability for an ecosystem, and not as an indicator of a regime shift. The Portuguese continental shelf ecosystem (PCSE) constitutes the northernmost part of the Canary Current Upwelling System and is characterised by seasonal upwelling that occurs during the spring and summer as a result of steady northerly winds. It has recently seen changes in the abundance of coastal pelagic species such as sardine, chub mackerel, horse mackerel, blue jack mackerel and anchovy. Moreover, in the last decades, an increase in higher trophic level species has been documented. The causes underlying changes in the pelagic community are not clear, but it has been suggested that they result from a complex interplay between environmental variability, species interactions and fishing pressure. There is evidence that changes in the intensity of the Iberian coastal upwelling (resulting from the strengthening or weakening of northerly winds) have occurred in the last decades. However, the evidence for these changes is contradictory: some authors observed an intensification of upwelling-favourable winds while others documented their weakening. A 2019 review of upwelling rate and intensity along the Portuguese coast documented a successive weakening of the upwelling since 1950 that lasted until the mid-to-late 1970s in the north-west and south-west, and until 1994 on the south coast. An increase in the upwelling index over the period 1985–2009 was documented in all studied regions, and additional upwelling intensification was observed in the south. A continuous increase in water temperature, ranging from 0.1 to 0.2 °C per decade, has also been documented.

Threats and decline

Many marine fauna utilise coastal habitats as critical nursery areas, for shelter and feeding, yet these habitats are increasingly at risk from agriculture, aquaculture, industry and urban expansion. Indeed, these systems are subject to what may be called "a triple whammy" of increasing industrialisation and urbanisation, an increased loss of biological and physical resources (fish, water, energy, space), and a decreased resilience to the consequences of a warming climate and sea level rise. This has given rise to the complete loss, modification or disconnection of natural coastal ecosystems globally. For example, almost 10% of the entire Great Barrier Reef coastline in Australia (2,300 km) has been replaced with urban infrastructure (e.g., rock seawalls, jetties, marinas), causing massive loss and fragmentation of sensitive coastal ecosystems. Global loss of seagrass reached around 7% of seagrass area per year by the end of the twentieth century. A global analysis of tidal wetlands (mangroves, tidal flats, and tidal marshes) published in 2022 estimated global tidal wetland losses from 1999 to 2019; however, this study also estimated that these losses were largely offset by the establishment of new tidal wetlands that were not present in 1999.
Approximately three-quarters of the net decrease between 1999 and 2019 occurred in Asia (74.1%), with 68.6% concentrated in three countries: Indonesia (36%), China (20.6%), and Myanmar (12%). Of these global tidal wetland losses and gains, 39% of losses and 14% of gains were attributed to direct human activities. Approximately 40% of global mangrove cover has been lost since the 1950s, with more than 9,736 km2 of the world's mangroves degraded over the 20-year period between 1996 and 2016. Saltmarshes are drained when coastal land is claimed for agriculture, and deforestation is an increasing threat to shoreline vegetation (such as mangroves) when coastal land is appropriated for urban and industrial development, both of which may result in the degradation of blue carbon storages and increasing greenhouse gas emissions. These accumulating pressures and impacts on coastal ecosystems are neither isolated nor independent; rather, they are synergistic, with feedbacks and interactions that cause individual effects to be greater than their sums. In the year before the Decade on Ecosystem Restoration commences, there is a critical knowledge deficit inhibiting an appreciation of the complexity of coastal ecosystems that hampers the development of responses to mitigate continuing impacts, not to mention uncertainty on projected losses of coastal systems for some of the worst-case future climate change scenarios.

Restoration

The United Nations has declared 2021–2030 the UN Decade on Ecosystem Restoration. This call to action has the purpose of recognising the need to massively accelerate global restoration of degraded ecosystems, to fight the climate heating crisis, enhance food security, provide clean water and protect biodiversity on the planet. The scale of restoration will be key. For example, the Bonn Challenge has the goal of restoring 350 million hectares (an area about the size of India) of degraded terrestrial ecosystems by 2030. However, international support for restoration of blue coastal ecosystems, which provide an impressive array of benefits to people, has lagged. The diagram on the right shows the current state of modified and impacted coastal ecosystems and the expected state following the decade of restoration. Also shown is the uncertainty in the success of past restoration efforts, the current state of altered systems, climate variability, and restoration actions that are available now or on the horizon. This could mean that delivering the Decade on Ecosystem Restoration for coastal systems needs to be viewed as a means of getting things going where the benefits might take longer than a decade. Only the Global Mangrove Alliance comes close to the Bonn Challenge, with the aim of increasing the global area of mangroves by 20% by 2030. However, mangrove scientists have reservations about this target, voicing concerns that it is unrealistic and may prompt inappropriate practices in attempting to reach it.

Conservation and connectivity

There has recently been a perceptual shift away from habitat representation as the sole or primary focus of conservation prioritisation, towards consideration of the ecological processes that shape the distribution and abundance of biodiversity features. In marine ecosystems, connectivity processes are paramount, and designing systems of marine protected areas that maintain connectivity between habitat patches has long been considered an objective of conservation planning.
Two forms of connectivity are critical to structuring coral reef fish populations: dispersal of larvae in the pelagic environment, and post-settlement migration by individuals across the seascape. Whilst a growing literature has described approaches for considering larval connectivity in conservation prioritisation, relatively less attention has been directed towards developing and applying methods for considering post-settlement connectivity. Seascape connectivity (connectedness among different habitats in a seascape, cf. connectedness among patches of the same habitat type) is essential for species that utilise more than one habitat, either during diurnal movements or at different stages in their life history. Mangroves, seagrass beds, and lagoon reefs provide nursery areas for many commercially and ecologically important fish species that subsequently make ontogenetic shifts to adult populations on coral reefs. These back-reef habitats are often overlooked for conservation or management in favour of coral reefs that support greater adult biomass, yet they can be equally if not more at risk from habitat degradation and loss. Even where juveniles are not targeted by fishers, they can be vulnerable to habitat degradation, for example from sedimentation caused by poor land-use practices. There is clear empirical evidence that proximity to nursery habitats can enhance the effectiveness (i.e. increasing the abundance, density, or biomass of fish species) of marine protected areas on coral reefs. For example, at study sites across the western Pacific, the abundance of harvested fish species was significantly greater on protected reefs close to mangroves, but not on protected reefs isolated from mangroves. The functional role of herbivorous fish species that perform ontogenetic migrations may also enhance the resilience of coral reefs close to mangroves. Despite this evidence, and widespread calls to account for connectivity among habitats in the design of spatial management, there remain few examples where seascape connectivity is explicitly considered in spatial conservation prioritisation (the analytical process of identifying priority areas for conservation or management actions).

See also

Blue carbon Coast Coastal biogeomorphology Shallow water marine environment Tides in marginal seas

References Further reading

Ecosystems Biological oceanography
Marine coastal ecosystem
Biology
8,784
4,415,145
https://en.wikipedia.org/wiki/Wold%27s%20decomposition
In mathematics, particularly in operator theory, Wold decomposition or Wold–von Neumann decomposition, named after Herman Wold and John von Neumann, is a classification theorem for isometric linear operators on a given Hilbert space. It states that every isometry is a direct sum of copies of the unilateral shift and a unitary operator. In time series analysis, the theorem implies that every stationary discrete-time stochastic process can be decomposed into a pair of uncorrelated processes, one deterministic, and the other being a moving average process.

Details

Let H be a Hilbert space, L(H) be the bounded operators on H, and V ∈ L(H) be an isometry. The Wold decomposition states that every isometry V takes the form

$$V = \left( \bigoplus_{\alpha \in A} S \right) \oplus U$$

for some index set A, where S is the unilateral shift on a Hilbert space $H_\alpha$, and U is a unitary operator (possibly vacuous). The family $\{H_\alpha\}$ consists of isomorphic Hilbert spaces.

A proof can be sketched as follows. Successive applications of V give a descending sequence of copies of H isomorphically embedded in itself:

$$H \supseteq V(H) \supseteq V^2(H) \supseteq \cdots,$$

where V(H) denotes the range of V. Define $H_i = V^i(H)$. If one defines

$$M_i = H_i \ominus H_{i+1}, \qquad i \geq 0,$$

then

$$H = \left( \bigoplus_{i \geq 0} M_i \right) \oplus \left( \bigcap_{i \geq 0} H_i \right) = K_1 \oplus K_2.$$

It is clear that $K_1$ and $K_2$ are invariant subspaces of V. So $V(K_2) = K_2$. In other words, V restricted to $K_2$ is a surjective isometry, i.e., a unitary operator U. Furthermore, each $M_i$ is isomorphic to another, with V being an isomorphism between $M_i$ and $M_{i+1}$: V "shifts" $M_i$ to $M_{i+1}$. Suppose the dimension of each $M_i$ is some cardinal number α. We see that $K_1$ can be written as a direct sum of Hilbert spaces

$$K_1 = \bigoplus_{\alpha \in A} H_\alpha,$$

where each $H_\alpha$ is an invariant subspace of V and V restricted to each $H_\alpha$ is the unilateral shift S. Therefore

$$V = V|_{K_1} \oplus V|_{K_2} = \left( \bigoplus_{\alpha \in A} S \right) \oplus U,$$

which is a Wold decomposition of V.

Remarks

It is immediate from the Wold decomposition that the spectrum of any proper, i.e. non-unitary, isometry is the unit disk in the complex plane. An isometry V is said to be pure if, in the notation of the above proof, $\bigcap_{i \geq 0} H_i = \{0\}$. The multiplicity of a pure isometry V is the dimension of the kernel of V*, i.e. the cardinality of the index set A in the Wold decomposition of V. In other words, a pure isometry of multiplicity N takes the form

$$V = \bigoplus_{1}^{N} S.$$

In this terminology, the Wold decomposition expresses an isometry as a direct sum of a pure isometry and a unitary operator. A subspace M is called a wandering subspace of V if $V^n(M) \perp V^m(M)$ for all n ≠ m. In particular, each $M_i$ defined above is a wandering subspace of V.

A sequence of isometries

The decomposition above can be generalized slightly to a sequence of isometries, indexed by the integers.
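As a concrete illustration of the subspaces appearing in the proof, consider the following example, added here for exposition (the operator is chosen for simplicity and does not come from the original text). Let $H = \ell^2(\mathbb{N}) \oplus \ell^2(\mathbb{Z})$ and let $V = S \oplus W$, where $S$ is the unilateral shift on $\ell^2(\mathbb{N})$ and $W$ is the bilateral shift on $\ell^2(\mathbb{Z})$. Then

$$V^i(H) = S^i\left(\ell^2(\mathbb{N})\right) \oplus \ell^2(\mathbb{Z}), \qquad K_2 = \bigcap_{i \geq 0} V^i(H) = \{0\} \oplus \ell^2(\mathbb{Z}),$$

so the unitary summand recovered by the decomposition is exactly $W$, while

$$M_0 = H \ominus V(H) = \operatorname{span}\{e_0\} \oplus \{0\}$$

is one-dimensional. The pure part is therefore a single copy of the unilateral shift, and $V$ has multiplicity $1$; indeed $\ker V^* = M_0$.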
The C*-algebra generated by an isometry

Consider an isometry V ∈ L(H). Denote by C*(V) the C*-algebra generated by V, i.e. C*(V) is the norm closure of polynomials in V and V*. The Wold decomposition can be applied to characterize C*(V). Let C(T) be the continuous functions on the unit circle T. We recall that the C*-algebra C*(S) generated by the unilateral shift S takes the following form:

C*(S) = {T_f + K : T_f is a Toeplitz operator with continuous symbol f ∈ C(T), and K is a compact operator}.

In this identification, S = T_z where z is the identity function in C(T). The algebra C*(S) is called the Toeplitz algebra.

Theorem (Coburn): C*(V) is isomorphic to the Toeplitz algebra and V is the isomorphic image of T_z.

The proof hinges on the connections with C(T), in the description of the Toeplitz algebra and that the spectrum of a unitary operator is contained in the circle T. The following properties of the Toeplitz algebra will be needed:

1. $T_{f+g} = T_f + T_g.$
2. $T_{\bar{f}} = T_f^*.$
3. The semicommutator $T_f T_g - T_{fg}$ is compact.

The Wold decomposition says that V is the direct sum of copies of $T_z$ and then some unitary U:

$$V = \left( \bigoplus_{\alpha \in A} T_z \right) \oplus U.$$

So we invoke the continuous functional calculus f → f(U), and define

$$\Phi : C^*(S) \to C^*(V), \qquad \Phi(T_f + K) = \left( \bigoplus_{\alpha \in A} (T_f + K) \right) \oplus f(U).$$

One can now verify that Φ is an isomorphism that maps the unilateral shift to V:

$$\Phi(T_z) = V.$$

By property 1 above, Φ is linear. The map Φ is injective because $T_f$ is not compact for any non-zero f ∈ C(T), and thus $T_f + K = 0$ implies f = 0. Since the range of Φ is a C*-algebra, Φ is surjective by the minimality of C*(V). Property 2 and the continuous functional calculus ensure that Φ preserves the *-operation. Finally, the semicommutator property 3 shows that Φ is multiplicative. Therefore the theorem holds.

References

Operator theory Invariant subspaces C*-algebras Theorems in functional analysis
Wold's decomposition
Mathematics
1,093
8,775,865
https://en.wikipedia.org/wiki/List%20of%20web%20browsers%20for%20Unix%20and%20Unix-like%20operating%20systems
The following is a list of web browsers for various Unix and Unix-like operating systems. Not all of these browsers are specific to these operating systems; some are available on non-Unix systems as well. Some, though not most, also have mobile versions.

Graphical

Text-based

Links ELinks Line-mode browser Lynx w3m

See also

List of web browsers Comparison of web browsers Comparison of lightweight web browsers

Web browsers
List of web browsers for Unix and Unix-like operating systems
Technology
124
16,145,231
https://en.wikipedia.org/wiki/Christopher%20Henn-Collins
Lieutenant-Colonel Christopher A Henn-Collins (5 June 1915 – 8 August 2006), CEng, FIEE, FIERE served in the Second World War, notably, in the Polish Campaign under General Adrian Carton de Wiart. After the war Henn-Collins was a prolific inventor, including the first transistorised quartz clock. Early life Born in 1915, Christopher Henn-Collins was the third son of Lieutenant-Colonel the Hon. Richard Henn Collins, CMG, DSO, and grandson of Lord Collins, Master of the Rolls from 1901 to 1907. He was educated at Shrewsbury, and destined for a military career in his father's regiment, but pleaded to be allowed to pursue his boyhood ambition to be a telecommunications engineer. In 1934 he enlisted as a Gentleman Cadet at the Royal Military Academy at Woolwich for signals training and was commissioned in 1935. Polish Campaign After service in Palestine he earned the dubious distinction of being possibly the first serving officer to come under enemy fire in the first few hours of the Second World War. In August 1939, when he was Brigade Signals Officer to the 1st Brigade of Guards, he had been ordered to lead a detachment of signallers and their equipment into Poland, as part of a British Military Mission under the command of the battle-scarred veteran General Carton de Wiart, VC, blinded in one eye and with an artificial hand. Their objective was to set up radio communications between Mission HQ in Warsaw, the UK and units of the Polish army. They were to travel in plain clothes, but with battle-dress in their kit, and six tons of equipment, through France to Marseilles, where HMS Shropshire would take them to Alexandria. There they were issued with passports and fictitious occupations, before trans-shipping to a ferry en route to Turkey, by which time Britain and France were at war with Germany. From there they travelled by rail through Romania, setting up radio communications along the way. By the time they crossed the Polish frontier southeast of Warsaw, German armoured divisions were driving east towards the capital, and their reconnaissance planes were taking an interest in this strange convoy, which was now in a war zone. The detachment was ordered to change into uniform. In Lvov they were under heavy fire from low-flying aircraft: they could not move forward, nor could they stay put, risking further attentions from the Luftwaffe. For several nights they shuttled to and fro a few miles west to east and back again, awaiting instructions, and it was not until 8 September when they rendezvoused with General de Wiart, who had moved his headquarters from Warsaw to Tarnopol, that their mission was abandoned. They were ordered to destroy their equipment, and make their way home in twos and threes as best they could. Back in Alexandria Henn-Collins's instructions were to return to London where he was posted to Staff College at Camberley, and wrote a critical report on the lessons to be learned from this expedition. Although the mission was aborted, the outcome would have been quite different if the Russians had not invaded. The Poles had plans to conduct a guerrilla war in the east, and a British Signals unit behind the lines would have been of considerable use to the Allies. 
Later wartime postings

For Henn-Collins, various postings during the next three years included a period in the Directorate of Military Training, and promotion to major and then, with the rank of lieutenant-colonel, to Allied Forces Headquarters in Algiers as Officer in Charge of Radio Section, to set up links throughout the North African Theatre.

Post-war engineering career and retirement

He was a resourceful, inventive and practical engineer. He patented an enciphering and deciphering machine, assigned to the Ministry of Supply with no financial benefit to himself; and he had so many ideas for civilian projects which could not be exploited within the service that he resigned his commission in 1947 in order to set up as a consulting engineer. Partly as a result of his wartime contacts, his company, Henn-Collins Associates, undertook a wide range of projects for government agencies and commercial organisations worldwide, mostly in the field of telecommunications, but he had other interests as well, and in the 1950s and 60s he patented a number of devices of an electro-mechanical nature. In his workshop he developed his idea for a quartz crystal clock which, by using transistors in place of thermionic valves, made possible a much smaller quartz clock than was previously feasible. He described his "mantelpiece" clock in the British Horological Journal in 1957 and showed it at an exhibition in Goldsmiths' Hall in 1958, "The Pendulum to the Atom", which was opened by Prince Philip, Duke of Edinburgh. Christopher Henn-Collins and Dr Louis Essen, inventor of the caesium clock, were presented to him. Before he retired to Guernsey in 1970 he represented the Institution of Electrical Engineers and the Institution of Electronic and Radio Engineers on a British Standards Institution committee which produced a Code of Practice for the reception of sound and television broadcasting. He returned to England three years before his death.

Personal life

He married first Patricia Hooper, who died in 1974, and in 1976 he married Andora de Quehen, who survived him.

References

Tearle, John. Lieutenant Colonel C A Henn-Collins, CEng, FIEE, FIERE: draft notes for Times obituary.

External links

Obituary, The Times, 27 September 2006

1915 births 2006 deaths Military personnel from Shrewsbury Fellows of the Institution of Engineering and Technology British inventors 20th-century British engineers
Christopher Henn-Collins
Engineering
1,123
24,175,195
https://en.wikipedia.org/wiki/C9H6O4
The molecular formula C9H6O4 (molar mass: 178.14 g/mol, exact mass 178.026609 u) may refer to: Aesculetin, a coumarin Daphnetin, a coumarin Ninhydrin (2,2-dihydroxyindane-1,3-dione) Molecular formulas
C9H6O4
Physics,Chemistry
93