Dataset columns: id (int64, 580 to 79M); url (string, lengths 31 to 175); text (string, lengths 9 to 245k); source (string, lengths 1 to 109); categories (string, 160 classes); token_count (int64, 3 to 51.8k)
22,748,103
https://en.wikipedia.org/wiki/Microwave%20cavity
A microwave cavity or radio frequency cavity (RF cavity) is a special type of resonator, consisting of a closed (or largely closed) metal structure that confines electromagnetic fields in the microwave or RF region of the spectrum. The structure is either hollow or filled with dielectric material. The microwaves bounce back and forth between the walls of the cavity. At the cavity's resonant frequencies they reinforce to form standing waves in the cavity. Therefore, the cavity functions similarly to an organ pipe or sound box in a musical instrument, oscillating preferentially at a series of frequencies, its resonant frequencies. Thus it can act as a bandpass filter, allowing microwaves of a particular frequency to pass while blocking microwaves at nearby frequencies. A microwave cavity acts similarly to a resonant circuit with extremely low loss at its frequency of operation, resulting in quality factors (Q factors) up to the order of 10^6 for copper cavities, compared to 10^2 for circuits made with separate inductors and capacitors at the same frequency. For superconducting cavities, quality factors up to the order of 10^10 are possible. Cavities are used in place of resonant circuits at microwave frequencies, since at these frequencies discrete resonant circuits cannot be built because the required values of inductance and capacitance are too low. They are used in oscillators and transmitters to create microwave signals, and as filters to separate a signal at a given frequency from other signals, in equipment such as radar sets, microwave relay stations, satellite communications systems, and microwave ovens. RF cavities can also manipulate charged particles passing through them by applying an accelerating voltage, and are thus used in particle accelerators and in microwave vacuum tubes such as klystrons and magnetrons. Theory of operation Most resonant cavities are made from closed (or short-circuited) sections of waveguide or from high-permittivity dielectric material (see dielectric resonator). Electric and magnetic energy is stored in the cavity. This energy decays over time due to several possible loss mechanisms. The section on 'Physics of SRF cavities' in the article on superconducting radio frequency contains a number of important and useful expressions which apply to any microwave cavity. The energy stored in the cavity is given by the integral of the field energy density over its volume, $U = \frac{\mu_0}{2} \int_V |\mathbf{H}|^2 \, dV$, where H is the magnetic field in the cavity and $\mu_0$ is the permeability of free space. The power dissipated due solely to the resistivity of the cavity's walls is given by the integral of resistive wall losses over its surface, $P_d = \frac{R_s}{2} \int_S |\mathbf{H}|^2 \, dS$, where $R_s$ is the surface resistance. For copper cavities operating near room temperature, $R_s$ is simply determined by the empirically measured bulk electrical conductivity $\sigma$ (see Ramo et al., pp. 288-289). A resonator's quality factor is defined by $Q = \omega U / P_d$, where $\omega$ is the resonant frequency in rad/s, U is the energy stored in J, and $P_d$ is the power dissipated in W in the cavity to maintain the energy U. Basic losses are due to the finite conductivity of the cavity walls and to dielectric losses in the material filling the cavity. Other loss mechanisms exist in evacuated cavities, for example the multipactor effect and field electron emission, both of which generate copious electrons inside the cavity. These electrons are accelerated by the electric field in the cavity and thus extract energy from its stored energy. 
Eventually the electrons strike the walls of the cavity and lose their energy. In superconducting radio frequency cavities there are additional energy loss mechanisms associated with the deterioration of the electric conductivity of the superconducting surface due to heating or contamination. Every cavity has numerous resonant frequencies that correspond to electromagnetic field modes satisfying the necessary boundary conditions on the walls of the cavity. Because these boundary conditions must be satisfied at resonance (tangential electric fields must be zero at the cavity walls), the cavity dimensions must take particular values at resonance. Depending on the symmetry properties of the cavity's shape and on the transverse mode in question, the transverse cavity dimensions may be constrained to expressions related to geometric functions, or to zeros of Bessel functions or their derivatives (see below). Alternately, it follows that the cavity length must be an integer multiple of the half-wavelength at resonance (see page 451 of Ramo et al.). In this case, a resonant cavity can be thought of as a resonance in a short-circuited half-wavelength transmission line. The external dimensions of a cavity can be made considerably smaller at its lowest frequency mode by loading the cavity with either capacitive or inductive elements. Loaded cavities usually have lower symmetries and compromise certain performance indicators, such as the best Q factor. As examples, the reentrant cavity and the helical resonator are capacitively and inductively loaded cavities, respectively. Multi-cell cavity Single-cell cavities can be combined in a structure to accelerate particles (such as electrons or ions) more efficiently than a string of independent single-cell cavities. The figure from the U.S. Department of Energy shows a multi-cell superconducting cavity in a clean room at Fermi National Accelerator Laboratory. Loaded microwave cavities A microwave cavity has a fundamental mode, which exhibits the lowest resonant frequency of all possible resonant modes. For example, the fundamental mode of a cylindrical cavity is the TM_010 mode. For certain applications, there is motivation to reduce the dimensions of the cavity. This can be done by using a loaded cavity, where a capacitive or an inductive load is integrated in the cavity's structure. The precise resonant frequency of a loaded cavity must be calculated using finite element methods for Maxwell's equations with boundary conditions. Loaded cavities (or resonators) can also be configured as multi-cell cavities. Loaded cavities are particularly suited for accelerating low-velocity charged particles, and this application has motivated many types of loaded cavities. Some common types are: the reentrant cavity; the helical resonator; the spiral resonator; the split-ring resonator; the quarter-wave resonator; the half-wave resonator, of which the spoke resonator is a variant; the radio-frequency quadrupole; and the compact crab cavity, an important upgrade for the LHC. The Q factor of a particular mode in a resonant cavity can be calculated: for a cavity with high degrees of symmetry, by using analytical expressions for the electric and magnetic fields, for the surface currents in the conducting walls, and for the electric field in lossy dielectric material; for cavities with arbitrary shapes, finite element methods for Maxwell's equations with boundary conditions must be used. 
Measurement of the Q of a cavity is done using a vector network analyzer (electrical) or, in the case of a very high Q, by measuring the exponential decay of the stored energy and using the relationship $Q = \omega\tau$, where $\tau$ is the measured energy decay time constant. The electromagnetic fields in the cavity are excited via external coupling. An external power source is usually coupled to the cavity by a small aperture, a small wire probe, or a loop (see page 563 of Ramo et al.). The external coupling structure has an effect on cavity performance and needs to be considered in the overall analysis (see Montgomery et al., page 232). Resonant frequencies The resonant frequencies of a cavity are a function of its geometry. Rectangular cavity The resonance frequencies of a rectangular microwave cavity for any $TE_{mnl}$ or $TM_{mnl}$ resonant mode can be found by imposing boundary conditions on the electromagnetic field expressions. This frequency is given at page 546 of Ramo et al. as $f_{mnl} = \frac{c}{2\pi\sqrt{\mu_r\varepsilon_r}}\, k_{mnl}$, where $k_{mnl} = \sqrt{\left(\frac{m\pi}{a}\right)^2 + \left(\frac{n\pi}{b}\right)^2 + \left(\frac{l\pi}{d}\right)^2}$ is the wavenumber, with m, n, l being the mode numbers and a, b, d the corresponding dimensions; c is the speed of light in vacuum; and $\mu_r$ and $\varepsilon_r$ are the relative permeability and permittivity of the cavity filling respectively. Cylindrical cavity The field solutions of a cylindrical cavity of length L and radius R follow from the solutions of a cylindrical waveguide with additional electric boundary conditions at the position of the enclosing plates. The resonance frequencies are different for TE and TM modes: for TM modes, $f_{mnp} = \frac{c}{2\pi\sqrt{\mu_r\varepsilon_r}} \sqrt{\left(\frac{X_{mn}}{R}\right)^2 + \left(\frac{p\pi}{L}\right)^2}$, and for TE modes, $f_{mnp} = \frac{c}{2\pi\sqrt{\mu_r\varepsilon_r}} \sqrt{\left(\frac{X'_{mn}}{R}\right)^2 + \left(\frac{p\pi}{L}\right)^2}$ (see Jackson). Here, $X_{mn}$ denotes the n-th zero of the m-th Bessel function, and $X'_{mn}$ denotes the n-th zero of the derivative of the m-th Bessel function; $\mu_r$ and $\varepsilon_r$ are the relative permeability and permittivity respectively. Quality factor The quality factor of a cavity can be decomposed into three parts, representing different power loss mechanisms. The first, $Q_c$, results from the power loss in the walls, which have finite conductivity; the Q of the lowest-frequency mode, or "fundamental mode", is calculated in pp. 541-551 of Ramo et al. for a rectangular cavity with dimensions a, b, d, and for the fundamental mode of a cylindrical cavity with the parameters defined above, in expressions involving $\eta$, the intrinsic impedance of the dielectric, and $R_s$, the surface resistivity of the cavity walls; note that $R_s = \sqrt{\omega\mu/(2\sigma)}$. The second, $Q_d$, results from the power loss in the lossy dielectric material filling the cavity, with $Q_d = 1/\tan\delta$, where $\tan\delta$ is the loss tangent of the dielectric. The third, $Q_{ext}$, results from power loss through unclosed surfaces (holes) of the cavity geometry. The total Q factor of the cavity can be found by combining these as $Q = \left(\frac{1}{Q_c} + \frac{1}{Q_d} + \frac{1}{Q_{ext}}\right)^{-1}$ (see page 567 of Ramo et al.). Comparison to LC circuits Microwave resonant cavities can be represented and thought of as simple LC circuits (see Montgomery et al., pages 207-239). For a microwave cavity, the stored electric energy is equal to the stored magnetic energy at resonance, as is the case for a resonant LC circuit. In terms of inductance and capacitance, the resonant frequency for a given mode can be written as $f_{mn} = \frac{1}{2\pi\sqrt{L_{mn}C_{mn}}} = \frac{k_{mn}}{2\pi\sqrt{\mu\varepsilon}}$, as given in Montgomery et al. page 209, where the equivalent inductance and capacitance of the mode are expressed in terms of the cavity volume V and the mode wavenumber $k_{mn}$, and $\varepsilon$ and $\mu$ are the permittivity and permeability of the filling respectively. To better understand the utility of resonant cavities at microwave frequencies, it is useful to note that conventional inductors and capacitors start to become impractically small with frequency in the VHF range, and definitely so for frequencies above one gigahertz. Because of their low losses and high Q factors, cavity resonators are preferred over conventional LC and transmission-line resonators at high frequencies. 
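To make the formulas above concrete, here is a minimal computational sketch (Python with NumPy and SciPy; the function names and the example dimensions are illustrative assumptions, not part of the article):

```python
# Sketch: resonance frequencies of ideal rectangular and cylindrical cavities
# and the combination of partial Q factors, per the formulas above.
import numpy as np
from scipy.special import jn_zeros  # zeros of the Bessel functions J_m

c = 299_792_458.0  # speed of light in vacuum, m/s

def f_rectangular(m, n, l, a, b, d, mu_r=1.0, eps_r=1.0):
    """f_mnl for a rectangular cavity with dimensions a, b, d in metres."""
    k_mnl = np.sqrt((m*np.pi/a)**2 + (n*np.pi/b)**2 + (l*np.pi/d)**2)
    return c * k_mnl / (2 * np.pi * np.sqrt(mu_r * eps_r))

def f_cylindrical_tm(m, n, p, R, L, mu_r=1.0, eps_r=1.0):
    """f for the TM_mnp mode of a cylindrical cavity of radius R, length L."""
    X_mn = jn_zeros(m, n)[-1]  # n-th zero of the m-th Bessel function
    k = np.sqrt((X_mn / R)**2 + (p * np.pi / L)**2)
    return c * k / (2 * np.pi * np.sqrt(mu_r * eps_r))

def q_total(q_c, q_d=np.inf, q_ext=np.inf):
    """Total Q from the loss decomposition 1/Q = 1/Qc + 1/Qd + 1/Qext."""
    return 1.0 / (1.0/q_c + 1.0/q_d + 1.0/q_ext)

# Fundamental TM_010 mode of an air-filled pillbox cavity of radius 5 cm
# (for p = 0 the length L drops out): about 2.3 GHz.
print(f"{f_cylindrical_tm(0, 1, 0, R=0.05, L=0.10) / 1e9:.3f} GHz")
print(q_total(1e4, q_d=1e5))  # wall- and dielectric-limited total Q
```

The TE-mode analogue would use scipy.special.jnp_zeros for the zeros of the Bessel-function derivatives.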
Losses in LC resonant circuits Conventional inductors are usually wound from wire in the shape of a helix with no core. Skin effect causes the high frequency resistance of inductors to be many times their direct current resistance. In addition, capacitance between turns causes dielectric losses in the insulation which coats the wires. These effects make the high frequency resistance greater and decrease the Q factor. Conventional capacitors use air, mica, ceramic or perhaps teflon for a dielectric. Even with a low loss dielectric, capacitors are also subject to skin effect losses in their leads and plates. Both effects increase their equivalent series resistance and reduce their Q. Even if the Q factor of VHF inductors and capacitors is high enough to be useful, their parasitic properties can significantly affect their performance in this frequency range. The shunt capacitance of an inductor may be more significant than its desirable series inductance. The series inductance of a capacitor may be more significant than its desirable shunt capacitance. As a result, in the VHF or microwave regions, a capacitor may appear to be an inductor and an inductor may appear to be a capacitor. These phenomena are better known as parasitic inductance and parasitic capacitance. Losses in cavity resonators Dielectric loss of air is extremely low for high-frequency electric or magnetic fields. Air-filled microwave cavities confine electric and magnetic fields to the air spaces between their walls. Electric losses in such cavities are almost exclusively due to currents flowing in cavity walls. While losses from wall currents are small, cavities are frequently plated with silver to increase their electrical conductivity and reduce these losses even further. Copper cavities frequently oxidize, which increases their loss. Silver or gold plating prevents oxidation and reduces electrical losses in cavity walls. Even though gold is not quite as good a conductor as copper, it still prevents oxidation and the resulting deterioration of Q factor over time. However, because of its high cost, it is used only in the most demanding applications. Some satellite resonators are silver-plated and covered with a gold flash layer. The current then mostly flows in the high-conductivity silver layer, while the gold flash layer protects the silver layer from oxidizing. References External links Cavity Resonators, The Feynman Lectures on Physics Vol. II Ch. 23 Crab cavity for the LHC Microwave technology Accelerator physics
Microwave cavity
Physics
2,667
46,533,089
https://en.wikipedia.org/wiki/The%20Blob%20%28Pacific%20Ocean%29
The Blob is a large mass of relatively warm water in the Pacific Ocean off the coast of North America that was first detected in late 2013 and continued to spread throughout 2014 and 2015. It is an example of a marine heatwave. Sea surface temperatures indicated that the Blob persisted into 2016, but it was initially thought to have dissipated later that year. By September 2016, the Blob had resurfaced and made itself known to meteorologists. The warm water mass was unusual for open ocean conditions and was considered to have played a role in the formation of the unusual weather conditions experienced along the Pacific coast of North America during the same time period. The warm waters of the Blob were nutrient-poor and adversely affected marine life. In 2019 another scare was caused by a weaker form of the effect, referred to as "The Blob 2.0", and in 2021 the appearance of "The Southern Blob" south of the equator near New Zealand caused major effects in South America, particularly in Chile and Argentina. Origin The Blob was first detected between October 2013 and early 2014 by Nicholas Bond and his colleagues at the Joint Institute for the Study of the Atmosphere and Ocean of the University of Washington. It was detected when a large circular body of seawater did not cool as expected and remained much warmer than the average normal temperatures for that location and season. Bond, then the state climatologist for Washington, coined the term the Blob, with the term first appearing in an article in the monthly newsletter of the Office of the Washington State Climatologist for June 2014. Description Initially the Blob was reported as being wide and deep. It later expanded and reached a size of long, wide, and deep in June 2014, when the term the Blob was coined. The Blob by then hugged the coast of North America from Mexico to Alaska and beyond, stretching more than , and formed three distinct patches: the first off the coast of Canada, Washington, Oregon, and northern California, a region known to oceanographers as the Coastal Upwelling Domain; the second in the Bering Sea off the coast of Alaska; and the third and smallest off the coast of southern California and Mexico. In February 2014, the temperature of the Blob was around warmer than was usual for the time of year. A NOAA scientist noted in September 2014 that, based on ocean temperature records, the North Pacific Ocean had not previously experienced temperatures so warm since climatologists began taking measurements. In 2015 the atmospheric ridge causing the Blob finally disappeared, and the Blob vanished shortly after, in 2016. However, in its wake are many species that will take a long time to recover. Although the Blob is gone for now, scientists predict that similar marine heat waves will become more common as the Earth's climate warms. Residual heat from the first Blob, in addition to warmer temperatures in 2019, led to a second Blob scare; however, it was quelled by a series of storms that cooled the rising temperatures. Cause The immediate cause of the phenomenon was a lower than normal rate of heat loss from the sea to the atmosphere, compounded with lower than usual water circulation, resulting in a static upper layer of water. Both of these are attributed to a static high pressure region in the atmosphere, termed the Ridiculously Resilient Ridge, which formed in the spring of 2014. The lack of air movement impacted the wind-forced currents and the wind-generated stirring of surface waters. 
These in turn influenced the weather in the American Pacific Northwest from the winter of 2013–2014 onward and may have been associated with the unusually hot summer experienced in the continental American Pacific Northwest in 2014. The reason for the phenomenon remains unclear, but it is speculated to be partly a result of human-caused climate change. Some experts consider that the wedge of warm water portends a cyclical change, with the surface waters of the mid-latitude Pacific Ocean flipping from a cold phase to a warm phase in a cycle known as the Pacific decadal oscillation (PDO). This poorly understood change happens at irregular intervals of years or decades. During a warm phase, the west Pacific becomes cooler and part of the eastern ocean warms; during the cool phase, these changes reverse. Scientists believe a cold phase started in the late 1990s, and the arrival of the Blob may have been the start of the next warm phase. The PDO phases may also be related to the likelihood of El Niño events. The implementation of China's clean air action plan in 2013 may have inadvertently contributed to the Blob by removing pollution that had blocked and scattered heat from the Sun. However, experts stress that this was one of many factors, including, notably, greenhouse gas emissions. NASA climatologist William Patzert predicts that if the PDO is at work here, there will be widespread climatological consequences, and southern California and the American south may be in for a period of high precipitation along with an increase in the rate of global warming. Another climatologist, Matt Newman of the University of Colorado, does not think the Blob fits the pattern of a shift in the PDO. He believes the unusually warm water is due to the persistent area of high pressure stationary over the northeastern Pacific Ocean. Dan Cayan of the Scripps Institution of Oceanography is unsure about the ultimate cause of the phenomenon, but states "there's no doubt that this anomaly in sea surface temperature is very meaningful". Effects Ecosystem disruption Sea surface temperature anomalies are a physical indicator that adversely affects the zooplankton (mainly copepods) in the northeast Pacific Ocean and specifically in the Coastal Upwelling Domain. Warm waters are much less nutrient-rich than the cold upwelling waters which were normal until recently off the Pacific coast. This resulted in reduced phytoplankton productivity, with knock-on effects on the zooplankton which fed on it and on the higher levels of the food chain. Colder-water species lower in the food web, which tend to be fattier, were replaced by warmer-water species of lower nutritional value. The Northwest Fisheries Science Center in Seattle predicted reduced catches of coho and Chinook salmon, a major contributing factor being the raised temperatures of seawater in the Blob. Salmon catches dropped as the fish, finding low levels of zooplankton, migrated away. Thousands of sea lion pups starved in California, which led to forced beachings. Thousands of Cassin's auklets in Oregon starved due to lack of food. Animals that favour warm southern waters were spotted as far north as Alaska, examples being the warm-water thresher sharks (Alopias spp.) and the ocean sunfish (Mola mola). In the spring of 2016, acres of Velella velella were reported in the waters south of the Copper River Delta. 
The discovery of a skipjack tuna (Katsuwonus pelamis), primarily a fish of warm tropical waters, off the Copper River in Alaska, north of its previous geographic limit, and of a dead sooty storm-petrel (Oceanodroma tristrami), a species native to Northern Asia and Hawaii, along with a few brown boobies (Sula leucogaster) in the Farallon Islands of California, besides other such records, has led marine biologists to worry that the food web across the Pacific is in danger of disruption. Biologists from the University of Queensland observed the first-ever mass bleaching event for Hawaiian coral reefs in 2014 and attributed it to the Blob. Weather and seasons Research from the University of Washington found positive temperature anomalies in the northeast Pacific Ocean (upper ~100 m, greater than 2.5 °C, with temperatures at the coast below normal) for the winter period of 2013–2014. Heat loss from the ocean during the winter was suppressed. During spring and summer 2014 the warmer sea surface temperature anomalies reached coastal waters. The anomaly may have had a significant effect on the unusually warm summer of 2014, with record high temperatures over parts of land in the Pacific Northwest. Offshore sea surface temperatures (SSTs) in the northeast Pacific for the month of February were the greatest since at least the 1980s, and possibly since as early as 1900. Additionally, they found an anomalous sea surface pressure (SSP), with a peak magnitude approaching 10 hPa, a record high value for the years 1949–2014. Canadian senior climatologist David Phillips noted in May 2015, about the coming winter season, "If that blob continues, if it stays warm ... and then you add to that El Niño, it may complement each other and then it may be the year winter is cancelled." See also Cold blob (North Atlantic) Special Report on the Ocean and Cryosphere in a Changing Climate References Oceanography Pacific Ocean Anomalous weather
The Blob (Pacific Ocean)
Physics,Environmental_science
1,828
25,315,066
https://en.wikipedia.org/wiki/Guy%20Bertrand%20%28chemist%29
Guy Bertrand, born on July 17, 1952, in Limoges, is a chemistry professor at the University of California, San Diego. Bertrand obtained his B.Sc. from the University of Montpellier in 1975 and his Ph.D. from Paul Sabatier University, Toulouse, in 1979. He was a postdoctoral researcher at Sanofi Research, France, in 1981. The research interests of Bertrand and his co-workers lie mainly in the chemistry of main group elements from groups 13 to 16, at the border between organic, organometallic and inorganic chemistry; especially their use in stabilizing carbenes, nitrenes, phosphinidenes, radicals and biradicals, 1,3-dipoles, anti-aromatic heterocycles, and more. He has directed the synthesis of some original persistent carbenes, including bis(diisopropylamino)cyclopropenylidene, the first example of a carbene with an all-carbon environment that is stable at room temperature. Guy Bertrand is an honorary member or fellow of several scientific societies, such as the AAAS (2006), the French Academy of Sciences (2004), the European Academy of Sciences (2003), and Academia Europaea (2002), and the recipient of various prizes and awards. Scientific work Questioning the current dogma is a design feature of Guy Bertrand's research program. He has made many important contributions to the chemistry of main group elements and to new bonding systems in inorganic, organometallic and organic chemistry. Throughout his career, he has isolated a variety of species that were supposed to be only transient intermediates and that are now powerful tools for chemists. His best-known contribution was the discovery in 1988 of the first stable carbene, a (phosphino)(silyl)carbene, three years before Arduengo's report on a stable N-heterocyclic carbene. Guy Bertrand thus stands at the origin of the chemistry of stable carbenes. Since then, he has made several revolutionary discoveries that have allowed us to better understand the stability of carbenes. He was the first to isolate cyclopropenylidenes and mesoionic carbenes, which cannot dimerize, resulting in a relaxation of the steric requirements for their isolation. More importantly, he discovered cyclic (alkyl)(amino)carbenes (CAACs), including the recently published six-membered version. CAACs are even more electron-rich than NHCs and phosphines, but at the same time, because only a single lone pair of electrons is present on nitrogen, CAACs are better electron acceptors than NHCs. The electronic properties of CAACs stabilize highly reactive species, including organic and main group radicals, as well as paramagnetic metal species, such as gold(0) complexes, which were previously completely unknown. CAACs have also allowed the isolation of bis(copper) acetylide complexes, which are key catalytic intermediates in the famous "click reaction" and which were supposed to be only transient species. He also used CAACs to prepare and isolate the first nucleophilic tricoordinated organoborane, isoelectronic with amines. These recent developments appear paradoxical, since they consist in using carbenes, long considered prototypical reactive intermediates, to isolate otherwise unstable molecules. Among the large-scale applications already known for CAACs is their use as ligands for transition metal catalysts. For example, in collaboration with Grubbs, Guy Bertrand has shown that ruthenium catalysts bearing a CAAC are extremely active in the ethenolysis of methyl oleate. 
This was the first time that a series of metathesis catalysts performed so well in cross-metathesis reactions using ethylene gas, with sufficient activity to make ethenolysis applicable to the industrial production of linear alpha-olefins (LAOs) and other olefinic end products from biomass. Today, hundreds of academic and industrial groups use Guy Bertrand's CAACs and other carbenes in transition metal catalysis, but also for other purposes. The most recent developments cover a wide range, from nanoparticle stabilization to the antibacterial and anti-cancer properties of silver(I) and gold(I) complexes. A CAAC-copper complex even allows OLEDs to operate with a quantum efficiency close to 100% at high brightness. The discovery of stable carbenes was a breakthrough for fundamental chemistry, a real paradigm shift, but its importance also comes, perhaps even more, from applications. In their review article on N-heterocyclic carbenes, a terminology that includes CAACs, Glorius et al. wrote: "The discovery and development of N-heterocyclic carbenes is undoubtedly one of the greatest successes of recent chemical research", "N-heterocyclic carbenes are today among the most powerful tools in organic chemistry, with many applications in commercially important processes", and "the meteoric rise of NHC is far from over". Guy Bertrand's contribution is not limited to carbenes. Recent highlights include the isolation of the first stable nitrenes and phosphinidenes. He showed that the former can be used to transfer a nitrogen atom to organic fragments, a difficult task for nitrido complexes of transition metals, and he has recently demonstrated that the latter mimic the behaviour of transition metals, just as carbenes do. Honours and awards He was awarded the CNRS silver medal in 1998. He is a member of the French Academy of Technology (2000), the Academia Europaea (2002), the European Academy of Sciences (2003), the French Academy of Sciences (2004) and the American Association for the Advancement of Science (2006). He has also been awarded the Sir Ronald Nyholm Medal from the RSC (2009), the Grand Prix Le Bel from the French Chemical Society (2010), the ACS Award in Inorganic Chemistry (2014), the Sir Geoffrey Wilkinson Prize from the RSC (2016) and the Sacconi Medal from the Italian Chemical Society (2017). He is one of the associate editors of Chemical Reviews and a member of the editorial boards of several journals. He is a Chevalier of the Légion d'honneur. References University of California, San Diego faculty Year of birth missing (living people) Living people Inorganic chemists American organic chemists Members of Academia Europaea
Guy Bertrand (chemist)
Chemistry
1,357
14,659,436
https://en.wikipedia.org/wiki/HBE1
Hemoglobin subunit epsilon is a protein that in humans is encoded by the HBE1 gene. Function The epsilon globin gene (HBE) is normally expressed in the embryonic yolk sac: two epsilon chains together with two zeta chains (an alpha-like globin) constitute the embryonic hemoglobin Hb Gower I; two epsilon chains together with two alpha chains form the embryonic Hb Gower II. Both of these embryonic hemoglobins are normally supplanted by fetal, and later, adult hemoglobin. The five beta-like globin genes are found within a 45 kb cluster on chromosome 11 in the following order: 5' - epsilon – gamma-G – gamma-A – delta – beta - 3'. See also Hemoglobin Human β-globin locus Hemoglobin alpha chains (two genes, same sequence): HBA1 HBA2 References Further reading Hemoglobins
HBE1
Chemistry
207
6,713,728
https://en.wikipedia.org/wiki/Canons%20of%20page%20construction
The canons of page construction are historical reconstructions, based on careful measurement of extant books and on what is known of the mathematics and engineering methods of the time, of manuscript-framework methods that may have been used in medieval- or Renaissance-era book design to divide a page into pleasing proportions. Since their popularization in the 20th century, these canons have influenced modern-day book design in the ways that page proportions, margins and type areas (print spaces) of books are constructed. The notion of canons, or laws of form, of book page construction was popularized by Jan Tschichold in the mid to late twentieth century, based on the work of J. A. van de Graaf, Raúl Rosarivo, Hans Kayser, and others. Tschichold wrote: "Though largely forgotten today, methods and rules upon which it is impossible to improve have been developed for centuries. To produce perfect books these rules have to be brought to life and applied." Kayser's 1946 Ein harmonikaler Teilungskanon had earlier used the term canon in this context. Typographers and book designers are influenced by these principles to this day in page layout, with variations related to the availability of standardized paper sizes and the diverse types of commercially printed books. Van de Graaf canon The Van de Graaf canon is a historical reconstruction of a method that may have been used in book design to divide a page in pleasing proportions. This canon is also known as the "secret canon" used in many medieval manuscripts and incunabula. The geometrical solution of the construction of Van de Graaf's canon, which works for any page width:height ratio, enables the book designer to position the type area in a specific area of the page. Using the canon, the proportions are maintained while creating pleasing and functional margins of size 1/9 and 2/9 of the page size. The resulting inner margin is one-half of the outer margin, and the margins are of proportions 2:3:4:6 (inner:top:outer:bottom) when the page proportion is 2:3 (more generally 1:R:2:2R for a page proportion of 1:R); a short computational sketch of these margins follows below. This method was discovered by Van de Graaf, and used by Tschichold and other contemporary designers; they speculate that it may be older. The page proportions vary, but most commonly used is the 2:3 proportion. Tschichold writes: "For purposes of better comparison I have based his figure on a page proportion of 2:3, which Van de Graaf does not use." In this canon the type area and page size are of the same proportions, and the height of the type area equals the page width. This canon was popularized by Jan Tschichold in his book The Form of the Book. Robert Bringhurst, in his The Elements of Typographic Style, asserts that the proportions that are useful for the shapes of pages are equally useful in shaping and positioning the textblock. This was often the case in medieval books, although later, in the Renaissance, typographers preferred to apply a more polyphonic page in which the proportions of page and textblock would differ. Golden canon Tschichold's "golden canon of page construction" is based on simple integer ratios, equivalent to Rosarivo's "typographical divine proportion". Interpretation of Rosarivo Raúl Rosarivo analyzed Renaissance-era books with the help of a drafting compass and a ruler, and concluded in his Divina proporción tipográfica ("Typographical Divine Proportion", first published in 1947) that Gutenberg, Peter Schöffer, Nicolaus Jenson and others had applied the golden canon of page construction in their works. 
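As a concrete illustration of the Van de Graaf margins described above, here is a minimal sketch (Python; the function name and the example page size are illustrative assumptions):

```python
# Van de Graaf margins: 1/9 and 2/9 divisions of the page, giving
# inner:top:outer:bottom = 2:3:4:6 on a 2:3 page, per the text above.
def van_de_graaf_margins(page_width, page_height):
    """Return (inner, top, outer, bottom) margins for a given page size."""
    inner = page_width / 9        # 1/9 of the width
    outer = 2 * page_width / 9    # 2/9 of the width
    top = page_height / 9         # 1/9 of the height
    bottom = 2 * page_height / 9  # 2/9 of the height
    return inner, top, outer, bottom

# Example: a 120 mm x 180 mm page (2:3 proportion).
inner, top, outer, bottom = van_de_graaf_margins(120, 180)
print(inner, top, outer, bottom)  # ~13.3, 20.0, ~26.7, 40.0 (mm)
print(120 - inner - outer)        # type-area width  = 80 mm
print(180 - top - bottom)         # type-area height = 120 mm = page width
```

For the 2:3 page the computed type area is 80 mm by 120 mm, i.e. the type area keeps the page proportions and its height equals the page width, as the canon requires.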
According to Rosarivo, his work and his assertion that Gutenberg used the "golden number" 2:3, or "secret number" as he called it, to establish the harmonic relationships between the diverse parts of a work, was analyzed by experts at the Gutenberg Museum and re-published in the Gutenberg-Jahrbuch, its official magazine. Ros Vicente points out that Rosarivo "demonstrates that Gutenberg had a module different from the well-known one of Luca Pacioli" (the golden ratio). Tschichold also interprets Rosarivo's golden number as 2:3; the figures he refers to are reproduced in combination here. John Man's interpretation of Gutenberg Historian John Man suggests that both the Gutenberg Bible's pages and its printed area were based on the golden ratio (commonly approximated as the decimal 0.618 or the ratio 5:8). He quotes the dimensions of Gutenberg's half-folio Bible page as 30.7 x 44.5 cm, a ratio of 0.690, close to Rosarivo's golden 2:3 (0.667) but not to the golden ratio (0.618). Tschichold and the golden ratio Building on Rosarivo's work, contemporary experts in book design such as Jan Tschichold and Richard Hendel assert as well that the page proportion of the golden ratio has been used in book design, in manuscripts, and in incunabula, mostly in those produced between 1550 and 1770. Hendel writes that since Gutenberg's time, books have most often been printed in an upright position, conforming loosely, if not precisely, to the golden ratio. These page proportions based on the golden ratio are usually described through its convergents, such as 2:3, 3:5, 5:8, 8:13, 13:21, 21:34, etc. (the short numerical check below illustrates the convergence). Tschichold says that common ratios for page proportion used in book design include 2:3, 1:√3, and the golden ratio. The image with circular arcs depicts the proportions in a medieval manuscript that, according to Tschichold, features a "Page proportion 2:3. Margin proportions 1:1:2:3. Type area in accord with the Golden Section. The lower outer corner of the type area is fixed by a diagonal as well." By accord with the golden ratio, he does not mean exactly equal to it, which would conflict with the stated proportions. Tschichold refers to a construction equivalent to van de Graaf's or Rosarivo's with a 2:3 page ratio as "the Golden Canon of book page construction as it was used during late Gothic times by the finest of scribes." For the canon with the arc construction, which yields a type area ratio closer to the golden ratio, he says "I abstracted from manuscripts that are older yet. While beautiful, it would hardly be useful today." Of the different page proportions that such a canon can be applied to, he says "Book pages come in many proportions, i.e., relationships between width and height. Everybody knows, at least from hearsay, the proportion of the Golden Section, exactly 1:1.618. A ratio of 5:8 is no more than an approximation of the Golden Section. It would be difficult to maintain the same opinion about a ratio of 2:3." Tschichold also expresses a preference for certain ratios over others: "The geometrically definable irrational page proportions like 1:1.618 (Golden Section), 1:√2, 1:√3, 1:√5, 1:1.538, and the simple rational proportions of 1:2, 2:3, 5:8 and 5:9 I call clear, intentional and definite. All others are unclear and accidental ratios. The difference between a clear and an unclear ratio, though frequently slight, is noticeable… Many books show none of the clear proportions, but accidental ones." 
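The convergence of those ratios toward the golden ratio is easy to verify numerically; the check below is pure arithmetic on the figures quoted above (Python):

```python
# Ratios of consecutive Fibonacci numbers (2:3, 3:5, 5:8, 8:13, ...)
# converge to the golden ratio, approximately 1.61803 (0.61803 inverted).
fib = [1, 1]
for _ in range(10):
    fib.append(fib[-1] + fib[-2])

phi = (1 + 5 ** 0.5) / 2
for a, b in zip(fib[2:], fib[3:]):
    print(f"{a}:{b}  height/width = {b/a:.5f}  (golden ratio = {phi:.5f})")
```

The successive ratios 1.5, 1.667, 1.6, 1.625, ... straddle and approach 1.618, which is why 5:8 and its neighbours are described as approximations of the Golden Section.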
John Man's quoted Gutenberg page sizes are in a proportion not very close to the golden ratio, but Rosarivo's or van de Graaf's construction is applied by Tschichold to make a pleasing type area on pages of arbitrary proportions, even such accidental ones. Current applications Richard Hendel, associate director of the University of North Carolina Press, describes book design as a craft with its own traditions and a relatively small body of accepted rules. The dust cover of his book, On Book Design, features the Van de Graaf canon. Christopher Burke, in his book on the German typographer Paul Renner, creator of the Futura typeface, described Renner's views about page proportions. Bringhurst describes a book page as a tangible proportion which, together with the textblock, produces an antiphonal geometry that can bind the reader to the book or, conversely, put the reader's nerves on edge and drive the reader away. See also Book Grid Page layout References Sources (one source shows the Van de Graaf canon and a variant that divides the page into twelfths) Further reading Luca Pacioli, De Divina Proportione (1509) Raúl Rosarivo, Divina proporción tipográfica (brief discussion about his work) External links Book design Page layout Typography
Canons of page construction
Engineering
1,875
47,332,291
https://en.wikipedia.org/wiki/Amanita%20chepangiana
Amanita chepangiana, commonly known as the Chepang slender Caesar, is a species of agaric fungus in the family Amanitaceae native to China and southern Asia. In parts of Yunnan, China, the species is traditionally consumed. However, toxicity analysis has found at least one amatoxin and one phallotoxin in the species. Since it is difficult to distinguish from other lethal species, human consumption is generally not recommended. References External links chepangiana Fungi of China Fungi described in 1992 Fungus species
Amanita chepangiana
Biology
113
74,680,257
https://en.wikipedia.org/wiki/Environmental%20Rights%20Action
Environmental Rights Action (ERA), sometimes referred to as Friends of the Earth Nigeria, is a Nigerian advocacy non-governmental organization with a focus on environmental human rights issues in Nigeria and on protection of the human ecosystem. The organization, which was established in 1993, is the Nigerian chapter of Friends of the Earth International. The goal of the organization is to help promote environmentally responsible government and communities in Nigeria. Its top priority areas include advocating for waste and plastic policies and regulations, building coalitions and strengthening alliances, and regional and international advocacy to expose the violations connected to industrial plantation companies, which are drivers of biodiversity loss, at all levels of its work. Campaigns and projects Environmental Rights Action has coordinated and hosted Oilwatch International, the Africa Tobacco Control Regional Initiative (ATCRI) and the Nigerian Tobacco Control Alliance (NTCA). In addition, the organization is involved in Green Alliance Nigeria (GAN). Awards and recognition ERA has received the following awards and recognition: the 2009 Bloomberg Awards for Global Tobacco Control, and the 1998 Sophie Prize for excellence and courage in the struggle for environmental justice. See also Friends of the Earth, Inc. v. Laidlaw Environmental Services, Inc. List of environmental organizations Friends of the Earth (HK) References Anti-nuclear organizations Environmental organizations established in 1993
Environmental Rights Action
Engineering
252
26,861,483
https://en.wikipedia.org/wiki/SDSS%20J0303-0019
SDSS J0303-0019 is a distant quasar in the z ≥ 6 regime. It is one of the first two quasars discovered that appear to be "dust-free", the other being QSO J0005-0006. On 17 March 2010, Xiaohui Fan, an astronomer at the University of Arizona and leader of the team that made the discovery, announced the finding of two quasars with dustless spectra. The implication of this result is that the region of space they inhabit is primordially pristine, having not been polluted by "dust" created by the first stars. These are thought to represent the earliest type of quasar. The team also announced the next-earliest class of quasar, in which dust is detected in proportion to the growth of the galaxy; in more recent quasars, dust is not related to the quasar or galaxy. See also Stardust References SDSS J0303-0019 Cetus SDSS objects
SDSS J0303-0019
Astronomy
211
60,756,169
https://en.wikipedia.org/wiki/Caffeine%20use%20for%20sport
Caffeine use for sport is a widely known and tested practice. Many athletes use caffeine as a legal performance enhancer, as the benefits it provides, both physically and cognitively, outweigh the disadvantages. The benefits caffeine provides influence the performance of both endurance athletes and anaerobic athletes. Caffeine has been shown to be effective in enhancing performance. Caffeine is a stimulant drug. Once consumed, it is absorbed in the stomach and small intestine and circulated throughout the body. It targets muscles and organs, in particular the brain. Caffeine is most commonly associated with coffee. It is also found in tea, chocolate, soft drinks, energy drinks and medications. The short-term effects of caffeine are usually noticed after 5–30 minutes, and the longer-lasting ones persist for up to 12 hours. Those who use caffeine regularly, most often drinking at least one coffee a day, can become dependent and addicted. If caffeine use is stopped, these people may have withdrawal symptoms such as tiredness and headaches. Effects Physical Caffeine acts on both the respiratory system and the cardiovascular system. The cardiovascular system is the pathway the human body uses for circulating blood, supplying oxygen and removing waste products. The respiratory system is the system involved with the exchange of oxygen and carbon dioxide between the atmosphere and the blood. Via many of these physiological responses, the fatigue an athlete would normally feel is postponed, allowing physical activity to be sustained for longer and at a higher level. Cognitive As caffeine targets the brain, it has many cognitive effects. Caffeine can reduce tiredness and improve reaction time. Disadvantages Physical Caffeine is a mild diuretic, which can often lead to dehydration. Other physical disadvantages include impaired fine motor control (observed via shakiness of athletes' hands), gastrointestinal upset, increased heart rate and sleep disruption. Cognitive Caffeine can cause feelings of anxiety and insomnia. Studies have found that sleep deprivation has a significant effect on sub-maximal, prolonged exercise. Caffeine also elevates stress hormone levels and one's perception of stress. Effectiveness Studies have found that typical caffeine doses of 1–3 mg per kg of body weight provide an effective improvement in performance. Previously, higher doses such as 6 mg/kg were used, but more recently lower doses have been found to supply the desired benefits with fewer consequences. There is preliminary evidence that caffeine is effective for both endurance and anaerobic activities. Anaerobic athletes In studies of trained males, the optimal amount of caffeine for anaerobic exercise was determined: a caffeine dosage of 3–5 mg/kg may improve high-intensity sprint performance when consumed prior to exercise. One analysis showed small improvements which, the authors argued, correlate with meaningful differences in performance for these activities. The following conclusions were drawn: caffeine ingestion resulted in an increase in upper-body strength but not lower-body strength; for strength exercises, there were no significant differences between trained and untrained subjects; and caffeine in capsule form had a greater influence on performance than liquid form (gums and gels were not tested). Using a vertical jump as an indicator of muscle power, results showed a significant increase in power, supporting caffeine as a possible ergogenic aid. 
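The per-kilogram dosing arithmetic above is simple to make explicit; the sketch below (Python; the function name is an illustrative assumption, and this is an illustration of the cited ranges, not dosing advice) converts them into absolute doses:

```python
# Convert the caffeine dose ranges cited above (1-3 mg/kg typical,
# 3-5 mg/kg for high-intensity sprint protocols) into absolute doses.
def caffeine_dose_range(body_mass_kg, low_mg_per_kg=1.0, high_mg_per_kg=3.0):
    """Return the (low, high) caffeine dose in mg for a given body mass."""
    return body_mass_kg * low_mg_per_kg, body_mass_kg * high_mg_per_kg

print(caffeine_dose_range(70))        # (70.0, 210.0) mg, typical range
print(caffeine_dose_range(70, 3, 5))  # (210.0, 350.0) mg, sprint protocol
```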
References Exercise biochemistry Sports culture Doping in sport
Caffeine use for sport
Chemistry,Biology
713
191,490
https://en.wikipedia.org/wiki/Machine%20tool
A machine tool is a machine for handling or machining metal or other rigid materials, usually by cutting, boring, grinding, shearing, or other forms of deformation. Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and provide a guided movement of the parts of the machine. Thus, the relative movement between the workpiece and the cutting tool (which is called the toolpath) is controlled or constrained by the machine to at least some extent, rather than being entirely "offhand" or "freehand". A machine tool is a power-driven metal-cutting machine which assists in managing the needed relative motion between the cutting tool and the job, changing the size and shape of the job material. The precise definition of the term machine tool varies among users, as discussed below. While all machine tools are "machines that help people to make things", not all factory machines are machine tools. Today machine tools are typically powered other than by human muscle (e.g., electrically, hydraulically, or via line shaft), and are used to make manufactured parts (components) in various ways that include cutting or certain other kinds of deformation. With their inherent precision, machine tools enabled the economical production of interchangeable parts. Nomenclature and key concepts, interrelated Many historians of technology consider that true machine tools were born when the toolpath first became guided by the machine itself in some way, at least to some extent, so that direct, freehand human guidance of the toolpath (with hands, feet, or mouth) was no longer the only guidance used in the cutting or forming process. In this view of the definition, the term, arising at a time when all tools up till then had been hand tools, simply provided a label for "tools that were machines instead of hand tools". Early lathes, those prior to the late medieval period, and modern woodworking lathes and potter's wheels may or may not fall under this definition, depending on how one views the headstock spindle itself; but the earliest historical records of a lathe with direct mechanical control of the cutting tool's path are of a screw-cutting lathe dating to about 1483. This lathe "produced screw threads out of wood and employed a true compound slide rest". The mechanical toolpath guidance grew out of various root concepts: First is the spindle concept itself, which constrains workpiece or tool movement to rotation around a fixed axis. This ancient concept predates machine tools per se; the earliest lathes and potter's wheels incorporated it for the workpiece, but the movement of the tool itself on these machines was entirely freehand. Second is the machine slide (tool way), which has many forms, such as dovetail ways, box ways, or cylindrical column ways. Machine slides constrain tool or workpiece movement linearly. If a stop is added, the length of the line can also be accurately controlled. (Machine slides are essentially a subset of linear bearings, although the language used to classify these various machine elements may be defined differently by some users in some contexts, and some elements may be distinguished by contrasting with others.) Third is tracing, which involves following the contours of a model or template and transferring the resulting motion to the toolpath. Fourth is cam operation, which is related in principle to tracing but can be a step or two removed from the traced element's matching the reproduced element's final shape. 
For example, several cams, no one of which directly matches the desired output shape, can actuate a complex toolpath by creating component vectors that add up to a net toolpath. The van der Waals force between like materials is high. Freehand manufacture of square plates produces only square, flat, machine-tool-building reference components, accurate to millionths of an inch but of nearly no variety. The process of feature replication allows the flatness and squareness of a milling machine cross slide assembly, or the roundness, lack of taper, and squareness of the two axes of a lathe, to be transferred to a machined workpiece with accuracy and precision better than a thousandth of an inch, though not as fine as millionths of an inch. As the fit between the sliding parts of a product, machine, or machine tool approaches this critical thousandth-of-an-inch measure, lubrication and capillary action combine to prevent the van der Waals force from welding like metals together, extending the lubricated life of sliding parts by a factor of thousands to millions; the damage caused by oil depletion in a conventional automotive engine is an accessible demonstration of the need, and in aerospace design like-to-unlike material pairing is used along with solid lubricants to prevent van der Waals welding from destroying mating surfaces. Given the modulus of elasticity of metals, the range of fit tolerances near one thousandth of an inch spans the relevant range of constraint between, at one extreme, permanent assembly of two mating parts and, at the other, a free sliding fit of those same two parts. Abstractly programmable toolpath guidance began with mechanical solutions, such as in musical box cams and Jacquard looms. The convergence of programmable mechanical control with machine tool toolpath control was delayed many decades, in part because the programmable control methods of musical boxes and looms lacked the rigidity for machine tool toolpaths. Later, electromechanical solutions (such as servos) and soon electronic solutions (including computers) were added, leading to numerical control and computer numerical control. When considering the difference between freehand toolpaths and machine-constrained toolpaths, the concepts of accuracy and precision, efficiency, and productivity become important in understanding why the machine-constrained option adds value. Matter-additive, matter-preserving, and matter-subtractive manufacturing can proceed in sixteen ways: firstly, the work may be held either in a hand or a clamp; secondly, the tool may be held either in a hand or a clamp; thirdly, the energy can come either from the hand(s) holding the tool and/or the work, or from some external source, including for example a foot treadle worked by the same worker, or a motor, without limitation; and finally, the control can come either from the hand(s) holding the tool and/or the work, or from some other source, including computer numerical control. With two choices for each of four parameters, this enumerates sixteen types of manufacturing, where matter-additive might mean painting on canvas as readily as it might mean 3D printing under computer control, matter-preserving might mean forging at the coal fire as readily as stamping license plates, and matter-subtractive might mean casually whittling a pencil point as readily as it might mean precision grinding the final form of a laser-deposited turbine blade. 
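The sixteen-type enumeration above is just the Cartesian product of four binary choices; a tiny sketch (Python; the labels are illustrative shorthand for the text's choices) makes that concrete:

```python
# Enumerate the sixteen manufacturing types described above: each of four
# parameters (work holding, tool holding, energy source, control source)
# is a binary choice, so there are 2**4 = 16 combinations.
from itertools import product

params = {
    "work_held":    ("hand", "clamp"),
    "tool_held":    ("hand", "clamp"),
    "energy_from":  ("muscle", "external"),
    "control_from": ("human", "machine"),
}
types = list(product(*params.values()))
print(len(types))  # 16
for t in types[:3]:
    print(dict(zip(params, t)))  # e.g. fully hand-held, muscle-powered work
```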
A precise description of what a machine tool is and does at an instant is given by a 12-component vector relating the linear and rotational degrees of freedom of the single workpiece and the single tool contacting that workpiece in any machine. To visualize this vector, it makes sense to arrange it in four rows of three columns, with the columns labeled x, y, and z, and the rows labeled spin work, move work, spin tool, and move tool. The position of the labels is arbitrary (there is no agreement in the literature of mechanical engineering on their order), but there are 12 degrees of freedom in a machine tool. It is important to remember that this describes an instant: that instant may be a preparatory moment before a tool makes contact with a workpiece, or an engaged moment during which contact between work and tool requires an input of rather large amounts of power to get work done, which is why machine tools are large, heavy, and stiff. Since these vectors describe instantaneous degrees of freedom, the vector structure is capable of expressing the changing mode of a machine tool as well as its fundamental structure, in the following way. Imagine a lathe spinning a cylinder on a horizontal axis with a tool ready to cut a face on that cylinder in some preparatory moment. The operator of such a lathe would lock the x-axis on the carriage of the lathe, establishing a new vector condition with a zero in the x-slide position for the tool; then unlock the y-axis on the cross slide (assuming the lathe in our example is so equipped); and then apply some method of traversing the facing tool across the face of the cylinder, at a depth of cut and a rotational speed that keep the cutting load within the power range of the motor driving the lathe (a sketch of this mode vector in code follows below). So the answer to what a machine tool is, is a very simple answer, but it is highly technical and is unrelated to the history of machine tools. The preceding gives an answer for what machine tools are; we may consider what they do also. Machine tools produce finished surfaces. They may produce any finish, from an arbitrary degree of very rough work to a specular, optical-grade finish whose further improvement is moot. Machine tools produce the surfaces comprising the features of machine parts by removing chips. These chips may be very rough or even as fine as dust. Every machine tool supports its removal process with a stiff, redundant, vibration-resisting structure, because each chip is removed in a semi-synchronous way, creating multiple opportunities for vibration to interfere with precision. Humans are generally quite talented in their freehand movements; the drawings, paintings, and sculptures of artists such as Michelangelo or Leonardo da Vinci, and of countless other talented people, show that the human freehand toolpath has great potential. The value that machine tools added to these human talents is in the areas of rigidity (constraining the toolpath despite thousands of newtons (pounds) of force fighting against the constraint), accuracy and precision, efficiency, and productivity. 
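To make the 12-degree-of-freedom mode vector and the lathe facing example above concrete, here is a minimal sketch (Python; the row and column labels follow the text, while the encoding of 0 for locked and 1 for free or driven, and the choice of z as the spindle axis, are illustrative assumptions):

```python
# The 4x3 mode vector: rows are spin/move for work and tool, columns x, y, z.
AXES = ("x", "y", "z")
ROWS = ("spin_work", "move_work", "spin_tool", "move_tool")

def mode_vector(**enabled):
    """Build the vector as nested dicts; 1 = free/driven, 0 = locked."""
    return {row: {ax: enabled.get(f"{row}_{ax}", 0) for ax in AXES}
            for row in ROWS}

# Facing cut on a lathe, per the text: the work spins about its (here, z)
# axis, the carriage x axis is locked, and the cross slide feeds the tool
# in y across the face of the cylinder.
facing = mode_vector(spin_work_z=1, move_tool_y=1)
for row, components in facing.items():
    print(row, components)
```

Changing which entries are nonzero expresses the machine's changing mode from one instant to the next, which is exactly the point made in the paragraph above.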
With a machine tool, toolpaths that no human muscle could constrain can be constrained; and toolpaths that are technically possible with freehand methods, but would require tremendous time and skill to execute, can instead be executed quickly and easily, even by people with little freehand talent (because the machine takes care of it). The latter aspect of machine tools is often referred to by historians of technology as "building the skill into the tool", in contrast to the toolpath-constraining skill being in the person who wields the tool. As an example, it is physically possible to make interchangeable screws, bolts, and nuts entirely with freehand toolpaths. But it is economically practical to make them only with machine tools. In the 1930s, the U.S. National Bureau of Economic Research (NBER) referenced the definition of a machine tool as "any machine operating by other than hand power which employs a tool to work on metal". The narrowest colloquial sense of the term reserves it only for machines that perform metal cutting—in other words, the many kinds of [conventional] machining and grinding. These processes are a type of deformation that produces swarf. However, economists use a slightly broader sense that also includes metal deformation of other types that squeeze the metal into shape without cutting off swarf, such as rolling, stamping with dies, shearing, swaging, riveting, and others. Thus presses are usually included in the economic definition of machine tools. For example, this is the breadth of definition used by Max Holland in his history of Burgmaster and Houdaille, which is also a history of the machine tool industry in general from the 1940s through the 1980s; he was reflecting the sense of the term used by Houdaille itself and other firms in the industry. Many reports on machine tool export and import and similar economic topics use this broader definition. The colloquial sense implying [conventional] metal cutting is also growing obsolete because of changing technology over the decades. The many more recently developed processes labeled "machining", such as electrical discharge machining, electrochemical machining, electron beam machining, photochemical machining, and ultrasonic machining, or even plasma cutting and water jet cutting, are often performed by machines that could most logically be called machine tools. In addition, some of the newly developed additive manufacturing processes, which are not about cutting away material but rather about adding it, are done by machines that are likely to end up labeled, in some cases, as machine tools. In fact, machine tool builders are already developing machines that include both subtractive and additive manufacturing in one work envelope, and retrofits of existing machines are underway. The natural language use of the terms varies, with subtle connotative boundaries. Many speakers resist using the term "machine tool" to refer to woodworking machinery (joiners, table saws, routing stations, and so on), but it is difficult to maintain any true logical dividing line, and therefore many speakers accept a broad definition. It is common to hear machinists refer to their machine tools simply as "machines". Usually the mass noun "machinery" encompasses them, but sometimes it is used to imply only those machines that are being excluded from the definition of "machine tool". 
This is why the machines in a food-processing plant, such as conveyors, mixers, vessels, dividers, and so on, may be labeled "machinery", while the machines in the factory's tool and die department are instead called "machine tools" in contradistinction. Regarding the 1930s NBER definition quoted above, one could argue that its specificity to metal is obsolete, as it is quite common today for particular lathes, milling machines, and machining centers (definitely machine tools) to work exclusively on plastic cutting jobs throughout their whole working lifespan. Thus the NBER definition above could be expanded to say "which employs a tool to work on metal or other materials of high hardness". And its specificity to "operating by other than hand power" is also problematic, as machine tools can be powered by people if appropriately set up, such as with a treadle (for a lathe) or a hand lever (for a shaper). Hand-powered shapers are clearly "the 'same thing' as shapers with electric motors except smaller", and it is trivial to power a micro lathe with a hand-cranked belt pulley instead of an electric motor. Thus one can question whether power source is truly a key distinguishing concept; but for economics purposes, the NBER's definition made sense, because most of the commercial value of the existence of machine tools comes about via those that are powered by electricity, hydraulics, and so on. Such are the vagaries of natural language and controlled vocabulary, both of which have their places in the business world. History Forerunners of machine tools included bow drills and potter's wheels, which had existed in ancient Egypt prior to 2500 BC, and lathes, known to have existed in multiple regions of Europe since at least 1000 to 500 BC. But it was not until the later Middle Ages and the Age of Enlightenment that the modern concept of a machine tool—a class of machines used as tools in the making of metal parts, and incorporating machine-guided toolpath—began to evolve. Clockmakers of the Middle Ages and Renaissance men such as Leonardo da Vinci helped expand humans' technological milieu toward the preconditions for industrial machine tools. During the 18th and 19th centuries, and even in many cases in the 20th, the builders of machine tools tended to be the same people who would then use them to produce the end products (manufactured goods). However, from these roots also evolved an industry of machine tool builders as we define them today, meaning people who specialize in building machine tools for sale to others. Historians of machine tools often focus on a handful of major industries that most spurred machine tool development. In order of historical emergence, they have been firearms (small arms and artillery); clocks; textile machinery; steam engines (stationary, marine, rail, and otherwise) (the story of how Watt's need for an accurate cylinder spurred Wilkinson's boring machine is discussed by Roe); sewing machines; bicycles; automobiles; and aircraft. Others could be included in this list as well, but they tend to be connected with the root causes already listed. For example, rolling-element bearings are an industry of themselves, but this industry's main drivers of development were the vehicles already listed—trains, bicycles, automobiles, and aircraft; and other industries, such as tractors, farm implements, and tanks, borrowed heavily from those same parent industries.
Machine tools filled a need created by textile machinery during the Industrial Revolution in England in the middle to late 1700s. Until that time, machinery was made mostly from wood, often including gearing and shafts. The increase in mechanization required more metal parts, which were usually made of cast iron or wrought iron. Cast iron could be cast in molds for larger parts, such as engine cylinders and gears, but was difficult to work with a file and could not be hammered. Red-hot wrought iron could be hammered into shapes. Room-temperature wrought iron was worked with a file and chisel and could be made into gears and other complex parts; however, hand working lacked precision and was a slow and expensive process. James Watt was unable to obtain an accurately bored cylinder for his first steam engine, trying for several years until John Wilkinson invented a suitable boring machine in 1774, boring Boulton & Watt's first commercial engine in 1776. The advance in the accuracy of machine tools can be traced to Henry Maudslay and was refined by Joseph Whitworth. That Maudslay had established the manufacture and use of master plane gages in his shop (Maudslay & Field), located on Westminster Road south of the Thames in London, about 1809 was attested to by James Nasmyth, who was employed by Maudslay in 1829 and documented their use in his autobiography. The process by which the master plane gages were produced dates back to antiquity but was refined to an unprecedented degree in the Maudslay shop. The process begins with three square plates, each given an identification (e.g., 1, 2 and 3). The first step is to rub plates 1 and 2 together with a marking medium (called bluing today), revealing the high spots, which would be removed by hand scraping with a steel scraper until no irregularities were visible. This would not produce true plane surfaces but a "ball and socket" fit of mated concave and convex surfaces, because such a mechanical fit, like two perfect planes, can slide over each other and reveal no high spots. The rubbing and marking are repeated after rotating 2 relative to 1 by 90 degrees to eliminate concave-convex "potato-chip" curvature. Next, plate number 3 is compared and scraped to conform to plate number 1 in the same two trials. In this manner plates number 2 and 3 would be identical. Next, plates number 2 and 3 would be checked against each other to determine what condition existed: either both plates were "balls" or "sockets" or "chips" or a combination. These would then be scraped until no high spots existed and then compared again to plate number 1. Repeating this process of comparing and scraping the three plates could produce plane surfaces accurate to within millionths of an inch (the thickness of the marking medium). The traditional method of producing the surface gages used an abrasive powder rubbed between the plates to remove the high spots, but it was Whitworth who contributed the refinement of replacing the grinding with hand scraping. Sometime after 1825, Whitworth went to work for Maudslay, and it was there that Whitworth perfected the hand scraping of master surface plane gages. In his paper presented to the British Association for the Advancement of Science at Glasgow in 1840, Whitworth pointed out the inherent inaccuracy of grinding due to the uncontrolled and thus unequal distribution of the abrasive material between the plates, which would produce uneven removal of material from the plates.
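The logic of the three-plate method can be made explicit with a simplified one-dimensional abstraction (an added illustration, not part of the historical account). Measure each plate's surface as a height function from a flat datum, and say that plate $b$ mates with plate $a$ when, turned face-down on $a$, the two touch everywhere:

$a(x) + b(-x) = 0 \quad \text{for all } x.$

If plates 2 and 3 are each scraped to mate with plate 1, then $p_2(x) = -p_1(-x) = p_3(x)$, so plates 2 and 3 are identical, as the text above notes. Requiring plates 2 and 3 to also mate with each other then forces $p_2(x) + p_2(-x) = 0$, eliminating every even-symmetric error such as the matched "ball and socket" curvature that a single pair of plates cannot detect (one plate's hollow fits the other's bulge exactly). The 90-degree rotation described above plays the complementary role at the pairwise stage, exposing the twisted "potato-chip" shapes that also pass a single straight rub.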
With the creation of master plane gages of such high accuracy, all critical components of machine tools (i.e., guiding surfaces such as machine ways) could then be compared against them and scraped to the desired accuracy. The first machine tools offered for sale (i.e., commercially available) were constructed by Matthew Murray in England around 1800. Others, such as Henry Maudslay, James Nasmyth, and Joseph Whitworth, soon followed the path of expanding their entrepreneurship from manufactured end products and millwright work into the realm of building machine tools for sale. Important early machine tools included the slide rest lathe, screw-cutting lathe, turret lathe, milling machine, pattern tracing lathe, shaper, and metal planer, which were all in use before 1840. With these machine tools the decades-old objective of producing interchangeable parts was finally realized. An important early example of something now taken for granted was the standardization of screw fasteners such as nuts and bolts. Before about the beginning of the 19th century, these were used in pairs, and even screws of the same machine were generally not interchangeable. Methods were developed to cut screw thread to a greater precision than that of the feed screw in the lathe being used. This led to the bar length standards of the 19th and early 20th centuries. American production of machine tools was a critical factor in the Allies' victory in World War II. Production of machine tools tripled in the United States in the war. No war was more industrialized than World War II, and it has been written that the war was won as much by machine shops as by machine guns. The production of machine tools is concentrated in about 10 countries worldwide: China, Japan, Germany, Italy, South Korea, Taiwan, Switzerland, US, Austria, Spain and a few others. Machine tool innovation continues in several public and private research centers worldwide. Drive power sources Machine tools can be powered from a variety of sources. Human and animal power (via cranks, treadles, treadmills, or treadwheels) were used in the past, as was water power (via water wheel); however, following the development of high-pressure steam engines in the mid 19th century, factories increasingly used steam power. Factories also used hydraulic and pneumatic power. Many small workshops continued to use water, human and animal power until electrification after 1900. Today most machine tools are powered by electricity; hydraulic and pneumatic power are sometimes used, but this is uncommon. Automatic control Machine tools can be operated manually, or under automatic control. Early machines used flywheels to stabilize their motion and had complex systems of gears and levers to control the machine and the piece being worked on. Soon after World War II, the numerical control (NC) machine was developed. NC machines used a series of numbers punched on paper tape or punched cards to control their motion. In the 1960s, computers were added to give even more flexibility to the process. Such machines became known as computerized numerical control (CNC) machines. NC and CNC machines could precisely repeat sequences over and over, and could produce much more complex pieces than even the most skilled tool operators. Before long, the machines could automatically change the specific cutting and shaping tools that were being used. For example, a drill machine might contain a magazine with a variety of drill bits for producing holes of various sizes. 
Previously, machine operators would usually have to either manually change the bit or move the work piece to another station to perform these different operations. The next logical step was to combine several different machine tools together, all under computer control. These are known as machining centers, and have dramatically changed the way parts are made. Examples Examples of machine tools are: Broaching machine, Drill press, Gear shaper, Hobbing machine, Hone, Lathe, Honing machine, Screw machines, Milling machine, Shear (sheet metal), Shaper, Bandsaw, Saws, Planer, Stewart platform mills, Grinding machines, and Multitasking machines (MTMs): CNC machine tools with many axes that combine turning, milling, grinding, and material handling into one highly automated machine tool. When fabricating or shaping parts, several techniques are used to remove unwanted metal. Among these are: Electrical discharge machining, Grinding (abrasive cutting), Multiple edge cutting tools, and Single edge cutting tools. Other techniques are used to add desired material. Devices that fabricate components by selective addition of material are called rapid prototyping machines. Adverse effects on humans Adverse effects mitigations Regulations Machine tool manufacturing industry The worldwide market for machine tools was approximately $81 billion in production in 2014 according to a survey by market research firm Gardner Research. The largest producer of machine tools was China with $23.8 billion of production, followed by Germany and Japan neck and neck with $12.9 billion and $12.88 billion respectively. South Korea and Italy rounded out the top 5 producers with revenue of $5.6 billion and $5 billion respectively. Safety See also References Bibliography A history most specifically of Burgmaster, which specialized in turret drills; but in telling Burgmaster's story, and that of its acquirer Houdaille, Holland provides a history of the machine tool industry in general between World War II and the 1980s that ranks with Noble's coverage of the same era (Noble 1984) as a seminal history. Later republished under the title From Industry to Alchemy: Burgmaster, a Machine Tool Company. The Moore family firm, the Moore Special Tool Company, independently invented the jig borer (contemporaneously with its Swiss invention), and Moore's monograph is a seminal classic of the principles of machine tool design and construction that yield the highest possible accuracy and precision in machine tools (second only to that of metrological machines). The Moore firm epitomized the art and science of the tool and die maker. A seminal classic of machine tool history. Extensively cited by later works. Collection of previously published monographs bound as one volume. A collection of seminal classics of machine tool history. Further reading A memoir that contains quite a bit of general history of the industry. A monograph with a focus on history, economics, and import and export policy. Original 1976 publication: LCCN 75-046133. One of the most detailed histories of the machine tool industry from the late 18th century through 1932. Not comprehensive in terms of firm names and sales statistics (like Floud focuses on), but extremely detailed in exploring the development and spread of practicable interchangeability, and the thinking behind the intermediate steps. Extensively cited by later works.
One of the most detailed histories of the machine tool industry from World War II through the early 1980s, relayed in the context of the social impact of evolving automation via NC and CNC. A biography of a machine tool builder that also contains some general history of the industry. Ryder, Thomas and Son, Machines to Make Machines 1865 to 1968, a centenary booklet, (Derby: Bemrose & Sons, 1968) External links Milestones in the History of Machine Tools Industrial machinery Machines Machining Tools Woodworking
Machine tool
Physics,Technology,Engineering
5,643
3,994,988
https://en.wikipedia.org/wiki/Thick%20set
In mathematics, a thick set is a set of integers that contains arbitrarily long intervals. That is, given a thick set $T$, for every $p \in \mathbb{N}$, there is some $n \in \mathbb{N}$ such that $\{n, n+1, n+2, \ldots, n+p\} \subset T$. Examples Trivially, $\mathbb{N}$ is a thick set. Other well-known sets that are thick include non-primes and non-squares. Thick sets can also be sparse, for example: $\bigcup_{n \in \mathbb{N}} \{10^n, 10^n + 1, \ldots, 10^n + n\}$. Generalisations The notion of a thick set can also be defined more generally for a semigroup, as follows. Given a semigroup $S$ and $A \subseteq S$, $A$ is said to be thick if for any finite subset $F \subseteq S$, there exists $x \in S$ such that $Fx = \{fx : f \in F\} \subseteq A$. It can be verified that when the semigroup under consideration is the natural numbers $\mathbb{N}$ with the addition operation $+$, this definition is equivalent to the one given above. See also Cofinal (mathematics) Cofiniteness Ergodic Ramsey theory Piecewise syndetic set Syndetic set References J. McLeod, "Some Notions of Size in Partial Semigroups", Topology Proceedings, Vol. 25 (Summer 2000), pp. 317-332. Vitaly Bergelson, "Minimal Idempotents and Ergodic Ramsey Theory", Topics in Dynamics and Ergodic Theory 8-39, London Math. Soc. Lecture Note Series 310, Cambridge Univ. Press, Cambridge, (2003). Vitaly Bergelson, N. Hindman, "Partition regular structures contained in large sets are abundant", Journal of Combinatorial Theory, Series A 93 (2001), pp. 18-36. N. Hindman, D. Strauss, Algebra in the Stone-Čech Compactification, p. 104, Def. 4.45. Semigroup theory Ergodic theory
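As an illustrative addendum to the article above (added code, not part of the original text): the sparse-but-thick example can be explored numerically. The block starting at 10^n is a run of n + 1 consecutive integers, so the set contains arbitrarily long intervals even though its density goes to zero.

def blocks_up_to(max_n):
    # Build the union of {10**n, ..., 10**n + n} for n = 0 .. max_n.
    s = set()
    for n in range(max_n + 1):
        s.update(range(10**n, 10**n + n + 1))
    return s

def longest_run(s):
    # Length of the longest interval of consecutive integers in s.
    best = 0
    for k in s:
        if k - 1 not in s:          # k starts a run
            length = 1
            while k + length in s:
                length += 1
            best = max(best, length)
    return best

t = blocks_up_to(6)
print(longest_run(t))                        # 7: the run starting at 10**6
print(len(t), "elements below", 10**6 + 7)   # only 28 -- the set is sparse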
Thick set
Mathematics
341
325,064
https://en.wikipedia.org/wiki/Scenic%20painting%20%28theatre%29
Theatrical scenic painting includes wide-ranging disciplines, encompassing virtually the entire scope of painting and craft techniques. An experienced scenic painter (or scenic artist) will have skills in landscape painting, figurative painting, trompe-l'œil, and faux finishing, and be versatile in different media such as acrylic, oil, and tempera paint. The painter might also be accomplished in three-dimensional skills such as sculpting, plastering and gilding. To select the optimal materials, scenic painters must also have knowledge of paint composition. The scenic painter takes direction from the theatre designer. In some cases designers paint their own designs. The techniques and specialized knowledge of the scenic painter replicate an image to a larger scale from a designer's maquette, perhaps with accompanying photographs, printouts and original research, and sometimes with paint and style samples. Often, custom tools are made to create the desired effect. History The first written description of scenic painting as an art form is from the Italian Renaissance, when Leon Battista Alberti examined Greek stage painting and decoration in the time of Aeschylus. During and after the Renaissance, the ability to draw in perspective became core to painting for the stage. In the late 19th century, it was not unusual for successful scenic artists to achieve celebrity status, as spectacular backdrops became fashionable. With the emergence of modern stage design in the early 20th century, painted scenery came to be considered "quaint". Since then, the practice of modern stage painting has evolved and continues to flourish today. Although the best scenic painters are rarely credited in theatre programs on the same level as scenic designers, they are highly respected in the theatre profession and critical to the creative process. Scenic paint Scenic paint has traditionally been mixed by the painter using pigment powder colour, a binder and a medium. The binder adheres the powder to itself and to the surface on which it is applied. The medium is a thinner which allows the paint to be worked more easily, disappearing as the paint dries. Today it is common to use brands of ready-made scenic paint, or pigment suspended in a medium to which a binder will be added. References Further reading Crabtree, Susan; Beudert, Peter (2011), Scenic Art for the Theatre, Focal Press. Scenic design Theatrical occupations Visual arts genres
Scenic painting (theatre)
Engineering
470
1,491,567
https://en.wikipedia.org/wiki/Rm%20%28Unix%29
rm (short for remove) is a basic command on Unix and Unix-like operating systems used to remove objects such as computer files, directories and symbolic links from file systems and also special files such as device nodes, pipes and sockets, similar to the del command in MS-DOS, OS/2, and Microsoft Windows. The command is also available in the EFI shell. Overview The rm command removes references to objects from the filesystem using the unlink system call, where those objects might have had multiple references (for example, a file with two different names), and the objects themselves are discarded only when all references have been removed and no programs still have open handles to the objects. This allows for scenarios where a program can open a file, immediately remove it from the filesystem, and then use it for temporary space, knowing that the file's space will be reclaimed after the program exits, even if it exits by crashing (a minimal sketch of this pattern appears at the end of this article). The command generally does not destroy file data, since its purpose is really merely to unlink references, and the filesystem space freed may still contain leftover data from the removed file. This can be a security concern in some cases, and hardened versions sometimes provide for wiping out the data as the last link is being cut; programs such as shred and srm are available which specifically provide data wiping capability. rm is generally only seen on UNIX-derived operating systems, which typically do not provide for recovery of deleted files through a mechanism like the recycle bin, hence the tendency for users to enclose rm in some kind of wrapper to limit accidental file deletion. There are undelete utilities that will attempt to reconstruct the index and can bring the file back if the parts were not reused. History On some old versions of Unix, the rm command would delete directories if they were empty. This behaviour can still be obtained in some versions of rm with the -d flag, e.g., the BSDs (such as FreeBSD, NetBSD, OpenBSD and macOS) derived from 4.4BSD-Lite2. The version of rm bundled in GNU coreutils was written by Paul Rubin, David MacKenzie, Richard Stallman, and Jim Meyering. This version also provides the -d option, to help with compatibility. The same functionality is provided by the standard rmdir command. The -i option in Version 7 replaced dsw, or "delete from switches", which debuted in Version 1. Doug McIlroy wrote that dsw "was a desperation tool designed to clean up files with unutterable names". The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. KolibriOS includes an implementation of the command. The command has also been ported to the IBM i operating system. Syntax rm deletes the file specified after options are added. Users can use a full path or a relative file path to specify the files to delete. rm does not delete a directory by default. For example, rm foo deletes the file "foo" in the directory the user is currently in. rm, like other commands, uses options to specify how it will behave: -r, "recursive," which removes directories, removing the contents recursively beforehand (so as not to leave files without a directory to reside in). -i, "interactive," which asks for every deletion to be confirmed. -f, "force," which ignores non-existent files and overrides any confirmation prompts (effectively canceling -i), although it will not remove files from a directory if the directory is write-protected.
-v, "verbose," which prints what rm is doing onto the terminal. -d, "directory," which deletes an empty directory, and only works if the specified directory is empty. --one-file-system, which only removes files on the same file system as the argument and ignores mounted file systems. rm can be overridden by a shell alias (C shell) or shell function (Bourne shell or Bash) of "rm -i" so as to avoid accidental deletion of files. If a user still wishes to delete a large number of files without confirmation, they can manually cancel out the -i argument by adding the -f option (as the option specified later on the expanded command line "rm -i -f" takes precedence). Unfortunately, this approach generates dangerous habits towards the use of wildcarding, leading to its own version of accidental removals. rm -rf (variously, rm -rf /, rm -rf *, and others) is frequently used in jokes and anecdotes about Unix disasters, such as the loss of many files during the production of the film Toy Story 2 at Pixar. The rm -rf / variant of the command, if run by a superuser, would cause every file accessible from the present file system to be deleted from the machine. rm is often used in conjunction with xargs to supply a list of files to delete: xargs rm < filelist Or, to remove all PNG images in all directories below the current one: find . -name '*.png' -exec rm {} + Permissions Usually, on most filesystems, deleting a file requires write permission on the parent directory (and execute permission, in order to enter the directory in the first place). (Note that, confusingly for beginners, permissions on the file itself are irrelevant. However, GNU rm asks for confirmation if a write-protected file is to be deleted, unless the -f option is used.) To delete a directory (with rm -r), one must delete all of its contents recursively. This requires that one have read, write, and execute permission to that directory (if it's not empty) and all non-empty subdirectories recursively (if there are any). The read permissions are needed to list the contents of the directory in order to delete them. This sometimes leads to an odd situation where a non-empty directory cannot be deleted because one doesn't have write permission to it and so cannot delete its contents; but if the same directory were empty, one would be able to delete it. If a file resides in a directory with the sticky bit set, then deleting the file requires one to be the owner of the file. Protection of the filesystem root Sun Microsystems introduced "rm -rf /" protection in Solaris 10, first released in 2005. Upon executing the command, the system now reports that the removal of / is not allowed. Shortly after, the same functionality was introduced into the FreeBSD version of the rm utility. GNU rm refuses to execute rm -rf / if the --preserve-root option is given, which has been the default since version 6.4 of GNU Core Utilities was released in 2006. In newer systems, this failsafe is always active, even without the option. To run the command, the user must bypass the failsafe by adding the option --no-preserve-root, even if they are the superuser. User-proofing Systems administrators, designers, and even users often attempt to defend themselves against accidentally deleting files by creating an alias or function along the lines of:

alias rm="rm -i"
rm () { /bin/rm -i "$@" ; }

This results in rm asking the user to confirm on a file-by-file basis whether it should be deleted, by pressing the Y or N key.
Unfortunately, this tends to train users to be careless about the wildcards they hand into their rm commands, as well as encouraging a tendency to alternately pound y and the return key to affirm removals, until just past the one file they needed to keep. Users have even been seen going as far as "yes | rm files", which automatically inserts "y" for each file. A compromise that allows users to confirm just once, encourages proper wildcarding, and makes verification of the list easier can be achieved with something like:

if [ -n "$PS1" ] ; then
  rm () {
    ls -FCsd "$@"
    echo 'remove[ny]? ' | tr -d '\012' ; read
    if [ "_$REPLY" = "_y" ]; then
      /bin/rm -rf "$@"
    else
      echo '(cancelled)'
    fi
  }
fi

It is important to note that this function should not be made into a shell script, which would run a risk of it being found ahead of the system rm in the search path, nor should it be allowed in non-interactive shells where it could break batch jobs. Enclosing the definition in the if [ -n "$PS1" ] ; then .... ; fi construct protects against the latter. There exist third-party alternatives which prevent accidental deletion of important files, such as "safe-rm" or "trash". Maximum command line argument limitation The GNU Core Utilities implementation used in multiple Linux distributions has limits on command line arguments. Arguments are nominally limited to 32 times the kernel's allocated page size; systems with a 4 KB page size would thus have an argument size limit of 128 KB. Before kernel 2.6.23 (released on 9 October 2007), the limits were defined at kernel compile time and could be modified by changing the variable MAX_ARG_PAGES in the include/linux/binfmts.h file. Newer kernels limit the maximum argument length to 25% of the maximum stack limit (ulimit -s). Exceeding the limit prompts the display of the error message /bin/rm: Argument list too long. See also srm (Unix): secure remove file in Unix unlink(): the underlying system call called by this user space program for its main functionality del (command) deltree dsw (command) - an obsolete Unix command for deleting difficult files References Further reading External links File deletion Standard Unix programs Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands IBM i Qshell commands
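As a closing illustration of the unlink behaviour described in the Overview above, here is a minimal sketch in Python (an illustrative addition, assuming a POSIX system; the filename is arbitrary):

import os

# Create a file, keep it open, and unlink it immediately: the name
# disappears from the filesystem, but the data remains usable through
# the open handle until the last reference (this file object) is closed.
f = open("scratch.tmp", "w+")
os.unlink("scratch.tmp")               # the same system call rm uses

print(os.path.exists("scratch.tmp"))   # False: no directory entry remains
f.write("temporary working data")
f.seek(0)
print(f.read())                        # the unnamed file is still usable
f.close()                              # now the kernel reclaims the space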
Rm (Unix)
Technology
2,141
69,715,647
https://en.wikipedia.org/wiki/Ruler%20function
In number theory, the ruler function of an integer $n$ can be either of two closely related functions. One of these functions counts the number of times $n$ can be evenly divided by two, which for the numbers 1, 2, 3, ... is 0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, ... Alternatively, the ruler function can be defined as the same numbers plus one, which for the numbers 1, 2, 3, ... produces the sequence 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, ... As well as being related by adding one, these two sequences are related in a different way: the second one can be formed from the first one by removing all the zeros, and the first one can be formed from the second one by adding zeros at the start and between every pair of numbers. For either definition of the ruler function, the rising and falling patterns of the values of this function resemble the lengths of marks on rulers with traditional units such as inches. These functions should be distinguished from Thomae's function, a function on real numbers which behaves similarly to the ruler function when restricted to the dyadic rational numbers. In advanced mathematics, the 0-based ruler function is the 2-adic valuation of the number, and the lexicographically earliest infinite square-free word over the natural numbers. It also gives the position of the bit that changes at each step of the Gray code. In the Tower of Hanoi puzzle, with the disks of the puzzle numbered in order by their size, the 1-based ruler function gives the number of the disk to move at each step in an optimal solution to the puzzle. A simulation of the puzzle, in conjunction with other methods for generating its optimal sequence of moves, can be used in an algorithm for generating the sequence of values of the ruler function in constant time per value. References External links Calculus Special functions Number theory
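As an illustrative addendum to the article above (added here, not part of the original text): because n & -n isolates the lowest set bit of a positive integer n, both versions of the ruler function have one-line implementations in Python:

def ruler0(n: int) -> int:
    # 0-based ruler function: the 2-adic valuation of n (n >= 1);
    # also the index of the bit that changes at step n of the Gray code.
    return (n & -n).bit_length() - 1

def ruler1(n: int) -> int:
    # 1-based ruler function: ruler0(n) + 1; also the number of the disk
    # to move at step n of an optimal Tower of Hanoi solution.
    return (n & -n).bit_length()

print([ruler0(n) for n in range(1, 13)])   # [0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2]
print([ruler1(n) for n in range(1, 13)])   # [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3]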
Ruler function
Mathematics
361
9,133,272
https://en.wikipedia.org/wiki/Vepalimomab
Vepalimomab is an experimental mouse monoclonal antibody intended for the treatment of inflammation. It blocks vascular adhesion protein 1. Development of the drug was discontinued in 2002. References Monoclonal antibodies Abandoned drugs
Vepalimomab
Chemistry
47
3,919,958
https://en.wikipedia.org/wiki/Cc%3AMail
cc:Mail is a discontinued store-and-forward LAN-based email system originally developed on Microsoft's MS-DOS platform by Concentric Systems, Inc. in the 1980s. The company, founded by Robert Plummer, Hubert Lipinski, and Michael Palmer, later changed its name to PCC Systems, Inc., and then to cc:Mail, Inc. At the height of its popularity, cc:Mail had about 14 million users, and won various awards for being the top email software package of the mid-1990s. Architecture overview In the 1980s and 1990s, it became common in office environments to have a personal computer on every desk, all connected via a local area network (LAN). Typically, (at least) one computer is set up as a file server, so that any computer on the LAN can store and access files on the server as if they were local files. cc:Mail was designed to operate in that environment. The central point of focus in the cc:Mail architecture is the cc:Mail "post office," which is a collection of files located on the file server and consisting of the message store and related data. However, no cc:Mail software needs to be installed or run on the file server itself. The cc:Mail application is installed on the user desktops. It provides a user interface, and reads and writes to the post office files directly in order to send, access, and manage email messages. This arrangement is called a "shared-file mail system" (which was also implemented later in competing products such as Microsoft Mail). This is in contrast to a "client/server mail system" which involves a mail client application interacting with a mail server application (the latter then being the focal point of message handling). Client/server mail was added later to the cc:Mail product architecture (see below), and also became available in competing offerings (such as Microsoft Exchange). Other than the cc:Mail desktop application, key software elements of the cc:Mail architecture include cc:Mail Router (for transferring messages between post offices, possibly in distant locations, and for providing a dial-in access point for users using the mobile version of the cc:Mail desktop application), gateways (providing links to other mail system types), and various administrative tools. Like the cc:Mail desktop application, all of these can access the post office via the file-sharing facility of the local area network. However, some administrative functions required exclusive access to the post office, so post offices would be taken offline periodically for necessary maintenance (such as recovering disk space from deleted messages). Message store The cc:Mail message store is based on a related set of files including a message store file, a directory and index file, and user files. In this structure, multiple users may have a reference in their individual files to the same message, thus the product offered a single instance message store. Message references in user files relate to message offsets stored in an indexed structure. Message offsets refer to locations within the message store file which is common to all users within a given database or "post office". Client technology The cc:Mail system provided native email clients for DOS, Microsoft Windows, OS/2, Macintosh, and Unix (the MIT X Window System under HP-UX and Solaris). cc:Mail allowed client access via native clients, web browsers, POP3 and IMAP4. cc:Mail provided the first commercial web-based email product in 1995. 
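The shared-file, single-instance message store described in the Message store section above can be pictured with a small conceptual model. The following sketch is purely illustrative, with invented class and field names; it does not reflect the actual cc:Mail file formats.

class PostOffice:
    # Conceptual stand-in for the post office files on the file server:
    # one shared message store plus per-user reference lists.
    def __init__(self):
        self.message_store = []    # each message is stored exactly once
        self.mailboxes = {}        # user name -> list of store offsets

    def deliver(self, sender, recipients, body):
        offset = len(self.message_store)            # "offset" into the store
        self.message_store.append((sender, body))
        for user in recipients:                     # recipients share one copy
            self.mailboxes.setdefault(user, []).append(offset)

    def read(self, user):
        return [self.message_store[o] for o in self.mailboxes.get(user, [])]

po = PostOffice()
po.deliver("alice", ["bob", "carol"], "Status report attached")
assert po.read("bob") == po.read("carol")   # one stored copy, two references

In the real system, desktop clients read and wrote such shared files directly over the LAN, which is what made both the single-instance storage and the corruption risks described below possible.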
MTA (Router) The cc:Mail MTA or Router, which ran on DOS, 16-bit Windows, Windows NT, and OS/2, supported file access, asynchronous communications, and various network protocols including Novell SPX and TCP/IP. The cc:Mail Router also provided remote access to end users via dial-up and network protocols such as TCP/IP. The "remote call through" feature of the cc:Mail Router made it possible for a mobile user to connect through a single point to access any cc:Mail database within a given cc:Mail system. Various connection types and schedules could be configured, along with conditions related to message attributes such as priority or message size, to create complex message routing topologies. Gateways The cc:Mail system offered a wide range of email gateways, connectors, and add-on products, including links to SMTP, UUCP, IBM PROFS, pager networks, fax, commercial email services such as MCI, and more. Directory services cc:Mail provided directory synchronization throughout a system via an Automatic Directory Exchange (ADE) feature which supported a number of "propagation types", such as peer, superior, and subordinate, from which sophisticated topologies could be constructed. cc:Mail also provided an email-based newsgroup or discussion-like feature referred to as Bulletin Boards, which were propagated and synchronized using similar mechanisms. Related features included the ability to synchronize the cc:Mail directory with other directories, such as that of Novell NetWare. Server technology The core cc:Mail technology relied on OSI model network operating systems such as Novell NetWare. These network operating systems provided redirection of native operating system file I/O, allowing network nodes to access server-based files transparently, as well as concurrently. Delivery of messages in cc:Mail is time invariant, meaning that many database changes, such as message deliveries and deletions, can be under way at the same time without conflicting. Fundamentally, time invariance is made possible in OSI model network operating systems by the combination of the ability to write data to a file system past the end of a file and the ability to lock a record within a file. Advantages The shared file access architecture of cc:Mail offered significant performance benefits and made it possible for cc:Mail to implement a single instance message store years in advance of other products. The file-based nature of the message store also made the system very flexible and, in some respects (e.g., moving a database to a new server), easy to manage. Criticisms The architectural approach of cc:Mail had drawbacks both in terms of scalability and in terms of vulnerability of cc:Mail databases to data corruption due to network errors or network operating system software defects. The cc:Mail system became notorious for its tendency to suffer database corruptions. Additionally, the technology was originally developed in a 1980s environment comprising disconnected LANs linked by dial-up connections. While the technology adapted well to WAN environments due to the robust nature of the Router, the system was best suited to a highly distributed deployment model. Client access over a WAN was not recommended because of poor performance related to the network traffic overhead of file I/O redirection and because of increased risk of database corruption.
Although automation was possible, maintenance of large numbers of databases, each with relatively few users, was undesirable compared to highly centralized client/server systems where client access could be reliably provided over a WAN. Client/Server cc:Mail cc:Mail developed a native cc:Mail server, cc:Guardian, which allowed superior scalability and reliable client access over a WAN, and virtually eliminated database corruption by removing direct file I/O access to the database. At the same time the development of POP3 and IMAP4 servers provided integration with Internet standards-based client/server technologies. With the development of cc:Guardian and with support for POP3 and IMAP4, cc:Mail evolved into a true client/server platform. However, customers never deployed cc:Mail as a client/server solution in large numbers. Acquisition by Lotus Lotus Development acquired cc:Mail, Inc. (formerly PCC Systems), which was a Silicon Valley startup, in 1991 and used the cc:Mail technology to enhance Lotus Notes. Lotus Notes features derived from cc:Mail included Shared Mail, client type-ahead addressing, enhancements to the Notes MTA (also called Router), and the Notes Passthru feature. Lotus developed a version of cc:Mail Remote for the HP 95LX. cc:Mail Remote was also included in the built-in software of the HP 100LX, HP 200LX and HP OmniGo 700LX. Lotus, which was acquired by IBM in 1995, attempted to move cc:Mail customers to Lotus Notes, which was a superior client/server platform, but its efforts met with limited success, because of early challenges in the area of coexistence and migration between cc:Mail and Notes and because Lotus was focused on groupware rather than simple email. Microsoft, which provided a simpler migration path and a more focused solution (email), succeeded in winning the majority of the cc:Mail installed base in the United States. End of life LAN-based email technology was rendered obsolete by client/server email systems such as Lotus Notes and Microsoft Exchange. The final version of cc:Mail was 8.5 and was released in 2000. October 31, 2000: cc:Mail withdrawn from the market. January 31, 2001: All cc:Mail development ceased. October 31, 2001: cc:Mail telephone support ceased. References Further reading Message transfer agents Email systems
Cc:Mail
Technology
1,888
177,309
https://en.wikipedia.org/wiki/Lighting%20design
In theatre, a lighting designer (or LD) works with the director, choreographer, set designer, costume designer, and sound designer to create the lighting, atmosphere, and time of day for the production in response to the text, while keeping in mind issues of visibility, safety, and cost. The LD also works closely with the stage manager on show control programming, if show control systems are used in that production. Outside stage lighting, the job of a lighting designer can be much more diverse, and they can be found working on rock and pop tours, corporate launches, art installations, or lighting effects at sporting events. During pre-production The role of the lighting designer varies greatly within professional and amateur theater. For a Broadway show, a touring production, and most regional and small productions, the LD is usually an outside freelance specialist hired early in the production process. Smaller theater companies may have a resident lighting designer responsible for most of the company's productions or rely on a variety of freelance or even volunteer help to light their productions. At the off-Broadway or off-off-Broadway level, the LD will occasionally be responsible for much of the hands-on technical work (such as hanging instruments, programming the light board, etc.) that would be the work of the lighting crew in a larger theater. The LD will read the script carefully and make notes on changes in place and time between scenes, and will have meetings (called design or production meetings) with the director, designers, stage manager, and production manager to discuss ideas for the show and establish budget and scheduling details. The LD will also attend several later rehearsals to observe the way the actors are being directed to use the stage area ('blocking') during different scenes and will receive updates from the stage manager on any changes that occur. The LD will also ensure that they have an accurate plan of the theatre's lighting positions and a list of its equipment, as well as an accurate copy of the set design, especially the ground plan and section. The LD must consider the show's mood and the director's vision in creating a lighting design. To help communicate the artistic vision, the LD may employ renderings, storyboards, photographs, reproductions of artwork, or mockups of effects to help communicate how the lighting should look. Various forms of paperwork are essential for the LD to successfully communicate their design to various production team members. Examples of typical paperwork include cue sheets, light plots, instrument schedules, shop orders, and focus charts. Cue sheets communicate the placement of cues that the LD has created for the show, using artistic terminology rather than technical language, and information on exactly when each cue is called, so that the stage manager and the assistants know when and where to call the cue. Cue sheets are of the most value to stage management. The light plot is a scale drawing that communicates the location of lighting fixtures and lighting positions so a team of electricians can independently install the lighting system. Next to each instrument on the plan will be information for any color gel, gobo, or other accessories that need to go with it, and its channel number. Often, paperwork listing all of this information is also generated by using a program such as Lightwright.
The lighting designer uses this paperwork to aid in the visualization of not only ideas but also simple lists to assist the master electrician during load-in, focus, and technical rehearsals. Professional LDs generally use special computer-aided design packages to create accurate and easily readable drafted plots that can be swiftly updated as necessary. The LD will discuss the plot with the show's production manager and the theatre's master electrician or technical director to make sure there are no unforeseen problems during load-in. The lighting designer is responsible, in conjunction with the production's independently hired production electrician (who will interface with the theater's master electrician), for directing the theater's electrics crew in the realization of their designs during the technical rehearsals. After the electricians have hung, circuited, and patched the lighting units, the LD will direct the focusing (pointing, shaping and sizing of the light beams) and gelling (coloring) of each unit. After focus has occurred, the LD usually sits at a temporary desk (tech table) in the theater (typically on the center line in the middle of the house) where they have a good view of the stage and work with the light board operator, who will either be seated alongside them at a portable control console or talk via headset to the control room. At the tech table, the LD will generally use a Magic Sheet, which is a pictorial layout of how the lights relate to the stage, so they can have quick access to the channel numbers that control particular lighting instruments. The LD may also have a copy of the light plot and channel hookup, a remote lighting console, a computer monitor connected to the light board (so they can see what the board op is doing), and a headset, though in smaller theatres this is less common. There may be a time allowed for pre-lighting or "pre-cueing", a practice that is often done with people known as Light Walkers, who stand in for performers so the LD can see what the light looks like on bodies. At an arranged time, the performers arrive and the production is worked through in chronological order, with occasional stops to correct sound, lighting, entrances, etc.; this is known as a "cue-to-cue" or tech rehearsal. The lighting designer will work constantly with the board operator to refine the lighting states as the technical rehearsal continues, but because the focus of a "tech" rehearsal is the production's technical aspects, the LD may require the performers to pause ("hold") frequently. Nevertheless, any errors of focusing or changes to the lighting plan are corrected only when the performers are not onstage. These changes take place during 'work' or 'note' calls. The LD only attends these notes calls if units are hung or rehung and require additional focusing. The LD or assistant lighting designer (also known as the ALD; see below for description) will be in charge if in attendance. If the only work to be done is maintenance (i.e. changing a lamp or burnt-out gel), then the production or master electrician will be in charge and will direct the electrics crew. After the tech process, the performance may (or may not, depending on time constraints) go into dress rehearsal without a ticketed audience or previews with a ticketed audience. During this time, if the cueing is finished, the LD will sit in the audience and take notes on what works and what needs changing.
At this point, the stage manager will begin to take over the work of calling cues for the light board op to follow. Generally, the LD will stay on the headset, and may still have a monitor connected to the light board in case of problems, or will be in the control booth with the board operator when a monitor is not available. Changes will often occur during notes call, but if serious problems occur, the performance may be halted and the issue will be resolved. Once the show is open to the public, the lighting designer will often stay and watch several performances of the show, making notes each night and making desired changes the next day during notes call. If the show is still in previews, then the LD will make changes, but once the production officially opens, normally, the lighting designer will not make further changes. Changes should not be made after the lighting design is finished, and never without the LD's approval. There may be times when changes are necessary after the production has officially opened. Reasons for changes after opening night include: casting changes; significant changes in blocking; addition, deletion or rearrangement of scenes; or the tech and/or preview period (if there was a preview period) was too short to accommodate as thorough a cueing as was needed (this is particularly common in dance productions). If significant changes need to be made, the LD will come in and make them; however, if only smaller changes are needed, the LD may opt to send the ALD. If a show runs for a particularly long time, then the LD may come in periodically to check the focus of each lighting instrument and whether the units are retaining their color (some gel, especially saturated gel, loses its richness and can fade or 'burn out' over time). The LD may also sit in on a performance to make sure that the cues are still being called at the right place and time. The goal is often to finish by the opening of the show, but what is most important is that the LD and the directors believe that the design is finished to the satisfaction of each. If that happens to be by opening night, then after opening no changes are normally made to that particular production run at that venue. The general maintenance of the lighting rig then becomes the responsibility of the master electrician. In small theatres It is uncommon for a small theatre to have a very large technical crew, as there is less work to do. Many times, the lighting crew of a small theater will consist of a single lighting designer and one to three people, who collectively are in charge of hanging, focusing, and patching all lighting instruments. The lighting designer, in this situation, commonly works directly with this small team, fulfilling the role of both master electrician and lighting designer. Many times the designer will directly participate in the focusing of lights. The same crew will generally also program cues and operate the light board during rehearsals and performances. In some cases, the light board and sound board are operated by the same person, depending on the complexity of the show. The lighting designer may also take on other roles in addition to lights when they are finished hanging lights and programming cues on the board. Advances in visualization and presentation As previously mentioned, it is difficult to fully communicate the intent of a lighting design before all the lights are installed and all the cues are written.
With the advancement in computer processing and visualization software, lighting designers are now able to create computer-generated images (CGI) that represent their ideas. The lighting designer enters the light plot into the visualization software and then enters the ground plan of the theater and set design, giving as much three-dimensional data as possible (which helps in creating complete renderings). This creates a 3D model in computer space that can be lit and manipulated. Using the software, the LD can use the lights from the plot to create actual lighting in the 3D model, with the ability to define parameters such as color, focus, gobo, beam angle, etc. The designer can then take renderings or "snapshots" of various looks, which can be printed out and shown to the director and other members of the design team. Mockups and lighting scale models In addition to computer visualization, either full-scale or small-scale mockups are a good method for depicting a lighting designer's ideas. Fiber optic systems such as LightBox or Luxam allow a user to light a scale model of the set. For example, a set designer can create a model of the set in 1/4" scale, and the lighting designer can then take the fiber optic cables and attach them to scaled-down lighting units that can accurately replicate the beam angles of specified lighting fixtures. These "mini lights" can then be attached to cross pieces simulating different lighting positions. Fiber optic fixtures have the capacity to simulate attributes of full-scale theatrical lighting fixtures, including color, beam angle, intensity, and gobos. The most sophisticated fiber optic systems are controllable through computer software or a DMX-controlled light board. This gives the lighting designer the ability to mock up real-time lighting effects as they will look during the show. Additional members of the lighting design team If the production is large or especially complex, the lighting designer may hire additional lighting professionals to help execute the design. Associate lighting designer The associate lighting designer (associate LD) will assist the lighting designer in creating and executing the lighting design. While the duties that an LD may expect the associate LD to perform may differ from person to person, usually the associate LD will do the following: Attend design and production meetings with or in place of the LD Attend rehearsals with or in place of the LD and take notes of specific design ideas and tasks that the lighting department needs to accomplish Assist the LD in generating the light plot, channel hookup and sketches If needed, the associate may need to take the set drawings and put them into a CAD program to be manipulated by the LD (however, this job is usually given to the assistant LD if there is one). The assistant LD may be in charge of running focus, and may even direct where the lights are to be focused. The associate is generally authorized to speak on behalf of the LD and can make creative and design decisions when needed (and when authorized by the LD). This is one of the biggest differences between the associate and the assistant. Assistant lighting designer The assistant lighting designer (assistant LD) assists the lighting designer and associate lighting designer. Depending on the particular arrangement, the ALD may report directly to the LD, or they may in essence be the associate's assistant. There also may be more than one assistant on a show depending on the size of the production.
The ALD will usually: Attend design and production meetings with the LD or the associate LD Attend rehearsals with the LD or the associate LD Assist the LD in generating the light plot and channel hookup. If the plot is to be computer generated, the ALD is the one who physically enters the information into the computer. The ALD may run errands for the LD such as picking up supplies or getting the light plot printed in large format. The ALD will help the Associate LD in running focus. The ALD may take Focus Charts during focus. Track and coordinate followspots (if any exist for the production) and generate paperwork to aid in their cueing and color changes. In rare instances the ALD may be the light board operator. See also List of lighting designers Architectural lighting design Landscape lighting Master electrician Professional Lighting and Sound Association References Stage Lighting Design: The Art, the Craft, the Life, by Richard Pilbrow on books.google.com Stage Lighting Design: A Practical Guide, Neil Fraser, on books.google.com A Practical Guide to Stage Lighting, By Steven Louis Shelley, on books.google.com The Lighting Art: The Aesthetics of Stage Lighting Design, by Richard H. Palmer, on books.google.com Stage lighting design in Britain: the emergence of the lighting designer, 1881-1950, by Nigel H. Morgan, on books.google.com Scene Design and Stage Lighting By R. Wolf, Dick Block, on books.google.com External links stagelightingprimer.com, Stage Lighting for Students northern.edu, A brief history of stage lighting Broadcasting occupations Design occupations Filmmaking occupations Landscape and garden designers Mass media occupations Television terminology Theatrical occupations Stage crew Stage lighting
Lighting design
Engineering
3,075
27,499,338
https://en.wikipedia.org/wiki/St%20Theresa%27s%20Independent%20State%20Grammar%20School%20for%20Girls%20%28and%20Boys%29
The New Coalition Academy was a column in Private Eye that depicted the UK coalition government led by David Cameron and Nick Clegg as if they were in fact taking over a failing school. The first episode explained that "Brown's Comprehensive" had been replaced by the Academy, and that the new motto was "Duo in Uno" (Latin for "Two in One"). From May 2015 the academy was renamed the "Cameron Free School", reflecting the Conservative majority government. In July 2016 the school was renamed "St Theresa's Independent State Grammar School for Girls (and Boys)". Following the 2017 General Election, the school incorporated the William III Orange Academy, in reference to the confidence and supply deal that the Democratic Unionist Party reached with the second ministry of Theresa May. Style It commonly includes a Headmaster's Message, though that is sometimes substituted by a message from the Deputy Head. The Deputy Head's Message is often cut short due to "insufficient space". Since 2016, the Deputy Head's Message has been replaced by The Bursar Writes...; this too is often cut short, however. "Pupils" are invariably referred to by their surname; for example, Nick Ferrari is "Ferrari, N" and Danny Finkelstein is "Finkelstein, D", who contributes jokes. Sports events such as The Ashes are referenced through the school's sports teams (the First XI or First XV). Defence-related news is referenced through the school's Combined Cadet Force and the school submarine. Characters 2010–15 David Cameron is the Headmaster. Nick Clegg is the Deputy Headmaster. Miriam Clegg is the wife of the Deputy Headmaster, who does not want her children to go to the Academy, but instead to private schools. George Osborne is the Bursar. Danny Alexander is his assistant. Vince Cable is the Head of Business Studies. John Bercow is the Head of the Debating Society. Rupert Murdoch is a school financier and Rebekah Brooks is his caretaker. Andy Coulson was a miscreant student whom the Headmaster put on the school switchboard for a time. Anna Soubry teaches in the Military Studies department. Chris Grayling was the Careers Master who was promoted to become Head of Discipline. Boris Johnson is a personal friend of the Headmaster; he is considered a potential Headmaster himself and is therefore constantly discredited by the Headmaster. Liam Fox was the head of the CCF, who got into trouble for allowing his personal friend Adam Werritty to appear in his classroom without authorisation. David Davis is a former member of staff who attempted to become Headmaster. Theresa May is Head of Discipline. Michael Gove is the Curriculum Director, often on the "naughty chair". Lord Coe is the Head of Physical Education, who was charged with organising the Sports Day. Dennis Skinner is an old school retainer. Chris Huhne was the head of Environmental Studies until he resigned. He was having an affair with his secretary, Carina Trimmingham. Ed Davey is the new head of Environmental Studies. Andrew Lansley, who has now gone on leave, is in charge of reorganising the Sanatorium. Rowan Williams, known as Reverend Beardie, was the chaplain of the Academy until his departure in December and replacement by Justin Welby (Reverend Oilwellby), who used to run the local petrol station. Jeremy Hunt was the head of Media Studies and has been reassigned to running the sanatorium, where he intends to put satellite dishes on the roof. Andrew Mitchell is the head of Staff Discipline who got into trouble for losing his temper at some school security guards.
He was later forced to resign. Justin Welby is the school chaplain. Justine Greening is a teacher who is opposed to the opening of a third driveway in the school on the grounds that she lives too close to it. Nadine Dorries is a teacher who wanted to mount a dartboard at the school Curry House. Lynton Crosby is an attack dog bought by the Headmaster from his old friend Mr. Johnson to be a key part of the Senior Staff Management Directive team. Iain Duncan Smith is the new Careers Master. Ed Miliband is a supply teacher. Fraser Nelson is a Year 12 student who is Secretary of the Politics Society. Phillip Schofield and Holly Willoughby are members of the Sixth Form Media Studies Group. Owen Paterson is Head of Catering. Eric Pickles is a teacher who ate most of the Christmas dinner. Nigel Farage is a teacher who dislikes teaching about Europe. He also runs the fruitcake stall in the school. Maria Miller is Head of Cultural Studies. Nick Boles is Head of Planning. Malcolm Rifkind is Head of Intelligence Testing. Nick Ferrari is a student who runs the Sixth Form Media Service. Simon Hughes is a teacher who commonly attempts to undermine the Deputy Headmaster. Steve Hilton used to work in the Headmaster's Office, as Head of Blue Skies Thinking. He has moved to California to teach and has criticised the Headmaster. Oliver Letwin is a teacher who forged an agreement with the Deputy Headmaster on how to regulate the school magazine. Abu Qatada is a Year 7 student whom the Admissions Secretary is trying to expel. Eric Joyce is a supply teacher who got in trouble for starting a drunken brawl in the tuck shop. Roy Hodgson is the coach of the 1st XI, which played matches in Poland and Ukraine. Bob Diamond is the local bank manager. Anish Kapoor is a Year 7 student who has made a sculpture for the Sports Day. Characters 2015–16 Nicola Sturgeon runs a shortbread stall. Alex Salmond is her assistant. Jeremy Corbyn is a classroom assistant. Grant Shapps is a teacher who was forced to leave after accusations of bullying. Andy Murray is a pupil who captains the school tennis squad. Characters 2016 onwards Theresa May is the Headmistress. Philip May is the Headmistress's husband. Philip Hammond is the bursar. Angela Merkel is the Headmistress of Vorsprung Durch Technikal Kollege. François Hollande is the Headmaster of Grande Ecole de la Récession. Jeremy Corbyn is still a teaching assistant. John Redwood is Mr. Deadwood, a teacher complaining about the new Brexit Fudge before tasting it. Iain Duncan Smith is Mr. Ian Dunce Smith, a teacher complaining about the new Brexit Fudge before tasting it. Justine Greening is the Head of Educational Standards. She left her position after being offered a new job as Head of the Stationery Cupboard. Jeremy Hunt remains in charge of the sanatorium, which is often closed due to Dr Junior's unwillingness to work. Sir Alan Duncan is a teacher who is placed in charge of school paper clips. Vladimir Putin is the Headmaster of the Kremlin School of Advanced Intelligence. Barack Obama is the Headmaster of the Washington Drone Academy. Boris Johnson is placed in charge of the school whilst the Headmistress is on holiday. He is also the Head of Foreign Studies. David Davis shares the school accommodation with Boris Johnson. Liam Fox shares the school accommodation "Squabbling House" with Johnson and Davis. David Cameron is the former Headmaster. Michael Gove is a former teacher at the school, who was recently fired. Following the results of the School Debate, he was rehired. 
George Osborne is the former school bursar who has gone back to live with his parents. Nicky Morgan is the former head of the school curriculum, who now wanders around the grounds. Shinzo Abe is the Headmaster of the Advanced Kamikaze Institute. Ban Ki-moon is the head of the United Nations of Headmasters. George Hollingbery is a junior staff member. Steve Hilton is a former teaching assistant to David Cameron. Peter Bone is one of the eldest teachers. Nick Timothy is Mrs May's adviser. He was sacked following the result of the School Debate. Donald Tusk is in charge of the seating plan at the European Education Union. Ed Balls is a competitor in Very Strictly Come Dancing. Zac Goldsmith is a teacher who failed his audition for the role of Mayor of London in the school panto, before leaving the school entirely. Michael Heseltine is a former Deputy Head, described as "the worst Headmaster we never had." He is also currently in trouble with the RSPCA for strangling his dog. Carol Vorderman was the winner of the school's maths prize in 1976. Stephen Hawking was the Head of Astrophysics from 1900 to 1943. Mark Carney is the local bank manager who was caught in a conspiracy to blow up the school. Gina Miller is a self-appointed Health & Safety Representative. Nigel Farage no longer runs the school's fruitcake stall, as he is now the First Locker-Room Banter-Buddy to Donald Trump. He is also known as Mr Farago, or Mr Far-Right. Donald Trump is the Headmaster of the Washington Shutdown Finance College. Beata Szydło is the Headmistress of the Warsaw Academy of Building and Plumbing Sciences. Fiona Hill is the Headmistress's secretary. She was sacked following the result of the School Debate. Chris Grayling is the head of school transport. Sir Ivan Rogers worked on the school's Outreach Programme but had to resign. Nicola Sturgeon is Nicola Krankie, the Headmistress of the Never-Will-Be-Independent School in Edinburgh. She does still run the shortbread stall. Recep Erdoğan is the Headmaster of the Turkish School of Extremely Hard Knocks. Kenneth Clarke is one of the older teachers at the school. Chloe Smith is the former assistant in the Bursar's office. Samantha Cameron is SamCam Cameron, the wife of the former Headmaster. John Bercow is a moderator of the Debating Society. Sarah Vine is a member of the Sixth Form Fashion Society. Her favourite teacher is Michael Gove. Paul Nuttall helps run the fruitcake stall. He is also known as Mr Nutter or Mr Nutcase. Following the collapse of the stall after the School Debate, he put the stall up for sale. Douglas Carswell helps run the fruitcake stall. He is also known as Mr Wonky-Jaw and Mr Carcrash. He sought to rejoin the teaching staff after developing a nut allergy, but eventually settled for setting up his own nut stall. Michael Crick is a member of the Sixth Form Media Society. Jacob Rees-Mogg is the TeachFirst Classics Master assistant. Nick Robinson is a member of the Debating Society. Eddie Mair is a member of the Debating Society. Geordie Greig is a member of the Sixth Form Media Society. Lynton Crosby is the school's attack dog. Following the results of the School Debate, he was put down. Alex Jones interviewed the Headmistress on "The One-derful Headmistress Show." Matt Baker interviewed the Headmistress on "The One-derful Headmistress Show." Ed Miliband is a supply teacher. Andrew Neil is a student who interviewed the Headmistress during the School Debate. Robert Peston is a student who tested Boris Johnson during the School Debate. 
Damian Green is a friend of the Headmistress from Oxford. He was later sacked from his position. Arlene Foster is the Head of Music, as well as the Head of Biblical History, Sex Education, and the school's energy supply. She has also taken over the Headmistress's office. Emmanuel Macron is the Headmaster of the Académie Nouveau Napoleon. Elizabeth II is the school's patron. Ruth Davidson is the Honorary Commander of the Combined Cadet Force. Andrew Mitchell is a teacher who was overheard saying that the Headmistress should resign following the results of the School Debate. Robbie Gibb is the Headmistress's assistant. Amber Rudd is a teacher who is very loyal to the Headmistress and touted as a potential successor. Geoffrey Boycott is the former 1st XI batsman. Michael Fallon was a member of staff who had to stand down for undisclosed reasons. Priti Patel was a member of staff who had to stand down after having secret meetings with Israeli headmasters. Gavin Williamson is a teacher who runs the CCF after being promoted for 'successfully' making sure that no teachers were involved in any scandals. He also has a pet tarantula, Cronos. Penny Mordaunt is a teacher well known for her use of the school's diving board. Stephen Hammond is a teacher who was fired after voting against the Headmistress in the Big School Debate. Toby Young was the new watchdog for higher standards in education, known for his sense of humour. He was fired before taking up his position. Andrew Adonis was an advisor to the Headmistress on School Infrastructure, but he resigned from his post. Tracey Crouch is the Minister for Loneliness, who needs to see the Headmistress frequently. Henry Bolton runs the fruitcake stall, but his extra-fruity fruitcakes are not popular with students. Desmond Swayne is a teacher who fell asleep while Kenneth Clarke talked about leaving the European Education Union. Ben Bradley is the new Head of Sixth Form, but may have to leave his position due to some poor jokes. References External links Private Eye – The New Coalition Academy British political satire Private Eye Satirical columns Cultural depictions of prime ministers of the United Kingdom Cultural depictions of politicians Cultural depictions of British people Nick Clegg Parodies of Donald Trump Cultural depictions of Vladimir Putin Cultural depictions of Stephen Hawking Cultural depictions of Barack Obama Cultural depictions of Angela Merkel Cultural depictions of David Cameron Cultural depictions of Rupert Murdoch
St Theresa's Independent State Grammar School for Girls (and Boys)
Astronomy
2,772
27,618,144
https://en.wikipedia.org/wiki/Bcl-2%20family
The Bcl-2 family (TC# 1.A.21) consists of a number of evolutionarily conserved proteins that share Bcl-2 homology (BH) domains. The Bcl-2 family is most notable for its regulation of apoptosis, a form of programmed cell death, at the mitochondrion. The Bcl-2 family consists of members that either promote or inhibit apoptosis, and control apoptosis by governing mitochondrial outer membrane permeabilization (MOMP), which is a key step in the intrinsic pathway of apoptosis. A total of 25 genes in the Bcl-2 family were identified by 2008. Members of the BCL-2 family regulate apoptosis in mammals, reptiles, amphibians, fish, and other metazoan lineages, with the exception of nematodes and insects. Their molecular structure and function, as well as their protein dynamics, are highly conserved over hundreds of millions of years in tissue-forming life forms. Structure Bcl-2 family proteins have a general structure that consists of a hydrophobic α-helix surrounded by amphipathic α-helices. Some members of the family have transmembrane domains at their C-terminus, which primarily function to localize them to the mitochondrion. Bcl-x(L) is 233 amino acyl residues (aas) long and exhibits a single very hydrophobic putative transmembrane α-helical segment (residues 210-226) when in the membrane. Homologues of Bcl-x include the Bax (rat; 192 aas) and Bak (mouse; 208 aas) proteins, which also influence apoptosis. The high-resolution structure of the monomeric soluble form of human Bcl-x(L) has been determined by both X-ray crystallography and NMR. The structure consists of two central primarily hydrophobic α-helices surrounded by amphipathic helices. The arrangement of the α-helices in Bcl-x(L) resembles that of diphtheria toxin and the colicins. Diphtheria toxin forms a transmembrane pore and translocates the toxic catalytic domain into the animal cell cytoplasm. The colicins similarly form pores in lipid bilayers. Structural homology therefore suggests that Bcl-2 family members that contain the BH1 and BH2 domains (Bcl-x(L), Bcl-2, and Bax) function similarly. Domains The members of the Bcl-2 family share one or more of the four characteristic domains of homology entitled the Bcl-2 homology (BH) domains (named BH1, BH2, BH3 and BH4). The BH domains are known to be crucial for function, as deletion of these domains via molecular cloning affects survival/apoptosis rates. The anti-apoptotic Bcl-2 proteins, such as Bcl-2 and Bcl-x(L), conserve all four BH domains. The BH domains also serve to subdivide the pro-apoptotic Bcl-2 proteins into those with several BH domains (e.g. Bax and Bak) and those proteins that have only the BH3 domain (e.g. Bim, Bid, and BAD). All proteins belonging to the Bcl-2 family contain either a BH1, BH2, BH3 or BH4 domain. All anti-apoptotic proteins contain BH1 and BH2 domains; some of them contain an additional N-terminal BH4 domain (Bcl-2, Bcl-x(L) and Bcl-w), which is also seen in some pro-apoptotic proteins like Bcl-x(S), Diva, Bok-L and Bok-S. On the other hand, all pro-apoptotic proteins contain a BH3 domain, necessary for dimerization with other proteins of the Bcl-2 family and crucial for their killing activity; some of them also contain BH1 and BH2 domains (Bax and Bak). The BH3 domain is also present in some anti-apoptotic proteins, such as Bcl-2 or Bcl-x(L). The three functionally important Bcl-2 homology regions (BH1, BH2 and BH3) are in close spatial proximity. They form an elongated cleft that may provide the binding site for other Bcl-2 family members. 
Function Regulated cell death (apoptosis) is induced by events such as growth factor withdrawal and toxins. It is controlled by regulators, which have either an inhibitory effect on programmed cell death (anti-apoptotic) or block the protective effect of inhibitors (pro-apoptotic). Many viruses have found a way of countering defensive apoptosis by encoding their own anti-apoptosis genes, preventing their target cells from dying too soon. Bcl-x is a dominant regulator of programmed cell death in mammalian cells. The long isoform (Bcl-x(L)) displays cell death repressor activity, but the short isoform (Bcl-x(S)) and the β-isoform (Bcl-xβ) promote cell death. Bcl-x(L), Bcl-x(S) and Bcl-xβ are three isoforms derived by alternative RNA splicing. There are a number of theories concerning how the Bcl-2 gene family exert their pro- or anti-apoptotic effect. An important one states that this is achieved by activation or inactivation of an inner mitochondrial permeability transition pore, which is involved in the regulation of matrix Ca2+, pH, and voltage. It is also thought that some Bcl-2 family proteins can induce (pro-apoptotic members) or inhibit (anti-apoptotic members) the release of cytochrome c into the cytosol which, once there, activates caspase-9 and caspase-3, leading to apoptosis. Although Zamzami et al. suggest that the release of cytochrome c is indirectly mediated by the PT pore on the inner mitochondrial membrane, strong evidence suggests an earlier involvement of the MAC pore on the outer membrane. Another theory suggests that Rho proteins play a role in Bcl-2, Mcl-1 and Bid activation. Rho inhibition reduces the expression of the anti-apoptotic Bcl-2 and Mcl-1 proteins and increases protein levels of pro-apoptotic Bid, but has no effect on Bax or FLIP levels. Rho inhibition induces caspase-9- and caspase-3-dependent apoptosis of cultured human endothelial cells. Site of action These proteins are localized to the outer mitochondrial membrane of the animal cell, where they are thought to form a complex with the voltage-dependent anion channel porin (VDAC). Interaction of Bcl-2 with VDAC1 or with peptides derived from VDAC3 protects against cell death by inhibiting cytochrome c release. A direct interaction of Bcl-2 with bilayer-reconstituted purified VDAC was demonstrated, with Bcl-2 decreasing channel conductance. Within the mitochondria are apoptogenic factors (cytochrome c, the Smac/Diablo homolog, Omi) that, if released, activate the executioners of apoptosis, the caspases. Depending on their function, once activated, Bcl-2 proteins either promote the release of these factors or keep them sequestered in the mitochondria. Whereas the activated pro-apoptotic Bak and/or Bax would form MAC and mediate the release of cytochrome c, the anti-apoptotic Bcl-2 would block it, possibly through inhibition of Bax and/or Bak. Proteins of the Bcl-2 family are also present in the perinuclear envelope and are widely distributed in many body tissues. Their ability to form oligomeric pores in artificial lipid bilayers has been documented, but the physiological significance of pore formation is not clear. Each of these proteins has distinctive properties, including some degree of ion selectivity. Transport reaction The generalized transport reaction proposed for membrane-embedded, oligomeric Bcl-2 family members is: cytochrome c (mitochondrial intermembrane space) ⇌ cytochrome c (cytoplasm) BH3-only family The BH3-only subset of the Bcl-2 family of proteins contain only a single BH3 domain. 
The BH3-only members play a key role in promoting apoptosis. The BH3-only family members are Bim, Bid, BAD and others. Various apoptotic stimuli induce expression and/or activation of specific BH3-only family members, which translocate to the mitochondria and initiate Bax/Bak-dependent apoptosis. Examples Proteins that are known to contain these domains include vertebrate Bcl-2 (alpha and beta isoforms) and Bcl-x (isoforms Bcl-x(L) and Bcl-x(S)), as well as BCL2L1, BCL2L2, BCL2L10, BCL2L13, BCL2L14, BOK and MCL1. See also Bcl-2 inhibitor, anti-cancer drugs targeted at this family of proteins B-cell CLL/lymphoma, a wider group References Protein families Transmembrane transporters Transport proteins Single-pass transmembrane proteins
Bcl-2 family
Biology
2,061
6,033,983
https://en.wikipedia.org/wiki/BD-J
BD-J, or Blu-ray Disc Java, is a specification supporting Java ME (specifically the Personal Basis Profile of the Connected Device Configuration or CDC) Xlets for advanced content on Blu-ray Disc and the Packaged Media profile of Globally Executable MHP (GEM). BD-J allows for more sophisticated bonus content on Blu-ray Disc titles than standard DVD, including network access, picture-in-picture, and access to expanded local storage. Collectively, these features (other than internet access) are referred to as "Bonus View", and the addition of internet access is called "BD Live". BD-J was developed by the Blu-ray Disc Association. All Blu-ray Disc players supporting video content are required by the specification to support BD-J. Starting on October 31, 2007, all new players are required to have hardware support for the "Bonus View" features, but the players may require future firmware updates to enable them. "BD Live" support is always optional for a BD player. Sony's PlayStation 3 has been the de facto leader in compliance and support of BD-J, adding Blu-ray Profile 1.1 support with a firmware upgrade used to showcase BD-Live at CES 2008. BD-J Xlet capabilities The invocation of BD-J Xlets is triggered by events occurring around them—for example, by the selection of a film title or by the insertion of a new disc. Xlets in turn can then call other Xlets into play. Security in BD-J is based on the Java platform security model. That is, signed applications in JARs can perform more tasks than unsigned ones, such as read/write access to local storage, network access, selection of other titles on the BD-ROM disc, and control of other running BD-J applications. Xlets (as part of the CDC Personal Basis Profile) have no GUI toolkit of their own (i.e. no high-level AWT widgets), so additional classes are called into play for generating animation and GUI. BD-J uses the HAVi UI device model and widget set for remote control use, extended to allow for the BD-supported resolutions and BD-supported A/V controls. BD-J has classes that allow an application to synchronize accurately to specific frames in the film. Two types of video synchronization are allowed: one called "loose synchronization", which uses a callback method and is accurate to within several frames of the event, and the other called "tight synchronization", which allows applications to synchronize accurately to the exact frame using timecodes from JMF (Java Media Framework). A BD-J application's GUI can be operated with a remote control with a required set of keys and an optional pointing device. The set of required keys includes at least the keys needed to support the User Operations in high-definition movie (HDMV) applications. The GUI framework in BD-J includes the HAVi UI framework mandated by GEM; it is not a desktop GUI framework like Swing or AWT. The GUI framework is based on the core of AWT as specified by PBP, but the widget set includes mechanisms for remote control navigation from GEM and easy customization of look and feel from HAVi. BD-J includes a media framework similar to JMF for the playback of media content related to the BD-ROM disc. It is assumed that the BD-ROM disc will be the prime source for media files, but it will not be the only one; other sources could be the studio's web server and local storage. BD-J includes standard Java libraries for decoding and displaying images in JFIF (JPEG), PNG and other image formats. 
These images can be displayed on the Java graphics plane using standard Java graphics functions. An image can also be rendered in the background plane using a BD-J specific package. Text can be rendered using standard Java text functions. These text-rendering functions are extended with a more advanced text layout manager that integrates with the BD-J UI framework. The text is rendered using a vector-based font either coming from the disc, the player (default font) or downloaded from the network. Button sounds from HDMV can also be used by the Java UI framework. Sound files can be loaded and rendered as a reaction to the user pressing a key, as a reaction to a marked event related to the film, or as a reaction to any event generated by a BD-J application. Authenticated applications can use a (signed) permission request file to acquire permissions that go beyond the BD-J sandbox. Permissions can be acquired for: Reading and writing to local and system storage Using the network connection (to connect to defined servers) Access of the file system on the BD-ROM disc Title selection of other titles on the BD-ROM disc Control of other running BD-J applications BD-J applications can use the standard java.net package to connect to servers on the Internet. The physical connection might differ between implementations, e.g. Ethernet, telephone line, etc. At the network level, TCP/IP is supported and the HTTP protocol may be used. Moreover, the Java package for secure connections (JSSE) is included as part of the BD-J platform. Before a BD-J application can use the network connection, it must be authenticated and have suitable permission to use the network. The websites to which the application will go are under the full control of the Content Provider. This control is guaranteed in two ways: Only (disc) authenticated BD-J applications are allowed to run when the disc is played. The application controls the use of the network connection. In addition, permissions defined on the disc can restrict the use of the (TCP/IP) network connection to certain sites. BD-J includes support for storage. Two flavors of storage are included: mandatory System Storage and optional Local Storage. All storage is accessed using methods from the Java IO package (a sketch appears in the sample code section below). The path for local storage is as specified by [GEM]. System storage is storage that will be present in all BD-J players. The required minimum size of this system storage will permit storage of application data such as settings and high scores. It will not be big enough to store downloaded AV material. For this purpose, optional local storage is available. Typically system storage will be implemented using Flash memory and the optional local storage will be implemented on an HDD. Since storage is a shared resource between all discs played on the player, Java access control is part of BD-J. BD-J applications can only access a disc-specific part of the storage space and cannot access the part belonging to other discs. Content development Content authors have a variety of development strategies available, including the use of traditional Integrated Development Environments (IDEs) like NetBeans or Eclipse, non-programming graphical environments similar to Macromedia Director, or via rendering engines which consume standard data formats such as HTML, XML, or SVG. Having a full programming environment available on every Blu-ray Disc player provides developers with a platform for creating content types not bound by the restrictions of standard DVD. 
In addition to the standard BD-J APIs, developers may make use of existing Java libraries and application frameworks, assuming they do not use features outside the constraints of the BD-J platform, including the fact that Java ME only supports Java version 1.3 class files. A set of freely available tools that allow Java developers to produce complete disc images incorporating BD-J is available from the HD Cookbook Project. In order to test content in a typical development environment (MS Windows), one needs either a PlayStation 3 or a third-party software player for Windows, paying attention to player versions to ensure that the player supports BD-J. Because of the many different standards and components involved, creating unified documentation on BD-J has proven to be a challenge. Sample code The BD-J environment is designed to run Xlets, with non-javax.tv packages available to take advantage of the features particular to this platform beyond those defined by Java TV, even for a simple example such as FirstBDJApp. A developer might choose not to use the javax.tv packages and instead use: HAVi classes in the package tree org.havi.ui: alternative classes to obtain, for example, an HScene with capabilities far beyond what is provided by TVContainer (they are both extensions of java.awt.Container). Digital Video Broadcasting (DVB) classes in the package tree org.dvb: alternative classes such as a user-event interface used in place of the standard AWT key listener, with support for key presses and keycodes specific to popular CDC devices. Blu-ray Disc classes in the package tree org.bluray: classes that the DAVIC and DVB classes depend upon to recognize additional events peculiar to the BD-J platform, such as popup menus, and to locate media on the Blu-ray disc. DAVIC API classes in the package tree org.davic: a small set of classes wrapping or extending other network and media resources peculiar to interactive TV, which the HAVi, DVB and Blu-ray classes use for locators and specialized exceptions beyond the realm of JMF (such as content authorization). A working example of a program using some features from each of the class trees would be the BdjGunBunny Xlet (a very simple version of Space Invaders using an image of a rabbit as the shooter and turtles as the targets) provided as an example in the Java ME 3.0 SDK.

import javax.tv.xlet.XletContext;
import javax.tv.graphics.TVContainer;
import org.havi.ui.HScene;
import org.havi.ui.HSceneFactory;
import java.awt.Container;

// Getting a container for the screen could be done as follows.
public void initXlet(XletContext context) {
    // Java TV API, to be compatible with Java TV:
    TVContainer scene = TVContainer.getRootContainer(context);
    // Or, for BD-J, to utilize HAVi features not available in Java TV:
    HScene scene = HSceneFactory.getInstance().getDefaultHScene();
    // Or perhaps more generally...
    Container container = null;
    boolean realBDJ = true;
    if (realBDJ)
        container = HSceneFactory.getInstance().getDefaultHScene();
    else
        container = TVContainer.getRootContainer(context);
    ...
}

And the same for the other non-javax.tv packages. 
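The storage access described earlier can be sketched in the same style. The following is a minimal illustration, not production code: it assumes only standard java.io classes and the GEM-defined dvb.persistent.root system property, which exposes the root of the persistent storage area; the "orgid/appid" subdirectory names are hypothetical placeholders for the disc's organization and application identifiers, and a real application would also need a signed permission request file to write here.

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

// Minimal sketch: persist a small settings file in BD-J system storage.
// Written against Java 1.3-era APIs (no generics, no try-with-resources),
// matching the class-file constraint noted above.
public final class SettingsStore {
    public static void saveHighScore(int score) throws IOException {
        String root = System.getProperty("dvb.persistent.root");
        File dir = new File(root, "orgid" + File.separator + "appid");
        dir.mkdirs(); // create the application's private area if absent
        FileWriter out = new FileWriter(new File(dir, "highscore.txt"));
        try {
            out.write(Integer.toString(score));
        } finally {
            out.close();
        }
    }
}

As noted above, access control still applies: an application can only reach its own disc-specific subtree of the shared storage space.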
Likewise, when trying to play a video, one might call the Blu-ray and DAVIC utility classes rather than using generic JMF:

import javax.media.Manager;
import javax.media.Player;
import org.bluray.net.BDLocator;
import org.davic.media.MediaLocator;

MediaLocator stars = new MediaLocator(new BDLocator("bd://0.PLAYLIST:00003"));
Player player = Manager.createPlayer(stars);

// Rather than traditional and portable, but more limited, pure JMF:
import java.net.URL;
import javax.media.Manager;
import javax.media.Player;

Player mediaPlayer = Manager.createRealizedPlayer(new URL("file:/mymovie.mov"));

Related publication Programming HD DVD and Blu-ray Disc: The HD Cookbook (2008) by Michael Zink, Philip C. Starner, Bill Foote - book website See also Advanced Content, BD-J's counterpart on HD DVD References External links Official java.net BD-J Forums - Official Sun java.net Forums for Blu-ray Disc Java. bdjforum.com - Unofficial forum for BD-J developers and issues surrounding HD authoring. JavaOne 2007 Technical Sessions: Producing Blu-ray Java Software Titles for Hollywood Official website for DVB-MHP and DVB-GEM - Open Middleware for Interactive TV TV Without Borders - MHP/OCAP Website from Steven Morris. HD Cookbook - Code and other recipes for Blu-ray Java, GEM, MHP and OCAP Alticast BD-J Solutions Blu-ray Disc Interactive television Java platform 120 mm discs High-definition television Video storage
BD-J
Technology
2,592
24,058,550
https://en.wikipedia.org/wiki/Amide%20reduction
Amide reduction is a reaction in organic synthesis where an amide is reduced to either an amine or an aldehyde functional group. Catalytic hydrogenation Catalytic hydrogenation can be used to reduce amides to amines; however, the process often requires high hydrogenation pressures and reaction temperatures to be effective, often pressures above 197 atm and temperatures exceeding 200 °C. Selective catalysts for the reaction include copper chromite, rhenium trioxide and rhenium(VII) oxide, or bimetallic catalysts. Amines from other hydride sources Reducing agents able to effect this reaction include metal hydrides such as lithium aluminium hydride, or lithium borohydride in mixed solvents of tetrahydrofuran and methanol. Iron catalysis by triiron dodecacarbonyl in combination with polymethylhydrosiloxane has been reported. Lawesson's reagent converts amides to thioamides, which can then be catalytically desulfurized. Noncatalytic routes to aldehydes Some amides can be reduced to aldehydes in the Sonn-Müller method, but most routes to aldehydes involve a well-chosen organometallic reductant. Lithium aluminium hydride reduces N,N-disubstituted amides to an aldehyde when the amide is used in excess: R(CO)NRR' + LiAlH4 → RCHO + HNRR' With further reduction the alcohol is obtained. Schwartz's reagent reduces amides to aldehydes, and so does hydrosilylation with a suitable catalyst. References External links Amide reduction @ organic-chemistry.org Organic redox reactions
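For the amine-forming hydrogenation described above, the overall stoichiometry can be written in the same notation as the aldehyde route. This is a generic scheme, independent of the particular catalyst system: R(CO)NR'R'' + 2 H2 → RCH2NR'R'' + H2O. The amide carbonyl is reduced to a methylene group, with water as the by-product.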
Amide reduction
Chemistry
364
2,347,254
https://en.wikipedia.org/wiki/Cygnus%20A
Cygnus A (3C 405) is a radio galaxy, one of the strongest radio sources in the sky. A concentrated radio source in Cygnus was discovered by Grote Reber in 1939. In 1946 Stanley Hey and his colleague James Phillips found that the source scintillated rapidly and must therefore be a compact object. In 1951, Cygnus A, along with Cassiopeia A and Puppis A, was among the first "radio stars" identified with an optical source. Of these, Cygnus A became the first radio galaxy, the other two being nebulae inside the Milky Way. In 1953 Roger Jennison and M. K. Das Gupta showed it to be a double source. Like all radio galaxies, it contains an active galactic nucleus, powered by a supermassive black hole at its core. Images of the galaxy in the radio portion of the electromagnetic spectrum show two jets protruding in opposite directions from the galaxy's center. These jets extend many times the width of the portion of the host galaxy which emits radiation at visible wavelengths. At the ends of the jets are two lobes with "hot spots" of more intense radiation at their edges. These hot spots are formed when material from the jets collides with the surrounding intergalactic medium. In 2016, a radio transient was discovered 460 parsecs away from the center of Cygnus A. Between 1989 and 2016, the object, cospatial with a previously-known infrared source, exhibited at least an eightfold increase in radio flux density, with comparable luminosity to the brightest known supernova. Due to the lack of measurements in the intervening years, the rate of brightening is unknown, but the object has remained at a relatively constant flux density since its discovery. The data are consistent with a second supermassive black hole orbiting the primary object, with the secondary having undergone a rapid accretion rate increase. The inferred orbital timescale is of the same order as the activity of the primary source, suggesting the secondary may be perturbing the primary and causing the outflows. See also List of galaxies List of nearest galaxies References Further reading External links Information about Cygnus A from SIMBAD. Hubble Uncovers a Hidden Quasar in a Nearby Galaxy (Cygnus A) A primitive transit recording (7 July 1998) of Cygnus A by English radio amateur Geoff Royle G4FAS 405.0 Cygnus (constellation) Radio galaxies Quasars Astronomical objects discovered in 1939
Cygnus A
Astronomy
519
25,119,713
https://en.wikipedia.org/wiki/Lactarius%20paradoxus
Lactarius paradoxus is a North American member of the large milk-cap genus, Lactarius, in the order Russulales. It was first described in 1940. Description The cap has a blue-green to gray color. When damaged, it bleeds dark red latex. The spore print is light yellowish. Similar species Lactarius indigo looks similar, but with a blue latex. Lactarius rubrilacteus has a reddish latex and does not appear blue. Additionally, L. chelidonium and L. subpurpureus are similar. Distribution and habitat Fruiting from early fall to late winter, the species is found in the southern and eastern United States. It appears in grass and under pines. It is mycorrhizal with pine and oak. Edibility The species is edible and mild, but bitter if too old. See also List of Lactarius species References External links Lactarius paradoxus @ Morel Mushroom Hunting Club paradoxus Edible fungi Fungi described in 1940 Fungus species
Lactarius paradoxus
Biology
212
49,078,162
https://en.wikipedia.org/wiki/Sufotidine
Sufotidine (INN, USAN, codenamed AH25352) is a long-acting competitive H2 receptor antagonist which was under development as an antiulcerant by Glaxo (now GlaxoSmithKline). It was planned to be a follow-up compound to ranitidine (Zantac). When taken in doses of 600 mg twice daily, it induced virtually 24-hour gastric anacidity, closely resembling the antisecretory effect of the proton pump inhibitor omeprazole. Its development was terminated in 1989, during phase III clinical trials, after carcinoid tumors appeared in long-term toxicity testing in rodents. See also Lavoltidine (previously known as loxtidine), a similar compound in which the methylsulfone group is replaced with a hydroxyl group References Abandoned drugs Amines H2 receptor antagonists 1-Piperidinyl compounds Triazoles Phenol ethers Sulfones
Sufotidine
Chemistry
206
57,601,067
https://en.wikipedia.org/wiki/Solar%20Terrestrial%20Probes%20program
NASA's Solar Terrestrial Probes program (STP) is a series of missions focused on studying the Sun-Earth system. It is part of NASA's Heliophysics Science Division within the Science Mission Directorate. Objectives Understand the fundamental physical processes of the complex space environment throughout the Solar System, which includes the flow of energy and charged material, known as plasma, as well as a dynamic system of magnetic and electric fields. Understand how human society, technological systems, and the habitability of planets are affected by solar variability and planetary magnetic fields. Develop the capability to predict the extreme and dynamic conditions in space in order to maximize the safety and productivity of human and robotic explorers. Missions TIMED TIMED (Thermosphere Ionosphere Mesosphere Energetics and Dynamics) is an orbiter mission dedicated to studying the dynamics of the Mesosphere and Lower Thermosphere (MLT) portion of the Earth's atmosphere. The mission was launched from Vandenberg Air Force Base in California on December 7, 2001 aboard a Delta II launch vehicle. Hinode Hinode, an ongoing collaboration with JAXA, is a mission to explore the magnetic fields of the Sun. It was launched on the final flight (M-V-7) of the M-V rocket from Uchinoura Space Center, Japan on September 22, 2006. STEREO STEREO (Solar Terrestrial Relations Observatory) is a solar observation mission. It consists of two nearly identical spacecraft, launched on October 26, 2006. MMS The Magnetospheric Multiscale Mission (MMS) is a mission to study the Earth's magnetosphere, using four identical spacecraft flying in a tetrahedral formation. The spacecraft were launched on March 13, 2015. IMAP IMAP (Interstellar Mapping and Acceleration Probe) is a heliosphere observation mission. Planned for launch in 2025, it will sample, analyze, and map particles streaming to Earth from the edges of interstellar space. References External links NASA Goddard Space Flight Center - Solar Terrestrial Probes Program NASA Science Mission Directorate - Solar Terrestrial Probes Program NASA programs Space plasmas Space science experiments
Solar Terrestrial Probes program
Physics
429
77,143,022
https://en.wikipedia.org/wiki/Defeng%20Sun
Defeng Sun (Chinese name: 孙德锋) is a Chinese applied mathematician and operations researcher. He holds the position of Chair Professor of Applied Optimization and Operations Research, and has been serving as the Head of the Department of Applied Mathematics at The Hong Kong Polytechnic University (PolyU) since 2019. Sun served as President of the Hong Kong Mathematical Society from 2020 to 2024. Education Sun received his Ph.D. degree in 1995 from the Chinese Academy of Sciences in Beijing, after obtaining his bachelor's and master's degrees from Nanjing University in 1989 and 1992, respectively. Research Sun's research interests lie in the broad areas of non-convex continuous optimization and machine learning, including mathematical theory, algorithmic development, and real-world applications. In 2006, he solved the long-standing open question of characterizing the strong regularity of nonlinear semidefinite programming (SDP) problems. Sun was awarded the triennial 2018 Beale-Orchard-Hays Prize for Excellence in Computational Mathematical Programming by the Mathematical Optimization Society, jointly with his collaborators, for work on the software SDPNAL/SDPNAL+ for general-purpose large-scale semidefinite programming problems. Awards He was named a SIAM Fellow in 2020, for "contributions to algorithms and software for conic optimization, particularly matrix optimization", and a Fellow of the China Society for Industrial and Applied Mathematics in 2020 "for contributions to the field of industrial and applied mathematics". He received the RGC Senior Research Fellowship from Hong Kong's University Grants Committee in 2022/23 for his project "Nonlinear Conic Programming: Theory, Algorithms and Software". References External links The Hong Kong Mathematical Society Applied mathematicians Year of birth missing (living people) Living people 21st-century Chinese mathematicians Academic staff of Hong Kong Polytechnic University Nanjing University alumni Fellows of the Society for Industrial and Applied Mathematics
Defeng Sun
Mathematics
365
1,711,423
https://en.wikipedia.org/wiki/Wurtz%20reaction
In organic chemistry, the Wurtz reaction, named after Charles Adolphe Wurtz, is a coupling reaction in which two alkyl halides are treated with sodium metal to form a higher alkane. 2 R−X + 2 Na → R−R + 2 NaX The reaction is of little value except for intramolecular versions, such as 1,6-dibromohexane + 2 Na → cyclohexane + 2 NaBr. A related reaction, which combines alkyl halides with aryl halides, is called the Wurtz–Fittig reaction. Despite its very modest utility, the Wurtz reaction is widely cited as representative of reductive coupling. Mechanism The reaction proceeds by an initial metal–halogen exchange, which is described with the following idealized stoichiometry: R−X + 2 M → RM + MX This step may involve the intermediacy of radical species R·. The conversion resembles the formation of a Grignard reagent. The RM intermediates have been isolated in several cases. The radical is susceptible to diverse reactions. The organometallic intermediate (RM) next reacts with the alkyl halide (RX), forming a new carbon–carbon covalent bond. RM + RX → R−R + MX The process resembles an SN2 reaction, but the mechanism is probably complex. Examples and reaction conditions The reaction is intolerant of many functional groups, which would be attacked by sodium. For similar reasons, the reaction is conducted in unreactive solvents such as ethers. In efforts to improve the reaction yields, other metals have also been tested to effect Wurtz-like couplings: silver, zinc, iron, activated copper, indium, as well as a mixture of manganese and copper chloride. Wurtz coupling is useful in closing small, especially three-membered, rings. In the cases of 1,3-, 1,4-, 1,5-, and 1,6-dihalides, Wurtz-reaction conditions lead to formation of cyclic products, although yields are variable. Under Wurtz conditions, vicinal dihalides yield alkenes, whereas geminal dihalides convert to alkynes. Bicyclobutane was prepared this way from 1-bromo-3-chlorocyclobutane in 95% yield. The reaction is conducted in refluxing dioxane, at which temperature the sodium is liquid. Extensions to main group compounds Although the Wurtz reaction is only of limited value in organic synthesis, analogous couplings are useful for coupling main group halides. Hexamethyldisilane arises efficiently by treatment of trimethylsilyl chloride with sodium: 2 (CH3)3SiCl + 2 Na → (CH3)3Si−Si(CH3)3 + 2 NaCl Tetraphenyldiphosphine is prepared analogously: 2 (C6H5)2PCl + 2 Na → (C6H5)2P−P(C6H5)2 + 2 NaCl Similar couplings have been applied to many main group halides. When applied to main group dihalides, rings and polymers result. Polysilanes and polystannanes are produced in this way. See also Wurtz–Fittig reaction Ullmann reaction Further reading Organic Chemistry Portal, organic-chemistry.org Organic Chemistry, by Morrison and Boyd Organic Chemistry, by Graham Solomons and Craig Fryhle, Wiley Publications References Condensation reactions Carbon-carbon bond forming reactions Name reactions
Wurtz reaction
Chemistry
693
69,819,769
https://en.wikipedia.org/wiki/List%20of%20reagent%20testing%20color%20charts
It is advised to check the references for photos of reaction results. Reagent testers might show the colour of the desired substance while not showing a different colour for a more dangerous additive; for this reason it is essential to use multiple different tests to detect adulterants. The colour charts group the reagents as follows: Folin's–Mandelin; Marquis–Simon's; Custom reagents. See also Counterfeit medications Drug checking Harm reduction References Adulteration Color charts Comparison of psychoactive substances Drug culture Harm reduction Drug-related lists
List of reagent testing color charts
Chemistry
106
2,463,438
https://en.wikipedia.org/wiki/Halogeton%20glomeratus
Halogeton glomeratus is a species of flowering plant in the family Amaranthaceae known by the common names saltlover, Aral barilla, and halogeton. It is native to Russia, Central Asia and China, but the plant is probably better known in the western United States, where it is an introduced species and a notorious noxious weed. This annual herb is a hardy halophyte, thriving in soils far too saline to support many other plants. It also grows in alkali soils such as those on alkali flats and in disturbed, barren habitat. It can be found in sagebrush and shadscale habitat, and it grows well in areas with cold winters. This plant produces a usually erect stem with several curving branches up to about 25 centimeters (10 in) tall. It has a taproot reaching up to half a meter deep in the soil and many lateral roots. The branches are lined with narrow, fleshy, blue-green leaves, each up to about 2 centimeters long and tipped with stiff bristles. The inflorescences are located all along the stem branches next to the leaves. Each inflorescence is a small cluster of tiny bisexual and female-only flowers accompanied by waxy bracts. The winged, membranous flowers surround the developing fruit, which is all that remains on the plant when it is ripe, the leaves and flower parts having fallen away. The fruit is a pale cylindrical utricle. The plant produces large amounts of seeds, which are dispersed in many ways, including by human activity, animals (including ants), water flow, wind, and by being carried on the dry plant when it breaks off at ground level and rolls away as a tumbleweed. The seeds can germinate within one hour of exposure to water. This herb is a pest on rangelands in the western United States. It has a high oxalate content, with up to 30% of the plant's dry weight made up of oxalate crystals, making it toxic to livestock that graze on it. It is especially toxic to sheep, which can be fatally poisoned by as little as twelve ounces (350 g) of the plant. Halogeton was first recognized as a danger to sheep in the 1940s after a rancher lost a herd of 160 sheep to poisoning. The oxalate causes acute hypocalcemia in sheep, causing them to stagger, spasm, and finally die. Ingestion of a fatal dose of the plant can cause death in a sheep in under 12 hours. Ranchers often provide calcium-supplemented feed to sheep grazing on halogeton-infested land. Sheep are also able to adapt to halogeton in their diets over time, becoming sick from it less easily, and since it is hardly palatable they tend to avoid it in the first place when possible. Halogeton is also destructive to the land of the American West because its excretion of mineral salts makes it harder for other plants to grow where it occurs. The growth of the plant is controlled by introducing certain nonnative plants, such as immigrant kochia (Kochia prostrata) and crested wheatgrass (Agropyron cristatum), which compete successfully with halogeton. Grazing practices are changed to ensure that land is not denuded, since land disturbed by overgrazing is susceptible to halogeton invasion. References External links Jepson Manual Treatment USDA Plants Profile Photo gallery Amaranthaceae Halophytes
Halogeton glomeratus
Chemistry
724
44,484,532
https://en.wikipedia.org/wiki/Tylopilus%20jalapensis
Tylopilus jalapensis is a bolete fungus in the family Boletaceae found in Veracruz, Mexico, where it grows under oak in montane forests. It was described as new to science in 1991. See also List of North American boletes References External links jalapensis Fungi described in 1991 Fungi of Mexico Fungi without expected TNC conservation status Fungus species
Tylopilus jalapensis
Biology
78
40,439,128
https://en.wikipedia.org/wiki/Omate%20TrueSmart
The Omate TrueSmart is a smartwatch designed by Omate, a Chinese company based in Hong Kong and Shenzhen. It was funded by crowdfunding via Kickstarter. The funding period ran from August 21, 2013 until September 20, 2013. The funding goal of $100,000 was reached within 12 hours, with more than $1,032,000 raised by the end of the campaign. In contrast to other smartwatches, the Omate is a complete standalone telecom mobile device that can be used to make calls, navigate and use Android apps independently of the user's smartphone. Specifications The TrueSmart is powered by an ARM Cortex-A7-based MediaTek MT6572 chipset running Android 4.2.2 and a ROM also known as OUI - Omate User Interface. The body of 45 mm × 45 mm × 14 mm has two side buttons for Power and Home functions. It also features a 3-megapixel camera module whose images are upscaled by software from 3 megapixels to 5 megapixels. In contrast to other smartwatches, the TrueSmart is a standalone smartwatch that supports a 3G GSM micro-SIM card. The TrueSmart is a full-featured Android smartphone in a watch form factor, promoted as a smartwatch 2.0 (Telecom Wearable) by Omate to differentiate the device from other traditional smartwatches, which are mainly companion devices. Its sensors include a magnetometer, a three-axis accelerometer and a GPS. It has a vibrator, a microphone and an audio speaker. The watch is charged and connected to computers using a standard USB cable and an external charging cradle accessory. Processor It is powered by a dual-core ARM Cortex-A7 MediaTek MTK6572 processor running at 1 GHz (1.3 GHz according to Kickstarter specs). The GPU is a Mali 400. Software It runs Android 4.2.2 with the customized Omate User Interface (OUI 2.1). The TrueSmart ecosystem includes basic Android applications: Settings / Phone / Email / Messaging / Contact / File Manager / Camera / Gallery and several third-party apps which are available in Omate's own application store, called the Ostore. The Ostore features several categories. System firmware updates are provided over-the-air. Display It has an LG 1.54-inch 240 × 240 pixel color IPS TFT display and a multi-point touch panel covered with a sapphire glass coating. Battery The battery is a removable 600 mAh Li-ion battery, providing for 240 minutes talk time in standalone mode and 100 hours of standby time. Removing or replacing the battery will void the device's warranty; OK stickers are placed on the back plate screws. Memory/storage It has either 512 MB or 1 GB of memory, depending on the model, and 4 GB or 8 GB of storage, expandable with up to 128 GB microSD cards. Build The Omate TrueSmart is IP67 certified and dustproof. However, water damage is not covered by the warranty, since the device can be opened to insert a SIM card or an SD card or to replace the battery. The case is steel alloy or aluminum based, the glass is protected by a sapphire crystal coating, and the straps are silicone. The straps are not replaceable since the GSM and GPS antennas are integrated into them. Measurements are 45 mm × 45 mm × 14 mm. Connectivity The TrueSmart is an Android-based GSM standalone smartwatch which features a micro-SIM slot. There are two versions, for different regional 3G networks: a 2100 MHz (Europe) and a 1900 MHz (US), both supporting UMTS, HSDPA, HSUPA, HSPA, and HSPA+. The 2G modem in both versions supports quad band: 900/1800/850/1900 GSM, GPRS, and EDGE. It has Bluetooth 3.0 (4.0 when the OS is upgraded to 4.3 or newer) and Wi-Fi 802.11b/g/n connectivity. 
Kickstarter campaign Omate launched a Kickstarter campaign on August 21, 2013, with an initial fundraising target of $100,000. Backers spending $199 would receive an Omate TrueSmart when they became available ($179 for the first 500 backers and $189 for the next 500 backers). The project met the $100,000 goal in half a day. Just before the end of the campaign, the total passed $1 million. Project Challenges The camera was stated to be 5 megapixels, but turned out to be a 3-megapixel camera with software upscaling the pictures to 5 megapixels. The processor was stated to run at 1.3 GHz, but on watches with 1 GB RAM and 8 GB storage, the processor only runs at 1 GHz. Omate later explained that at full speed the heat was not bearable and therefore they reduced the clock speed. It was promoted as an IP67 waterproof watch, with a promotional video showing a person swimming with it. It was later stated that swimming was not covered under warranty. The watch also is not certified waterproof by any standard. Omate performed a test themselves, by putting a test model in a bucket of water, but did not use a functioning model and did not conform to IP67 standards. Even though the warranty is void if the watch is opened, it is the official recommendation to open it and pull the battery if the firmware update fails. The delivery was later than promised due to the massive success of the Kickstarter campaign; however, Omate upgraded all the backers of the 512 MB+4 GB and 1 GB+4 GB / 1900 MHz versions to 1 GB+8 GB to apologize for the delays. In mid June 2014 the last batch of Kickstarter watches was shipped. The TrueSmart runs in root-user mode without any checks in place, such as su and SuperUser. Legitimate apps that request root access are denied due to the lack of proper steps being taken to root but secure the TrueSmart. This means that malicious apps that try to gain access without requesting permission have full root access to everything on the TrueSmart without the user ever knowing. This includes usernames and passwords, pictures, emails and so on. It is technically feasible to have an app upload all that data to a hacker without the user ever knowing it was happening and then delete itself and almost everything on the TrueSmart. This same type of hack and data theft can also be done in the background by visiting malicious websites. Because of issues delaying production and shipping, Omate decided to upgrade the backers who had pledged for the 512 MB+4 GB (1900 MHz) version to 1 GB+8 GB (1900 MHz). They offered a TrueSmart PCB keychain to those who paid for a 1 GB+8 GB version, which most users did not receive. The watch has a MicroSD slot for storage expansion, but opening the device to use the memory card slot voids the warranty. Even the versions ordered with a memory card are shipped with the memory card not preinstalled, forcing users to open their device and thus void the warranty by breaking seal stickers. The companion app allowing users to get notifications from their smartphone on their TrueSmart was not shipped with the TrueSmart as planned. Later, Omate certified SWApp Link, developed by Cyril Preiss, as the official companion app; however, the app does provide the promised iOS compatibility. Parts of the source code are not made public as they should be under the GPLv2 license. This is an issue with most MediaTek devices. Not only does this ignore the license, but it also cripples the community development that Omate envisioned. Several backers had difficulties getting their watch through customs. 
Some backers had to pay large sums of money to get the device out of customs in some countries. The charger cable was changed to a charging cradle that has a more robust connection, but does not allow pressing the buttons while charging, such as to snooze or disable an alarm. It was possible to pledge for (order) extra accessories, such as charging cables and batteries, but several supporters did not receive their accessories. Even some accessories that are part of the basic package were not always shipped, such as the second battery that was an (achieved) stretch goal. Google certification issue (see below) Google Certification Issue During the Kickstarter campaign, Omate claimed the TrueSmart would run Google Play, with the heading of the first Kickstarter update being "Yes, Google Play Apps Store!". However, an article on Phandroid revealed the watch could not ship with official Google Services support because it did not meet the Android Compatibility Definition Document requirements. However, Omate did not update their campaign to state the lack of Google certification before the campaign ended. It was not until 22 October 2013, a month after the Kickstarter campaign ended, that Omate acknowledged that the TrueSmart would not be able to pass the Google Certification, after receiving official feedback from Google. Therefore, the TrueSmart shipped without the Google Apps, and Omate did not support any side loading but instead created its own application store, called the Ostore, featuring fewer than 50 apps in multiple categories, but including third-party application stores such as Aptoide and 1mobilemarket. See also Wearable computer MetaWatch References External links Official forum (down since 13 February 2014) Unofficial forum: Smartwatch Planet - Be smart, use your watch! TrueSmart FAQ Watch brands Smartwatches Kickstarter-funded products Products introduced in 2013
Omate TrueSmart
Technology
1,990
73,096,333
https://en.wikipedia.org/wiki/HD%20149404
HD 149404, also known as HR 6164 and V918 Scorpii, is a star system about 4,300 light years from the Earth, in the constellation Scorpius. It is a 5th magnitude star, so it is faintly visible to the naked eye of an observer far from city lights. It is a rotating ellipsoidal variable, a binary star for which the two stars' combined brightness varies slightly, from magnitude 5.42 to 5.50, during their 9.8-day orbital period. It is one of the brightest members of the Ara OB1 association, which has the open cluster NGC 6193 at its center. The brightness variability of HD 149404 was marginally detected by the Argentinian astronomer Alejandro Feinstein during a photoelectric photometry study undertaken from 1963 through 1965. It was given the variable star designation V918 Scorpii in 1980. In 1977, Peter Conti et al. discovered that HD 149404 has double spectral lines, implying it is a spectroscopic binary. Philip Massey and Peter Conti derived the first set of orbital elements for the binary system in 1979. The secondary star in the HD 149404 system is believed to have originally been the more massive of the two, but it is now less massive than the primary due to mass transfer caused by Roche lobe overflow in the past. The secondary may still be close to filling its Roche lobe. It is a rare ON supergiant, a star with unusually strong absorption lines of nitrogen in its stellar spectrum. Spectroscopic studies show that both stars have a stellar wind, and a shock is formed where the two winds collide, which produces emission line features. References Scorpius 081305 006164 Scorpii, V918 Binary stars Rotating ellipsoidal variables O-type supergiants Emission-line stars
HD 149404
Astronomy
393
48,361,236
https://en.wikipedia.org/wiki/NGC%20129
NGC 129 is an open cluster in the constellation Cassiopeia. It was discovered by William Herschel in 1788. It is located almost exactly halfway between the bright stars Caph (β Cassiopeiae) and γ Cassiopeiae. It is large but not dense and can be observed with binoculars, through which the most obvious component is a small triangle of stars of magnitude 8 and 9, located in the center of the cluster. NGC 129 contains several giant stars. The brightest member of the cluster is DL Cassiopeiae, a binary system which contains a Cepheid variable with an 8.00 day period. Using the fluctuations of the brightness of DL Cassiopeiae from magnitude 8.7 to 9.28, Gieren et al. in 1994 determined the distance of NGC 129 to be 2,034 ± 110 pc (6,630 ± 360 ly), much larger than the distance obtained by Turner et al. (1992), who derived a distance of 1,670 ± 13 pc from ZAMS fitting of the cluster. A possible cause of this difference is differing estimates of the extinction and reddening of the cluster's stars. Another Cepheid variable, V379 Cas, is also a possible member of NGC 129. References External links 0129 Cassiopeia (constellation) NGC 0129 Astronomical objects discovered in 1788 Discoveries by William Herschel
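The tension between the two distances comes down to how the Cepheid's apparent brightness is corrected for intervening dust. A minimal sketch of the distance-modulus arithmetic (the magnitudes and extinction values here are illustrative assumptions, not the values used by either study):

```python
def distance_pc(m_app: float, M_abs: float, A_V: float) -> float:
    """Invert the distance modulus m - M = 5*log10(d/pc) - 5 + A_V."""
    return 10 ** ((m_app - M_abs - A_V + 5.0) / 5.0)

# A Cepheid seen at apparent magnitude 9.0 whose period-luminosity
# relation implies absolute magnitude -3.0 (both values assumed):
for A_V in (1.4, 1.8):              # two assumed amounts of extinction
    print(f"A_V = {A_V}: d ~ {distance_pc(9.0, -3.0, A_V):,.0f} pc")
# A_V = 1.4 gives ~1,300 pc; A_V = 1.8 gives ~1,100 pc.  More assumed
# dust means the star is intrinsically closer, so differing reddening
# estimates translate directly into differing cluster distances.
```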
NGC 129
Astronomy
291
21,161,599
https://en.wikipedia.org/wiki/Bithionol
Bithionol is an antibacterial, anthelmintic, and algaecide. It is used to treat Anoplocephala perfoliata (tapeworms) in horses and Fasciola hepatica (liver flukes). Mechanism of action Bithionol has been shown to be a potent inhibitor of soluble adenylyl cyclase, an intracellular enzyme important in the catalysis of adenosine triphosphate (ATP) to cyclic adenosine monophosphate (cAMP). Soluble adenylyl cyclase is uniquely activated by bicarbonate. The cAMP formed by this enzyme is associated with capacitation of sperm, eye pressure regulation, acid-base regulation, and astrocyte/neuron communication. It is related to the organochlorine hexachlorophene, which has been shown to be an isomer-specific inhibitor of soluble adenylyl cyclase. Bithionol has two aromatic rings with a sulfur atom bonded between them and multiple chlorine substituents and hydroxyl groups attached to the phenyl rings. These functional groups are capable of hydrophobic, ionic, and polar interactions. These intermolecular interactions are responsible for the binding of bithionol to the bicarbonate binding site of soluble adenylyl cyclase efficiently enough to cause competitive inhibition with the usual bicarbonate substrate. The side chain of arginine 176 within the bicarbonate binding site interacts significantly with the aromatic ring of the bithionol molecule. This allosteric, conformational change interferes with the ability of the active site of soluble adenylyl cyclase to adequately bind ATP to convert it into cAMP. Arginine 176 usually interacts with the ATP and other catalytic ions at the active site, so when it turns from its normal position to interact with the bithionol inhibitor, it no longer functions in keeping the ATP bound to the active site. In another form of inhibition, bithionol is a much larger molecule than simple sodium bicarbonate, so it is large enough to reach through a small channel in the soluble adenylyl cyclase and interfere with binding of ATP, preventing its conversion to cAMP. This inhibition of the soluble adenylyl cyclase by bithionol at the bicarbonate binding site is demonstrated through a mixed-inhibition graph, where higher concentrations of bithionol give a lower Vmax and a larger Km. This translates to a decreased rate of reaction and a decreased apparent affinity between enzyme and substrate at higher bithionol concentrations. However, the concentrations of bithionol required to inhibit soluble adenylyl cyclase at clinically relevant levels are also cytotoxic in vivo. Thus, it cannot be used as the therapeutic drug needed to inhibit soluble adenylyl cyclase and therefore decrease the accumulation of cAMP within the cell. Nevertheless, it sheds light on the search for a compound that will eventually be able to target the bicarbonate binding site of soluble adenylyl cyclase. Bithionol is the first known soluble adenylyl cyclase inhibitor to act through the bicarbonate binding site via a mostly allosteric mechanism. Safety and regulation LD50 (oral, mouse) is 2100 mg/kg. Bithionol was formerly used in soaps and cosmetics until the U.S. FDA banned it for its photosensitizing effects. The compound has been known to cause photocontact sensitization. References Antiparasitic agents Chlorobenzene derivatives Phenols Thioethers
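The lower-Vmax, larger-Km signature described above is the textbook pattern of mixed inhibition and is easy to reproduce numerically. A minimal sketch of the standard rate law (all parameter values are arbitrary illustrations, not measured constants for bithionol or soluble adenylyl cyclase):

```python
def mixed_inhibition_rate(S, I, Vmax=100.0, Km=10.0, Ki=5.0, Ki2=20.0):
    """Michaelis-Menten rate v = Vmax*S / (a*Km + a2*S) with a mixed
    inhibitor: a = 1 + I/Ki scales Km (competitive component) and
    a2 = 1 + I/Ki2 scales Vmax (uncompetitive component)."""
    a = 1.0 + I / Ki
    a2 = 1.0 + I / Ki2
    return Vmax * S / (a * Km + a2 * S)

for I in (0.0, 10.0, 30.0):
    print(f"[I] = {I:>4}: v = {mixed_inhibition_rate(S=10.0, I=I):6.2f}")
# Apparent Vmax = Vmax/a2 falls and apparent Km = (a/a2)*Km rises as
# [I] grows (for Ki < Ki2), reproducing the pattern described above.
```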
Bithionol
Biology
751
36,224,143
https://en.wikipedia.org/wiki/Planetary%20science
Planetary science (or more rarely, planetology) is the scientific study of planets (including Earth), celestial bodies (such as moons, asteroids, comets) and planetary systems (in particular those of the Solar System) and the processes of their formation. It studies objects ranging in size from micrometeoroids to gas giants, with the aim of determining their composition, dynamics, formation, interrelations and history. It is a strongly interdisciplinary field, which originally grew from astronomy and Earth science, and now incorporates many disciplines, including planetary geology, cosmochemistry, atmospheric science, physics, oceanography, hydrology, theoretical planetary science, glaciology, and exoplanetology. Allied disciplines include space physics, when concerned with the effects of the Sun on the bodies of the Solar System, and astrobiology. There are interrelated observational and theoretical branches of planetary science. Observational research can involve combinations of space exploration, predominantly with robotic spacecraft missions using remote sensing, and comparative, experimental work in Earth-based laboratories. The theoretical component involves considerable computer simulation and mathematical modelling. Planetary scientists are generally located in the astronomy and physics or Earth sciences departments of universities or research centres, though there are several purely planetary science institutes worldwide. Generally, planetary scientists study one of the Earth sciences, astronomy, astrophysics, geophysics, or physics at the graduate level and concentrate their research in planetary science disciplines. There are several major conferences each year, and a wide range of peer-reviewed journals. Some planetary scientists work at private research centres and often initiate partnership research tasks. History The history of planetary science may be said to have begun with the Ancient Greek philosopher Democritus, who is reported by Hippolytus as saying: "The ordered worlds are boundless and differ in size, and that in some there is neither sun nor moon, but that in others, both are greater than with us, and yet with others more in number. And that the intervals between the ordered worlds are unequal, here more and there less, and that some increase, others flourish and others decay, and here they come into being and there they are eclipsed. But that they are destroyed by colliding with one another. And that some ordered worlds are bare of animals and plants and all water." In more modern times, planetary science began in astronomy, from studies of the unresolved planets. In this sense, the original planetary astronomer would be Galileo, who discovered the four largest moons of Jupiter, the mountains on the Moon, and first observed the rings of Saturn, all objects of intense later study. Galileo's study of the lunar mountains in 1609 also began the study of extraterrestrial landscapes: his observation "that the Moon certainly does not possess a smooth and polished surface" suggested that it and other worlds might appear "just like the face of the Earth itself". Advances in telescope construction and instrumental resolution gradually allowed increased identification of the atmospheric as well as surface details of the planets. The Moon was initially the most heavily studied, due to its proximity to the Earth, as it always exhibited elaborate features on its surface, and the technological improvements gradually produced more detailed lunar geological knowledge. 
In this scientific process, the main instruments were astronomical optical telescopes (and later radio telescopes) and finally robotic exploratory spacecraft, such as space probes. The Solar System has now been relatively well-studied, and a good overall understanding of the formation and evolution of this planetary system exists. However, there are large numbers of unsolved questions, and the rate of new discoveries is very high, partly due to the large number of interplanetary spacecraft currently exploring the Solar System. Disciplines Planetary science studies observational and theoretical astronomy, geology (astrogeology), atmospheric science, and an emerging subspecialty in planetary oceans, called planetary oceanography. Planetary astronomy This is both an observational and a theoretical science. Observational researchers are predominantly concerned with the study of the small bodies of the Solar System: those that are observed by telescopes, both optical and radio, so that characteristics of these bodies such as shape, spin, surface materials and weathering are determined, and the history of their formation and evolution can be understood. Theoretical planetary astronomy is concerned with dynamics: the application of the principles of celestial mechanics to the Solar System and extrasolar planetary systems. Observing exoplanets and determining their physical properties, exoplanetology, is a major area of research besides Solar System studies. Every planet has its own branch. Planetary geology In planetary science, the term geology is used in its broadest sense, to mean the study of the surface and interior parts of planets and moons, from their core to their magnetosphere. The best-known research topics of planetary geology deal with the planetary bodies in the near vicinity of the Earth: the Moon, and the two neighboring planets: Venus and Mars. Of these, the Moon was studied first, using methods developed earlier on the Earth. Planetary geology focuses on celestial objects that exhibit a solid surface or have significant solid physical states as part of their structure. Planetary geology applies geology, geophysics and geochemistry to planetary bodies. Planetary geomorphology Geomorphology studies the features on planetary surfaces and reconstructs the history of their formation, inferring the physical processes that acted on the surface. Planetary geomorphology includes the study of several classes of surface features: Impact features (multi-ringed basins, craters) Volcanic and tectonic features (lava flows, fissures, rilles) Glacial features Aeolian features Space weathering – erosional effects generated by the harsh environment of space (continuous micrometeorite bombardment, high-energy particle rain, impact gardening). For example, the thin dust cover on the surface of the lunar regolith is a result of micrometeorite bombardment. Hydrological features: the liquid involved can range from water to hydrocarbon and ammonia, depending on the location within the Solar System. This category includes the study of paleohydrological features (paleochannels, paleolakes). The history of a planetary surface can be deciphered by mapping features from top to bottom according to their deposition sequence, as first determined on terrestrial strata by Nicolas Steno. For example, stratigraphic mapping prepared the Apollo astronauts for the field geology they would encounter on their lunar missions. 
Overlapping sequences were identified on images taken by the Lunar Orbiter program, and these were used to prepare a lunar stratigraphic column and geological map of the Moon. Cosmochemistry, geochemistry and petrology One of the main problems when generating hypotheses on the formation and evolution of objects in the Solar System is the lack of samples that can be analyzed in the laboratory, where a large suite of tools is available, and the full body of knowledge derived from terrestrial geology can be brought to bear. Direct samples from the Moon, asteroids and Mars are present on Earth, removed from their parent bodies, and delivered as meteorites. Some of these have suffered contamination from the oxidising effect of Earth's atmosphere and the infiltration of the biosphere, but those meteorites collected in the last few decades from Antarctica are almost entirely pristine. The different types of meteorites that originate from the asteroid belt cover almost all parts of the structure of differentiated bodies: meteorites even exist that come from the core-mantle boundary (pallasites). The combination of geochemistry and observational astronomy has also made it possible to trace the HED meteorites back to a specific asteroid in the main belt, 4 Vesta. The comparatively few known Martian meteorites have provided insight into the geochemical composition of the Martian crust, although the unavoidable lack of information about their points of origin on the diverse Martian surface has meant that they do not provide more detailed constraints on theories of the evolution of the Martian lithosphere. As of July 24, 2013, 65 samples of Martian meteorites had been discovered on Earth. Many were found in either Antarctica or the Sahara Desert. During the Apollo program, 384 kilograms of lunar samples were collected and transported to the Earth, and three Soviet Luna robots also delivered regolith samples from the Moon. These samples provide the most comprehensive record of the composition of any Solar System body besides the Earth. The number of known lunar meteorites has grown quickly in recent years – as of April 2008, 54 meteorites had been officially classified as lunar. Eleven of these are from the US Antarctic meteorite collection, six from the Japanese Antarctic meteorite collection, and the other 37 from hot desert localities in Africa, Australia, and the Middle East. The total mass of recognized lunar meteorites is close to 50 kg. Planetary geophysics and space physics Space probes made it possible to collect data in not only the visible light region but in other areas of the electromagnetic spectrum. The planets can be characterized by their force fields: gravity and their magnetic fields, which are studied through geophysics and space physics. Measuring the changes in acceleration experienced by spacecraft as they orbit has allowed fine details of the gravity fields of the planets to be mapped. For example, in the 1970s, the gravity field disturbances above lunar maria were measured through lunar orbiters, which led to the discovery of concentrations of mass, mascons, beneath the Imbrium, Serenitatis, Crisium, Nectaris and Humorum basins. If a planet's magnetic field is sufficiently strong, its interaction with the solar wind forms a magnetosphere around the planet. Early space probes discovered the gross dimensions of the terrestrial magnetic field, which extends about 10 Earth radii towards the Sun. 
The solar wind, a stream of charged particles, streams out and around the terrestrial magnetic field, and stretches it into a magnetic tail extending hundreds of Earth radii downstream. Inside the magnetosphere, there are relatively dense regions of solar wind particles, the Van Allen radiation belts. Planetary geophysics includes, but is not limited to, seismology and tectonophysics, geophysical fluid dynamics, mineral physics, geodynamics, mathematical geophysics, and geophysical surveying. Planetary geodesy Planetary geodesy (also known as planetary geodetics) deals with the measurement and representation of the planets of the Solar System, their gravitational fields and geodynamic phenomena (polar motion in three-dimensional, time-varying space). The science of geodesy has elements of both astrophysics and planetary sciences. The shape of the Earth is to a large extent the result of its rotation, which causes its equatorial bulge, and the competition of geologic processes such as the collision of plates and of vulcanism, resisted by the Earth's gravity field. These principles can be applied to the solid surface of Earth (orogeny): few mountains exceed a certain height, and few deep sea trenches exceed a comparable depth, because quite simply, a mountain much taller than that would develop so much pressure at its base, due to gravity, that the rock there would become plastic, and the mountain would slump back down in a geologically insignificant time. Some or all of these geologic principles can be applied to other planets besides Earth. For instance, on Mars, whose surface gravity is much less, the largest volcano, Olympus Mons, reaches a height at its peak that could not be maintained on Earth. The Earth geoid is essentially the figure of the Earth abstracted from its topographic features. Therefore, the Mars geoid (areoid) is essentially the figure of Mars abstracted from its topographic features. Surveying and mapping are two important fields of application of geodesy. Planetary atmospheric science An atmosphere is an important transitional zone between the solid planetary surface and the higher rarefied ionizing and radiation belts. Not all planets have atmospheres: their existence depends on the mass of the planet, and the planet's distance from the Sun – too distant and frozen atmospheres occur. Besides the four giant planets, three of the four terrestrial planets (Earth, Venus, and Mars) have significant atmospheres. Two moons have significant atmospheres: Saturn's moon Titan and Neptune's moon Triton. A tenuous atmosphere exists around Mercury. The effects of the rotation rate of a planet about its axis can be seen in atmospheric streams and currents. Seen from space, these features show as bands and eddies in the cloud system and are particularly visible on Jupiter and Saturn. Planetary oceanography Exoplanetology Exoplanetology studies exoplanets, the planets existing outside our Solar System. Until recently, the means of studying exoplanets have been extremely limited, but with the current rate of innovation in research technology, exoplanetology has become a rapidly developing subfield of astronomy. Comparative planetary science Planetary science frequently makes use of the method of comparison to give a greater understanding of the object of study. 
This can involve comparing the dense atmospheres of Earth and Saturn's moon Titan, the evolution of outer Solar System objects at different distances from the Sun, or the geomorphology of the surfaces of the terrestrial planets, to give only a few examples. The main comparison that can be made is to features on the Earth, as it is much more accessible and allows a much greater range of measurements to be made. Earth analog studies are particularly common in planetary geology, geomorphology, and also in atmospheric science. The use of terrestrial analogs was first described by Gilbert (1886). In fiction In Frank Herbert's 1965 science fiction novel Dune, the major secondary character Liet-Kynes serves as the "Imperial Planetologist" for the fictional planet Arrakis, a position he inherited from his father Pardot Kynes. In this role, a planetologist is described as having skills of an ecologist, geologist, meteorologist, and biologist, as well as basic understandings of human sociology. The planetologists apply this expertise to the study of entire planets. In the Dune series, planetologists are employed to understand planetary resources and to plan terraforming or other planetary-scale engineering projects. This fictional position in Dune has had an impact on the discourse surrounding planetary science itself and is referred to by one author as a "touchstone" within the related disciplines. In one example, a publication by Sybil P. Seitzinger in the journal Nature opens with a brief introduction on the fictional role in Dune, and suggests we should consider appointing individuals with skills similar to those of Liet-Kynes to help with managing human activity on Earth. Professional activity Journals Annual Review of Earth and Planetary Sciences Earth and Planetary Science Letters Earth, Moon, and Planets Geochimica et Cosmochimica Acta Icarus Journal of Geophysical Research – Planets Meteoritics and Planetary Science Planetary and Space Science The Planetary Science Journal Professional bodies This non-exhaustive list includes those institutions and universities with major groups of people working in planetary science. Alphabetical order is used. Division for Planetary Sciences (DPS) of the American Astronomical Society American Geophysical Union Meteoritical Society Europlanet Government space agencies Canadian Space Agency (CSA) China National Space Administration (CNSA, People's Republic of China). French National Centre of Space Research Deutsches Zentrum für Luft- und Raumfahrt e.V., (German: abbreviated DLR), the German Aerospace Center European Space Agency (ESA) Indian Space Research Organisation (ISRO) Israel Space Agency (ISA) Italian Space Agency Japan Aerospace Exploration Agency (JAXA) NASA (National Aeronautics and Space Administration, United States of America) JPL GSFC Ames National Space Organization (Taiwan). Russian Federal Space Agency UK Space Agency (UKSA). Major conferences Lunar and Planetary Science Conference (LPSC), organized by the Lunar and Planetary Institute in Houston. Held annually since 1970, occurs in March. Division for Planetary Sciences (DPS) meeting held annually since 1970 at a different location each year, predominantly within the mainland US. Occurs around October. American Geophysical Union (AGU) annual Fall meeting in December in San Francisco. American Geophysical Union (AGU) Joint Assembly (co-sponsored with other societies) in April–May, in various locations around the world. 
Meteoritical Society annual meeting, held during the Northern Hemisphere summer, generally alternating between North America and Europe. European Planetary Science Congress (EPSC), held annually around September at a location within Europe. Smaller workshops and conferences on particular fields occur worldwide throughout the year. See also Areography (geography of Mars) Planetary cartography Planetary coordinate system Selenography – study of the surface and physical features of the Moon Theoretical planetology Timeline of Solar System exploration References Further reading Carr, Michael H., Saunders, R. S., Strom, R. G., Wilhelms, D. E. 1984. The Geology of the Terrestrial Planets. NASA. Morrison, David. 1994. Exploring Planetary Worlds. W. H. Freeman. Hargitai H et al. (2015) Classification and Characterization of Planetary Landforms. In: Hargitai H (ed) Encyclopedia of Planetary Landforms. Springer. https://link.springer.com/content/pdf/bbm%3A978-1-4614-3134-3%2F1.pdf Hauber E et al. (2019) Planetary geologic mapping. In: Hargitai H (ed) Planetary Cartography and GIS. Springer. Page D (2015) The Geology of Planetary Landforms. In: Hargitai H (ed) Encyclopedia of Planetary Landforms. Springer. Rossi, A.P., van Gasselt S (eds) (2018) Planetary Geology. Springer External links Planetary Science Research Discoveries (articles) The Planetary Society (world's largest space-interest group: see also their active news blog) Planetary Exploration Newsletter (PSI-published professional newsletter, weekly distribution) Women in Planetary Science (professional networking and news) Space science Astronomical sub-disciplines
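Returning to the strength-versus-gravity argument in the planetary geodesy section above: the maximum height of a mountain can be estimated by equating the pressure at its base with the stress at which rock begins to flow. A minimal sketch, where the yield stress and crustal density are rough assumed figures rather than measured values:

```python
# If rock deforms plastically above some yield stress, the tallest
# sustainable mountain satisfies rho * g * h ~ yield_stress, so the
# limiting height scales as 1/g.
YIELD_STRESS = 2.5e8   # Pa, assumed effective strength of crustal rock
RHO_ROCK = 2.9e3       # kg/m^3, assumed mean crustal density

for body, g in (("Earth", 9.81), ("Mars", 3.71)):
    h_max = YIELD_STRESS / (RHO_ROCK * g)
    print(f"{body}: h_max ~ {h_max / 1000:.0f} km")
# Earth: ~9 km, Mars: ~23 km.  The lower Martian gravity is why a
# volcano the size of Olympus Mons can stand on Mars but not on Earth.
```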
Planetary science
Astronomy
3,678
2,285,301
https://en.wikipedia.org/wiki/Mixed%20oxide
In chemistry, a mixed oxide is a somewhat informal name for an oxide that contains cations of more than one chemical element or cations of a single element in several states of oxidation. The term is usually applied to solid ionic compounds that contain the oxide anion and two or more element cations. Typical examples are ilmenite (FeTiO3), a mixed oxide of iron (Fe2+) and titanium (Ti4+) cations, perovskite and garnet. The cations may be the same element in different ionization states: a notable example is magnetite (Fe3O4), which is also known as ferrosoferric oxide, and contains the cations Fe2+ ("ferrous" iron) and Fe3+ ("ferric" iron) in 1:2 ratio. Other notable examples include red lead (Pb3O4), the ferrites, and the yttrium aluminum garnet (Y3Al5O12), used in lasers. The term is sometimes also applied to compounds of oxygen and two or more other elements, where some or all of the oxygen atoms are covalently bound into oxyanions. In sodium zincate (Na2ZnO2), for example, the oxygens are bound to the zinc atoms forming zincate anions. (On the other hand, strontium titanate (SrTiO3), despite its name, contains Ti4+ cations and not the TiO3^2- anion.) Sometimes the term is applied loosely to solid solutions of metal oxides rather than chemical compounds, or to fine mixtures of two or more oxides. Mixed oxide minerals are plentiful in nature. Synthetic mixed oxides are components of many ceramics with remarkable properties and important advanced technological applications, such as strong magnets, fine optics, lasers, semiconductors, piezoelectrics, superconductors, catalysts, refractories, gas mantles, nuclear fuels, and more. Piezoelectric mixed oxides, in particular, are extensively used in pressure and strain gauges, microphones, ultrasound transducers, micromanipulators, delay lines, etc. See also Complex oxide Double salt MOX fuel References
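The 1:2 ferrous-to-ferric ratio in magnetite is fixed by charge neutrality, which is simple to verify; a minimal bookkeeping sketch:

```python
def is_charge_neutral(ions):
    """ions: iterable of (count, ionic_charge) pairs per formula unit."""
    return sum(n * q for n, q in ions) == 0

# Magnetite Fe3O4: one Fe(2+), two Fe(3+), four O(2-) per formula unit.
print(is_charge_neutral([(1, +2), (2, +3), (4, -2)]))   # True
# Red lead Pb3O4: two Pb(2+), one Pb(4+), four O(2-).
print(is_charge_neutral([(2, +2), (1, +4), (4, -2)]))   # True
```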
Mixed oxide
Chemistry
414
1,490,598
https://en.wikipedia.org/wiki/Index%20locking
In databases, an index is a data structure, part of the database, used by a database system to efficiently navigate access to user data. Index data are system data distinct from user data, and consist primarily of pointers. Changes in a database (by insert, delete, or modify operations) may require indexes to be updated to maintain accurate user data accesses. Index locking is a technique used to maintain index integrity. A portion of an index is locked during a database transaction when this portion is being accessed by the transaction as a result of an attempt to access related user data. Additionally, special database system transactions (not user-invoked transactions) may be invoked to maintain and modify an index, as part of a system's self-maintenance activities. When a portion of an index is locked by a transaction, other transactions may be blocked from accessing this index portion (blocked from modifying, and even from reading it, depending on lock type and needed operation). The index locking protocol guarantees that the phantom read phenomenon will not occur. The index locking protocol states: Every relation must have at least one index. A transaction can access tuples only after finding them through one or more indices on the relation. A transaction Ti that performs a lookup must lock all the index leaf nodes that it accesses, in S-mode, even if the leaf node does not contain any tuple satisfying the index lookup (e.g. for a range query, no tuple in a leaf is in the range). A transaction Ti that inserts, updates or deletes a tuple ti in a relation must update all indices to the relation, and it must obtain exclusive locks on all index leaf nodes affected by the insert/update/delete. The rules of the two-phase locking protocol must be observed. Specialized concurrency control techniques exist for accessing indexes. These techniques depend on the index type, and take advantage of its structure. They are typically much more effective than applying to indexes the common concurrency control methods used for user data. Notable and widely researched are specialized techniques for B-trees (B-Tree concurrency control), which are regularly used as database indexes. Index locks are used to coordinate threads accessing indexes concurrently, and are typically shorter-lived than the common transaction locks on user data. In professional literature, they are often called latches. See also Database index Concurrency control Lock (database) B-Tree concurrency control References Databases Transaction processing Concurrency control
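A minimal sketch of the protocol's locking rules in Python. The lock table and the B-tree helpers (leaves_overlapping, leaves_affected_by, search, add) are hypothetical stand-ins for illustration, not the API of any particular database system:

```python
from enum import Enum

class LockMode(Enum):
    S = "shared"     # concurrent readers allowed
    X = "exclusive"  # sole owner; blocks readers and writers

class Transaction:
    """Toy transaction following the index locking protocol with
    two-phase locking: locks only accumulate, then all release at end."""

    def __init__(self, lock_table):
        self.lock_table = lock_table
        self.held = []

    def _lock(self, leaf_id, mode):
        self.lock_table.acquire(leaf_id, mode, owner=self)  # may block
        self.held.append(leaf_id)

    def lookup(self, index, key_range):
        # S-lock EVERY leaf the scan touches, even ones holding no
        # matching tuple, so a concurrent insert into the scanned range
        # (a phantom) must wait for this transaction to finish.
        for leaf in index.leaves_overlapping(key_range):
            self._lock(leaf.id, LockMode.S)
        return index.search(key_range)

    def insert(self, index, key, tup):
        # X-lock every leaf the insert will modify before changing it.
        for leaf in index.leaves_affected_by(key):
            self._lock(leaf.id, LockMode.X)
        index.add(key, tup)

    def commit(self):
        for leaf_id in self.held:            # shrinking phase
            self.lock_table.release(leaf_id, owner=self)
        self.held.clear()
```

Locking even the empty leaves in lookup is the step that blocks phantoms: any insert into the scanned range would need an X-lock on a leaf the reader already holds in S-mode.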
Index locking
Technology
493
203,897
https://en.wikipedia.org/wiki/Hyundai%20Group
Hyundai Group is a South Korean conglomerate founded by Chung Ju-yung. The group was founded in 1947 as a construction company. With government assistance, Chung and his family members rapidly expanded into various industries, eventually becoming South Korea's second chaebol. Chung Ju-yung was directly in control of the company until his death in 2001. The company spun off many of its better known businesses after the 1997 Asian financial crisis and founder Chung Ju-yung's death, including Hyundai Motor Group, Hyundai Department Store Group, and Hyundai Heavy Industries Group. The Hyundai Group now focuses on elevators and tourism to Mount Kumgang. Etymology The name "Hyundai" comes from the Korean word 현대 (hyeondae), meaning "modernity". History In 1947, Hyundai Togun (Hyundai Engineering and Construction), the initial company of the Hyundai Group, was established by Chung Ju-yung. Hyundai Construction began operating outside of South Korea in 1965, initially entering the markets of Guam, Thailand and Vietnam. In 1950, Hyundai Togun was renamed Hyundai Construction. In 1958, Keumkang Company was established to make construction materials. In 1965, Hyundai Construction began its first overseas venture, a highway project in Thailand. In 1967, Hyundai Motors was established. Hyundai Heavy Industries was founded in 1973, and completed the construction of its first ships in June 1974. In 1975, the group began construction on an integrated car factory and launched a new Korean vehicle. In 1973, the group's shipyard was incorporated as Hyundai Shipbuilding and Heavy Industries and renamed Hyundai Heavy Industries in 1978. In 1976, Hyundai Corporation was established as a trading arm. The same year, Asia Merchant Marine Co. was established, later renamed Hyundai Merchant Marine. In 1977, the Asan Foundation was established. In 1983, Hyundai entered the semiconductor industry through the establishment of Hyundai Electronics (renamed Hynix in 2001). In 1986, Hyundai Research Institute was established. In 1986, a Hyundai-manufactured IBM PC-XT compatible called the Blue Chip PC was sold in discount and toy stores throughout the US. It was one of the earliest PC clones marketed toward consumers instead of businesses. In 1988, Asian Sangsun was established, renamed Hyundai Logistics in 1992. By the mid-1990s Hyundai comprised over 60 subsidiary companies and was active in a diverse range of activities including automobile manufacturing, construction, chemicals, electronics, financial services, heavy industry and shipbuilding. In the same period it had total annual revenues of around US$90 billion and over 200,000 employees. In December 1995, Hyundai announced a major management restructuring, affecting 404 executives. During the 1997 Asian financial crisis, Hyundai acquired Kia Motors and LG Semi-Conductor. In 1998, Korea's economic crisis forced the group to begin restructuring efforts, which included selling off subsidiaries and focusing on five core business areas. Nevertheless, Hyundai began South Korean tourism to North Korea's Kumgangsan. In 1999, Hyundai Asan was established to operate Kumgang tourism, the Kaesong Industrial Complex, and other inter-Korean work. In April 1999, Hyundai announced an enormous corporate restructuring, involving a two-thirds reduction of the number of business units and a plan to break up the group into five independent business groups by 2003. In 2001, the founder Chung Ju-yung died, and the Hyundai Group conglomerate continued to be dismantled. 
In 2007, Hyundai Construction Equipment India Pvt. Ltd. was established in India. In 2010, Hyundai Group was selected as a preferred bidder by creditors for the acquisition of Hyundai Engineering & Construction. As of 2023, Hyundai Group "includes divisions that build and export diesel and electric locomotives, freight cars, and passenger coaches for the railroad industry, and offshore drilling and extraction equipment to the oil industry." Hyundai's "[i]nternational exports range from heavy industrial equipment to consumer products, and include cement, pianos, military uniforms, and consumer electronics products. Hyundai is represented on all continents but Australia, and has a number of international subsidiaries under its control." Logo Affiliated companies As of 2017, these are the affiliated companies of the Hyundai Group. Hyundai Power Equipment Hyundai Elevator Hyundai Movex Hyundai Asan Hyundai Research Institute Hyundai Investment Partners Hyundai Global Able Hyundai Hotel & Resort Bloomvista Hyundai Network Hyundai GBFMS Hyundai Motor Company Hyundai-branded vehicles are manufactured by Hyundai Motor Company, which along with Kia forms the Hyundai Kia Automotive Group. Headquartered in Seoul, South Korea, Hyundai operates the world's largest integrated automobile manufacturing facility, in Ulsan, which is capable of producing 1.6 million units annually. The company employs about 75,000 people around the world. Hyundai vehicles are sold in 193 countries through some 6,000 dealerships and showrooms worldwide. In 2012, Hyundai sold over 4.4 million vehicles worldwide. Popular models include the Sonata and Elantra mid-sized sedans. The Asan Foundation, established by Chung Ju-yung in 1977 with 50 percent of the stock of Hyundai Construction, subsidizes medical services in Korea primarily through the Asan Medical Center and six other hospitals. The foundation has sponsored conferences on Eastern ethics and funded academic research into traditional Korean culture. In 1991, it established the annual Filial Piety Award. See also Economy of South Korea Chaebol Hyundai Motor Company References External links Hyundai Group Website Hyundai Group Homepage Funding Universe profile Hyundai Power Equipment (UK) Conglomerate companies of South Korea Manufacturing companies of South Korea Conglomerate companies established in 1947 South Korean companies established in 1947 Hyundai Companies based in Seoul Computer companies of South Korea Computer hardware companies Electronics companies of South Korea Manufacturing companies established in 1947 Vehicle manufacturing companies established in 1947 South Korean brands
Hyundai Group
Technology
1,138
32,255,745
https://en.wikipedia.org/wiki/Otidea%20concinna
Otidea concinna is a species of apothecial fungus belonging to the family Pyronemataceae. This rather uncommon European species appears from late summer to autumn as vivid yellow elongated "ears" in small groups in woodland and parkland. Compared to some species of the genus, O. concinna looks like the tops of the 'ears' have been chopped off. Similar species include O. alutacea and O. microspora. References Pyronemataceae Fungi described in 1774 Fungi of Europe Taxa named by Christiaan Hendrik Persoon Fungus species
Otidea concinna
Biology
121
16,735,999
https://en.wikipedia.org/wiki/Basal%20electrical%20rhythm
The basal or basic electrical rhythm (BER) or electrical control activity (ECA) is the spontaneous depolarization and repolarization of pacemaker cells known as interstitial cells of Cajal (ICCs) in the smooth muscle of the stomach, small intestine, and large intestine. This electrical rhythm is spread through gap junctions in the smooth muscle of the GI tract. These pacemaker cells, also called the ICCs, control the frequency of contractions in the gastrointestinal tract. The cells can be located in either the circular or longitudinal layer of the smooth muscle in the GI tract; circular for the small and large intestine, longitudinal for the stomach. The frequency of contraction differs at each location in the GI tract, beginning with 3 per minute in the stomach, then 12 per minute in the duodenum, 9 per minute in the ileum, and a normally low rate of one contraction per 30 minutes in the large intestine, which increases three to four times a day during a phenomenon called mass movement. The basal electrical rhythm controls the frequency of contraction, but additional neuronal and hormonal controls regulate the strength of each contraction. Physiology Smooth muscle within the GI tract causes the involuntary peristaltic motion that moves consumed food down the esophagus and towards the rectum. The smooth muscle throughout most of the GI tract is divided into two layers: an outer longitudinal layer and an inner circular layer. Both layers of muscle are located within the muscularis externa. The stomach has a third layer: an innermost oblique layer. The physical contractions of the smooth muscle cells can be caused by action potentials in efferent motor neurons of the enteric nervous system, or by receptor-mediated calcium influx. These efferent motor neurons of the enteric nervous system are cholinergic and adrenergic neurons. The inner circular layer is innervated by both excitatory and inhibitory motor neurons, while the outer longitudinal layer is innervated by mainly excitatory neurons. These action potentials cause the smooth muscle cells to contract or relax, depending on the particular stimulation the cells receive. Longitudinal muscle fibers depend on calcium influx into the cell for excitation-contraction coupling, while circular muscle fibers rely on intracellular calcium release. Contraction of the smooth muscle can occur when the BER reaches its plateau (a membrane potential less negative than about −45 mV) while a simultaneous stimulatory action potential occurs. A contraction will not occur unless an action potential occurs. Generally, BER waves stimulate action potentials and action potentials stimulate contractions. The interstitial cells of Cajal are specialized pacemaker cells located in the wall of the stomach, small intestine, and large intestine. These cells are connected to the smooth muscle via gap junctions and the myenteric plexus. The cell membranes of the pacemaker cells undergo a rhythmic depolarization and repolarization from −65 mV to −45 mV. This rhythm of depolarization-repolarization of the cell membrane creates a slow wave known as a BER, and it is transmitted to the smooth muscle cells. The frequency of these depolarizations in a region of the GI tract determines the possible frequency of contractions. In order for a contraction to occur, a hormone or neurocrine signal must induce the smooth muscle cell to have an action potential. The basal electrical rhythm allows the smooth muscle cell to depolarize and contract rhythmically when exposed to hormonal signals. 
This action potential is transmitted to other smooth muscle cells via gap junctions, creating a peristaltic wave. The specific mechanism for the contraction of smooth muscle in the GI tract depends upon IP3R calcium release channels in the muscle. Calcium release from IP3-sensitive calcium stores activates calcium-dependent chloride channels. These chloride channels trigger a spontaneous transient inward current which couples the calcium oscillations to electrical activity. Frequency The number of action potentials during the plateau of a particular BER slow wave can vary. This variation in action potential generation does not impact the frequency of waves through the GI tract, only the strength of those contractile waves. Factors that impact gastric motility Gastrin - Stimulates increased contraction force and motility in the stomach, small intestine, and large intestine. CCK - Suppresses motility in the stomach and duodenum. Additionally, CCK stimulates secretion of PYY and inhibits the secretion of ghrelin. PYY - inhibition of upper GI tract motility GLP-1 - Functions as an "Ileal Brake" to inhibit upper GI tract motility when the distal gut is exposed to unabsorbed nutrients. Slows gastric emptying to promote nutrient absorption. Distension of the stomach increases motility of the stomach. Distension of the duodenum inhibits stomach motility in order to prevent the overfilling of the duodenum. Presence of fat, low pH, and hypertonic solutions cause a decrease in motility of the stomach. Sympathetic nervous system innervation inhibits gastric motility. Parasympathetic nervous system innervation stimulates gastric motility. Rate and motility are also dependent upon the meal composition. Meals that are solid and have a greater macronutrient content require slower and more forceful contraction in order to extract the maximum amount of nutrients throughout the GI tract. The cells that respond to and secrete these substances include I cells and K cells in the proximal small intestine, whose stimulation is dependent on nutrient exposure and entry into the duodenum, and L cells in the distal small intestine and colon, which are stimulated by unabsorbed nutrients and gastric emptying. The frequency of the BER, and thus the contractions, changes throughout the GI tract. The frequency in the stomach is 3 per minute, while that of the duodenum is 11 to 12 per minute and that of the ileum is 9 per minute. The colon can have a BER frequency between 2 and 13 per minute. The electrical activity is oscillatory, so that the BER has peaks and valleys when graphed over time. See also Neurogastroenterology References Digestive system Electrophysiology
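As a toy illustration of the gating logic described above (the slow wave fixes when contraction is possible, while hormonal or neural excitation determines whether it actually happens), consider the following sketch; the sinusoidal wave shape and the threshold are simplifying assumptions, not a physiological model:

```python
import math

SLOW_WAVES_PER_MIN = 3     # gastric BER frequency from the text
PLATEAU_MV = -50.0         # toy criterion: upper half of the slow wave

def membrane_potential(t_min: float) -> float:
    """Toy slow wave oscillating between about -65 and -45 mV."""
    return -55.0 + 10.0 * math.sin(2 * math.pi * SLOW_WAVES_PER_MIN * t_min)

def contracts(t_min: float, excitatory_signal: bool) -> bool:
    # Contraction needs BOTH the depolarized phase of the slow wave AND
    # an excitatory signal that triggers action potentials on top of it.
    return excitatory_signal and membrane_potential(t_min) >= PLATEAU_MV

# Even with a constant excitatory signal, contractions can occur at most
# three times per minute: the BER sets the ceiling on frequency, while
# hormones and nerves gate their occurrence and strength.
```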
Basal electrical rhythm
Biology
1,287
2,382,824
https://en.wikipedia.org/wiki/Jordanus%20%28constellation%29
Jordanus (the Jordan River) was a constellation introduced in 1612 (or 1613) on a globe by Petrus Plancius and first shown in print by Jakob Bartsch in his book Usus Astronomicus Planisphaerii Stellati (1624). One end lay in the present-day Canes Venatici and then it flowed through the areas now occupied by Leo Minor and Lynx, ending near Camelopardalis. This constellation was not adopted in the atlases of Johann Bode and fell into disuse. References See also Obsolete constellations Coelum Stellatum Christianum (Julius Schiller, 1627) Christianized the constellation Hydra as the Jordan river. Former constellations Constellations listed by Petrus Plancius Jordan River 1610s beginnings
Jordanus (constellation)
Astronomy
159
2,272,662
https://en.wikipedia.org/wiki/Charles%20J.%20Joachain
Charles J. Joachain is a Belgian physicist. Biography Born in Brussels on 9 May 1937, Charles J. Joachain obtained his Ph.D. in Physics in 1963 at the Université Libre de Bruxelles (Free University of Brussels). From 1964 to 1965 he was a Postdoctoral Fellow of the Belgian American Educational Foundation at the University of California, Berkeley and the Lawrence Berkeley Laboratory, and from 1965 to 1966 a Research Physicist at these institutions. At the Université Libre de Bruxelles he was appointed chargé de cours associé in 1965, chargé de cours in 1968, professeur extraordinaire in 1971 and professeur ordinaire in 1978. He was chairman of the Department of Physics in 1980 and 1981. He was also appointed professor at the Université Catholique de Louvain in 1984. In 2002, he became professeur ordinaire émérite (Emeritus Professor) at the Université Libre de Bruxelles and professeur honoraire at the Université Catholique de Louvain. Professor Joachain has been a visiting professor at several universities and laboratories in Europe and the United States, in particular at the University of California, Berkeley and the Lawrence Berkeley Laboratory, the Université Pierre et Marie Curie (Paris VI), the University of Rome “La Sapienza” and the Max Planck Institute for Quantum Optics in Garching. Research activities The research activities of Professor Joachain concern two areas of theoretical physics: 1) Quantum collision theory: electron and positron collisions with atomic systems, atom-atom collisions, nuclear reactions, high-energy hadron collisions with nuclei. 2) High-intensity laser-atom interactions: multiphoton ionization, harmonic generation, laser-assisted atomic collisions, attophysics, relativistic effects in laser-atom interactions. Publications Professor Joachain has published five books: 1) "Quantum Collision Theory", North Holland, Amsterdam (1975); 2d edition (1979); 3d edition (1983). 2) "Physics of Atoms and Molecules" (with B.H. Bransden), Longman, London (1983); 2d edition, Prentice Hall-Pearson (2003). 3) "Quantum Mechanics" (with B.H. Bransden), Longman, London (1989); 2d edition, Prentice Hall-Pearson (2000). 4) "Theory of Electron-Atom Collisions. Part I. Potential Scattering" (with P.G. Burke), Plenum Press, New York (1995). 5) "Atoms in Intense Laser Fields" (with N.J. Kylstra and R.M. Potvliege), Cambridge University Press (2012). He has co-edited four books: 1) "Atomic and Molecular Physics of Controlled Thermonuclear Fusion" (with D.E. Post), Plenum, New York (1983). 2) "Photon and Electron Collisions with Atoms and Molecules" (with P.G. Burke), Plenum, New York (1997). 3) "Atoms, Solids and Plasmas in Super-Intense Laser Fields" (with D. Batani, S. Martellucci and A.N. Chester), Kluwer Academic-Plenum, New York (2001). 4) "Atoms and Plasmas in Super-Intense Laser Fields" (with D. Batani and S. Martellucci), Conference Proceedings, Volume 88, Italian Physical Society, Bologna (2004). He is also the author of one hundred and forty-seven research articles and forty-five review articles in theoretical physics, devoted mainly to quantum collision theory with applications to atomic, nuclear and high-energy processes and to the theory of high-intensity laser-atom interactions. Distinctions and prizes Joachain has received many scientific distinctions and prizes, in particular the Prix Louis Empain in 1963, the Alexander von Humboldt Prize in 1998 and the Blaise Pascal Medal for Physics of the European Academy of Sciences in 2012. 
He was President of the Belgian Physical Society from 1987 to 1989 and of the “Institut des Hautes Etudes” of Belgium from 2006 to 2011. He has been a Fellow of the Institute of Physics (UK) since 1974, a Fellow of the American Physical Society since 1977 and a Doctor Honoris Causa of the University of Durham since 1989. He is a member of the Royal Academy of Science, Letters and Fine Arts of Belgium (President, 2015–16), of the Academia Europaea and of the European Academy of Sciences. References 1937 births Living people Belgian nuclear physicists Free University of Brussels (1834–1969) alumni University of California, Berkeley alumni Science teachers Belgian science writers Fellows of the Institute of Physics Theoretical physicists Fellows of the American Physical Society
Charles J. Joachain
Physics
984
32,747,596
https://en.wikipedia.org/wiki/Structural%20coloration
Structural coloration in animals, and a few plants, is the production of colour by microscopically structured surfaces fine enough to interfere with visible light instead of pigments, although some structural coloration occurs in combination with pigments. For example, peacock tail feathers are pigmented brown, but their microscopic structure makes them also reflect blue, turquoise, and green light, and they are often iridescent. Structural coloration was first described by English scientists Robert Hooke and Isaac Newton, and its principle—wave interference—explained by Thomas Young a century later. Young described iridescence as the result of interference between reflections from two or more surfaces of thin films, combined with refraction as light enters and leaves such films. The geometry then determines that at certain angles, the light reflected from both surfaces interferes constructively, while at other angles, the light interferes destructively. Different colours therefore appear at different angles. In animals such as on the feathers of birds and the scales of butterflies, interference is created by a range of photonic mechanisms, including diffraction gratings, selective mirrors, photonic crystals, crystal fibres, matrices of nanochannels and proteins that can vary their configuration. Some cuts of meat also show structural coloration due to the exposure of the periodic arrangement of the muscular fibres. Many of these photonic mechanisms correspond to elaborate structures visible by electron microscopy. In the few plants that exploit structural coloration, brilliant colours are produced by structures within cells. The most brilliant blue coloration known in any living tissue is found in the marble berries of Pollia condensata, where a spiral structure of cellulose fibrils produces Bragg's law scattering of light. The bright gloss of buttercups is produced by thin-film reflection by the epidermis supplemented by yellow pigmentation, and strong diffuse scattering by a layer of starch cells immediately beneath. Structural coloration has potential for industrial, commercial and military applications, with biomimetic surfaces that could provide brilliant colours, adaptive camouflage, efficient optical switches and low-reflectance glass. History In his 1665 book Micrographia, Robert Hooke described the "fantastical" colours of the peacock's feathers. In his 1704 book Opticks, Isaac Newton described the mechanism of the colours other than the brown pigment of peacock tail feathers. Thomas Young (1773–1829) extended Newton's particle theory of light by showing that light could also behave as a wave. He showed in 1803 that light could diffract from sharp edges or slits, creating interference patterns. In his 1892 book Animal Coloration, Frank Evers Beddard (1858–1925) acknowledged the existence of structural colours. But Beddard then largely dismissed structural coloration, firstly as subservient to pigments: "in every case the [structural] colour needs for its display a background of dark pigment;" and then by asserting its rarity: "By far the commonest source of colour in invertebrate animals is the presence in the skin of definite pigments", though he does later admit that the Cape golden mole has "structural peculiarities" in its hair that "give rise to brilliant colours". Principles Structure not pigment Structural coloration is caused by interference effects rather than by pigments. 
Colours are produced when a material is scored with fine parallel lines, or formed of one or more parallel thin layers, or otherwise composed of microstructures on the scale of the colour's wavelength. Structural coloration is responsible for the blues and greens of the feathers of many birds (the bee-eater, kingfisher and roller, for example), as well as many butterfly wings, beetle wing-cases (elytra) and (while rare among flowers) the gloss of buttercup petals. These are often iridescent, as in peacock feathers and nacreous shells such as of pearl oysters (Pteriidae) and Nautilus. This is because the reflected colour depends on the viewing angle, which in turn governs the apparent spacing of the structures responsible. Structural colours can be combined with pigment colours: peacock feathers are pigmented brown with melanin, while buttercup petals have both carotenoid pigments for yellowness and thin films for reflectiveness. Principle of iridescence Iridescence, as explained by Thomas Young in 1803, is created when extremely thin films reflect part of the light falling on them from their top surfaces. The rest of the light goes through the films, and a further part of it is reflected from their bottom surfaces. The two sets of reflected waves travel back upwards in the same direction. But since the bottom-reflected waves travelled a little farther – controlled by the thickness and refractive index of the film, and the angle at which the light fell – the two sets of waves are out of phase. When the waves are one or more whole wavelengths apart – in other words, at certain specific angles, they add (interfere constructively), giving a strong reflection. At other angles and phase differences, they can subtract, giving weak reflections. The thin film therefore selectively reflects just one wavelength – a pure colour – at any given angle, but other wavelengths – different colours – at different angles. So, as a thin-film structure such as a butterfly's wing or bird's feather moves, it seems to change colour. Mechanisms Fixed structures A number of fixed structures can create structural colours, by mechanisms including diffraction gratings, selective mirrors, photonic crystals, crystal fibres and deformed matrices. Structures can be far more elaborate than a single thin film: films can be stacked up to give strong iridescence, to combine two colours, or to balance out the inevitable change of colour with angle to give a more diffuse, less iridescent effect. Each mechanism offers a specific solution to the problem of creating a bright colour or combination of colours visible from different directions. A diffraction grating constructed of layers of chitin and air gives rise to the iridescent colours of various butterfly wing scales as well as to the tail feathers of birds such as the peacock. Hooke and Newton were correct in their claim that the peacock's colours are created by interference, but the structures responsible, being close to the wavelength of light in scale (see micrographs), were smaller than the striated structures they could see with their light microscopes. Another way to produce a diffraction grating is with tree-shaped arrays of chitin, as in the wing scales of some of the brilliantly coloured tropical Morpho butterflies (see drawing). Yet another variant exists in Parotia lawesii, Lawes's parotia, a bird of paradise. 
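As a minimal numeric sketch of the thin-film condition described under "Principle of iridescence" above (the refractive index and thickness are illustrative assumptions, not measurements of any particular scale or feather):

```python
# Constructive reflection from a single free-standing thin film at
# normal incidence, allowing for the half-wave phase shift at the top
# surface: 2 * n * d = (m + 1/2) * wavelength, for m = 0, 1, 2, ...
n = 1.56        # assumed refractive index, typical of chitin
d_nm = 200.0    # assumed film thickness in nanometres

for m in range(3):
    wavelength = 2 * n * d_nm / (m + 0.5)
    print(f"m = {m}: constructive at {wavelength:6.0f} nm")
# Orders fall near 1248, 416 and 250 nm; only 416 nm is visible, so the
# film looks blue-violet.  Tilting shortens the optical path difference
# and shifts the reflected colour, which is the iridescence itself.
```

In Parotia lawesii, whose breast feathers were introduced just above, this basic film geometry is folded into more elaborate V-shaped reflectors, as the next paragraph describes.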
The barbules of the feathers of its brightly coloured breast patch are V-shaped, creating thin-film microstructures that strongly reflect two different colours, bright blue-green and orange-yellow. When the bird moves the colour switches sharply between these two colours, rather than drifting iridescently. During courtship, the male bird systematically makes small movements to attract females, so the structures must have evolved through sexual selection. Photonic crystals can be formed in different ways. In Parides sesostris, the emerald-patched cattleheart butterfly, photonic crystals are formed of arrays of nano-sized holes in the chitin of the wing scales. The holes have a diameter of about 150 nanometres and are about the same distance apart. The holes are arranged regularly in small patches; neighbouring patches contain arrays with differing orientations. The result is that these emerald-patched cattleheart scales reflect green light evenly at different angles instead of being iridescent. In Lamprocyphus augustus, a weevil from Brazil, the chitin exoskeleton is covered in iridescent green oval scales. These contain diamond-based crystal lattices oriented in all directions to give a brilliant green coloration that hardly varies with angle. The scales are effectively divided into pixels about a micrometre wide. Each such pixel is a single crystal and reflects light in a direction different from its neighbours. Selective mirrors to create interference effects are formed of micron-sized bowl-shaped pits lined with multiple layers of chitin in the wing scales of Papilio palinurus, the emerald swallowtail butterfly. These act as highly selective mirrors for two wavelengths of light. Yellow light is reflected directly from the centres of the pits; blue light is reflected twice by the sides of the pits. The combination appears green, but can be seen as an array of yellow spots surrounded by blue circles under a microscope. Crystal fibres, formed of hexagonal arrays of hollow nanofibres, create the bright iridescent colours of the bristles of Aphrodita, the sea mouse, a non-wormlike genus of marine annelids. The colours are aposematic, warning predators not to attack. The chitin walls of the hollow bristles form a hexagonal honeycomb-shaped photonic crystal; the hexagonal holes are 0.51 μm apart. The structure behaves optically as if it consisted of a stack of 88 diffraction gratings, making Aphrodita one of the most iridescent of marine organisms. Deformed matrices, consisting of randomly oriented nanochannels in a spongelike keratin matrix, create the diffuse non-iridescent blue colour of Ara ararauna, the blue-and-yellow macaw. Since the reflections are not all arranged in the same direction, the colours, while still magnificent, do not vary much with angle, so they are not iridescent. Spiral coils, formed of helicoidally stacked cellulose microfibrils, create Bragg reflection in the "marble berries" of the African herb Pollia condensata, resulting in the most intense blue coloration known in nature. The berry's surface has four layers of cells with thick walls, containing spirals of transparent cellulose spaced so as to allow constructive interference with blue light. Below these cells is a layer two or three cells thick containing dark brown tannins. Pollia produces a stronger colour than the wings of Morpho butterflies, and is one of the first instances of structural coloration known from any plant. 
Each cell has its own thickness of stacked fibres, making it reflect a different colour from its neighbours, and producing a pixellated or pointillist effect with different blues speckled with brilliant green, purple, and red dots. The fibres in any one cell are either left-handed or right-handed, so each cell circularly polarizes the light it reflects in one direction or the other. Pollia is the first organism known to show such random polarization of light, which, nevertheless, does not have a visual function, as the seed-eating birds that visit this plant species are not able to perceive polarised light. Spiral microstructures are also found in scarab beetles, where they produce iridescent colours. Thin film with diffuse reflector, based on the top two layers of a buttercup's petals. The brilliant yellow gloss derives from a combination, rare among plants, of yellow pigment and structural coloration. The very smooth upper epidermis acts as a reflective and iridescent thin film; for example, in Ranunculus acris, the layer is 2.7 micrometres thick. The unusual starch cells form a diffuse but strong reflector, enhancing the flower's brilliance. The curved petals form a paraboloidal dish which directs the sun's heat to the reproductive parts at the centre of the flower, keeping it a few degrees Celsius above the ambient temperature. Surface gratings, consisting of ordered surface features that appear when ordered muscle cells are exposed on cuts of meat. The structural coloration on meat cuts appears only after the ordered pattern of muscle fibrils is exposed and light is diffracted by the proteins in the fibrils. The coloration or wavelength of the diffracted light depends on the angle of observation and can be enhanced by covering the meat with translucent foils. Roughening the surface or removing water content by drying causes the structure to collapse and the structural coloration to disappear. Interference from multiple total internal reflections can occur in microscale structures, such as sessile water droplets and biphasic oil-in-water droplets, as well as polymer microstructured surfaces. In this structural coloration mechanism, light rays that travel by different paths of total internal reflection along an interface interfere to generate iridescent colour. Variable structures Some animals, including cephalopods such as squid, are able to vary their colours rapidly for both camouflage and signalling. The mechanisms include reversible proteins that can be switched between two configurations. The configuration of reflectin proteins in chromatophore cells in the skin of the Doryteuthis pealeii squid is controlled by electric charge. When charge is absent, the proteins stack together tightly, forming a thin, more reflective layer; when charge is present, the molecules stack more loosely, forming a thicker layer. Since chromatophores contain multiple reflectin layers, the switch changes the layer spacing and hence the colour of light that is reflected. Blue-ringed octopuses spend much of their time hiding in crevices whilst displaying effective camouflage patterns with their dermal chromatophore cells. If they are provoked, they quickly change colour, becoming bright yellow with each of the 50–60 rings flashing bright iridescent blue within a third of a second. In the greater blue-ringed octopus (Hapalochlaena lunulata), the rings contain multi-layer iridophores. These are arranged to reflect blue–green light over a wide range of viewing directions.
The fast flashes of the blue rings are achieved using muscles under neural control. Under normal circumstances, each ring is hidden by contraction of muscles above the iridophores. When these relax and muscles outside the ring contract, the bright blue rings are exposed. Examples In technology Gabriel Lippmann won the Nobel Prize in Physics in 1908 for his work on a structural coloration method of colour photography, the Lippmann plate. This used a photosensitive emulsion fine enough for the interference caused by light waves reflecting off the back of the glass plate to be recorded in the thickness of the emulsion layer, in a monochrome (black and white) photographic process. Shining white light through the plate effectively reconstructs the colours of the photographed scene. In 2010, the dressmaker Donna Sgro made a dress from Teijin Fibers' Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure of Morpho butterfly wing scales. The fibres are composed of 61 flat alternating layers, between 70 and 100 nanometres thick, of two plastics with different refractive indices, nylon and polyester, in a transparent nylon sheath with an oval cross-section. The materials are arranged so that the colour does not vary with angle. The fibres have been produced in red, green, blue, and violet. Several countries and regions, including the U.S., European Union, and Brazil, use banknotes that include optically variable ink, which is structurally coloured, as a security feature. These pearlescent inks appear as different colours depending on the angle the banknote is viewed from. Because the ink is hard to obtain, and because a photocopier or scanner (which works from only one angle) cannot reproduce or even perceive the color-shifting effect, the ink serves to make counterfeiting more difficult. Structural coloration could be further exploited industrially and commercially, and research that could lead to such applications is under way. A direct parallel would be to create active or adaptive military camouflage fabrics that vary their colours and patterns to match their environments, just as chameleons and cephalopods do. The ability to vary reflectivity to different wavelengths of light could also lead to efficient optical switches that could function like transistors, enabling engineers to make fast optical computers and routers. The surface of the compound eye of the housefly is densely packed with microscopic projections that have the effect of reducing reflection and hence increasing transmission of incident light. Similarly, the eyes of some moths have antireflective surfaces, again using arrays of pillars smaller than the wavelength of light. "Moth-eye" nanostructures could be used to create low-reflectance glass for windows, solar cells, display devices, and military stealth technologies. Antireflective biomimetic surfaces using the "moth-eye" principle can be manufactured by first creating a mask by lithography with gold nanoparticles, and then performing reactive-ion etching. See also Animal coloration Camouflage Patterns in nature References Bibliography Pioneering books Beddard, Frank Evers (1892). Animal Coloration, An Account of the Principal Facts and Theories Relating to the Colours and Markings of Animals. Swan Sonnenschein, London. --- 2nd Edition, 1895. Hooke, Robert (1665). Micrographia, John Martyn and James Allestry, London. Newton, Isaac (1704). Opticks, William Innys, London. Research Fox, D.L. (1992). 
Animal Biochromes and Animal Structural Colours. University of California Press. Johnsen, S. (2011). The Optics of Life: A Biologist's Guide to Light in Nature. Princeton University Press. Kolle, M. (2011). Photonic Structures Inspired by Nature. Springer. General books Brebbia, C.A. (2011). Colour in Art, Design and Nature. WIT Press. Lee, D.W. (2008). Nature's Palette: The Science of Plant Color. University of Chicago Press. Kinoshita, S. (2008). Structural Color in the Realm of Nature. World Scientific Publishing. Mouchet, S. R., Deparis, O. (2021). Natural Photonics and Bioinspiration. Artech House. External links National Geographic News: Peacock Plumage Secrets Uncovered Causes of Color: Peacock feathers Butterflies and Gyroids – Numberphile Animal coat colors Color Nanotechnology Optical materials
Structural coloration
Physics,Materials_science,Engineering
3,738
70,128,178
https://en.wikipedia.org/wiki/Cerasicoccus
Cerasicoccus is a Gram-negative, non-motile, obligately aerobic and chemoheterotrophic bacterial genus from the family Puniceicoccaceae. See also List of bacterial orders List of bacteria genera References Verrucomicrobiota Bacteria genera Taxa described in 2007
Cerasicoccus
Biology
65
35,292,764
https://en.wikipedia.org/wiki/Stig%20Stenholm
Stig Torsten Stenholm (26 February 1939 – 30 September 2017) was a theoretical physicist who formerly held an Academy of Finland professorship. Education and career Stenholm obtained an engineering degree at the Helsinki University of Technology (HUT), and a master of science degree in mathematics at the University of Helsinki, both in 1964. He then earned his Dr. phil. at Oxford in 1967 on the topic of quantum liquids under the supervision of Dirk ter Haar. From 1967 to 1968, he performed postdoctoral work at Yale University. He obtained a position as professor at the University of Helsinki in 1974. In 1980, Stenholm was appointed as the scientific director of the Research Institute for Theoretical Physics (TFT). His colleague Kalle-Antti Suominen later affirmed: "As a director Stig was very broad-minded and without this the happy atmosphere of TFT could not have existed." In the 1990s, TFT was superseded by the Helsinki Institute of Physics (HIP), which took its place. In 1997, Stenholm moved to the Royal Institute of Technology in Stockholm, Sweden. He retired in 2005. He delivered the presentation speech for the 2005 Nobel Prize in Physics at the Stockholm Concert Hall. Work Stenholm specialised in quantum optics and worked, among other topics, on laser cooling, Bose–Einstein condensation and quantum information. Honors He received an Academy of Finland professorship for the work he performed from 1992 to 1997, and he was a member of the Royal Swedish Academy of Sciences, the Austrian Academy of Sciences, the Finnish Society of Sciences and Letters and the Swedish Academy of Engineering Sciences in Finland. Books Stenholm, Stig: The Quest for Reality. Bohr and Wittgenstein – two complementary views. Oxford University Press. 2011. (abstract) Stenholm, Stig & Suominen, Kalle-Antti: Quantum Approach to Informatics. Wiley, 2005. Stenholm, Stig: The Foundations of Laser Spectroscopy. Wiley / Dover Books on Physics, 1984. Stenholm, Stig: The semiclassical theory of the gas laser. Pergamon Press, 1971. References External links Stig Stenholm. Scientific Commons Quantum complex systems: entanglement and decoherence from nano- to macroscales – QUACS. Research Leader: Professor emeritus Stig Stenholm, KTH Research Projects Database. Theoretical physicists 1939 births 2017 deaths Academic staff of the University of Helsinki Academic staff of the KTH Royal Institute of Technology 20th-century Finnish physicists University of Helsinki alumni Members of the Royal Swedish Academy of Sciences Members of the Austrian Academy of Sciences
Stig Stenholm
Physics
552
14,662,238
https://en.wikipedia.org/wiki/Balance%20%28ability%29
Balance, in biomechanics, is the ability to maintain the line of gravity (the vertical line from the centre of mass) of a body within the base of support with minimal postural sway. Sway is the horizontal movement of the centre of gravity even when a person is standing still. A certain amount of sway is essential and inevitable due to small perturbations within the body (e.g., breathing, shifting body weight from one foot to the other or from forefoot to rearfoot) or from external triggers (e.g., visual distortions, floor translations). An increase in sway is not necessarily an indicator of dysfunctional balance so much as it is an indicator of decreased sensorimotor control. Maintaining balance Maintaining balance requires coordination of input from multiple sensory systems including the vestibular, somatosensory, and visual systems. Vestibular system: sense organs that regulate equilibrium (equilibrioception); directional information as it relates to head position (internal gravitational, linear, and angular acceleration) Somatosensory system: senses of proprioception and kinesthesia of joints; information from skin and joints (pressure and vibratory senses); spatial position and movement relative to the support surface; movement and position of different body parts relative to each other Visual system: reference to verticality of body and head motion; spatial location relative to objects The senses must detect changes of spatial orientation with respect to the base of support, regardless of whether the body moves or the base is altered. There are environmental factors that can affect balance such as light conditions, floor surface changes, alcohol, drugs, and ear infection. Balance impairments There are balance impairments associated with aging. Age-related decline in the ability of the above systems to receive and integrate sensory information contributes to poor balance in older adults. As a result, the elderly are at an increased risk of falls. In fact, one in three adults aged 65 and over will fall each year. In the case of an individual standing quietly upright, the limit of stability is defined as the amount of postural sway at which balance is lost and corrective action is required. Body sway can occur in all planes of motion, which makes it a particularly difficult ability to rehabilitate. There is strong evidence in research showing that deficits in postural balance are related to the control of medial-lateral stability and an increased risk of falling. To remain balanced, a person standing must be able to keep the vertical projection of their center of mass within their base of support, resulting in little medial-lateral or anterior-posterior sway. Ankle sprains are one of the most frequently occurring injuries among athletes and physically active people. The most common residual disability after an ankle sprain is instability, along with body sway. Mechanical instability includes insufficient stabilizing structures and mobility that exceeds physiological limits. Functional instability involves recurrent sprains or a feeling of giving way of the ankle. Nearly 40% of patients with ankle sprains suffer from instability and an increase in body sway. Injury to the ankle causes a proprioceptive deficit and impaired postural control. Individuals with muscular weakness, occult instability, and decreased postural control are more susceptible to ankle injury than those with better postural control. Balance can be severely affected in individuals with neurological conditions.
People who suffer a stroke or spinal cord injury, for example, can struggle with this ability. Impaired balance is strongly associated with future function and recovery after a stroke, and is the strongest predictor of falls. Another population in which balance is severely affected is patients with Parkinson's disease. A study by Nardone and Schieppati (2006) showed that balance problems in individuals with Parkinson's disease are related to a reduced limit of stability, an impaired production of anticipatory motor strategies, and abnormal calibration. Balance can also be negatively affected in a normal population through fatigue in the musculature surrounding the ankles, knees, and hips. Studies have found, however, that muscle fatigue around the hips (gluteals and lumbar extensors) and knees has a greater effect on postural stability (sway). It is thought that muscle fatigue leads to a decreased ability to contract with the correct amount of force or accuracy. As a result, proprioception and kinesthetic feedback from joints are altered so that conscious joint awareness may be negatively affected. Balance training Since balance is a key predictor of recovery and is required in many activities of daily living, it is often introduced into treatment plans by physiotherapists and occupational therapists when dealing with geriatrics, patients with neurological conditions, or others for whom balance training has been determined to be beneficial. Balance training in stroke patients has been supported in the literature. Methods commonly used and proven to be effective for this population include sitting or standing balance practice with various progressions including reaching, variations in base of support, use of tilt boards, gait training at varying speeds, and stair climbing exercises. Another method to improve balance is perturbation training, which is an external force applied to a person's center of mass in an attempt to move it from the base of support. The type of training should be determined by a physiotherapist and will depend on the nature and severity of the stroke, stage of recovery, and the patient's abilities and impairments after the stroke. Populations such as the elderly, children with neuromuscular diseases, and those with motor deficits such as chronic ankle instability have all been studied, and balance training has been shown to result in improvements in postural sway and improved "one-legged stance balance" in these groups. The effects of balance training can be measured by varied means, but typical quantitative outcomes are centre of pressure (CoP), postural sway, and static/dynamic balance, which are measured by the subject's ability to maintain a set body position while undergoing some type of instability. Studies have suggested that higher levels of physical activity reduce morbidity and mortality and can reduce the risk of falls by 30% to 50%. Some types of exercise (gait, balance, co-ordination and functional tasks; strengthening exercise; 3D exercise and multiple exercise types) improve clinical balance outcomes in older people, and are seemingly safe. One study showed aerobic exercise combined with resistance exercise to be effective in improving the ability to balance. There is still insufficient evidence supporting general physical activity, computerized balance programs or vibration plates.
Functional balance assessments Functional tests of balance focus on the maintenance of both static and dynamic balance, whether during quiet stance or in response to a perturbation or change of the center of mass. Standardized tests of balance are available to allow allied health care professionals to assess an individual's postural control. Some functional balance tests that are available are: Romberg Test: used to determine proprioceptive contributions to upright balance. The subject remains in quiet standing while eyes are open. If this test is not sufficiently challenging, there is the sharpened Romberg test, in which subjects cross their arms, place their feet together, and close their eyes. This decreases the base of support, raises the subject's center of mass, and prevents them from using their arms to help balance. Functional Reach Test: measures the maximal distance one can reach forward beyond arm's length while maintaining feet planted in a standing position. Berg Balance Scale: measures static and dynamic balance abilities using functional tasks commonly performed in everyday life. One study reports that the Berg Balance Scale is the most commonly used assessment tool throughout stroke rehabilitation, and found it to be a sound measure of balance impairment in patients following a stroke. First published in 1989, it remains in wide use and is often regarded as a gold-standard balance test. Performance-Oriented Mobility Assessment (POMA): measures both static and dynamic balance using tasks testing balance and gait. Timed Up and Go Test: measures dynamic balance and mobility. Balance Efficacy Scale: self-report measure that examines an individual's confidence while performing daily tasks with or without assistance. Star Excursion Test: a dynamic balance test that measures single stance maximal reach in multiple directions. Balance Evaluation Systems Test (BESTest): tests six unique balance control systems to create a specialized rehabilitation protocol by identifying specific balance deficits. The Mini-Balance Evaluation Systems Test (Mini-BESTest): a short form of the Balance Evaluation Systems Test that is used widely in both clinical practice and research. The test is used to assess balance impairments and includes 14 items of dynamic balance tasks, divided into four subcomponents: anticipatory postural adjustments, reactive postural control, sensory orientation and dynamic gait. The Mini-BESTest has been validated mainly in neurological diseases, but also in other conditions. A review of the psychometric properties of the test supports its reliability, validity and responsiveness, and according to the review, it can be considered a standard balance measure. BESS: The BESS (Balance Error Scoring System) is a commonly used way to assess balance. It is known as a simple and affordable way to get an accurate assessment of balance, although the validity of the BESS protocol has been questioned. The BESS is often used in sports settings to assess the effects of mild to moderate head injury on one's postural stability. The BESS tests three separate stances (double leg, single leg, tandem) on two different surfaces (firm surface and medium density foam) for a total of six tests. Each test is 20 seconds long, with the entire assessment taking approximately 5–7 minutes. The first stance is the double leg stance.
The participant is instructed to stand on a firm surface with feet side by side, hands on hips and eyes closed. The second stance is the single leg stance. In this stance the participant is instructed to stand on their non-dominant foot on a firm surface with hands on hips and eyes closed. The third stance is the tandem stance. The participant stands heel to toe on a firm surface with hands on hips and eyes closed. The fourth, fifth, and sixth stances repeat in order stances one, two, and three, except the participant performs these stances on a medium density foam surface. The BESS is scored by an examiner who looks for deviations from the proper stances. A deviation is noted when any of the following occurs in the participant during testing: opening the eyes, removing hands from the hips, stumbling forward or falling, lifting the forefoot or heel off the testing surface, abduction or flexion of the hip beyond 30 degrees, or remaining out of the proper testing position for more than 5 seconds. Concussion (or mild traumatic brain injury) has been associated with imbalance among sports participants and military personnel. Some of the standard balance tests may be too easy or time-consuming for application to these high-functioning groups. Expert recommendations have been gathered concerning balance assessments appropriate to military service-members. Quantitative (computerized) assessments Due to recent technological advances, a growing trend in balance assessments has become the monitoring of the center of pressure (CoP) – the reaction vector of the center of mass on the ground – typically quantified as path length over a specified duration. With quantitative assessments, minimal CoP path length is suggestive of good balance. Laboratory-grade force plates are considered the "gold standard" for measuring CoP. The NeuroCom Balance Manager (NeuroCom, Clackamas, OR, United States) is a commercially available dynamic posturography system that uses computerized software to track CoP during different tasks. These assessments range from the sensory organization test, which examines the different systems that contribute through sensory receptor input, to the limits of stability test, which observes a participant's ankle range of motion, velocity, and reaction time. While the NeuroCom is considered the industry standard for balance assessments, it does come at a steep price (about $250,000). Within the past five years, research has headed toward inexpensive and portable devices capable of measuring CoP accurately. Recently, Nintendo's Wii balance board (Nintendo, Kyoto, Japan) has been validated against a force plate and found to be an accurate tool for measuring CoP. The large price difference ($25 vs. $10,000) makes the Wii balance board a suitable alternative for clinicians wanting to use quantitative balance assessments. Other inexpensive, custom-built force plates are also being integrated into this growing field of research and clinical assessment, which will benefit many populations.
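As a concrete sketch of the path-length measure described above, the snippet below (Python) computes a CoP path length from recorded coordinates, whatever the recording device; the sampling rate, trial length, and synthetic sway signal are illustrative assumptions, not values from any study cited here.

```python
import numpy as np

def cop_path_length(x_cm, y_cm):
    """Total distance travelled by the centre of pressure (CoP).

    x_cm, y_cm: CoP coordinates (cm) sampled over a balance trial,
    e.g. from a force plate or balance board. Over a fixed trial
    duration, a shorter path length suggests better postural control.
    """
    return float(np.sum(np.hypot(np.diff(x_cm), np.diff(y_cm))))

# Illustrative 20-second trial sampled at 100 Hz: low-amplitude
# pseudo-random sway around the origin (synthetic data, not a recording).
rng = np.random.default_rng(0)
t = np.arange(0, 20, 0.01)
x = 0.3 * np.sin(0.7 * t) + 0.05 * rng.standard_normal(t.size)
y = 0.4 * np.sin(0.4 * t) + 0.05 * rng.standard_normal(t.size)
print(f"CoP path length: {cop_path_length(x, y):.1f} cm")
```

Comparing such path lengths across trials of equal duration is what lets the quantitative devices above stand in for subjective scoring.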
Fatigue's effect on balance The complexity of balance allows for many confounding variables to affect a person's ability to stay upright. Fatigue, causing central nervous system (CNS) dysfunction, can indirectly result in the inability to remain upright. This is seen repeatedly in clinical populations (e.g. Parkinson's disease, multiple sclerosis). Another major concern regarding fatigue's effect on balance is in the athletic population. Balance testing has become a standard measure to help diagnose concussions in athletes, but because athletes can be extremely fatigued, it has been hard for clinicians to determine accurately how long athletes need to rest before the fatigue is gone and balance can be measured to determine whether the athlete is concussed. So far, researchers have only been able to estimate that athletes need anywhere from 8–20 minutes of rest before testing balance, a range that can make a large difference depending on the circumstances. Other factors influencing balance Age, gender, and height have all been shown to impact an individual's ability to balance and the assessment of that balance. Typically, older adults have more body sway under all testing conditions. Tests have shown that older adults demonstrate shorter functional reach and larger body sway path lengths. Height also influences body sway in that as height increases, functional reach typically decreases. However, this test is only a measure of anterior and posterior sway. This is done to create a repeatable and reliable clinical balance assessment tool. A 2011 Cochrane Review found that specific types of exercise (such as gait, balance, co-ordination and functional tasks; strengthening exercises; 3D exercises [e.g. Tai Chi] and combinations of these) can help improve balance in older adults. However, there was no or limited evidence on the effectiveness of general physical activities, such as walking and cycling, computer-based balance games and vibration plates. Voluntary control of balance While balance is mostly an automatic process, voluntary control is common. Active control usually takes place when a person is in a situation where balance is compromised. This can have the counter-intuitive effect of increasing postural sway during basic activities such as standing. One explanation for this effect is that conscious control results in over-correcting an instability and "may inadvertently disrupt relatively automatic control processes." Concentration on an external task, by contrast, "promotes the utilization of more automatic control processes." Balance and dual-tasking Supra-postural tasks are those activities that rely on postural control while completing another behavioral goal, such as walking or creating a text message while standing upright. Research has demonstrated that postural stability operates to permit the achievement of other activities. In other words, standing in a stable upright position is not at all beneficial if one falls as soon as any task is attempted. In a healthy individual, it is believed that postural control acts to minimize the amount of effort required (not necessarily to minimize sway), while successfully accomplishing the supra-postural task. Research has shown that spontaneous reductions in postural sway occur in response to the addition of a secondary goal. McNevin and Wulf (2002) found an increase in postural performance when directing an individual's attention externally compared to directing attention internally. That is, focusing attention on the effects of one's movements rather than on the movement itself will boost performance. This results from the use of more automatic and reflexive control processes. When one is focused on their movements (internal focus), they may inadvertently interfere with these automatic processes, decreasing their performance. Externally focusing attention improves postural stability, despite increasing postural sway at times.
It is believed that utilizing automatic control processes by focusing attention externally enhances both performance and learning. Adopting an external focus of attention subsequently improves the performance of supra-postural tasks, while increasing postural stability. References Further reading Biomechanics Physical fitness
Balance (ability)
Physics
3,356
54,953,948
https://en.wikipedia.org/wiki/Type%201%20regulatory%20T%20cell
Type 1 regulatory cells, or Tr1 (TR1) cells, are a class of regulatory T cells participating in peripheral immunity as a subset of CD4+ T cells. Tr1 cells regulate tolerance towards antigens of any origin. Tr1 cells are self or non-self antigen specific and their key role is to induce and maintain peripheral tolerance and suppress tissue inflammation in autoimmunity and graft vs. host disease. Characterization and surface molecules The specific cell-surface markers for Tr1 cells in humans and mice are CD4+ CD49b+ LAG-3+ CD226+, of which LAG-3 and CD49b are indispensable. LAG-3 is a membrane protein on Tr1 cells that negatively regulates TCR-mediated signal transduction in cells. LAG-3 activates dendritic cells (DCs) and enhances the antigen-specific T-cell response, which is necessary for Tr1 cell antigen specificity. CD49b belongs to the integrin family and is a receptor for many (extracellular) matrix and non-matrix molecules. CD49b contributes only little to the differentiation and function of Tr1 cells. They characteristically produce high levels of IL-10, IFN-γ, IL-5 and also TGF-β, but neither IL-4 nor IL-2. Production of IL-10 is also much more rapid than its production by other T-helper cell types. Tr1 cells do not constitutively express FOXP3 but only transiently, upon their activation, and in smaller amounts than CD25+ FOXP3+ regulatory cells. FOXP3 is not required for Tr1 induction, nor for its function. They also express repressor of GATA-3 (ROG), while CD25+ FOXP3+ regulatory cells do not. ROG then downregulates GATA-3, a characteristic transcription factor for Th2 cells. Tr1 cells express high levels of regulatory factors, such as glucocorticoid-induced tumor necrosis factor receptor (GITR), OX40 (CD134), and tumor-necrosis factor receptor (TNFRSF9). Resting human Tr1 cells express the Th1-associated chemokine receptors CXCR3 and CCR5, and the Th2-associated CCR3, CCR4 and CCR8. Upon activation, Tr1 cells migrate preferentially in response to I-309, a ligand for CCR8. Mechanism of Tr1-mediated suppression The suppressing and tolerance-inducing effect of Tr1 cells is mediated mainly by cytokines. Other mechanisms, such as cell-to-cell contact, modulation of dendritic cells, metabolic disruption and cytolysis, are however also available to them. In vivo, Tr1 cells need to be activated to exert their regulatory effects. Mechanisms of suppression Cytokine-mediated: Tr1 cells secrete large amounts of the suppressive cytokines IL-10 and TGF-β. IL-10 directly inhibits T cells by blocking their production of IL-2, IFN-γ and GM-CSF, has a tolerogenic effect on B cells, and supports the differentiation of other regulatory T cells. IL-10 indirectly downregulates MHC II molecules and co-stimulatory molecules on antigen-presenting cells (APC) and forces them to upregulate tolerogenic molecules such as ILT-3, ILT-4 and HLA-G. Cell-to-cell contact: Type 1 regulatory T cells possess the inhibitory receptor CTLA-4, through which they exert their suppressor function. Metabolic disruption: Tr1 cells can express the ectoenzymes CD39 and CD73 and are suspected of generating adenosine, which suppresses effector T cell proliferation and their cytokine production in vitro. Cytolytic activity: Tr1 cells can express both granzyme A and granzyme B. It was shown recently that Tr1 cells, in vitro and also ex vivo, specifically lyse cells of myeloid origin, but not other APC or T or B lymphocytes. Cytolysis indirectly suppresses the immune response by reducing the number of myeloid-origin antigen presenting cells.
Differentiation Tr1 cells are inducible, arising from naive precursor T cells. They can be differentiated ex vivo and in vivo. The ways of inducing Tr1 cells in vivo, ex vivo and in vitro differ and involve many different approaches, but the molecular mechanism appears to be conserved. IL-27, together with TGF-β, induces IL-10-producing regulatory T cells with Tr1-like properties. IL-27 alone can induce IL-10-producing Tr1 cells, but in the absence of TGF-β, the cells produce large quantities of both IFN-γ and IL-10. IL-6 and IL-21 also play a role in differentiation, as they regulate expression of transcription factors necessary for IL-10 production, which is believed to start the differentiation itself later on. Proposed transcription biomarkers for type 1 regulatory cell differentiation are: musculoaponeurotic fibrosarcoma (c-Maf), the aryl hydrocarbon receptor (AhR), interferon regulatory factor 4 (IRF4), the repressor of GATA-3 (ROG), and early growth response protein 2 (Egr-2). Expression of these transcription factors is driven by IL-6 in an IL-21- and IL-2-dependent manner. Clinical manifestation and application Tr1 cells possess huge clinical potential as a means to prevent, block and even cure several T cell-mediated diseases, including GvHD, allograft rejection, autoimmunity and chronic inflammatory diseases. The first successful tests were performed on mouse models and on humans as well. Transplantation research has shown that donor Tr1 cells responding to recipient alloantigens correlate with the absence of GvHD after bone marrow transplantation, while decreased numbers of Tr1 cells are markedly associated with severe GvHD. Decreased levels of IL-10-producing CD4+ cells were also observed in the inflamed synovium and peripheral blood of patients with rheumatoid arthritis. Phase I/II clinical trials of Tr1 cell treatment for Crohn's disease have been successful; the treatment appears to be safe and does not lead to general immune suppression. References Cell biology Immunology Immune system
Type 1 regulatory T cell
Biology
1,341
184,726
https://en.wikipedia.org/wiki/Heat%20transfer
Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system. Heat conduction, also called diffusion, is the direct microscopic exchange of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics. Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection"; in this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means. Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws. Overview Heat transfer is the energy exchanged between materials (solid/liquid/gas) as a result of a temperature difference. The thermodynamic free energy is the amount of work that a thermodynamic system can perform. Enthalpy is a thermodynamic potential, designated by the letter "H", that is the sum of the internal energy of the system (U) plus the product of pressure (P) and volume (V). The joule is a unit used to quantify energy, work, or the amount of heat. Heat transfer is a process function (or path function), as opposed to a function of state; therefore, the amount of heat transferred in a thermodynamic process that changes the state of a system depends on how that process occurs, not only on the net difference between the initial and final states of the process. Thermodynamic and mechanical heat transfer is calculated with the heat transfer coefficient, the proportionality between the heat flux and the thermodynamic driving force for the flow of heat. Heat flux is a quantitative, vectorial representation of heat flow through a surface. In engineering contexts, the term heat is taken as synonymous with thermal energy. This usage has its origin in the historical interpretation of heat as a fluid (caloric) that can be transferred by various causes, an interpretation that is also common in the language of laymen and everyday life.
The transport equations for thermal energy (Fourier's law), mechanical momentum (Newton's law for fluids), and mass transfer (Fick's laws of diffusion) are similar, and analogies among these three transport processes have been developed to facilitate the prediction of conversion from any one to the others. Thermal engineering concerns the generation, use, conversion, storage, and exchange of heat. As such, heat transfer is involved in almost every sector of the economy. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Mechanisms The fundamental modes of heat transfer are: Advection Advection is the transport mechanism of a fluid from one location to another, and is dependent on the motion and momentum of that fluid. Conduction or diffusion The transfer of energy between objects that are in physical contact. Thermal conductivity is the property of a material to conduct heat and is evaluated primarily in terms of Fourier's law for heat conduction. Convection The transfer of energy between an object and its environment, due to fluid motion. The average temperature is a reference for evaluating properties related to convective heat transfer. Radiation The transfer of energy by the emission of electromagnetic radiation. Advection By transferring matter, energy—including thermal energy—is moved by the physical transfer of a hot or cold object from one place to another. This can be as simple as placing hot water in a bottle and heating a bed, or the movement of an iceberg in changing ocean currents. A practical example is thermal hydraulics. This can be described by the formula $\phi_q = v \rho c_p \Delta T$, where $\phi_q$ is the heat flux (W/m2), $\rho$ is the density (kg/m3), $c_p$ is the heat capacity at constant pressure (J/kg·K), $\Delta T$ is the difference in temperature (K), and $v$ is the velocity (m/s). Conduction On a microscopic scale, heat conduction occurs as hot, rapidly moving or vibrating atoms and molecules interact with neighboring atoms and molecules, transferring some of their energy (heat) to these neighboring particles. In other words, heat is transferred by conduction when adjacent atoms vibrate against one another, or as electrons move from one atom to another. Conduction is the most significant means of heat transfer within a solid or between solid objects in thermal contact. Fluids—especially gases—are less conductive. Thermal contact conductance is the study of heat conduction between solid bodies in contact. The process of heat transfer from one place to another place without the movement of particles is called conduction, such as when placing a hand on a cold glass of water—heat is conducted from the warm skin to the cold glass, but if the hand is held a few inches from the glass, little conduction would occur since air is a poor conductor of heat. Steady-state conduction is an idealized model of conduction that happens when the temperature difference driving the conduction is constant, so that after a time, the spatial distribution of temperatures in the conducting object does not change any further (see Fourier's law). In steady-state conduction, the amount of heat entering a section is equal to the amount of heat coming out, since the temperature change (a measure of heat energy) is zero.
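The advection formula just reconstructed, and Fourier's law for steady-state conduction through a plane layer, are simple enough to evaluate directly. Below is a minimal sketch in Python; the function names and the round property values (water, brick) are illustrative assumptions, not figures from the text. The house-wall example discussed next is exactly the plane-wall case in the second function.

```python
def advective_heat_flux(v, rho, c_p, delta_T):
    """Advective heat flux, phi_q = v * rho * c_p * dT, in W/m2."""
    return v * rho * c_p * delta_T

def plane_wall_conduction_flux(k, delta_T, thickness):
    """Steady-state conduction through a plane layer (Fourier's law):
    phi_q = k * dT / L, in W/m2."""
    return k * delta_T / thickness

# Water moving at 1 m/s while carrying a 10 K temperature difference
# (rho ~ 1000 kg/m3, c_p ~ 4186 J/(kg*K)): an enormous flux, which is
# why advection (e.g. thermal hydraulics) moves heat so effectively.
print(advective_heat_flux(v=1.0, rho=1000.0, c_p=4186.0, delta_T=10.0))  # ~4.2e7 W/m2

# Conduction through a 10 cm brick wall (k ~ 0.7 W/(m*K)) with 20 K across it:
print(plane_wall_conduction_flux(k=0.7, delta_T=20.0, thickness=0.10))   # 140.0 W/m2
```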
An example of steady-state conduction is the heat flow through the walls of a warm house on a cold day—inside, the house is maintained at a high temperature and, outside, the temperature stays low, so the transfer of heat per unit time stays near a constant rate determined by the insulation in the wall, and the spatial distribution of temperature in the walls will be approximately constant over time. Transient conduction (see Heat equation) occurs when the temperature within an object changes as a function of time. Analysis of transient systems is more complex, and analytic solutions of the heat equation are only valid for idealized model systems. Practical applications are generally investigated using numerical methods, approximation techniques, or empirical study. Convection The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". All convective processes also move heat partly by diffusion. Another form of convection is forced convection. In this case, the fluid is forced to flow by using a pump, fan, or other mechanical means. Convective heat transfer, or simply convection, is the transfer of heat from one place to another by the movement of fluids, a process that is essentially the transfer of heat via mass transfer. The bulk motion of fluid enhances heat transfer in many physical situations, such as between a solid surface and the fluid. Convection is usually the dominant form of heat transfer in liquids and gases. Although sometimes discussed as a third method of heat transfer, convection is usually used to describe the combined effects of heat conduction within the fluid (diffusion) and heat transference by bulk fluid flow streaming. The process of transport by fluid streaming is known as advection, but pure advection is a term that is generally associated only with mass transport in fluids, such as advection of pebbles in a river. In the case of heat transfer in fluids, where transport by advection in a fluid is always also accompanied by transport via heat diffusion (also known as heat conduction), the process of heat convection is understood to refer to the sum of heat transport by advection and diffusion/conduction. Free, or natural, convection occurs when bulk fluid motions (streams and currents) are caused by buoyancy forces that result from density variations due to variations of temperature in the fluid. Forced convection is a term used when the streams and currents in the fluid are induced by external means—such as fans, stirrers, and pumps—creating an artificially induced convection current. Convection-cooling Convective cooling is sometimes described as Newton's law of cooling: the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings. However, by definition, the validity of Newton's law of cooling requires that the rate of heat loss from convection be a linear function of ("proportional to") the temperature difference that drives heat transfer, and in convective cooling this is sometimes not the case. In general, convection is not linearly dependent on temperature gradients, and in some cases is strongly nonlinear. In these cases, Newton's law does not apply. Convection vs. conduction In a body of fluid that is heated from underneath its container, conduction and convection can be considered to compete for dominance.
If heat conduction is too great, fluid moving down by convection is heated by conduction so fast that its downward movement will be stopped due to its buoyancy, while fluid moving up by convection is cooled by conduction so fast that its driving buoyancy will diminish. On the other hand, if heat conduction is very low, a large temperature gradient may be formed and convection might be very strong. The Rayleigh number ($\mathrm{Ra}$) is the product of the Grashof ($\mathrm{Gr}$) and Prandtl ($\mathrm{Pr}$) numbers. It is a measure that determines the relative strength of conduction and convection: $\mathrm{Ra} = \mathrm{Gr} \cdot \mathrm{Pr} = \frac{g \Delta\rho L^3}{\mu \alpha} \approx \frac{g \beta \Delta T L^3}{\nu \alpha}$, where g is the acceleration due to gravity, ρ is the density, with $\Delta\rho$ being the density difference between the lower and upper ends, μ is the dynamic viscosity, α is the thermal diffusivity, β is the volume thermal expansivity (sometimes denoted α elsewhere), T is the temperature, ν is the kinematic viscosity, and L is the characteristic length. The Rayleigh number can be understood as the ratio between the rate of heat transfer by convection to the rate of heat transfer by conduction; or, equivalently, the ratio between the corresponding timescales (i.e. conduction timescale divided by convection timescale), up to a numerical factor. This can be seen as follows, where all calculations are up to numerical factors depending on the geometry of the system. The buoyancy force driving the convection is roughly $g \Delta\rho L^3$, so the corresponding pressure is roughly $g \Delta\rho L$. In steady state, this is canceled by the shear stress due to viscosity, and therefore roughly equals $\mu V / L = \mu / T_{\mathrm{conv}}$, where V is the typical fluid velocity due to convection and $T_{\mathrm{conv}} = L / V$ the order of its timescale. The conduction timescale, on the other hand, is of the order of $T_{\mathrm{cond}} = L^2 / \alpha$. Convection occurs when the Rayleigh number is above 1,000–2,000. Radiation Radiative heat transfer is the transfer of energy via thermal radiation, i.e., electromagnetic waves. It occurs across vacuum or any transparent medium (solid or fluid or gas). Thermal radiation is emitted by all objects at temperatures above absolute zero, due to random movements of atoms and molecules in matter. Since these atoms and molecules are composed of charged particles (protons and electrons), their movement results in the emission of electromagnetic radiation which carries away energy. Radiation is typically only important in engineering applications for very hot objects, or for objects with a large temperature difference. When the objects and the distances separating them are large in size compared to the wavelength of thermal radiation, the rate of transfer of radiant energy is best described by the Stefan–Boltzmann equation. For an object in vacuum, the equation is $\phi_q = \epsilon \sigma T^4$. For radiative transfer between two objects, the equation is as follows: $\phi_q = \epsilon \sigma F_{ab} (T_a^4 - T_b^4)$, where $\phi_q$ is the heat flux, ε is the emissivity (unity for a black body), σ is the Stefan–Boltzmann constant, $F_{ab}$ is the view factor between two surfaces a and b, and $T_a$ and $T_b$ are the absolute temperatures (in kelvins or degrees Rankine) for the two objects. The blackbody limit established by the Stefan–Boltzmann equation can be exceeded when the objects exchanging thermal radiation or the distances separating them are comparable in scale or smaller than the dominant thermal wavelength. The study of these cases is called near-field radiative heat transfer.
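Both formulas above are straightforward to evaluate. The sketch below, in Python, is illustrative only: the air properties near room temperature and the radiating-panel scenario are assumptions chosen to show plausible magnitudes, not values from the text.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m2*K^4)

def rayleigh_number(g, beta, delta_T, L, nu, alpha):
    """Ra = g * beta * dT * L^3 / (nu * alpha); convection is expected,
    roughly, when Ra exceeds ~1,000-2,000 (geometry-dependent)."""
    return g * beta * delta_T * L**3 / (nu * alpha)

def radiative_flux(emissivity, view_factor, T_a, T_b):
    """Net radiative flux between surfaces a and b, in the grey-body form
    given above: phi_q = eps * sigma * F_ab * (T_a^4 - T_b^4), W/m2."""
    return emissivity * SIGMA * view_factor * (T_a**4 - T_b**4)

# A 1 cm air layer heated 5 K from below (air near 300 K: beta ~ 1/300 1/K,
# nu ~ 1.5e-5 m2/s, alpha ~ 2.2e-5 m2/s) gives Ra ~ 500, below the threshold,
# so heat crosses such a thin layer mainly by conduction, not convection.
Ra = rayleigh_number(g=9.81, beta=1/300, delta_T=5.0, L=0.01, nu=1.5e-5, alpha=2.2e-5)
print(f"Ra = {Ra:.0f}")

# A panel at 330 K facing surroundings at 295 K (eps = 0.9, F_ab = 1):
print(f"{radiative_flux(0.9, 1.0, 330.0, 295.0):.0f} W/m2")  # ~219 W/m2
```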
Radiation from the sun, or solar radiation, can be harvested for heat and power. Unlike conductive and convective forms of heat transfer, thermal radiation – arriving within a narrow angle, i.e. coming from a source much smaller than its distance – can be concentrated in a small spot by using reflecting mirrors, which is exploited in concentrating solar power generation or a burning glass. For example, the sunlight reflected from mirrors heats the PS10 solar power tower, and during the day it can heat water to high temperatures. The temperature reachable at the target is limited by the temperature of the hot source of radiation (by the $T^4$ law, the reverse flow of radiation back to the source rises as the target heats up). The Sun, with a surface temperature of roughly 4000 K, allows temperatures of roughly 3000 °C (about 3273 K) to be reached at a small probe in the focal spot of the large concave concentrating mirror of the Mont-Louis Solar Furnace in France. Phase transition Phase transition, or phase change, takes place in a thermodynamic system from one phase or state of matter to another by heat transfer. Phase change examples are the melting of ice or the boiling of water. The Mason equation explains the growth of a water droplet based on the effects of heat transport on evaporation and condensation. Phase transitions involve the four fundamental states of matter: Solid – Deposition, freezing, and solid-to-solid transformation. Liquid – Condensation and melting / fusion. Gas – Boiling / evaporation, recombination / deionization, and sublimation. Plasma – Ionization. Boiling The boiling point of a substance is the temperature at which the vapor pressure of the liquid equals the pressure surrounding the liquid and the liquid evaporates, resulting in an abrupt change in vapor volume. In a closed system, saturation temperature and boiling point mean the same thing. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition. At standard atmospheric pressure and low temperatures, no boiling occurs and the heat transfer rate is controlled by the usual single-phase mechanisms. As the surface temperature is increased, local boiling occurs and vapor bubbles nucleate, grow into the surrounding cooler fluid, and collapse. This is sub-cooled nucleate boiling, and is a very efficient heat transfer mechanism. At high bubble generation rates, the bubbles begin to interfere and the heat flux no longer increases rapidly with surface temperature (this is the departure from nucleate boiling, or DNB). At similar standard atmospheric pressure and high temperatures, the hydrodynamically quieter regime of film boiling is reached. Heat fluxes across the stable vapor layers are low but rise slowly with temperature. Any contact between the fluid and the surface that may be seen probably leads to the extremely rapid nucleation of a fresh vapor layer ("spontaneous nucleation"). At higher temperatures still, a maximum in the heat flux is reached (the critical heat flux, or CHF). The Leidenfrost effect demonstrates how nucleate boiling slows heat transfer due to gas bubbles on the heater's surface. As mentioned, gas-phase thermal conductivity is much lower than liquid-phase thermal conductivity, so the outcome is a kind of "gas thermal barrier". Condensation Condensation occurs when a vapor is cooled and changes its phase to a liquid. During condensation, the latent heat of vaporization must be released. The amount of heat is the same as that absorbed during vaporization at the same fluid pressure.
There are several types of condensation: Homogeneous condensation, as during the formation of fog. Condensation in direct contact with subcooled liquid. Condensation on direct contact with a cooling wall of a heat exchanger: this is the most common mode used in industry. Dropwise condensation is difficult to sustain reliably; therefore, industrial equipment is normally designed to operate in filmwise condensation mode. Melting Melting is a thermal process that results in the phase transition of a substance from a solid to a liquid. The internal energy of a substance is increased, typically through heat or pressure, resulting in a rise of its temperature to the melting point, at which the ordering of ionic or molecular entities in the solid breaks down to a less ordered state and the solid liquefies. Molten substances generally have reduced viscosity with elevated temperature; an exception to this maxim is the element sulfur, whose viscosity increases to a point due to polymerization and then decreases with higher temperatures in its molten state. Modeling approaches Heat transfer can be modeled in various ways. Heat equation The heat equation is an important partial differential equation that describes the distribution of heat (or temperature variation) in a given region over time. In some cases, exact solutions of the equation are available; in other cases the equation must be solved numerically using computational methods such as DEM-based models for thermal/reacting particulate systems (as critically reviewed by Peng et al.). Lumped system analysis Lumped system analysis often reduces the complexity of the equations to one first-order linear differential equation, in which case heating and cooling are described by a simple exponential solution, often referred to as Newton's law of cooling. System analysis by the lumped capacitance model is a common approximation in transient conduction that may be used whenever heat conduction within an object is much faster than heat conduction across the boundary of the object. This is a method of approximation that reduces one aspect of the transient conduction system—that within the object—to an equivalent steady-state system. That is, the method assumes that the temperature within the object is completely uniform, although its value may change over time. In this method, the ratio of the conductive heat resistance within the object to the convective heat transfer resistance across the object's boundary, known as the Biot number, is calculated. For small Biot numbers, the approximation of spatially uniform temperature within the object can be used: it can be presumed that heat transferred into the object has time to distribute itself uniformly, due to the lower resistance to doing so, as compared with the resistance to heat entering the object. Climate models Climate models study radiant heat transfer by using quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice. Engineering Heat transfer has broad application to the functioning of numerous devices and systems. Heat-transfer principles may be used to preserve, increase, or decrease temperature in a wide variety of circumstances. Heat transfer methods are used in numerous disciplines, such as automotive engineering, thermal management of electronic devices and systems, climate control, insulation, materials processing, chemical engineering and power station engineering.
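A minimal sketch of the lumped-capacitance check just described, in Python; the steel-sphere properties and the convection coefficient are rough illustrative assumptions, not values from the text. It computes the Biot number and, finding it small, applies the exponential Newton's-law-of-cooling solution mentioned above.

```python
import math

def biot_number(h, L_c, k):
    """Bi = h * L_c / k: conductive resistance inside the object
    relative to the convective resistance at its surface."""
    return h * L_c / k

def lumped_temperature(T0, T_env, h, A, rho, c_p, V, t):
    """Newton's-law-of-cooling exponential for a lumped body:
    T(t) = T_env + (T0 - T_env) * exp(-h*A*t / (rho*c_p*V))."""
    tau = rho * c_p * V / (h * A)  # thermal time constant, s
    return T_env + (T0 - T_env) * math.exp(-t / tau)

# Steel sphere of radius 1 cm: k ~ 45 W/(m*K), rho ~ 7800 kg/m3,
# c_p ~ 490 J/(kg*K), cooled in air with h ~ 20 W/(m2*K).
# Characteristic length L_c = V/A = r/3 for a sphere.
r = 0.01
V, A = (4 / 3) * math.pi * r**3, 4 * math.pi * r**2
Bi = biot_number(h=20.0, L_c=r / 3, k=45.0)
print(f"Bi = {Bi:.4f}")  # << 0.1, so the lumped model is reasonable
print(f"T after 60 s: {lumped_temperature(200.0, 25.0, 20.0, A, 7800.0, 490.0, V, 60.0):.1f} C")
```

With these numbers the Biot number is about 0.0015, far below the usual ~0.1 rule of thumb, so treating the sphere's temperature as spatially uniform is justified.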
Insulation, radiance and resistance Thermal insulators are materials specifically designed to reduce the flow of heat by limiting conduction, convection, or both. Thermal resistance is a thermal property that measures how much an object or material resists heat flow (heat per unit time) for a given temperature difference. Radiance, or spectral radiance, is a measure of the quantity of radiation that passes through or is emitted from a surface. Radiant barriers are materials that reflect radiation, and therefore reduce the flow of heat from radiation sources. Good insulators are not necessarily good radiant barriers, and vice versa. Metal, for instance, is an excellent reflector and a poor insulator. The effectiveness of a radiant barrier is indicated by its reflectivity, which is the fraction of radiation reflected. A material with a high reflectivity (at a given wavelength) has a low emissivity (at that same wavelength), and vice versa. At any specific wavelength, reflectivity = 1 − emissivity. An ideal radiant barrier would have a reflectivity of 1, and would therefore reflect 100 percent of incoming radiation. Vacuum flasks, or Dewars, are silvered to approach this ideal. In the vacuum of space, satellites use multi-layer insulation, which consists of many layers of aluminized (shiny) Mylar to greatly reduce radiation heat transfer and control satellite temperature. Devices A heat engine is a system that performs the conversion of a flow of thermal energy (heat) to mechanical energy to perform mechanical work. A thermocouple is a temperature-measuring device and a widely used type of temperature sensor for measurement and control, and can also be used to convert heat into electric power. A thermoelectric cooler is a solid-state electronic device that pumps (transfers) heat from one side of the device to the other when an electric current is passed through it. It is based on the Peltier effect. A thermal diode or thermal rectifier is a device that causes heat to flow preferentially in one direction. Heat exchangers A heat exchanger is used for more efficient heat transfer or to dissipate heat. Heat exchangers are widely used in refrigeration, air conditioning, space heating, power generation, and chemical processing. One common example of a heat exchanger is a car's radiator, in which the hot coolant fluid is cooled by the flow of air over the radiator's surface. Common types of heat exchanger flows include parallel flow, counter flow, and cross flow. In parallel flow, both fluids move in the same direction while transferring heat; in counter flow, the fluids move in opposite directions; and in cross flow, the fluids move at right angles to each other. Common types of heat exchangers include shell and tube, double pipe, extruded finned pipe, spiral fin pipe, u-tube, and stacked plate. Each type has certain advantages and disadvantages over other types. A heat sink is a component that transfers heat generated within a solid material to a fluid medium, such as air or a liquid. Examples of heat sinks are the heat exchangers used in refrigeration and air conditioning systems or the radiator in a car. A heat pipe is another heat-transfer device that combines thermal conductivity and phase transition to efficiently transfer heat between two solid interfaces.
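Returning to the flow arrangements described under "Heat exchangers" above, the standard effectiveness–NTU relations (textbook results, not taken from this text; the NTU and capacity-ratio values below are illustrative assumptions) show numerically why counter flow generally outperforms parallel flow:

```python
import math

def effectiveness(ntu, c_r, arrangement):
    """Heat-exchanger effectiveness for the two arrangements named above.

    ntu: number of transfer units, U*A / C_min.
    c_r: capacity-rate ratio C_min / C_max (0 <= c_r <= 1).
    Standard epsilon-NTU results for parallel flow and counter flow.
    """
    if arrangement == "parallel":
        return (1 - math.exp(-ntu * (1 + c_r))) / (1 + c_r)
    if arrangement == "counter":
        if math.isclose(c_r, 1.0):
            return ntu / (1 + ntu)  # limiting case C_min = C_max
        e = math.exp(-ntu * (1 - c_r))
        return (1 - e) / (1 - c_r * e)
    raise ValueError(arrangement)

# Same exchanger size (NTU = 2) and nearly balanced streams (c_r = 0.95):
for arr in ("parallel", "counter"):
    print(arr, round(effectiveness(2.0, 0.95, arr), 3))
```

At the same exchanger size and nearly balanced streams, the counter-flow arrangement transfers a visibly larger fraction of the thermodynamic maximum (about 0.68 versus 0.50 here), which is one reason counter flow is usually preferred when compactness matters.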
An energy audit can help to assess the implementation of recommended corrective procedures, such as insulation improvements, air sealing of structural leaks, or the addition of energy-efficient windows and doors. A smart meter is a device that records electric energy consumption in intervals. Thermal transmittance is the rate of transfer of heat through a structure divided by the difference in temperature across the structure. It is expressed in watts per square meter per kelvin, or W/(m2K). Well-insulated parts of a building have a low thermal transmittance, whereas poorly insulated parts of a building have a high thermal transmittance. A thermostat is a device to monitor and control temperature. Climate engineering Climate engineering consists of carbon dioxide removal and solar radiation management. Since the amount of carbon dioxide determines the radiative balance of Earth's atmosphere, carbon dioxide removal techniques can be applied to reduce the radiative forcing. Solar radiation management is the attempt to absorb less solar radiation to offset the effects of greenhouse gases. An alternative method is passive daytime radiative cooling, which enhances terrestrial heat flow to outer space through the infrared window (8–13 μm). Rather than merely blocking solar radiation, this method increases outgoing longwave infrared (LWIR) thermal radiation heat transfer to the extremely cold temperature of outer space (~2.7 K), lowering ambient temperatures while requiring zero energy input. Greenhouse effect The greenhouse effect is a process by which thermal radiation from a planetary surface is absorbed by atmospheric greenhouse gases and clouds, and is re-radiated in all directions, resulting in a reduction in the amount of thermal radiation reaching space relative to what would reach space in the absence of absorbing materials. This reduction in outgoing radiation leads to a rise in the temperature of the surface and troposphere until the rate of outgoing radiation again equals the rate at which heat arrives from the Sun. Heat transfer in the human body The principles of heat transfer in engineering systems can be applied to the human body to determine how the body transfers heat. Heat is produced in the body by the continuous metabolism of nutrients, which provides energy for the systems of the body. The human body must maintain a consistent internal temperature to maintain healthy bodily functions. Therefore, excess heat must be dissipated from the body to keep it from overheating. When a person engages in elevated levels of physical activity, the body requires additional fuel, which increases the metabolic rate and the rate of heat production. The body must then use additional methods to remove the additional heat produced in order to keep the internal temperature at a healthy level. Heat transfer by convection is driven by the movement of fluids over the surface of the body. This convective fluid can be either a liquid or a gas. For heat transfer from the outer surface of the body, the convection mechanism is dependent on the surface area of the body, the velocity of the air, and the temperature gradient between the surface of the skin and the ambient air (a rough numerical sketch follows at the end of this section). The normal temperature of the body is approximately 37 °C. Heat transfer occurs more readily when the temperature of the surroundings is significantly less than the normal body temperature. This explains why a person feels cold when insufficient covering is worn in a cold environment.
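A rough estimate of the convective loss just described, using the standard relation Q = h·A·(T_skin − T_air). The surface area, convective coefficient, and temperatures below are illustrative assumptions for a resting adult in calm indoor air.

```python
# Convective heat loss from the body surface (illustrative values only).
A_skin = 1.8    # m^2, assumed adult body surface area
h      = 5.0    # W/(m^2*K), assumed coefficient for calm indoor air
T_skin = 33.0   # C, assumed mean skin surface temperature
T_air  = 20.0   # C, ambient air temperature

Q_conv = h * A_skin * (T_skin - T_air)
print(f"Convective heat loss ~ {Q_conv:.0f} W")   # 5 * 1.8 * 13 = 117 W
```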
Clothing can be considered an insulator which provides thermal resistance to heat flow over the covered portion of the body. This thermal resistance causes the temperature on the surface of the clothing to be less than the temperature on the surface of the skin. This smaller temperature gradient between the surface temperature and the ambient temperature will cause a lower rate of heat transfer than if the skin were not covered. To ensure that one portion of the body is not significantly hotter than another portion, heat must be distributed evenly through the bodily tissues. Blood flowing through blood vessels acts as a convective fluid and helps to prevent any buildup of excess heat inside the tissues of the body. This flow of blood through the vessels can be modeled as pipe flow in an engineering system. The heat carried by the blood is determined by the temperature of the surrounding tissue, the diameter of the blood vessel, the viscosity of the fluid, the velocity of the flow, and the heat transfer coefficient of the blood. The flow velocity, vessel diameter, and fluid viscosity can all be related through the Reynolds number, a dimensionless number used in fluid mechanics to characterize the flow of fluids. Latent heat loss, also known as evaporative heat loss, accounts for a large fraction of heat loss from the body. When the core temperature of the body increases, the body triggers sweat glands in the skin to bring additional moisture to the surface of the skin. The liquid is then transformed into vapor, which removes heat from the surface of the body. The rate of evaporative heat loss is directly related to the vapor pressure at the skin surface and the amount of moisture present on the skin. Therefore, maximum heat transfer occurs when the skin is completely wet. The body continuously loses water by evaporation, but the most significant amount of heat loss occurs during periods of increased physical activity. Cooling techniques Evaporative cooling Evaporative cooling happens when water vapor is added to the surrounding air. The energy needed to evaporate the water is taken from the air in the form of sensible heat and converted into latent heat, while the air remains at a constant enthalpy. Latent heat describes the amount of heat that is needed to evaporate the liquid; this heat comes from the liquid itself and the surrounding gas and surfaces. The greater the difference between the two temperatures (the dry-bulb and wet-bulb temperatures), the greater the evaporative cooling effect. When the temperatures are the same, no net evaporation of water into the air occurs; thus, there is no cooling effect. Laser cooling In quantum physics, laser cooling is used to achieve temperatures of near absolute zero (−273.15 °C, −459.67 °F) in atomic and molecular samples, to observe unique quantum effects that can only occur at such low temperatures. Doppler cooling is the most common method of laser cooling. Sympathetic cooling is a process in which particles of one type cool particles of another type. Typically, atomic ions that can be directly laser-cooled are used to cool nearby ions or atoms. This technique allows the cooling of ions and atoms that cannot be laser-cooled directly. Magnetic cooling Magnetic evaporative cooling is a process for lowering the temperature of a group of atoms after they have been pre-cooled by methods such as laser cooling. Magnetic refrigeration cools below 0.3 K by making use of the magnetocaloric effect. Radiative cooling Radiative cooling is the process by which a body loses heat by radiation.
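The sketch below quantifies radiative cooling with the Stefan–Boltzmann law and compares it with the linearized ("Newtonian") form; as the History section below notes, Newton's law only approximates radiation for small temperature differences. Emissivity, area, and the surroundings' temperature are assumed values.

```python
# Full Stefan-Boltzmann radiative exchange vs. its linearization about the
# surroundings' temperature. Emissivity and area are assumptions.
sigma = 5.670e-8    # Stefan-Boltzmann constant, W/(m^2*K^4)
eps, A = 0.9, 1.0   # assumed emissivity and radiating area
Ts = 300.0          # surroundings' temperature, K

for dT in (1.0, 10.0, 100.0):
    T = Ts + dT
    exact  = eps * sigma * A * (T**4 - Ts**4)       # full radiative exchange
    linear = 4.0 * eps * sigma * A * Ts**3 * dT     # Newtonian approximation
    err = 100.0 * (linear / exact - 1.0)
    print(f"dT = {dT:5.0f} K: exact = {exact:8.1f} W, "
          f"linear = {linear:8.1f} W, error = {err:6.1f}%")
```

With these numbers the linear form is good to about half a percent at a 1 K difference but is off by nearly 40 percent at a 100 K difference.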
Outgoing energy is an important effect in the Earth's energy budget. In the case of the Earth-atmosphere system, it refers to the process by which long-wave (infrared) radiation is emitted to balance the absorption of short-wave (visible) energy from the Sun. The thermosphere (top of atmosphere) cools to space primarily by infrared energy radiated by carbon dioxide (CO2) at 15 μm and by nitric oxide (NO) at 5.3 μm. Convective transport of heat and evaporative transport of latent heat both remove heat from the surface and redistribute it in the atmosphere. Thermal energy storage Thermal energy storage includes technologies for collecting and storing energy for later use. It may be employed to balance energy demand between day and nighttime. The thermal reservoir may be maintained at a temperature above or below that of the ambient environment. Applications include space heating, domestic or process hot water systems, or generating electricity. History Newton's law of cooling In 1701, Isaac Newton anonymously published an article in Philosophical Transactions noting (in modern terms) that the rate of temperature change of a body is proportional to the difference in temperatures ("degrees of heat") between the body and its surroundings. The phrase "temperature change" was later replaced with "heat loss", and the relationship was named Newton's law of cooling. In general, the law is valid only if the temperature difference is small and the heat transfer mechanism remains the same. Thermal conduction In heat conduction, the law is valid only if the thermal conductivity of the warmer body is independent of temperature. The thermal conductivity of most materials is only weakly dependent on temperature, so in general the law holds true. Thermal convection In convective heat transfer, the law is valid for forced air or pumped fluid cooling, where the properties of the fluid do not vary strongly with temperature, but it is only approximately true for buoyancy-driven convection, where the velocity of the flow increases with the temperature difference. Thermal radiation In the case of heat transfer by thermal radiation, Newton's law of cooling holds only for very small temperature differences (as the numerical sketch above illustrates). Thermal conductivity of different metals In a 1780 letter to Benjamin Franklin, Dutch-born British scientist Jan Ingenhousz relates an experiment which enabled him to rank seven different metals according to their thermal conductivities. Benjamin Thompson's experiments on heat transfer During the years 1784–1798, the British physicist Benjamin Thompson (Count Rumford) lived in Bavaria, reorganizing the Bavarian army for the Prince-elector Charles Theodore among other official and charitable duties. The Elector gave Thompson access to the facilities of the Electoral Academy of Sciences in Mannheim. During his years in Mannheim and later in Munich, Thompson made a large number of discoveries and inventions related to heat. Conductivity experiments "New Experiments upon Heat" In 1785 Thompson performed a series of thermal conductivity experiments, which he describes in great detail in the Philosophical Transactions article "New Experiments upon Heat" from 1786. The fact that good electrical conductors are often also good heat conductors and vice versa must have been well known at the time, for Thompson mentions it in passing.
He intended to measure the relative conductivities of mercury, water, moist air, "common air" (dry air at normal atmospheric pressure), dry air of various rarefaction, and a "Torricellian vacuum". For these experiments, Thompson employed a thermometer inside a large, closed glass tube. Under the circumstances described, heat may, unbeknownst to Thompson, have been transferred more by radiation than by conduction. After the experiments, Thompson was surprised to observe that a vacuum was a significantly poorer heat conductor than air, "which of itself is reckoned among the worst", but found only a very small difference between common air and rarefied air. He also noted the great difference between dry air and moist air, and the great benefit this affords. Temperature vs. sensible heat Thompson concluded with some comments on the important difference between temperature and sensible heat. Coining of the term "convection" In the 1830s, in The Bridgewater Treatises, the term convection is attested in a scientific sense. In treatise VIII by William Prout, in the book on chemistry, it says: This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation. If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction. Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection, [in footnote: [Latin] Convectio, a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms. Later, in the same treatise VIII, in the book on meteorology, the concept of convection is also applied to "the process by which heat is communicated through water". See also Combined forced and natural convection Heat capacity Heat transfer enhancement Heat transfer physics Stefan–Boltzmann law Thermal contact conductance Thermal physics Thermal resistance Citations References External links A Heat Transfer Textbook - (free download). Thermal-FluidsPedia - An online thermal fluids encyclopedia. Hyperphysics Article on Heat Transfer - Overview Interseasonal Heat Transfer - a practical example of how heat transfer is used to heat buildings without burning fossil fuels. Aspects of Heat Transfer, Cambridge University Thermal-Fluids Central Energy2D: Interactive Heat Transfer Simulations for Everyone Chemical engineering Mechanical engineering Unit operations Transport phenomena
Heat transfer
Physics,Chemistry,Engineering
7,449
31,585,964
https://en.wikipedia.org/wiki/Industrial%20catalysts
The first industrial use of a catalyst was in 1746, by J. Roebuck, in the manufacture of lead chamber sulfuric acid. Since then catalysts have been in use in a large portion of the chemical industry. At first only pure components were used as catalysts, but after 1900 multicomponent catalysts were studied and are now commonly used in industry. In the chemical industry and industrial research, catalysis plays an important role. Different catalysts are in constant development to fulfil economic, political and environmental demands. When using a catalyst, it is possible to replace a polluting chemical reaction with a more environmentally friendly alternative. Today, and in the future, this can be vital for the chemical industry. In addition, it is important for a company or researcher to pay attention to market development. If a company's catalyst is not continually improved, another company can make progress in research on that particular catalyst and gain market share. For a company, a new and improved catalyst can be a huge advantage for a competitive manufacturing cost. It is extremely expensive for a company to shut down the plant because of an error in the catalyst, so the correct selection of a catalyst or a new improvement can be key to industrial success. To achieve the best understanding and development of a catalyst it is important that different special fields work together. These fields can be organic chemistry, analytical chemistry, inorganic chemistry, chemical engineering and surface chemistry. The economics must also be taken into account. One of the issues that must be considered is whether the company should spend money doing catalyst research itself or buy the technology from someone else. As the analytical tools are becoming more advanced, the catalysts used in the industry are improving. One example of an improvement can be to develop a catalyst with a longer lifetime than the previous version. Some of the advantages an improved catalyst offers, ones that affect people's lives, are cheaper and more effective fuel, new drugs and medications, and new polymers. Some of the large chemical processes that use catalysis today are the production of methanol and ammonia. Both methanol and ammonia synthesis take advantage of the water-gas shift reaction and heterogeneous catalysis, while other chemical industries use homogeneous catalysis. If the catalyst exists in the same phase as the reactants it is said to be homogeneous; otherwise it is heterogeneous. Water gas shift reaction The water gas shift reaction was first used industrially at the beginning of the 20th century. Today the WGS reaction is used primarily to produce hydrogen that can be used for further production of methanol and ammonia. WGS reaction In the reaction, carbon monoxide (CO) reacts with water (H2O) to form carbon dioxide (CO2) and hydrogen (H2): CO + H2O ⇌ CO2 + H2. The reaction is exothermic, with ΔH = -41.1 kJ/mol, and has an adiabatic temperature rise of 8–10 °C per percent of CO converted to CO2 and H2 (a van 't Hoff estimate based on this enthalpy is sketched at the end of this passage). The most common catalysts used in the water-gas shift reaction are the high temperature shift (HTS) catalyst and the low temperature shift (LTS) catalyst. The HTS catalyst consists of iron oxide stabilized by chromium oxide, while the LTS catalyst is based on copper. The main purpose of the LTS catalyst is to reduce the CO content of the reformate, which is especially important in ammonia production, where a high yield of H2 is needed.
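Given the reaction enthalpy quoted above, a minimal van 't Hoff sketch shows how strongly the WGS equilibrium shifts with temperature. It assumes ΔH is constant over the interval, which is a simplification (ΔH varies somewhat with temperature), so the estimate is only approximate.

```python
# Van 't Hoff estimate of the WGS equilibrium shift between 200 C and 400 C,
# using dH = -41.1 kJ/mol (from the text) and assuming it constant.
import math

R, dH = 8.314, -41.1e3      # J/(mol*K), J/mol
T1, T2 = 473.15, 673.15     # 200 C and 400 C, in kelvin

# ln(K1/K2) = -(dH/R) * (1/T1 - 1/T2)
ratio = math.exp(-(dH / R) * (1.0 / T1 - 1.0 / T2))
print(f"K(200 C) / K(400 C) ~ {ratio:.1f}")
```

This predicts a drop by a factor of roughly 22 between 200 °C and 400 °C, in fair agreement with the equilibrium constants quoted in the next passage (228/11.8 ≈ 19).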
Both catalysts are necessary for thermal stability, since using the LTS reactor alone increases exit-stream temperatures to unacceptable levels. The equilibrium constant for the reaction is given by Kp = (pCO2 pH2)/(pCO pH2O). Low temperatures will therefore shift the reaction to the right, and more products will be produced. The equilibrium constant is extremely dependent on the reaction temperature: for example, Kp equals 228 at 200 °C but only 11.8 at 400 °C (compare the van 't Hoff sketch above). The WGS reaction can be performed both homogeneously and heterogeneously, but only the heterogeneous method is used commercially. High temperature shift (HTS) catalyst The first step in the WGS reaction is the high temperature shift, which is carried out at temperatures between 320 °C and 450 °C. As mentioned before, the catalyst is a composition of iron oxide, Fe2O3 (90–95%), and chromium oxide, Cr2O3 (5–10%), which has ideal activity and selectivity at these temperatures. When preparing this catalyst, one of the most important steps is washing to remove sulfate, which can turn into hydrogen sulfide and poison the LTS catalyst later in the process. Chromium is added to the catalyst to stabilize the catalyst activity over time and to delay sintering of the iron oxide. Sintering decreases the active catalyst area, so by decreasing the sintering rate the lifetime of the catalyst is extended. The catalyst is usually used in pellet form, and the pellet size plays an important role: large pellets are strong, but limit the reaction rate. In the end, the dominant phase in the catalyst consists of Cr3+ in α-Fe2O3, but the catalyst is still not active. To become active, the α-Fe2O3 must be partially reduced to Fe3O4, and any CrO3 must be reduced to Cr2O3, in the presence of H2. This usually happens in the reactor start-up phase, and because the reduction reactions are exothermic the reduction should happen under controlled circumstances. The lifetime of the iron-chrome catalyst is approximately 3–5 years, depending on how the catalyst is handled. Even though the HTS catalyst mechanism has been the subject of much research, there is no final agreement on the kinetics or mechanism. Research has narrowed it down to two possible mechanisms: a regenerative redox mechanism and an adsorptive (associative) mechanism. In the redox mechanism, a CO molecule first reduces a surface O atom, yielding CO2 and a vacant surface center; the vacant site is then reoxidized by water, and the oxide center is regenerated. The adsorptive mechanism assumes that a formate species is produced when an adsorbed CO molecule reacts with a surface hydroxyl group; the formate then decomposes in the presence of steam. Low temperature shift (LTS) catalyst The low temperature shift is the second stage in the process, and is designed to take advantage of the higher equilibrium hydrogen yield at low temperatures. The reaction is carried out between 200 °C and 250 °C, and the most commonly used catalyst is based on copper. While the HTS reactor uses an iron-chrome based catalyst, the copper catalyst is more active at lower temperatures, thereby yielding a lower equilibrium concentration of CO and a higher equilibrium concentration of H2. The disadvantage of copper catalysts is that they are very sensitive to sulfide poisoning; future use of, for example, a cobalt-molybdenum catalyst could solve this problem. The catalyst mainly used in the industry today is a copper-zinc-alumina (Cu/ZnO/Al2O3) based catalyst. Like the HTS catalyst, the LTS catalyst has to be activated by reduction before it can be used.
The reduction reaction CuO + H2 → Cu + H2O is highly exothermic and should be conducted in dry gas for an optimal result. As for the HTS catalyst, two similar reaction mechanisms have been suggested. The first mechanism proposed for the LTS reaction was a redox mechanism, but later evidence showed that the reaction can proceed via associative intermediates. The suggested intermediates are HOCO, HCO and HCOO. As of 2009, three mechanisms in total had been proposed for the water-gas shift reaction over Cu(111), described below (one plausible equation-form rendering is sketched at the end of this passage). Intermediate mechanism (usually called the associative mechanism): an intermediate is first formed and then decomposes into the final products. Associative mechanism: CO2 is produced from the reaction of CO with OH without the formation of an intermediate. Redox mechanism: water dissociation yields surface oxygen atoms, which react with CO to produce CO2. It is not certain that just one of these mechanisms controls the reaction; several of them may be active. Q.-L. Tang et al. have suggested that the most favorable mechanism is the intermediate mechanism (with HOCO as intermediate), followed by the redox mechanism, with water dissociation as the rate-determining step. For both the HTS and LTS catalysts the redox mechanism is the oldest theory, and most published articles support it, but as technology has developed the adsorptive mechanism has attracted more interest. One reason the literature does not agree on a single mechanism may be that experiments are carried out under different assumptions. Carbon Monoxide CO must be produced for the WGS reaction to take place. This can be done in different ways from a variety of carbon sources, such as passing steam over coal, steam reforming of methane over a nickel catalyst, or by using biomass. Both of these reactions are highly endothermic and can be coupled to an exothermic partial oxidation. The product mixture of CO and H2 is known as syngas. When dealing with a catalyst and CO, it is common to assume that a CO-metal intermediate is formed before the intermediate reacts further into the products. When designing a catalyst this is important to remember. The strength of interaction between the CO molecule and the metal should be strong enough to provide a sufficient concentration of the intermediate, but not so strong that the reaction will not continue. CO is a common molecule to use in a catalytic reaction, and when it interacts with a metal surface it is actually the molecular orbitals of CO that interact with the d-band of the metal surface. In a molecular orbital (MO) picture, CO can act as a σ-donor via the lone pair of electrons on C, and as a π-acceptor ligand in transition metal complexes. It is possible to look at a simplified picture and consider only the LUMO (2π*) and HOMO (5σ) of CO. When a CO molecule is adsorbed on a metal surface, the d-band of the metal interacts with the molecular orbitals of CO. The overall effect of the σ-donation and the π-backdonation is that a strong bond between C and the metal is formed, and in addition the bond between C and O is weakened. The latter effect is due to charge depletion in the CO 5σ bonding orbital and charge increase in the CO 2π* antibonding orbital. When looking at chemical surfaces, many researchers seem to agree that the surface of Cu/ZnO/Al2O3 is most similar to the Cu(111) surface.
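A plausible equation-form sketch of the three Cu(111) mechanisms just described. Asterisks denote adsorbed surface species; the exact elementary steps vary between published studies, so this is a reconstruction from the prose above, not the article's own equations.

```latex
\begin{align*}
\text{Intermediate:} &\quad \mathrm{CO^{*} + OH^{*} \rightarrow HOCO^{*} \rightarrow CO_{2} + H^{*}} \\
\text{Associative:}  &\quad \mathrm{CO^{*} + OH^{*} \rightarrow CO_{2} + H^{*}} \\
\text{Redox:}        &\quad \mathrm{H_{2}O^{*} \rightarrow OH^{*} + H^{*}, \qquad
                             OH^{*} \rightarrow O^{*} + H^{*}, \qquad
                             CO^{*} + O^{*} \rightarrow CO_{2}}
\end{align*}
```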
Since copper is the main catalyst and the active phase in the LTS catalyst, many experiments have been done with copper, comparing in particular Cu(110) and Cu(111). Arrhenius plots derived from the measured reaction rates show that Cu(110) has a faster reaction rate and a lower activation energy. This may be due to the fact that Cu(111) is more closely packed than Cu(110). Methanol production Production of methanol is an important industry today, and methanol is one of the largest volume carbonylation products. The process uses syngas as feedstock, and for that reason the water gas shift reaction is important for this synthesis. The most important reaction based on methanol is the decomposition of methanol to yield carbon monoxide and hydrogen. Methanol is therefore an important raw material for production of CO and H2 that can be used in the generation of fuel. BASF was the first company (in 1923) to produce methanol on a large scale, then using a sulfur-resistant ZnO/Cr2O3 catalyst. The feed gas was produced by gasification of coal. Today the synthesis gas is usually manufactured via steam reforming of natural gas. The most effective catalysts for methanol synthesis are Cu, Ni, Pd and Pt, while the most common supports are based on Al and Si. In 1966 ICI (Imperial Chemical Industries) developed a process that is still in use today. The process is a low-pressure process that uses a Cu/ZnO/Al2O3 catalyst, where copper is the active material. This is actually the same catalyst used as the low-temperature shift catalyst in the WGS reaction. The synthesis, carried out at 250 °C and 5-10 MPa, proceeds via hydrogenation of both CO (CO + 2H2 → CH3OH) and CO2 (CO2 + 3H2 → CH3OH + H2O); both reactions are exothermic and proceed with volume contraction. Maximum yield of methanol is therefore obtained at low temperatures and high pressure, with use of a catalyst that has a high activity at these conditions. A catalyst with sufficiently high activity at low temperature still does not exist, and this is one of the main reasons that companies continue research and catalyst development. A reaction mechanism for methanol synthesis has been suggested by Chinchen et al. Today there are four ways to obtain hydrogen catalytically from methanol, and all of them can be carried out using a transition metal catalyst (Cu, Pd); a rough enthalpy check on the first three is sketched at the end of this passage. Steam reforming The reaction is given as CH3OH + H2O → CO2 + 3H2. Steam reforming is a good source for production of hydrogen, but the reaction is endothermic. The reaction can be carried out over a copper-based catalyst, but the reaction mechanism depends on the catalyst. For a copper-based catalyst two different reaction mechanisms have been proposed: a decomposition-water-gas shift sequence and a mechanism that proceeds via methanol dehydrogenation to methyl formate. The first mechanism aims at methanol decomposition followed by the WGS reaction and has been proposed for Cu/ZnO/Al2O3. The mechanism of the methyl formate route can depend on the composition of the catalyst, and a corresponding mechanism has likewise been proposed over Cu/ZnO/Al2O3. When methanol is almost completely converted, CO is produced as a secondary product via the reverse water-gas shift reaction. Methanol decomposition The second way to produce hydrogen from methanol is by methanol decomposition, CH3OH → CO + 2H2. As the positive enthalpy shows, the reaction is endothermic, and this can be taken further advantage of in industry.
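The endo- or exothermic character of these routes can be checked from standard gas-phase formation enthalpies. The values below are common textbook numbers (298 K), not taken from this article, so the results are approximate.

```python
# Reaction enthalpies for the methanol-to-hydrogen routes, from standard
# gas-phase formation enthalpies (kJ/mol, 298 K; textbook values).
dHf = {"CH3OH": -201.0, "H2O": -241.8, "CO": -110.5,
       "CO2": -393.5, "H2": 0.0, "O2": 0.0}

def reaction_dH(reactants, products):
    """dH = sum over products - sum over reactants, weighted by coefficients."""
    return (sum(n * dHf[s] for s, n in products.items())
            - sum(n * dHf[s] for s, n in reactants.items()))

routes = {
    "decomposition:     CH3OH -> CO + 2 H2":
        ({"CH3OH": 1}, {"CO": 1, "H2": 2}),
    "steam reforming:   CH3OH + H2O -> CO2 + 3 H2":
        ({"CH3OH": 1, "H2O": 1}, {"CO2": 1, "H2": 3}),
    "partial oxidation: CH3OH + 1/2 O2 -> CO2 + 2 H2":
        ({"CH3OH": 1, "O2": 0.5}, {"CO2": 1, "H2": 2}),
}

for name, (r, p) in routes.items():
    print(f"{name}: dH = {reaction_dH(r, p):+.1f} kJ/mol")
```

This reproduces the signs stated in the text: decomposition (about +90 kJ/mol) and steam reforming (about +49 kJ/mol) are endothermic, while partial oxidation (about −192 kJ/mol) is exothermic.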
Methanol decomposition is the opposite of the methanol synthesis from syngas, and the most effective catalysts seem to be Cu, Ni, Pd and Pt, as mentioned before. Often, a Cu/ZnO-based catalyst is used at temperatures between 200 and 300 °C, but by-products like dimethyl ether, methyl formate, methane and water are common. The reaction mechanism is not fully understood, and two possible mechanisms had been proposed as of 2002: one producing CO2 and H2 by decomposition of formate intermediates, and the other producing CO and H2 via a methyl formate intermediate. Partial oxidation Partial oxidation is a third way of producing hydrogen from methanol. The reaction, CH3OH + 1/2 O2 → CO2 + 2H2, is often carried out with air or oxygen as the oxidant. The reaction is exothermic and has, under favorable conditions, a higher reaction rate than steam reforming. The catalyst used is often Cu (Cu/ZnO) or Pd, and the two differ in qualities such as by-product formation, product distribution and the effect of oxygen partial pressure. Combined reforming Combined reforming is a combination of partial oxidation and steam reforming, and is the last reaction used for hydrogen production. The general equation can be written CH3OH + s H2O + (p/2) O2 → CO2 + (3s + 2p) H2, where s and p (with s + p = 1) are the stoichiometric coefficients for steam reforming and partial oxidation, respectively. The reaction can be either endothermic or exothermic, depending on the conditions, and combines the advantages of steam reforming and partial oxidation. Ammonia synthesis Ammonia synthesis was developed by Fritz Haber using iron catalysts. The ammonia synthesis advanced between 1909 and 1913, and two important concepts were developed: the benefits of a promoter and the poisoning effect (see catalysis for more details). Ammonia production was one of the first commercial processes that required the production of hydrogen, and the cheapest and best way to obtain hydrogen was via the water-gas shift reaction. The Haber–Bosch process is the most common process used in the ammonia industry. A lot of research has been done on the catalyst used in the ammonia process, but the main catalyst used today is not that dissimilar to the one that was first developed. The catalyst the industry uses is a promoted iron catalyst, where the promoters can be K2O (potassium oxide), Al2O3 (aluminium oxide) and CaO (calcium oxide), and the basic catalytic material is iron. Fixed-bed reactors are most commonly used for the synthesis catalyst. The main ammonia reaction is N2 + 3H2 ⇌ 2NH3. The produced ammonia can be used further in the production of nitric acid via the Ostwald process. See also Ammonia Chemical plant Chemical industry References Catalysis
Industrial catalysts
Chemistry
3,501
17,378,755
https://en.wikipedia.org/wiki/Glutaconyl-CoA
Glutaconyl-CoA is an intermediate in the metabolism of lysine. It is an organic compound containing a coenzyme substructure, which classifies it as a fatty ester lipid molecule. As a lipid, the molecule is hydrophobic, which makes it insoluble in water. The molecule has a molecular formula of C26H40N7O19P3S (consistent with the stated molecular weight) and a molecular weight of 879.62 grams per mole. Glutaconyl-CoA is postulated to be the main toxin in glutaric aciduria type 1. In certain fermentative bacteria, glutaconyl-CoA decarboxylation is catalyzed by a Na+-dependent decarboxylase and is coupled with Na+ ion translocation, which creates a sodium-motive force as an alternate energy source for these organisms. See also Glutaconate CoA-transferase Glutaconyl-CoA decarboxylase References Thioesters of coenzyme A
Glutaconyl-CoA
Chemistry
210
2,939,479
https://en.wikipedia.org/wiki/Fontages%20Airport
Fontages Airport is located east of Fontanges, Quebec, Canada. References James Bay Project Registered aerodromes in Nord-du-Québec
Fontages Airport
Engineering
30
57,587,843
https://en.wikipedia.org/wiki/List%20of%20search%20appliance%20vendors
A search appliance is a type of computer which is attached to a corporate network for the purpose of indexing the content shared across that network in a way that is similar to a web search engine. It may be made accessible through a public web interface or restricted to users of that network. A search appliance is usually made up of: a gathering component, a standardizing component, a data storage area, a search component, a user interface component, and a management interface component. Vendors of search appliances Fabasoft Google InfoLibrarian Search Appliance™ Maxxcat Searchdaimon Thunderstone Former/defunct vendors of search appliances Black Tulip Systems Google Search Appliance Index Engines Munax Perfect Search Appliance References External links 7 Enterprise Search Appliances That Can Save the Day Computer hardware Information retrieval systems Internet search Computing-related lists
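The component breakdown above maps naturally onto a toy indexing pipeline. The sketch below mirrors the listed components (gathering, standardizing, data storage, search); all names and documents are invented for illustration and no vendor's actual implementation or API is implied.

```python
# Toy search-appliance pipeline: gather -> standardize -> index -> search.
from collections import defaultdict

def gather():
    """Gathering component: stand-in for a crawler over network shares."""
    return {
        "doc1": "Quarterly sales report for the northern region",
        "doc2": "Sales meeting notes and action items",
        "doc3": "Network security policy for shared drives",
    }

def standardize(text):
    """Standardizing component: normalize case and tokenize."""
    return text.lower().split()

def build_index(docs):
    """Data storage area: an inverted index from term to document IDs."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in standardize(text):
            index[term].add(doc_id)
    return index

def search(index, query):
    """Search component: documents containing every query term."""
    hits = [index.get(term, set()) for term in standardize(query)]
    return set.intersection(*hits) if hits else set()

index = build_index(gather())
print(sorted(search(index, "sales report")))   # ['doc1']
```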
List of search appliance vendors
Technology,Engineering
175
43,334,966
https://en.wikipedia.org/wiki/Valvulotome
A valvulotome is a catheter-based controllable surgical instrument used for cutting or disabling venous valves. This is needed to enable an in situ bypass in patients with an occluded artery (especially the femoral artery), where the great saphenous vein (GSV) is disconnected from the venous system and connected to arteries above and below the occluded segment to allow blood to flow to the lower leg. Since the leg veins usually contain a number of valves that direct flow towards the heart, they cannot directly be used as a graft; but if the vein valves are removed, arterial blood can flow via the GSV to the lower leg. This is called an in situ graft procedure, a type of vascular bypass. The valvulotome itself is a long, flexible catheter with a recessed cutting blade at its end for the destruction of venous valves. The valvulotome is inserted at the distal end of the vein, guided to the proximal end, then withdrawn. It is during withdrawal that the valves are destroyed. The blade is designed to prevent exposure of the vein intima to the sharp cutting surface, to avoid damage to the vessel wall. It is often designed to resemble a hook, with a blunt outer surface and a sharp inner surface that makes contact with the venous valve as the device is withdrawn, but not during insertion. References Medical devices
Valvulotome
Biology
286
38,771,064
https://en.wikipedia.org/wiki/Shafarevich%E2%80%93Weil%20theorem
In algebraic number theory, the Shafarevich–Weil theorem relates the fundamental class of a Galois extension of local or global fields to an extension of Galois groups. It was introduced by Shafarevich for local fields and by Weil for global fields. Statement Suppose that F is a global field, K is a normal extension of F, and L is an abelian extension of K. Then the Galois group Gal(L/F) is an extension of the group Gal(K/F) by the abelian group Gal(L/K), and this extension corresponds to an element of the cohomology group H2(Gal(K/F), Gal(L/K)). On the other hand, class field theory gives a fundamental class in H2(Gal(K/F),IK) and a reciprocity law map from IK to Gal(L/K). The Shafarevich–Weil theorem states that the class of the extension Gal(L/F) is the image of the fundamental class under the homomorphism of cohomology groups induced by the reciprocity law map. Shafarevich stated his theorem for local fields in terms of division algebras rather than the fundamental class. In this case, with L the maximal abelian extension of K, the extension Gal(L/F) corresponds under the reciprocity map to the normalizer of K in a division algebra of degree [K:F] over F, and Shafarevich's theorem states that the Hasse invariant of this division algebra is 1/[K:F]. The relation to the previous version of the theorem is that division algebras correspond to elements of a second cohomology group (the Brauer group), and under this correspondence the division algebra with Hasse invariant 1/[K:F] corresponds to the fundamental class. References Reprinted in his collected works, pages 4–5; reprinted in volume I of his collected papers. Theorems in algebraic number theory
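In symbols, the statement above can be assembled as follows; this is a restatement of the prose, using the article's own notation, not additional content.

```latex
\begin{gather*}
1 \longrightarrow \operatorname{Gal}(L/K) \longrightarrow \operatorname{Gal}(L/F)
  \longrightarrow \operatorname{Gal}(K/F) \longrightarrow 1, \\
[\operatorname{Gal}(L/F)] \in H^2\bigl(\operatorname{Gal}(K/F),\, \operatorname{Gal}(L/K)\bigr), \\
[\operatorname{Gal}(L/F)] = \theta_*\,(u_{K/F}),
\end{gather*}
```

where $u_{K/F} \in H^2(\operatorname{Gal}(K/F), I_K)$ is the fundamental class and $\theta_*$ is the map on cohomology induced by the reciprocity map $\theta \colon I_K \to \operatorname{Gal}(L/K)$.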
Shafarevich–Weil theorem
Mathematics
408
31,675,267
https://en.wikipedia.org/wiki/Arago%27s%20rotations
Arago's rotations is an observable magnetic phenomenon that involves the interactions between a magnetized needle and a moving metal disk. The effect was discovered by François Arago in 1824. At the time of their discovery, Arago's rotations were surprising effects that were difficult to explain. In 1831, Michael Faraday introduced the theory of electromagnetic induction, which explained how the effects happen in detail. History Early observations and publications As has so often occurred in other branches of science, the discovery of the magnetic rotations was made nearly simultaneously by several persons, for all of whom priority has been claimed. About 1824, Gambey the celebrated instrument-maker of Paris, had made the casual observation that a compass-needle, when disturbed and set oscillating around its pivot, comes to rest sooner if the bottom of the compass-box is of copper than if it is of wood or other material. Barlow and Marsh, at Woolwich, had at the same time been observing the effect on a magnetic needle of rotating in its neighborhood a sphere of iron. Arago the astronomer, who is said to have learned of the phenomenon from Gambey, but who is also said to have independently discovered it in 1822, when working with Humboldt at magnetic determinations, was beyond question the first to publish an account of the observation, which he did verbally before the Académie des Sciences of Paris, on November 22, 1824. He hung a compass-needle within rings of different materials, pushed the needle aside to about 45°, and counted the number of oscillations made by the needle before the angle of swing decreased to 10°. In a wooden ring the number was 145, in a thin copper ring 66, and in a stout copper ring it was only 33. Magnetism of rotation The effect of the presence of the mass of copper is to damp the vibrations of the needle. Each swing takes the same time as before, but the amplitudes are lessened; the motion dying down, as though there were some invisible friction at work. Arago remarked that it gave evidence of the presence of a force which only existed whilst there was relative motion between the magnet-needle and the mass of copper. He gave the phenomenon the name of magnetism of rotation. In 1825 he published a further experiment, in which, arguing from the principle of action and reaction, he produced a reaction on a stationary needle by motion of a copper disk (Fig. 1). Suspending a compass-needle in a glass jar closed at the bottom by a sheet of paper or of glass, he held it over a rotating disk of copper. If the latter turns slowly the needle is simply deviated out of the magnetic meridian, tending to turn in the sense of rotation of the disk, as though invisibly dragged by it. With quicker rotation the deviation is greater. If the rotation is so swift that the needle is dragged over 90° a continuous rotation ensues. Arago found, however, that the force was not simply tangential. Suspending a needle vertically from the beam of a balance over the revolving disk he found that it was repelled when the disk was revolved. The pole which hung nearest the disk was also acted upon by radial forces tending, if the pole was near the edge of the disk, to force it radially outward, but if the pole was nearer the center, tending to force it radially inward. 
Poisson, steeped in Coulomb's notions about magnetic action at a distance, essayed to build up a theory of magnetism of rotation, affirming that all bodies acquire a temporary magnetism in the presence of a magnet, but that in copper this temporary magnetism took a longer time to die away. In vain did Arago point out that the theory failed to account for the facts. The so-called "magnetism of rotation" threatened to become a fixed idea. Investigations of the phenomena by other scientists At this stage the phenomenon was investigated by several English experimenters, by Babbage and Herschel, by Christie, and, later, by Sturgeon and by Faraday. Babbage and Herschel measured the amount of retarding force exerted on the needle by different materials, and found the most powerful to be silver and copper (which are the two best conductors of electricity), after them gold and zinc, whilst lead, mercury and bismuth were very inferior in power. In 1825 they announced the successful reversal of Arago's experiment; for by spinning the magnet underneath a pivoted copper disk (Fig. 2) they had caused the latter to rotate briskly. They also made the notable observation that slits cut radially in a copper disk (Fig. 3) diminished its tendency to be rotated by the spinning magnet. If the rotatory force of the unslit disk be reckoned as 100, one radial slit reduced it to 88, two radial slits to 77, four to 48, and eight to 24. Ampère, in 1826, showed that a rotating disk of copper also exercises a turning moment upon a neighbouring copper wire through which a current is flowing. Seebeck in Germany, Prévost and Colladon in Switzerland, Nobili and Bacelli in Italy, confirmed the observations of the English experimenters, and added others. Sturgeon showed that the damping effect of a magnet pole upon a moving copper disk was diminished by the presence of a second magnet pole of contrary kind placed beside the first. Five years later he returned to the subject and came to the conclusion that the effect was an electrical disturbance, "a kind of reaction to that which takes place in electro-magnetism," when the publication of Faraday's brilliant research on magneto-electric induction, in 1831, forestalled the complete explanation of which he was in search. Faraday, in fact, showed that relative motion between magnet and copper disk inevitably set up currents in the metal of the disk, which, in turn, reacted on the magnet pole with mutual forces tending to diminish the relative motion; that is, tending to drag the stationary part (whether magnet or disk) in the direction of the moving part, and tending always to oppose the motion of the moving part. In fact, the currents go eddying round in the moving disk, unless led off by sliding contacts. Experiments on eddy-currents by Faraday and Matteucci This, indeed, Faraday effected, when he inserted his copper disk edgeways (Fig. 4) between the poles of a powerful magnet, and spun it, while against edge and axle were pressed spring contacts to take off the currents. The electromotive-force, acting at right angles to the motion, and to the lines of the magnetic field, produces currents which flow along the radius of the disk. If no external path is provided, the currents must find for themselves internal return paths in the metal of the disk. Fig. 5 shows the way in which a pair of eddies is set up in a disk revolving between magnet poles. These eddies are symmetrically located on either side of the radius of maximum electromotive-force (Fig. 6).
The direction of the circulation of eddy-currents is always such as to tend to oppose the relative motion. The eddy-current in the part receding from the poles tends to attract the poles forward or to drag this part of the disk backwards. The eddy-current in the part advancing toward the poles tends to repel those poles and to be repelled by them. It is obvious that any slits cut in the disk will tend to limit the flow of the eddy-currents, and by limiting them to increase the resistance of their possible paths in the metal, though it will not diminish the electromotive-force. In the researches of Sturgeon a number of experiments are described to ascertain the directions in which the eddy-currents flow in disks. Similar, but more complete researches were made by Matteucci. The induction in rotating spheres was mathematically investigated by Jochmann, and later by Hertz. Faraday showed several interesting experiments on eddy-currents. Amongst others he hung from a twisted thread a cube of copper in a direct line between the poles of a powerful electromagnet (Fig. 7). Before the current was turned on, the cube, by its weight, untwisted the cord and spun rapidly. On exciting the magnet by switching on the current, the cube stops instantaneously; but begins again to spin as soon as the current is broken. Matteucci varied this experiment by constructing a cube of square bits of sheet copper separated by paper from one another. This laminated cube (Fig. 8), if suspended in the magnetic field by a hook a, so that its laminae were parallel to the lines of the magnetic field, could not be stopped in its rotation by the sudden turning on of the current in the electromagnet; whereas if hung up by the hook b, so that its laminations were in a vertical plane, and then set spinning, it was arrested at once when the electromagnet was excited. In the latter case only could eddy-currents circulate, since they require paths in planes at right angles to the magnetic lines. With the explanation given by Faraday of the Arago rotations, as being merely due to induced eddy-currents, the peculiar interest which they excited whilst their cause was unknown seems almost to have died out. True, a few years later some interest was revived when Foucault showed that they were capable of heating the metal disk, if in spite of the drag the rotation was forcibly continued in the magnetic field. Why this observation should have caused the eddy-currents discovered by Faraday as the explanation of Arago's phenomenon to be dubbed Foucault's currents is not clear. If anyone is entitled to the honour of having the eddy-currents named after him, it is obviously Faraday or Arago, not Foucault. A little later, Le Roux produced the paradox that a copper disk rotated between concentric magnet poles is not heated thereby, and does not suffer any drag. The explanation of this is as follows. If there is an annular north pole in front of one face of the disk, and an annular south pole in front of the other face, though there is a magnetic field produced right through the disk, there are no eddy-currents. For if all round the disk there are equal electromotive-forces directed radially inwards or radially outwards, there will be no return path for the currents along any radius of the disk. The periphery will simply take a slightly different potential from the center; but no currents will flow, because the electromotive-forces around any closed path in the disk are balanced.
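A toy numerical rendering of the velocity-proportional eddy-current drag described above (and measured by Guthrie and Boys, as the next passage relates): with a retarding torque proportional to angular speed, I dω/dt = −cω, the spin decays exponentially. The moment of inertia and drag coefficient below are illustrative assumptions, not historical measurements.

```python
# Exponential spin-down of a disk under velocity-proportional eddy drag.
import math

I = 1.0e-4     # moment of inertia of the disk, kg*m^2 (assumed)
c = 2.0e-5     # eddy-current drag coefficient, N*m*s/rad (assumed)
omega0 = 10.0  # initial angular speed, rad/s

tau = I / c    # decay time constant: 5 s with these values
for t in (0.0, 5.0, 10.0, 20.0):
    omega = omega0 * math.exp(-t / tau)
    print(f"t = {t:4.1f} s -> omega = {omega:6.3f} rad/s")
```

The same velocity-proportional law is why Arago's oscillating needle kept its period but lost amplitude, as if acted on by an invisible friction.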
Experiments with copper plates by other scientists In 1884, Willoughby Smith published an investigation on rotating metal disks in which he found iron disks to generate electromotive-forces superior to those generated in copper disks of equal size. Guthrie and Boys in 1879 hung a copper plate over a rotating magnet by means of a torsion thread, and found that the torsion was directly proportional to the velocity of rotation. They pointed out that such an instrument was a very exact one for measuring the speed of machinery. They also made experiments upon varying the distance between the copper plate and the magnet, and varying the diameter and thickness of the copper disk. Experiments were made upon various metals, and the torque was found to vary as the conductivity of the metal, as far as the latter could be judged after being rolled into the form of plate. Messrs. Guthrie and Boys then applied the method to the measurement of the conductivity of liquids. In 1880, De Fonvielle and Lontin observed that a lightly pivoted copper disk could be maintained in continuous rotation, if once started, by being placed, in presence of a magnet, within a coil of copper wire wound on a rectangular frame (like the coil of an old galvanometer), and supplied with alternate currents from an ordinary Ruhmkorff induction coil. They called their apparatus an electromagnetic gyroscope. It does not seem to have occurred to anyone that the Arago rotations could be made use of in the construction of a motor prior to 1879. Short description of Arago's rotations A magnetic needle is freely suspended on a pivot or string, a short distance above a copper disk. If the disk is stationary, the needle aligns itself with the Earth's magnetic field. If the disk is rotated in its own plane, the needle rotates in the same direction as the disk. (The effect decreases as the distance between the magnet and the disk increases.) Variations: If the disk is free to rotate with minimal friction, and the needle is rotated above or below it, the disk rotates in the same direction as the needle. (This is easier to observe or measure if the needle is a larger magnet.) If the needle is not allowed to rotate, its presence retards rotation of the disk. (This is easier to observe or measure if the needle is a larger magnet.) Other non-magnetic materials having electrical conductivity (non-ferrous metals such as silver, aluminum, or zinc) also produce the effect. Non-conductive non-magnetic materials (wood, glass, plastic, ice, etc.) do not produce the effect. Relative motion of the conductor and the magnet induces eddy currents in the conductor, which produce a force or torque that opposes or resists relative motion, or tries to "couple" the objects. The same drag-like force is used in eddy current braking and magnetic damping. See also Faraday disc Induction motor History of electromagnetic theory Further reading Walter Baily, A Mode of producing Arago's Rotation. June 28, 1879. (Philosophical magazine: a journal of theoretical, experimental and applied physics. Taylor & Francis., 1879) Silvanus Phillips Thompson, Polyphase Electric Currents and Alternate-Current Motors. 1895. References External links Arago's rotations (YouTube video) Magnetism Physical phenomena
Arago's rotations
Physics
2,888
3,366,914
https://en.wikipedia.org/wiki/Artificial%20reproduction
Artificial reproduction is the re-creation of life brought about by means other than natural ones. It is new life built by human plans and projects. Examples include artificial selection, artificial insemination, in vitro fertilization, the artificial womb, artificial cloning, and kinematic replication. Artificial reproduction is one aspect of artificial life. Artificial reproduction can be categorized into one of two classes according to its capacity to be self-sufficient: non-assisted reproductive technology and assisted reproductive technology. Cutting plants' stems and placing them in compost is a form of assisted artificial reproduction; xenobots are an example of a more autonomous type of reproduction; while the artificial womb presented in the movie The Matrix illustrates a non-assisted hypothetical technology. The idea of artificial reproduction has led to various technologies. Theology Humans have aspired to create life since time immemorial. Most theologies and religions have conceived this possibility as exclusive to deities. Christian religions consider the possibility of artificial reproduction, in most cases, as heretical and sinful. Philosophy Although ancient Greek philosophy raised the concept that man could imitate the creative capacity of nature, classical Greeks thought that, if it were possible, human beings would reproduce things as nature does, and vice versa, nature would make the things that man makes in the same way. Aristotle, for example, wrote that if nature made tables, it would make them just as men do. In other words, Aristotle held that if nature were to create a table, such a table would look like a human-made table. Correspondingly, Descartes envisioned the human body, and nature, as a machine. Cartesian philosophy thus continued to see a perfect mirror between nature and the artificial. However, Kant revolutionized this old idea by criticizing such naturalism. Kant pedagogically wrote that humans are not instructed by nature but rather use nature as raw material to invent. Humans find alternatives to the restrictions imposed by natural laws; thus, nature is not necessarily mirrored. In accordance with Kant (and contrary to what Aristotle thought), Karl Marx, Alfred Whitehead, Jacques Derrida and Juan David García Bacca noticed that nature is incapable of reproducing tables, or airplanes, or submarines, or computers. If nature tried to create airplanes, it would produce birds. If nature tried to create submarines, it would produce fish. If nature tried to create computers, it would grow brains. And if nature tried to create man, modern man, it would evolve monkeys. According to Whitehead, if we look for something natural in artificial life, in the most elaborate cases, if anything, only atoms remain natural. Juan David García Bacca summarized the point: the major difference between natural causes and artificial causes is that nature does not have plans and projects, while humans design things following plans and projects. In contrast, other influential authors such as Michael Behe have depicted and promoted the idea of intelligent design, a notion that has aroused several doubts and heated controversies, as it reframes natural causes in accordance with a natural plan. Previous ideas that have also provided a positive 'sense' to natural reproduction are orthogenesis, syntropy, orgone and morphic resonance, among others.
Although these ideas have been historically marginalized and often called pseudoscience, biosemioticians have recently been reconsidering some of them under symbolic approaches. The current metaphysics of science recognizes that artificial ways of reproduction are distinct from nature's, i.e., unnatural, anti-natural or supernatural. Because biosemiotics does not focus on the function of life but on its meaning, it has a better understanding of the artificial than classical biology. Science Biology, being the study of cellular life, addresses reproduction in terms of growth and cellular division (i.e., binary fission, mitosis and meiosis); however, the science of artificial reproduction is not restricted to mirroring these natural processes. The science of artificial reproduction is actually transcending the natural forms, and natural rules, of reproduction. For example, xenobots have redefined the classical conception of reproduction. Although xenobots are made of eukaryotic cells, they do not reproduce by mitosis, but rather by kinematic replication. Such constructive replication does not involve growth but rather building. Assisted reproductive technologies The purpose of assisted reproductive technology (ART) is to assist the development of a human embryo, commonly because of medical concerns due to fertility limitations. Non-assisted reproductive technologies Non-assisted reproductive technologies (NART) could have medical motivations but are mostly driven by a wider heterotopic ambition. Although NARTs are initially designed by humans, they are programmed to become independent of humans to a relative or absolute extent. James Lovelock proposed that such novelties could overcome humans. Artificial cloning Cloning is the cellular reproductive process whereby two or more genetically identical organisms are created, either by natural or artificial means. Artificial cloning normally involves editing the genetic code, somatic cell nuclear transfer and 3D bioprinting. Non-assisted artificial womb A non-assisted artificial womb or artificial uterus is a device that allows for ectogenesis, or extracorporeal pregnancy, by growing an embryonic form outside the body of an organism (that would normally carry the embryo to term) without any human assistance. Non-assistance is the key distinction from current artificial womb technology (AWT) in modern medical research, which still relies on human assistance. With this non-assisted hypothetical technology, a zygote or stem cells are used to create an embryo that is then incubated and monitored by artificial intelligence (AI) within a chamber composed of biocompatible material. The AI maintains the necessary conditions for the embryo to develop and thrive, proceeding to mimic organic labor and childbirth in order to best help the embryo adjust to the outside world. Ectogenesis, the mode of gestation depicted in the science fiction movie The Matrix, is a fast-approaching reality. This type of innovation presupposes that vertebrate wombs are not the only way of bearing humans or other similar forms of life. Kinematic replication Self-replication without binary fission, meiosis, mitosis (or any other form of cellular reproduction that involves division and growth) can be achieved. Xenobots are an example of kinematic replication. They are biobots, named after the African clawed frog (Xenopus laevis). Xenobots are cellular life forms designed by using artificial intelligence to build more of themselves by combining frog cells in a liquid medium.
The term kinematic replication is usually reserved for biomolecules (e.g. DNA, RNA, prions, etc.) and artificially designed cellular forms (e.g. xenobots). Machine constructive replication Machine constructive replication mimics human traditional manufacturing but is entirely self-automated. Such constructive replication is a more general form of kinematic replication, which does not necessarily include biomolecular or cellular forms. This technology also includes non-organic forms of life such as robots, cyborgs and artificial intelligence reproduction. Constructive replication, like kinematic replication, does not involve growth. In nature, growth is required for cellular reproduction, where a cell grows before it splits into two daughter cells. Examples of cellular division are binary fission, mitosis and meiosis; these natural reproductive processes require growth. Constructive replication, however, does not require growth, but rather a non-human subject performing the construction of more of itself using available raw materials. In computational terms, constructive replication is understood as a multi-step process that involves self-learning algorithms to assemble machines, and it could involve machines collecting resources. Each machine is created with a neural-network "brain" that can learn and adapt based on information it gathers. That machine's goal is then to manufacture more of itself in the best way it can come up with. Such automated constructive replication involves the notions of inheritance and learning tasks, as machines create an exact copy of themselves through a blueprint that has been passed on to them. Each machine then learns over time, making modifications to its software and to its blueprint for future machines' hardware. It then passes on that modified blueprint to the machines it creates or helps create (a toy sketch of this blueprint-inheritance loop follows at the end of this article). Consciousness amplification Amplification of an existing consciousness is a hypothetical technology. This idea has inspired several films, such as Chappie and Detroit: Become Human. The reproduction of AI is currently part of an innovative human project, involving code and the amplification of that code. In other terms, the reproduction could come from information the AI collected across the Internet. See also Male Pregnancy Artificial Uterus In Vitro Fertilization Xenobot Fertilization Pregnancy The concept of nature sensu Marx Juan David García Bacca References Reproduction Hypothetical technology Artificial life Artificial intelligence Cyborgs Transhumanism Cloning
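Returning to the blueprint-inheritance loop described above, here is a toy rendering: each machine copies its blueprint, applies a small learned modification, and builds the next generation from it. The class names and the "learning" rule are invented for illustration; no real robotics system is implied.

```python
# Toy constructive replication with inherited, modifiable blueprints.
import random

class Machine:
    def __init__(self, blueprint):
        self.blueprint = dict(blueprint)   # inherited construction plan

    def learn(self):
        # Placeholder for self-learning: nudge one design parameter slightly.
        key = random.choice(list(self.blueprint))
        self.blueprint[key] *= 1.0 + random.uniform(-0.05, 0.05)

    def replicate(self):
        # Modify the blueprint, then build a successor from it.
        self.learn()
        return Machine(self.blueprint)

ancestor = Machine({"arm_length": 1.0, "gear_ratio": 3.0})
lineage = [ancestor]
for _ in range(3):
    lineage.append(lineage[-1].replicate())

for gen, m in enumerate(lineage):
    print(f"generation {gen}: {m.blueprint}")
```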
Artificial reproduction
Technology,Engineering,Biology
1,807
11,800,340
https://en.wikipedia.org/wiki/Colletotrichum%20pisi
Colletotrichum pisi is a plant pathogen. References pisi Fungal plant pathogens and diseases Fungi described in 1891 Fungus species
Colletotrichum pisi
Biology
29
10,115,658
https://en.wikipedia.org/wiki/Molecular%20self-assembly
In chemistry and materials science, molecular self-assembly is the process by which molecules adopt a defined arrangement without guidance or management from an outside source. There are two types of self-assembly: intermolecular and intramolecular. Commonly, the term molecular self-assembly refers to the former, while the latter is more commonly called folding. Supramolecular systems Molecular self-assembly is a key concept in supramolecular chemistry, because assembly of molecules in such systems is directed through non-covalent interactions (e.g., hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, pi-stacking, and/or electrostatic interactions) as well as electromagnetic interactions. Common examples include the formation of colloids, biomolecular condensates, micelles, vesicles, liquid crystal phases, and Langmuir monolayers by surfactant molecules. Further examples of supramolecular assemblies demonstrate that a variety of different shapes and sizes can be obtained using molecular self-assembly. Molecular self-assembly allows the construction of challenging molecular topologies. One example is Borromean rings, interlocking rings wherein removal of one ring unlocks each of the other rings. DNA has been used to prepare a molecular analog of Borromean rings. More recently, a similar structure has been prepared using non-biological building blocks. Biological systems Molecular self-assembly underlies the construction of biological macromolecular assemblies and biomolecular condensates in living organisms, and so is crucial to the function of cells. It is exhibited in the self-assembly of lipids to form the membrane, the formation of double helical DNA through hydrogen bonding of the individual strands, and the assembly of proteins to form quaternary structures. Molecular self-assembly of incorrectly folded proteins into insoluble amyloid fibers is responsible for infectious prion-related neurodegenerative diseases. Molecular self-assembly of nanoscale structures plays a role in the growth of the remarkable β-keratin lamellae/setae/spatulae structures that give geckos the ability to climb walls and adhere to ceilings and rock overhangs. Protein multimers When multiple copies of a polypeptide encoded by a gene self-assemble to form a complex, this protein structure is referred to as a "multimer". Genes that encode multimer-forming polypeptides appear to be common. When a multimer is formed from polypeptides produced by two different mutant alleles of a particular gene, the mixed multimer may exhibit greater functional activity than the unmixed multimers formed by each of the mutants alone; in such a case, the phenomenon is referred to as intragenic complementation. Jehle pointed out that, when immersed in a liquid and intermingled with other molecules, charge fluctuation forces favor the association of identical molecules as nearest neighbors. Nanotechnology Molecular self-assembly is an important aspect of bottom-up approaches to nanotechnology. Using molecular self-assembly, the final (desired) structure is programmed in the shape and functional groups of the molecules, as sketched in the example below. Self-assembly is referred to as a 'bottom-up' manufacturing technique, in contrast to 'top-down' techniques such as lithography, where the desired final structure is carved from a larger block of matter. In the speculative vision of molecular nanotechnology, microchips of the future might be made by molecular self-assembly.
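Because the final structure is encoded in the molecules themselves, programmed assembly can be made concrete with the DNA example developed in the next section: a strand binds only the partner whose sequence is its reverse complement. A minimal sketch follows, with helper names invented purely for illustration:

```python
# Watson-Crick pairing: how complementary "sticky ends" encode which DNA
# building blocks self-assemble. Helper names here are illustrative only.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    # Sequence of the strand that will hybridize, read in the opposite direction.
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def will_hybridize(end_a, end_b):
    # Two single-stranded overhangs bind when they are exact reverse complements.
    return end_b == reverse_complement(end_a)

tile_a = "ATGCGT"                     # sticky end on one tile
tile_b = reverse_complement(tile_a)   # "ACGCAT", the only matching end
print(will_hybridize(tile_a, tile_b))    # True
print(will_hybridize(tile_a, "ATGCGT"))  # False: identical ends do not pair
```

Designing tiles so that only the intended sticky ends are complementary is, in essence, how a target lattice or origami shape is programmed into the molecules.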
An advantage of constructing nanostructures from biological materials by molecular self-assembly is that they degrade back into individual molecules that can be broken down by the body. DNA nanotechnology DNA nanotechnology is an area of current research that uses the bottom-up, self-assembly approach for nanotechnological goals. DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information, to make structures such as complex 2D and 3D lattices (both tile-based and using the "DNA origami" method) and three-dimensional structures in the shapes of polyhedra. These DNA structures have also been used as templates in the assembly of other molecules such as gold nanoparticles and streptavidin proteins. Two-dimensional monolayers The spontaneous assembly of a single layer of molecules at interfaces is usually referred to as two-dimensional self-assembly. Common examples of such assemblies are Langmuir-Blodgett monolayers and multilayers of surfactants. Non-surface-active molecules can assemble into ordered structures as well. Early direct proof that non-surface-active molecules can assemble into higher-order architectures at solid interfaces came with the development of scanning tunneling microscopy and in the years shortly thereafter. Eventually two strategies became popular for the self-assembly of 2D architectures, namely self-assembly following ultra-high-vacuum deposition and annealing, and self-assembly at the solid-liquid interface. The design of molecules and conditions leading to the formation of highly crystalline architectures is today considered a form of 2D crystal engineering at the nanoscopic scale. See also Assembly theory Foldamer Ice-nine Macromolecular assembly Self-assembly of nanoparticles Supramolecular assembly References Supramolecular chemistry Self-organization
Molecular self-assembly
Chemistry,Materials_science,Mathematics
1,147
12,254,052
https://en.wikipedia.org/wiki/Smoking
Smoking is a practice in which a substance is combusted and the resulting smoke is typically inhaled to be tasted and absorbed into the bloodstream of a person. Most commonly, the substance used is the dried leaves of the tobacco plant, which have been rolled with a small rectangle of paper into an elongated cylinder called a cigarette. Other forms of smoking include the use of a smoking pipe or a bong. Smoking is primarily practised as a route of administration for psychoactive chemicals because the active substances within the burnt dried plant leaves vaporize and can be airborne-delivered into the respiratory tract, where they are rapidly absorbed into the bloodstream of the lungs and then reach the central nervous system. In the case of tobacco smoking, these active substances are a mixture of aerosol particles that includes the pharmacologically active alkaloid nicotine, which stimulates the nicotinic acetylcholine receptors in the brain. Other notable active substances inhaled via smoking include tetrahydrocannabinol (from cannabis), morphine (from opium) and cocaine (smoked as crack). Smoking is one of the most common forms of recreational drug use. Tobacco smoking is the most popular form, being practised by over one billion people globally, of whom the majority are in the developing countries. Less common drugs for smoking include cannabis and opium. Some of the substances are classified as hard narcotics, like heroin, but the use of these is very limited as they are usually not commercially available. Cigarettes are primarily industrially manufactured but also can be hand-rolled from loose tobacco and rolling paper. Other smoking implements include pipes, cigars, bidis, hookahs, and bongs. Smoking has negative health effects, because smoke inhalation inherently poses challenges to various physiologic processes such as respiration. Smoking tobacco is among the leading causes of many diseases such as lung cancer, heart attack, COPD, erectile dysfunction, and birth defects. Diseases related to tobacco smoking have been shown to kill approximately half of long-term smokers, relative to the average mortality rates faced by non-smokers. Smoking caused over five million deaths a year from 1990 to 2015. Second-hand smoke accounts for a further 600,000 deaths a year among non-smokers globally. The health hazards of smoking have caused many countries to institute high taxes on tobacco products, publish advertisements to discourage use, limit advertisements that promote use, and provide help with quitting for those who do smoke. Smoking can be dated to as early as 5000 BCE, and has been recorded in many different cultures across the world. Early smoking evolved in association with religious ceremonies; as offerings to deities; in cleansing rituals; or to allow shamans and priests to alter their minds for purposes of divination or spiritual enlightenment. After the European exploration and conquest of the Americas, the practice of smoking tobacco quickly spread to the rest of the world. In regions like India and Sub-Saharan Africa, it merged with existing practices of smoking (mostly of cannabis). In Europe, it introduced a new type of social activity and a form of drug intake which previously had been unknown. Perception surrounding smoking has varied over time and from one place to another: holy and sinful, sophisticated and vulgar, a panacea and deadly health hazard. In the last decade of the 20th century, smoking came to be viewed in a decidedly negative light, especially in Western countries.
History Early uses The history of smoking dates back to as early as 5000 BCE for shamanistic rituals. Many ancient civilizations, such as the Babylonian and Chinese, burnt incense as a part of religious rituals, as did the Israelites and the later Catholic and Orthodox Christian churches. Smoking in the Americas probably had its origins in the incense-burning ceremonies of shamans but was later adopted for pleasure, or as a social tool. The smoking of tobacco, as well as various hallucinogenic drugs, was used to achieve trances and to come into contact with the spirit world. The use of substances such as cannabis, clarified butter (ghee), fish offal, dried snake skins and various pastes molded around incense sticks dates back at least 2,000 years. Fumigation (dhupa) and fire offerings (homa) are prescribed in the Ayurveda for medical purposes, and have been practiced for at least 3,000 years, while smoking, dhumrapana (literally "drinking smoke"), has been practiced for at least 2,000 years. Before modern times these substances were consumed through pipes with stems of various lengths, or through chillums. Archaeological findings also show the existence of pipes to smoke opium in Cyprus and Crete as early as the Bronze Age. Cannabis smoking was common in the Middle East before the arrival of tobacco, and was early on a common social activity that centered around the type of water pipe called a hookah. Smoking, especially after the introduction of tobacco, was an essential component of Muslim society and culture and became integrated with important traditions such as weddings and funerals, and was expressed in architecture, clothing, literature and poetry. Cannabis smoking was introduced to Sub-Saharan Africa through Ethiopia and the east African coast by either Indian or Arab traders in the 13th century or earlier and spread on the same trade routes as those that carried coffee, which originated in the highlands of Ethiopia. It was smoked in calabash water pipes with terracotta smoking bowls, apparently an Ethiopian invention which was later conveyed to eastern, southern and central Africa. Reports from the first European explorers and conquistadors to reach the Americas tell of rituals where native priests smoked themselves into such high degrees of intoxication that it is unlikely that the rituals were limited to just tobacco. Popularization In 1612, six years after the settlement of Jamestown, John Rolfe was credited as the first settler to successfully grow tobacco as a cash crop. The demand quickly grew as tobacco, referred to as "golden weed", revived the Virginia Company from its failed expeditions in search of gold in the Americas. In order to meet demand from the Old World, tobacco was grown in succession, quickly depleting the land. This became a motivator to settle west into the unknown continent, and likewise drove an expansion of tobacco production. Indentured servants became the primary labor force up until Bacon's Rebellion, after which the focus turned to slavery. This trend abated following the American Revolution as slavery became regarded as unprofitable. However, the practice was revived in 1794 with the invention of the cotton gin. A Frenchman named Jean Nicot (from whose name the word nicotine is derived) introduced tobacco to France in 1560. From France tobacco spread to England. The first report documents an English sailor in Bristol in 1556, seen "emitting smoke from his nostrils".
Like tea, coffee and opium, tobacco was just one of many intoxicants that were originally used as a form of medicine. Tobacco was introduced around 1600 by French merchants in what is today The Gambia and Senegal. At the same time caravans from Morocco brought tobacco to the areas around Timbuktu, and the Portuguese brought the commodity (and the plant) to southern Africa, establishing the popularity of tobacco throughout all of Africa by the 1650s. Soon after its introduction to the Old World, tobacco came under frequent criticism from state and religious leaders. Murad IV, sultan of the Ottoman Empire (1623–40), was among the first to attempt a smoking ban, claiming it was a threat to public morality and health. The Chongzhen Emperor of China issued an edict banning smoking two years before his death and the overthrow of the Ming dynasty. Later, the Manchu rulers of the Qing dynasty would proclaim smoking "a more heinous crime than that even of neglecting archery". In Edo period Japan, some of the earliest tobacco plantations were scorned by the shōgun as being a threat to the military economy by letting valuable farmland go to waste for the use of a recreational drug instead of being used to plant food crops. Religious leaders have often been prominent among those who considered smoking immoral or outright blasphemous. In 1634, the Patriarch of Moscow and all Rus' forbade the sale of tobacco and sentenced men and women who flouted the ban to have their nostrils slit and to be whipped until the skin came off their backs. The Western church leader Pope Urban VII likewise condemned smoking in a papal bull of 1590. Despite many concerted efforts, restrictions and bans were almost universally ignored. When James VI and I, a staunch anti-smoker and the author of A Counterblaste to Tobacco, tried to curb the new trend by enforcing an enormous 4,000% tax increase on tobacco in 1604, it proved a failure, as London had some 7,000 tobacco sellers by the early 17th century. Later, rulers would realise the futility of smoking bans and instead turned tobacco trade and cultivation into lucrative government monopolies. By the mid-17th century every major civilization had been introduced to tobacco smoking and in many cases had already assimilated it into its culture, despite the attempts of many rulers to stamp the practice out with harsh penalties or fines. Tobacco, both product and plant, followed the major trade routes to major ports and markets, and then on into the hinterlands. The English language term smoking was coined in the late 18th century; before then the practice was referred to as drinking smoke. Tobacco and cannabis were used in Sub-Saharan Africa, much like elsewhere in the world, to confirm social relations, but also created entirely new ones. In what is today Congo, a society called Bena Diemba ("People of Cannabis") was organized in the late 19th century in Lubuko ("The Land of Friendship"). The Bena Diemba were collectivist pacifists who rejected alcohol and herbal medicines in favor of cannabis. Growth remained stable until the American Civil War in the 1860s, after which the primary labor force transitioned from slavery to sharecropping. This, compounded with a change in demand, led to the industrialization of tobacco production with the cigarette. In 1881, James Albert Bonsack, a craftsman, produced a machine to speed up the production of cigarettes. Opium In the 19th century, the practice of smoking opium became widespread in China.
Previously, opium had only been ingested orally, and then only for its medicinal properties (it was used as an anaesthetic). The narcotic was also outlawed in China sometime in the early 18th century due to the societal issues it caused. Due to a massive trade imbalance, however, foreign merchants started to smuggle opium into China via Canton, to the chagrin of the Chinese authorities. Attempts by Chinese official Lin Zexu to eliminate the trade led to the outbreak of the First Opium War. The Chinese defeat in the First and Second Opium Wars resulted in the legalization of the importation of opium into China. Opium smoking later spread with Chinese immigrants and spawned many infamous opium dens in Chinatowns around South and Southeast Asia, Europe and the Americas. In the latter half of the 19th century, opium smoking became popular in the artistic community in Europe, especially Paris; artists' neighborhoods such as Montparnasse and Montmartre became virtual "opium capitals". While opium dens that catered primarily to emigrant Chinese continued to exist in Chinatowns around the world, the trend among the European artists largely abated after the outbreak of World War I. The consumption of opium abated in China during the Cultural Revolution in the 1960s and 1970s. Anti-tobacco movement Many people have been critical of tobacco use ever since it gained popularity. In 1798, Dr. Benjamin Rush (early American physician, signer of the Declaration of Independence, Surgeon General under George Washington, and anti-tobacco activist) was "against the habitual use of tobacco" because he believed it (a) "led to a desire for strong drink," (b) "was injurious both to health and morals," (c) "is generally offensive to" nonsmokers, (d) "produces a want of respect for" nonsmokers, and (e) "always disposes to unkind and unjust behavior towards them." With the modernization of cigarette production, compounded with the increased life expectancies of the 1920s, adverse health effects began to become more prevalent. In Germany, anti-smoking groups, often associated with anti-liquor groups, first published advocacy against the consumption of tobacco in the journal Der Tabakgegner (The Tobacco Opponent) in 1912 and 1932. In 1929, Fritz Lickint of Dresden, Germany, published a paper containing formal statistical evidence of a lung cancer–tobacco link. During the Great Depression, Adolf Hitler condemned his earlier smoking habit as a waste of money, and later made stronger assertions against smoking. This movement was further strengthened by Nazi reproductive policy, as women who smoked were viewed as unsuitable to be wives and mothers in a German family. The movement in Nazi Germany did not reach across enemy lines during the Second World War, and anti-smoking groups quickly lost popular support. By the end of the Second World War, American cigarette manufacturers quickly reentered the German black market. Illegal smuggling of tobacco became prevalent, and leaders of the Nazi anti-smoking campaign were assassinated. As part of the Marshall Plan, the United States shipped free tobacco to Germany: 24,000 tons in 1948 and 69,000 tons in 1949. Per capita yearly cigarette consumption in post-war Germany steadily rose from 460 in 1950 to 1,523 in 1963. By the end of the 20th century, anti-smoking campaigns in Germany were unable to exceed the effectiveness of the Nazi-era climax of 1939–41, and German tobacco health research was described by Robert N. Proctor as "muted".
In the UK and the US, an increase in lung cancer rates, formerly "among the rarest forms of disease", was noted by the 1930s, but its cause remained unknown and even the credibility of this increase was sometimes disputed as late as 1950. For example, in Connecticut, reported age-adjusted incidence rates of lung cancer among males increased 220% between 1935–39 and 1950–54. In the UK, the share of lung cancer among all cancer deaths in men increased from 1.5% in 1920 to 19.7% in 1947. Nevertheless, these increases were questioned as potentially caused by increased reporting and improved methods of diagnosis. Although several carcinogens were already known at the time (for example, benzo[a]pyrene was isolated from coal tar and demonstrated to be a potent carcinogen in 1933), none were known to be contained in adequate quantities in tobacco smoke. In 1950, Richard Doll published research in the British Medical Journal showing a close link between smoking and lung cancer. Four years later, in 1954, the British Doctors Study, a study of some 40,000 doctors over 20 years, confirmed the link, based on which the government issued advice that smoking and lung cancer rates were related. In 1964 the United States Surgeon General's Report on Smoking and Health demonstrated the relationship between smoking and cancer. Further reports confirmed this link in the 1980s and concluded in 1986 that passive smoking was also harmful. As scientific evidence mounted in the 1980s, tobacco companies claimed contributory negligence, as the adverse health effects were previously unknown or lacked substantial credibility. Health authorities sided with these claims up until 1998, after which they reversed their position. The Tobacco Master Settlement Agreement, originally between the four largest US tobacco companies and the Attorneys General of 46 states, restricted certain types of tobacco advertisement and required payments for health compensation, which later amounted to the largest civil settlement in United States history. From 1965 to 2006, rates of smoking in the United States declined from 42% to 20.8%. A significant majority of those who quit were professional, affluent men. Despite this decrease in the prevalence of consumption, the average number of cigarettes consumed per person per day increased from 22 in 1954 to 30 in 1978. This paradoxical trend suggests that those who quit had smoked less, while those who continued to smoke moved to smoking more light cigarettes. This trend has been paralleled by many industrialized nations as rates have either leveled off or declined. In the developing countries, however, tobacco consumption continued to rise, at 3.4% in 2002. In Africa, smoking is in most areas considered to be modern, and many of the strong adverse opinions that prevail in the West receive much less attention. Today Russia leads as the top consumer of tobacco, followed by Indonesia, Laos, Ukraine, Belarus, Greece, Jordan, and China. At the global scale, initial ideas of an international convention toward the prevention of tobacco use were initiated in the World Health Assembly (WHA) in 1996. In 1998, along with the successful election of Dr. Gro Harlem Brundtland as Director-General, the World Health Organization set tobacco control as its leading health concern and began a program known as the Tobacco Free Initiative (TFI) in order to reduce rates of consumption in the developing world.
However, it was not until 2003 that the Framework Convention on Tobacco Control (FCTC) was accepted in the WHA, and it entered into force in 2005. The FCTC marked a milestone as the first international treaty concerning a global health issue, aiming to combat tobacco in multiple aspects including tobacco taxes, advertisement, trading, environmental effects, health influences, etc. The birth of this evidence-based and systematic approach has resulted in the reinforcement of tobacco taxes and the implementation of smoke-free laws in 128 countries, which led to a decrease in smoking prevalence in developing nations. In Nepal, a two-week health campaign called "Smokers are not selfish", timed to coincide with Valentine's Day and Vasant Panchami, encourages individuals to quit smoking as a sacrifice for their loved ones and as a meaningful decision of life; the campaign has attracted public attention. Other substances In the early 1980s, organized international trafficking of cocaine grew. However, overproduction and tighter legal enforcement for the illegal product caused drug dealers to convert the powder to "crack" – a solid, smokable form of cocaine that could be sold in smaller quantities to more people. This trend abated in the 1990s as increased police action coupled with a robust economy caused many potential consumers to give up or fail to take up the habit. Recent years have shown an increase in the consumption of vaporized heroin, methamphetamine and phencyclidine (PCP), along with a smaller number of psychedelic drugs such as changa, DMT, 5-MeO-DMT, and Salvia divinorum. Substances and equipment The most popular type of substance that is smoked is tobacco. There are many different tobacco cultivars which are made into a wide variety of mixtures and brands. Tobacco is often sold flavored, often with various fruit aromas, something which is especially popular for use with water pipes, such as hookahs. The second most common substance that is smoked is cannabis, made from the flowers or leaves of Cannabis sativa or Cannabis indica. The substance is considered illegal in most countries in the world, and in the countries that tolerate public consumption it is sometimes only pseudo-legal. Despite this, a considerable percentage of the adult population in many countries have tried it, with smaller minorities using it on a regular basis. Since cannabis is illegal or only tolerated in many jurisdictions, there is no industrial mass-production of cigarettes, meaning that the most common form of smoking is with hand-rolled cigarettes (often called joints) or with pipes. Water pipes are also fairly common; water pipes used for cannabis include designs known as bongs and bubblers, among others. A few other recreational drugs are smoked by smaller minorities. Most of these substances are controlled, and some are considerably more intoxicating than either tobacco or cannabis. These include crack cocaine, heroin, methamphetamine and PCP. A small number of psychedelic drugs are also smoked, including DMT, 5-MeO-DMT, and Salvia divinorum. Even the most primitive form of smoking requires tools of some sort. This has resulted in a staggering variety of smoking tools and paraphernalia from all over the world. Whether tobacco, cannabis, opium or herbs, some form of receptacle is required along with a source of fire to light the mixture.
The most common today is by far the cigarette, consisting of a mild inhalant strain of tobacco in a tightly rolled tube of paper, usually manufactured industrially and including a filter, or hand-rolled with loose tobacco. Other popular smoking tools are various pipes and cigars. A less common but increasingly popular alternative to smoking is vaporizers, which use hot-air convection to deliver the substance without combustion, potentially reducing health risks. A portable vaporization alternative appeared in 2003 with the introduction of electronic cigarettes, battery-operated, cigarette-shaped devices which produce an aerosol intended to mimic the smoke from burning tobacco, delivering nicotine to the user without some of the harmful substances released in tobacco smoke. Other than actual smoking equipment, many other items are associated with smoking: cigarette cases, cigar boxes, lighters, matchboxes, cigarette holders, cigar holders, ashtrays, silent butlers, pipe cleaners, tobacco cutters, match stands, pipe tampers, cigarette companions and so on. Some examples of these have become valuable collector items, and particularly ornate and antique items can fetch high prices. Health effects Smoking is one of the leading preventable causes of death globally and is the cause of over 8 million deaths annually, 1.2 million of which are non-smokers who die due to second-hand smoke. In the United States, about 500,000 deaths per year are attributed to smoking-related diseases, and a recent study estimated that as much as one-third of China's male population will have significantly shortened lifespans due to smoking. Male and female smokers lose an average of 13.2 and 14.5 years of life, respectively. At least half of all lifelong smokers die earlier as a result of smoking. The risk of dying from lung cancer before age 85 is 22.1% for a male smoker and 11.9% for a female current smoker, in the absence of competing causes of death. The corresponding estimates for lifelong nonsmokers are a 1.1% probability of dying from lung cancer before age 85 for a man of European descent, and a 0.8% probability for a woman (a worked comparison of these figures follows below). Smoking just one cigarette a day results in a risk of coronary heart disease that is halfway between that of a heavy smoker and a non-smoker. The non-linear dose–response relationship may be explained by smoking's effect on platelet aggregation. Among the diseases that can be caused by smoking are vascular stenosis, lung cancer, heart attacks and chronic obstructive pulmonary disease (COPD). Smoking during pregnancy may cause ADHD in the child. Smoking is a risk factor strongly associated with periodontitis and tooth loss. The effects of smoking on periodontal tissues depend on the number of cigarettes smoked daily and the duration of the habit. A study showed that smokers had 2.7 times, and former smokers 2.3 times, greater odds of established periodontal disease than non-smokers, independent of age, sex and plaque index; however, the effect of tobacco on periodontal tissues seems to be more pronounced in men than in women. Studies have found that smokers had greater odds of more severe dental bone loss compared to non-smokers; also, people who smoke and drink alcohol heavily have a much higher risk of developing oral cancer (mouth and lip) compared with people who do neither. Smoking can also cause melanosis in the mouth. Smoking has also been associated with oral conditions including dental caries, dental implant failures, premalignant lesions, and cancer.
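As a rough illustration of the scale of the lifetime lung-cancer risks quoted above (a worked comparison derived from those figures, not a statistic from the cited studies), the probabilities imply risk ratios of roughly twenty for men and fifteen for women:

```latex
\mathrm{RR}_{\text{men}}
  = \frac{P(\text{lung-cancer death by 85} \mid \text{smoker})}
         {P(\text{lung-cancer death by 85} \mid \text{never-smoker})}
  = \frac{0.221}{0.011} \approx 20,
\qquad
\mathrm{RR}_{\text{women}} = \frac{0.119}{0.008} \approx 15.
```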
Smoking can affect immune-inflammatory processes, which may increase susceptibility to infections; it can alter the oral mycobiota and facilitate colonization of the oral cavity with fungi and pathogenic molds. Many governments are trying to deter people from smoking with anti-smoking campaigns in mass media stressing the harmful long-term effects of smoking. Passive smoking, or secondhand smoking, which affects people in the immediate vicinity of smokers, is a major reason for the enforcement of smoking bans. These are laws enforced to stop individuals from smoking in indoor public places, such as bars, pubs and restaurants, thus reducing nonsmokers' exposure to secondhand smoke. A common concern among legislators is to discourage smoking among minors, and many states have passed laws against selling tobacco products to underage customers (establishing a smoking age). Many developing countries have not adopted anti-smoking policies, leading some to call for anti-smoking campaigns and further education to explain the negative effects of ETS (Environmental Tobacco Smoke) in developing countries. Tobacco advertising is also sometimes regulated to make smoking less appealing. Despite the many bans, European countries still hold 18 of the top 20 spots, and according to the ERC, a market research company, the heaviest smokers are from Greece, averaging 3,000 cigarettes per person in 2007. Rates of smoking have leveled off or declined in the developed world but continue to rise in developing countries. Smoking rates in the United States have dropped by half from 1965 to 2006, falling from 42% to 20.8% in adults. The effects of addiction on society vary considerably between different substances that can be smoked and the indirect social problems that they cause, in great part because of the differences in legislation and the enforcement of narcotics legislation around the world. Though nicotine is a highly addictive drug, its effects on cognition are not as intense or noticeable as other drugs such as cocaine, amphetamines or any of the opiates (including heroin and morphine). Smoking is a risk factor in Alzheimer's disease. While smoking more than 15 cigarettes per day has been shown to worsen the symptoms of Crohn's disease, smoking has been shown to actually lower the prevalence of ulcerative colitis. Smokers are 30–40% more likely to develop type 2 diabetes than non-smokers, and the risk increases with the number of cigarettes smoked. Physiology Inhaling the vaporized gas form of substances into the lungs is a quick and very effective way of delivering drugs into the bloodstream (as the gas diffuses directly into the pulmonary vein, then into the heart and from there to the brain) and affects the user within less than a second of the first inhalation. The lungs consist of several million tiny air sacs called alveoli that altogether have an area of over 70 m2 (about the area of a tennis court). This can be used to administer useful medical as well as recreational drugs, either as aerosols, consisting of tiny droplets of a medication, or as gas produced by burning plant material containing a psychoactive substance or pure forms of the substance itself. Not all drugs can be smoked: salt forms, such as the sulphate derivative most commonly inhaled through the nose, do not vaporize well, though purer free-base forms of substances can be smoked, often requiring considerable skill to administer the drug properly. The method is also somewhat inefficient, since not all of the smoke will be inhaled.
The inhaled substances trigger chemical reactions in nerve endings in the brain due to being similar to naturally occurring substances such as endorphins and dopamine, which are associated with sensations of pleasure. The result is what is usually referred to as a "high" that ranges from the mild stimulation caused by nicotine to the intense euphoria caused by heroin, cocaine and methamphetamines. Inhaling smoke into the lungs, no matter the substance, has adverse effects on one's health. The incomplete combustion produced by burning plant material, like tobacco or cannabis, produces carbon monoxide, which impairs the ability of blood to carry oxygen when inhaled into the lungs. There are several other toxic compounds in tobacco that constitute serious health hazards to long-term smokers from a whole range of causes: vascular abnormalities such as stenosis, lung cancer, heart attacks, strokes, impotence, and low birth weight of infants born to smoking mothers. Some 8% of long-term smokers develop the characteristic set of facial changes known to doctors as smoker's face. Tobacco smoke is a complex mixture of over 5,000 identified chemicals, of which 98 are known to have specific toxicological properties. The most important chemicals causing cancer are those that produce DNA damage, since such damage appears to be the primary underlying cause of cancer. Cunningham et al. combined the microgram weight of each compound in the smoke of one cigarette with its known genotoxic effect per microgram to identify the most carcinogenic compounds in cigarette smoke. This analysis identified the seven most important carcinogens in tobacco smoke, along with the DNA alterations they cause. Psychology Most tobacco smokers begin during adolescence or early adulthood. Smoking has elements of risk-taking and rebellion, which often appeal to young people. The presence of high-status models and peers may also encourage smoking. Because teenagers are influenced more by their peers than by adults, attempts by parents, schools, and health professionals at preventing people from trying cigarettes are not always successful. Smokers often report that cigarettes help relieve feelings of stress. However, the stress levels of adult smokers are slightly higher than those of nonsmokers. Adolescent smokers report increasing levels of stress as they develop regular patterns of smoking, and smoking cessation leads to reduced stress. Far from acting as an aid for mood control, nicotine dependency seems to exacerbate stress. This is confirmed in the daily mood patterns described by smokers, with normal moods during smoking and worsening moods between cigarettes. Thus, the apparent relaxant effect of smoking only reflects the reversal of the tension and irritability that develop during nicotine depletion. Dependent smokers need nicotine to remain feeling normal. In the mid-20th century psychologists such as Hans Eysenck developed a personality profile for the typical smoker of that period; extraversion was associated with smoking, and smokers tended to be sociable, impulsive, risk-taking, and excitement-seeking individuals. Although personality and social factors may make people likely to smoke, the actual habit is a function of operant conditioning. During the early stages, smoking provides pleasurable sensations (because of its action on the dopamine system) and thus serves as a source of positive reinforcement.
After an individual has smoked for many years, the avoidance of withdrawal symptoms and negative reinforcement become the key motivations. As with all addictive substances, the amount of exposure required to become dependent on nicotine varies from person to person. In terms of the Big Five personality traits, research has found smoking to be correlated with lower levels of agreeableness and conscientiousness, as well as higher levels of extraversion and neuroticism. Prevention Education and counselling of children and adolescents by physicians have been found to be effective in decreasing the risk of tobacco use. Systematic reviews show that psychosocial interventions can help women stop smoking in late pregnancy, reducing low birthweight and preterm births. A 2016 Cochrane review showed that the combination of medication and behavioural support was more effective than minimal interventions or usual care. Another Cochrane review "suggests that neither reducing smoking to quit nor quitting abruptly results in superior quit rates; people could therefore be given a choice of how to quit, and support provided to people who would specifically like to reduce their smoking before quitting." Prevalence Smoking, primarily of tobacco, is an activity that is practiced by some 1.1 billion people, and up to one-third of the adult population. The image of the smoker can vary considerably, but is very often associated, especially in fiction, with individuality and aloofness. Even so, smoking of both tobacco and cannabis can be a social activity which serves as a reinforcement of social structures and is part of the cultural rituals of many and diverse social and ethnic groups. Many smokers begin smoking in social settings, and the offering and sharing of a cigarette is often an important rite of initiation or simply a good excuse to start a conversation with strangers in many settings: in bars, night clubs, at work or on the street. Lighting a cigarette is often seen as an effective way of avoiding the appearance of idleness or mere loitering. For adolescents, it can function as a first step out of childhood or as an act of rebellion against the adult world. Also, smoking can be seen as a sort of camaraderie. It has been shown that even opening a packet of cigarettes, or offering a cigarette to other people, can increase the level of dopamine (the "happy feeling") in the brain, and people who smoke tend to form relationships with fellow smokers, in a way that reinforces the habit, particularly in countries where smoking inside public places has been made illegal. Beyond recreational drug use, smoking can be used to construct identity and develop self-image by associating it with personal experiences connected with smoking. The rise of the modern anti-smoking movement in the late 19th century did more than create awareness of the hazards of smoking; it provoked reactions of smokers against what was, and often still is, perceived as an assault on personal freedom, and has created an identity among smokers as rebels or outcasts, apart from non-smokers. The importance of tobacco to soldiers was recognized early on as something that could not be ignored by commanders. By the 17th century allowances of tobacco were a standard part of the naval rations of many nations, and by World War I cigarette manufacturers and governments collaborated in securing tobacco and cigarette allowances for soldiers in the field.
It was asserted that regular use of tobacco while under duress would not only calm the soldiers but allow them to withstand greater hardship. Until the mid-20th century, the majority of the adult population in many Western nations were smokers, and the claims of anti-smoking activists were met with much skepticism, if not outright contempt. Today the movement has considerably more weight and evidence for its claims, but a considerable proportion of the population continues to smoke. Society and culture Smoking has been accepted into culture, in various art forms, and has developed many distinct, and often conflicting or mutually exclusive, meanings depending on time, place and the practitioners of smoking. Pipe smoking, until recently one of the most common forms of smoking, is today often associated with solemn contemplation and old age, and is often considered quaint and archaic. Cigarette smoking, which did not begin to become widespread until the late 19th century, has more associations of modernity and the faster pace of the industrialized world. Cigars have been, and still are, associated with masculinity and power, and are an iconic image associated with the stereotypical capitalist. In fact, some evidence suggests that men with higher than average testosterone levels are more likely to smoke. Smoking in public has for a long time been something reserved for men, and when done by women has been associated with promiscuity. In Japan during the Edo period, prostitutes and their clients would often approach one another under the guise of offering a smoke; the same was true for 19th-century Europe. Art The earliest depictions of smoking can be found on Classical Mayan pottery from around the 9th century. The art was primarily religious in nature and depicted deities or rulers smoking early forms of cigarettes. Soon after smoking was introduced outside of the Americas it began appearing in painting in Europe and Asia. The painters of the Dutch Golden Age were among the first to paint portraits of people smoking and still lifes of pipes and tobacco. For southern European painters of the 17th century, a pipe was much too modern to include in the preferred motifs inspired by mythology from Greek and Roman antiquity. At first smoking was considered lowly and was associated with peasants. Many early paintings were of scenes set in taverns or brothels. Later, as the Dutch Republic rose to considerable power and wealth, smoking became more common amongst the affluent, and portraits of elegant gentlemen tastefully raising a pipe appeared. Smoking represented pleasure, transience and the briefness of earthly life as it, quite literally, went up in smoke. Smoking was also associated with representations of both the sense of smell and that of taste. In the 18th century smoking became far less common in painting as the elegant practice of taking snuff became popular. Smoking a pipe was again relegated to portraits of lowly commoners and country folk, and the refined sniffing of shredded tobacco followed by sneezing was rare in art. When smoking appeared it was often in the exotic portraits influenced by Orientalism. Many proponents of postcolonialism controversially believe this portrayal was a means of projecting an image of European superiority over its colonies and a perception of the male dominance of a feminized Orient. Proponents believe the theme of the exotic and alien "Other" escalated in the 19th century, fueled by the rise in the popularity of ethnology during the Enlightenment.
In the 19th century smoking was common as a symbol of simple pleasures: the pipe-smoking "noble savage", solemn contemplation by Classical Roman ruins, scenes of an artist becoming one with nature while slowly puffing a pipe. The newly empowered middle class also found a new dimension of smoking as a harmless pleasure enjoyed in smoking saloons and libraries. Smoking a cigarette or a cigar would also become associated with the Bohemian, someone who shunned conservative middle-class values and displayed his contempt for conservatism. But this was a pleasure that was to be confined to a male world; women smokers were associated with prostitution, and smoking was not considered an activity fit for proper ladies. It was not until the start of the 20th century that smoking women would appear in paintings and photos, giving a chic and charming impression. Post-Impressionists like Vincent van Gogh, who was a pipe smoker himself, would also begin to associate smoking with gloom and fin-de-siècle fatalism. While the symbolism of the cigarette, pipe and cigar respectively was consolidated in the late 19th century, it was not until the 20th century that artists began to use it fully: a pipe would stand for thoughtfulness and calm; the cigarette symbolized modernity, strength and youth, but also nervous anxiety; the cigar was a sign of authority, wealth and power. During the decades following World War II, at the apex of smoking, when the practice had still not come under fire from the growing anti-smoking movement, a cigarette casually tucked between the lips represented the young rebel, epitomized in actors like Marlon Brando and James Dean or mainstays of advertising like the Marlboro Man. It was not until the 1970s that the negative aspects of smoking began to appear, yielding the image of the unhealthy lower-class individual, reeking of cigarette smoke and lacking motivation and drive, which was especially prominent in art inspired or commissioned by anti-smoking campaigns. In his painting "Holy Smokes", artist Brian Whelan pokes fun at the smoking debate and its newly found focus on morality and guilt. Film and TV Ever since the era of silent films, smoking has had a major part in film symbolism. In the hard-boiled film noir crime thrillers, cigarette smoke often frames characters and is frequently used to add an aura of mystique or nihilism. One of the forerunners of this symbolism can be seen in Fritz Lang's Weimar-era Dr Mabuse, der Spieler (Dr Mabuse, the Gambler, 1922), in which men mesmerized by card playing smoke cigarettes while gambling. Female smokers in film were also early on associated with a type of sensuous and seductive sexuality, most notably personified by German film star Marlene Dietrich. Similarly, actors like Humphrey Bogart and Audrey Hepburn have been closely identified with their smoker personas, and some of their most famous portraits and roles have involved them being haloed by a mist of cigarette smoke. Hepburn often enhanced the glamor with a cigarette holder, most notably in the film Breakfast at Tiffany's. Smoking could also be used as a means to subvert censorship, as two cigarettes burning unattended in an ashtray were often used to suggest sexual activity. Since World War II, smoking has gradually become less frequent on screen as the obvious health hazards of smoking have become more widely known.
With the anti-smoking movement gaining greater respect and influence, conscious attempts not to show smoking on screen are now undertaken in order to avoid encouraging smoking or giving it positive associations, particularly for family films. Smoking on screen is more common today among characters who are portrayed as anti-social or even criminal. According to a 2019 study, the introduction of television in the United States led to a substantial increase in smoking, in particular among 16–21-year-olds. The study suggested "that television increased the share of smokers in the population by 5–15 percentage points, generating roughly 11 million additional smokers between 1946 and 1970." Literature Just as in other types of fiction, smoking has had an important place in literature, and smokers are often portrayed as characters with great individuality, or outright eccentrics, something typically personified in one of the most iconic smoking literary figures of all, Sherlock Holmes. Other than being a frequent part of short stories and novels, smoking has spawned endless eulogies, praising its qualities and affirming the author's identity as a devoted smoker. Especially during the late 19th century and early 20th century, a panoply of books with titles like Tobacco: Its History and Associations (1876), Cigarettes in Fact and Fancy (1906) and Pipe and Pouch: The Smokers Own Book of Poetry (1905) were written in the UK and the US. The titles were written by men for other men and contained general tidbits and poetic musings about the love for tobacco and all things related to it, and frequently praised the refined bachelor's life. The Fragrant Weed: Some of the Good Things Which Have been Said or Sung about Tobacco, published in 1907, contained, among many others, lines from the poem A Bachelor's Views by Tom Hall that were typical of the attitude of many of these books. These works were all published in an era before the cigarette had become the dominant form of tobacco consumption, when pipes, cigars, and chewing tobacco were still commonplace. Many of the books were published in novel packaging that would attract the learned smoking gentleman. Pipe and Pouch came in a leather bag resembling a tobacco pouch, and Cigarettes in Fact and Fancy (1901) came bound in leather, packaged in an imitation cardboard cigar box. By the late 1920s, the publication of this type of literature largely abated and was only sporadically revived in the later 20th century. Music There have been few examples of tobacco in music in early modern times, though there are occasional signs of influence in pieces such as Johann Sebastian Bach's Enlightening Thoughts of a Tobacco-Smoker. However, from the early 20th century onwards smoking has been closely associated with popular music. Jazz was from early on closely intertwined with the smoking that was practiced in the venues where it was played, such as bars, dance halls, jazz clubs and even brothels. The rise of jazz coincided with the expansion of the modern tobacco industry, and in the United States also contributed to the spread of cannabis. The latter went under names like "tea", "muggles" and "reefer" in the jazz community and was so influential in the 1920s and 30s that it found its way into songs composed at the time such as Louis Armstrong's Muggles, Larry Adler's Smoking Reefers, and Don Redman's Chant of The Weed.
The popularity of marijuana among jazz musicians remained high until the 1940s and 50s, when it was partially replaced by the use of heroin. Another form of modern popular music that has been closely associated with cannabis smoking is reggae, a style of music that originated in Jamaica in the late 1950s and early 60s. Cannabis, or ganja, is believed to have been introduced to Jamaica in the mid-19th century by Indian immigrant labor and was primarily associated with Indian workers until it was appropriated by the Rastafari movement in the middle of the 20th century. The Rastafari considered cannabis smoking to be a way to come closer to God, or Jah, an association that was greatly popularized by reggae icons such as Bob Marley and Peter Tosh in the 1960s and 70s. Economics Estimates claim that smokers cost the U.S. economy $97.6 billion a year in lost productivity and that an additional $96.7 billion is spent on public and private health care combined. This is over 1% of the gross domestic product. A male smoker in the United States who smokes more than one pack a day can expect an average of $19,000 in additional medical expenses over the course of his lifetime. A U.S. female smoker who smokes more than a pack a day can expect an average of $25,800 in additional healthcare costs over her lifetime. See also Cigarette smoking for weight loss Electronic cigarette Outline of smoking Plain tobacco packaging Reverse smoking Schizophrenia and tobacco smoking Smoke social Varenicline Smoker's paradox Smoking in association football References Further reading Ashes to Ashes: The History of Smoking and Health (1998) edited by S. Lock, L.A. Reynolds and E.M. Tansey 2nd ed. Rodopi. Coe, Sophie D. (1994) America's First Cuisines Gately, Iain (2003) Tobacco: A Cultural History of How an Exotic Plant Seduced Civilization Goldberg, Ray (2005) Drugs Across the Spectrum. 5th ed. Thomson Brooks/Cole. Goodman, Jordan, ed. Tobacco in History and Culture: An Encyclopedia (2 vol, Gale Cengage, 2005) online Greaves, Lorraine (2002) High Culture: Reflections on Addiction and Modernity. Edited by Anna Alexander and Mark S. Roberts. State University of New York Press. Hirschfelder, Arlene B. Encyclopedia of Smoking and Tobacco (1999) online James I of England, A Counterblaste to Tobacco Lloyd, J., & Mitchinson, J. (2006) The Book of General Ignorance. Faber & Faber. Marihuana and Medicine (1999), editor: Gabriel Nahas Robicsek, Francis (1978) The Smoking Gods: Tobacco in Maya Art, History, and Religion Wilbert, Johannes (1993) Tobacco and Shamanism in South America Burns, Eric. The Smoke of the Gods: A Social History of Tobacco. Philadelphia: Temple University Press, 2007. Kulikoff, Allan. Tobacco & Slaves: The Development of Southern Cultures in the Chesapeake. North Carolina: University of North Carolina Press, 1986. External links BBC Headroom - Smoking advice Cigarette Smoking and Cancer – National Cancer Institute Smoking & Tobacco Use – Centers for Disease Control Smoking – Our World in Data Treating Tobacco Use and Dependence – U.S. Department of Health and Human Services How to stop smoking – National Health Service UK NY Times: Responses to the targeting of teenage smokers Study ties more deaths, types of disease, to smoking (Feb 2015), Marilynn Marchione, Associated Press Dosage forms Drug delivery devices Habits Drug culture
Smoking
Chemistry,Biology
9,646
22,253,822
https://en.wikipedia.org/wiki/Primal%20scene
In psychoanalysis, the primal scene is the theorized initial unconscious fantasy of a child of a sex act between the parents, which organises the psychosexual development of that child. The expression "primal scene" refers to the sight of sexual relations between the parents, as observed, constructed, or fantasized by the child and interpreted by the child as a scene of violence. The theory suggests that the scene is not understood by the child, remaining enigmatic but at the same time provoking sexual excitement. Freud's views Evolution The term appeared for the first time in Freud's published work apropos of the "Wolf Man" case (1918b [1914]), but the notion of a sexual memory experienced too early to have been translated into verbal images, and thus liable to return in the form of conversion symptoms or obsessions, was part of his thinking as early as 1896 [as witnessed in his letter of May 30 of that year to Wilhelm Fliess, where he evokes a "surplus of sexuality" that "impedes translation" (1950a, pp. 229–230)]. Here Freud is already close to the model of the trauma and its "deferred" effect. The following year, in his letter to Fliess of May 2, Freud uses the actual term Urszene for the first time, and gives the approximate age at which, in his estimation, children were liable to "hear things" that they would understand only "subsequently" as six or seven months (SE 1, p. 247). The subject of the child's witnessing parental coitus came up as well, albeit in an older child, with the case of "Katharina," in the Studies on Hysteria (1895d), and Freud evoked it yet again in The Interpretation of Dreams, with the fantasy of the young man who dreamed of watching his parents copulating during his life in the womb (1900a [addition of 1909], pp. 399–400). Fantasy or reality? Freud persistently strove to decide whether the primal scene was a fantasy or something actually witnessed; above all, he placed increasing emphasis on the child's own fantasy interpretation of the scene as violence visited upon the mother by the father. He went so far, in "On the Sexual Theories of Children" (1908c, p. 221), as to find a measure of justification for what he called the "sadistic concept of coitus", suggesting that, though the child may exaggerate, the perception of a real repugnance towards sexual intercourse on the part of a mother fearful of another pregnancy may be quite accurate. In the case of "Little Hans," however, the violence was explained in terms of a prohibition: Hans deemed it analogous to "smashing a window-pane or forcing a way into an enclosed space" (1909b, p. 41). The case history of the Wolf Man gave Freud the opportunity not only to pursue the issue of the reality of the primal scene, but also to propose the idea that it lay at the root of childhood (and later adult) neurosis: the sexual development of the child was "positively splintered up by it" (1918b [1914], pp. 43–44). In his Introductory Lectures, however, he argued for the universality of the fantasy of the primal scene (like the sexual theories of children): it may be encountered in all neurotics, if not in every human being (Freud, 1915f), and it belongs in the category of "primal" fantasies. It appears, however, not to have the same force for all individuals. Freud would later assign a central place to the primal scene in his analysis of Marie Bonaparte, although in her case the scene took place between her nanny and a groom (Bonaparte, 1950–53).
Looked upon as an actual event rather than as a pure fantasy reconstructed in a retrospective way (as with Carl Jung's zurückphantasieren), the primal scene had a much more marked traumatic impact, and this led Freud to insist on the "reality" of such scenes, thus returning to the debate over event-driven (or "historical") reality versus psychic reality. Beyond the issue of the scene itself, however, it was the whole subject of fantasy that was thus raised (in chapter five of the Wolf Man case-history [1918b, pp. 48–60]), discussed in terms that would be picked up by Freud again later in Constructions in Analysis (1937d). It was not merely, in Freud's view, that the technique of psychoanalysis demanded that fantasies be treated as realities so as to give their evocation all the force they needed, but also that many "real" scenes were not accessible by way of recollection, but solely by way of dreams. Whether a scene was constructed out of elements observed elsewhere and in a different context (for example, animal coitus transposed to the parents); reconstituted on the basis of clues (such as bloodstained sheets); or indeed observed directly, but at an age when the child still had not the corresponding verbal images at its disposal; did not fundamentally alter the basic facts of the matter: "I intend on this occasion," wrote Freud, "to close the discussion of the reality of the primal scene with a non liquet" (1918b, p. 60). Kleinian interpretations Melanie Klein's view of the primal scene differed from Freud's, for where Freud saw an enigmatic perception of violence, she saw the child's projective fantasies. Klein considered that a child's curiosity was first provoked by the primal scene, and that typically the child felt both excited and excluded by the primal scene. The sexual relationship between the parents, fantasized as continuous, is also the basis of the "combined-parent figure", mother and father seen as locked in mutual (but excluding) gratification. Where Klein laid emphasis on the way the infant projected hostile and destructive tendencies onto the primal scene, with the mother pictured therein as just as dangerous for the father as the father is for her, later Kleinians like John Steiner have stressed the creative aspect of the primal scene; and the necessity in analysis of overcoming a splitting of its image between a loving couple on the one hand, and a combined parent figure locked in hate. General characteristics The primal scene is inseparable from the sexual theories of childhood that it serves to create. This disturbing representation, which at once acknowledges and denies the familiar quality of the parents, excludes the child even as it concerns them, as witness the libidinal excitement the child feels in response. Otto Fenichel has stressed the traumatic nature of the excess excitement felt by the child, which they are unable to process — what he called the "overwhelming unknown". The particularity of the primal scene lies in the fact that the subject experiences in a simultaneous and contradictory way the emergence of the unknown within a familiar world, to which they are bound by vital needs, by expectations of pleasure, and by the self-image that it reflects back to them. The lack of common measure between the child's emotional and psychosexual experience and the words that could give an account of the primal scene creates a gulf that the sexual theories of childhood attempt to bridge. 
A sadistic reading of the scene combines the child's curiosity about both the origin and the end of life in a representation in which death and life are indeed fused. Ph.D. dissertations on the primal scene began to appear in the 1970s. M. F. Hoyt's dissertation, 'The primal scene: A study of fantasy and perception regarding parental sexuality', was submitted to Yale University. Based on a sample of approximately 400 college students, Hoyt found that approximately 20% of respondents reported actually witnessing (by sight and/or sound) their parents engaging in sexual relations. The conclusion of this study indicated that primal scene experience per se is not necessarily deleterious; the traumatic or pathogenic effects usually occur only within the context of general brutality or disturbed family relations. A segment of Paul Okami's doctoral dissertation at the University of California, Los Angeles was published in the Journal of Sex Research in 1995. Other uses of the phrase "primal scene" Intertextual readings Ned Lukacher has proposed using the term in literary criticism to refer to a kind of intertextuality in which the ability to interpret one text depends on the meaning of another text. It is "the interpretive impasse that arises when a reader has good reason to believe that the meaning of one text is historically dependent on the meaning of another text or on a previously unnoticed set of criteria, even though there is no conclusive evidential or archival means of establishing the case beyond a reasonable doubt." Cultural examples Maynard Solomon interprets four recorded dreams of Beethoven as all being centered on the primal scene, with the composer either as participant or as impotent bystander. See also References Further reading Bonaparte, Marie. (1950–53). Five copy-books. Translated by Nancy Procter-Gregg. London: Imago. Freud, Sigmund. (1900a). The interpretation of dreams. Part I, SE, 4: 1-338; Part II, SE, 5: 339-625. ——. (1908c). On the sexual theories of children. SE, 9: 205-226. ——. (1909b). Analysis of a phobia in a five-year-old boy. SE, 10: 1-149. ——. (1915f). A case of paranoia running counter to the psycho-analytic theory of the disease. SE, 14: 261-272. ——. (1918b [1914]). From the history of an infantile neurosis. SE, 17: 1-122. ——. (1937d). Constructions in analysis. SE, 23: 255-269. ——. (1950a [1887-1902]). Extracts from the Fliess papers. SE, 1: 173-280. Freud, Sigmund, and Breuer, Josef. (1895d). Studies on hysteria. SE, 2: 48-106. Hoyt, M.F. (1978). Primal scene experiences as recalled and reported by college students. Psychiatry, 41, 57-71. Hoyt, M.F. (1979). Primal scene experiences: Quantitative assessment of an interview study. Archives of Sexual Behavior, 8, 225-245. Klein, Melanie. (1961). Narrative of a child analysis. The conduct of the psychoanalysis of children as seen in the treatment of a ten-year-old boy. New York: Basic Books. Laplanche, Jean. (1989). New foundations for psychoanalysis (David Macey, Trans.). Oxford: Blackwell. Mijolla-Mellor, Sophie de. (1999). Les Mythes magicosexuelles sur l'origine et sur la fin. Topique, 68. Okami, Paul. (1995). Childhood exposure to parental nudity, parent-child co-sleeping, and 'primal scenes': a review of clinical opinion and empirical evidence. Journal of Sex Research, 32(1), 51-64. Paul Okami, Richard Olmstead, Paul R. 
Abramson and Laura Pendleton, "Early childhood exposure to parental nudity and scenes of parental sexuality ('primal scenes'): an 18-year longitudinal study of outcome," Archives of Sexual Behavior 27.4 (1998), 361–84. Arlow, Jacob A. (1980). The revenge motive in the primal scene. Journal of the American Psychoanalytic Association, 28, 519-542. Aron, Lewis. (1995). The internalized primal scene. Psychoanalytic Dialogues, 5, 195-238. Greenacre, Phyllis. (1973). The primal scene and the sense of reality. Psychoanalytic Quarterly, 42, 10-41. Human sexuality Psychoanalytic terminology Freudian psychology
Primal scene
Biology
2,504
2,070,675
https://en.wikipedia.org/wiki/Mixminion
Mixminion is the standard implementation of the Type III anonymous remailer protocol. Mixminion can send and receive anonymous e-mail. Mixminion uses a mix network architecture to provide strong anonymity, and prevent eavesdroppers and other attackers from linking senders and recipients. Volunteers run servers (called "mixes") that receive messages, decrypt them, re-order them, and re-transmit them toward their eventual destination. Every e-mail passes through several mixes so that no single mix can link message senders with recipients. To send an anonymous message, Mixminion breaks it into chunks (also called "packets"), pads the packets to a uniform size, and chooses a path through the mix network for each packet. The software encrypts every packet with the public keys of each server in its path, one by one. When it is time to transmit a packet, Mixminion sends it to the first mix in the path. The first mix decrypts the packet, learns which mix will receive the packet, and relays it. Eventually, the packet arrives at a final (or "exit") mix, which sends it to the chosen recipient. Because no mix sees any more of the path than the immediately adjacent mixes, the mixes cannot link senders to recipients. Mixminion supports Single-Use Reply Blocks (or SURBs) to allow anonymous recipients. A SURB encodes a half-path to a recipient, so that each mix in the sequence can unwrap a single layer of the path, and encrypt the message for the recipient. When the message reaches the recipient, the recipient can decode the message and learn which SURB was used to send it; the sender does not know which recipient has received the anonymous message. The most recent version of Mixminion Message Sender is 1.2.7, released on 11 February 2009. On 2 September 2011, a news announcement stated that the source code had been uploaded to GitHub. See also Anonymity Anonymous P2P Anonymous remailer Cypherpunk anonymous remailer (Type I) Mixmaster anonymous remailer (Type II) Onion routing Tor (anonymity network) Pseudonymous remailer (a.k.a. nym servers) Penet remailer Data privacy Traffic analysis References External links Windows GUI Frontend for Mixminion Apple OSX, Macport File for Mixminion Network stats Noreply number of mixminion nodes Internet Protocol based network software Anonymity networks Routing Network architecture Mix networks
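The layered ("onion") encryption described above can be sketched in a few lines of Python. This is only a toy illustration of the principle, assuming the PyNaCl library for public-key encryption; real Mixminion packets use the Type III format, with fixed-size packets, per-hop routing headers, padding, and replay protection, none of which is modeled here.

```python
# Toy sketch of mix-network layered encryption (assumes PyNaCl is installed).
# Real Mixminion uses the Type III packet format; this shows only the layering idea.
from nacl.public import PrivateKey, SealedBox

mixes = [PrivateKey.generate() for _ in range(3)]  # keypairs of three volunteer mixes

def build_onion(message: bytes, mixes) -> bytes:
    """Encrypt in layers so that each mix can strip exactly one layer."""
    packet = message
    for mix_key in reversed(mixes):          # innermost layer is for the exit mix
        packet = SealedBox(mix_key.public_key).encrypt(packet)
    return packet

def relay(packet: bytes, mixes) -> bytes:
    """Each mix decrypts only its own layer before forwarding."""
    for mix_key in mixes:
        packet = SealedBox(mix_key).decrypt(packet)
    return packet

onion = build_onion(b"hello, anonymous world", mixes)
assert relay(onion, mixes) == b"hello, anonymous world"
```

Because each mix can remove only the layer addressed to it, no single mix sees both the sender and the final recipient, which is the property the prose above describes.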
Mixminion
Engineering
535
2,335,283
https://en.wikipedia.org/wiki/Beilstein%20Journal%20of%20Organic%20Chemistry
The Beilstein Journal of Organic Chemistry is a peer-reviewed diamond open-access scientific journal established in 2005. The journal is published and completely funded by the Beilstein Institute for the Advancement of Chemical Sciences, a German non-profit foundation. The editor-in-chief is Peter Seeberger (Max Planck Institute of Colloids and Interfaces). It is a member of the Free Journal Network. Scientific videos are available for selected articles of the journal. Abstracting and indexing The journal is abstracted and indexed in Science Citation Index Expanded, Current Contents, Scopus, Chemical Abstracts Service, and the Directory of Open Access Journals. According to the Journal Citation Reports, the journal has a 2023 impact factor of 2.2. References External links Organic chemistry journals Open access journals Creative Commons Attribution-licensed journals Academic journals established in 2005 English-language journals BioMed Central academic journals
Beilstein Journal of Organic Chemistry
Chemistry
180
59,010,568
https://en.wikipedia.org/wiki/Helen%20Asemota
Helen Nosakhare Asemota is a biochemist and agricultural biotechnologist based in Jamaica. She is Professor of Biochemistry and Molecular Biology and Director of the Biotechnology Centre at the University of the West Indies at Mona, Jamaica. Her research develops biotechnology strategies for the production and improvement of tropical tuber crops. She is notable for leading large international biotechnology collaborations, as well as for acting as an international biotechnology consultant for the United Nations (UN). Early life and education Asemota was born in Nigeria. She earned a Bachelor of Science from the University of Benin, a Master of Science from Ahmadu Bello University, and a Doctor of Philosophy from the University of Benin/Frankfurt University. Career In 1990, Asemota moved to Jamaica to take up a position as Associate Honorary Lecturer at the University of the West Indies. She was appointed Lecturer in 1996, and promoted to Senior Lecturer in Biochemistry and Biotechnology in 1998. In 2003, Asemota was promoted to Professor of Biochemistry and Molecular Biology. She was a full professor at Shaw University, North Carolina, from 2005 to 2012. During this time she was Head of the Nanobiology Division of the Shaw Nanotechnology Initiative at the Nanoscience and Nanotechnology Research Centre (NNRC) from 2005 to 2009, Biological Sciences Program Coordinator in the Natural Sciences from 2009 to 2010, Chair of the Shaw University Institutional Review Board (IRB) from 2006 to 2009, a Senator of the Shaw Faculty Senate between 2007 and 2012, and Core Director of Faculty Research Development at the NIH Research Infrastructure for Minority Institutions as well as IRB Administrator between 2010 and 2012. In 2013, Asemota was appointed Director of the Biotechnology Centre, a research unit at the University of the West Indies with a focus on biotechnology-based enterprises. At the time of her promotion to Professor in 2003, Asemota was a member of the Caribbean Biotechnology Network, the Biochemical Society of Nigeria, the Third World Organisation for Women in Science, and the Nigerian Association of Women in Science, Technology & Mathematics. She was a Fellow of the American Biographical Institute, a member of the National Geographic Society, the Nigerian Institute of Food Science and Technology, and the New York Academy of Science. Research Asemota conducted her Ph.D. research at the University of Benin and Frankfurt University, where she studied the molecular genetics and metabolism of the browning of yam tubers in storage. Upon moving to Jamaica, prompted by ongoing problems with production and storage in the Jamaican yam industry, Asemota continued researching yams, founding the multidisciplinary UWI Yam Biotechnology Project. Initially, Asemota investigated the biochemical effects of removing yam heads at harvest, a common farming practice in Jamaica. Over the ensuing decades, Asemota's research team has investigated many aspects of yam biochemistry and physiology, from DNA fingerprinting studies of Jamaican yam varieties to the carbohydrate metabolism of yam tubers in storage. In addition to her work on yam production and storage, Asemota has studied the metabolic effects of yams and yam-derived products on animal models of diseases such as diabetes. More recently, the Yam Biotechnology Project has moved towards a 'farm to finished products' strategy, with the goal of producing yam-based food, medical, and biofuel products to benefit the Jamaican economy. 
She has also applied similar research techniques to other types of tropical crops. Asemota has served as Principal Investigator on National Institutes of Health (NIH) and National Science Foundation (NSF) grants. She has lectured at undergraduate, postgraduate and postdoctoral levels worldwide, and has supervised or advised at least 30 postgraduate students in Biochemistry or Biotechnology. She has over 250 publications, and holds four patents arising from her research. Outreach activities Asemota has undertaken outreach research with Jamaican farmers, experimenting with lab-derived yam planting materials in their fields, and reviving 'threatened' Jamaican yam varieties. International consultancy Asemota has a long history of international consultancy in matters of food security and biotechnology. She was an international technical expert for the European Union (1994–1995), and served the United Nations Technical Cooperation among Developing Countries (TCDC) programmes as an expert on international Technical Cooperation Programmes (TCP). She served as an international biotechnology consultant to the United Nations Food and Agriculture Organisation from 2001. This included technical-cooperation consulting for Syria under the TCDC programmes in 2001, and serving as technical lead on food sufficiency for the National Seed Potato Production Programme in the Republic of Tajikistan between 2003 and 2007. She periodically serves the UN-FAO Seed Production Programmes as an International Consultant. See also M. H. Ahmad References Living people Nigerian women scientists Jamaican women scientists Biochemists Biotechnologists University of Benin (Nigeria) alumni Ahmadu Bello University alumni Shaw University faculty Academic staff of the University of the West Indies Year of birth missing (living people) Women biotechnologists Women biochemists Nigerian biochemists Nigerian biologists Nigerian women biologists Jamaican biologists
Helen Asemota
Chemistry,Biology
1,018
3,399,657
https://en.wikipedia.org/wiki/Gpl-violations.org
gpl-violations.org is a not-for-profit project founded and led by Harald Welte in 2004. It works to make sure software licensed under the GNU General Public License (GPL) is not used in ways prohibited by the license. Goals The goals of the project are, according to its website, to: Raise public awareness of the infringing use of free software, and thus put pressure on the infringers; Give users who detect or suspect that GPL-licensed software is being misused a way to report violations to the copyright holders; Assist copyright holders in any action against GPL-infringing organizations; and Distribute information on how a commercial entity using GPL-licensed software in its products can comply with the license. In May 2008, gpl-violations.org and the Free Software Foundation Europe Freedom Task Force announced that they were to deepen their previous cooperation. The task force would focus on educating and informing, while gpl-violations.org would focus on enforcing the GPL. History The gpl-violations.org project was founded in 2004 by Harald Welte. Welte was a kernel developer who had been actively enforcing the GPL license on his netfilter/iptables code since late 2003. Since then, other developers have given gpl-violations.org the legal right to represent them. While the Software Freedom Conservancy's GPL Compliance Project for Linux Developers operates from the United States, gpl-violations.org operates from Germany, Welte's home country. The project has been credited with being the first to prove in court that the GPL is valid and enforceable. Project creator Harald Welte received the 2007 FSF Award for the Advancement of Free Software, partly because of his work on gpl-violations.org. From January until October 2015, the website was offline and no longer resolved. Its activities resumed by November 2015, and the project planned to continue them in 2016. Notable victories Fortinet In 2005, the gpl-violations.org project uncovered evidence that Fortinet had used GPL code in its products against the terms of the license, and used cryptographic tools to conceal the violation. The violation was alleged to have occurred in the FortiOS system, which the gpl-violations.org project said contained elements of the Linux kernel. In response, a Munich court granted a temporary injunction against the company, preventing it from selling products until they were in compliance with the necessary license terms; Fortinet was forced to make its FortiOS code freely available in compliance with GPL licensing. D-Link On September 6, 2006, the gpl-violations.org project prevailed in court litigation against D-Link Germany GmbH regarding D-Link's alleged inappropriate and copyright-infringing use of parts of the Linux kernel. The judgement provided on-record legal precedent that the GPL is valid and will stand up in German courts. See also Commercial use of free software References External links GNU Project Free software websites Free and open-source software organizations Intellectual property activism Computer law Copyright infringement of software Organizations established in 2004 2004 establishments in Germany
Gpl-violations.org
Technology
660
25,287,224
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20September%2012%2C%202034
An annular solar eclipse will occur at the Moon's ascending node of orbit on Tuesday, September 12, 2034, with a magnitude of 0.9736. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. An annular solar eclipse occurs when the Moon's apparent diameter is smaller than the Sun's, blocking most of the Sun's light and causing the Sun to look like an annulus (ring). An annular eclipse appears as a partial eclipse over a region of the Earth thousands of kilometres wide. Occurring about 5.7 days before apogee (on September 18, 2034, at 8:05 UTC), the Moon's apparent diameter will be smaller than average. The eclipse will commence over the southern Pacific Ocean and then enter South America. Countries under the path include northern Chile, southern Bolivia, northern Argentina, southern Paraguay, and southern Brazil. The eclipse will then enter the Atlantic Ocean, and terminate southeast of South America. A partial eclipse will be visible for parts of Central America, the Caribbean, South America, and Antarctica. Images Animated path Eclipse details Shown below are two tables displaying details about this particular solar eclipse. The first table outlines the times at which the Moon's penumbra or umbra attains specific parameters, and the second table describes various other parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year; each season lasts about 35 days and repeats just short of six months (173 days) later, so two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2034 A total solar eclipse on March 20. A penumbral lunar eclipse on April 3. An annular solar eclipse on September 12. A partial lunar eclipse on September 28. Metonic Preceded by: Solar eclipse of November 25, 2030 Followed by: Solar eclipse of July 2, 2038 Tzolkinex Preceded by: Solar eclipse of August 2, 2027 Followed by: Solar eclipse of October 25, 2041 Half-Saros Preceded by: Lunar eclipse of September 7, 2025 Followed by: Lunar eclipse of September 19, 2043 Tritos Preceded by: Solar eclipse of October 14, 2023 Followed by: Solar eclipse of August 12, 2045 Solar Saros 135 Preceded by: Solar eclipse of September 1, 2016 Followed by: Solar eclipse of September 22, 2052 Inex Preceded by: Solar eclipse of October 3, 2005 Followed by: Solar eclipse of August 24, 2063 Triad Preceded by: Solar eclipse of November 12, 1947 Followed by: Solar eclipse of July 14, 2121 Solar eclipses of 2033–2036 Saros 135 Metonic series Tritos series Inex series References External links NASA graphics 2034 in science
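The cycle memberships listed under "Related eclipses" can be checked by simple date arithmetic, since the mean cycle lengths are well-known constants (one saros is about 6585.32 days; one inex is about 10,571.95 days). A minimal sketch in Python, using only the dates quoted above:

```python
# Interval checks for two of the eclipse cycles listed above.
from datetime import date

eclipse = date(2034, 9, 12)
saros_prev = date(2016, 9, 1)   # preceding member of Solar Saros 135
inex_prev = date(2005, 10, 3)   # preceding member of the Inex series

print((eclipse - saros_prev).days)  # 6585  -> one saros (~6585.32 days)
print((eclipse - inex_prev).days)   # 10571 -> one inex  (~10571.95 days)
```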
Solar eclipse of September 12, 2034
Astronomy
642
2,718,504
https://en.wikipedia.org/wiki/Richard%20Swann%20Lull
Richard Swann Lull (November 6, 1867 – April 22, 1957) was an American paleontologist and Sterling Professor at Yale University who is largely remembered for championing a non-Darwinian view of evolution, whereby mutations could unlock presumed "genetic drives" that, over time, would lead populations to increasingly extreme phenotypes (and perhaps, ultimately, to extinction). Life Lull was born in Annapolis, Maryland, the son of naval officer Edward Phelps Lull and Elizabeth Burton, daughter of General Henry Burton. He married Clara Coles Boggs, with whom he had a daughter, Dorothy. He majored in zoology at Rutgers College, where he received both his undergraduate and master's degrees (M.S. 1896). He worked for the Division of Entomology of the United States Department of Agriculture but in 1894 became an assistant professor of zoology at the State Agricultural College in Amherst, Massachusetts (now the University of Massachusetts Amherst). Lull's interest in fossil footprints began at Amherst College, renowned for its collection of fossil footprints, and eventually led him to switch from entomology to paleontology. In 1899, Lull worked as a member of the American Museum of Natural History's expedition to Bone Cabin Quarry, Wyoming, helping to collect that museum's brontosaur skeleton. In 1902 he again joined an American Museum team in Montana, then studied under Columbia University professor Henry Fairfield Osborn. In 1903 he received his Ph.D. from Columbia University, and in 1906, after a brief time at Amherst, he was named assistant professor of Vertebrate Paleontology in Yale College and Associate Curator of Vertebrate Paleontology at the Peabody Museum of Natural History. He stayed at Yale for the next 50 years. In 1933, Lull was awarded the Daniel Giraud Elliot Medal from the National Academy of Sciences. One famous example he used to support his non-Darwinian evolution theory concerned the enormous antlers of the Irish elk: he argued that these could not possibly be the result of natural selection, and instead reflected one of his "unlocked genetic drives" toward ever-increasing antler size. The elk, coping in each generation with ever-bigger antlers, were on this view eventually driven to extinction. His evolutionary theory was a form of orthogenesis. His book Organic Evolution (1917) received positive reviews and was described as an "excellent summary of the theories, facts, and factors of evolution." Publications Fossils: What They Tell Us of Plants and Animals of the Past (1931) A Revision of the Ceratopsia or Horned Dinosaurs (1933) The Ways of Life (1925) Organic Evolution (1917) Fossil Footprints of the Jura-Trias of North America (1904) References Yale History and Archives: Richard Swann Lull External links 1867 births 1957 deaths American paleontologists Non-Darwinian evolution People from Annapolis, Maryland University of Massachusetts Amherst faculty Columbia University alumni Yale University alumni Yale Sterling Professors
Richard Swann Lull
Biology
605
28,953,957
https://en.wikipedia.org/wiki/Special%20group%20%28finite%20group%20theory%29
In group theory, a discipline within abstract algebra, a special group is a finite group of prime power order that is either elementary abelian itself, or of class 2 with its derived group, its center, and its Frattini subgroup all equal and elementary abelian. A special group of order p^n that has class 2 and whose derived group has order p is called an extraspecial group. References Finite groups P-groups
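As a concrete illustration (an example supplied here, not stated in the text above): for an odd prime p, the Heisenberg group of unitriangular 3 × 3 matrices over the field F_p is extraspecial of order p^3, since its center, derived subgroup, and Frattini subgroup coincide in an elementary abelian subgroup of order p:

```latex
% Heisenberg group over F_p: an extraspecial group of order p^3.
H_p = \left\{ \begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} : a, b, c \in \mathbb{F}_p \right\},
\qquad
Z(H_p) = [H_p, H_p] = \Phi(H_p)
       = \left\{ \begin{pmatrix} 1 & 0 & c \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} : c \in \mathbb{F}_p \right\} \cong \mathbb{F}_p .
```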
Special group (finite group theory)
Mathematics
86
12,599,909
https://en.wikipedia.org/wiki/Oloid
An oloid is a three-dimensional curved geometric object that was discovered by Paul Schatz in 1929. It is the convex hull of a skeletal frame made by placing two linked congruent circles in perpendicular planes, so that the center of each circle lies on the edge of the other circle. The distance between the circle centers equals the common radius r of the circles. One third of each circle's perimeter lies inside the convex hull, so the same shape may also be formed as the convex hull of the two remaining circular arcs, each spanning an angle of 4π/3. Surface area and volume The surface area of an oloid is given by A = 4πr², exactly the same as the surface area of a sphere with the same radius. In closed form, the enclosed volume is V = (2/3)(2E(√3/2) + K(√3/2))r³, where K and E denote the complete elliptic integrals of the first and second kind respectively. A numerical calculation gives V ≈ 3.0524r³. Kinetics The surface of the oloid is a developable surface, meaning that patches of the surface can be flattened into a plane. While rolling, it develops its entire surface: every point of the surface of the oloid touches the plane on which it is rolling, at some point during the rolling movement. Unlike most axially symmetric objects (cylinder, sphere etc.), while rolling on a flat surface, its center of mass performs a meander motion rather than a linear one. In each rolling cycle, the distance between the oloid's center of mass and the rolling surface has two minima and two maxima. The difference between the maximum and the minimum height is a small fixed fraction of r, the radius of the oloid's circular arcs. Since this difference is fairly small, the oloid's rolling motion is relatively smooth. At each point during this rolling motion, the oloid touches the plane in a line segment. The length of this segment stays unchanged throughout the motion, and is given by L = √3 r. Related shapes The sphericon is the convex hull of two semicircles on perpendicular planes, with centers at a single point. Its surface consists of the pieces of four cones. It resembles the oloid in shape and, like it, is a developable surface that can be developed by rolling. However, its equator is a square with four sharp corners, unlike the oloid which does not have sharp corners. Another object called the two circle roller is defined from two perpendicular circles for which the distance between their centers is √2 times their radius, farther apart than in the oloid. It can either be formed (like the oloid) as the convex hull of the circles, or by using only the two disks bounded by the two circles. Unlike the oloid its center of gravity stays at a constant distance from the floor, so it rolls more smoothly than the oloid. In popular culture In 1979, modern dancer Alan Boeding designed his "Circle Walker" sculpture from two crosswise semicircles, forming a skeletal version of the sphericon, a shape with a similar rolling motion to the oloid. He began dancing with a scaled-up version of the sculpture in 1980 as part of an MFA program in sculpture at Indiana University, and after he joined the MOMIX dance company in 1984 the piece became incorporated into the company's performances. The company's later piece "Dream Catcher" is based around another Boeding sculpture whose linked teardrop shapes incorporate the skeleton and rolling motion of the oloid. References Literature Tobias Langscheid, Tilo Richter (Ed.): Oloid – Form of the Future. 
With contributions by Dirk Böttcher, Andreas Chiquet, Heinrich Frontzek and others, niggli Verlag 2023, ISBN 978-3-7212-1025-5 External links Rolling oloid, filmed at Swiss Science Center Technorama, Winterthur, Switzerland. Paper model oloid Make your own oloid Oloid mesh Polygon mesh of the oloid, and code to generate it. Geometric shapes Articles containing video clips
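The closed-form volume given above is easy to check numerically; a minimal sketch, assuming SciPy is available (note that SciPy's ellipk and ellipe take the parameter m = k², so the modulus k = √3/2 used above corresponds to m = 3/4):

```python
# Numerical check of the oloid volume coefficient V / r^3.
from scipy.special import ellipk, ellipe  # complete elliptic integrals K(m), E(m)

m = 3.0 / 4.0                             # m = k^2 for modulus k = sqrt(3)/2
coeff = (2.0 / 3.0) * (2.0 * ellipe(m) + ellipk(m))
print(coeff)                              # ~3.0524, matching V = 3.0524 r^3

# The surface area needs no special functions: A = 4*pi*r^2, as for a sphere.
```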
Oloid
Mathematics
802
661,847
https://en.wikipedia.org/wiki/Frisket
A frisket is any material that protects areas of a work from unintended change. Letterpress On a sheet-fed letterpress printing machine, a frisket is a sheet of oiled paper that covers the space between the type or cuts (illustrations) and the edge of the paper that is to be printed. When the press operator uses a brayer to coat the surface of the type with ink, the ink brayer will often coat the furniture and slugs (wood and metal spacers) between the columns and around the type. To keep this ink from touching the target sheet, the frisket covers the area that is not desired to print. The frisket is set in a frame, often hinged to the tympan that holds the paper in place. A new frisket has to be cut for each different page or form; a well-made frisket lasts for hundreds or thousands of impressions. Airbrushing In airbrushing, a frisket is a plastic sheet with an adhesive backing used to mask off specific areas of an image so that only the exposed area is covered with paint. The frisket is vital to airbrushing, because it allows the artist to control excess paint spray, create special effects, achieve extreme precision, control edge attributes and expedite the airbrushing process. A frisket differs from other masks in that it is a single sheet that covers the entire work, parts of which are removed by cutting into the sheet. Friskets are available in matte and glossy finishes. Some friskets are also solvent-proof, manufactured for use with solvent-reduced and solvent-based urethanes. The frisket is fixed to the painting surface and then the appropriate shapes are cut out of the material using a razor or scalpel. The cut piece is lifted, the exposed area painted, and the process repeated using the cut pieces to mask their matching finished areas. When all painting is finished, the resulting work contains precise shapes with no overspray. Watercolour In watercolor painting, frisket, also sometimes called masking fluid, is a removable latex- and ammonia-based liquid, often available in various colours to make its presence more obvious, which is applied to the surface in order to mask off the areas that are not to be coloured by a given application of paint. Frisket is usually used when the unmasked areas are to receive the same colour and a rapid wash is being applied, or for negative painting effects. Watercolouring frisket is applied using a brush, allowed to dry, and then the watercolour paints are applied and also allowed to dry. Once the paper is completely dry, the frisket can be easily removed by gentle rubbing with a natural crepe rubber pickup, the same as those used for removal of rubber cement. The paper must be completely dry before removing the frisket, as the friction can otherwise damage the paper if it is still damp. A subsequent application of frisket can be applied to mask other areas—usually those with the intention of applying a different colour or to darken some areas whilst not affecting others—and removed with the natural rubber pickup. This process can be repeated several times without damaging artist-grade watercolour papers, so long as the paper is completely dry after each application of watercolours. Some lesser grades of paper, often used for practice and academic purposes, may be more prone to damage after repeated masking and painting cycles, however. 
Although watercolour frisket can be removed by rubbing with the fingers, doing so has the disadvantage of potentially transferring skin oils which can discolour the artwork, or otherwise affect subsequent applications of watercolours or other media such as chalk or ink. References Printing terminology Printing materials Visual arts materials
Frisket
Physics
783
54,730,939
https://en.wikipedia.org/wiki/Notch%20%28engineering%29
In mechanical engineering and materials science, a notch refers to a V-shaped, U-shaped, or semi-circular defect deliberately introduced into a planar material. In structural components, a notch causes a stress concentration which can result in the initiation and growth of fatigue cracks. Notches are used in materials characterization to determine fracture mechanics related properties such as fracture toughness and rates of fatigue crack growth. Notches are commonly used in material impact tests, where a morphological crack of a controlled origin is necessary to achieve standardized characterization of the fracture resistance of the material. The most common is the Charpy impact test, which uses a pendulum hammer (striker) to strike a horizontal notched specimen. The height of its subsequent swing-through is used to determine the energy absorbed during fracture. The Izod impact strength test uses a circular notched vertical specimen in a cantilever configuration. Charpy testing is conducted with U- or V-notches, whereby the striker contacts the specimen directly behind the notch, whereas the now largely obsolete Izod method involves a semi-circular notch facing the striker. Notched specimens are used in other characterization protocols, such as tensile and fatigue tests. Types of notches The type of notch introduced to a specimen depends on the material and characterization employed. For standardized testing of fracture toughness by the Charpy impact method, specimen and notch dimensions are most often taken from the American standard ASTM E23 or the British standard BS EN ISO 148-1:2009. For all notch types, a key parameter governing stress concentration and failure in notched materials is the notch tip curvature or radius. Sharp-tipped V-shaped notches are often used in standard fracture toughness testing for ductile materials and polymers, and for the characterization of weld strength. The application of such notches to hard steels is problematic due to sensitivity to grain alignment, which is why torsional testing may be applied for such materials instead. A U-notch is an elongated notch having a round notch tip, being deeper than it is wide. This notch is also often referred to as a C-notch, and is the most widely used form of introduced notch, owing to the repeatability of results obtained from notched specimens. Correlating U-notch performance to a V-notch equivalent is challenging and is carried out on a case-by-case basis; there is no standardized correlation between performance values obtained with the two notch types. A keyhole notch is typically considered as a slit ending in a hole of a given radius. This type of notch is most often considered in numerical models. Fracture toughness results obtained from keyhole notch testing are often higher than those obtained from V-notched or pre-cracked specimens. See also Charpy impact test References Fracture mechanics
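The importance of the notch-tip radius can be made quantitative with the classical Inglis estimate for a shallow elliptical notch, a standard textbook result rather than one stated in the article above: for notch depth a, tip radius ρ, and remote tensile stress σ,

```latex
% Inglis estimate for the elastic stress concentration at a notch tip:
K_t \;=\; \frac{\sigma_{\max}}{\sigma} \;=\; 1 + 2\sqrt{\frac{a}{\rho}}
```

So, for example, a notch 1 mm deep with a 0.01 mm tip radius concentrates stress by a factor of about 21, which is why sharp V-notches initiate cracks far more reproducibly than blunt U- or keyhole notches.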
Notch (engineering)
Materials_science,Engineering
554
59,231,239
https://en.wikipedia.org/wiki/Sequential%20bargaining
Sequential bargaining (also known as alternate-moves bargaining, alternating-offers protocol, etc.) is a structured form of bargaining between two participants, in which the participants take turns in making offers. Initially, person #1 has the right to make an offer to person #2. If person #2 accepts the offer, then an agreement is reached and the process ends. If person #2 rejects the offer, then the participants switch turns, and now it is the turn of person #2 to make an offer (which is often called a counter-offer). The people keep switching turns until either an agreement is reached, or the process ends with a disagreement due to a certain end condition. Several end conditions are common, for example: There is a pre-specified limit on the number of turns; after that many turns, the process ends. There is a pre-specified limit on the negotiation time; when time runs out, the process ends. The number of possible offers is finite, and the protocol rules disallow offering the same agreement twice. Hence, if the number of possible offers is finite, at some point all of them are exhausted, and the negotiation ends without an agreement. Several settings of sequential bargaining have been studied. Dividing the Dollar: two people should decide how to split a given amount of money between them. If they do not reach an agreement, they get nothing. This setting can represent a buyer and a seller bargaining on the price of an item, where the valuations of both players are known. In this case, the amount of money is the difference between the buyer's value and the seller's value. Buyer and Seller: a buyer and the seller bargain over the price of an item, and their valuations of the item are not known. A general outcome set: there is an arbitrary finite set of possible outcomes, each of which yields a different payment to each of the two players. This setting can represent, for example, two parties who have to choose an agreed arbitrator from a given set of candidates. Game-theoretic analysis An alternating-offers protocol induces a sequential game. A natural question is what outcomes can be attained in an equilibrium of this game. At first glance, the first player has the power to make a very selfish offer. For example, in the Dividing the Dollar game, player #1 can offer to give only 1% of the money to player #2, and threaten that "if you do not accept, I will refuse all offers from now on, and both of us will get 0". But this is a non-credible threat, since if player #2 refuses and makes a counter-offer (e.g. give 2% of the money to player #1), then it is better for player #1 to accept. Therefore, a natural question is: what outcomes are a subgame perfect equilibrium (SPE) of this game? This question has been studied in various settings. Dividing the dollar Ariel Rubinstein studied a setting in which the negotiation is on how to divide $1 between the two players. Each player in turn can offer any partition. The players bear a cost for each round of negotiation. The cost can be presented in two ways: Additive cost: the cost of each player i is cᵢ per round. Then, if c₁ < c₂, the only SPE gives the entire $1 to player 1; if c₁ > c₂, the only SPE gives $c₂ to player 1 and $(1 − c₂) to player 2. Multiplicative cost: each player has a discount factor dᵢ. Then, the only SPE gives $(1 − d₂)/(1 − d₁d₂) to player 1. Rubinstein and Wolinsky studied a market in which there are many players, partitioned into two types (e.g. "buyers" and "sellers"). 
Pairs of players of different types are brought together randomly, and initiate a sequential-bargaining process over the division of a surplus (as in the Divide the Dollar game). If they reach an agreement, they leave the market; otherwise, they remain in the market and wait for the next match. The steady-state equilibrium in this market is quite different from the competitive equilibrium in standard markets (e.g. Fisher market or Arrow–Debreu market). Buyer and seller Fudenberg and Tirole study sequential bargaining between a buyer and a seller who have incomplete information, i.e., they do not know the valuation of their partner. They focus on a two-turn game (i.e., the seller has exactly two opportunities to sell the item to the buyer). Both players prefer a trade today to the same trade tomorrow. They analyze the perfect Bayesian equilibrium (PBE) in this game: if the seller's valuation is known, then the PBE is generically unique; but if both valuations are private, then there are multiple PBEs. Some surprising findings, which follow from the information transfer and the lack of commitment, are: The buyer may do better when he is more impatient; Increasing the size of the "contract zone" may decrease the probability of agreement; Prices can increase over time; Increasing the number of periods can decrease efficiency. Grossman and Perry study sequential bargaining between a buyer and a seller over an item's price, where the buyer knows the gains from trade but the seller does not. They consider an infinite-turn game with time discounting. They show that, under some weak assumptions, there exists a unique perfect sequential equilibrium, in which: Players communicate their private information by revealing their willingness to delay the agreement; The least patient buyers (that is, those whose gain from trade is larger) accept the seller's offer immediately; The intermediately patient respond with an acceptable counter-offer; the most patient respond with a counter-offer that they know the seller will not accept (and thus reveal the fact that they are patient). The seller cannot credibly threaten to reject an offer above the discounted value of the game in which all buyers are intermediately patient. If the seller gets an unacceptable offer, he updates his beliefs and the process repeats. This can go on for many rounds. General outcome set Nejat Anbarci studied a setting with a finite number of outcomes, where each of the two agents may have a different preference order over the outcomes. The protocol rules disallow repeating the same offer twice. In any such game, there is a unique SPE. It is always Pareto optimal, and it is always one of the Pareto-optimal options whose rankings by the two players are closest. It can be found by finding the smallest integer k for which the sets of the k best options of the two players have a non-empty intersection (see the sketch below). For example, if the rankings are a>b>c>d and c>b>a>d, then the unique SPE is b (with k=2). If the rankings are a>b>c>d and d>c>b>a, then the SPE is either b or c (with k=3). In a later study, Anbarci studies several schemes for two agents who have to select an arbitrator from a given set of candidates: In the Alternating Strike scheme, each agent in turn crosses off one candidate; the last remaining candidate is chosen. The scheme is not invariant to "bad" alternatives. In contrast, the Voting by Alternating Offers and Vetoes scheme is invariant to bad alternatives. 
In all schemes, if the options are uniformly distributed over the bargaining set and their number approaches infinity, then the unique SPE outcome converges to the Equal-Area solution of the cooperative bargaining problem. Erlich, Hazon and Kraus study the Alternating Offers protocol in several informational settings: With complete information (each agent knows the other agent's full ranking), there are strategies that specify a subgame-perfect equilibrium for the agents, and they can be computed in linear time. They implement a known bargaining rule. With partial information (only one agent knows the other's ranking) and with no information (neither agent knows the other's ranking), there are other solution concepts that are distribution-free. Experimental analysis Laboratory studies The Dividing-the-Dollar game has been studied in several laboratory experiments. In general, subjects behave quite differently from the unique SPE prediction. Subjects' behavior depends on the number of turns, their experience with the game, and their beliefs about fairness. There have been multiple experiments. Field study A field study was done by Backus, Blake, Larsen and Tadelis. They studied back-and-forth sequential bargaining in over 25 million listings from the Best Offer platform of eBay. Their main findings are: About 1/3 of the interactions end in immediate agreement, as predicted by complete-information models. Most interactions end in disagreement or delayed agreement, as predicted by incomplete-information models. Stronger bargaining power and better outside options improve agents' outcomes. They also report some findings that cannot be rationalized by the existing theories: A reciprocal, gradual concession behavior, and delayed disagreement. A preference for making and accepting offers that split the difference between the two most recent offers. They suggest that these findings can be explained by behavioral norms. Further reading "Game-theoretic models of bargaining", edited by Alvin Roth. See also Negotiation Ultimatum game Offer and acceptance Fair division experiments References Negotiation Bargaining theory
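Both the Rubinstein division and Anbarci's smallest-k rule are straightforward to compute directly. The following is an illustrative sketch in Python (the function names are ours, not from the literature), checked against the examples quoted above:

```python
from fractions import Fraction

def rubinstein_share(d1, d2):
    """First mover's SPE share of the dollar under discount factors d1, d2."""
    return (1 - d2) / (1 - d1 * d2)

def anbarci_outcome(rank1, rank2):
    """Smallest k whose top-k option sets intersect, with that intersection.
    rank1, rank2: each player's list of outcomes, best first."""
    for k in range(1, len(rank1) + 1):
        common = set(rank1[:k]) & set(rank2[:k])
        if common:
            return k, common

print(rubinstein_share(Fraction(9, 10), Fraction(9, 10)))  # 10/19: a first-mover premium
print(anbarci_outcome(list("abcd"), list("cbad")))         # (2, {'b'})
print(anbarci_outcome(list("abcd"), list("dcba")))         # (3, {'b', 'c'})
```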
Sequential bargaining
Mathematics
1,928
1,572,944
https://en.wikipedia.org/wiki/1%2C4-Dioxane
1,4-Dioxane is a heterocyclic organic compound, classified as an ether. It is a colorless liquid with a faint sweet odor similar to that of diethyl ether. The compound is often called simply dioxane because the other dioxane isomers (1,2- and 1,3-) are rarely encountered. Dioxane is used as a solvent for a variety of practical applications as well as in the laboratory, and also as a stabilizer for the transport of chlorinated hydrocarbons in aluminium containers. Synthesis Dioxane is produced by the acid-catalysed dehydration of diethylene glycol, which in turn is obtained from the hydrolysis of ethylene oxide. In 1985, the global production capacity for dioxane was between 11,000 and 14,000 tons. In 1990, the total U.S. production volume of dioxane was between 5,250 and 9,150 tons. Structure The dioxane molecule is centrosymmetric, meaning that it adopts a chair conformation, typical of relatives of cyclohexane. However, the molecule is conformationally flexible, and the boat conformation is easily adopted, e.g. in the chelation of metal cations. Dioxane resembles a smaller crown ether with only two ethyleneoxyl units. Uses Trichloroethane transport In the 1980s, most of the dioxane produced was used as a stabilizer for 1,1,1-trichloroethane for storage and transport in aluminium containers. Normally aluminium is protected by a passivating oxide layer, but when these layers are disturbed, the metallic aluminium reacts with trichloroethane to give aluminium trichloride, which in turn catalyses the dehydrohalogenation of the remaining trichloroethane to vinylidene chloride and hydrogen chloride. Dioxane "poisons" this catalysis reaction by forming an adduct with aluminium trichloride. As a solvent Dioxane is used in a variety of applications as a versatile aprotic solvent, e.g. for inks, adhesives, and cellulose esters. It is substituted for tetrahydrofuran (THF) in some processes, because of its lower toxicity and higher boiling point (101 °C, versus 66 °C for THF). While diethyl ether is rather insoluble in water, dioxane is miscible and in fact is hygroscopic. At standard pressure, the mixture of water and dioxane in the ratio 17.9:82.1 by mass is a positive azeotrope that boils at 87.6 °C. The oxygen atoms are weakly Lewis-basic, and dioxane forms adducts with a variety of Lewis acids. It is classified as a hard base and its base parameters in the ECW model are EB = 1.86 and CB = 1.29. Dioxane produces coordination polymers by linking metal centers. In this way, it is used to drive the Schlenk equilibrium, allowing the synthesis of dialkylmagnesium compounds. Dimethylmagnesium is prepared in this manner: 2 CH₃MgBr + C₄H₈O₂ → MgBr₂(C₄H₈O₂) + (CH₃)₂Mg Spectroscopy Dioxane is used as an internal standard for nuclear magnetic resonance spectroscopy in deuterium oxide. Toxicology Safety Dioxane has an LD₅₀ of 5170 mg/kg in rats. It is irritating to the eyes and respiratory tract. Exposure may cause damage to the central nervous system, liver and kidneys. In a 1978 mortality study conducted on workers exposed to 1,4-dioxane, the observed number of deaths from cancer was not significantly different from the expected number. Dioxane is classified by the National Toxicology Program as "reasonably anticipated to be a human carcinogen". It is also classified by the IARC as a Group 2B carcinogen: possibly carcinogenic to humans because it is a known carcinogen in other animals. 
The United States Environmental Protection Agency classifies dioxane as a probable human carcinogen (having observed an increased incidence of cancer in controlled animal studies, but not in epidemiological studies of workers using the compound), and a known irritant (with a no-observed-adverse-effects level of 400 milligrams per cubic meter) at concentrations significantly higher than those found in commercial products. Animal studies in rats suggest that the greatest health risk is associated with inhalation of vapors in the pure form. The State of New York has adopted a first-in-the-nation drinking water standard for 1,4-dioxane, setting the maximum contaminant level at 1 part per billion. Explosion hazard Like some other ethers, dioxane combines with atmospheric oxygen upon prolonged exposure to air to form potentially explosive peroxides. Distillation of these mixtures is dangerous. Storage over metallic sodium could limit the risk of peroxide accumulation. Environment Dioxane tends to concentrate in water and has little affinity for soil. It is resistant to abiotic degradation in the environment, and was formerly thought to also resist biodegradation. However, studies since the 2000s have found that it can be biodegraded through a number of pathways, suggesting that bioremediation can be used to treat 1,4-dioxane-contaminated water. Dioxane has affected groundwater supplies in several areas. Dioxane at the level of 1 μg/L (~1 ppb) has been detected in many locations in the US. In the U.S. state of New Hampshire, it had been found at 67 sites in 2010, ranging in concentration from 2 ppb to over 11,000 ppb. Thirty of these sites are solid waste landfills, most of which have been closed for years. In 2019, the Southern Environmental Law Center successfully sued Greensboro, North Carolina's wastewater treatment operation after 1,4-dioxane was found in the Haw River at 20 times above EPA safe levels. Cosmetics As a byproduct of the ethoxylation process, a route to some ingredients found in cleansing and moisturizing products, dioxane can contaminate cosmetics and personal care products such as deodorants, perfumes, shampoos, toothpastes and mouthwashes. The ethoxylation process makes cleansing agents, such as sodium laureth sulfate and ammonium laureth sulfate, less abrasive and offers enhanced foaming characteristics. 1,4-Dioxane is found in small amounts in some cosmetics; its use in cosmetics remains unregulated in both China and the U.S. Research has found the chemical in ethoxylated raw ingredients and in off-the-shelf cosmetic products. The Environmental Working Group (EWG) found that 97% of hair relaxers, 57% of baby soaps and 22 percent of all products in Skin Deep, their database for cosmetic products, are contaminated with 1,4-dioxane. Since 1979 the U.S. Food and Drug Administration (FDA) has conducted tests on cosmetic raw materials and finished products for levels of 1,4-dioxane. 1,4-Dioxane was present in ethoxylated raw ingredients at levels up to 1410 ppm (~0.14 %wt), and at levels up to 279 ppm (~0.03 %wt) in off-the-shelf cosmetic products. Levels of 1,4-dioxane exceeding 85 ppm (~0.01 %wt) in children's shampoos indicate that close monitoring of raw materials and finished products is warranted. While the FDA encourages manufacturers to remove 1,4-dioxane, it is not required by federal law. On 9 December 2019, New York passed a bill to ban the sale of cosmetics with more than 10 ppm of 1,4-dioxane as of the end of 2022. 
The law will also prevent the sale of household cleaning and personal care products containing more than 2 ppm of 1,4-dioxane at the end of 2022. See also Dioxolane 9-crown-3 Dioxane tetraketone Oxalic anhydride Dioxanone References Dioxanes Ether solvents IARC Group 2B carcinogens Crown ethers Sweet-smelling chemicals
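The ppm figures quoted in the cosmetics section are mass fractions, so the weight-percent equivalents follow from a one-line conversion (1 %wt = 10,000 ppm); a trivial sketch in Python reproduces the approximations given above:

```python
# ppm (by mass) to weight percent: 1 %wt = 10,000 ppm.
def ppm_to_wt_percent(ppm: float) -> float:
    return ppm / 10_000

for ppm in (1410, 279, 85):
    print(f"{ppm} ppm = {ppm_to_wt_percent(ppm):.4f} %wt")
# 1410 ppm = 0.1410 %wt, 279 ppm = 0.0279 %wt, 85 ppm = 0.0085 %wt
```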
1,4-Dioxane
Chemistry
1,762
68,660,251
https://en.wikipedia.org/wiki/Surface%20Go%203
The Surface Go 3 is the third generation model of the Surface Go series of devices, introduced as the successor to the Surface Go 2 by Microsoft at their Surface Event on September 22, 2021. It was announced by the company alongside the Surface Laptop Studio, Surface Pro 8, Surface Duo 2 and many Surface accessories. The tablet has the same body, the same set of cameras and speakers, the same ports, and the same dimensions as its predecessor; the main enhancement is a range of more powerful processors. The tablet is powered by the Windows 11 operating system. Configuration At launch, the Surface Go 3 was only available in one color option, platinum. A matte black color was made available on January 11, 2022 for all 8 GB models. The Wi-Fi models of the tablet started shipping on October 5, 2021, while the LTE models shipped in December 2021 in North America and in Q1 2022 in other markets. Features Windows 11 operating system 10th Gen Intel Core i3 or Pentium Gold processor (dual-core Amber Lake-Y) 10.5 inch 1920 x 1280 display Windows Hello with IR camera for facial recognition logon Intel UHD Graphics 615 GPU 4 GB and 8 GB RAM options 64 GB, 128 GB, and 256 GB storage options A headphone jack, USB-C port, microSD card slot, and a nano SIM card tray on LTE models All configurations can be switched out of S Mode to full Windows 11 Home for free, or upgraded to Windows 11 Pro at an additional cost The 8.3 mm thick tablet weighs 544 grams (1.2 pounds). Hardware The Surface Go 3 is the sixth addition to the lightweight 2-in-1 Surface lineup, and is aimed toward children and students. The tablet features the same magnesium alloy chassis and screen size as its predecessor. It has one of two fanless dual-core Amber Lake-Y microarchitecture processors, the Intel Core i3-10100Y or Intel Pentium Gold 6500Y. The 6500Y is up to 60% faster than the Pentium 4425Y in the Surface Go 2. The device has a USB-C port supporting power delivery and a Surface Connect port. The front-facing camera assembly has an infrared sensor that supports login via Windows Hello. A detachable keyboard uses an 8-pin connection which is compatible with the previous Surface Go models and retails at $99. Software The Surface Go 3 is powered by Windows 11 Home in S Mode by default, with a 30-day trial of Microsoft 365; users can leave S Mode for free, or upgrade to Windows 11 Pro for a fee. The device also supports Windows Hello login using biometric facial recognition. References External links Tablet computers introduced in 2021 Microsoft Surface 2-in-1 PCs
Surface Go 3
Technology
563
186,864
https://en.wikipedia.org/wiki/Primitive%20root%20modulo%20n
In modular arithmetic, a number g is a primitive root modulo n if every number a coprime to n is congruent to a power of g modulo n. That is, g is a primitive root modulo n if for every integer a coprime to n, there is some integer k for which g^k ≡ a (mod n). Such a value k is called the index or discrete logarithm of a to the base g modulo n. So g is a primitive root modulo n if and only if g is a generator of the multiplicative group of integers modulo n. Gauss defined primitive roots in Article 57 of the Disquisitiones Arithmeticae (1801), where he credited Euler with coining the term. In Article 56 he stated that Lambert and Euler knew of them, but he was the first to rigorously demonstrate that primitive roots exist for a prime modulus. In fact, the Disquisitiones contains two proofs: The one in Article 54 is a nonconstructive existence proof, while the proof in Article 55 is constructive. A primitive root exists if and only if n is 1, 2, 4, p^k or 2p^k, where p is an odd prime and k ≥ 1. For all other values of n the multiplicative group of integers modulo n is not cyclic. This was first proved by Gauss. Elementary example The number 3 is a primitive root modulo 7 because 3^1 ≡ 3, 3^2 = 9 ≡ 2, 3^3 = 27 ≡ 6, 3^4 = 81 ≡ 4, 3^5 = 243 ≡ 5, and 3^6 = 729 ≡ 1 (mod 7). Here we see that the period of 3 modulo 7 is 6. The remainders in the period, which are 3, 2, 6, 4, 5, 1, form a rearrangement of all nonzero remainders modulo 7, implying that 3 is indeed a primitive root modulo 7. This derives from the fact that a sequence (g^k modulo n) always repeats after some value of k, since modulo n produces a finite number of values. If g is a primitive root modulo n and n is prime, then the period of repetition is n − 1. Permutations created in this way (and their circular shifts) have been shown to be Costas arrays. Definition If n is a positive integer, the integers from 1 to n that are coprime to n (or equivalently, the congruence classes coprime to n) form a group, with multiplication modulo n as the operation; it is denoted by (ℤ/nℤ)^×, and is called the group of units modulo n, or the group of primitive classes modulo n. As explained in the article multiplicative group of integers modulo n, this multiplicative group ((ℤ/nℤ)^×) is cyclic if and only if n is equal to 2, 4, p^k, or 2p^k, where p^k is a power of an odd prime number. When (and only when) this group is cyclic, a generator of this cyclic group is called a primitive root modulo n (or in fuller language primitive root of unity modulo n, emphasizing its role as a fundamental solution of the roots-of-unity polynomial equations X^m − 1 in the ring ℤ/nℤ), or simply a primitive element of (ℤ/nℤ)^×. When (ℤ/nℤ)^× is non-cyclic, such primitive elements mod n do not exist. Instead, each prime component of n has its own sub-primitive roots (see 15 in the examples below). For any n (whether or not (ℤ/nℤ)^× is cyclic), the order of (ℤ/nℤ)^× is given by Euler's totient function φ(n). And then, Euler's theorem says that a^φ(n) ≡ 1 (mod n) for every a coprime to n; the lowest power of a that is congruent to 1 modulo n is called the multiplicative order of a modulo n. In particular, for a to be a primitive root modulo n, the smallest exponent k with a^k ≡ 1 (mod n) has to be φ(n). Examples For example, if n = 14 then the elements of (ℤ/nℤ)^× are the congruence classes {1, 3, 5, 9, 11, 13}; there are φ(14) = 6 of them. Here is a table of their powers modulo 14:

x | x, x^2, x^3, ... (mod 14)
1 : 1
3 : 3, 9, 13, 11, 5, 1
5 : 5, 11, 13, 9, 3, 1
9 : 9, 11, 1
11 : 11, 9, 1
13 : 13, 1

The order of 1 is 1, the orders of 3 and 5 are 6, the orders of 9 and 11 are 3, and the order of 13 is 2. Thus, 3 and 5 are the primitive roots modulo 14. For a second example let n = 15. The elements of (ℤ/15ℤ)^× are the congruence classes {1, 2, 4, 7, 8, 11, 13, 14}; there are φ(15) = 8 of them. 
x : x, x^2, x^3, ... (mod 15)
1 : 1
2 : 2, 4, 8, 1
4 : 4, 1
7 : 7, 4, 13, 1
8 : 8, 4, 2, 1
11 : 11, 1
13 : 13, 4, 7, 1
14 : 14, 1

Since there is no number whose order is 8, there are no primitive roots modulo 15. Indeed, λ(15) = 4, where λ is the Carmichael function.

Table of primitive roots

Numbers that have a primitive root are of the shape n = 1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 13, 14, 17, 18, 19, ... These are the numbers n with φ(n) = λ(n); they are kept in the sequence A033948 in the OEIS. The following table lists the primitive roots modulo n for small n:

n : primitive roots modulo n (order φ(n))
1 : 0 (1)
2 : 1 (1)
3 : 2 (2)
4 : 3 (2)
5 : 2, 3 (4)
6 : 5 (2)
7 : 3, 5 (6)
9 : 2, 5 (6)
10 : 3, 7 (4)
11 : 2, 6, 7, 8 (10)
13 : 2, 6, 7, 11 (12)
14 : 3, 5 (6)

Properties

Gauss proved that for any prime number p (with the sole exception of p = 3), the product of its primitive roots is congruent to 1 modulo p. He also proved that for any prime number p, the sum of its primitive roots is congruent to μ(p − 1) modulo p, where μ is the Möbius function. For example:

p = 3, μ(2) = −1. The primitive root is 2.
p = 5, μ(4) = 0. The primitive roots are 2 and 3.
p = 7, μ(6) = 1. The primitive roots are 3 and 5.
p = 31, μ(30) = −1. The primitive roots are 3, 11, 12, 13, 17, 21, 22 and 24.

E.g., the product of the latter primitive roots is 3 · 11 · 12 · 13 · 17 · 21 · 22 · 24 = 970377408 ≡ 1 (mod 31), and their sum is 123 ≡ −1 ≡ μ(30) (mod 31).

If g is a primitive root modulo the prime p, then g^((p−1)/2) ≡ −1 (mod p).

Artin's conjecture on primitive roots states that a given integer a that is neither a perfect square nor −1 is a primitive root modulo infinitely many primes.

Finding primitive roots

No simple general formula to compute primitive roots modulo n is known. There are, however, methods to locate a primitive root that are faster than simply trying out all candidates. If the multiplicative order (its exponent) of a number m modulo n is equal to φ(n) (the order of Z_n^×), then it is a primitive root. In fact the converse is true: if m is a primitive root modulo n, then the multiplicative order of m is φ(n). We can use this to test a candidate m to see if it is primitive.

First, compute φ(n). Then determine the different prime factors of φ(n), say p_1, ..., p_k. Finally, compute g^(φ(n)/p_i) mod n for i = 1, ..., k using a fast algorithm for modular exponentiation such as exponentiation by squaring. A number g for which these k results are all different from 1 is a primitive root. (A short implementation sketch of this test is given under Applications below.)

The number of primitive roots modulo n, if there are any, is equal to φ(φ(n)), since, in general, a cyclic group with r elements has φ(r) generators. For prime n, this equals φ(n − 1), and since the generators are very common among {2, ..., n − 1}, it is relatively easy to find one.

If g is a primitive root modulo p, then g is also a primitive root modulo all powers p^k unless g^(p−1) ≡ 1 (mod p^2); in that case, g + p is.

If g is a primitive root modulo p^k, then g is also a primitive root modulo all smaller powers of p.

If g is a primitive root modulo p^k, then either g or g + p^k (whichever one is odd) is a primitive root modulo 2p^k.

Finding primitive roots modulo p is also equivalent to finding the roots of the (p − 1)st cyclotomic polynomial modulo p.

Order of magnitude of primitive roots

The least primitive root g_p modulo p (in the range 1, 2, ..., p − 1) is generally small.

Upper bounds

Burgess (1962) proved that for every ε > 0 there is a C such that g_p ≤ C·p^(1/4+ε).

Grosswald (1981) proved that if p > e^(e^24), then g_p < p^0.499.

Shoup (1990, 1992) proved, assuming the generalized Riemann hypothesis, that g_p = O(log^6 p).

Lower bounds

Fridlander (1949) and Salié (1950) proved that there is a positive constant C such that for infinitely many primes p, g_p > C log p.

It can be proved in an elementary manner that for any positive integer M there are infinitely many primes p such that M < g_p < p − M.

Applications

A primitive root modulo n is often used in pseudorandom number generators and cryptography, including the Diffie–Hellman key exchange scheme.
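The test described under Finding primitive roots above is short enough to sketch in code. The following illustration is not part of the original article; it uses trial division to factor φ(n), which is adequate only for small moduli:

from math import gcd

def euler_phi(n: int) -> int:
    """Euler's totient function via trial-division factorization (fine for small n)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def prime_factors(n: int) -> set:
    """Distinct prime factors of n by trial division."""
    factors, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            factors.add(p)
            n //= p
        p += 1
    if n > 1:
        factors.add(n)
    return factors

def is_primitive_root(g: int, n: int) -> bool:
    """g is a primitive root modulo n exactly when g is coprime to n and
    g^(phi(n)/q) is not 1 (mod n) for every distinct prime factor q of phi(n)."""
    if gcd(g, n) != 1:
        return False
    phi = euler_phi(n)
    return all(pow(g, phi // q, n) != 1 for q in prime_factors(phi))

# The examples from the article: 3 and 5 are the primitive roots modulo 14,
# and 15 has no primitive roots at all.
print([g for g in range(1, 14) if is_primitive_root(g, 14)])  # [3, 5]
print([g for g in range(1, 15) if is_primitive_root(g, 15)])  # []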
Sound diffusers have been based on number-theoretic concepts such as primitive roots and quadratic residues.

See also

Dirichlet character
Full reptend prime
Gauss's generalization of Wilson's theorem
Multiplicative order
Quadratic residue
Root of unity modulo n
Artin's conjecture on primitive roots

Sources

The Disquisitiones Arithmeticae has been translated from Gauss's Ciceronian Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes.

Modular arithmetic
Primitive root modulo n
Mathematics
1,974
25,234,581
https://en.wikipedia.org/wiki/Borane%20dimethylsulfide
Borane dimethylsulfide (BMS) is a chemical compound with the chemical formula BH3·S(CH3)2. It is an adduct between a borane molecule (BH3) and a dimethyl sulfide molecule (S(CH3)2). It is a complexed borane reagent that is used for hydroborations and reductions. The advantages of BMS over other borane reagents, such as borane-tetrahydrofuran, are its increased stability and higher solubility. BMS is commercially available at much higher concentrations (about 10 M) than its tetrahydrofuran counterpart and does not require sodium borohydride as a stabilizer, which could result in undesired side reactions. In contrast, borane-tetrahydrofuran requires sodium borohydride to inhibit reduction of THF to tributyl borate (B(OC4H9)3). BMS is soluble in most aprotic solvents.

Preparation and structure

Although usually purchased, BMS can be prepared by absorbing diborane into dimethyl sulfide:

B2H6 + 2 S(CH3)2 → 2 BH3·S(CH3)2

It can be purified by bulb-to-bulb vacuum transfer. Although a structure of BMS has not been determined crystallographically, tris(pentafluorophenyl)borane dimethylsulfide ((C6F5)3B·S(CH3)2) has been examined by X-ray crystallography. The boron atom adopts a tetrahedral molecular geometry.

Reactions

Hydroborations

Due to the experimental ease of its use, BMS has become common in hydroboration reactions. In hydroborations with BMS, the dimethylsulfide dissociates in situ, liberating diborane, which rapidly adds to the unsaturated bonds. The resulting organoborane compounds are useful intermediates in organic synthesis. Boranes add to alkenes in an anti-Markovnikov fashion and allow conversion of alkynes to the corresponding cis-alkenes.

Reductions

BMS has been employed for the reduction of many functional groups. Reductions of aldehydes, ketones, epoxides, esters, and carboxylic acids give the corresponding alcohols. Lactones are reduced to diols, and nitriles are reduced to amines. Acid chlorides are not reduced by BMS.

Borane dimethylsulfide is one of the most common bulk reducing agents used in the Corey–Itsuno reduction. The dimethylsulfide ligand attenuates the reactivity of the borane. Activation of the stoichiometric reducing agent by the nitrogen of the chiral oxazaborolidine catalyst allows for asymmetric control of the reagent. In general BMS does not lead to significantly greater enantiomeric selectivities than borane-THF; however, its increased stability in the presence of moisture and oxygen makes it the reagent of choice for the reduction.

Safety

Borane dimethylsulfide is flammable and reacts readily with water to produce a flammable gas. It also has an unpleasant smell.

References

Boranes Reagents for organic chemistry Foul-smelling chemicals
Borane dimethylsulfide
Chemistry
632
36,595,079
https://en.wikipedia.org/wiki/Super%20Minkowski%20space
In mathematics and physics, super Minkowski space or Minkowski superspace is a supersymmetric extension of Minkowski space, sometimes used as the base manifold (or rather, supermanifold) for superfields. It is acted on by the super Poincaré algebra.

Construction

Abstract construction

Abstractly, super Minkowski space is the space of (right) cosets of the Lorentz group within the super Poincaré group, that is, the quotient (super Poincaré group)/(Lorentz group). This is analogous to the way ordinary Minkowski spacetime can be identified with the (right) cosets of the Lorentz group within the Poincaré group, that is, the quotient (Poincaré group)/(Lorentz group). The coset space is naturally affine, and the nilpotent, anti-commuting behavior of the fermionic directions arises naturally from the Clifford algebra associated with the Lorentz group.

Direct sum construction

For this section, the dimension of the Minkowski space under consideration is d = 4. Super Minkowski space can be concretely realized as the direct sum of Minkowski space, which has coordinates x^μ, with 'spin space'. The dimension of 'spin space' depends on the number N of supercharges in the super Poincaré algebra associated to the super Minkowski space under consideration. In the simplest case, N = 1, the 'spin space' has 'spin coordinates' θ^α and θ̄^α̇ with α, α̇ = 1, 2, where each component is a Grassmann number. In total this forms 4 spin coordinates. The notation for super Minkowski space is then R^(1,3|4).

There are theories which admit N supercharges. Such cases have extended supersymmetry. For such theories, super Minkowski space is labelled R^(1,3|4N), with coordinates θ^(αI) with I = 1, ..., N.

Definition

The underlying supermanifold of super Minkowski space is isomorphic to a super vector space given by the direct sum of ordinary Minkowski spacetime in d dimensions (often taken to be 4) and a number N of real spinor representations of the Lorentz algebra. (When d ≡ 2 mod 4 this is slightly ambiguous because there are 2 different real spin representations, so one needs to replace N by a pair of integers (N1, N2), though some authors use a different convention and take N copies of both spin representations.)

However this construction is misleading for two reasons: first, super Minkowski space is really an affine space over a group rather than a group, or in other words it has no distinguished "origin", and second, the underlying supergroup of translations is not a super vector space but a nilpotent supergroup of nilpotent length 2. This supergroup has the following Lie superalgebra. Suppose that V is Minkowski space (of dimension d), and S is a finite sum of irreducible real spinor representations for d-dimensional Minkowski space.

Then there is an invariant, symmetric bilinear map [·,·] : S × S → V. It is positive definite in the sense that, for any s in S, the element [s, s] is in the closed positive cone of V, and [s, s] ≠ 0 if s ≠ 0. This bilinear map is unique up to isomorphism. The Lie superalgebra has V as its even part, and S as its odd (fermionic) part. The invariant bilinear map [·,·] is extended to the whole superalgebra to define the (graded) Lie bracket, where the Lie bracket of anything in V with anything is zero.

The dimensions of the irreducible real spinor representation(s) for various dimensions d of spacetime are given in a table below. The table also displays the type of reality structure for the spinor representation, and the type of invariant bilinear form on the spinor representation. The table repeats whenever the dimension increases by 8, except that the dimensions of the spin representations are multiplied by 16.
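As a concrete illustration (added here; it is not part of the article's table), in the d = 4, N = 1 case the bilinear map [·,·] is the familiar supersymmetry bracket of two-component spinor notation; normalizations vary across the literature:

\{ Q_\alpha, \bar{Q}_{\dot\beta} \} = 2\, \sigma^{\mu}_{\alpha\dot\beta}\, P_\mu, \qquad \{ Q_\alpha, Q_\beta \} = \{ \bar{Q}_{\dot\alpha}, \bar{Q}_{\dot\beta} \} = 0, \qquad [ P_\mu, Q_\alpha ] = 0 .

The positivity of [·,·] described above is visible in this form: tracing over the spinor indices expresses P^0 as a sum of manifestly non-negative squares of supercharges, which is why [s, s] lies in the closed positive (forward) cone of V.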
Notation

In the physics literature, a super Minkowski spacetime is often specified by giving the dimension d of the even, bosonic part (the dimension of the spacetime) and the number N of times that each irreducible spinor representation occurs in the odd, fermionic part. This N is the number of supercharges in the super Poincaré algebra associated to the super Minkowski space.

In mathematics, super Minkowski spacetime is sometimes specified in the form M^(m|n) or R^(m|n), where m is the dimension of the even part and n the dimension of the odd part. This is the notation used for Z2-graded vector spaces. The notation can be extended to include the signature of the underlying spacetime, often written R^(p,q|n), where m = p + q. The relation is as follows: the integer d in the physics notation is the integer m in the mathematics notation, while the integer n in the mathematics notation is D times the integer N in the physics notation, where D is the dimension of (either of) the irreducible real spinor representation(s). For example, the N = 1, d = 4 Minkowski spacetime is R^(1,3|4). A general expression is then R^(1,d−1|DN).

When d ≡ 2 mod 4, there are two different irreducible real spinor representations, and authors use various different conventions. Using earlier notation, if there are N1 copies of the one representation and N2 of the other, then defining N = N1 + N2, the earlier expression holds.

In physics the letter P is used for a basis of the even bosonic part of the Lie superalgebra, and the letter Q is often used for a basis of the complexification of the odd fermionic part, so in particular the structure constants of the Lie superalgebra may be complex rather than real. Often the basis elements Q come in complex conjugate pairs, so the real subspace can be recovered as the fixed points of complex conjugation.

Signature (p,q)

The real dimension associated to the factor N or (N1, N2) can be found for generalized Minkowski space with dimension d and arbitrary signature (p, q). The earlier subtlety when d ≡ 2 mod 4 instead becomes a subtlety when p − q ≡ 0 mod 4. For the rest of this section, the signature refers to the difference p − q.

The dimension depends on the reality structure on the spin representation. This is dependent on the signature p − q modulo 8, given by the table. The dimension also depends on d. We can write d as either 2m or 2m + 1, where m = ⌊d/2⌋. We define the spin representation S to be the representation constructed using the exterior algebra of some vector space, as described here. The complex dimension of S is 2^m. If the signature is even, then this splits into two irreducible half-spin representations S+ and S− of dimension 2^(m−1), while if the signature is odd, then S is itself irreducible. When the signature is even, there is the extra subtlety that if the signature is a multiple of 4 then these half-spin representations are inequivalent, otherwise they are equivalent.

Then if the signature is odd, N counts the number of copies of the spin representation S. If the signature is even and not a multiple of 4, N counts the number of copies of the half-spin representation. If the signature is a multiple of 4, then (N1, N2) counts the number of copies of each half-spin representation.

Then, if the reality structure is real, the complex dimension becomes the real dimension. On the other hand, if the reality structure is quaternionic or complex (hermitian), the real dimension is double the complex dimension. The real dimension D associated to N or (N1, N2) is summarized in the following table.

This allows the calculation of the dimension of superspace with underlying spacetime R^(p,q) with N supercharges, or (N1, N2) supercharges when the signature is a multiple of 4. The associated super vector space is R^(p,q|DN), with D(N1 + N2) odd dimensions where appropriate.
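A worked instance of this counting (added for illustration, following the recipe just described, and assuming the usual convention that signature p − q ≡ 2 mod 8 carries a complex reality structure): take (p, q) = (3, 1), so d = 4 and m = 2. Then

\dim_{\mathbb{C}} S = 2^m = 4, \qquad S = S^+ \oplus S^-, \qquad \dim_{\mathbb{C}} S^{\pm} = 2^{m-1} = 2 .

The signature p − q = 2 is even but not a multiple of 4, so N counts copies of a half-spin representation, and the complex reality structure doubles the real dimension: D = 2 × 2 = 4. Hence N = 1 gives the super vector space R^(3,1|4), matching the R^(1,3|4) notation used earlier (the ordering of p and q is a matter of convention).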
Restrictions on dimensions and supercharges

Higher-spin theory

There is an upper bound on N (equal to N1 + N2 where appropriate). More straightforwardly there is an upper bound on the dimension of the spin space ND, where D is the dimension of the spin representation if the signature is odd, and the dimension of the half-spin representation if the signature is even. The bound is ND ≤ 32. This bound arises as any theory with more than 32 supercharges automatically has fields with (absolute value of) spin greater than 2. More mathematically, any representation of such a superalgebra contains fields with spin greater than 2. Theories that consider such fields are known as higher-spin theories. On Minkowski space, there are no-go theorems which prohibit such theories from being interesting.

If one doesn't wish to consider such theories, this gives upper bounds on the dimension and on N. For Lorentzian spaces (with signature (d − 1, 1)), the limit on dimension is d ≤ 11. For generalized Minkowski spaces of arbitrary signature, the upper dimension depends sensitively on the signature, as detailed in an earlier section.

Supergravity

A large number of supercharges also implies local supersymmetry. If supersymmetries are gauge symmetries of the theory, then since the supercharges can be used to generate translations, this implies infinitesimal translations are gauge symmetries of the theory. But these generate local diffeomorphisms, which is a signature of gravitational theories. So any theory with local supersymmetry is necessarily a supergravity theory. The limit placed on massless representations without supergravity is that the highest spin field must have spin at most 1, which places a limit of 16 supercharges for theories without supergravity.

Supersymmetric Yang-Mills theories

These are theories consisting of a gauge superfield partnered with a spinor superfield. This requires a matching of degrees of freedom. If we restrict this discussion to d-dimensional Lorentzian space, the degrees of freedom of the gauge field is d − 2, while the degrees of freedom of a spinor is a power of 2, which can be worked out from information elsewhere in this article. This places restrictions on super Minkowski spaces which can support a supersymmetric Yang-Mills theory. For example, for N = 1, only d = 3, 4, 6 or 10 support a Yang-Mills theory.

See also

Superspace
Super vector space
Super-Poincaré algebra

References

Supersymmetry
Super Minkowski space
Physics
1,941
63,973
https://en.wikipedia.org/wiki/Wi-Fi
Wi-Fi () is a family of wireless network protocols based on the IEEE 802.11 family of standards, which are commonly used for local area networking of devices and Internet access, allowing nearby digital devices to exchange data by radio waves. These are the most widely used computer networks, used globally in home and small office networks to link devices and to provide Internet access with wireless routers and wireless access points in public places such as coffee shops, restaurants, hotels, libraries, and airports.

Wi-Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term "Wi-Fi Certified" to products that successfully complete interoperability certification testing. Non-compliant hardware is simply referred to as WLAN, and it may or may not work with "Wi-Fi Certified" devices. The Wi-Fi Alliance consists of more than 800 companies from around the world, and over 3.05 billion Wi-Fi-enabled devices are shipped globally each year.

Wi-Fi uses multiple parts of the IEEE 802 protocol family and is designed to work well with its wired sibling, Ethernet. Compatible devices can network through wireless access points with each other as well as with wired devices and the Internet. Different versions of Wi-Fi are specified by various IEEE 802.11 protocol standards, with different radio technologies determining radio bands, maximum ranges, and speeds that may be achieved. Wi-Fi most commonly uses the UHF and SHF radio bands, with the 6 gigahertz SHF band used in newer generations of the standard; these bands are subdivided into multiple channels. Channels can be shared between networks, but, within range, only one transmitter can transmit on a channel at a time.

Wi-Fi's radio bands work best for line-of-sight use. Many common obstructions, such as walls, pillars, and home appliances, may greatly reduce range, but this also helps minimize interference between different networks in crowded environments. The range of an access point is about 20 metres (66 feet) indoors, while some access points claim a range of up to 150 metres (490 feet) outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves or as large as many square kilometres using many overlapping access points with roaming permitted between them. Over time, the speed and spectral efficiency of Wi-Fi have increased; some versions of Wi-Fi, running on suitable hardware at close range, can achieve speeds of 9.6 Gbit/s (gigabits per second).

History

A 1985 ruling by the U.S. Federal Communications Commission released parts of the ISM bands for unlicensed use for communications. These frequency bands include the same 2.4 GHz bands used by equipment such as microwave ovens, and are thus subject to interference.

In 1991 in Nieuwegein, the NCR Corporation and AT&T invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. NCR's Vic Hayes, who held the chair of IEEE 802.11 for ten years, along with Bell Labs engineer Bruce Tuch, approached the Institute of Electrical and Electronics Engineers (IEEE) to create a standard and were involved in designing the initial 802.11b and 802.11a specifications within the IEEE. They have both been subsequently inducted into the Wi-Fi NOW Hall of Fame.

In 1989 in Australia, a team of scientists began working on wireless LAN technology.
A prototype test bed for a wireless local area network (WLAN) was developed in 1992 by a team of researchers from the Radiophysics Division of the CSIRO (Commonwealth Scientific and Industrial Research Organisation) in Australia, led by John O'Sullivan. A patent for Wi-Fi was lodged by the CSIRO in 1992.

The first version of the 802.11 protocol was released in 1997, and provided up to 2 Mbit/s link speeds. This was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds. In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most IEEE 802.11 products are sold.

The major commercial breakthrough came with Apple Inc. adopting Wi-Fi for their iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort. This was in collaboration with the same group that helped create the standard: Vic Hayes, Bruce Tuch, Cees Links, Rich McGinn, and others from Lucent. In 2000, Radiata, a group of Australian scientists connected to the CSIRO, was the first to use the 802.11a standard on chips connected to a Wi-Fi network.

Wi-Fi uses a large number of patents held by many different organizations. Australia, the United States and the Netherlands simultaneously claim the invention of Wi-Fi, and a consensus has not been reached globally. In 2009, the Australian CSIRO was awarded $200 million after a patent settlement with 14 technology companies, with a further $220 million awarded in 2012 after legal proceedings with 23 companies. In 2016, the CSIRO's WLAN prototype test bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects held in the National Museum of Australia.

Etymology and terminology

The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'." According to Phil Belanger, a founding member of the Wi-Fi Alliance, the term Wi-Fi was chosen from a list of ten names that Interbrand proposed. Interbrand also created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability. The name is often written as WiFi, Wifi, or wifi, but these are not approved by the Wi-Fi Alliance.

The name Wi-Fi is not short for 'Wireless Fidelity', although the Wi-Fi Alliance did use the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created, and the Wi-Fi Alliance was also called the "Wireless Fidelity Alliance Inc." in some publications. IEEE is a separate, but related, organization and its website has stated "WiFi is a short name for Wireless Fidelity".

The name Wi-Fi was partly chosen because it sounds similar to Hi-Fi, which consumers take to mean high fidelity or high quality. Interbrand hoped consumers would find the name catchy, and that they would assume this wireless protocol has high fidelity because of its name.

Other technologies intended for fixed points, including Motorola Canopy, are usually called fixed wireless. Alternative wireless technologies include Zigbee, Z-Wave, Bluetooth and mobile phone standards.

To connect to a Wi-Fi LAN, a computer must be equipped with a wireless network interface controller. The combination of a computer and an interface controller is called a station. Stations are identified by one or more MAC addresses.
Wi-Fi nodes often operate in infrastructure mode, in which all communications go through a base station. Ad hoc mode refers to devices communicating directly with each other, without communicating with an access point.

A service set is the set of all the devices associated with a particular Wi-Fi network. Devices in a service set need not be on the same wavebands or channels. A service set can be local, independent, extended, mesh, or a combination. Each service set has an associated identifier, a 32-byte service set identifier (SSID), which identifies the network. The SSID is configured within the devices that are part of the network. A basic service set (BSS) is a group of stations that share the same wireless channel, SSID, and other settings and that have wirelessly connected, usually to the same access point. Each BSS is identified by a MAC address called the BSSID.

Certification

The IEEE does not test equipment for compliance with its standards. The Wi-Fi Alliance was formed in 1999 to establish and enforce standards for interoperability and backward compatibility, and to promote wireless local-area-network technology. The Wi-Fi Alliance restricts the use of the Wi-Fi brand to technologies based on the IEEE 802.11 standards from the IEEE. Manufacturers with membership in the Wi-Fi Alliance, whose products pass the certification process, gain the right to mark those products with the Wi-Fi logo.

Specifically, the certification process requires conformance to the IEEE 802.11 radio standards, the WPA and WPA2 security standards, and the EAP authentication standard. Certification may optionally include tests of IEEE 802.11 draft standards, interaction with cellular-phone technology in converged devices, and features relating to security set-up, multimedia, and power-saving.

Not every Wi-Fi device is submitted for certification. The lack of Wi-Fi certification does not necessarily imply that a device is incompatible with other Wi-Fi devices. The Wi-Fi Alliance may or may not sanction derivative terms, such as Super Wi-Fi, coined by the US Federal Communications Commission (FCC) to describe proposed networking in the UHF TV band in the US.

Versions and generations

Equipment frequently supports multiple versions of Wi-Fi. To communicate, devices must use a common Wi-Fi version. The versions differ in the radio wavebands they operate on, the radio bandwidth they occupy, the maximum data rates they can support and other details. Some versions permit the use of multiple antennas, which permits greater speeds as well as reduced interference.

Historically, equipment listed the versions of Wi-Fi supported using the name of the IEEE standards. In 2018, the Wi-Fi Alliance introduced simplified Wi-Fi generational numbering to indicate equipment that supports Wi-Fi 4 (802.11n), Wi-Fi 5 (802.11ac) and Wi-Fi 6 (802.11ax). These generations have a high degree of backward compatibility with previous versions. The alliance has stated that the generational level 4, 5, or 6 can be indicated in the user interface when connected, along with the signal strength.

The most important standards affecting Wi-Fi are: 802.11a, 802.11b, 802.11g, 802.11n (Wi-Fi 4), 802.11h, 802.11i, 802.11-2007, 802.11-2012, 802.11ac (Wi-Fi 5), 802.11ad, 802.11af, 802.11-2016, 802.11ah, 802.11ai, 802.11aj, 802.11aq, 802.11ax (Wi-Fi 6), 802.11ay.

Uses

Internet

Wi-Fi technology may be used to provide local network and Internet access to devices that are within Wi-Fi range of one or more routers that are connected to the Internet.
The coverage of one or more interconnected access points can extend from an area as small as a few rooms to as large as many square kilometres. Coverage in the larger area may require a group of access points with overlapping coverage. For example, public outdoor Wi-Fi technology has been used successfully in wireless mesh networks in London. An international example is Fon.

Wi-Fi provides services in private homes and businesses, as well as in public spaces. Wi-Fi hotspots may be set up either free of charge or commercially, often using a captive portal webpage for access. Organizations, enthusiasts, authorities and businesses, such as airports, hotels, and restaurants, often provide free or paid-use hotspots to attract customers and to provide services that promote business in selected areas. Routers that incorporate a digital subscriber line modem or a cable modem and a Wi-Fi access point are frequently set up in homes and other buildings to provide Internet access for the structure.

Similarly, battery-powered routers may include a mobile broadband modem and a Wi-Fi access point. When subscribed to a cellular data carrier, they allow nearby Wi-Fi stations to access the Internet. Many smartphones have a built-in mobile hotspot capability of this sort, though carriers often disable the feature, or charge a separate fee to enable it. Standalone devices such as MiFi- and WiBro-branded devices provide the capability. Some laptops that have a cellular modem card can also act as mobile Internet Wi-Fi access points.

Many traditional university campuses in the developed world provide at least partial Wi-Fi coverage. Carnegie Mellon University built the first campus-wide wireless Internet network, called Wireless Andrew, at its Pittsburgh campus in 1993, before Wi-Fi branding existed. Many universities collaborate in providing Wi-Fi access to students and staff through the Eduroam international authentication infrastructure.

City-wide

In the early 2000s, many cities around the world announced plans to construct citywide Wi-Fi networks. There are many successful examples; in 2004, Mysore (Mysuru) became India's first Wi-Fi-enabled city. A company called WiFiyNet has set up hotspots in Mysore, covering the whole city and a few nearby villages.

In 2005, St. Cloud, Florida and Sunnyvale, California, became the first cities in the United States to offer citywide free Wi-Fi (from MetroFi). Minneapolis has generated $1.2 million in profit annually for its provider.

In May 2010, the then London mayor Boris Johnson pledged to have London-wide Wi-Fi by 2012. Several boroughs including Westminster and Islington already had extensive outdoor Wi-Fi coverage at that point.

New York City announced a city-wide campaign to convert old phone booths into digital kiosks in 2014. The project, titled LinkNYC, has created a network of kiosks that serve as public Wi-Fi hotspots, high-definition screens and landlines. Installation of the screens began in late 2015. The city government plans to implement more than seven thousand kiosks over time, eventually making LinkNYC the largest and fastest public, government-operated Wi-Fi network in the world. The UK has planned a similar project across major cities of the country, with the project's first implementation in the London Borough of Camden.

Officials in South Korea's capital Seoul were moving to provide free Internet access at more than 10,000 locations around the city, including outdoor public spaces, major streets, and densely populated residential areas.
Seoul was planning to grant leases to KT, LG Telecom, and SK Telecom. The companies were supposed to invest $44 million in the project, which was to be completed in 2015.

Geolocation

Wi-Fi positioning systems use the known positions of Wi-Fi hotspots to identify a device's location. They are used when GPS is not suitable due to issues like signal interference or slow satellite acquisition. This includes assisted GPS, urban hotspot databases, and indoor positioning systems. Wi-Fi positioning relies on measuring signal strength (RSSI) and fingerprinting. Parameters like SSID and MAC address are crucial for identifying access points. The accuracy depends on nearby access points in the database. Signal fluctuations can cause errors, which can be reduced with noise-filtering techniques. For low precision, integrating Wi-Fi data with geographical and time information has been proposed.

The Wi-Fi RTT capability introduced in IEEE 802.11mc allows for positioning based on round-trip time measurement, an improvement over the RSSI method. The IEEE 802.11az standard promises further improvements in geolocation accuracy.

Motion detection

Wi-Fi sensing is used in applications such as motion detection and gesture recognition.

Operational principles

Wi-Fi stations communicate by sending each other data packets, blocks of data individually sent and delivered over radio on various channels. As with all radio, this is done by the modulation and demodulation of carrier waves. Different versions of Wi-Fi use different techniques: 802.11b uses direct-sequence spread spectrum on a single carrier, whereas 802.11a, Wi-Fi 4, 5 and 6 use orthogonal frequency-division multiplexing.

Channels are used half duplex and can be time-shared by multiple networks. Any packet sent by one computer is locally received by stations tuned to that channel, even if that information is intended for just one destination. Stations typically ignore information not addressed to them. The use of the same channel also means that the data bandwidth is shared, so that, for example, available throughput to each device is halved when two stations are actively transmitting.

As with other IEEE 802 LANs, stations come programmed with a globally unique 48-bit MAC address. The MAC addresses are used to specify both the destination and the source of each data packet. On the reception of a transmission, the receiver uses the destination address to determine whether the transmission is relevant to the station or should be ignored.

A scheme known as carrier-sense multiple access with collision avoidance (CSMA/CA) governs the way stations share channels. With CSMA/CA, stations attempt to avoid collisions by beginning transmission only after the channel is sensed to be idle, but then transmit their packet data in its entirety. CSMA/CA cannot completely prevent collisions, as two stations may sense the channel to be idle at the same time and thus begin transmission simultaneously. A collision happens when a station receives signals from multiple stations on a channel at the same time. This corrupts the transmitted data and can require stations to re-transmit. The lost data and re-transmission reduce throughput, in some cases severely.

Waveband

The 802.11 standard provides several distinct radio frequency ranges for use in Wi-Fi communications: the 900 MHz, 2.4 GHz, 3.6 GHz, 4.9 GHz, 5 GHz, 6 GHz and 60 GHz bands. Each range is divided into a multitude of channels.
In the standards, channels are numbered at 5 MHz spacing within a band (except in the 60 GHz band, where they are 2.16 GHz apart), and the number refers to the centre frequency of the channel. Although channels are numbered at 5 MHz spacing, transmitters generally occupy at least 20 MHz, and standards allow for neighbouring channels to be bonded together to form a wider channel for higher throughput.

Countries apply their own regulations to the allowable channels, allowed users and maximum power levels within these frequency ranges. 802.11b/g/n can use the 2.4 GHz band, operating in the United States under FCC Part 15 rules and regulations. In this frequency band, equipment may occasionally suffer interference from microwave ovens, cordless telephones, USB 3.0 hubs, Bluetooth and other devices. Spectrum assignments and operational limitations are not consistent worldwide: Australia and Europe allow for an additional two channels (12, 13) beyond the 11 permitted in the United States for the 2.4 GHz band, while Japan has three more (12-14).

802.11a/h/j/n/ac/ax can use the 5 GHz U-NII band, which, for much of the world, offers at least 23 non-overlapping 20 MHz channels. This is in contrast to the 2.4 GHz band, where the numbered channels are spaced only 5 MHz apart, so that 20 MHz transmissions on nearby channel numbers overlap. In general, lower frequencies have longer range but less capacity. The 5 GHz bands are absorbed to a greater degree by common building materials than the 2.4 GHz bands and usually give a shorter range.

As 802.11 specifications evolved to support higher throughput, the protocols have become much more efficient in their bandwidth use. Additionally, they have gained the ability to aggregate channels together to gain still more throughput where the bandwidth for additional channels is available. 802.11n allows for double radio spectrum bandwidth (40 MHz) per channel compared to 802.11a or 802.11g (20 MHz). 802.11n can be set to limit itself to 20 MHz bandwidth to prevent interference in dense communities. In the 5 GHz band, 20 MHz, 40 MHz, 80 MHz, and 160 MHz channels are permitted with some restrictions, giving much faster connections.

Communication stack

Wi-Fi is part of the IEEE 802 protocol family. The data is organized into 802.11 frames that are very similar to Ethernet frames at the data link layer, but with extra address fields. MAC addresses are used as network addresses for routing over the LAN.

Wi-Fi's MAC and physical layer (PHY) specifications are defined by IEEE 802.11 for modulating and receiving one or more carrier waves to transmit the data in the infrared, and 2.4, 3.6, 5, 6, or 60 GHz frequency bands. They are created and maintained by the IEEE LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997 and has had many subsequent amendments. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand. While each amendment is officially revoked when incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote the capabilities of their products. As a result, in the marketplace, each revision tends to become its own standard.

In addition to 802.11, the IEEE 802 protocol family has specific provisions for Wi-Fi. These are required because Ethernet's cable-based media are not usually shared, whereas with wireless all transmissions are received by all stations within range that employ that radio channel.
While Ethernet has essentially negligible error rates, wireless communication media are subject to significant interference. Accurate transmission is therefore not guaranteed, so delivery is a best-effort delivery mechanism. Because of this, for Wi-Fi, the Logical Link Control (LLC) specified by IEEE 802.2 employs Wi-Fi's media access control (MAC) protocols to manage retries without relying on higher levels of the protocol stack.

For internetworking purposes, Wi-Fi is usually layered as a link layer below the internet layer of the Internet Protocol. This means that nodes have an associated internet address and, with suitable connectivity, this allows full Internet access.

Modes

Infrastructure

In infrastructure mode, which is the most common mode used, all communications go through a base station. For communications within the network, this introduces an extra use of the airwaves but has the advantage that any two stations that can communicate with the base station can also communicate through the base station, which limits issues associated with the hidden node problem and simplifies the protocols.

Ad hoc and Wi-Fi direct

Wi-Fi also allows communications directly from one computer to another without an access point intermediary. This is called ad hoc Wi-Fi transmission. Different types of ad hoc networks exist. In the simplest case, network nodes must talk directly to each other. In more complex protocols, nodes may forward packets, and nodes keep track of how to reach other nodes, even if they move around.

Ad hoc mode was first described by Chai Keong Toh in his 1996 patent of wireless ad hoc routing, implemented on Lucent WaveLAN 802.11a wireless on IBM ThinkPads in a multi-node scenario spanning a region of over a mile. The success was recorded in Mobile Computing magazine (1999) and later published formally in IEEE Transactions on Wireless Communications, 2002 and ACM SIGMETRICS Performance Evaluation Review, 2001.

This wireless ad hoc network mode has proven popular with multiplayer video games on handheld game consoles, such as the Nintendo DS and PlayStation Portable. It is also popular on digital cameras and other consumer electronics devices. Some devices can also share their Internet connection using ad hoc mode, becoming hotspots or virtual routers.

Similarly, the Wi-Fi Alliance promotes the specification Wi-Fi Direct for file transfers and media sharing through a new discovery and security methodology. Wi-Fi Direct launched in October 2010. Another mode of direct communication over Wi-Fi is Tunneled Direct Link Setup (TDLS), which enables two devices on the same Wi-Fi network to communicate directly, instead of via the access point.

Multiple access points

An Extended Service Set may be formed by deploying multiple access points that are configured with the same SSID and security settings. Wi-Fi client devices typically connect to the access point that can provide the strongest signal within that service set.

Increasing the number of Wi-Fi access points for a network provides redundancy, better range, support for fast roaming, and increased overall network capacity by using more channels or by defining smaller cells. Except for the smallest implementations (such as home or small office networks), Wi-Fi implementations have moved toward "thin" access points, with more of the network intelligence housed in a centralized network appliance, relegating individual access points to the role of "dumb" transceivers. Outdoor applications may use mesh topologies.
Performance

Wi-Fi operational range depends on factors such as the frequency band, radio power output, receiver sensitivity, antenna gain, and antenna type, as well as the modulation technique. Also, the propagation characteristics of the signals can have a big impact. At longer distances, and with greater signal absorption, speed is usually reduced.

Transmitter power

Compared to cell phones and similar technology, Wi-Fi transmitters are low-power devices. In general, the maximum amount of power that a Wi-Fi device can transmit is limited by local regulations, such as FCC Part 15 in the US. Equivalent isotropically radiated power (EIRP) in the European Union is limited to 20 dBm (100 mW).

To reach the requirements for wireless LAN applications, Wi-Fi has higher power consumption compared to some other standards designed to support wireless personal area network (PAN) applications. For example, Bluetooth provides a much shorter propagation range, between 1 and 100 metres (1 and 100 yards), and so in general has a lower power consumption. Other low-power technologies such as Zigbee have fairly long range, but much lower data rate. The high power consumption of Wi-Fi makes battery life in some mobile devices a concern.

Antenna

An access point compliant with either 802.11b or 802.11g, using the stock omnidirectional antenna, might have a range of about 100 m (330 ft). The same radio with an external semi-parabolic antenna (15 dB gain) and a similarly equipped receiver at the far end might have a range over 20 miles.

A higher gain rating (dBi) indicates further deviation (generally toward the horizontal) from a theoretical, perfect isotropic radiator, and therefore the antenna can project or accept a usable signal further in particular directions, as compared to a similar output power on a more isotropic antenna. For example, an 8 dBi antenna used with a 100 mW driver has a similar horizontal range to a 6 dBi antenna being driven at 500 mW. This assumes that radiation in the vertical is lost; this may not be the case in some situations, especially in large buildings, or within a waveguide. In the above example, a directional waveguide could cause the low-power 6 dBi antenna to project much further in a single direction than the 8 dBi antenna, which is not in a waveguide, even if they are both driven at 100 mW.

On wireless routers with detachable antennas, it is possible to improve range by fitting upgraded antennas that provide a higher gain in particular directions. Outdoor ranges can be improved to many kilometres through the use of high-gain directional antennas at the router and remote device(s).

MIMO (multiple-input and multiple-output)

Wi-Fi 4 and higher standards allow devices to have multiple antennas on transmitters and receivers. Multiple antennas enable the equipment to exploit multipath propagation on the same frequency bands, giving much higher speeds and longer range. Wi-Fi 4 can more than double the range over previous standards.

The Wi-Fi 5 standard uses the 5 GHz band exclusively, and is capable of multi-station WLAN throughput of at least 1 gigabit per second, and single-station throughput of at least 500 Mbit/s. As of the first quarter of 2016, the Wi-Fi Alliance certifies devices compliant with the 802.11ac standard as "Wi-Fi CERTIFIED ac". This standard uses several signal processing techniques, such as multi-user MIMO and 4×4 spatial multiplexing streams, and wide channel bandwidth (160 MHz), to achieve its gigabit throughput.
According to a study by IHS Technology, 70% of all access point sales revenue in the first quarter of 2016 came from 802.11ac devices.

Radio propagation

With Wi-Fi signals, line-of-sight usually works best, but signals can transmit, absorb, reflect, refract, diffract, and fade up and down, through and around structures, both man-made and natural. Wi-Fi signals are very strongly affected by metallic structures (including rebar in concrete and low-e coatings in glazing), rock structures (including marble) and water (such as found in vegetation).

Due to the complex nature of radio propagation at typical Wi-Fi frequencies, particularly around trees and buildings, algorithms can only approximately predict Wi-Fi signal strength for any given area in relation to a transmitter. This effect does not apply equally to long-range Wi-Fi, since longer links typically operate from towers that transmit above the surrounding foliage.

Mobile use of Wi-Fi over wider ranges is limited, for instance, to uses such as in an automobile moving from one hotspot to another. Other wireless technologies are more suitable for communicating with moving vehicles.

Distance records

Distance records (using non-standard devices) include 382 km (237 mi) in June 2007, held by Ermanno Pietrosemoli and EsLaRed of Venezuela, transferring about 3 MB of data between the mountain-tops of El Águila and Platillon. The Swedish National Space Agency transferred data 420 km (260 mi), using 6 watt amplifiers to reach an overhead stratospheric balloon.

Interference

Wi-Fi connections can be blocked, or the Internet speed lowered, by other devices in the same area. Wi-Fi protocols are designed to share the wavebands reasonably fairly, and this often works with little to no disruption. To minimize collisions with Wi-Fi and non-Wi-Fi devices, Wi-Fi employs carrier-sense multiple access with collision avoidance (CSMA/CA), where transmitters listen before transmitting and delay transmission of packets if they detect that other devices are active on the channel, or if noise is detected from adjacent channels or non-Wi-Fi sources. Nevertheless, Wi-Fi networks are still susceptible to the hidden node and exposed node problems.

A standard-speed Wi-Fi signal occupies five channels in the 2.4 GHz band. Interference can be caused by overlapping channels. Any two channel numbers that differ by five or more, such as 2 and 7, do not overlap (no adjacent-channel interference). The oft-repeated adage that channels 1, 6, and 11 are the only non-overlapping channels is, therefore, not accurate. Channels 1, 6, and 11 are the only group of three non-overlapping channels in North America. However, whether the overlap is significant depends on physical spacing. Channels that are four apart interfere a negligible amount (much less than reusing channels, which causes co-channel interference) if transmitters are at least a few metres apart. In Europe and Japan, where channel 13 is available, using channels 1, 5, 9, and 13 for 802.11g and 802.11n is viable and recommended. (A short sketch of this channel arithmetic in code is given at the end of this section.) However, many 2.4 GHz 802.11b and 802.11g access points default to the same channel on initial startup, contributing to congestion on certain channels.

Wi-Fi pollution, or an excessive number of access points in the area, can prevent access and interfere with other devices' use of other access points, owing to the decreased signal-to-noise ratio (SNR) between access points. These issues can become a problem in high-density areas, such as large apartment complexes or office buildings with many Wi-Fi access points.
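To make the channel arithmetic above concrete, here is a minimal sketch (not from the original article). It assumes the standard 2.4 GHz plan in which channel 1 is centred at 2412 MHz, numbered channels are spaced 5 MHz apart, and a transmission occupies roughly 20 MHz; channel 14, a special case centred at 2484 MHz, is deliberately ignored.

CHANNEL_SPACING_MHZ = 5      # spacing between numbered 2.4 GHz channels
OCCUPIED_MHZ = 20            # approximate occupied bandwidth of one transmission
CHANNEL_1_CENTRE_MHZ = 2412  # centre of channel 1 (valid for channels 1-13)

def centre_frequency_mhz(channel: int) -> int:
    """Centre frequency of 2.4 GHz channels 1-13."""
    return CHANNEL_1_CENTRE_MHZ + CHANNEL_SPACING_MHZ * (channel - 1)

def channels_overlap(a: int, b: int) -> bool:
    """Approximate overlap test: two ~20 MHz transmissions overlap when their
    centres are less than 20 MHz apart. Channels exactly four apart sit
    20 MHz apart, which this model treats as the negligible-interference
    case described in the text."""
    return abs(centre_frequency_mhz(a) - centre_frequency_mhz(b)) < OCCUPIED_MHZ

print(centre_frequency_mhz(6))   # 2437
print(channels_overlap(2, 7))    # False: centres 25 MHz apart (the five-apart rule)
print(channels_overlap(1, 3))    # True: centres only 10 MHz apart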
Other devices use the 2.4 GHz band: microwave ovens, ISM band devices, security cameras, Zigbee devices, Bluetooth devices, video senders, cordless phones, baby monitors, and, in some countries, amateur radio, all of which can cause significant additional interference. It is also an issue when municipalities or other large entities (such as universities) seek to provide large-area coverage.

On some 5 GHz bands, interference from radar systems can occur in some places. Base stations that support those bands employ Dynamic Frequency Selection, which listens for radar and, if radar is found, does not permit a network on that band. These bands can be used by low-power transmitters without a licence, and with few restrictions. However, while unintended interference is common, users that have been found to cause deliberate interference (particularly for attempting to locally monopolize these bands for commercial purposes) have been issued large fines.

Throughput

Various layer-2 variants of IEEE 802.11 have different characteristics. Across all flavours of 802.11, maximum achievable throughputs are given either based on measurements under ideal conditions or in the layer-2 data rates. This, however, does not apply to typical deployments in which data are transferred between two endpoints, of which at least one is typically connected to a wired infrastructure and the other is connected to an infrastructure via a wireless link. This means that, typically, data frames pass an 802.11 (WLAN) medium and are converted to 802.3 (Ethernet) or vice versa.

Due to the difference in the frame (header) lengths of these two media, the packet size of an application determines the speed of the data transfer. This means that an application that uses small packets (e.g. VoIP) creates a data flow with high overhead traffic (low goodput); a toy calculation of this effect is sketched below. Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e. the data rate) and the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices.

The same references apply to the attached throughput graphs, which show measurements of UDP throughput. Each represents an average of 25 throughput measurements (the error bars are there, but barely visible due to the small variation), with a specific packet size (small or large) and a specific data rate (10 kbit/s - 100 Mbit/s). Markers for traffic profiles of common applications are included as well. This text and the measurements do not cover packet errors, but information about this can be found at the above references. The table below shows the maximum achievable (application-specific) UDP throughput in the same scenarios (same references again) with various WLAN (802.11) flavours. The measurement hosts were 25 metres apart; loss is again ignored.

Hardware

Wi-Fi allows wireless deployment of local area networks (LANs). Also, spaces where cables cannot be run, such as outdoor areas and historical buildings, can host wireless LANs. However, building walls of certain materials, such as stone with high metal content, can block Wi-Fi signals.

A Wi-Fi device is a short-range wireless device. Wi-Fi devices are fabricated on RF CMOS integrated circuit (RF circuit) chips. Since the early 2000s, manufacturers have been building wireless network adapters into most laptops.
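As a toy illustration of the goodput effect described under Throughput above (not from the original article; the fixed per-packet overhead below is an assumed placeholder, not a measured 802.11 figure):

def goodput_fraction(payload_bytes: int, overhead_bytes: int = 100) -> float:
    """Share of the link rate left for application payload when every packet
    carries a fixed overhead (headers, preamble and acknowledgement time
    expressed as a byte equivalent). The 100-byte default is an assumed
    placeholder value, not a measured 802.11 figure."""
    return payload_bytes / (payload_bytes + overhead_bytes)

# Small VoIP-sized packets spend most of the air time on overhead,
# while large packets approach the nominal link rate.
for payload in (40, 160, 1500):
    print(f"{payload:4d}-byte payload -> {goodput_fraction(payload):.0%} efficiency")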
The price of chipsets for Wi-Fi continues to drop, making it an economical networking option included in ever more devices.

Different competitive brands of access points and client network interfaces can inter-operate at a basic level of service. Products designated as "Wi-Fi Certified" by the Wi-Fi Alliance are backward compatible. Unlike mobile phones, any standard Wi-Fi device works anywhere in the world.

Access point

A wireless access point (WAP) connects a group of wireless devices to an adjacent wired LAN. An access point resembles a network hub, relaying data between connected wireless devices in addition to a (usually) single connected wired device, most often an Ethernet hub or switch, allowing wireless devices to communicate with other wired devices.

Wireless adapter

Wireless adapters allow devices to connect to a wireless network. These adapters connect to devices using various external or internal interconnects such as mini PCIe (mPCIe, M.2), USB, ExpressCard and, previously, PCI, CardBus, and PC Card. As of 2010, most newer laptop computers come equipped with built-in internal adapters.

Router

Wireless routers integrate a wireless access point, an Ethernet switch, and internal router firmware that provides IP routing, NAT, and DNS forwarding through an integrated WAN interface. A wireless router allows wired and wireless Ethernet LAN devices to connect to a (usually) single WAN device such as a cable modem, DSL modem, or optical modem. A wireless router allows all three devices, mainly the access point and router, to be configured through one central utility. This utility is usually an integrated web server that is accessible to wired and wireless LAN clients and often optionally to WAN clients. This utility may also be an application that is run on a computer, as is the case with Apple's AirPort, which is managed with the AirPort Utility on macOS and iOS.

Bridge

Wireless network bridges can act to connect two networks to form a single network at the data-link layer over Wi-Fi. The main standard is the wireless distribution system (WDS). Wireless bridging can connect a wired network to a wireless network. A bridge differs from an access point: an access point typically connects wireless devices to one wired network. Two wireless bridge devices may be used to connect two wired networks over a wireless link, useful in situations where a wired connection may be unavailable, such as between two separate homes, or for devices that have no wireless networking capability (but have wired networking capability), such as consumer entertainment devices.

Alternatively, a wireless bridge can be used to enable a device that supports a wired connection to operate at a wireless networking standard that is faster than the standard supported by the device's own wireless connectivity feature (external dongle or inbuilt); for example, enabling Wireless-N speeds (up to the maximum supported speed on the wired Ethernet port on both the bridge and the connected devices, including the wireless access point) for a device that only supports Wireless-G. A dual-band wireless bridge can also be used to enable 5 GHz wireless network operation on a device that only supports 2.4 GHz wireless and has a wired Ethernet port.

Repeater

Wireless range-extenders or wireless repeaters can extend the range of an existing wireless network. Strategically placed range-extenders can elongate a signal area or allow for the signal area to reach around barriers such as those in L-shaped corridors.
Wireless devices connected through repeaters suffer from an increased latency for each hop, and there may be a reduction in the maximum available data throughput. In addition, the effect of additional users using a network employing wireless range-extenders is to consume the available bandwidth faster than would be the case where a single user migrates around a network employing extenders. For this reason, wireless range-extenders work best in networks supporting low traffic throughput requirements, such as cases in which a single user with a Wi-Fi-equipped tablet migrates around the combined extended and non-extended portions of the total connected network. Also, a wireless device connected to any of the repeaters in the chain has its data throughput limited by the "weakest link" in the chain between the connection origin and connection end. Networks using wireless extenders are more prone to degradation from interference from neighbouring access points that border portions of the extended network and that happen to occupy the same channel as the extended network.

Embedded systems

The security standard Wi-Fi Protected Setup allows embedded devices with a limited graphical user interface to connect to the Internet with ease. Wi-Fi Protected Setup has two configurations: the push-button configuration and the PIN configuration. These embedded devices, also part of the Internet of things, are low-power, battery-operated embedded systems. Several Wi-Fi manufacturers design chips and modules for embedded Wi-Fi, such as GainSpan.

Increasingly in recent years, embedded Wi-Fi modules have become available that incorporate a real-time operating system and provide a simple means of wirelessly enabling any device that can communicate via a serial port. This allows the design of simple monitoring devices. An example is a portable ECG device monitoring a patient at home. This Wi-Fi-enabled device can communicate via the Internet. These Wi-Fi modules are designed by OEMs so that implementers need only minimal Wi-Fi knowledge to provide Wi-Fi connectivity for their products.

In June 2014, Texas Instruments introduced the first ARM Cortex-M4 microcontroller with an onboard dedicated Wi-Fi MCU, the SimpleLink CC3200. It makes embedded systems with Wi-Fi connectivity possible to build as single-chip devices, which reduces their cost and minimum size, making it more practical to build wireless-networked controllers into inexpensive ordinary objects.

Security

The main issue with wireless network security is its simplified access to the network compared to traditional wired networks such as Ethernet. With wired networking, one must either gain access to a building (physically connecting into the internal network) or break through an external firewall. To access Wi-Fi, one must merely be within the range of the Wi-Fi network. Most business networks protect sensitive data and systems by attempting to disallow external access. Enabling wireless connectivity reduces security if the network uses inadequate or no encryption.

An attacker who has gained access to a Wi-Fi network router can initiate a DNS spoofing attack against any other user of the network by forging a response before the queried DNS server has a chance to reply.

Securing methods

A common measure to deter unauthorized users involves hiding the access point's name by disabling the SSID broadcast.
While effective against the casual user, it is ineffective as a security method because the SSID is broadcast in the clear in response to a client SSID query. Another method is to only allow computers with known MAC addresses to join the network, but determined eavesdroppers may be able to join the network by spoofing an authorized address. Wired Equivalent Privacy (WEP) encryption was designed to protect against casual snooping but it is no longer considered secure. Tools such as AirSnort or Aircrack-ng can quickly recover WEP encryption keys. Because of WEP's weakness the Wi-Fi Alliance approved Wi-Fi Protected Access (WPA) which uses TKIP. WPA was specifically designed to work with older equipment usually through a firmware upgrade. Though more secure than WEP, WPA has known vulnerabilities. The more secure WPA2 using Advanced Encryption Standard was introduced in 2004 and is supported by most new Wi-Fi devices. WPA2 is fully compatible with WPA. In 2017, a flaw in the WPA2 protocol was discovered, allowing a key replay attack, known as KRACK. A flaw in a feature added to Wi-Fi in 2007, called Wi-Fi Protected Setup (WPS), let WPA and WPA2 security be bypassed. The only remedy was to turn off Wi-Fi Protected Setup, which is not always possible. Virtual private networks can be used to improve the confidentiality of data carried through Wi-Fi networks, especially public Wi-Fi networks. A URI using the WIFI scheme can specify the SSID, encryption type, password/passphrase, and if the SSID is hidden or not, so users can follow links from QR codes, for instance, to join networks without having to manually enter the data. A MeCard-like format is supported by Android and iOS 11+. Common format: WIFI:S:<SSID>;T:<WEP|WPA|blank>;P:<PASSWORD>;H:<true|false|blank>; Sample WIFI:S:MySSID;T:WPA;P:MyPassW0rd;; Data security risks Wi-Fi access points typically default to an encryption-free (open) mode. Novice users benefit from a zero-configuration device that works out-of-the-box, but this default does not enable any wireless security, providing open wireless access to a LAN. To turn security on requires the user to configure the device, usually via a software graphical user interface (GUI). On unencrypted Wi-Fi networks connecting devices can monitor and record data (including personal information). Such networks can only be secured by using other means of protection, such as a VPN, or Hypertext Transfer Protocol over Transport Layer Security (HTTPS). The older wireless-encryption standard, Wired Equivalent Privacy (WEP), has been shown easily breakable even when correctly configured. Wi-Fi Protected Access (WPA) encryption, which became available in devices in 2003, aimed to solve this problem. Wi-Fi Protected Access 2 (WPA2) ratified in 2004 is considered secure, provided a strong passphrase is used. The 2003 version of WPA has not been considered secure since it was superseded by WPA2 in 2004. In 2018, WPA3 was announced as a replacement for WPA2, increasing security; it rolled out on 26 June. Piggybacking Piggybacking refers to access to a wireless Internet connection by bringing one's computer within the range of another's wireless connection, and using that service without the subscriber's explicit permission or knowledge. During the early popular adoption of 802.11, providing open access points for anyone within range to use was encouraged to cultivate wireless community networks, particularly since people on average use only a fraction of their downstream bandwidth at any given time. 
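Returning to the WIFI URI scheme described above, a minimal sketch in Python of composing such a string (the helper name is hypothetical; the backslash-escaping of the characters \ ; , : " follows the commonly used MeCard/ZXing convention):

def wifi_uri(ssid, auth="WPA", password=None, hidden=False):
    # Backslash-escape the MeCard special characters: \ ; , : "
    def esc(s):
        for ch in '\\;,:"':
            s = s.replace(ch, "\\" + ch)
        return s
    parts = ["WIFI:S:" + esc(ssid), "T:" + auth]
    if password is not None:
        parts.append("P:" + esc(password))
    if hidden:
        parts.append("H:true")
    return ";".join(parts) + ";;"

print(wifi_uri("MySSID", "WPA", "MyPassW0rd"))
# prints: WIFI:S:MySSID;T:WPA;P:MyPassW0rd;;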
Recreational logging and mapping of other people's access points have become known as wardriving. Indeed, many access points are intentionally installed without security turned on so that they can be used as a free service. Providing access to one's Internet connection in this fashion may breach the Terms of Service or contract with the ISP. These activities do not result in sanctions in most jurisdictions; however, legislation and case law differ considerably across the world. A proposal to leave graffiti describing available services was called warchalking.

Piggybacking often occurs unintentionally – a technically unfamiliar user might not change the default "unsecured" settings of their access point, and operating systems can be configured to connect automatically to any available wireless network. A user who happens to start up a laptop in the vicinity of an access point may find the computer has joined the network without any visible indication. Moreover, a user intending to join one network may instead end up on another one if the latter has a stronger signal. In combination with automatic discovery of other network resources (see DHCP and Zeroconf) this could lead wireless users to send sensitive data to the wrong middle-man when seeking a destination (see man-in-the-middle attack). For example, a user could inadvertently use an unsecured network to log into a website, thereby making the login credentials available to anyone listening, if the website uses an insecure protocol such as plain HTTP without TLS. On an unsecured access point, an unauthorized user can obtain security information (factory preset passphrase or Wi-Fi Protected Setup PIN) from a label on a wireless access point and use this information (or connect by the Wi-Fi Protected Setup pushbutton method) to commit unauthorized or unlawful activities.

Societal aspects
Wireless Internet access has become much more embedded in society, and it has thus changed how society functions in many ways.

Influence on developing countries
As of the late 2010s, over half the world's population did not have access to the Internet, prominently in rural areas of developing nations. Technology that has been implemented in more developed nations is often costly and energy-inefficient. This has led developing nations to use more low-tech networks, frequently implementing renewable power sources that can be maintained solely through solar power, creating networks that are resistant to disruptions such as power outages. For instance, in 2007, a network between Cabo Pantoja and Iquitos in Peru was erected in which all equipment is powered only by solar panels. These long-range Wi-Fi networks have two main uses: offering Internet access to populations in isolated villages, and providing healthcare to isolated communities. In the latter case, the network connects the central hospital in Iquitos to 15 medical outposts intended for remote diagnosis.

Work habits
Access to Wi-Fi in public spaces such as cafes or parks allows people, in particular freelancers, to work remotely. While the accessibility of Wi-Fi is the strongest factor when choosing a place to work (75% of people would choose a place that provides Wi-Fi over one that does not), other factors influence the choice of a specific hotspot. These range from the availability of other resources, like books, to the location of the workplace and the social aspect of meeting other people in the same place.
Moreover, the increase in people working from public places results in more customers for local businesses, thus providing an economic stimulus to the area. Additionally, the same study noted that a wireless connection provides more freedom of movement while working. Whether working at home or from the office, a wireless connection allows workers to move between different rooms or areas. In some offices (notably Cisco offices in New York) the employees do not have assigned desks but can work from any office by connecting their laptops to a Wi-Fi hotspot.

Housing
The Internet has become an integral part of living. As of the late 2010s, 81.9% of American households had Internet access. Additionally, 89% of American households with broadband connect via wireless technologies, and 72.9% of American households have Wi-Fi. Wi-Fi networks have also affected how the interiors of homes and hotels are arranged. For instance, architects have described that their clients no longer want only one room as their home office, but would like to work near the fireplace or have the possibility of working in different rooms. This contradicts architects' pre-existing ideas about the use of the rooms they design. Additionally, some hotels have noted that guests prefer to stay in certain rooms because they receive a stronger Wi-Fi signal there.

Health concerns
The World Health Organization (WHO) says, "no health effects are expected from exposure to RF fields from base stations and wireless networks", but notes that it promotes research into effects from other RF sources. In 2011, the WHO's International Agency for Research on Cancer (IARC) classified radiofrequency electromagnetic fields as possibly carcinogenic to humans, Group 2B (a category used when "a causal association is considered credible, but when chance, bias or confounding cannot be ruled out with reasonable confidence"); this classification was based on risks associated with wireless phone use rather than Wi-Fi networks. The United Kingdom's Health Protection Agency reported in 2007 that a year of exposure to Wi-Fi results in the same amount of radiation as a 20-minute mobile phone call. A review of studies involving 725 people who claimed electromagnetic hypersensitivity "...suggests that 'electromagnetic hypersensitivity' is unrelated to the presence of an EMF, although more research into this phenomenon is required."

Alternatives
Several other wireless technologies provide alternatives to Wi-Fi for different use cases:
Bluetooth Low Energy, a low-power variant of Bluetooth
Bluetooth, a short-distance network
Cellular networks, used by smartphones
LoRa, for long range wireless with low data rate
NearLink, a short-range wireless technology standard
WiMAX, for providing long range wireless internet connectivity
Zigbee, a low-power, low data rate, short-distance communication protocol
Some alternatives are "no new wires", re-using existing cable:
G.hn, which uses existing home wiring, such as phone and power lines
Several wired technologies for computer networking provide viable alternatives to Wi-Fi:
Ethernet over twisted pair

See also
Gi-Fi, a term used by some trade press to refer to faster versions of the IEEE 802.11 standards
HiperLAN
High-speed multimedia radio
Indoor positioning system
Li-Fi
List of WLAN channels
Operating system Wi-Fi support
Passive Wi-Fi
Power-line communication
San Francisco Digital Inclusion Strategy
WiGig
Wireless Broadband Alliance
Wi-Fi Direct

Explanatory notes
References
Further reading

Australian inventions
Telecommunications-related introductions in 1997
Networking standards
Wireless communication systems
Dutch inventions
Wi-Fi
Technology,Engineering
10,804
3,652,277
https://en.wikipedia.org/wiki/Bigha
The bigha or beegah is a traditional unit of measurement of the area of land, commonly used in northern and eastern India, Bangladesh and Nepal. There is no "standard" size of bigha; it varies considerably from place to place, and sources give widely differing measurements for it. Its sub-unit is the biswa or katha in many regions, but these too have no "standard" size: a bigha may have 5 to 20 katha/biswa in different regions.

Uses in India
The bigha is a traditional unit of land in several parts of north and east India. Sale and purchase of land (particularly agricultural land) is still done unofficially in this unit. However, the area is recorded in hectares or square metres in official land records. The bigha varies in size from one part of India to another. Various states, and often regions within the same state, have different sizes attributed to the bigha. It is usually less than one acre (43,560 square feet or 4,046.8 square metres) but can extend up to 3 acres. In India, the bigha is commonly used in the states of Uttarakhand, Haryana, Himachal Pradesh, Punjab, Madhya Pradesh, Uttar Pradesh, Bihar, Jharkhand, West Bengal, Assam, Gujarat and Rajasthan. However, in Maharashtra and Tamil Nadu, the bigha is not in practical use as a measurement unit.

Assam
In Assam, a bigha is 1,600 sq yards (about 1,337.8 m²). One bigha is divided into 5 katha. Each katha consists of 20 lessa. Hence each katha is 320 sq yards (about 267.6 m²) in area, although this may vary within different regions of Assam. Four bighas together are further termed a pura.
1 Katha = 320 sq yard (about 267.6 m²)
1 Lessa = 16 sq yard (about 13.4 m²)
1 Acre = 3.025 bigha and 1 Hectare = 7.475 bigha (Assam)

Bihar
In Bihar, different regions have different sizes of bigha. Near the capital, Patna, one bigha is equivalent to 20 katha. 1 katha equals 151.25 square yards (about 126.5 m²). One katha is further subdivided into 20 dhur; hence each dhur is 7.5625 square yards (about 6.3 m²). One dhur is subdivided into 20 dhurki, each dhurki being about 0.38 square yards (0.32 m²). One decimal is equal to 435.60 sq feet or 1/100 acre. One acre = 4,840 sq yards (about 4,046.9 m²).
1 Bigha (बीघा) = 20 Katha (कठ्ठा) = 2,529 m² or 3,025 sq. yard or 27,225 sq. feet
1 Katha (कठ्ठा) = 20 Dhur (धुर) = 126.46 m² or 151.25 sq. yard or 1,361.25 sq. feet
1 Acre = 1.6 Bigha = 100 decimal; and 1 Hectare = 3.95 Bigha (Patna)
Note: the katha is of a different size in some districts, such as Muzaffarpur and West Champaran.

Himachal Pradesh
In Himachal Pradesh, five bighas equal one acre. Hence, 1 bigha = 968 square yards (about 809.4 m²). One hectare is equal to 12.35 bigha.

Punjab
In Punjab and Haryana, 2 bighas equal one acre; each bigha is 4 kanals, each kanal is 20 marlas, each marla is 9 square karam, and each square karam is 30.25 square feet (5.5 feet × 5.5 feet, each karam being 5.5 feet). See the measurement of land in Punjab as below:
1 Killa = 1 Acre (4,046.8 square metre or 4,840 square yard)
1 Killa = 8 Kanal = 2 Bigha = 160 Marla
1 Bigha = 4 Kanal = 0.5 Killa = 80 Marla
1 Bigha = 2,023.4 square metre or 2,420 square yard
1 Kanal = 20 Marla = 0.25 Bigha (605 square yard)
1 Marla = 25.29 square metre or 30.25 square yard

Madhya Pradesh
In Madhya Pradesh, one bigha has 20 katha.

Rajasthan
In Rajasthan, one pucca bigha = 3,025 square yards (about 2,529 m²); a plot of land 165 feet on each side (165 ft × 165 ft, i.e. 55 yd × 55 yd) is called a bigha. One kaccha bigha = 1,936 square yards (about 1,619 m²).

Uttar Pradesh
In Uttar Pradesh, one bigha can mean different things to people in different districts of the state. One bigha in UP ranges from 5 biswa to 20 biswa, where one biswa is 151.25 square yards (about 126.5 m²). In Western UP, 1 bigha is usually equal to 5 biswa, i.e. 756.25 square yards (about 632.3 m²).
In some districts it can be 6.6667 biswa, i.e. 1,008.33 square yards (about 843.1 m²). In Eastern UP, 1 bigha is equal to 20 biswa; hence 1 bigha in Purvanchal is 3,025 square yards (about 2,529 m²).

Uttarakhand
In Uttarakhand, 1 bigha is divided into 20 bissas or 12 nali. One bigha is 632.29 m², or about 756 sq. yards.

West Bengal
In West Bengal, the bigha was standardized under British colonial rule at 1,600 square yards (about 1,337.8 m²). This is often interpreted as being 1/3 acre (it is precisely 40⁄121 acre). Therefore, 1 acre = 3.025 bigha and 1 hectare = 7.475 bigha in West Bengal.

Uses in Bangladesh
The bigha is a traditional unit of land in the whole of Bangladesh, with land purchases still being undertaken in this unit. One bigha is equal to 20 katha (14,400 square feet or 1,600 square yards), as standardized in pre-partition Bengal during British rule. In other words, 3 bighas are just 0.5 katha, or 360 sq ft, short of 1 acre (one acre = 4,840 sq yd or 43,560 sq ft or 4,047 sq m).

Measurements of area
1 Katha (কাঠা) = 720 sq ft (80 sq yd or 66.89 sq m)
1 Bigha = 14,400 sq ft (1,337.8 sq m or 1,600 sq yd)
1 Acre (একর) = 3.025 bigha (বিঘা) = 60.5 Katha (কাঠা)
1 Hectare = 7.475 bigha = 149.5 Katha

Use in Nepal
A bigha is a customary unit of measurement in Nepal, equal to about 6,773 square metres. Officially, most measurements of land use units of either the bigha (in the Terai region) or the ropani (Nepali: रोपनी) (in hilly regions). The metric system (the SI unit of the square metre) is very seldom used officially in measuring the area of land.

Measurement of area in terms of bigha
1 Bigha (बिघा) = 20 Katha (कठ्ठा) (about 6,772.63 m² or 72,900 sq. ft)
1 Katha (कठ्ठा) = 20 Dhur (धुर) (about 338.63 m² or 3,645 sq. ft)
1 Dhur (धुर) = 16.93 m² or 182.25 sq. ft
1 Kanwa (कनवा) = 1/16 Dhur (धुर)
1 Kanaee (कनई) = 1/16 Kanwa (कनवा) [Note: the kanwa is largely obsolete and is used only when tiny plots of land are very precious.]
1 Bigha = 13.31 Ropani (रोपनी)
1 Ropani = 16 aana (आना) (about 508.72 m² or 5,476 sq. ft)
1 aana = 4 paisa (पैसा) (about 31.80 m² or 342.25 sq. ft)
1 paisa = 4 daam (दाम) (7.95 m²)
1 Bigha = 6,772.63 m²
1 Bigha = 0.677263 hectare = 1.6735 acre
1 Hectare = 19.6565 Ropani

In popular culture
The classic Hindi movie Do Bigha Zamin ("Two bighas of land", 1953) by Bimal Roy portrayed the struggle of a poor peasant with very little landholding.

See also
Doab
Jagir
Khadir and Bangar
Barani, Nahri, Chahi, Taal
Banjar, Jungle, Abadi, Shamlat, Gair Mumkin
Measurement of land in Punjab
Patwari
Zaildar
Zamindar

References

Sources
Area conversion, royalreality.com (archived 27 April 2006)
Land glossary, Bhulekh – etawah.nic.in (archived 29 June 2011)
Land measurement in India, landzone.in
Bhulekh, uk.gov.in

External links
Bigha at Sizes.com

Units of area
Customary units in India
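Because the bigha varies by region, conversions such as those above are easiest to keep straight with a small lookup table of square-yard equivalents. A minimal sketch in Python (the table reflects the regional figures quoted in this article; the table and function names are illustrative):

SQ_M_PER_SQ_YD = 0.83612736  # exact: (0.9144 m per yard) squared

# Square yards per bigha, per the regional figures quoted above.
BIGHA_SQ_YD = {
    "Assam": 1600,
    "West Bengal": 1600,
    "Bihar (Patna)": 3025,
    "Himachal Pradesh": 968,
    "Punjab/Haryana": 2420,
    "Western UP": 756.25,
    "Eastern UP": 3025,
}

def bigha_to_m2(region, bighas=1.0):
    """Convert a regional bigha count to square metres."""
    return BIGHA_SQ_YD[region] * SQ_M_PER_SQ_YD * bighas

print(round(bigha_to_m2("West Bengal"), 1))   # 1337.8
print(round(bigha_to_m2("Bihar (Patna)")))    # 2529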
Bigha
Mathematics
1,863
22,448,416
https://en.wikipedia.org/wiki/Hebeloma%20vaccinum
Hebeloma vaccinum is a species of agaric fungus in the family Hymenogastraceae. It was described as new to science in 1965 by French mycologist Henri Romagnesi. See also List of Hebeloma species References Fungi described in 1965 vaccinum Fungi of Europe Fungus species
Hebeloma vaccinum
Biology
66
2,216,678
https://en.wikipedia.org/wiki/Term%20algebra
In universal algebra and mathematical logic, a term algebra is a freely generated algebraic structure over a given signature. For example, in a signature consisting of a single binary operation, the term algebra over a set X of variables is exactly the free magma generated by X. Other synonyms for the notion include absolutely free algebra and anarchic algebra. From a category theory perspective, a term algebra is the initial object for the category of all X-generated algebras of the same signature, and this object, unique up to isomorphism, is called an initial algebra; it generates by homomorphic projection all algebras in the category.

A similar notion is that of a Herbrand universe in logic, usually used under this name in logic programming, which is (absolutely freely) defined starting from the set of constants and function symbols in a set of clauses. That is, the Herbrand universe consists of all ground terms: terms that have no variables in them. An atomic formula or atom is commonly defined as a predicate applied to a tuple of terms; a ground atom is then a predicate in which only ground terms appear. The Herbrand base is the set of all ground atoms that can be formed from predicate symbols in the original set of clauses and terms in its Herbrand universe. These two concepts are named after Jacques Herbrand.

Term algebras also play a role in the semantics of abstract data types, where an abstract data type declaration provides the signature of a multi-sorted algebraic structure and the term algebra is a concrete model of the abstract declaration.

Universal algebra
A type $\Sigma$ is a set of function symbols, with each having an associated arity (i.e. number of inputs). For any non-negative integer $n$, let $\Sigma_n$ denote the function symbols in $\Sigma$ of arity $n$. A constant is a function symbol of arity 0.

Let $\Sigma$ be a type, and let $X$ be a non-empty set of symbols, representing the variable symbols. (For simplicity, assume $X$ and $\Sigma$ are disjoint.) Then the set of terms $T_{\Sigma}(X)$ of type $\Sigma$ over $X$ is the set of all well-formed strings that can be constructed using the variable symbols of $X$ and the constants and operations of $\Sigma$. Formally, $T_{\Sigma}(X)$ is the smallest set such that:
$X \cup \Sigma_0 \subseteq T_{\Sigma}(X)$ — each variable symbol from $X$ is a term in $T_{\Sigma}(X)$, and so is each constant symbol from $\Sigma_0$.
For all $n \geq 1$, for all function symbols $f \in \Sigma_n$ and terms $t_1, \ldots, t_n \in T_{\Sigma}(X)$, we have the string $f(t_1, \ldots, t_n) \in T_{\Sigma}(X)$ — given terms $t_1, \ldots, t_n$, the application of an $n$-ary function symbol $f$ to them represents again a term.

The term algebra $\mathcal{T}(\Sigma, X)$ of type $\Sigma$ over $X$ is, in summary, the algebra of type $\Sigma$ that maps each expression to its string representation. Formally, $\mathcal{T}(\Sigma, X)$ is defined as follows:
The domain of $\mathcal{T}(\Sigma, X)$ is $T_{\Sigma}(X)$.
For each nullary function $c$ in $\Sigma_0$, $c^{\mathcal{T}(\Sigma, X)}$ is defined as the string $c$.
For all $n \geq 1$, for each $n$-ary function $f$ in $\Sigma_n$ and elements $t_1, \ldots, t_n$ in the domain, $f^{\mathcal{T}(\Sigma, X)}(t_1, \ldots, t_n)$ is defined as the string $f(t_1, \ldots, t_n)$.

A term algebra is called absolutely free because for any algebra $\mathcal{A}$ of type $\Sigma$, and for any function $h \colon X \to \mathcal{A}$, $h$ extends to a unique homomorphism $h^{\ast} \colon \mathcal{T}(\Sigma, X) \to \mathcal{A}$, which simply evaluates each term $t \in T_{\Sigma}(X)$ to its corresponding value $h^{\ast}(t)$. Formally, for each such $t$:
If $t \in X$, then $h^{\ast}(t) = h(t)$.
If $t = c \in \Sigma_0$, then $h^{\ast}(t) = c^{\mathcal{A}}$.
If $t = f(t_1, \ldots, t_n)$ where $f \in \Sigma_n$ and $t_1, \ldots, t_n \in T_{\Sigma}(X)$, then $h^{\ast}(t) = f^{\mathcal{A}}(h^{\ast}(t_1), \ldots, h^{\ast}(t_n))$.

Example
As an example type inspired from integer arithmetic, $\Sigma$ can be defined by $\Sigma_0 = \{0, 1\}$, $\Sigma_2 = \{+, \ast\}$, and $\Sigma_n = \emptyset$ for each other $n$. The best-known algebra of type $\Sigma$ has the natural numbers as its domain and interprets $0$, $1$, $+$, and $\ast$ in the usual way; we refer to it as $\mathcal{N}$. For the example variable set $X = \{x, y\}$, we are going to investigate the term algebra $\mathcal{T}(\Sigma, X)$ of type $\Sigma$ over $X$.

First, the set $T_{\Sigma}(X)$ of terms of type $\Sigma$ over $X$ is considered. We write out membership explicitly, since terms otherwise may be hard to recognize due to their uncommon syntactic form. We have e.g.
$x \in T_{\Sigma}(X)$, since $x$ is a variable symbol; $1 \in T_{\Sigma}(X)$, since $1$ is a constant symbol; hence $+(x, 1) \in T_{\Sigma}(X)$, since $+$ is a 2-ary function symbol; hence, in turn, $\ast(+(x, 1), y) \in T_{\Sigma}(X)$, since $\ast$ is a 2-ary function symbol.

More generally, each string in $T_{\Sigma}(X)$ corresponds to a mathematical expression built from the admitted symbols and written in Polish prefix notation; for example, the term $\ast(+(x, 1), y)$ corresponds to the expression $(x + 1) \ast y$ in usual infix notation. No parentheses are needed to avoid ambiguities in Polish notation; e.g. the infix expression $x + (1 \ast y)$ corresponds to the term $+(x, \ast(1, y))$.

To give some counter-examples, we have e.g.
$2 \notin T_{\Sigma}(X)$, since $2$ is neither an admitted variable symbol nor an admitted constant symbol;
$-(x, 1) \notin T_{\Sigma}(X)$, for the same reason ($-$ is not an admitted function symbol);
$+(x) \notin T_{\Sigma}(X)$, since $+$ is a 2-ary function symbol, but is used here with only one argument term (viz. $x$).

Now that the term set $T_{\Sigma}(X)$ is established, we consider the term algebra $\mathcal{T}(\Sigma, X)$ of type $\Sigma$ over $X$. This algebra uses $T_{\Sigma}(X)$ as its domain, on which addition and multiplication need to be defined. The addition function $+^{\mathcal{T}}$ takes two terms $p$ and $q$ and returns the term $+(p, q)$; similarly, the multiplication function $\ast^{\mathcal{T}}$ maps given terms $p$ and $q$ to the term $\ast(p, q)$. For example, $+^{\mathcal{T}}(\ast(+(x, 1), y), 0)$ evaluates to the term $+(\ast(+(x, 1), y), 0)$. Informally, the operations $+^{\mathcal{T}}$ and $\ast^{\mathcal{T}}$ are both "sluggards" in that they just record what computation should be done, rather than doing it.

As an example of the unique extendability of a homomorphism, consider a function $h \colon X \to \mathbb{N}$ assigning a natural number to each of the variable symbols $x$ and $y$. Informally, $h$ defines an assignment of values to variable symbols, and once this is done, every term from $T_{\Sigma}(X)$ can be evaluated in a unique way in $\mathcal{N}$. For example,
$h^{\ast}(+(x, 1)) = h^{\ast}(x) + h^{\ast}(1) = h(x) + 1 .$
In a similar way, one obtains
$h^{\ast}(\ast(+(x, 1), y)) = h^{\ast}(+(x, 1)) \cdot h^{\ast}(y) = (h(x) + 1) \cdot h(y) .$

Herbrand base
The signature σ of a language is a triple <O, F, P> consisting of the alphabet of constants O, function symbols F, and predicates P. The Herbrand base of a signature σ consists of all ground atoms of σ: of all formulas of the form R(t1, ..., tn), where t1, ..., tn are terms containing no variables (i.e. elements of the Herbrand universe) and R is an n-ary relation symbol (i.e. predicate). In the case of logic with equality, it also contains all equations of the form t1 = t2, where t1 and t2 contain no variables.

Decidability
Term algebras can be shown decidable using quantifier elimination. The complexity of the decision problem is in NONELEMENTARY because binary constructors are injective and hence pairing functions.

See also
Answer-set programming
Clone (algebra)
Domain of discourse / Universe (mathematics)
Rabin's tree theorem (the monadic theory of the infinite complete binary tree is decidable)
Initial algebra
Abstract data type
Term rewriting system

References

Further reading
Joel Berman (2005). "The structure of free algebras". In Structural Theory of Automata, Semigroups, and Universal Algebra. Springer. pp. 47–76.

External links

Universal algebra
Mathematical logic
Free algebraic structures
Unification (computer science)
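The free construction described above translates almost directly into code: terms are trees, and an assignment of values to variables extends uniquely to an evaluation homomorphism. A minimal sketch in Python, using the example type with constants 0, 1 and binary +, * (all names here are illustrative, not from any particular library):

from dataclasses import dataclass

@dataclass(frozen=True)
class Term:
    op: str                 # variable name, constant symbol, or function symbol
    args: tuple = ()        # empty for variables and constants

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
CONSTS = {"0": 0, "1": 1}

def evaluate(t: Term, h: dict):
    """The unique homomorphic extension h* of the variable assignment h."""
    if t.op in h:                       # t is a variable symbol
        return h[t.op]
    if t.op in CONSTS and not t.args:   # t is a constant symbol
        return CONSTS[t.op]
    return OPS[t.op](*(evaluate(a, h) for a in t.args))

# The term *(+(x,1),y), i.e. (x+1)*y in infix notation:
x, y, one = Term("x"), Term("y"), Term("1")
t = Term("*", (Term("+", (x, one)), y))
print(evaluate(t, {"x": 2, "y": 3}))    # (2+1)*3 = 9

A term with no variables is a ground term, so the Herbrand universe of the signature corresponds to the set of Term values built from CONSTS and OPS alone.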
Term algebra
Mathematics
1,356
10,807,859
https://en.wikipedia.org/wiki/Isotype%20%28immunology%29
In immunology, antibodies (immunoglobulins (Ig)) are classified into several types called isotypes or classes. The variable (V) regions near the tip of the antibody can differ from molecule to molecule in countless ways, allowing it to specifically target an antigen (or more exactly, an epitope). In contrast, the constant (C) regions only occur in a few variants, which define the antibody's class. Antibodies of different classes activate distinct effector mechanisms in response to an antigen (triggering different elements of the innate immune system). They appear at different stages of an immune response, differ in structural features, and in their location around the body.

Isotype expression reflects the maturation stage of a B cell. Naive B cells express IgM and IgD isotypes with unmutated variable genes, which are produced from the same initial transcript following alternative splicing. Expression of other antibody isotypes (in humans: IgG, IgA, and IgE) occurs via a process of class switching after antigen exposure. Class switching is mediated by the enzyme AID (activation-induced cytidine deaminase) and occurs after the B cell binds an antigen through its B cell receptor. Class-switching usually requires interaction with a T helper cell.

In humans, there are five heavy chain isotypes α, δ, γ, ε, μ, corresponding to five antibody isotypes:
α – IgA, further divided into subclasses IgA1 and IgA2
δ – IgD
γ – IgG, further divided into subclasses IgG1 to IgG4
ε – IgE
μ – IgM
There are also two light chain isotypes κ and λ; however, there is no significant difference in function between the two. Thus an antibody isotype is determined by the constant regions of the heavy chains only.

IgM is first expressed as a monomer on the surface of immature B cells. Upon antigenic stimulation, IgM+ B cells secrete pentameric IgM antibody formed by five Ig monomers which are linked via disulfide bonds. The pentamer also contains a polypeptide J-chain, which links two of the monomers and facilitates secretion at mucosal surfaces. The pentameric structure of IgM antibodies makes them efficient at binding antigens with repetitive epitopes (e.g. bacterial capsule, viral capsid) and at activating the complement cascade. As IgM antibodies are expressed early in a B cell response, they are rarely highly mutated and have broad antigen reactivity, thus providing an early response to a wide range of antigens without the need for T cell help.

IgD isotypes are expressed on naive B cells as they leave the bone marrow and populate secondary lymphoid organs. The levels of surface expression of the IgD isotype have been associated with differences in B cell activation status, but its role in serum is poorly understood.

The IgG, IgE and IgA antibody isotypes are generated following class-switching during the germinal centre reaction and provide different effector functions in response to specific antigens. IgG is the most abundant antibody class in the serum, and it is divided into four subclasses based on differences in the structure of the constant region genes and the ability to trigger different effector functions. Despite the high sequence similarity (90% identical on the amino acid level), each subclass has a different half-life, a unique profile of antigen binding and distinct capacity for complement activation. IgG1 antibodies are the most abundant IgG class and dominate the responses to protein antigens. Impaired production of IgG1 is observed in some cases of immunodeficiency and is associated with recurrent infections.
The IgG responses to bacterial capsular polysaccharide antigens are mediated primarily via the IgG2 subclass, and deficiencies in this subclass result in susceptibility to certain bacterial species. IgG2 represents the major antibody subclass reacting to glycan antigens, but the IgG1 and IgG3 subclasses have also been observed in such responses, particularly in the case of protein-glycan conjugates. IgG3 is an efficient activator of pro-inflammatory responses by triggering the classical complement pathway. It has the shortest half-life compared to the other IgG subclasses and is frequently present together with IgG1 in response to protein antigens after viral infections. IgG4 is the least abundant IgG subclass in the serum and is often generated following repeated exposure to the same antigen or during persistent infections.

IgA antibodies are secreted in the respiratory or the intestinal tract and act as the main mediators of mucosal immunity. They are monomeric in the serum, but appear as a dimer termed secretory IgA (sIgA) at mucosal surfaces. The secretory IgA is associated with a J-chain and another polypeptide chain called the secretory component. IgA antibodies are divided into two subclasses that differ in the size of their hinge region. IgA1 has a longer hinge region, which increases its sensitivity to bacterial proteases. Therefore, this subclass dominates the serum IgA, while IgA2 is predominantly found in mucosal secretions. Complement fixation by IgA is not a major effector mechanism at the mucosal surface, but the IgA receptor is expressed on neutrophils, which may be activated to mediate antibody-dependent cellular cytotoxicity. sIgA has also been shown to potentiate the immune response in intestinal tissue through uptake of the antigen, together with the bound antibody, by dendritic cells.

IgE antibodies are present at the lowest concentrations in peripheral blood but constitute the main antibody class in allergic responses through the engagement of mast cells, eosinophils and Langerhans cells. Responses to specific helminths are also characterised by elevated levels of IgE antibodies.

See also
Idiotype

References

External links
Overview at University of South Carolina School of Medicine
Overview at Southern Illinois University Carbondale

Immunology
Isotype (immunology)
Biology
1,264
65,905
https://en.wikipedia.org/wiki/Ideal%20gas
An ideal gas is a theoretical gas composed of many randomly moving point particles that are not subject to interparticle interactions. The ideal gas concept is useful because it obeys the ideal gas law, a simplified equation of state, and is amenable to analysis under statistical mechanics. The requirement of zero interaction can often be relaxed if, for example, the interaction is perfectly elastic or regarded as point-like collisions.

Under various conditions of temperature and pressure, many real gases behave qualitatively like an ideal gas where the gas molecules (or atoms for monatomic gas) play the role of the ideal particles. Many gases such as nitrogen, oxygen, hydrogen, noble gases, some heavier gases like carbon dioxide and mixtures such as air, can be treated as ideal gases within reasonable tolerances over a considerable parameter range around standard temperature and pressure. Generally, a gas behaves more like an ideal gas at higher temperature and lower pressure, as the potential energy due to intermolecular forces becomes less significant compared with the particles' kinetic energy, and the size of the molecules becomes less significant compared to the empty space between them. One mole of an ideal gas has a volume of 22.71095464 litres (exact value based on the 2019 revision of the SI) at standard temperature and pressure (a temperature of 273.15 K and an absolute pressure of exactly 10⁵ Pa).

The ideal gas model tends to fail at lower temperatures or higher pressures, when intermolecular forces and molecular size becomes important. It also fails for most heavy gases, such as many refrigerants, and for gases with strong intermolecular forces, notably water vapor. At high pressures, the volume of a real gas is often considerably larger than that of an ideal gas. At low temperatures, the pressure of a real gas is often considerably less than that of an ideal gas. At some point of low temperature and high pressure, real gases undergo a phase transition, such as to a liquid or a solid. The model of an ideal gas, however, does not describe or allow phase transitions. These must be modeled by more complex equations of state. The deviation from the ideal gas behavior can be described by a dimensionless quantity, the compressibility factor, $Z$.

The ideal gas model has been explored in both the Newtonian dynamics (as in "kinetic theory") and in quantum mechanics (as a "gas in a box"). The ideal gas model has also been used to model the behavior of electrons in a metal (in the Drude model and the free electron model), and it is one of the most important models in statistical mechanics. If the pressure of an ideal gas is reduced in a throttling process the temperature of the gas does not change. (If the pressure of a real gas is reduced in a throttling process, its temperature either falls or rises, depending on whether its Joule–Thomson coefficient is positive or negative.)

Types of ideal gas
There are three basic classes of ideal gas: the classical or Maxwell–Boltzmann ideal gas, the ideal quantum Bose gas, composed of bosons, and the ideal quantum Fermi gas, composed of fermions.

The classical ideal gas can be separated into two types: the classical thermodynamic ideal gas and the ideal quantum Boltzmann gas. Both are essentially the same, except that the classical thermodynamic ideal gas is based on classical statistical mechanics, and certain thermodynamic parameters such as the entropy are only specified to within an undetermined additive constant.
The ideal quantum Boltzmann gas overcomes this limitation by taking the limit of the quantum Bose gas and quantum Fermi gas in the limit of high temperature to specify these additive constants. The behavior of a quantum Boltzmann gas is the same as that of a classical ideal gas except for the specification of these constants. The results of the quantum Boltzmann gas are used in a number of cases including the Sackur–Tetrode equation for the entropy of an ideal gas and the Saha ionization equation for a weakly ionized plasma.

Classical thermodynamic ideal gas
The classical thermodynamic properties of an ideal gas can be described by two equations of state:

Ideal gas law
The ideal gas law is the equation of state for an ideal gas, given by
$$PV = nRT$$
where
$P$ is the pressure,
$V$ is the volume,
$n$ is the amount of substance of the gas (in moles),
$T$ is the absolute temperature, and
$R$ is the gas constant, which must be expressed in units consistent with those chosen for pressure, volume and temperature; for example, in SI units $R = 8.3145$ J⋅K⁻¹⋅mol⁻¹ when pressure is expressed in pascals, volume in cubic metres, and absolute temperature in kelvin.

The ideal gas law is an extension of experimentally discovered gas laws. It can also be derived from microscopic considerations. Real fluids at low density and high temperature approximate the behavior of a classical ideal gas. However, at lower temperatures or a higher density, a real fluid deviates strongly from the behavior of an ideal gas, particularly as it condenses from a gas into a liquid or as it deposits from a gas into a solid. This deviation is expressed as a compressibility factor.

This equation is derived from
Boyle's law: $V \propto \frac{1}{P}$ at constant $T$ and $n$;
Charles's law: $V \propto T$ at constant $P$ and $n$;
Avogadro's law: $V \propto n$ at constant $T$ and $P$.
After combining the three laws we get $V \propto \frac{nT}{P}$. That is: $PV = nRT$.

Internal energy
The other equation of state of an ideal gas must express Joule's second law, that the internal energy of a fixed mass of ideal gas is a function only of its temperature, with $U = U(n, T)$. For the present purposes it is convenient to postulate an exemplary version of this law by writing
$$U = \hat{c}_V n R T$$
where
$U$ is the internal energy, and
$\hat{c}_V$ is the dimensionless specific heat capacity at constant volume, approximately $\tfrac{3}{2}$ for a monatomic gas, $\tfrac{5}{2}$ for a diatomic gas, and 3 for non-linear molecules if we treat translations and rotations classically and ignore the quantum vibrational contribution and electronic excitation. These formulas arise from application of the classical equipartition theorem to the translational and rotational degrees of freedom.

That $U$ for an ideal gas depends only on temperature is a consequence of the ideal gas law, although in the general case $\hat{c}_V$ depends on temperature and an integral is needed to compute $U$.

Microscopic model
In order to switch from macroscopic quantities (left hand side of the following equation) to microscopic ones (right hand side), we use
$$n R = N k_\mathrm{B}$$
where
$N$ is the number of gas particles, and
$k_\mathrm{B}$ is the Boltzmann constant ($1.380649 \times 10^{-23}$ J⋅K⁻¹).
The probability distribution of particles by velocity or energy is given by the Maxwell speed distribution.
The ideal gas model depends on the following assumptions:
The molecules of the gas are indistinguishable, small, hard spheres
All collisions are elastic and all motion is frictionless (no energy loss in motion or collision)
Newton's laws apply
The average distance between molecules is much larger than the size of the molecules
The molecules are constantly moving in random directions with a distribution of speeds
There are no attractive or repulsive forces between the molecules apart from those that determine their point-like collisions
The only forces between the gas molecules and the surroundings are those that determine the point-like collisions of the molecules with the walls
In the simplest case, there are no long-range forces between the molecules of the gas and the surroundings.

The assumption of spherical particles is necessary so that there are no rotational modes allowed, unlike in a diatomic gas. The following three assumptions are very related: molecules are hard, collisions are elastic, and there are no inter-molecular forces. The assumption that the space between particles is much larger than the particles themselves is of paramount importance, and explains why the ideal gas approximation fails at high pressures.

Heat capacity
The dimensionless heat capacity at constant volume is generally defined by
$$\hat{c}_V = \frac{1}{n R} T \left(\frac{\partial S}{\partial T}\right)_V$$
where $S$ is the entropy. This quantity is generally a function of temperature due to intermolecular and intramolecular forces, but for moderate temperatures it is approximately constant. Specifically, the Equipartition Theorem predicts that the constant for a monatomic gas is $\hat{c}_V = \tfrac{3}{2}$, while for a diatomic gas it is $\hat{c}_V = \tfrac{5}{2}$ if vibrations are neglected (which is often an excellent approximation). Since the heat capacity depends on the atomic or molecular nature of the gas, macroscopic measurements on heat capacity provide useful information on the microscopic structure of the molecules.

The dimensionless heat capacity at constant pressure of an ideal gas is
$$\hat{c}_P = \hat{c}_V + 1 = \frac{1}{n R} \left(\frac{\partial H}{\partial T}\right)_P$$
where $H = U + PV$ is the enthalpy of the gas.

Sometimes, a distinction is made between an ideal gas, where $\hat{c}_V$ and $\hat{c}_P$ could vary with temperature, and a perfect gas, for which this is not the case.

The ratio of the constant volume and constant pressure heat capacity is the adiabatic index
$$\gamma = \frac{c_P}{c_V} .$$
For air, which is a mixture of gases that are mainly diatomic (nitrogen and oxygen), this ratio is often assumed to be 7/5, the value predicted by the classical Equipartition Theorem for diatomic gases.

Entropy
Using the results of thermodynamics only, we can go a long way in determining the expression for the entropy of an ideal gas. This is an important step since, according to the theory of thermodynamic potentials, if we can express the entropy as a function of $U$ ($U$ is a thermodynamic potential), volume $V$ and the number of particles $N$, then we will have a complete statement of the thermodynamic behavior of the ideal gas. We will be able to derive both the ideal gas law and the expression for internal energy from it.

Since the entropy is an exact differential, using the chain rule, the change in entropy when going from a reference state 0 to some other state with entropy $S$ may be written as $\Delta S$, where
$$\Delta S = S - S_0 = \int_{T_0}^{T} \left(\frac{\partial S}{\partial T}\right)_V dT + \int_{V_0}^{V} \left(\frac{\partial S}{\partial V}\right)_T dV$$
where the reference variables $T_0$, $V_0$ may be functions of the number of particles $N$.
Using the definition of the heat capacity at constant volume for the first differential and the appropriate Maxwell relation for the second, we have
$$\Delta S = \int_{T_0}^{T} \frac{C_V}{T} \, dT + \int_{V_0}^{V} \left(\frac{\partial P}{\partial T}\right)_V dV .$$
Expressing $C_V$ in terms of $\hat{c}_V$ as developed in the above section, differentiating the ideal gas equation of state, and integrating yields
$$\Delta S = \hat{c}_V n R \ln\frac{T}{T_0} + n R \ln\frac{V}{V_0} ,$$
which implies that the entropy may be expressed as
$$S = n R \ln\frac{V T^{\hat{c}_V}}{f(N)}$$
where all constants have been incorporated into the logarithm as $f(N)$, which is some function of the particle number $N$ having the same dimensions as $V T^{\hat{c}_V}$ in order that the argument of the logarithm be dimensionless. We now impose the constraint that the entropy be extensive. This will mean that when the extensive parameters ($V$ and $N$) are multiplied by a constant, the entropy will be multiplied by the same constant. Mathematically:
$$S(T, aV, aN) = a S(T, V, N) .$$
From this we find an equation for the function $f(N)$:
$$a f(N) = f(aN) .$$
Differentiating this with respect to $a$, setting $a$ equal to 1, and then solving the differential equation yields $f(N)$:
$$f(N) = \Phi N$$
where $\Phi$ may vary for different gases, but will be independent of the thermodynamic state of the gas. It will have the dimensions of $V T^{\hat{c}_V} / N$. Substituting into the equation for the entropy:
$$\frac{S}{n R} = \ln\frac{V T^{\hat{c}_V}}{N \Phi}$$
and using the expression for the internal energy of an ideal gas, the entropy may be written:
$$\frac{S}{n R} = \ln\left[\frac{V}{N} \left(\frac{U}{\hat{c}_V k_\mathrm{B} N}\right)^{\hat{c}_V} \frac{1}{\Phi}\right] .$$
Since this is an expression for entropy in terms of $U$, $V$, and $N$, it is a fundamental equation from which all other properties of the ideal gas may be derived. This is about as far as we can go using thermodynamics alone. Note that the above equation is flawed – as the temperature approaches zero, the entropy approaches negative infinity, in contradiction to the third law of thermodynamics. In the above "ideal" development, there is a critical point, not at absolute zero, at which the argument of the logarithm becomes unity, and the entropy becomes zero. This is unphysical. The above equation is a good approximation only when the argument of the logarithm is much larger than unity – the concept of an ideal gas breaks down at low values of $V T^{\hat{c}_V} / (N \Phi)$. Nevertheless, there will be a "best" value of the constant in the sense that the predicted entropy is as close as possible to the actual entropy, given the flawed assumption of ideality. A quantum-mechanical derivation of this constant is developed in the derivation of the Sackur–Tetrode equation which expresses the entropy of a monatomic ($\hat{c}_V = \tfrac{3}{2}$) ideal gas. In the Sackur–Tetrode theory the constant depends only upon the mass of the gas particle. The Sackur–Tetrode equation also suffers from a divergent entropy at absolute zero, but is a good approximation for the entropy of a monatomic ideal gas for high enough temperatures.

An alternative way of expressing the change in entropy, using $PV = nRT$ to trade the volume ratio for a pressure ratio:
$$\frac{\Delta S}{n R} = \hat{c}_P \ln\frac{T}{T_0} - \ln\frac{P}{P_0} .$$

Thermodynamic potentials
Expressing the entropy as a function of $T$, $V$, and $N$:
$$\frac{S}{n R} = \ln\frac{V T^{\hat{c}_V}}{N \Phi}$$
The chemical potential of the ideal gas is calculated from the corresponding equation of state (see thermodynamic potential):
$$\mu = \left(\frac{\partial G}{\partial N}\right)_{T, P}$$
where $G$ is the Gibbs free energy and is equal to $U + PV - TS$, so that:
$$\mu(T, V, N) = k_\mathrm{B} T \left(\hat{c}_V + 1 - \ln\frac{V T^{\hat{c}_V}}{N \Phi}\right) .$$
The chemical potential is usually referenced to the potential at some standard pressure $P_o$ so that, with $\mu^{o}(T) = \mu(T, P_o)$:
$$\mu(T, P) = \mu^{o}(T) + k_\mathrm{B} T \ln\frac{P}{P_o} .$$
For a mixture ($j = 1, 2, \ldots$) of ideal gases, each at partial pressure $P_j$, it can be shown that the chemical potential $\mu_j$ will be given by the above expression with the pressure $P$ replaced by $P_j$.

The thermodynamic potentials for an ideal gas can now be written as functions of $T$, $V$, and $N$ as:
$$U = \hat{c}_V N k_\mathrm{B} T$$
$$A = U - TS = \mu N - N k_\mathrm{B} T$$
$$H = U + PV = \hat{c}_P N k_\mathrm{B} T$$
$$G = U + PV - TS = \mu N$$
where, as before, $\hat{c}_P = \hat{c}_V + 1$. The most informative way of writing the potentials is in terms of their natural variables, since each of these equations can be used to derive all of the other thermodynamic variables of the system.
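As a quick worked example of the $\Delta S$ formula above (illustrative numbers of our own, not from the source): for an isothermal doubling of the volume of one mole of ideal gas, the temperature term vanishes and
\[
\Delta S = \hat{c}_V n R \ln\frac{T}{T_0} + n R \ln\frac{V}{V_0}
         = 0 + (1\ \mathrm{mol})\, R \ln 2 \approx 8.314 \times 0.693 \approx 5.76\ \mathrm{J\,K^{-1}} .
\]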
In terms of their natural variables, the thermodynamic potentials of a single-species ideal gas are (reconstructed here from the expressions above):
$$U(S, V, N) = \hat{c}_V N k_\mathrm{B} \left(\frac{N \Phi}{V}\right)^{1/\hat{c}_V} e^{S / (\hat{c}_V N k_\mathrm{B})}$$
$$A(T, V, N) = N k_\mathrm{B} T \left(\hat{c}_V - \ln\frac{V T^{\hat{c}_V}}{N \Phi}\right)$$
$$H(S, P, N) = \hat{c}_P N k_\mathrm{B} \left(\frac{P \Phi}{k_\mathrm{B}}\right)^{1/\hat{c}_P} e^{S / (\hat{c}_P N k_\mathrm{B})}$$
$$G(T, P, N) = N k_\mathrm{B} T \left(\hat{c}_P - \ln\frac{k_\mathrm{B} T^{\hat{c}_P}}{P \Phi}\right)$$
In statistical mechanics, the relationship between the Helmholtz free energy and the partition function is fundamental, and is used to calculate the thermodynamic properties of matter; see configuration integral for more details.

Speed of sound
The speed of sound in an ideal gas is given by the Newton-Laplace formula
$$c_\text{sound} = \sqrt{\frac{K_s}{\rho}}$$
where the isentropic bulk modulus is $K_s = \rho \left(\frac{\partial P}{\partial \rho}\right)_S$. For an isentropic process of an ideal gas, $P V^{\gamma} = \text{const}$, therefore
$$c_\text{sound} = \sqrt{\gamma \frac{P}{\rho}} = \sqrt{\frac{\gamma R T}{M}} .$$
Here,
$\gamma$ is the adiabatic index ($\hat{c}_P / \hat{c}_V$),
$S$ is the entropy per particle of the gas,
$\rho$ is the mass density of the gas,
$P$ is the pressure of the gas,
$R$ is the universal gas constant,
$T$ is the temperature, and
$M$ is the molar mass of the gas.

Table of ideal gas equations

Ideal quantum gases
In the above-mentioned Sackur–Tetrode equation, the best choice of the entropy constant $\Phi$ was found to be proportional to the quantum thermal wavelength of a particle, and the point at which the argument of the logarithm becomes unity is roughly equal to the point at which the average distance between particles becomes equal to the thermal wavelength. In fact, quantum theory itself predicts the same thing. Any gas behaves as an ideal gas at high enough temperature and low enough density, but at the point where the Sackur–Tetrode equation begins to break down, the gas will begin to behave as a quantum gas, composed of either bosons or fermions. (See the gas in a box article for a derivation of the ideal quantum gases, including the ideal Boltzmann gas.) Gases tend to behave as an ideal gas over a wider range of pressures when the temperature reaches the Boyle temperature.

Ideal Boltzmann gas
The ideal Boltzmann gas yields the same results as the classical thermodynamic gas, but makes the following identification for the undetermined constant $\Phi$:
$$\Phi = \frac{T^{3/2} \Lambda^3}{g}$$
where $\Lambda$ is the thermal de Broglie wavelength of the gas and $g$ is the degeneracy of states.

Ideal Bose and Fermi gases
An ideal gas of bosons (e.g. a photon gas) will be governed by Bose–Einstein statistics and the distribution of energy will be in the form of a Bose–Einstein distribution. An ideal gas of fermions will be governed by Fermi–Dirac statistics and the distribution of energy will be in the form of a Fermi–Dirac distribution.

See also
Dynamical billiards – billiard balls as a model of an ideal gas

Notes

References
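A short numeric cross-check of two results quoted above, written as an illustrative Python script (values rounded; the air molar mass is an assumed approximation):

import math

R = 8.314462618  # J/(K*mol), fixed by the 2019 SI redefinition (rounded)

# Molar volume at standard temperature and pressure (273.15 K, 1e5 Pa):
# PV = nRT  =>  V = RT/P
V_m = R * 273.15 / 1e5
print(round(V_m * 1000, 3), "L/mol")    # ~22.711 L/mol

# Speed of sound in air at 300 K: c = sqrt(gamma * R * T / M)
gamma, M_air = 7 / 5, 0.02897           # diatomic gamma; kg/mol (approx.)
c = math.sqrt(gamma * R * 300 / M_air)
print(round(c), "m/s")                  # ~347 m/s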
Ideal gas
Physics
3,244
13,129,612
https://en.wikipedia.org/wiki/Dessiatin
A dessiatin or desyatina (десятина) is an archaic measure of area used in tsarist Russia for land measurement. A dessiatin is equal to 2,400 square sazhens and is approximately equivalent to 2.70 English acres or, precisely, 10,925.397504 square metres (about 1.09 hectares).

Treasury/official desyatina = 10,925.4 m² = 117,600 sq ft = 2.7 acres = 2,400 square sazhens
Proprietor's desyatina = 14,567.2 m² = 156,800 sq ft = 3,200 square sazhens
Hence 3 proprietor's desyatinas = 4 official desyatinas.

See also
Historical Russian units of measurement

Units of area
Obsolete units of measurement
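The metric values follow directly from the sazhen (a worked check of our own, taking 1 sazhen = 2.1336 m):
\[
2400 \times (2.1336\ \mathrm{m})^2 = 2400 \times 4.5522\ \mathrm{m}^2 \approx 10{,}925.4\ \mathrm{m}^2 ,
\qquad
3 \times 14{,}567.2\ \mathrm{m}^2 = 4 \times 10{,}925.4\ \mathrm{m}^2 = 43{,}701.6\ \mathrm{m}^2 .
\]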
Dessiatin
Mathematics
170
15,074,798
https://en.wikipedia.org/wiki/SCN2B
Sodium channel subunit beta-2 is a protein that in humans is encoded by the SCN2B gene. See also Sodium channel References Further reading External links Ion channels
SCN2B
Chemistry
35
4,000,401
https://en.wikipedia.org/wiki/Kr%C3%BCppel%20associated%20box
The Krüppel associated box (KRAB) domain is a category of transcriptional repression domains present in approximately 400 human zinc finger protein-based transcription factors (KRAB zinc finger proteins). The KRAB domain typically consists of about 75 amino acid residues, while the minimal repression module is approximately 45 amino acid residues. It is predicted to function through protein-protein interactions via two amphipathic helices. The most prominent interacting protein is TRIM28, initially visualized as SMP1 and cloned as KAP1 and TIF1-beta. Substitutions for the conserved residues abolish repression. Over 10 independently encoded KRAB domains have been shown to be effective repressors of transcription, suggesting this activity to be a common property of the domain. KRAB domains can be fused with dCas9 CRISPR tools to form even stronger repressors.

Evolution
The KRAB domain was initially identified in 1988 as a periodic array of leucine residues separated by six amino acids, located 5' to the zinc finger region of KOX1/ZNF10, and coined a heptad repeat of leucines (also known as a leucine zipper). Later, this domain, found in association with C2H2 zinc finger proteins, was named the Krüppel associated box (KRAB). The KRAB domain is confined to genomes of tetrapod organisms. The KRAB-containing C2H2-ZNF genes constitute the largest sub-family of zinc finger genes: more than half of the C2H2-ZNF genes in the human genome are associated with a KRAB domain. They are prone to clustering and are found in large clusters in the human genome. The KRAB domain represents one of the strongest repressors in the human genome. When the KRAB domain was fused to the tetracycline repressor (TetR), the resulting TetR-KRAB fusion protein was the first engineered drug-inducible repressor shown to work in mammalian cells. Two distinct types of KRAB A domains can be structurally and functionally distinguished. Ancestral KRAB A domains, present in human PRDM9 proteins, are evolutionarily conserved even in mussel genomes. Modern KRAB A domain sequences are found in the coelacanth (Latimeria chalumnae) and in lungfish genomes.

Examples
Human genes encoding KRAB-ZFPs include KOX1/ZNF10, KOX8/ZNF708, ZNF43, ZNF184, ZNF91, HPF4, HTF10 and HTF34.

References

Further reading

Protein domains
Krüppel associated box
Chemistry,Biology
555
26,471
https://en.wikipedia.org/wiki/Rat
Rats are various medium-sized, long-tailed rodents. Species of rats are found throughout the order Rodentia, but stereotypical rats are found in the genus Rattus. Other rat genera include Neotoma (pack rats), Bandicota (bandicoot rats) and Dipodomys (kangaroo rats). Rats are typically distinguished from mice by their size. Usually the common name of a large muroid rodent will include the word "rat", while a smaller muroid's name will include "mouse". The common terms rat and mouse are not taxonomically specific. There are 56 known species of rats in the world.

Species and description
The best-known rat species are the black rat (Rattus rattus) and the brown rat (Rattus norvegicus). This group, generally known as the Old World rats or true rats, originated in Asia. Rats are bigger than most Old World mice, which are their relatives, but seldom weigh over 500 grams (about 1.1 lb) in the wild.

The term rat is also used in the names of other small mammals that are not true rats. Examples include the North American pack rats (also known as wood rats) and a number of species loosely called kangaroo rats. Rats such as the bandicoot rat (Bandicota bengalensis) are murine rodents related to true rats but are not members of the genus Rattus.

Male rats are called bucks; unmated females, does; pregnant or parent females, dams; and infants, kittens or pups. A group of rats is referred to as a mischief.

The common species are opportunistic survivors and often live with and near humans; therefore, they are known as commensals. They may cause substantial food losses, especially in developing countries. However, the widely distributed and problematic commensal species of rats are a minority in this diverse genus. Many species of rats are island endemics, some of which have become endangered due to habitat loss or competition with the brown, black, or Polynesian rat.

Wild rodents, including rats, can carry many different zoonotic pathogens, such as Leptospira, Toxoplasma gondii, and Campylobacter. The Black Death is traditionally believed to have been caused by the microorganism Yersinia pestis, carried by the tropical rat flea (Xenopsylla cheopis), which preyed on black rats living in European cities during the epidemic outbreaks of the Middle Ages; these rats served as transport hosts. Another zoonotic disease linked to the rat is foot-and-mouth disease.

Rats become sexually mature at age 6 weeks, but reach social maturity at about 5 to 6 months of age. The average lifespan of rats varies by species, but many only live about a year due to predation. The black and brown rats diverged from other Old World rats in the forests of Asia during the beginning of the Pleistocene.

Rat tails
The characteristic long tail of most rodents is a feature that has been extensively studied in various rat species models, which suggest three primary functions of this structure: thermoregulation, minor proprioception, and a nocifensive-mediated degloving response. Rodent tails—particularly in rat models—have been implicated with a thermoregulation function that follows from its anatomical construction. This particular tail morphology is evident across the family Muridae, in contrast to the bushier tails of Sciuridae, the squirrel family. The tail is hairless and thin skinned but highly vascularized, thus allowing for efficient countercurrent heat exchange with the environment.
The high muscular and connective tissue densities of the tail, along with ample muscle attachment sites along its plentiful caudal vertebrae, facilitate specific proprioceptive senses to help orient the rodent in a three-dimensional environment. Murids have evolved a unique defense mechanism termed degloving that allows for escape from predation through the loss of the outermost integumentary layer on the tail. However, this mechanism is associated with multiple pathologies that have been the subject of investigation. Multiple studies have explored the thermoregulatory capacity of rodent tails by subjecting test organisms to varying levels of physical activity and quantifying heat conduction via the animals' tails. One study demonstrated a significant disparity in heat dissipation from a rat's tail relative to its abdomen. This observation was attributed to the higher proportion of vascularity in the tail, as well as its higher surface-area-to-volume ratio, which directly relates to heat's ability to dissipate via the skin. These findings were confirmed in a separate study analyzing the relationships of heat storage and mechanical efficiency in rodents that exercise in warm environments. In this study, the tail was a focal point in measuring heat accumulation and modulation. On the other hand, the tail's ability to function as a proprioceptive sensor and modulator has also been investigated. As aforementioned, the tail demonstrates a high degree of muscularization and subsequent innervation that ostensibly collaborate in orienting the organism. Specifically, this is accomplished by coordinated flexion and extension of tail muscles to produce slight shifts in the organism's center of mass, orientation, etc., which ultimately assists it with achieving a state of proprioceptive balance in its environment. Further mechanobiological investigations of the constituent tendons in the tail of the rat have identified multiple factors that influence how the organism navigates its environment with this structure. A particular example is that of a study in which the morphology of these tendons is explicated in detail. Namely, cell viability tests of tendons of the rat's tail demonstrate a higher proportion of living fibroblasts that produce the collagen for these fibers. As in humans, these tendons contain a high density of golgi tendon organs that help the animal assess stretching of muscle in situ and adjust accordingly by relaying the information to higher cortical areas associated with balance, proprioception, and movement. The characteristic tail of murids also displays a unique defense mechanism known as degloving in which the outer layer of the integument can be detached in order to facilitate the animal's escape from a predator. This evolutionary selective pressure has persisted despite a multitude of pathologies that can manifest upon shedding part of the tail and exposing more interior elements to the environment. Paramount among these are bacterial and viral infection, as the high density of vascular tissue within the tail becomes exposed upon avulsion or similar injury to the structure. The degloving response is a nocifensive response, meaning that it occurs when the animal is subjected to acute pain, such as when a predator snatches the organism by the tail. As pets Specially bred rats have been kept as pets at least since the late 19th century. Pet rats are typically variants of the species brown rat, but black rats and giant pouched rats are also sometimes kept. 
Pet rats behave differently from their wild counterparts depending on how many generations they have been kept as pets. Pet rats do not pose any more of a risk of zoonotic diseases than pets such as cats or dogs. Tamed rats are generally friendly and can be taught to perform selected behaviors. Selective breeding has brought about different color and marking varieties in rats. Genetic mutations have also created different fur types, such as rex and hairless. A congenital malformation propagated through selective breeding created the dumbo rat, a popular pet choice due to its low, saucer-shaped ears. A breeding standard exists for rat fanciers wishing to breed and show their rats at a rat show.

As subjects for scientific research
In 1895, Clark University in Worcester, Massachusetts, established a population of domestic albino brown rats to study the effects of diet and for other physiological studies. Over the years, rats have been used in many experimental studies, adding to our understanding of genetics, diseases, the effects of drugs, and other topics that have provided a great benefit for the health and wellbeing of humankind.

The aortic arches of the rat are among the most commonly studied in murine models due to marked anatomical homology to the human cardiovascular system. Both rat and human aortic arches exhibit subsequent branching of the brachiocephalic trunk, left common carotid artery, and left subclavian artery, as well as geometrically similar, nonplanar curvature in the aortic branches. Aortic arches studied in rats exhibit abnormalities similar to those of humans, including altered pulmonary arteries and double or absent aortic arches. Despite the existing anatomical analogy in the intrathoracic position of the heart itself, the murine model of the heart and its structures remains a valuable tool for studies of human cardiovascular conditions.

The rat's larynx has been used in experiments involving inhalation toxicity, allograft rejection, and irradiation responses. One experiment described four features of the rat's larynx. The first was the location and attachments of the thyroarytenoid muscle and of two newly named muscles, the alar cricoarytenoid muscle and the superior cricoarytenoid muscle, the latter running from the arytenoid to a midline tubercle on the cricoid; these newly named muscles had not been seen in the human larynx, and the location and configuration of the laryngeal alar cartilage were also described. The second feature was the way the newly named muscles appear to be analogous to muscles in the human larynx. The third feature was that a clear understanding of how motor endplates (MEPs) are distributed in each of the laryngeal muscles is helpful in understanding the effects of botulinum toxin injection. The MEPs in the posterior cricoarytenoid muscle, lateral cricoarytenoid muscle, cricothyroid muscle, and superior cricoarytenoid muscle were concentrated mostly at the midbelly. In addition, the MEPs of the medial thyroarytenoid muscle were concentrated at the midbelly, while those of the lateral thyroarytenoid muscle were concentrated at the anterior third of the belly. The fourth and final feature that was clarified was how the MEPs are distributed within the thyroarytenoid muscle.

Laboratory rats have also proved valuable in psychological studies of learning and other mental processes (Barnett 2002), as well as to understand group behavior and overcrowding (with the work of John B. Calhoun on behavioral sink).
A 2007 study found rats to possess metacognition, a mental ability previously documented only in humans and some primates. Domestic rats differ from wild rats in many ways. They are calmer and less likely to bite; they can tolerate greater crowding; they breed earlier and produce more offspring; and their brains, livers, kidneys, adrenal glands, and hearts are smaller (Barnett 2002). Brown rats are often used as model organisms for scientific research. Since the publication of the rat genome sequence, and other advances such as the creation of a rat SNP chip and the production of knockout rats, the laboratory rat has become a useful genetic tool, although not as popular as mice. Entirely new breeds or "lines" of brown rats, such as the Wistar rat, have been bred for use in laboratories. Much of the genome of Rattus norvegicus has been sequenced. When it comes to conducting tests related to intelligence, learning, and drug abuse, rats are a popular choice due to their high intelligence, ingenuity, aggressiveness, and adaptability. Their psychology seems in many ways similar to that of humans. Inspired by B.F. Skinner's famous box, which dispensed food pellets when rats pushed a lever, photographer Augustin Lignier gave two rats periodic, unpredictable rewards for pressing a button. He likened their repeated button-pressing behaviors to people's fascination with digital and social media. General intelligence Early studies found evidence both for and against measurable intelligence using the "g factor" in rats. Part of the difficulty of understanding animal cognition, generally, is determining what to measure. One aspect of intelligence is the ability to learn, which can be measured using a maze like the T-maze. Experiments done in the 1920s showed that some rats performed better than others in maze tests, and if these rats were selectively bred, their offspring also performed better, suggesting that in rats an ability to learn was heritable. As food Rat meat is a food that, while taboo in some cultures, is a dietary staple in others. Working rats Rats have been used as working animals. Tasks for working rats include the sniffing of gunpowder residue, demining, acting, and animal-assisted therapy. Rats have a keen sense of smell and are easy to train. These characteristics have been employed, for example, by the Belgian non-governmental organization APOPO, which trains rats (specifically African giant pouched rats) to detect landmines and diagnose tuberculosis through smell. As pests Rats have long been considered deadly pests. Once considered a modern myth, the rat flood in India occurs every fifty years, as armies of bamboo rats descend upon rural areas and devour everything in their path. Rats have long been held up as the chief villain in the spread of bubonic plague; however, recent studies show that rats alone could not account for the rapid spread of the disease through Europe in the Middle Ages. Still, the Centers for Disease Control does list nearly a dozen diseases directly linked to rats. Most urban areas battle rat infestations. A 2015 study by the American Housing Survey (AHS) found that eighteen percent of homes in Philadelphia showed evidence of rodents. Boston, New York City, and Washington, D.C., also demonstrated significant rodent infestations. Indeed, rats in New York City are famous for their size and prevalence.
The urban legend that the rat population in Manhattan equals that of its human population was definitively refuted by Robert Sullivan in his book Rats, but it illustrates New Yorkers' awareness of the presence, and on occasion boldness and cleverness, of the rodents. New York has specific regulations for eradicating rats; multifamily residences and commercial businesses must use a specially trained and licensed rat catcher. Chicago was declared the "rattiest city" in the US by the pest control company Orkin in 2020, for the sixth consecutive time. It was followed by Los Angeles, New York, Washington, DC, and San Francisco. To help combat the problem, a Chicago animal shelter has placed more than 1,000 feral cats (sterilized and vaccinated) outside of homes and businesses since 2012, where they hunt and catch rats while also providing a deterrent simply by their presence. Rats have the ability to swim up sewer pipes into toilets. Rats will infest any area that provides shelter and easy access to sources of food and water, including under sinks, near garbage, and inside walls or cabinets. In the spread of disease Rats can serve as zoonotic vectors for certain pathogens and thus spread diseases such as bubonic plague, Lassa fever, leptospirosis, and Hantavirus infection. Researchers studying New York City wastewater have also cited rats as the potential source of "cryptic" SARS-CoV-2 lineages, due to unknown viral RNA fragments in sewage matching mutations previously shown to make SARS-CoV-2 more adept at rodent-based transmission. Rats are also associated with human dermatitis, because they are frequently infested with blood-feeding rodent mites such as the tropical rat mite (Ornithonyssus bacoti) and spiny rat mite (Laelaps echidnina), which will opportunistically bite and feed on humans; the condition is known as rat mite dermatitis. As invasive species When introduced into locations where rats previously did not exist, they can cause an enormous degree of environmental degradation. Rattus rattus, the black rat, is considered to be one of the world's worst invasive species. Also known as the ship rat, it has been carried worldwide as a stowaway on seagoing vessels for millennia and has usually accompanied humans to any new area visited or settled by sea; this is how rats first reached countries such as America and Australia. The similar species Rattus norvegicus, the brown rat or wharf rat, has also been carried worldwide by ships in recent centuries. The ship or wharf rat has contributed to the extinction of many species of wildlife, including birds, small mammals, reptiles, invertebrates, and plants, especially on islands. True rats are omnivorous, capable of eating a wide range of plant and animal foods, and have a very high birth rate. When introduced to a new area, they quickly reproduce to take advantage of the new food supply. In particular, they prey on the eggs and young of forest birds, which on isolated islands often have no other predators and thus no fear of them. Some experts believe that rats are to blame for between forty and sixty percent of all seabird and reptile extinctions, with ninety percent of those occurring on islands. Humans have thus indirectly caused the extinction of many species by accidentally introducing rats to new areas. Rat-free areas Rats are found in nearly all areas of Earth which are inhabited by human beings.
The only rat-free continent is Antarctica, which is too cold for rat survival outdoors, and whose lack of human habitation provides no buildings to shelter rats from the weather. However, rats have been introduced to many of the islands near Antarctica, and because of their destructive effect on native flora and fauna, efforts to eradicate them are ongoing. In particular, Bird Island (just off rat-infested South Georgia Island), where breeding seabirds could be badly affected if rats were introduced, is subject to special measures and regularly monitored for rat invasions. As part of island restoration, some islands' rat populations have been eradicated to protect or restore the ecology. Hawadax Island, Alaska, was declared rat-free after 229 years, and Campbell Island, New Zealand, after almost 200 years. Breaksea Island in New Zealand was declared rat-free in 1988 after an eradication campaign based on a successful trial on the smaller Hawea Island nearby. In January 2015, an international "Rat Team" (organized by the South Georgia Heritage Trust) set sail from the Falkland Islands for the British Overseas Territory of South Georgia and the South Sandwich Islands on board a ship carrying three helicopters and 100 tons of rat poison, with the objective of "reclaiming the island for its seabirds". Rats had wiped out more than 90% of the seabirds on South Georgia, and the sponsors hoped that once the rats were gone, the island would regain its former status as home to the greatest concentration of seabirds in the world. The Canadian province of Alberta is notable for being the largest inhabited area on Earth which is free of true rats, thanks to very aggressive government rat control policies. It has large numbers of native pack rats, also called bushy-tailed wood rats, but they are forest-dwelling vegetarians which are much less destructive than true rats. Alberta was settled by Europeans relatively late in North American history and only became a province in 1905. Black rats cannot survive in its climate at all, and brown rats must live near people and in their structures to survive the winters. There are numerous predators in Canada's vast natural areas which will eat non-native rats, so it took until 1950 for invading rats to make their way over land from Eastern Canada. Immediately upon their arrival at the eastern border with Saskatchewan, the Alberta government implemented an extremely aggressive rat control program to stop them from advancing further. A systematic detection and eradication system was used throughout a control zone along the eastern border to eliminate rat infestations before the rats could spread further into the province. Shotguns, bulldozers, high explosives, poison gas, and incendiaries were used to destroy rats, and numerous farm buildings were destroyed in the process. Initially, tons of arsenic trioxide were spread around thousands of farm yards to poison rats, but soon after the program commenced, the rodenticide and medical drug warfarin was introduced, which is much safer for people and more effective at killing rats than arsenic. Forceful government control measures, strong public support, and enthusiastic citizen participation continue to keep rat infestations to a minimum. The program's effectiveness has been aided by a similar but newer program in Saskatchewan which prevents rats from even reaching the Alberta border. Alberta still employs an armed rat patrol to control rats along Alberta's borders.
About ten individual rats are found and killed per year, and occasionally a large localized infestation has to be dug out with heavy machinery, but the number of permanent rat infestations is zero. In culture Ancient Romans did not generally differentiate between rats and mice, instead referring to the former as mus maximus (big mouse) and the latter as mus minimus (little mouse). On the Isle of Man, there is a taboo against the word "rat". Asian cultures The rat (sometimes referred to as a mouse) is the first of the twelve animals of the Chinese zodiac. People born in this year are expected to possess qualities associated with rats, including creativity, intelligence, honesty, generosity, ambition, a quick temper, and wastefulness. People born in a year of the rat are said to get along well with "monkeys" and "dragons", and to get along poorly with "horses". In Indian tradition, rats are seen as the vehicle of Ganesha, and a statue of a rat is always found in a temple of Ganesha. In the northwestern Indian city of Deshnoke, the rats at the Karni Mata Temple are held to be destined for reincarnation as Sadhus (Hindu holy men). The attending priests feed milk and grain to the rats, of which the pilgrims also partake. European cultures European associations with the rat are generally negative. For instance, "Rats!" is used as a substitute for various vulgar interjections in the English language. These associations do not draw, per se, from any biological or behavioral trait of the rat, but possibly from the association of rats (and fleas) with the 14th-century medieval plague called the Black Death. Rats are seen as vicious, unclean, parasitic animals that steal food and spread disease. In 1522, the rats in Autun, France, were charged and put on trial for destroying crops. However, some people in European cultures keep rats as pets and conversely find them to be tame, clean, intelligent, and playful. Rats are often used in scientific experiments; animal rights activists allege that the treatment of rats in this context is cruel. The term "lab rat" is used, typically in a self-effacing manner, to describe a person whose job function requires them to spend a majority of their work time engaged in bench-level research (such as postgraduate students in the sciences). Terminology Rats are frequently blamed for damaging food supplies and other goods, or for spreading disease. Their reputation has carried into common parlance: in the English language, "rat" is often an insult and is generally used to signify an unscrupulous character; as both noun and verb in criminal slang, and as a synonym for the term nark, it denotes an individual who works as a police informant or who has turned state's evidence, so "to rat on someone" is to betray them by informing the authorities of a crime or misdeed they committed. Describing a person as "rat-like" usually implies he or she is unattractive and suspicious. Writer/director Preston Sturges created the humorous alias "Ratskywatsky" for a soldier who seduced, impregnated, and abandoned the heroine of his 1944 film, The Miracle of Morgan's Creek. Among trade unions, the word "rat" is also a term for nonunion employers or breakers of union contracts, which is why unions use inflatable rats. Fiction Depictions of rats in fiction are historically inaccurate and negative. The most common falsehood is the squeaking almost always heard in otherwise realistic (i.e., non-anthropomorphic) portrayals.
While the recordings may be of actual squeaking rats, the noise is uncommon – rats may squeak only if distressed, hurt, or annoyed. Normal vocalizations are very high-pitched, well outside the range of human hearing. Rats are also often cast in vicious and aggressive roles when, in fact, their shyness is what helps keep them undiscovered for so long in an infested home. Actual portrayals of rats vary from negative to positive, with the majority negative or ambiguous. The rat plays the villain in several fictional mouse societies, from Brian Jacques's Redwall and Robin Jarvis's The Deptford Mice to the roles of Disney's Professor Ratigan and Kate DiCamillo's Roscuro and Botticelli. Rats have often been used as a mechanism in horror, serving as the titular evil in stories like The Rats or H.P. Lovecraft's The Rats in the Walls, and in films like Willard and Ben. Another terrifying use of rats is as a method of torture, for instance in Room 101 in George Orwell's Nineteen Eighty-Four or The Pit and the Pendulum by Edgar Allan Poe. Selfish helpfulness, a willingness to help only for a price, has also been attributed to fictional rats: Templeton, from E. B. White's Charlotte's Web, repeatedly reminds the other characters that he is only involved because it means more food for him, and the cellar-rat of John Masefield's The Midnight Folk requires bribery to be of any assistance. By contrast, the rats appearing in the Doctor Dolittle books tend to be highly positive and likeable characters, many of whom tell their remarkable life stories in the Mouse and Rat Club established by the animal-loving doctor. Some fictional works use rats as the main characters. Notable examples include the rat society created in Robert C. O'Brien's Mrs. Frisby and the Rats of NIMH, as well as Doctor Rat and Rizzo the Rat from The Muppets. Pixar's 2007 animated film Ratatouille is about a rat described by Roger Ebert as "earnest... lovable, determined, [and] gifted" who lives with a Parisian garbage-boy-turned-chef. Mon oncle d'Amérique ("My American Uncle"), a 1980 French film, illustrates Henri Laborit's theories on evolutionary psychology and human behaviors by using short sequences in the storyline showing lab rat experiments. In Harry Turtledove's science fiction novel Homeward Bound, humans unintentionally introduce rats to the ecology of the home world of an alien race which had previously invaded Earth and introduced some of its own fauna into Earth's environment. A. Bertram Chandler pitted the space-bound protagonist of a long series of novels, Commodore Grimes, against giant, intelligent rats who took over several stellar systems and enslaved their human inhabitants. "The Stainless Steel Rat" is the nickname of the (human) protagonist of a series of humorous science fiction novels by Harry Harrison. Wererats, therianthropic creatures able to take the shape of a rat, have appeared in the fantasy and horror genres since the 1970s. The term is a neologism coined in analogy to werewolf. The concept has since become common in role-playing games like Dungeons & Dragons and fantasy fiction like the Anita Blake series. The Pied Piper One of the oldest and most historic stories about rats is "The Pied Piper of Hamelin", in which a rat-catcher leads away an infestation with enchanted music. The piper is later refused payment, so he in turn leads away the town's children. This tale, traced to Germany around the late 13th century, has inspired adaptations in film, theatre, literature, and even opera.
The tale has been the subject of much research, and some theories intertwine it with events of the Black Death, in which black rats played an important role. Fictional works based on the tale that focus heavily on the rat aspect include Terry Pratchett's The Amazing Maurice and his Educated Rodents and the Belgian graphic novel The Ball of the Dead Rat. Furthermore, a linguistic phenomenon in which a wh-expression drags an entire encompassing phrase with it to the front of the clause has been named pied-piping after "The Pied Piper of Hamelin" (see also pied-piping with inversion). See also Chicago rat hole List of fictional rodents Rat-baiting Rat king Rodentology Rat Guard References Further reading List of books and articles about rats (a non-fiction list). Smith, Robert (rat-catcher) (1786). The Universal Directory for Taking Alive and Destroying Rats. External links High-Resolution Images of the Rat Brain (archived 3 March 2016) National Bio Resource Project for the Rat in Japan (archived 29 April 2005) Rat Behaviour and Biology Rat Genome Database Mammal common names Rodents Scavengers Extant Pleistocene first appearances Storage pests Paraphyletic groups Articles containing video clips Animals bred for albinism on a large scale
https://en.wikipedia.org/wiki/Monoxenous%20development
Monoxenous development, or monoxeny, characterizes a parasite whose development is restricted to a single host species. The etymology of the terms monoxeny / monoxenous derives from the two ancient Greek words μόνος (mónos), meaning "unique", and ξένος (xénos), meaning "foreign". In a monoxenous life cycle, the parasitic species may be strictly host specific (using only a single host species, such as gregarines) or not (e.g. Eimeria, Coccidia). References External links xeno-, xen- word info Parasitism
https://en.wikipedia.org/wiki/Teradyne
Teradyne, Inc. is an American automatic test equipment (ATE) designer and manufacturer based in North Reading, Massachusetts. Its high-profile customers include Samsung, Qualcomm, Intel, Analog Devices, Texas Instruments and IBM. History Teradyne was founded in 1960 by Alex d'Arbeloff and Nick DeWolf, who had been classmates at the Massachusetts Institute of Technology (MIT) in the late 1940s. They set up shop in rented space above Joe and Nemo's hot dog stand in downtown Boston. The name, Teradyne, was intended to represent a very forceful presence: one teradyne (1,000,000,000,000 dynes) equals 10 meganewtons (2,248,089 pounds-force or 1,019,716 kilograms-force). d'Arbeloff and DeWolf knew that testing electronic components in high-volume production would become a bottleneck unless the tasks performed by technicians and laboratory instruments could be automated. Their business plan involved a new breed of "industrial-grade" electronic test equipment, known for its technical performance, reliability and economic payback. In 1961, d'Arbeloff and DeWolf sold their first product, a logic-controlled go/no-go diode tester, to Raytheon. In the 1980s, Teradyne expanded its sub-assembly test business by acquiring Zehntel, a leading manufacturer of in-circuit board test systems. In 1987, the company introduced the first analog VLSI test system, the A500, which led the market in testing integrated devices that provided the interface between analog and digital data. The 1990s brought more diversification. The company acquired Megatest Corporation, which expanded its Semiconductor Test group to include smaller and less expensive testers than had previously been available. Teradyne also became a market leader in high-end system-on-a-chip (SoC) test with its Catalyst and Tiger test systems. In 2000, Teradyne Connection Systems acquired Herco Technologies and Synthane-Taylor, and a year later Teradyne acquired the circuit-board test and inspection leader GenRad, merging it into the Assembly Test Division. GenRad's Diagnostic Solutions, which made test equipment for the automotive manufacturing and service industries, became a separate product group for Teradyne. In 2006, Teradyne sold its two Boston buildings and consolidated all of its Boston-area staff at its current site in North Reading, Massachusetts. Teradyne grew its semiconductor test business with the addition of Nextest and Eagle Test Systems in 2008, serving the flash memory test market and the high-volume analog test market, respectively. That same year, Teradyne entered the disk-drive test market with the internally developed Neptune product, which serves the data-intensive internet and computing storage markets. In 2010, Teradyne celebrated its 50th anniversary. The following year, it acquired LitePoint Corporation, a leading provider of test instruments for use with wireless products such as laptop PCs, tablets, home networking equipment and cell phones. With the addition of LitePoint, Teradyne's product portfolio stretched from wafer test of semiconductor chips to system-level circuit boards to products ready for store shelves. Upon d'Arbeloff's retirement, George Chamillard assumed the post of President and CEO. He was in turn replaced at his own retirement by former CFO Mike Bradley. Bradley retired in January 2014 and was replaced by Semiconductor Test Division president Mark Jagiela. Teradyne operates major facilities around the world, with headquarters in North Reading, Massachusetts.
Timeline Notable milestones, major acquisitions and key innovations:
1960 - Teradyne founded in Boston, MA by Alex d'Arbeloff and Nick DeWolf.
1961 - First product, the D133 diode tester, sold to Raytheon Company.
1966 - Teradyne moves headquarters from the Summer Street loft above Joe & Nemo's hot dog stand to 183 Essex Street, Boston.
1966 - Teradyne introduces the first computer-controlled chip tester, the J259.
1969 - Teradyne launches Teradyne Dynamic Systems after acquiring Triangle Systems to develop digital semiconductor test systems in Chatsworth, CA.
1970 - Teradyne becomes a publicly owned company and is listed on the New York Stock Exchange (symbol TER); 420,000 shares are sold to the public.
1971 - Alex d'Arbeloff is named President of Teradyne.
1973 - Teradyne launches Teradyne Central in Chicago, IL to develop telecommunications test systems.
1973 - Teradyne introduces the world's first subscriber-line test system, 4TEL.
1979 - Teradyne passes $100 million in sales; A300 Analog LSI test system introduced.
1980 - Teradyne introduces the first combinational in-circuit/functional circuit board test system, the L200.
1981 - Teradyne announces the first VLSI test system with non-stop pattern generation, the J941.
1986 - Teradyne introduces the first analog VLSI test system, the A500.
1988 - Teradyne introduces the first PC-based circuit board tester to use spreadsheet programming, the Z1800-Series.
1990 - Teradyne launches company-wide Total Quality Management initiative.
1993 - Teradyne receives $63 million order from Deutsche Telekom for 4TEL telecommunications test systems, a record for the company.
1996 - Teradyne introduces the Spectrum 8800-Series Manufacturing Test Platform, the first VXI-based in-circuit tester.
1996 - Marlin Memory Test system introduced, the first system capable of simultaneous test and redundancy analysis of DRAMs.
1997 - Teradyne creates the J973, the first structural-to-functional test system with the ability to shift in real time.
1997 - Teradyne introduces Catalyst, the first system-on-a-chip (SOC) test system.
1998 - Teradyne introduces the Integra J750, a test system for high-volume test of low-cost devices.
2000 - Teradyne Japan Division announces a new generation of image sensor test systems, the IP-750.
2004 - Teradyne introduces the FLEX family of test systems, for high-volume, high-mix, complex SOC devices.
2006 - Teradyne moves headquarters to North Reading, MA.
2008 - Teradyne acquires Eagle Test Systems and Nextest Systems.
2011 - Teradyne acquires LitePoint to advance production testing for the development and manufacturing of wireless devices.
2015 - Teradyne acquires Danish company Universal Robots.
2018 - Teradyne acquires Mobile Industrial Robots (MiR) and Energid to expand its Industrial Automation business to include autonomous mobile robots and motion control and simulation software for robotics.
2019 - Teradyne acquires AutoGuide Mobile Robots.
2023 - Teradyne acquires a 10% equity stake in Technoprobe.
Divisions The Semiconductor Test Division provides test equipment used by integrated circuit manufacturers to test logic, RF, analog, power, mixed-signal and memory devices. Teradyne manufactures five principal families of testers known as the "J750", "FLEX", "UltraFLEX", "Eagle" and "Magnum" series.
These testers are used by semiconductor manufacturers to test and classify the individual devices ("dies") on a completed semiconductor wafer and then used again to retest the parts once they are enclosed in their final packaging. The System Test Group builds testers for completed circuit boards (printed circuit boards/printed wiring boards) and hard drives. The division addresses next-level electronics production for consumer, communications, industrial and government customers. Major product families in the system test business include: TestStation, Spectrum Series, High-Speed Subsystem, Neptune, Saturn and Titan. Portions of this division were acquired when Teradyne purchased GenRad in 2002. LitePoint, Teradyne's wireless test business, provides services for leading manufacturers of wireless modules and consumer electronics. Serving the rapidly growing wireless communications industry, LitePoint products are used by chipset and product designers along with their contract manufacturers. LitePoint's products include IQxel for connectivity test and IQxstream for cellular test. Teradyne's industrial automation business is composed of Universal Robots, Mobile Industrial Robots, AutoGuide and Energid. Universal Robots (UR) provides collaborative robots (cobots) that work side-by-side with production workers. UR cobots automate tasks including machine tending, packaging, gluing, painting, polishing and assembling parts. The cobots are deployed in the automotive, food and agriculture, furniture and equipment, metal and machining, plastics and polymers, and pharma and chemicals industries. Mobile Industrial Robots (MiR) offers autonomous mobile robots for managing internal logistics (for payloads under 1,500 kg). These robots are currently used in the transportation, healthcare, pharmaceutical, metal and plastics, fashion, technology and food industries. AutoGuide Mobile Robots manufactures modular industrial mobile robots (for payloads up to 45,000 kg). These high-payload robots are used for assembly, material handling, warehousing and distribution operations across multiple industries. Energid specializes in the control and simulation of complex robotic systems, and is used in the aerospace, agriculture, manufacturing, transportation, defense and medical industries. References External links Electronics companies of the United States Equipment semiconductor companies Electronic test equipment manufacturers Manufacturing companies based in Massachusetts Companies based in Middlesex County, Massachusetts North Reading, Massachusetts 1970s initial public offerings Electronics companies established in 1960 1960 establishments in Massachusetts Companies listed on the New York Stock Exchange Computer companies of the United States Computer hardware companies
https://en.wikipedia.org/wiki/Graph%20toughness
In graph theory, toughness is a measure of the connectivity of a graph. A graph G is said to be t-tough for a given real number t if, for every integer k > 1, G cannot be split into k different connected components by the removal of fewer than tk vertices. For instance, a graph is 1-tough if the number of components formed by removing a set of vertices is always at most as large as the number of removed vertices. The toughness of a graph is the maximum t for which it is t-tough; this is a finite number for all finite graphs except the complete graphs, which by convention have infinite toughness. Graph toughness was first introduced by Chvátal (1973). Since then there has been extensive work by other mathematicians on toughness; a survey by Bauer, Broersma & Schmeichel (2006) lists 99 theorems and 162 papers on the subject. Examples Removing k vertices from a path graph can split the remaining graph into as many as k + 1 connected components. The maximum ratio of components to removed vertices is achieved by removing one vertex (from the interior of the path) and splitting the path into two components. Therefore, paths are 1/2-tough. In contrast, removing k vertices from a cycle graph leaves at most k remaining connected components, and sometimes leaves exactly k connected components, so a cycle is 1-tough. Connection to vertex connectivity If a graph is t-tough, then one consequence (obtained by setting k = 2) is that any set of fewer than 2t vertices can be removed without splitting the graph in two. That is, every t-tough graph is also 2t-vertex-connected. Connection to Hamiltonicity Chvátal (1973) observed that every cycle, and therefore every Hamiltonian graph, is 1-tough; that is, being 1-tough is a necessary condition for a graph to be Hamiltonian. He conjectured that the connection between toughness and Hamiltonicity goes in both directions: that there exists a threshold t0 such that every t0-tough graph is Hamiltonian. Chvátal's original conjecture that t0 = 2 would have proven Fleischner's theorem, but it was disproved by Bauer, Broersma & Veldman (2000). The existence of a larger toughness threshold for Hamiltonicity remains open, and is sometimes called Chvátal's toughness conjecture. Computational complexity Testing whether a graph is 1-tough is co-NP-complete. That is, the decision problem whose answer is "yes" for a graph that is not 1-tough, and "no" for a graph that is 1-tough, is NP-complete. The same is true for any fixed positive rational number q: testing whether a graph is q-tough is co-NP-complete (Bauer, Hakimi & Schmeichel 1990). For small graphs, however, toughness can be checked directly from the definition, as in the sketch following this article. See also Strength of a graph, an analogous concept for edge deletions Tutte–Berge formula, a related characterization of the size of a maximum matching in a graph Harris graphs, a family of graphs that are tough, Eulerian, and non-Hamiltonian References Graph connectivity Graph invariants NP-complete problems
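To make the definition of toughness concrete, here is a minimal brute-force sketch in Python (the function names and the adjacency-list encoding are illustrative, not taken from any referenced paper). It computes toughness directly from the definition by checking every vertex subset, so it runs in exponential time, consistent with the co-NP-completeness result above, and is practical only for small graphs.

```python
from itertools import combinations

def count_components(adj, removed):
    """Count connected components after deleting the vertices in `removed`."""
    remaining = set(adj) - set(removed)
    seen, count = set(), 0
    for start in remaining:
        if start in seen:
            continue
        count += 1                      # found a new component
        stack = [start]                 # iterative depth-first search
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(w for w in adj[v] if w in remaining)
    return count

def toughness(adj):
    """Brute-force toughness: min |S| / c(G - S) over all vertex sets S
    whose removal disconnects the graph. Complete graphs have no such S,
    so they get toughness infinity, matching the convention in the text."""
    best = float("inf")
    vertices = list(adj)
    for size in range(1, len(vertices)):
        for s in combinations(vertices, size):
            c = count_components(adj, s)
            if c >= 2:                  # S must actually split the graph
                best = min(best, size / c)
    return best

# A path on 4 vertices: removing one interior vertex leaves 2 components -> 1/2-tough
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(toughness(path4))   # 0.5

# A cycle on 4 vertices: removing 2 opposite vertices leaves 2 components -> 1-tough
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(toughness(cycle4))  # 1.0
```

The two test cases reproduce the path and cycle values worked out in the Examples section above.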
https://en.wikipedia.org/wiki/Raymond%20Clark%20%28engineer%29
Raymond Clark is a British engineer, best known for his leadership of the Society of Environmental Engineers, where he served as chief executive for nearly two decades. Clark earned a BSc in engineering from the University of Manchester in 1964 and a BA in psychology in 1967. Starting in 2001, he served as chief executive of the Society of Environmental Engineers. In 2010, Clark was elected to the Engineering Council, where he represented the smaller licensed member institutions. Clark is a Chartered Engineer, a Chartered Environmentalist, and a Fellow of the Society of Environmental Engineers. In 2005, he was appointed an Officer of the Most Excellent Order of the British Empire. In 2014, he was made an honorary fellow of the Society for the Environment. References Year of birth missing (living people) Living people Environmental engineers Members of the Order of the British Empire Fellows of the Society of Environmental Engineers
https://en.wikipedia.org/wiki/Ayya%20%28smartphone%29
AYYA is a smartphone series developed by Rostec. The first and only device in the series is the AYYA T1, which was released on 28 October 2021 with Android 11 and plans to upgrade to Aurora OS. The T1 sold poorly and was affected by a CPU chip undersupply caused in part by the supplier TSMC halting trade with Russia in response to the 2022 invasion of Ukraine. It has a button which disables the microphone and cameras. History At the end of October 2021, the first sales of the AYYA T1 smartphone began in Russia. At that point, the only domestically produced element of the smartphone was the software that is legally required to be preinstalled on devices sold in Russia. In March 2022, a modification of the AYYA T1 running Aurora OS appeared for legal entities (corporate customers). In March 2023, the media reported that retail sales of the AYYA T1 had not exceeded 1,000 units in two years, out of a batch of 5,000 units produced. Product series AYYA T1: 19,000 rubles (about US$200); 6.56-inch screen, two main cameras (12 and 5 MP), a 13 MP front camera, a MediaTek Helio P70 processor, 4 GB of RAM, and 64 GB of built-in memory with support for 128 GB of expansion. AYYA T2 is under development. References External links Смартфон AYYA T1 Economy of Russia Rostec
https://en.wikipedia.org/wiki/Puccinia%20schedonnardii
Puccinia schedonnardii is a basidiomycete fungus that affects cotton. More commonly known as a "rust," this pathogen typically affects cotton leaves, which can decrease the quality of the boll at time of harvest. As a large percentage of the cotton grown in the United States is resistant to various rust varieties, the disease has little economic importance there. In places where rust is prevalent, however, growers can see up to a 50% reduction in yield due to rust infection. Hosts and symptoms For Puccinia schedonnardii the host range is specific to cotton, but is not specific to a certain cotton species. Alternate hosts are necessary to complete the life cycle, and include many types of grama grasses. Symptomology is similar to that of other rust species. First appearing as small yellow pustules on leaves, bolls, and stems, the spots then transform into larger, orange/red pustules which release aeciospores. Rust lesions can cause leaves or stems to become weak and break or fall off, resulting in decreased photosynthetic ability and extreme difficulty during harvest. Symptoms on the alternate host grasses are small, ovular, red/brown (rust-colored) powdery lesions, which release uredospores. Disease cycle and pathogenesis The disease cycle of Puccinia schedonnardii does not vary from other rust disease cycles. This pathogen is heteroecious and exhibits a polycyclic disease cycle. Puccinia schedonnardii overwinters as teliospores that are produced in telia on the alternate host. In the spring, the teliospores germinate to produce basidiospores. The basidiospores are then windblown to the cotton host, where they enter via stomata. When basidiospores germinate, they produce a mycelium from which flask-shaped pycnia as well as receptive hyphae are formed. From here, nothing happens until the pycniospores produced by the pycnia fertilize receptive hyphae of a different mating type. Over a period of five to ten days, the dikaryotic mycelium formed by the joining of the receptive hypha and the pycniospore grows through the cotton leaf to produce aecia. Aecia are the pustules seen on the leaves of the cotton. When conditions are right and adequate moisture is achieved, aeciospores are released from the aecia. These aeciospores land on the alternate grass host and infect it via a germ tube. Eventually, a uredium is formed from this germ tube. The uredospores released by the uredium are then able to do one of two things. As a polycyclic disease, the uredospores present an opportunity for secondary infection in a single season: the spores can either reinfect grasses, spreading uredospores that become uredia, which lead to more uredospores and a likely epidemic, or they can give rise to overwintering teliospores, thus preparing for the cycle to begin again in the spring. Environment Rust infections can proliferate under humid conditions and periods of prolonged wetness. Ideally, periods of rain followed by at least 12 hours of high humidity are needed for the disease to develop. It is the moisture on the leaf surfaces that leads to disease and spore release/germination, so even well-drained soils are susceptible to rust outbreaks. However, evidence has shown that poorly drained soils may have increased incidence of fungal pathogens like rust, due to the increased relative humidity underneath the canopy. Management and importance Disease management can be difficult as it would be impossible to eliminate the alternate host grass species.
However, the utilization of non-susceptible crops in a rotation can decrease infection rates in future cotton crops. An application of Mancozeb foliar fungicide can be used to prevent the disease, but little can be done after infection; it is not effective as a treatment, only as a prevention tool. Growers are therefore proactive in their prevention of the disease, ensuring proper preventive fungicide applications. As previously stated, this disease can be severe, but growers tend to plant resistant varieties in areas where rust has been prevalent. Resistance has commonly been bred into Gossypium hirsutum by transfer from G. arboreum and G. anomalum. Without resistance, Puccinia schedonnardii can cause a 50% yield loss, which is why resistant varieties are so widely used. Cotton is one of the most important textile fibers, and the United States is ranked third in cotton production. It is for that reason that a large share of agronomic research funding has gone to cotton research to develop solutions to cotton rust. See also List of Puccinia species References External links USDA ARS Fungal Database Fungal plant pathogens and diseases Cotton diseases schedonnardii Fungi described in 1888 Fungus species
https://en.wikipedia.org/wiki/Gholam%20A.%20Peyman
Gholam A. Peyman (born 1 January 1937) is an Iranian American ophthalmologist, retina surgeon, and inventor. He is best known for his invention of LASIK eye surgery, a vision correction procedure designed to allow people to see clearly without glasses. He was awarded the first US patent for the procedure in 1989. Life and career Peyman was born in Shiraz, Iran. At the age of 19, he moved to Germany to begin his medical studies. He received his MD from the University of Freiburg in 1962. He completed internships at St. Johannes Hospital in Duisburg, Germany, in 1964 and at Passaic General Hospital in Passaic, New Jersey, in 1965. Peyman completed his residency in ophthalmology and a retina fellowship at the University of Essen in Essen, Germany, in 1969, and an additional postdoctoral fellowship in retina at the Jules Stein Eye Institute, UCLA School of Medicine, in Los Angeles in 1971. Peyman held the position of assistant professor of ophthalmology at the UCLA School of Medicine from 1971 and served as associate professor and then professor of ophthalmology and ocular oncology at the Illinois Eye and Ear Infirmary, University of Illinois at Chicago, during 1971–1987. Peyman held a joint appointment at the School of Medicine and at the Neuroscience Center of Excellence at the Louisiana State University Medical Center in New Orleans during 1987–2000. During 1998–2000, Peyman held the Prince Abdul Aziz Bin Ahmed Bin Abdul Aziz Al Saud Chair in Retinal Diseases. During 2000–2006, Peyman served as professor of ophthalmology and ocular oncology and co-director of the Vitreo-Retinal Service at the Tulane University School of Medicine in New Orleans. During 2006–2007, he was professor of ophthalmology at the University of Arizona, Tucson, with a cross appointment at the University of Arizona College of Optical Sciences. He has been emeritus professor of ophthalmology at Tulane University since 2009. Peyman is currently professor of basic medical sciences at the University of Arizona College of Medicine – Phoenix, and of optical engineering at the University of Arizona in Tucson. In 2013, Peyman was awarded an honorary doctorate from the National University of Cordoba in Argentina. The Invention of LASIK surgery and its improvements At the Illinois Eye and Ear Infirmary, Peyman, because of his interest in the effects of lasers on tissues in the eye, began evaluating the potential use of a laser to modify corneal refraction in rabbits. No prior study existed on this concept. The laser was applied to the surface of the cornea using different patterns and created significant scarring. His conclusions at that time were: 1) one had to wait for the development of an ablative laser, and 2) one should not ablate the surface of the cornea; instead, the ablation should take place under a flap in order to prevent scarring, pain, and other undesirable sequelae. Peyman published the first article on this subject in 1980. In late 1982, he read an article from IBM Laboratories, published in Laser Focus, describing the photo-ablative properties of an excimer laser on organic material. This was very exciting information, but unfortunately Peyman did not have access to this laser, which at the time was new and very expensive. By 1985 and beyond, many investigators were interested in ablating the corneal surface.
However, because of his previous experience with the laser, Peyman wanted to avoid surface ablation in order to prevent potential corneal scarring and the pain associated with the removal of the corneal epithelium, which is necessary to expose the surface of the cornea. Therefore, in July 1985, he applied for a patent that described a method of modifying corneal refractive errors using laser ablation under a corneal flap. This US patent was accepted after two revisions and issued in June 1989. Peyman performed a number of experimental studies evaluating the effect of various excimer lasers in collaboration with the Physics Department of the University of Helsinki, Finland. Since he had purchased an Er:YAG laser in the U.S., he evaluated the concept using this laser in vivo in rabbit and primate eyes and described the creation of a hinged corneal flap that enables the ablation to be performed on the exposed corneal bed, thus reducing the potential for postoperative scarring and pain. Always aware of the potential limitations of his invention, Peyman devoted considerable time and effort in subsequent years to ameliorating them. In order to improve the risk/benefit considerations of the LASIK procedure, in 2004 he invented and patented a broad range of ablative and non-ablative inlays to be placed under the surgically created corneal flap (US Patent 6,702,807). These inlays offered many potential advantages over the standard LASIK technique, the most significant of which is that the inlay procedure is reversible. However, their ablation was not predictable. In October 2009, Peyman applied for a patent on a method of preventing corneal implant rejection, which was approved in 2017 (US Patent 9,681,942). It consists of forming a LASIK flap in the cornea, raising the flap, and inserting a lamellar corneal inlay under the flap so that it overlies the exposed stromal tissue. The inlay is ablated with a wavefront-guided excimer laser to correct the refractive errors of the eye; a cross-linking solution is applied to the inlay and the stromal tissue of the cornea; the corneal flap is replaced; and the inlay is cross-linked with UV radiation, killing the cellular elements in the inlay and its surrounding cornea and preventing cellular migration into the inlay and its rejection or encapsulation by the host corneal cells. This new procedure is now called "Mesoick" ("meso" meaning inside: Implant, Crosslinking Keratomileusis; US Patent 9,037,033). This creates an immune-privileged, cell-free space that does not initiate an immune response to an implant. A synthetic, cross-linked organic or polymeric lens can be implanted in the corneal pocket to compensate for the patient's refractive error. The implant can be exchanged as the eye grows or as refractive needs dictate. Laser in ophthalmology Peyman has been granted 200 US patents covering a broad range of novel medical devices, intra-ocular drug delivery, surgical techniques, and new methods of diagnosis and treatment.
First attempt to correct refractive errors using lasers: Modification of rabbit corneal curvature with use of carbon dioxide laser burns (1980)
Evaluations of laser use in ophthalmology: Histopathological studies on transscleral argon-krypton laser coagulation with an exolaser probe (1984); Comparison of the effects of argon fluoride (ArF) and krypton fluoride (KrF) excimer lasers on ocular structures (1985); The Nd:YAG laser 1.3μ wavelength: In vitro effects on ocular structures (1987); Effects of an erbium:YAG laser on ocular structures (1987); Contact laser: Thermal sclerostomy ab interna (1987); Internal trans-pars plana filtering procedure in humans (1988); Internal pars plana sclerotomy with the contact Nd:YAG laser: An experimental study (1988)
Intraocular telescope for age-related macular degeneration: Age-related macular degeneration and its management (1988)
Endolaser for vitrectomy (Arch Ophthalmol. 1980 Nov;98(11):2062-4)
New operating microscope with stereovision for the operator and his assistant (US Patent 4,138,191)
Development of direct intraocular drug delivery and vitrectomy: J Ophthalmic Vis Res. 2018 Apr-Jun;13(2):91-92; Retina. 2009 Jul-Aug;29(7):875-912; Vitreoretinal Surgical Techniques, Informa UK Ltd, 2007
Surgical removal of intraocular tumors: Can J Ophthalmol. 1988 Aug;23(5):218-23; Br J Ophthalmol. 1998 Oct;82(10):1147-53
Remote controlled system for laser surgery: This technology enables an ophthalmologist to treat a patient located elsewhere, e.g. in another city, with a laser system controlled remotely via the internet, using a sophisticated secure system in a non-contact fashion
Development of precision thermotherapy in oncology: Therapy of early-stage malignant tumors, along with imaging, immune therapy, and precision localized drug delivery; tele-laser system and tele-medicine with a novel dynamic identity recognition
Macular degeneration
Retinal pigment epithelium transplantation: A technique for retinal pigment epithelium transplantation for age-related macular degeneration secondary to extensive subfoveal scarring (1991)
Photodynamic therapy for ARMD: The effect of light-activating tin ethyl etiopurpurin (SnET2) on normal rabbit choriocapillaries (1996); Problems with and pitfalls of photodynamic therapy (2000); Oscillatory photodynamic therapy for choroidal neovascularization and central serous retinopathy: a pilot study (2013); US Patent 8,141,557, Method of oscillatory thermotherapy of biological tissue
Testing intravitreal toxicity of bevacizumab (Avastin) (2006)
Intravitreal slow-release ROCK inhibitors, alone or in combination with anti-VEGF agents
Artificial Retina Stimulation
Semiconductor photodiode stimulation of the retina: Subretinal semiconductor microphotodiode array (1998); Subretinal implantation of semiconductor-based photodiodes: durability of novel implant designs (2002); The artificial silicon retina microchip for the treatment of vision loss from retinitis pigmentosa (2004)
Quantum dots and optogenetics for artificial retinal and brain stimulation and gene therapy: US Patents 8,409,263 and 8,388,668, Methods to regulate polarization of excitable cells; US Patents 8,460,351 and 8,562,660, Methods to regulate polarization and enhance function of excitable cells
Gene therapy with non-viral nanoparticles and CRISPR: US Patent 10,022,457, Methods to regulate polarization and enhance function of excitable cells
Adaptive optics phoropter for automated vision correction, and tunable light field camera for VR and AR technology: US Patent 7,993,399, External lens adapted to change refractive properties; US Patent 8,409,278, External lens with flexible membranes for automatic correction of the refractive errors of a person; US Patent 8,603,164, Adjustable fluidic telescope combined with an intraocular lens; US Patent 9,016,860, Fluidic adaptive optics fundus camera; US Patent 9,164,206, Variable focal length achromatic lens system comprising a diffractive lens and a refractive lens; US Patent 9,191,568, Automated camera system with one or more fluidic lenses; US Patents 9,671,607 and 10,133,056, Flexible fluidic mirror and hybrid system; US Patent 9,681,800, Holographic adaptive see-through phoropter
Honors and awards Among other awards and honors, Peyman has received the National Medal of Technology and Innovation (2012), the Waring Medal of the Journal of Refractive Surgery (2008), and the American Academy of Ophthalmology's Lifetime Achievement Award (2008). He was named a fellow of the National Academy of Inventors in 2013. References Dr. Peyman's CV (Source: Tulane University) External links 20 YEARS of LASIK Artificial Silicon Retina Tulane Ophthalmology Faculty Dr. Gholam Peyman Official Website American ophthalmologists Drug delivery devices American biographers Living people Iranian ophthalmologists American writers of Iranian descent Iranian expatriate academics 1937 births Iranian expatriates in the United States
https://en.wikipedia.org/wiki/Rope%20caulk
Rope caulk or caulking cord is a type of pliable putty or caulking formed into a rope-like shape. It is typically off-white in color, relatively odorless, and stays pliable for an extended period of time. Rope caulk can be used as caulking or weatherstripping around conventional windows installed in conventional wooden or metal frames (see glazing). It is also used as a form for epoxy work, since epoxy does not adhere to this material. Rope caulk has also been applied to the metallic structure supporting the magnet of a dynamic speaker to damp unwanted resonance of the metal structure, leading to improved speaker performance. It has also been used as a sonic damping material in sensitive phonograph components. History Mortite brand rope caulk was introduced by the J.W. Mortell Co. of Kankakee, Illinois in the 1940s, and was called "pliable plastic tape". The trademark application was filed in March 1943. It was later marketed as "caulking cord". The company was later acquired by Thermwell Products. Mortite Mortite putty is a brand of rope caulk marketed under the Frost King brand. Its primary ingredient is titanium dioxide; it has a specific gravity of 1.34. It is listed by the state of California as containing ingredients known to the state to cause cancer or adversely affect reproductive health (a "P65 Warning"). Notes Plastics Building engineering
https://en.wikipedia.org/wiki/Audio%20analysis
Audio analysis refers to the extraction of information and meaning from audio signals for analysis, classification, storage, retrieval, synthesis, and other purposes. The observation media and interpretation methods vary: audio analysis can refer to the human ear and how people interpret the audible sound source, or it can refer to using technology such as an audio analyzer to evaluate other qualities of a sound source, such as amplitude, distortion, and frequency response. Once an audio source's information has been observed, the information revealed can then be processed for logical, emotional, descriptive, or otherwise relevant interpretation by the user. Natural Analysis The most prevalent form of audio analysis is derived from the sense of hearing. A type of sensory perception that occurs in much of the planet's fauna, hearing makes audio analysis a fundamental process of many living beings. Sounds made by the surrounding environment or by other living beings provide input to the hearing mechanism, from which the listener's brain can interpret the sound and decide how to respond. Examples of functions include speech, the startle response, music listening, and more. An inherent ability of humans, hearing is fundamental to communication across the globe, and the process of assigning meaning and value to speech is a complex but necessary function of the human body. The study of the auditory system has centered on mathematics and the analysis of sinusoidal vibrations and sounds. The Fourier transform has been essential in understanding how the human ear processes moving air into perceived sound across the audible frequency range, about 20 to 20,000 Hz. The ear is able to take one complex waveform and decompose it into distinct frequency ranges thanks to structures of the inner ear, notably the basilar membrane of the cochlea, whose regions are tuned to specific frequency ranges. The initial sensory input is then analyzed further up in the neurological system, where the perception of sound takes place. The auditory system also works in tandem with the neural system so that the listener is capable of spatially locating the direction from which a sound source originated. This localization, related to the Haas or precedence effect, is possible due to the nature of having two ears, or auditory receptors: the difference in time it takes for a sound to reach both ears provides the necessary information for the brain to calculate the spatial positioning of the source. Signal Analysis Audio signals can be analyzed in several different ways, depending on the kind of information desired from the signal. Types of signal analysis include:
Level and gain
Frequency domain analysis
Frequency response
Total harmonic distortion plus noise (THD+N)
Phase
Crosstalk
Intermodulation distortion (IMD)
Stereo and surround
Hardware analyzers have been the primary means of signal analysis since the earliest dedicated audio test instruments, such as Hewlett-Packard's first product, the HP200A audio oscillator. Hardware analyzers are typically used in the engineering, testing, and manufacturing of professional and consumer grade products. As computer technology progressed, integrated software found its way into these hardware systems, and later audio analysis tools appeared that required no hardware components beyond the computer running the software. Software audio analyzers are regularly used in various stages of music production, such as live audio, mixing, and mastering.
These products tend to employ Fast Fourier Transform (FFT) algorithms and processing to provide a visual representation of the signal being analyzed. Display and information types include frequency spectrum, stereo field, surround field, spectrogram, and more. References Audio engineering
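As a minimal illustration of the FFT-based measurements described above (a sketch only: the function name and test signal are my own, and real analyzers measure much more, e.g. THD+N), the following Python code uses NumPy to estimate a signal's RMS level in dBFS and its dominant frequency.

```python
import numpy as np

def level_and_peak(signal, sample_rate):
    """Return (RMS level in dBFS, dominant frequency in Hz) for a mono signal,
    assuming full scale is +/- 1.0."""
    rms = np.sqrt(np.mean(np.square(signal)))
    level_dbfs = 20 * np.log10(rms) if rms > 0 else float("-inf")

    # Apply a Hann window to reduce spectral leakage, then take the real FFT
    window = np.hanning(len(signal))
    magnitude = np.abs(np.fft.rfft(signal * window))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)

    peak_freq = freqs[np.argmax(magnitude)]   # bin with the largest magnitude
    return level_dbfs, peak_freq

# Example: one second of a 1 kHz sine at half of full scale, sampled at 48 kHz
sr = 48_000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)

level, peak = level_and_peak(tone, sr)
print(f"level: {level:.1f} dBFS, dominant frequency: {peak:.1f} Hz")
# Prints roughly "level: -9.0 dBFS, dominant frequency: 1000.0 Hz"
```

A spectrogram display, mentioned above, is essentially this magnitude spectrum computed repeatedly over successive short windows of the signal.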
https://en.wikipedia.org/wiki/Eye%20movement%20in%20scene%20viewing
Eye movement in scene viewing refers to the visual processing of information presented in scenes. This phenomenon has been studied in a range of areas, such as cognitive psychology and psychophysics, where eye movement can be monitored under experimental conditions. A core aspect in these studies is the division of eye movements into saccades, the rapid movements of the eyes, and fixations, the focusing of the eyes on a point. There are several factors which influence eye movement in scene viewing: both the task and knowledge of the viewer (top-down factors) and the properties of the image being viewed (bottom-up factors). The study of eye movement in scene viewing helps in understanding visual processing in more natural environments. Typically, when presented with a scene, viewers demonstrate short fixation durations and long saccade amplitudes in the earlier phases of viewing an image, representing ambient processing. This is followed by longer fixations and shorter saccades in the latter phases of scene viewing, representing focal processing (Pannasch et al., 2008). Eye movement behaviour in scene viewing differs between different levels of cognitive development: fixation durations shorten and saccade amplitudes lengthen with increasing age. In children, saccades develop to the amplitude normally found in adults earlier (at 4–6 years old) than fixation durations do (at 6–8 years old). Yet the typical pattern of behaviour during scene viewing, progressing from ambient processing to focal processing, has been observed from the age of 2 years (Helo, Pannasch, Sirri & Rämä, 2014). Spatial variation Particular factors affect where eye movements fixate; these include bottom-up factors inherent to the stimulus and top-down factors inherent to the viewer. Even an initial glimpse of a scene has been found to generate an abstract representation of the image that can be stored in memory for use in subsequent eye movements (Castelhano & Henderson, 2007). Among bottom-up factors, eye guidance can be affected by the local contrast or salience of features in an image (Itti & Koch, 2000). Examples would be an area with a large difference in luminance (Parkhurst et al., 2002), a greater density of edges (Mannan, Ruddock & Wooding, 1996), or binocular disparity determining the distance of different objects in the scene (Jansen et al., 2009). Top-down factors have more of an impact on fixation positions than bottom-up features. Behaviourally relevant, more interesting information in a scene is more salient than low-level features, drawing fixations more frequently and more quickly from scene onset (Onat, Açik, Schumann & König, 2014). Local scene colour at a fixation position has an influence on where fixations occur: the presence of colour can increase the likelihood of an item being processed as a semantic object, as it can aid discrimination of the object, making it more interesting to view (Amano & Foster, 2014). When viewers are semantically primed by being presented with consistently similar scenes, the density of fixations increases and fixation durations decrease (Henderson, Weeks Jr., & Hollingworth, 1999). Information separate from what is presented in a scene also has an effect on the area being fixated upon.
Eye movements can be guided anticipatorily by linguistic input: if an item in the scene is presented verbally, the listener is more likely to move their visual focus to that object (Staub, Abbott & Bogartz, 2012). With regard to factors relating to viewers rather than the scene, differences have been found in cross-cultural research. Westerners have an inclination to concentrate on focal objects in a scene, looking at them more often and more quickly, whereas East Asians attend more to contextual information, making more saccades to the background of the scene (Chua, Boland & Nisbett, 2005). Temporal variation Fixation durations last around 300 ms on average, although there is large variability around this approximation. Some of this variability can be explained through global properties of an image, which affect both bottom-up and top-down processing. During natural scene viewing, masking an image by replacing it with a grey field during fixations produces an increase in fixation durations (Henderson & Pierce, 2008). More subtle degradations of an image, such as a decrease in luminance during fixations, also increase the length of fixation durations (Henderson, Nuthmann & Luke, 2013). An asymmetric effect is shown, in that an increase in luminance also increases fixation durations (Walshe & Nuthmann, 2014). However, changes in factors affecting top-down processing, such as blurring or phase noise, increase fixation durations when used to degrade a scene and decrease fixation durations when used to enhance a scene (Henderson, Olejarczyk, Luke & Schmidt, 2014; Einhäuser et al., 2006). Furthermore, temporal and spatial aspects interact in a complex manner. When a picture is first presented on the screen, fixations made within the first second are more likely to be directed toward the left side of the scene, whereas the opposite holds true for the remaining part of the presentation (Ossandón et al., 2014). See also Screen reading References Amano, K. & Foster, D. H. (2014). Influence of local scene color on fixation position in visual search. Journal of the Optical Society of America A, 31, A254-A261. Castelhano, M. S. & Henderson, J. M. (2007). Initial Scene Representations Facilitate Eye Movement Guidance in Visual Search. Journal of Experimental Psychology: Human Perception and Performance, 33, 753-763. Chua, H. F., Boland, J. E. & Nisbett, R. E. (2005). Cultural variation in eye movements during scene perception. Proceedings of the National Academy of Sciences of the United States of America, 102, 12629-12633. Einhäuser, W., Rutishauser, U., Frady, E. P., Nadler, S., König, P. & Koch, C. (2006). The relation of phase noise and luminance contrast to overt attention in complex visual stimuli. Journal of Vision, 6, 1148-1158. Helo, A., Pannasch, S., Sirri, L. & Rämä, P. (2014). The maturation of eye movement behaviour: scene viewing characteristics in children and adults. Vision Research, 103, 83-91. Henderson, J. M., Weeks Jr., P. A. & Hollingworth, A. (1999). The Effects of Semantic Consistency on Eye Movements During Complex Scene Viewing. Journal of Experimental Psychology: Human Perception and Performance, 25, 210-228. Henderson, J. M. & Pierce, G. L. (2008). Eye movements during scene viewing: Evidence for mixed control of fixation durations. Psychonomic Bulletin & Review, 15, 566-573. Henderson, J. M., Nuthmann, A. & Luke, S. G.
(2013). Eye Movement Control During Scene Viewing: Immediate Effects of Scene Luminance on Fixation Durations. Journal of Experimental Psychology: Human Perception and Performance, 39, 318-322. Henderson, J. M., Olejarczyk, J., Luke, S. G. & Schmidt, J. (2014). Eye movement control during scene viewing: Immediate degradation and enhancement effects of spatial frequency filtering. Visual Cognition, 22, 486-502. Itti, L. & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489-1506. Jansen, L., Onat, S. & König, P. (2009). Influence of disparity on fixation and saccades in free viewing of natural scenes. Journal of Vision, 9(1):29, 1–19, http://journalofvision.org/9/1/29/, doi:10.1167/9.1.29. Mannan, S. K., Ruddock, K. H. & Wooding, D. S. (1996). The relationship between the locations of spatial features and those of fixations made during visual examination of briefly presented images. Spatial Vision, 10, 165–188. Onat, S., Açik, A., Schumann, F. & König, P. (2014). The Contributions of Image Content and Behavioural Relevancy to Overt Attention. PLOS ONE, 9, e93254. Ossandón, J. P., Onat, S. & König, P. (2014). Spatial biases in viewing behavior. Journal of Vision, 14(2):20, 1–26. Pannasch, S., Helmert, J. R., Roth, K., Herbold, A.-K. & Walter, H. (2008). Visual Fixation Durations and Saccade Amplitudes: Shifting Relationship in a Variety of Conditions. Journal of Eye Movement Research, 2, 1-19. Parkhurst, D. J., Law, K. & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42, 107-123. Staub, A., Abbott, M. & Bogartz, R. S. (2012). Linguistically guided anticipatory eye movements in scene viewing. Visual Cognition, 20, 922-946. Walshe, R. C. & Nuthmann, A. (2014). Asymmetrical control of fixation durations in scene viewing. Vision Research, 100, 38-46. Cognitive science Eye Motor control
Eye movement in scene viewing
Biology
2,185
67,663,189
https://en.wikipedia.org/wiki/Lenore%20Fahrig
Lenore Fahrig is a Chancellor's Professor in the biology department at Carleton University, Canada, and a Fellow of the Royal Society of Canada. Fahrig studies the effects of landscape structure—the arrangement of forests, wetlands, roads, cities, and farmland—on wildlife populations and biodiversity, and is best known for her work on habitat fragmentation. In 2023, she was elected to the National Academy of Sciences. Early life and education Fahrig is from Ottawa, Ontario. She completed a BSc (Biology) at Queen's University, Kingston, in 1981 and an MSc from Carleton University, Ottawa, in 1983 under the supervision of Gray Merriam, on habitat connectivity and population stability. She completed her PhD in 1987 at the University of Toronto under the supervision of Jyri Paloheimo, on the effects of animal dispersal behaviour on the relationship between population size and habitat spatial arrangement. Research and career After her PhD, Fahrig worked for two years as a postdoctoral fellow at the University of Virginia, researching how different plant dispersal strategies allow species to respond to environmental disturbances. She then spent two years as a research scientist for the Federal Department of Fisheries and Oceans in St. John's, Newfoundland, Canada, where she modeled the spatial and temporal interactions between fisheries and fish populations. In 1991 she joined the faculty of the Biology Department at Carleton University, Ottawa, where, as of 2024, she is a Chancellor's Professor and the Gray Merriam Chair in Landscape Ecology. Fahrig is best known for her work on habitat fragmentation. Her early work in this area culminated in her highly cited 2003 review. Fahrig argues that the effects of fragmentation (breaking of habitat into small patches) on biodiversity should be estimated independently of the effects of habitat loss, showing that the combined effects of habitat loss and fragmentation are almost entirely due to the effects of habitat loss alone. This is important for species conservation because it means that, on a per-area basis, habitat in small patches is as valuable for conservation as habitat in large patches. This finding negates a common 'excuse' for habitat destruction, namely the assumed low conservation value of small patches. Fahrig's later work on habitat fragmentation found that effects of habitat fragmentation on biodiversity, independent of effects of habitat loss, are more likely to be positive than negative. This indicates that small patches have high cumulative value for biodiversity, and provides support for small-scale conservation efforts. Fahrig presented her work on habitat fragmentation at the International Biogeography Society's 50th anniversary celebration of The Theory of Island Biogeography, and at the World Biodiversity Forum. She published a retrospective article on her habitat fragmentation research for the 30th anniversary of the journal Global Ecology and Biogeography. Fahrig has also worked on habitat connectivity, road ecology, and the effects of cropland heterogeneity on biodiversity. Based on her MSc thesis in 1983, Fahrig and Merriam published the first paper on habitat connectivity, and provided the earliest evidence for the concept of wildlife movement corridors. These concepts, habitat connectivity and wildlife movement corridors, are widely used in large-scale conservation planning, in municipal and regional greenways planning, and in mitigation of road effects on wildlife.
Fahrig and colleagues' further work demonstrated the importance of distinguishing between structural and functional connectivity, and showed that habitat fragmentation does not necessarily decrease functional connectivity. Fahrig's contributions in road ecology include the first paper to show that roadkill causes declines in wildlife populations. Her later work showed strong and widespread impacts of roads on wildlife populations. Fahrig and her students found that the groups of species whose populations are most impacted by roads are amphibians, reptiles, and mammals with low reproductive rates. They also argued that high roadkill sites are not necessarily the best sites for mitigating road effects on wildlife, and that ecopassages alone do not reduce roadkill. Her research on cropland heterogeneity shows that regions with small crop fields have higher biodiversity than regions with large crop fields, even when the total area under crop production is the same. Further, her group showed that this benefit of cropland heterogeneity to biodiversity is as large as the benefits from reducing intensive practices such as pesticide use. She is a co-author of a book on road ecology, and of several major reviews of the subject. Honours and distinctions 2016 Elected Fellow of the Royal Society of Canada 2018 Miroslaw Romanowski Medal for environmental science 2019 Chancellor's Professor: highest honour given by Carleton University for research and scholarship 2021 Guggenheim Fellowship in Geography & Environmental Studies from the Guggenheim Foundation 2021 President's Award from the Canadian Society for Ecology and Evolution for Research Excellence 2021 BBVA Foundation Frontiers of Knowledge Award in Ecology and Conservation Biology 2022 Gerhard Herzberg Canada Gold Medal for Science and Engineering References Living people Academic staff of Carleton University Fellows of the Royal Society of Canada Year of birth missing (living people) Canadian women scientists Women ecologists Ecologists Members of the United States National Academy of Sciences
Lenore Fahrig
Environmental_science
1,018
8,720,264
https://en.wikipedia.org/wiki/History%20of%20the%20battery
Batteries provided the main source of electricity before the development of electric generators and electrical grids around the end of the 19th century. Successive improvements in battery technology facilitated major electrical advances, from early scientific studies to the rise of telegraphs and telephones, eventually leading to portable computers, mobile phones, electric cars, and many other electrical devices. Scientists and engineers developed several commercially important types of battery. "Wet cells" were open containers that held liquid electrolyte and metallic electrodes. When the electrodes were completely consumed, the wet cell was renewed by replacing the electrodes and electrolyte. Open containers are unsuitable for mobile or portable use. Wet cells were used commercially in the telegraph and telephone systems. Early electric cars used semi-sealed wet cells. One important classification for batteries is by their life cycle. "Primary" batteries can produce current as soon as assembled, but once the active elements are consumed, they cannot be electrically recharged. The development of the lead-acid battery and subsequent "secondary" or "rechargeable" types allowed energy to be restored to the cell, extending the life of permanently assembled cells. The introduction of nickel and lithium based batteries in the latter half of the 20th century made the development of innumerable portable electronic devices feasible, from powerful flashlights to mobile phones. Very large stationary batteries find some applications in grid energy storage, helping to stabilize electric power distribution networks. Invention From the mid-18th century on, before there were batteries, experimenters used Leyden jars to store electrical charge. As an early form of capacitor, Leyden jars, unlike electrochemical cells, stored their charge physically and would release it all at once. Many experimenters took to hooking several Leyden jars together to create a stronger charge, and one of them, the colonial American inventor Benjamin Franklin, may have been the first to call his grouping an "electrical battery", a play on the military term for weapons functioning together. Based on some findings by Luigi Galvani, Alessandro Volta, a friend and fellow scientist, believed that the observed electrical phenomena were caused by two different metals joined by a moist intermediary. He verified this hypothesis through experiments and published the results in 1791. In 1800, Volta invented the first true battery, storing and releasing a charge through a chemical reaction instead of physically, which came to be known as the voltaic pile. The voltaic pile consisted of pairs of copper and zinc discs piled on top of each other, separated by a layer of cloth or cardboard soaked in brine (i.e., the electrolyte). Unlike the Leyden jar, the voltaic pile produced continuous electricity and a stable current, and lost little charge over time when not in use, though his early models could not produce a voltage strong enough to generate sparks. He experimented with various metals and found that zinc and silver gave the best results. Volta believed the current was the result of two different materials simply touching each other – an obsolete scientific theory known as contact tension – and not the result of chemical reactions. As a consequence, he regarded the corrosion of the zinc plates as an unrelated flaw that could perhaps be fixed by changing the materials somehow. However, no scientist ever succeeded in preventing this corrosion.
In fact, it was observed that the corrosion was faster when a higher current was drawn. This suggested that the corrosion was actually integral to the battery's ability to produce a current. This, in part, led to the rejection of Volta's contact tension theory in favor of the electrochemical theory. Volta's illustrations of his Crown of Cups and voltaic pile have extra metal disks, now known to be unnecessary, on both the top and bottom. The figure associated with this section, of the zinc-copper voltaic pile, has the modern design, an indication that "contact tension" is not the source of electromotive force for the voltaic pile. Volta's original pile models had some technical flaws, one of them involving the electrolyte leaking and causing short-circuits due to the weight of the discs compressing the brine-soaked cloth. A Scotsman named William Cruickshank solved this problem by laying the elements in a box instead of piling them in a stack. This was known as the trough battery. Volta himself invented a variant that consisted of a chain of cups filled with a salt solution, linked together by metallic arcs dipped into the liquid. This was known as the Crown of Cups. These arcs were made of two different metals (e.g., zinc and copper) soldered together. This model also proved to be more efficient than his original piles, though it did not prove as popular. Another problem with Volta's batteries was short battery life (an hour's worth at best), which was caused by two phenomena. The first was that the current produced by the cell electrolyzed the electrolyte solution, resulting in a film of hydrogen bubbles forming on the copper, which steadily increased the internal resistance of the battery (this effect, called polarization, is counteracted in modern cells by additional measures). The other was a phenomenon called local action, wherein minute short-circuits would form around impurities in the zinc, causing the zinc to degrade. The latter problem was solved in 1835 by the English inventor William Sturgeon, who found that amalgamated zinc, whose surface had been treated with some mercury, did not suffer from local action. Despite their flaws, Volta's batteries provided a steadier current than Leyden jars, and made possible many new experiments and discoveries, such as the first electrolysis of water by the English surgeon Anthony Carlisle and the English chemist William Nicholson. First practical batteries Daniell cell An English professor of chemistry named John Frederic Daniell found a way to solve the hydrogen bubble problem in the Voltaic Pile by using a second electrolyte to consume the hydrogen produced by the first. In 1836, he invented the Daniell cell, which consists of a copper pot filled with a copper sulfate solution, in which is immersed an unglazed earthenware container filled with sulfuric acid and a zinc electrode. The earthenware barrier is porous, which allows ions to pass through but keeps the solutions from mixing. The Daniell cell was a great improvement over the existing technology used in the early days of battery development and was the first practical source of electricity. It provides a longer and more reliable current than the Voltaic cell. It is also safer and less corrosive. It has an operating voltage of roughly 1.1 volts. It soon became the industry standard for use, especially with the new telegraph networks. The Daniell cell was also used as the first working standard for definition of the volt, which is the unit of electromotive force.
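The roughly 1.1-volt figure follows from the standard electrode potentials of the cell's two half-reactions, a textbook calculation that assumes standard conditions rather than the working conditions of a practical cell:

```latex
\begin{align*}
\text{anode (oxidation):} \quad & \mathrm{Zn} \rightarrow \mathrm{Zn^{2+}} + 2e^{-} & E^\circ &= -0.76\ \mathrm{V}\\
\text{cathode (reduction):} \quad & \mathrm{Cu^{2+}} + 2e^{-} \rightarrow \mathrm{Cu} & E^\circ &= +0.34\ \mathrm{V}\\
\text{cell:} \quad & E^\circ_{\text{cell}} = E^\circ_{\text{cathode}} - E^\circ_{\text{anode}} = 0.34 - (-0.76) = 1.10\ \mathrm{V}
\end{align*}
```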
Bird's cell A version of the Daniell cell was invented in 1837 by the Guy's Hospital physician Golding Bird, who used a plaster of Paris barrier to keep the solutions separate. Bird's experiments with this cell were of some importance to the new discipline of electrometallurgy. Porous pot cell The porous pot version of the Daniell cell was invented by John Dancer, a Liverpool instrument maker, in 1838. It consists of a central zinc anode dipped into a porous earthenware pot containing a zinc sulfate solution. The porous pot is, in turn, immersed in a solution of copper sulfate contained in a copper can, which acts as the cell's cathode. The use of a porous barrier allows ions to pass through but keeps the solutions from mixing. Gravity cell In the 1860s, a Frenchman named Callaud invented a variant of the Daniell cell called the gravity cell. This simpler version dispensed with the porous barrier. This reduces the internal resistance of the system and, thus, the battery yields a stronger current. It quickly became the battery of choice for the American and British telegraph networks, and was widely used until the 1950s. The gravity cell consists of a glass jar, in which a copper cathode sits on the bottom and a zinc anode is suspended beneath the rim. Copper sulfate crystals are scattered around the cathode and then the jar is filled with distilled water. As the current is drawn, a layer of zinc sulfate solution forms at the top around the anode. This top layer is kept separate from the bottom copper sulfate layer by its lower density and by the polarity of the cell. The zinc sulfate layer is clear, in contrast to the deep blue copper sulfate layer, which allows a technician to measure the battery life with a glance. On the other hand, this setup means the battery can be used only in a stationary appliance, or else the solutions mix or spill. Another disadvantage is that a current has to be continually drawn to keep the two solutions from mixing by diffusion, so it is unsuitable for intermittent use. Poggendorff cell The German scientist Johann Christian Poggendorff overcame the problems of separating the electrolyte and the depolariser using a porous earthenware pot in 1842. In the Poggendorff cell, sometimes called the Grenet cell after the work of Eugene Grenet around 1859, the electrolyte is dilute sulphuric acid and the depolariser is chromic acid. The two acids are physically mixed together, eliminating the porous pot. The positive electrode (cathode) consists of two carbon plates, with a zinc plate (negative or anode) positioned between them. Because of the tendency of the acid mixture to react with the zinc, a mechanism is provided to raise the zinc electrode clear of the acids. The cell provides 1.9 volts. It was popular with experimenters for many years due to its relatively high voltage, its greater ability to produce a consistent current, and its lack of fumes, but the relative fragility of its thin glass enclosure and the necessity of having to raise the zinc plate when the cell is not in use eventually saw it fall out of favour. The cell was also known as the 'chromic acid cell', but principally as the 'bichromate cell'. This latter name came from the practice of producing the chromic acid by adding sulphuric acid to potassium dichromate, even though the cell itself contains no dichromate. The Fuller cell was developed from the Poggendorff cell.
Although the chemistry is principally the same, the two acids are once again separated by a porous container and the zinc is treated with mercury to form an amalgam. Grove cell The Welshman William Robert Grove invented the Grove cell in 1839. It consists of a zinc anode dipped in sulfuric acid and a platinum cathode dipped in nitric acid, separated by porous earthenware. The Grove cell provides a high current and nearly twice the voltage of the Daniell cell, which made it the favoured cell of the American telegraph networks for a time. However, it gives off poisonous nitric oxide fumes when operated. The voltage also drops sharply as the charge diminishes, which became a liability as telegraph networks grew more complex. Platinum was and still is very expensive. Dun cell Alfred Dun's cell of 1885 used nitro-muriatic acid (a mixture of muriatic and nitric acids) with iron and carbon electrodes: In the new element there can be used advantageously as exciting-liquid in the first case such solutions as have in a concentrated condition great depolarizing-power, which effect the whole depolarization chemically without necessitating the mechanical expedient of increased carbon surface. It is preferred to use iron as the positive electrode, and as exciting-liquid nitro-muriatic acid, the mixture consisting of muriatic and nitric acids. The nitro-muriatic acid, as explained above, serves for filling both cells. For the carbon-cells it is used strong or very slightly diluted, but for the other cells very diluted (about one-twentieth, or at the most one-tenth). The element containing in one cell carbon and concentrated nitro-muriatic acid and in the other cell iron and dilute nitro-muriatic acid remains constant for at least twenty hours when employed for electric incandescent lighting. Rechargeable batteries and dry cells Lead-acid Up to this point, all existing batteries would be permanently drained when all their chemical reactants were spent. In 1859, Gaston Planté invented the lead–acid battery, the first-ever battery that could be recharged by passing a reverse current through it. A lead-acid cell consists of a lead anode and a lead dioxide cathode immersed in sulfuric acid. Both electrodes react with the acid to produce lead sulfate, but the reaction at the lead anode releases electrons whilst the reaction at the lead dioxide consumes them, thus producing a current. These chemical reactions can be reversed by passing a reverse current through the battery, thereby recharging it. Planté's first model consisted of two lead sheets separated by rubber strips and rolled into a spiral. His batteries were first used to power the lights in train carriages while stopped at a station. In 1881, Camille Alphonse Faure invented an improved version that consists of a lead grid lattice into which is pressed a lead oxide paste, forming a plate. Multiple plates can be stacked for greater performance. This design is easier to mass-produce. Compared to other batteries, Planté's is rather heavy and bulky for the amount of energy it can hold. However, it can produce remarkably large currents in surges, because it has very low internal resistance, meaning that a single battery can be used to power multiple circuits. The lead-acid battery is still used today in automobiles and other applications where weight is not a big factor. The basic principle has not changed since 1859. In the early 1930s, a gel electrolyte (instead of a liquid) produced by adding silica to a charged cell was used in the LT battery of portable vacuum-tube radios.
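The reversible chemistry Planté exploited can be summarised by the textbook discharge reactions below; recharging drives them in reverse, and the roughly 2 volts per cell they yield is why a 12 V automotive battery uses six cells in series:

```latex
\begin{align*}
\text{negative plate:} \quad & \mathrm{Pb} + \mathrm{SO_4^{2-}} \rightarrow \mathrm{PbSO_4} + 2e^{-}\\
\text{positive plate:} \quad & \mathrm{PbO_2} + \mathrm{SO_4^{2-}} + 4\mathrm{H^+} + 2e^{-} \rightarrow \mathrm{PbSO_4} + 2\mathrm{H_2O}\\
\text{overall:} \quad & \mathrm{Pb} + \mathrm{PbO_2} + 2\mathrm{H_2SO_4} \rightleftharpoons 2\mathrm{PbSO_4} + 2\mathrm{H_2O} \quad (\approx 2\ \mathrm{V\ per\ cell})
\end{align*}
```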
In the 1970s, "sealed" versions became common (commonly known as a "gel cell" or "SLA"), allowing the battery to be used in different positions without failure or leakage. Today cells are classified as "primary" if they produce a current only until their chemical reactants are exhausted, and "secondary" if the chemical reactions can be reversed by recharging the cell. The lead-acid cell was the first "secondary" cell. Leclanché cell In 1866, Georges Leclanché invented a battery that consists of a zinc anode and a manganese dioxide cathode wrapped in a porous material, dipped in a jar of ammonium chloride solution. The manganese dioxide cathode has a little carbon mixed into it as well, which improves conductivity and absorption. It provided a voltage of 1.4 volts. This cell achieved very quick success in telegraphy, signaling, and electric bell work. The dry cell form was used to power early telephones—usually from an adjacent wooden box fitted to hold the batteries, before telephones could draw power from the telephone line itself. The Leclanché cell cannot provide a sustained current for very long. In lengthy conversations, the battery would run down, rendering the conversation inaudible. This is because certain chemical reactions in the cell increase the internal resistance and, thus, lower the voltage. Zinc-carbon cell, the first dry cell Many experimenters tried to immobilize the electrolyte of an electrochemical cell to make it more convenient to use. The Zamboni pile of 1812 is a high-voltage dry battery, but is capable of delivering only minute currents. Various experiments were made with cellulose, sawdust, spun glass, asbestos fibers, and gelatine. In 1886, Carl Gassner obtained a German patent on a variant of the Leclanché cell, which came to be known as the dry cell because it does not have a free liquid electrolyte. Instead, the ammonium chloride is mixed with plaster of Paris to create a paste, with a small amount of zinc chloride added in to extend the shelf life. The manganese dioxide cathode is dipped in this paste, and both are sealed in a zinc shell, which also acts as the anode. In November 1887, he obtained a US patent for the same device. Unlike previous wet cells, Gassner's dry cell is more solid, does not require maintenance, does not spill, and can be used in any orientation. It provides a potential of 1.5 volts. The first mass-produced model was the Columbia dry cell, first marketed by the National Carbon Company in 1896. The NCC improved Gassner's model by replacing the plaster of Paris with coiled cardboard, an innovation that left more space for the cathode and made the battery easier to assemble. It was the first convenient battery for the masses and made portable electrical devices practical, and led directly to the invention of the flashlight. The zinc–carbon battery (as it came to be known) is still manufactured today. In parallel, in 1887 Wilhelm Hellesen developed his own dry cell design. It has been claimed that Hellesen's design preceded that of Gassner. In 1887, a dry battery was developed by Sakizō Yai (屋井 先蔵) of Japan, then patented in 1892. In 1893, Sakizō Yai's dry battery was exhibited at the World's Columbian Exposition and commanded considerable international attention. NiCd, the first alkaline battery In 1899, a Swedish scientist named Waldemar Jungner invented the nickel–cadmium battery, a rechargeable battery that has nickel and cadmium electrodes in a potassium hydroxide solution; it was the first battery to use an alkaline electrolyte.
It was commercialized in Sweden in 1910 and reached the United States in 1946. The first models were robust and had significantly better energy density than lead-acid batteries, but were much more expensive. 20th century: new technologies and ubiquity Nickel-iron Waldemar Jungner patented a nickel–iron battery in 1899, the same year as his nickel–cadmium battery patent, but found it to be inferior to its cadmium counterpart and, as a consequence, never bothered to develop it. It produced a lot more hydrogen gas when being charged, meaning it could not be sealed, and the charging process was less efficient (it was, however, cheaper). Seeing a way to make a profit in the already competitive lead-acid battery market, Thomas Edison worked in the 1890s on developing an alkaline-based battery on which he could get a patent. Edison thought that if he produced a lightweight and durable battery, electric cars would become the standard, with his firm as their main battery vendor. After many experiments, and probably borrowing from Jungner's design, he patented an alkaline-based nickel–iron battery in 1901. However, customers found his first model of the alkaline nickel–iron battery to be prone to leakage, leading to short battery life, and it did not outperform the lead-acid cell by much either. Although Edison was able to produce a more reliable and powerful model seven years later, by this time the inexpensive and reliable Model T Ford had made gasoline engine cars the standard. Nevertheless, Edison's battery achieved great success in other applications such as electric and diesel-electric rail vehicles, providing backup power for railroad crossing signals, or providing power for the lamps used in mines. Common alkaline batteries Until the late 1950s, the zinc–carbon battery continued to be a popular primary cell battery, but its relatively short battery life hampered sales. The Canadian engineer Lewis Urry, working for Union Carbide, first at the National Carbon Co. in Ontario and, by 1955, at the National Carbon Company Parma Research Laboratory in Cleveland, Ohio, was tasked with finding a way to extend the life of zinc-carbon batteries. Building on earlier work by Edison, Urry decided instead that alkaline batteries held more promise. Until then, longer-lasting alkaline batteries were unfeasibly expensive. Urry's battery consists of a manganese dioxide cathode and a powdered zinc anode with an alkaline electrolyte. Using powdered zinc gives the anode a greater surface area. These batteries were put on the market in 1959. Nickel–hydrogen and nickel–metal hydride The nickel–hydrogen battery entered the market as an energy-storage subsystem for commercial communication satellites. The first consumer-grade nickel–metal hydride batteries (NiMH) for smaller applications appeared on the market in 1989 as a variation of the 1970s nickel–hydrogen battery. NiMH batteries tend to have longer lifespans than NiCd batteries (and their lifespans continue to increase as manufacturers experiment with new alloys) and, since cadmium is toxic, NiMH batteries are less damaging to the environment. Alkali metal-ion batteries Lithium is the alkali metal with the lowest density and with the greatest electrochemical potential and energy-to-weight ratio. The low atomic weight and small size of its ions also speed its diffusion, likely making it an ideal battery material. Experimentation with lithium batteries began in 1912 under American physical chemist Gilbert N.
Lewis, but commercial lithium batteries did not come to market until the 1970s. Three-volt lithium primary cells such as the CR123A type and three-volt button cells are still widely used, especially in cameras and very small devices. Three important developments regarding lithium batteries occurred in the 1980s. In 1980, an American chemist, John B. Goodenough, discovered the LiCoO2 (lithium cobalt oxide) cathode (positive lead) and a Moroccan research scientist, Rachid Yazami, discovered the graphite anode (negative lead) with the solid electrolyte. In 1981, Japanese chemists Tokio Yamabe and Shizukuni Yata discovered a novel nano-carbonaceous PAS (polyacene) material and found that it was very effective for the anode in the conventional liquid electrolyte. This led a research team managed by Akira Yoshino of Asahi Chemical, Japan, to build the first lithium-ion battery prototype in 1985, a rechargeable and more stable version of the lithium battery; Sony commercialized the lithium-ion battery in 1991. In 2019, John Goodenough, Stanley Whittingham, and Akira Yoshino were awarded the Nobel Prize in Chemistry for their development of lithium-ion batteries. In 1997, the lithium polymer battery was released by Sony and Asahi Kasei. These batteries hold their electrolyte in a solid polymer composite instead of in a liquid solvent, and the electrodes and separators are laminated to each other. The latter difference allows the battery to be encased in a flexible wrapping instead of in a rigid metal casing, which means such batteries can be specifically shaped to fit a particular device. This advantage has favored lithium polymer batteries in the design of portable electronic devices such as mobile phones and personal digital assistants, and of radio-controlled aircraft, as such batteries allow for a more flexible and compact design. They generally have a lower energy density than normal lithium-ion batteries. High costs and concerns about mineral extraction associated with lithium chemistry have renewed interest in sodium-ion battery development, with early electric vehicle product launches in 2023. Solid-state batteries As of 2024, solid-state batteries represent a significant technological leap forward, offering numerous advantages over traditional lithium-ion batteries. Unlike lithium-ion batteries, which use liquid or gel electrolytes, solid-state batteries utilize solid electrolytes. This key difference enhances safety, as solid electrolytes are less likely to catch fire or leak. Solid-state batteries can also achieve higher energy densities, and can therefore last longer than traditional lithium-based batteries. The automotive industry is keenly interested in this new technology, as it promises safer and more efficient vehicles. Companies like Toyota, Ford, and QuantumScape are investing heavily in the development of solid-state batteries. See also Baghdad Battery, an artifact that has similar properties to a modern battery Memory effect Comparison of commercial battery types History of electrochemistry List of battery sizes List of battery types Search for the Super Battery, a 2017 PBS film Burgess Battery Company Notes and references “Advances in solid-state batteries: Materials, interfaces, characterizations, and devices.” MRS Bulletin, 16 Jan. 2024, link.springer.com/article/10.1557/s43577-023-00649-7. Volle, Adam. “Solid-state battery | Definition, History, & Facts.” Britannica, www.britannica.com/technology/solid-state-battery.
Electric battery Battery Alessandro Volta Battery
History of the battery
Technology
5,079
1,179,236
https://en.wikipedia.org/wiki/Gas%20cylinder
A gas cylinder is a pressure vessel for storage and containment of gases at above atmospheric pressure. Gas storage cylinders may also be called bottles. Inside the cylinder the stored contents may be in a state of compressed gas, vapor over liquid, supercritical fluid, or dissolved in a substrate material, depending on the physical characteristics of the contents. A typical gas cylinder design is elongated, standing upright on a flattened or dished bottom end or foot ring, with the cylinder valve screwed into the internal neck thread at the top for connecting to the filling or receiving apparatus. Nomenclature Gas cylinders may be grouped by several characteristics, such as construction method, material, pressure group, class of contents, transportability, and re-usability. The size of a pressurised gas container that may be classed as a gas cylinder is typically 0.5 litres to 150 litres. Smaller containers may be termed gas cartridges, and larger ones may be termed gas tubes, tanks, or another specific type of pressure vessel. A gas cylinder is used to store gas or liquefied gas at pressures above normal atmospheric pressure. In South Africa, a gas storage cylinder implies a refillable transportable container with a water capacity volume of up to 150 litres. Refillable transportable cylindrical containers from 150 to 3,000 litres water capacity are referred to as tubes. In the United States, "bottled gas" typically refers to liquefied petroleum gas. "Bottled gas" is sometimes used in medical supply, especially for portable oxygen tanks. Packaged industrial gases are frequently called "cylinder gas", though "bottled gas" is sometimes used. The term propane tank is also used for cylinders for propane. The United Kingdom and other parts of Europe more commonly refer to "bottled gas" when discussing any usage, whether industrial, medical, or liquefied petroleum. In contrast, what is called liquefied petroleum gas in the United States is known generically in the United Kingdom as "LPG", and it may be ordered by using one of several trade names, or specifically as butane or propane, depending on the required heat output. The term cylinder in this context is sometimes confused with tank, the latter being an open-top or vented container that stores liquids under gravity, though the term scuba tank is commonly used to refer to a compressed gas cylinder used for breathing gas supply to an underwater breathing apparatus. Components Cylinder – Either the shell or the complete assembly of shell and all accessories directly attached to the shell, depending on context. Shell – The pressure vessel as a whole, excepting accessories. Shoulder – The end of the shell with a neck or boss into which the valve is fitted. Neck – A coaxial cylindrical extension of the shoulder with a threaded hole into which the cylinder valve or a gas pipe connection is fitted. Boss – A sturdy insert, usually in the centre of the shoulder, into which a valve or gas pipe connection is fitted. Base or foot – The end of the shell opposite the shoulder. Liner – The core on which filament windings are laid. The core may be structural (usually metal) and share the pressure loads, or may serve purely to separate the composite wrapping from the cylinder contents (metal or engineering plastic). Cylinder valve – A shutoff valve directly coupled to the cylinder shell at the neck or boss which is opened to allow gas flow into or out of the cylinder, and closed to prevent such flow.
It usually has a threaded inlet/outlet opening to which other equipment can be connected, but in some cases may have an integral pressure regulator on the outlet side, and a separate inlet opening for filling. Foot ring – A permanently attached ring fitted to the base on which the cylinder can stand. Valve guard – A fitting (cap or collar) screwed or clamped to the shoulder, protecting the valve from impact during transport, and in some cases, when in use. Permanent markings – Information identifying the cylinder and its specification, stamped into the outside of the shoulder on metal cylinders. On composite cylinders permanent markings can be a printed label encapsulated under the resin or covered by a permanent transparent coating on the shoulder or side wall of the cylinder. Types Since fibre-composite materials have been used to reinforce pressure vessels, various types of cylinder distinguished by the construction method and materials used have been defined: Type 1: Metal only. Mostly seamless forged metal, but for lower working pressure, e.g., liquefied butane, welded steel vessels are also used. Type 2: Metal vessel, hoop wrapped with a fibre composite only around the cylindrical part of the "cylinder". (Geometrically, the cylindrical region needs twice the tensile strength of the spherical caps of the cylinder; see the stress formulas below.) Type 3: Thin metal liner (that keeps the vessel gas tight, but does not contribute to the strength) fully wrapped with fibre composite material. Type 4: Metal-free liner of plastic, fully wrapped with fibre composite material. The neck of the cylinder which includes the thread for the valve is a metal insert. Cylinder assemblies Assemblies comprising a group of cylinders mounted together for combined use or transport: Bank – A group of cylinders connected to a gas distribution system for bulk storage, where the individual cylinders may be used together or separately, but are not necessarily supported by a structure which can be used to transport them as a group. Cascade – A bank when used in cascade. Quad or bundle, also occasionally gas pack or gas battery – A bank of high pressure gas storage cylinders, typically mounted upright on a rectangular frame for transport, and manifolded together. A pallet is a similar-appearing group of cylinders on a lifting frame with no functional connections. The maximum combined cylinder volume for a bundle is 3000 litres for non-toxic gases and 1000 litres for toxic gases. Gas bundles are specified by ISO 10961:2019 Gas cylinders — Cylinder bundles — Design, manufacture, testing and inspection. Rack – A structure to hold cylinders safely upright or horizontal while in use, for transport, or in storage. Materials All-metal cylinders are the most rugged and usually the most economical option, but are relatively heavy. Steel is generally the most resistant to rough handling and most economical, and is often lighter than aluminium for the same working pressure, capacity, and form factor due to its higher specific strength. The inspection interval of industrial steel cylinders has increased from 5 or 6 years to 10 years. Diving cylinders that are used in water must be inspected more often; intervals tend to range between 1 and 5 years. Steel cylinders are typically withdrawn from service after 70 years, or may continue to be used indefinitely provided they pass periodic inspection and testing.
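The factor of two noted for type 2 cylinders above comes from thin-walled pressure vessel theory: for internal pressure P, inner radius r, and wall thickness t, the circumferential (hoop) stress in the cylindrical section is twice the axial stress, while the stress in a hemispherical end matches the axial value, which is why hoop winding alone meaningfully reinforces the cylindrical part:

```latex
\sigma_{\text{hoop}} = \frac{P r}{t}, \qquad
\sigma_{\text{axial}} = \sigma_{\text{sphere}} = \frac{P r}{2 t}, \qquad
\frac{\sigma_{\text{hoop}}}{\sigma_{\text{axial}}} = 2
```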
When they were found to have inherent structural problems, certain steel and aluminium alloys were withdrawn from service, or discontinued from new production, while existing cylinders may require different inspection or testing, but remain in service provided they pass these tests. For very high pressures, composites have a greater mass advantage. Due to the very high tensile strength of carbon fiber reinforced polymer, these vessels can be very light, but are more expensive to manufacture. Filament wound composite cylinders are used in fire fighting breathing apparatus, high altitude climbing, and oxygen first aid equipment because of their low weight, but are rarely used for diving, due to their high positive buoyancy. They are occasionally used when portability for accessing the dive site is critical, such as in cave diving where the water surface is far from the cave entrance. Composite cylinders certified to ISO-11119-2 or ISO-11119-3 may only be used for underwater applications if they are manufactured in accordance with the requirements for underwater use and are marked "UW". Cylinders reinforced with or made from a fibre reinforced material usually must be inspected more frequently (e.g., every 5 instead of 10 years) and more thoroughly than metal cylinders, as they are more susceptible to impact damage. They may also have a limited service life. Fibre composite cylinders were originally specified for a limited life span of 15, 20 or 30 years, but this has been extended where they have proved to be suitable for longer service. Manufacturing processes Type 1 seamless metal cylinders The Type 1 pressure vessel is a seamless cylinder normally made of cold-extruded aluminum or forged steel. The pressure vessel comprises a cylindrical section of even wall thickness, with a thicker base at one end, and a domed shoulder with a central neck to attach a cylinder valve or manifold at the other end. Occasionally other materials may be used. Inconel has been used for non-magnetic and highly corrosion resistant oxygen compatible spherical high-pressure gas containers for the US Navy's Mk-15 and Mk-16 mixed gas rebreathers, and a few other military rebreathers. Aluminium Most aluminum cylinders are flat bottomed, allowing them to stand upright on a level surface, but some were manufactured with domed bottoms. Aluminum cylinders are usually manufactured by cold extrusion of aluminum billets in a process which first presses the walls and base, then trims the top edge of the cylinder walls, followed by press forming the shoulder and neck. The final structural process is machining the neck outer surface, boring and cutting the neck threads and O-ring groove. The cylinder is then heat-treated, tested and stamped with the required permanent markings. Steel Steel cylinders are often used because they are harder and more resistant to external surface impact and abrasion damage, and can tolerate higher temperatures without affecting material properties. They also may have a lower mass than aluminium cylinders with the same gas capacity, due to considerably higher specific strength. Steel cylinders are more susceptible than aluminium to external corrosion, particularly in seawater, and may be galvanized or coated with corrosion barrier paints to resist corrosion damage.
It is not difficult to monitor external corrosion and repair the paint when damaged, and steel cylinders which are well maintained have a long service life, often longer than aluminium cylinders, as they are not susceptible to fatigue damage when filled within their safe working pressure limits. Steel cylinders are manufactured with domed (convex) and dished (concave) bottoms. The dished profile allows them to stand upright on a horizontal surface, and is the standard shape for industrial cylinders. The cylinders used for emergency gas supply on diving bells are often this shape, and commonly have a water capacity of about 50 litres ("J"). Domed bottoms give a larger volume for the same cylinder mass, and are the standard for scuba cylinders up to 18 litres water capacity, though some concave bottomed cylinders have been marketed for scuba. Domed end industrial cylinders may be fitted with a press-fitted foot ring to allow upright standing. Steel alloys used for gas cylinder manufacture are authorised by the manufacturing standard. For example, the US standard DOT 3AA requires the use of open-hearth, basic oxygen, or electric steel of uniform quality. Approved alloys include 4130X, NE-8630, 9115, 9125, Carbon-boron and Intermediate manganese, with specified constituents, including manganese and carbon, and molybdenum, chromium, boron, nickel or zirconium. Drawn from plate Steel cylinders may be manufactured from steel plate discs stamped from annealed plate or coil, which are lubricated and cold drawn to a cylindrical cup form by a hydraulic press. The cup is annealed and drawn again in two or three stages until the final diameter and wall thickness are reached. They generally have a domed base if intended for the scuba market, so they cannot stand up by themselves. For industrial use, a dished base allows the cylinder to stand on its end on a flat surface. After forming the base and side walls, the top of the cylinder is trimmed to length, heated and hot spun to form the shoulder and close the neck. This process thickens the material of the shoulder. The cylinder is heat-treated by quenching and tempering to provide the best strength and toughness. The cylinders are machined to provide the neck thread and O-ring seat (if applicable), then chemically cleaned or shot-blasted inside and out to remove mill-scale. After inspection and hydrostatic testing they are stamped with the required permanent markings, followed by external coating with a corrosion barrier paint or hot dip galvanising and final inspection. Spun from seamless tube A related method is to start with seamless steel tube of a suitable diameter and wall thickness, manufactured by a process such as the Mannesmann process, and to close both ends by the hot spinning process. This method is particularly suited to high pressure gas storage tubes, which usually have a threaded neck opening at both ends, so that both ends are processed alike. When a neck opening is only required at one end, the base is spun first and dressed inside for a uniform smooth surface, then the process of closing the shoulder and forming the neck is the same as for the pressed plate method. Forged from billet An alternative production method is backward extrusion of a heated steel billet, similar to the cold extrusion process for aluminium cylinders, followed by hot drawing and bottom forming to reduce wall thickness, and trimming of the top edge in preparation for shoulder and neck formation by hot spinning.
The other processes are much the same for all production methods. Cylinder neck The neck of the cylinder is the part of the end which is shaped as a narrow concentric cylinder, and internally threaded to fit a cylinder valve. There are several standards for neck threads, which include parallel threads, where the seal is made by an O-ring gasket, and taper threads, which seal along the contact surface by deformation of the contact surfaces and by thread tape or sealing compound. Type 2 hoop wrapped metal liner Type 2 is hoop wrapped with fibre reinforced resin over the cylindrical part of the cylinder, where the circumferential load is highest. The fibres share the circumferential load with the metal core, and achieve a significant weight saving due to efficient stress distribution and the high specific strength and stiffness of the composite. The core is a seamless metal cylinder, manufactured in any of the ways suitable for a type 1 cylinder, but with thinner walls, as it carries only about half the load, mainly the axial load. Hoop winding is at an angle to the length axis of close to 90°, so the fibres carry negligible axial load. Type 3 fully wrapped thin metal liner Type 3 is wrapped over the entire cylinder except for the neck, and the metal liner serves mainly to make the cylinder gas tight, so very little load is carried by the liner. Winding angles are optimised to carry all the loads (axial and circumferential) from the pressurised gas in the cylinder. Only the neck metal is exposed on the outside. This construction can save on the order of 30% of the mass compared with type 2, as the fibre composite has a higher specific strength than the metal of the type 2 liner that it replaces. Type 4 fully wrapped plastic liner Type 4 is wrapped in the same way as type 3, but the liner is non-metallic. A metal neck boss is fitted to the shoulder of the plastic liner before winding, and this carries the neck threads for the cylinder valve. The outside of the neck of the insert is not covered by the fibre wrapping, and may have axial ridges to engage with a wrench or clamp for torsional support when fitting or removing the cylinder valve. There is a mass reduction compared with type 3 due to the lower density of the plastic liner. Welded gas cylinders A welded gas cylinder comprises two or more shell components joined by welding. The most commonly used material is steel, but stainless steel, aluminium and other alloys can be used when they are better suited to the application. Steel is strong, resistant to physical damage, easy to weld, relatively low cost, and usually adequate for corrosion resistance, and provides an economical product. The components of the shell are usually domed ends, and often a rolled cylindrical centre section. The ends are usually domed by cold pressing from a circular blank, and may be drawn in two or more stages to get the final shape, which is generally semi-elliptical in section. The end blank is typically punched from sheet, drawn to the required section, edges trimmed to size and necked for overlap where appropriate, and hole(s) for the neck and other fittings punched. The neck boss is inserted from the concave side and welded in place before shell assembly. Smaller cylinders are typically assembled from a top and bottom dome, with an equatorial weld seam. Larger cylinders with a longer cylindrical body comprise dished ends circumferentially welded to a rolled central cylindrical section with a single longitudinal welded seam.
Welding is typically automated gas metal arc welding. Typical accessories which are welded to the outside of the cylinder include a foot ring, a valve guard with lifting handles, and a neck boss threaded for the valve. Occasionally other through-shell and external fittings are also welded on. After welding, the assembly may be heat-treated for stress relief and to improve mechanical characteristics, cleaned by shotblasting, and coated with a protective and decorative coating. Testing and inspection for quality control will take place at various stages of production. Regulations and testing The transportation of high-pressure cylinders is regulated by many governments throughout the world. Various levels of testing are generally required by the governing authority for the country in which it is to be transported while filled. In the United States, this authority is the United States Department of Transportation (DOT). Similarly in the UK, the European transport regulations (ADR) are implemented by the Department for Transport (DfT). For Canada, this authority is Transport Canada (TC). Cylinders may have additional requirements placed on design and/or performance from independent testing agencies such as Underwriters Laboratories (UL). Each manufacturer of high-pressure cylinders is required to have an independent quality agent that will inspect the product for quality and safety. Within the UK, the "competent authority", the Department for Transport (DfT), implements the regulations; the appointment of authorised cylinder testers is conducted by the United Kingdom Accreditation Service (UKAS), which makes recommendations to the Vehicle Certification Agency (VCA) for approval of individual bodies. There are a variety of tests that may be performed on various cylinders. Some of the most common types of tests are the hydrostatic test, burst test, ultimate tensile strength test, Charpy impact test and pressure cycling. During the manufacturing process, vital information is usually stamped or permanently marked on the cylinder. This information usually includes the type of cylinder, the working or service pressure, the serial number, date of manufacture, the manufacturer's registered code and sometimes the test pressure. Other information may also be stamped, depending on the regulation requirements. High-pressure cylinders that are used multiple times — as most are — can be hydrostatically or ultrasonically tested and visually examined every few years. In the United States, hydrostatic or ultrasonic testing is required either every five years or every ten years, depending on the cylinder and its service. Valve connections Neck thread Cylinder neck thread can be to any one of several standards. Both taper thread sealed with thread tape and parallel thread sealed with an O-ring have been found satisfactory for high pressure service, but each has advantages and disadvantages for specific use cases, and if there are no regulatory requirements, the type may be chosen to suit the application. A tapered thread provides simple assembly, but requires high torque for establishing a reliable seal, which causes high radial forces in the neck, and has a limited number of times it can be used before it is excessively deformed. This can be extended somewhat by always returning the same fitting to the same cylinder, and avoiding over-tightening.
In Australia, Europe and North America, tapered neck threads are generally preferred for inert, flammable, corrosive and toxic gases, but when aluminium cylinders are used for oxygen service to United States Department of Transportation (DOT) or Transport Canada (TC) specifications in North America, the cylinders must have parallel thread. DOT and TC allow UN pressure vessels to have tapered or parallel threaded openings. In the US, 49 CFR Part 171.11 applies, and in Canada, CSA B340-18 and CSA B341-18. In Europe and other parts of the world, tapered thread is preferred for cylinder inlets for oxidising gases. Scuba cylinders typically have a much shorter interval between internal inspections, so the use of tapered thread is less satisfactory due to the limited number of times a tapered thread valve can be re-used before it wears out; parallel thread is therefore generally used for this application. Parallel thread can be tightened sufficiently to form a good seal with the O-ring without lubrication, which is an advantage when the lubricant may react with the O-ring or the contents. Repeated secure installations are possible with different combinations of valve and cylinder provided they have compatible thread and correct O-ring seals. Parallel thread is more likely to give the technician warning of residual internal pressure, by leaking or extruding the O-ring when the seal is broken during removal of the valve, before catastrophic failure can occur. The O-ring size must be correct for the combination of cylinder and valve, and the material must be compatible with the contents and any lubricant used. Valve Gas cylinders usually have an angle stop valve at one end, and the cylinder is usually oriented so the valve is on top. During storage, transportation, and handling when the gas is not in use, a cap may be screwed over the protruding valve to protect it from damage or breaking off in case the cylinder were to fall over. Instead of a cap, cylinders sometimes have a protective collar or neck ring around the valve assembly, which has an opening for access to fit a regulator or other fitting to the valve outlet, and access to operate the valve. Installation of valves for high pressure aluminum alloy cylinders is described in the guidelines CGA V-11, Guideline for the Installation of Valves into High Pressure Aluminum Alloy Cylinders, and ISO 13341, Transportable gas cylinders—Fitting of valves to gas cylinders. Connection The valves on industrial, medical and diving cylinders usually have threads or connection geometries of different handedness, sizes and types that depend on the category of gas, making it more difficult to mistakenly connect the wrong gas. For example, a hydrogen cylinder valve outlet does not fit an oxygen regulator and supply line, a mismatch which could otherwise result in catastrophe. Some fittings use a right-hand thread, while others use a left-hand thread; left-hand thread fittings are usually identifiable by notches or grooves cut into them, and are usually used for flammable gases. In the United States, valve connections are sometimes referred to as CGA connections, since the Compressed Gas Association (CGA) publishes guidelines on which connections to use for which gases. For example, an argon cylinder may have a "CGA 580" connection on the valve. High purity gases sometimes use CGA-DISS ("Diameter Index Safety System") connections. Medical gases may use the Pin Index Safety System to prevent incorrect connection of gases to services.
In the European Union, DIN connections are more common than in the United States. In the UK, the British Standards Institution sets the standards. Among them is the use of left-hand-threaded valves for flammable gas cylinders (most commonly brass BS4 valves for non-corrosive cylinder contents, or stainless steel BS15 valves for corrosive contents). Non-flammable gas cylinders are fitted with right-hand-threaded valves (most commonly brass BS3 valves for non-corrosive contents, or stainless steel BS14 valves for corrosive contents).

Regulator

When the gas in the cylinder is to be used at low pressure, the cap is taken off and a pressure-regulating assembly is attached to the stop valve. This attachment typically has a pressure regulator with upstream (inlet) and downstream (outlet) pressure gauges, and a further downstream needle valve and outlet connection. For gases that remain gaseous under ambient storage conditions, the upstream pressure gauge can be used to estimate how much gas is left in the cylinder, since the remaining quantity is roughly proportional to the pressure. For gases that are liquid under storage, e.g. propane, the upstream pressure depends on the vapour pressure of the contents and does not fall until the cylinder is nearly exhausted, although it will vary with the temperature of the cylinder contents. The regulator is adjusted to control the downstream pressure, shown on the downstream gauge, which limits the maximum flow of gas out of the cylinder. For some purposes, such as supplying shielding gas for arc welding, the regulator will also have a flowmeter on the downstream side. The regulator outlet connection is attached to whatever needs the gas supply.

Safety and standards

Because the contents are under pressure and are sometimes hazardous materials, the handling of bottled gases is regulated. Regulations may require chaining bottles to prevent them falling and damaging the valve, proper ventilation to prevent injury or death in case of leaks, and signage to indicate the potential hazards. If a compressed gas cylinder falls over and the valve block is sheared off, the rapid release of high-pressure gas may accelerate the cylinder violently, potentially causing property damage, injury, or death. To prevent this, cylinders are normally secured to a fixed object or a transport cart with a strap or chain. They can also be stored in a safety cabinet.

In a fire, the pressure in a gas cylinder rises in direct proportion to its absolute temperature. If the internal pressure exceeds the mechanical limits of the cylinder and there is no means of safely venting the pressurized gas to the atmosphere, the vessel will fail mechanically. If the vessel contents are flammable, this event may result in a "fireball". Oxidisers such as oxygen and fluorine produce a similar effect by accelerating combustion in the affected area. If the cylinder's contents are stored as a liquid but become gas at ambient conditions, such a failure is commonly referred to as a boiling liquid expanding vapour explosion (BLEVE).

Medical gas cylinders in the UK and some other countries have a fusible plug of Wood's metal in the valve block, between the valve seat and the cylinder. This plug melts at a comparatively low temperature (70 °C) and allows the contents of the cylinder to escape to the surroundings before the cylinder is significantly weakened by the heat, lessening the risk of explosion. A more common pressure relief device is a simple burst disc installed in the base of the valve, between the cylinder and the valve seat.
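The proportionality between pressure and absolute temperature (Amontons's law for a fixed quantity of ideal gas in a rigid vessel) makes the need for relief devices easy to quantify. Below is a minimal Python sketch of the calculation; the fill pressure and fire temperature are illustrative figures, not values from any standard, and real liquefied-gas contents behave less simply.

```python
# Worked example of the constant-volume pressure rise described above:
# for a fixed quantity of ideal gas, P2 = P1 * (T2 / T1), with both
# temperatures expressed in kelvin.

def pressure_at_temperature(p1_bar: float, t1_c: float, t2_c: float) -> float:
    """Pressure after heating a rigid, sealed cylinder from t1_c to t2_c."""
    t1_k = t1_c + 273.15  # convert to absolute temperature
    t2_k = t2_c + 273.15
    return p1_bar * (t2_k / t1_k)

# A cylinder filled to 200 bar at 20 degrees C, heated to 600 degrees C
# in a fire, reaches roughly three times its fill pressure:
print(round(pressure_at_temperature(200.0, 20.0, 600.0), 1))  # ~595.7 bar
```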
A burst disc is a small metal gasket engineered to rupture at a pre-determined pressure. Some burst discs are backed with a low-melting-point metal, so that the valve must be exposed to excessive heat before the burst disc can rupture. The Compressed Gas Association publishes a number of booklets and pamphlets on the safe handling and use of bottled gases.

International and national standards

There is a wide range of standards relating to the manufacture, use and testing of pressurised gas cylinders and related components. Some examples are listed here.

ISO 11439 – Gas cylinders. High-pressure cylinders for the on-board storage of natural gas as a fuel for automotive vehicles
ISO 15500-5 – Road vehicles. Compressed natural gas (CNG) fuel system components. Part 5: Manual cylinder valve
US DOT CFR Title 49, Part 178, Subpart C – Specification for Cylinders
US DOT Aluminum Tank Alloy 6351-T6 amendment for SCUBA, SCBA, Oxygen Service – Visual Eddy inspection
AS 2896-2011 – Medical gas systems. Installation and testing of non-flammable medical gas pipeline systems (Australian Standard)
EN 1964-3 – Transportable gas cylinders. Specification for the design and construction of refillable transportable seamless steel gas cylinders of water capacity from 0,5 litre up to 150 litres
ISO 9809-1 – Gas cylinders. Refillable seamless steel gas cylinders. Design, construction and testing. Part 1: Quenched and tempered steel cylinders with tensile strength less than 1 100 MPa
ISO 9809-2 – Gas cylinders. Refillable seamless steel gas cylinders. Design, construction and testing. Part 2: Quenched and tempered steel cylinders with tensile strength greater than or equal to 1 100 MPa
ISO 9809-3 – Gas cylinders. Refillable seamless steel gas cylinders. Design, construction and testing. Part 3: Normalized steel cylinders
EN ISO 11120 – Gas cylinders. Refillable seamless steel tubes of water capacity between 150 l and 3000 l. Design, construction and testing (ISO 11120:2015)
EN 1975 – Transportable gas cylinders. Specification for the design and construction of refillable transportable seamless aluminium and aluminium alloy gas cylinders of capacity from 0,5 litre up to 150 litres
84/526/EEC – EU directive on seamless aluminium high-pressure gas cylinder design
EN 12245 – Transportable gas cylinders. Fully wrapped composite cylinders
ISO 11119-1 – Gas cylinders. Design, construction and testing of refillable composite gas cylinders and tubes. Part 1: Hoop-wrapped fibre-reinforced composite gas cylinders and tubes up to 450 l
HOAL (Home Office Aluminium) – superseded UK manufacturing standards HOAL1, HOAL2, HOAL3 and HOAL4 for seamless aluminium high-pressure cylinders in HE30/AA6082 or AA6351 alloys

Color coding

Gas cylinders are often color-coded, but the codes are not standard across different jurisdictions and are sometimes unregulated. Cylinder color cannot safely be used for positive product identification; cylinders carry labels to identify the gas they contain.

Medical gas cylinder color code

Indian standard

The Indian Standard for gas cylinder color codes applies to the identification of the contents of gas cylinders intended for medical use. Each cylinder is painted externally in the colour corresponding to its gaseous contents.

Common sizes

The sizes below are examples and do not constitute an industry standard. (US DOT specifications define material, manufacturing method, and maximum pressure in psi. They are comparable to Transport Canada specifications, which give pressure in bars.
A 3E-1800 in DOT nomenclature would be a TC 3EM 124 in Canada.)

Gas storage tubes

For larger volumes, high-pressure gas storage units known as tubes are available. They generally have a larger diameter and length than high-pressure cylinders, and usually have a tapped neck at both ends. They may be mounted alone or in groups on trailers, permanent bases, or intermodal transport frames. Due to their length, they are mounted horizontally on mobile structures. In general use they are often manifolded together and managed as a unit.

Gas storage banks

Groups of similar-size cylinders may be mounted together and connected to a common manifold system to provide a larger storage capacity than a single standard cylinder. This is commonly called a cylinder bank or a gas storage bank. The manifold may be arranged to allow simultaneous flow from all the cylinders, or as a cascade filling system, in which gas is drawn from the storage cylinder with the lowest positive pressure difference relative to the destination cylinder, making more efficient use of the pressurised gas (see the sketch at the end of this article).

Gas storage quads

A gas cylinder quad, also known as a gas cylinder bundle, is a group of high-pressure cylinders mounted on a transport and storage frame. There are commonly 16 cylinders, each of about 50 litres capacity, mounted upright in four rows of four on a square base, within a square-plan frame with lifting points on top, which may have fork-lift slots in the base. The cylinders are usually interconnected by a manifold for use as a unit, but many variations in layout and structure are possible.

See also

– Small gas cylinder typically used for specialty gases

External links

NASA – Safety Standards for Oxygen and Oxygen Handling

Pressure vessels Containers Gas technologies
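The cascade filling arrangement described under Gas storage banks above is essentially a selection rule, and can be illustrated with a short calculation. The following Python sketch assumes ideal-gas behaviour at constant temperature and simple pressure equalisation between connected volumes; all pressures, volumes and function names are illustrative assumptions, not values or methods from any standard.

```python
# Minimal sketch of cascade filling: always draw from the storage cylinder
# with the lowest pressure that still exceeds the destination pressure, so
# the highest-pressure cylinders are kept in reserve for final top-ups.
# Assumes ideal gas, constant temperature, full pressure equalisation.

def equalise(p_store, v_store, p_dest, v_dest):
    """Common pressure after connecting two volumes (ideal gas, isothermal)."""
    return (p_store * v_store + p_dest * v_dest) / (v_store + v_dest)

def cascade_fill(bank, dest_pressure, dest_volume):
    """bank: list of [pressure_bar, volume_l] storage cylinders (mutated)."""
    for cyl in sorted(bank, key=lambda c: c[0]):  # lowest pressure first
        if cyl[0] <= dest_pressure:
            continue                              # no positive pressure difference
        p = equalise(cyl[0], cyl[1], dest_pressure, dest_volume)
        cyl[0] = p                                # storage cylinder drops to p
        dest_pressure = p                         # destination rises to p
    return dest_pressure, bank

# Three 50 l storage cylinders at 150, 200 and 230 bar topping up a
# 12 l cylinder starting at 50 bar:
final, bank = cascade_fill([[150, 50], [200, 50], [230, 50]], 50, 12)
print(round(final, 1), [round(c[0], 1) for c in bank])  # ~221.6 bar final
```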
https://en.wikipedia.org/wiki/Mary%20Rakowski%20DuBois
Mary Rakowski DuBois is an inorganic chemist, now retired from Pacific Northwest National Laboratory (PNNL). She made multiple contributions to inorganic and organometallic chemistry, focusing on synthetic and mechanistic studies, and received several awards in recognition of her scientific contributions.

Education and career

Rakowski DuBois received her undergraduate training at Creighton University, earning her B.S. in 1970. She earned her Ph.D. in 1974 under the mentorship of Daryle H. Busch at Ohio State University, and was then a postdoctoral fellow with Earl Muetterties at Cornell University. She joined the faculty of the University of Colorado at Boulder in 1976 and was a professor there until 2007, when she moved to Pacific Northwest National Laboratory (PNNL). She retired from PNNL in 2011.

Research

Together with her husband, Daniel L. DuBois, Rakowski DuBois led a team that elucidated the reactivity of nickel complexes of P2N2 ligands, which were popularized at PNNL. The behavior of these complexes highlighted the strong influence of the second coordination sphere on the rates of activation of H2 by 16-electron nickel complexes. Early in her independent career, while on the faculty at the University of Colorado, she discovered that organomolybdenum sulfides activate hydrogen. This work provided a mechanistic connection between the Mo-S catalysts used in hydrodesulfurization and molecular organometallic chemistry.

Awards

Rakowski DuBois has been honored with fellowships from the Alfred P. Sloan (1981), Dreyfus (1981), and Guggenheim (1984) foundations.

American inorganic chemists 20th-century American chemists 21st-century American chemists American women chemists Creighton University alumni Ohio State University alumni University of Colorado Boulder faculty 20th-century American women scientists 21st-century American women scientists Living people American women academics 1946 births