id: int64 (580 to 79M)
url: string (length 31 to 175)
text: string (length 9 to 245k)
source: string (length 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
64,186,951
https://en.wikipedia.org/wiki/18-Methyltestosterone
18-Methyltestosterone (18-MT) is an androgen/anabolic steroid (AAS) which was never marketed. Along with 19-nortestosterone (nandrolone) and 17α-ethynyltestosterone (ethisterone), it is a parent structure of a number of progestogens and AAS. These include the progestogens levonorgestrel (17α-ethynyl-18-methyl-19-nortestosterone) and its derivatives (e.g., desogestrel, gestodene, norgestimate, gestrinone) as well as AAS such as norboletone (17α-ethyl-18-methyl-19-nortestosterone) and tetrahydrogestrinone (THG; δ9,11-17α-ethyl-18-methyl-19-nortestosterone). See also List of androgens/anabolic steroids References Abandoned drugs Anabolic–androgenic steroids Androstanes Enones Tertiary alcohols
18-Methyltestosterone
Chemistry
237
54,206,400
https://en.wikipedia.org/wiki/Antoine%20Wagner
Antoine Amadeus Wagner-Pasquier (born 1982) is an American-French visual artist. He works between Woodstock, NY and Paris, France. Through a visual language drawn from nature, his work refers to mythological narratives and the sublime. Early life and education Wagner was born in Evanston, Illinois. He is the son of German opera manager Eva Wagner-Pasquier and is the great-great-grandson of the composer Richard Wagner. He graduated from Northwestern University, Illinois, and Sciences Po, Paris, with a double bachelor's degree in Theatre and Political Science in 2005. He continued his education in film at the School of Continuing and Professional Studies at New York University's Tisch School of the Arts in 2007. Artwork Wagner's media include video, sound, sculpture, performance and photography, which he exhibits through installations, site-specific projects, film, monumental photography and opera. Through the visual language of nature, his work references music, mythology and the romantic. Wagner's multifaceted practice has created an entire visual language to engage the audience in communicating with nature. Abstract unseen anthropomorphic shapes found in nature and minerals are the main actors of this world of mythologies and cyclical redundancies. Through different media and projects, the artist examines the themes of identity, geography and spirituality and offers to the viewer an escape from traditional geographic and artistic boundaries. Through his exhibitions, including Exil (Museum am Rothenbaum, 2013) and Kundry (La Filature de Mulhouse, 2015), Wagner applied his experience in the moving image to photography, exploring the possibilities of narrative in a silent and motionless environment through large-scale abstract photography. In 2018, Wagner directed Act II of the opera Die Walküre at Frank Gehry's New World Center in Miami. Wagner's site-specific multimedia installation Sentimental Analysis (April 2019) is a response to the legend of Ara the Beautiful at the National Gallery of Armenia. Early work In 2006, Wagner assisted director Michael Haneke on Funny Games. After his first site-specific installation Lisz[:T:]raumin in Raiding, Austria, in 2007, he directed a series of videos and documentaries exploring visual inspiration in music. His film From a Mess to the Masses (2011) reveals the genesis of visual creation of the band Phoenix. Wagner: A Genius in Exile (2013) is a documentary revealing the landscapes that influenced Richard Wagner during his Swiss exile. In 2013, VfmK Verlag für moderne Kunst published Wagner in der Schweiz, a photographic essay exploring Richard Wagner's inspiration during his forced journey from Germany to Switzerland after the 1848 revolution. It was awarded the 2013 Prix de l'Académie Lyrique Pierre Bergé in Paris. Exhibitions 2009: The Open, Group Show, Deitch Projects, New York City. 2011: Landscapes Escaped, Henn Gallerie, Munich, Germany. 2013–2016: Exil, The Opera Bastille, Paris, 2013; The Bayreuth Festspielhaus, Germany, 2013; Palazzo Vendramin, Venice, Italy, 2013; Museum für Völkerkunde Hamburg, Hamburg, Germany, 2015; Gertrude Salon, New York, 2016; Stedelijk Museum Breda, Breda, Netherlands, 2016. 2015: Cadences, La Filature, Mulhouse, France. 2015: Wagner in der Schweiz (screening), Goethe-Institut, California, United States. 2016: Un Musee Imaginaire, Group Show, Collection Lambert, Avignon, France. 2016: Common Denominator, Theater St. Gallen, St. Gallen, Switzerland. 
2016: Bredaphoto, Stedelijk Museum Breda, Breda, Netherlands. 2016: Interference, Galerie RM, Paris. 2017: Kundry, La Filature de Mulhouse, France. 2017: Supersonic, Deck, Singapore. 2017: Echo, Julien David, Jinguamae, Tokyo. 2017: Silence, Atelier Hermes Pantin, France. 2017: Wagner in der Schweiz, Society of the Four Arts, Palm Beach, Florida. 2017: Art Paris Art Fair, Grand Palais, Paris. 2017: Distortion, Youngfu, Shanghai, China. 2017: Antoine Wagner, Phillips, Paris. 2017: Julierpass, Tait Memorial Fund with Stuart Skelton, Saint Paul's Church, London. 2018: Liquid, La Patinoire Royale Galerie Valerie Bach, Bruxelles. 2018: Looking at Sound: Symposium at the Goethe-Institut, Tokyo. 2018: Studies Between Silence: Nancy Nasher, Soluna Music and Arts Festival, Dallas, Texas. 2018: Act II – Die Walküre, New World Center, Miami. 2018: Orient Blue, In Cadaques Festival, Cadaques, Spain. 2018: Artist talk, Spring Place, New York. 2018: Morceau Choisi (group show), Bubenberg, Art Paris. 2019: Conscience of Angel's Landing, Art Paris, Grand Palais, Paris. 2019: Sentimental Analysis, site-specific multimedia exhibition at the National Gallery of Armenia. Collections Wagner's work is held in the following permanent collections: Yvon Lambert Gallery, Paris; Museum of Ethnology, Hamburg, Germany. Filmography Wagner assisted director Michael Haneke on his American remake of Funny Games in 2007. In 2011, Wagner directed the film From a Mess to the Masses featuring the band Phoenix. The film was commissioned by Arte and first broadcast in 2011. Wagner has directed videos featuring Julien David, Phoenix, Vanessa Paradis, Johnny Hallyday, Kate Moss, Maria Korchetkova, and Spank Rock. Wagner has also worked as a cinematographer. Residencies Wagner has completed residencies at Robert Wilson's Byrd Hoffman Watermill Center, NY (2005), and at the Villa Medici, Rome (2014). Publications Wagner: A Genius in Exile (2013). Antoine Wagner: Wagner in der Schweiz (2014). Nürnberg, Moderne Kunst Nürnberg. Antoine Wagner: Kundry (2015). Gourcuff Grandenigo, Montreuil. With texts by Eric Mezil and Carole Blumenfeld. Patrice Chéreau: An Imaginary Museum (2016). Arles: Actes Sud, pp. 71–74. You (2016). Ancestry Wagner is the great-great-grandson of German composer Richard Wagner and great-great-great-grandson of Franz Liszt. See also Wagner family tree References External links Photography Now Profile Northwestern University alumni Tisch School of the Arts alumni Sciences Po alumni Living people 1982 births Artists from Evanston, Illinois American male sculptors Photographers from Illinois 21st-century American photographers 21st-century American sculptors Draughtsmen American multimedia artists American video artists American installation artists Sculptors from Illinois American people of German descent American people of French descent
Antoine Wagner
Engineering
1,390
4,334,418
https://en.wikipedia.org/wiki/The%20Complexity%20of%20Songs
"The Complexity of Songs" is a scholarly article by computer scientist Donald Knuth published in 1977 as an in-joke about computational complexity theory. The article capitalizes on what it argues is the tendency of popular songs to devolve from long and content-rich ballads to highly repetitive texts with little or no meaningful content. The article states that a song of length N words may be produced remembering, e.g., only words ("space complexity" of the song) or even less. Article summary Knuth writes that "our ancient ancestors invented the concept of refrain" to reduce the space complexity of songs, which becomes crucial when a large number of songs is to be committed to one's memory. Knuth's Lemma 1 states that if N is the length of a song, then the refrain decreases the song complexity to cN, where the factor . Knuth further demonstrates a way of producing songs with complexity, an approach "further improved by a Scottish farmer named O. MacDonald". More ingenious approaches yield songs of complexity , a class known as "m bottles of beer on the wall". Finally, the progress during the 20th century—stimulated by the fact that "the advent of modern drugs has led to demands for still less memory"—leads to the ultimate improvement: Arbitrarily long songs with space complexity exist, e.g. a song defined by the recurrence relation 'That's the way,' 'I like it,' , for all 'uh huh,' 'uh huh' Further developments Prof. Kurt Eisemann of San Diego State University in his letter to the Communications of the ACM further improves the latter seemingly unbeatable estimate. He begins with an observation that for practical applications the value of the "hidden constant" c in the big O notation may be crucial in making the difference between the feasibility and unfeasibility: for example a constant value of 1080 would exceed the capacity of any known device. He further notices that a technique has already been known in Mediaeval Europe whereby textual content of an arbitrary tune can be recorded basing on the recurrence relation , where , yielding the value of the big-O constant c equal to 2. However it turns out that another culture achieved the absolute lower bound of O(0). As Prof. Eisemann puts it: When the Mayflower voyagers first descended on these shores, the native Americans proud of their achievement in the theory of information storage and retrieval, at first welcomed the strangers with the complete silence. This was meant to convey their peak achievement in the complexity of songs, namely the demonstration that a limit as low as c = 0 is indeed obtainable. It is then claimed that the Europeans were unprepared to grasp this notion, and the chiefs, in order to establish a common ground to convey their achievements later proceeded to demonstrate an approach described by the recurrent relation , where , with a suboptimal complexity given by . The O(1) space complexity result was also implemented by Guy L. Steele, Jr., "perhaps challenged by Knuth's [article]." Dr. Steele's TELNET Song used a completely different algorithm based on exponential recursion, a parody on some implementations of TELNET. Darrah Chavey suggested that the complexity analysis of human songs can be a useful pedagogic device for teaching students complexity theory. The article "On Superpolylogarithmic Subexponential Functions" by Prof. Alan Sherman writes that Knuth's article was seminal for analysis of a special class of functions. References External links "The Complexity of Songs", Knuth, Donald E. (1984). 
Computational complexity theory Mathematics of music In-jokes Computer humour Donald Knuth 1977 documents Computer science papers Music and humour
The Complexity of Songs
Mathematics
776
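The O(1) claim above is easy to make concrete: every verse of the song is the same fixed string, so generating an arbitrarily long performance needs only constant memory plus a counter. The sketch below is an illustration in Python (not code from Knuth's paper) of the recurrence S_k = V_k S_{k−1}, with the constant verse V_k built from U = 'uh huh,' 'uh huh'.

```python
# Illustrative sketch of an O(1)-space song: each verse V_k is the same
# constant string, so producing an arbitrarily long song S_k = V_k S_{k-1}
# requires remembering only U and the fixed verse template.
def that_s_the_way_song(num_verses):
    U = "uh huh, uh huh"
    verse = f"That's the way, {U} I like it, {U}"  # V_k, identical for every k
    for _ in range(num_verses):                    # S_k = V_k S_{k-1}
        yield verse

if __name__ == "__main__":
    for line in that_s_the_way_song(3):
        print(line)
```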
55,406,959
https://en.wikipedia.org/wiki/Clearwater%20river%20%28river%20type%29
A clearwater river is classified based on its chemistry, sediments and water colour. Clearwater rivers have a low conductivity, relatively low levels of dissolved solids, typically have a neutral to slightly acidic pH and are very clear with a greenish colour. Clearwater rivers often have fast-flowing sections. The main clearwater rivers are South American and have their source in the Brazilian Plateau or the Guiana Shield. Outside South America the classification is not commonly used, but rivers with clearwater characteristics are found elsewhere. Amazonian rivers fall into three main categories: clearwater, blackwater and whitewater. This classification system was first proposed by Alfred Russel Wallace in 1853 based on water colour, but the types were more clearly defined according to chemistry and physics by Harald Sioli from the 1950s to the 1980s. Although many Amazonian rivers fall clearly into one of these categories, others show a mix of characteristics and may vary depending on season and flood levels. Location The main clearwater rivers are South American and have their source in the Brazilian Plateau or the Guiana Shield. Examples of clearwater rivers originating in the Brazilian Plateau include the Tapajós, Xingu, Tocantins, several large right-bank tributaries of the Madeira (notably the Guaporé, Ji-Paraná and Aripuanã) and the Paraguay (although heavily influenced by its whitewater tributaries). The Tapajós and Xingu alone account for 6% and 5%, respectively, of the water in the Amazon basin. Examples of clearwater rivers originating in the Guiana Shield include the upper Orinoco (above the inflow of the blackwater Atabapo and whitewater Inírida–Guaviare), Ventuari, Nhamundá, Trombetas, Paru, Araguari and Suriname. Outside South America the classification is not commonly used, but rivers with clearwater characteristics are found elsewhere, such as the upper Zambezi River, certain upland streams in major river basins of South and Southeast Asia, and many streams of northern Australia. Chemistry and sediments In South America, clearwater rivers typically have their source in and flow through regions with sandy soils and crystalline rocks. These are generally ancient, of Precambrian origin, and therefore heavily weathered, allowing relatively few sediments to be dissolved in the water. This results in the low conductivity, relatively low levels of dissolved solids and clear colour typical of clearwater rivers. Sand and kaolinite are the typical sediments transported by clearwater rivers, similar to blackwater, but unlike whitewater, which also transports high levels of illite and montmorillonite, resulting in a significantly higher fertility of places influenced by the latter river type. Nevertheless, although clearwater rivers can have extremely low nutrient levels similar to blackwater, some such as the Tapajós, Xingu and Tocantins have nutrient levels that are intermediate between black and whitewater. The exact chemistry of clearwater rivers varies, but it is often very similar to rainwater, low in major nutrients with sodium as the relatively dominant chemical. The water is typically neutral to slightly acidic, but the pH can range between 4.5 and 8. In the Amazon basin, clearwater rivers flowing through regions with sediments of Tertiary age are typically highly acidic, while those flowing through sediments of Carboniferous age are closer to neutral or slightly basic. As suggested by the name, clearwater rivers are highly transparent. 
There can be large variations, even within a single river, depending on season or heavy rains. Ecology The differences in chemistry and visibility between the various black, white and clearwater rivers result in distinct differences in flora and fauna. Although there is considerable overlap in the fauna found in the different river types, there are also many species found only in one of them. Many blackwater and clearwater species are restricted to relatively small parts of the Amazon, as different blackwater and clearwater systems are separated (and therefore isolated) by large whitewater sections. These "barriers" are considered a main force in allopatric speciation in the Amazon basin. Many species of fish, which often are threatened (especially by dams), are known only from clearwater rivers. Large sections with rapids are home to specialized, rheophilic fish, as well as aquatic plants such as Podostemaceae. There are major differences in the amount of macrophytes, and this is mainly related to light: heavily shaded clearwater rivers have few, while those flowing through more open regions often contain many. Clearwater rivers have relatively low productivity compared to whitewater rivers, resulting in a comparably low insect abundance. References Aquatic ecology Rivers
Clearwater river (river type)
Biology
935
21,827,841
https://en.wikipedia.org/wiki/Trolox%20equivalent%20antioxidant%20capacity
The Trolox equivalent antioxidant capacity (TEAC) assay measures the antioxidant capacity of a given substance, as compared to the standard, Trolox. Most commonly, antioxidant capacity is measured using the ABTS Decolorization Assay. Other antioxidant capacity assays which use Trolox as a standard include the diphenylpicrylhydrazyl (DPPH), oxygen radical absorbance capacity (ORAC) and ferric reducing ability of plasma (FRAP) assays. The TEAC assay is often used to measure the antioxidant capacity of foods, beverages and nutritional supplements. References Biochemistry detection reactions
Trolox equivalent antioxidant capacity
Chemistry,Biology
145
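As a rough illustration of what "as compared to the standard, Trolox" means in practice, one common way to report a TEAC value from an ABTS-type decolorization measurement is to fit a Trolox standard curve of percent inhibition against concentration and then express a sample's inhibition in Trolox-equivalent concentration units. The sketch below is hypothetical; the absorbance values, concentrations and the linear-curve assumption are illustrative, not taken from the article or any specific protocol.

```python
# Hedged sketch: express a sample's ABTS radical inhibition as a Trolox
# equivalent using a linear standard curve. All numbers are made up.
import numpy as np

def percent_inhibition(a_blank, a_sample):
    """Decolorization of the ABTS radical relative to the blank."""
    return 100.0 * (a_blank - a_sample) / a_blank

# Hypothetical Trolox standard curve (concentration in micromolar).
trolox_conc = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
trolox_abs = np.array([0.70, 0.61, 0.52, 0.43, 0.34])
a_blank = 0.70

inhibition = percent_inhibition(a_blank, trolox_abs)
slope, intercept = np.polyfit(trolox_conc, inhibition, 1)  # % inhibition per uM

# A test sample's absorbance, expressed as Trolox-equivalent micromolar.
a_test = 0.50
teac_um = (percent_inhibition(a_blank, a_test) - intercept) / slope
print(f"Sample activity ~ {teac_um:.1f} uM Trolox equivalents")
```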
8,813,885
https://en.wikipedia.org/wiki/Octahedral%20prism
In geometry, an octahedral prism is a convex uniform 4-polytope. This 4-polytope has 10 polyhedral cells: 2 octahedra connected by 8 triangular prisms. Alternative names Octahedral dyadic prism (Norman W. Johnson) Ope (Jonathan Bowers, for octahedral prism) Triangular antiprismatic prism Triangular antiprismatic hyperprism Coordinates It is a Hanner polytope with vertex coordinates, permuting first 3 coordinates: ([±1,0,0]; ±1) Structure The octahedral prism consists of two octahedra connected to each other via 8 triangular prisms. The triangular prisms are joined to each other via their square faces. Projections The octahedron-first orthographic projection of the octahedral prism into 3D space has an octahedral envelope. The two octahedral cells project onto the entire volume of this envelope, while the 8 triangular prismic cells project onto its 8 triangular faces. The triangular-prism-first orthographic projection of the octahedral prism into 3D space has a hexagonal prismic envelope. The two octahedral cells project onto the two hexagonal faces. One triangular prismic cell projects onto a triangular prism at the center of the envelope, surrounded by the images of 3 other triangular prismic cells to cover the entire volume of the envelope. The remaining four triangular prismic cells are projected onto the entire volume of the envelope as well, in the same arrangement, except with opposite orientation. Related polytopes It is the second in an infinite series of uniform antiprismatic prisms. It is one of 18 uniform polyhedral prisms created by using uniform prisms to connect pairs of parallel Platonic solids and Archimedean solids. It is one of four four-dimensional Hanner polytopes; the other three are the tesseract, the 16-cell, and the dual of the octahedral prism (a cubical bipyramid). References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26) Norman Johnson Uniform Polytopes, Manuscript (1991) External links Uniform 4-polytopes
Octahedral prism
Physics
479
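The coordinate description above (permutations of (±1, 0, 0) in the first three places, with a fourth coordinate of ±1) can be checked with a few lines of code. The sketch below is only an illustration of that construction: it enumerates the stated coordinates and confirms there are 12 vertices, i.e. one octahedron at w = −1 and one at w = +1.

```python
# Sketch: enumerate the vertices of the octahedral prism from the stated
# coordinates (all permutations of (+-1, 0, 0) in the first three places,
# with the fourth coordinate +-1) and count them.
from itertools import permutations

vertices = set()
for sign in (+1, -1):          # the nonzero octahedron coordinate
    for w in (+1, -1):         # the prism direction
        for perm in permutations((sign, 0, 0)):
            vertices.add(perm + (w,))

print(len(vertices))                      # 12 = 2 octahedra x 6 vertices each
octahedron_at_w1 = {v[:3] for v in vertices if v[3] == 1}
print(sorted(octahedron_at_w1))           # the 6 vertices (+-1,0,0),(0,+-1,0),(0,0,+-1)
```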
173,366
https://en.wikipedia.org/wiki/Mechanization
Mechanization (or mechanisation) is the process of changing from working largely or exclusively by hand or with animals to doing that work with machinery. In an early engineering text, a machine is defined as follows: [...] In some fields, mechanization includes the use of hand tools. In modern usage, such as in engineering or economics, mechanization implies machinery more complex than hand tools and would not include simple devices such as an ungeared horse or donkey mill. Devices that cause speed changes or changes to or from reciprocating to rotary motion, using means such as gears, pulleys or sheaves and belts, shafts, cams and cranks, usually are considered machines. After electrification, when most small machinery was no longer hand powered, mechanization was synonymous with motorized machines. Extension of mechanization of the production process is termed automation, and it is controlled by a closed-loop system in which feedback is provided by sensors. In an automated machine the work of different mechanisms is performed automatically. History Ancient times Water wheels date to the Roman period and were used to grind grain and lift irrigation water. Water-powered bellows were in use on blast furnaces in China in 31 AD. By the 13th century, water wheels powered sawmills and trip hammers, to full cloth and pound flax and later cotton rags into pulp for making paper. Trip hammers are shown crushing ore in De re Metallica (1555). Clocks were some of the most complex early mechanical devices. Clock makers were important developers of machine tools, including gear and screw cutting machines, and were also involved in the mathematical development of gear designs. Clocks were some of the earliest mass-produced items, beginning around 1830. Water-powered bellows for blast furnaces, used in China in ancient times, were in use in Europe by the 15th century. De re Metallica contains drawings related to bellows for blast furnaces, including a fabrication drawing. Improved gear designs decreased wear and increased efficiency. Mathematical gear designs were developed in the mid 17th century. French mathematician and engineer Desargues designed and constructed the first mill with epicycloidal teeth ca. 1650. In the 18th century involute gears, another mathematically derived design, came into use. Involute gears are better for meshing gears of different sizes than epicycloidal ones. Gear cutting machines came into use in the 18th century. Industrial revolution The Newcomen steam engine was first used, to pump water from a mine, in 1712. John Smeaton introduced metal gears and axles to water wheels in the mid to last half of the 18th century. The Industrial Revolution started mainly with textile machinery, such as the spinning jenny (1764) and water frame (1768). Demand for metal parts used in textile machinery led to the invention of many machine tools from the late 1700s until the mid-1800s. After the early decades of the 19th century, iron increasingly replaced wood in gearing and shafts in textile machinery. In the 1840s self-acting machine tools were developed. Machinery was developed to make nails ca. 1810. The Fourdrinier paper machine for continuous production of paper was patented in 1801, displacing the centuries-old hand method of making individual sheets of paper. One of the first mechanical devices used in agriculture was the seed drill invented by Jethro Tull around 1700. 
The seed drill allowed more uniform spacing of seed and planting depth than hand methods, increasing yields and saving valuable seed. In 1817, the first bicycle was invented and used in Germany. Mechanized agriculture greatly increased in the late eighteenth and early nineteenth centuries with horse-drawn reapers and horse-powered threshing machines. By the late nineteenth century steam power was applied to threshing and steam tractors appeared. Internal combustion engines began to be used in tractors in the early twentieth century. Threshing and harvesting were originally done with attachments for tractors, but in the 1930s independently powered combine harvesters were in use. In the mid to late 19th century, hydraulic and pneumatic devices were able to power various mechanical actions, such as positioning tools or work pieces. Pile drivers and steam hammers are examples used for heavy work. In food processing, pneumatic or hydraulic devices could start and stop the filling of cans or bottles on a conveyor. Power steering for automobiles uses hydraulic mechanisms, as does practically all earth moving equipment, other construction equipment and many attachments to tractors. Pneumatic (usually compressed air) power is widely used to operate industrial valves. Twentieth century By the early 20th century machines developed the ability to perform more complex operations that had previously been done by skilled craftsmen. An example is the glass bottle making machine developed in 1905. It replaced highly paid glass blowers and child labor helpers and led to the mass production of glass bottles. After 1900 factories were electrified, and electric motors and controls were used to perform more complicated mechanical operations. This resulted in mechanized processes to manufacture almost all goods. Categories In manufacturing, mechanization replaced hand methods of making goods. Prime movers are devices that convert thermal, potential or kinetic energy into mechanical work. Prime movers include internal combustion engines, combustion turbines (jet engines), water wheels and turbines, windmills and wind turbines, and steam engines and turbines. Powered transportation equipment, such as locomotives, automobiles, trucks and airplanes, is a classification of machinery which includes subclasses by engine type, such as internal combustion, combustion turbine and steam. Inside factories, warehouses, lumber yards and other manufacturing and distribution operations, material handling equipment replaced manual carrying or hand trucks and carts. In mining and excavation, power shovels replaced picks and shovels. Rock and ore crushing had been done for centuries by water-powered trip hammers, but trip hammers have been replaced by modern ore crushers and ball mills. Bulk material handling systems and equipment are used for a variety of materials including coal, ores, grains, sand, gravel and wood products. Construction equipment includes cranes, concrete mixers, concrete pumps, cherry pickers and an assortment of power tools. Powered machinery Powered machinery today usually means machinery powered by an electric motor or an internal combustion engine. Before the first decade of the 20th century, powered usually meant by steam engine, water or wind. Many of the early machines and machine tools were hand powered, but most changed over to water or steam power by the early 19th century. Before electrification, mill and factory power was usually transmitted using a line shaft. 
Electrification allowed individual machines to each be powered by a separate motor in what is called unit drive. Unit drive allowed factories to be better arranged and allowed different machines to run at different speeds. Unit drive also allowed much higher speeds, which was especially important for machine tools. A step beyond mechanization is automation. Early production machinery, such as the glass bottle blowing machine (ca. 1890s), required a lot of operator involvement. By the 1920s fully automatic machines, which required much less operator attention, were being used. Military usage The term is also used in the military to refer to the use of tracked armoured vehicles, particularly armoured personnel carriers, to move troops (mechanized infantry) that would otherwise have marched or ridden trucks into combat. In military terminology, mechanized refers to ground units that can fight from vehicles, while motorized refers to units (motorized infantry) that are transported and go to battle in unarmoured vehicles such as trucks. Thus, a towed artillery unit is considered motorized while a self-propelled one is mechanized. Mechanical vs human labour When we compare the efficiency of a labourer, we see that he has an efficiency of about 1%–5.5% (depending on whether he uses arms, or a combination of arms and legs). Internal combustion engines mostly have an efficiency of about 20%, although large diesel engines, such as those used to power ships, may have efficiencies of nearly 50%. Industrial electric motors have efficiencies up to the low 90% range, before correcting for the conversion efficiency of fuel to electricity of about 35%. When we compare the costs of using an internal combustion engine to those of a worker performing work, we notice that an engine can perform more work at a comparative cost. 1 liter of fossil fuel burnt with an IC engine equals about 50 hands of workers operating for 24 hours, or 275 arms and legs for 24 hours. In addition, the combined work capability of a human is also much lower than that of a machine. An average human worker can provide work good for around 0.9 hp (2.3 MJ per hour), while a machine (depending on the type and size) can provide far greater amounts of work. For example, it takes more than an hour and a half of hard labour to deliver only one kWh, which a small engine could deliver in less than one hour while burning less than one litre of petroleum fuel. This implies that a gang of 20 to 40 men will require a financial compensation for their work at least equal to the required expended food calories (which is at least 4 to 20 times higher). In most situations, the worker will also want compensation for the lost time, which is easily 96 times greater per day. Even if we assume the real wage cost for the human labour to be US $1.00/day, an energy cost of about $4.00/kWh is generated. Despite this being a low wage for hard labour, even in some of the countries with the lowest wages, it represents an energy cost that is significantly more expensive than even exotic power sources such as solar photovoltaic panels (and thus even more expensive when compared to wind energy harvesters or luminescent solar concentrators). Levels of mechanization For simplification, one can study mechanization as a series of steps. Many students refer to this series as indicating basic-to-advanced forms of mechanical society: hand/muscle power; hand-tools; powered hand-tools, e.g. electric-controlled; powered tools, single functioned, fixed cycle; powered tools, multi-functioned, program controlled; powered tools, remote-controlled; powered tools, activated by work-piece (e.g. coin phone); measurement; selected signaling control, e.g. hydro power control; performance recording; automated machine action altered through measurement; segregation/rejection according to measurement; selection of appropriate action cycle; correcting performance after operation; correcting performance during operation. See also Assembly line Bulk materials handling Industrialisation Newly industrialized country References Further reading Secondary sector of the economy Agricultural machinery Armoured warfare Machinery Industrial history
Mechanization
Physics,Technology,Engineering
2,131
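The claim above that one kWh costs "more than an hour and a half of hard labour" follows directly from the quoted human work rate of 2.3 MJ per hour, since 1 kWh is 3.6 MJ by definition. The following snippet is just that unit conversion, included as a quick check rather than anything from the article's sources.

```python
# Sketch: sanity-check the human-vs-machine energy figure quoted above.
human_rate_mj_per_hour = 2.3   # stated sustained human work output (the ~0.9 hp claim)
one_kwh_in_mj = 3.6            # 1 kWh = 3.6 MJ by definition

hours_per_kwh = one_kwh_in_mj / human_rate_mj_per_hour
print(f"Hours of hard labour per kWh: {hours_per_kwh:.2f}")  # ~1.57, i.e. "more than 1.5 h"
```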
7,639,649
https://en.wikipedia.org/wiki/Green%20thread
In computer programming, a green thread is a thread that is scheduled by a runtime library or virtual machine (VM) instead of natively by the underlying operating system (OS). Green threads emulate multithreaded environments without relying on any native OS abilities, and they are managed in user space instead of kernel space, enabling them to work in environments that do not have native thread support. Etymology The term green threads refers to the name of the original thread library for the Java programming language, which was released in version 1.1 and abandoned in favor of native threads in version 1.3. It was designed by The Green Team at Sun Microsystems. History Green threads were briefly available in Java between 1997 and 2000. Green threads share a single operating system thread through co-operative concurrency and can therefore not achieve parallelism performance gains like operating system threads. The main benefit of coroutines and green threads is ease of implementation. Performance On a multi-core processor, native thread implementations can automatically assign work to multiple processors, whereas green thread implementations normally cannot. Green threads can be started much faster on some VMs. On uniprocessor computers, however, the most efficient model has not yet been clearly determined. Benchmarks on computers running the Linux kernel version 2.2 (released in 1999) have shown that: Green threads significantly outperform Linux native threads on thread activation and synchronization. Linux native threads have slightly better performance on input/output (I/O) and context switching operations. When a green thread executes a blocking system call, not only is that thread blocked, but all of the threads within the process are blocked. To avoid that problem, green threads must use non-blocking I/O or asynchronous I/O operations, although the increased complexity on the user side can be reduced if the virtual machine implementing the green threads spawns specific I/O processes (hidden from the user) for each I/O operation. There are also mechanisms which allow use of native threads and reduce the overhead of thread activation and synchronization: Thread pools reduce the cost of spawning a new thread by reusing a limited number of threads. Languages which use virtual machines and native threads can use escape analysis to avoid synchronizing blocks of code when unneeded. Green threads in the Java Virtual Machine In Java 1.1, green threads were the only threading model used by the Java virtual machine (JVM), at least on Solaris. As green threads have some limitations compared to native threads, subsequent Java versions dropped them in favor of native threads. An exception to this is the Squawk virtual machine, which is a mixture between an operating system for low-power devices and a Java virtual machine. It uses green threads to minimize the use of native code, and to support migrating its isolates. Kilim and Quasar are open-source projects which implement green threads on later versions of the JVM by modifying the Java bytecode produced by the Java compiler (Quasar also supports Kotlin and Clojure). Green threads in other languages There are some other programming languages that implement equivalents of green threads instead of native threads. 
Examples: Chicken Scheme uses lightweight user-level threads based on first-class continuations; Common Lisp; CPython natively supports asyncio since version 3.4, and alternative implementations exist such as greenlet, eventlet and gevent; PyPy; Crystal offers fibers; D offers fibers, used for asynchronous I/O; Dyalog APL terms them threads; Erlang; Go implements so-called goroutines; Haskell; Julia uses green threads for its Tasks; Limbo; Lua uses coroutines for concurrency (Lua 5.2 also offers true C coroutine semantics through the functions lua_yieldk, lua_callk, and lua_pcallk, and the CoCo extension allows true C coroutine semantics for Lua 5.1); Nim provides asynchronous I/O and coroutines; OCaml, since version 5.0, supports green threads through the Domainslib.Task module; occam, which prefers the term process instead of thread due to its origins in communicating sequential processes; Perl supports green threads through coroutines; PHP supports green threads through fibers and coroutines; Racket (native threads are also available through Places); Ruby before version 1.9; SML/NJ's implementation of Concurrent ML; Smalltalk (most dialects: Squeak, VisualWorks, GNU Smalltalk, etc.); Stackless Python supports either preemptive multitasking or cooperative multitasking through microthreads (termed tasklets); Tcl has coroutines and an event loop. The Erlang virtual machine has what might be called green processes – they are like operating system processes (they do not share state like threads do) but are implemented within the Erlang Run Time System (erts). These are sometimes termed green threads, but have significant differences from standard green threads. In the case of GHC Haskell, a context switch occurs at the first allocation after a configurable timeout. GHC threads are also potentially run on one or more OS threads during their lifetime (there is a many-to-many relationship between GHC threads and OS threads), allowing for parallelism on symmetric multiprocessing machines, while not creating more costly OS threads than needed to run on the available number of cores. Most Smalltalk virtual machines do not count evaluation steps; however, the VM can still preempt the executing thread on external signals (such as expiring timers, or I/O becoming available). Usually round-robin scheduling is used so that a high-priority process that wakes up regularly will effectively implement time-sharing preemption: [ [(Delay forMilliseconds: 50) wait] repeat ] forkAt: Processor highIOPriority Other implementations, e.g., QKS Smalltalk, are always time-sharing. Unlike most green thread implementations, QKS also supports preventing priority inversion. Differences to virtual threads in the Java Virtual Machine Virtual threads were introduced as a preview feature in Java 19 and stabilized in Java 21. Important differences between virtual threads and green threads are: Virtual threads coexist with existing (non-virtual) platform threads and thread pools. Virtual threads protect their abstraction: Unlike with green threads, sleeping on a virtual thread does not block the underlying carrier thread. Working with thread-local variables is deemphasized, and scoped values are suggested as a more lightweight replacement. Virtual threads can be cheaply suspended and resumed, making use of JVM support for the special jdk.internal.vm.Continuation class. Virtual threads handle blocking calls by transparently unmounting from the carrier thread where possible, otherwise compensating by increasing the number of platform threads. 
See also Async/await Light-weight process Coroutine Java virtual machine Global interpreter lock Fiber (computer science) GNU Portable Threads Protothreads References External links "Four for the ages", JavaWorld article about Green threads Green threads on Java threads FAQ Threads (computing) Java platform
Green thread
Technology
1,484
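To make the idea of threads "scheduled by a runtime library ... in user space" concrete, below is a toy cooperative scheduler built on Python generators. It is an illustration only, not how the Java Green Threads library or any particular VM is implemented: each task yields at voluntary suspension points, a single OS thread multiplexes all of them, and (as the article notes) a blocking call inside any task would stall every other task.

```python
# Toy cooperative "green thread" scheduler: many tasks, one OS thread.
# Tasks are generators; each `yield` is a voluntary suspension point where
# the user-space scheduler switches to the next runnable task.
from collections import deque

def scheduler(tasks):
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)            # run the task until it yields (cooperates)
            ready.append(task)    # still alive: requeue it
        except StopIteration:
            pass                  # task finished

def worker(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                     # cooperative switch; a blocking syscall here
                                  # would block *all* green threads

scheduler([worker("A", 3), worker("B", 2), worker("C", 1)])
```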
61,037,921
https://en.wikipedia.org/wiki/Scale%20cube
The scale cube is a technology model that indicates three methods (or approaches) by which technology platforms may be scaled to meet increasing levels of demand upon the system in question. The three approaches defined by the model are scaling through replication or cloning (the "X axis"), scaling through segmentation along service boundaries or dissimilar components (the "Y axis"), and segmentation or partitioning along similar components (the "Z axis"). History The model was first published in the first edition of The Art of Scalability. The authors claim to have first published the model online in 2007 on their company blog. Subsequent versions of the model were published in the first edition of Scalability Rules in 2011, the second edition of The Art of Scalability in 2015 and the second edition of Scalability Rules in 2016. Model overview The X axis of the model describes scaling a technology solution through multiple instances of the same component, via cloning of a service or replication of a data set. Web and application servers performing the same function may exist behind a load balancer for scaling a solution. Data persistence systems such as a database may be replicated for higher transaction throughput. The Y axis of the model describes scaling a technology solution by separating a monolithic application into services using action words (verbs), or separating "dissimilar" things. Data may be separated by nouns. Services should have the data upon which they act separated and isolated to that service. The Z axis of the cube describes scaling a technology solution by separating components along "similar" boundaries. Such separations may be done on a geographic basis, along customer identity numbers, etc. X axis X axis scaling is the most commonly used approach and tends to be the easiest to implement. Although potentially costly, the speed at which it can be implemented and begin alleviating issues tends to offset the cost. The X axis tends to be a simple copy of a service that is then load balanced to help with either spikes in traffic or server outages. The costs can start to become overwhelming, particularly when dealing with the persistence tier. Pros of X axis scaling: intellectually easy; scales transactions well; quick to implement. Cons of X axis scaling: cost (multiple database copies); does not address caching; does not address organizational scale. Y axis Y axis scaling starts to break away chunks of monolithic code bases and creates separate services, or sometimes microservices. This separation creates clearly defined lanes not only for responsibility and accountability, but also for fault isolation. If one service fails, it should only bring down itself and not other services. Pros of Y axis scaling: allows for organizational scale; scales transactions well; fault isolation; increases cache hit rate. Cons of Y axis scaling: intellectually hard; takes time to implement. Z axis Z axis scaling usually looks at similar use cases of data, whether geographic in nature, based on how customers use the website, or simply a modulus of the customer dataset. The Z axis breaks customers into sequestered sections to benefit response time and to help eliminate issues if a particular region or section should go down. 
Pros of Z axis scaling: intellectually easy; scales transactions well; can provide fault isolation; can improve response times. Cons of Z axis scaling: takes time to implement; does not address organizational scale; requires increased automation to reduce systems overhead. References Computer architecture Computational resources Computer systems Engineering concepts
Scale cube
Technology,Engineering
679
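A minimal sketch of what routing along the three axes can look like in code, with hypothetical service and shard names that do not come from the book: the X axis picks any one of N identical clones, the Y axis routes by the kind of work (verbs/services), and the Z axis routes by a partition of similar things, here a modulus of a customer id.

```python
# Illustrative routing along the scale cube's three axes (hypothetical names).
import random

REPLICAS = ["app-1", "app-2", "app-3"]                            # X axis: identical clones
SERVICES = {"checkout": "checkout-svc", "search": "search-svc"}   # Y axis: split by verb
NUM_SHARDS = 4                                                    # Z axis: split by similar things

def x_axis_pick_replica():
    """Any clone can serve the request; a load balancer picks one."""
    return random.choice(REPLICAS)

def y_axis_pick_service(action):
    """Dissimilar work goes to dissimilar services."""
    return SERVICES[action]

def z_axis_pick_shard(customer_id):
    """Similar work is partitioned, e.g. by a modulus of the customer id."""
    return f"shard-{customer_id % NUM_SHARDS}"

print(x_axis_pick_replica(), y_axis_pick_service("search"), z_axis_pick_shard(42))
```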
71,586,519
https://en.wikipedia.org/wiki/Q.%20Jane%20Wang
Qian Jane Wang is an American professor of mechanical engineering and the Executive Director of the Center for Surface Engineering and Tribology at Northwestern University. She is a tribologist whose research includes work on contact mechanics, lubrication, micromechanics, and solid-state batteries. Education Wang studied mechanical engineering at the Xi'an University of Technology, graduating in 1982. She went to Northern Illinois University for graduate study in mechanical engineering, and earned a master's degree there in 1989. She completed her Ph.D. in 1993 at Northwestern University. Her doctoral advisor was Herbert S. Cheng. Research Wang's group has developed theories, models, and methods for understanding and simulating tribological interfaces and for industrial research and development. The scope of the work includes novel design of tribological surfaces, establishment of unified computational methodologies with a focus on efficient computing, accurate modeling of transient and steady-state tribological responses in polymers, and pioneering methods for the tribological response of materials under strain. Recognition Wang was named a Fellow of the Society of Tribologists and Lubrication Engineers (STLE) in 2007, and an ASME Fellow in 2009. She won the Ralph R. Teetor Educational Award in 2000, and in 2015 the STLE awarded her the STLE International Award, which is considered its highest technical award. She has also won numerous best-paper awards, and her research publications have been cited more than 12,000 times as of September 2022. Professional service Wang's service with STLE includes serving as Chief Editor of the Encyclopedia of Tribology, Chair of the STLE Fellows Committee (2011–2012), Chair of the 2011 ASME/STLE International Joint Tribology Conference, and Chair of the 2008 STLE Annual Meeting Program Committee. Wang worked with fellow Northwestern engineer Yip-Wah Chung to edit the Encyclopedia of Tribology. References External links Home page Living people Chinese mechanical engineers 21st-century Chinese women engineers 21st-century Chinese engineers 21st-century American women engineers 21st-century American engineers American mechanical engineers Tribologists Xi'an University of Technology alumni Northern Illinois University alumni Northwestern University alumni Northwestern University faculty Fellows of the American Society of Mechanical Engineers Year of birth missing (living people)
Q. Jane Wang
Materials_science
457
14,812,371
https://en.wikipedia.org/wiki/SNRPN%20upstream%20reading%20frame%20protein
SNRPN upstream reading frame protein is a protein that in humans is encoded by the SNURF gene. Function This gene encodes a highly basic protein localized to the nucleus. The evolutionarily constrained open reading frame is found on a bicistronic transcript which has a downstream ORF encoding the small nuclear ribonucleoprotein polypeptide N. The upstream coding region utilizes the first three exons of the transcript, a region that has been identified as an imprinting center. Multiple transcription initiation sites have been identified and extensive alternative splicing occurs in the 5' untranslated region but the full-length nature of these transcripts has not been determined. An alternate exon has been identified that substitutes for exon 4 and leads to a truncated, monocistronic transcript. Alternative splicing or deletion caused by a translocation event in the 5' untranslated region or coding region of this gene leads to Angelman syndrome or Prader-Willi syndrome due to parental imprint switch failure. The function of this protein is not yet known. References Further reading
SNRPN upstream reading frame protein
Chemistry
231
951,901
https://en.wikipedia.org/wiki/Rood%20screen
The rood screen (also choir screen, chancel screen, or jubé) is a common feature in late medieval church architecture. It is typically an ornate partition between the chancel and nave, of more or less open tracery constructed of wood, stone, or wrought iron. The rood screen was originally surmounted by a rood loft carrying the Great Rood, a sculptural representation of the Crucifixion. In English, Scottish, and Welsh cathedrals, monastic, and collegiate churches, there were commonly two transverse screens, with a rood screen or rood beam located one bay west of the pulpitum screen, but this double arrangement nowhere survives complete, and accordingly the preserved pulpitum in such churches is sometimes referred to as a rood screen. At Wells Cathedral the medieval arrangement was restored in the 20th century, with the medieval strainer arch supporting a rood, placed in front of the pulpitum and organ. Rood screens can be found in churches in many parts of Europe; however, in Catholic countries they were generally removed during the Counter-Reformation, when the retention of any visual barrier between the laity and the high altar was widely seen as inconsistent with the decrees of the Council of Trent. Accordingly, rood screens now survive in much greater numbers in Anglican and Lutheran churches; with the greatest number of survivals complete with screen and rood figures in Scandinavia. The iconostasis in Eastern Christian churches is a visually similar barrier, but is now generally considered to have a different origin, deriving from the ancient altar screen or templon. Description and origin of the name The word rood is derived from the Saxon word rood or rode, meaning "cross". The rood screen is so called because it was surmounted by the Rood itself, a large figure of the crucified Christ. Commonly, to either side of the Rood, there stood supporting statues of saints, normally Mary and St John, in an arrangement comparable to the Deesis always found in the centre of an Orthodox iconostasis (which uses John the Baptist instead of the Apostle, and a Pantokrator instead of a Crucifixion). Latterly in England and Wales the Rood tended to rise above a narrow loft (called the "rood loft"), which could occasionally be substantial enough to be used as a singing gallery (and might even contain an altar); but whose main purpose was to hold candles to light the rood itself. The panels and uprights of the screen did not support the loft, which instead rested on a substantial transverse beam called the "rood beam" or "candle beam". Access was via a narrow rood stair set into the piers supporting the chancel arch. In parish churches, the space between the rood beam and the chancel arch was commonly filled by a boarded or lath and plaster tympanum, set immediately behind the rood figures and painted with a representation of the Last Judgement. The roof panels of the first bay of the nave were commonly richly decorated to form a celure or canopy of honour; or otherwise there might be a separate celure canopy attached to the front of the chancel arch. The carving or construction of the rood screen often included latticework, which makes it possible to see through the screen partially from the nave into the chancel. The term "chancel" itself derives from the Latin word cancelli meaning "lattice"; a term which had long been applied to the low metalwork or stone screens that delineate the choir enclosure in early medieval Italian cathedrals and major churches. 
The passage through the rood screen was fitted with doors, which were kept locked except during services. The terms pulpitum, Lettner, jubé and doksaal all suggest a screen platform used for readings from scripture, and there is plentiful documentary evidence for this practice in major churches in Europe in the 16th century. From this it was concluded by Victorian liturgists that the specification ad pulpitum for the location for Gospel lections in the rubrics of the Use of Sarum referred both to the cathedral pulpitum screen and the parish rood loft. However, rood stairs in English parish churches are rarely, if ever, found to have been built wide enough to accommodate the Gospel procession required in the Sarum Use. The specific functions of the late medieval parish rood loft, over and above supporting the rood and its lights, remain an issue of conjecture and debate. In this respect it may be significant that, although there are terms for a rood screen in the vernacular languages of Europe, there is no counterpart specific term in liturgical Latin. Nor does the 13th century liturgical commentator Durandus refer directly to rood screens or rood lofts. This is consistent with the ritual uses of rood lofts being substantially a late medieval development. History Early medieval altar screens and chancel screens Until the 6th century the altar of Christian churches would have been in full view of the congregation, separated only by a low altar rail around it. Large churches had a ciborium, or canopy on four columns, over the altar, from which hung altar curtains which were closed at certain points in the liturgy. Then, however, following the example of the church of Hagia Sophia in Constantinople, churches began to surround their altars with a colonnade or templon which supported a decorated architrave beam along which a curtain could be drawn to veil the altar at specific points in the consecration of the Eucharist; and this altar screen, with widely spaced columns, subsequently became standard in the major churches of Rome. In Rome the ritual choir tended to be located west of the altar screen, and this choir area was also surrounded by cancelli, or low chancel screens. These arrangements still survive in the Roman basilicas of San Clemente and Santa Maria in Cosmedin, as well as St Mark's Basilica in Venice. In the Eastern Church, the templon and its associated curtains and decorations evolved into the modern iconostasis. In the Western Church, the cancelli screens of the ritual choir developed into the choir stalls and pulpitum screen of major cathedral and monastic churches; but the colonnaded altar screen was superseded from the 10th century onwards, when the practice developed of raising a canopy or baldacchino, carrying veiling curtains, over the altar itself. Many churches in Ireland and Scotland in the early Middle Ages were very small which may have served the same function as a rood screen. Contemporary sources suggest that the faithful may have remained outside the church for most of the mass; the priest would go outside for the first part of the mass including the reading of the gospel, and return inside the church, out of sight of the faithful, to consecrate the Eucharist. 
Churches built in England in the 7th and 8th centuries consciously copied Roman practices; remains indicating early cancelli screens have been found in the monastic churches of Jarrow and Monkwearmouth, while the churches of the monasteries of Brixworth, Reculver and St Pancras Canterbury have been found to have had arcaded colonnades corresponding to the Roman altar screen, and it may be presumed that these too were equipped with curtains. Equivalent arcaded colonnades also survive in 10th-century monastic churches in Spain, such as San Miguel de Escalada. Some 19th-century liturgists supposed that these early altar screens might have represented the origins of the medieval rood screens; but this view is rejected by most current scholars, who emphasize that these screens were intended to separate the altar from the ritual choir, whereas the medieval rood screen separated the ritual choir from the lay congregation. Great Rood The Great Rood or Rood cross itself long preceded the development of screen lofts, originally being either just hung from the chancel arch or also supported by a plain beam across the arch, and high up, typically at the level of the capitals of the columns (if there are any), or near the point where the arch begins to lean inwards. Numerous near life-size crucifixes survive from the Romanesque period or earlier, with the Gero Cross in Cologne Cathedral (965–970) and the Volto Santo of Lucca the best known. Such crosses are commonly referred to in German as Triumphkreuz or triumphal cross. The prototype may have been one known to have been set up in Charlemagne's Palatine Chapel at Aachen, apparently in gold foil worked over a wooden core in the manner of the Golden Madonna of Essen. The original location and support for the surviving figures is often not clear;  many are now hung on walls - but a number of northern European churches, especially in Germany and Scandinavia, preserve the original setting in full – they are known as a "Triumphkreutz" in German, from the "triumphal arch" (chancel arch in later terms) of Early Christian architecture. As in later examples a Virgin and Saint John often flanked the cross, and cherubim and other figures are sometimes seen. Parochial rood screens For most of the medieval period, there would have been no fixed screen or barrier separating the congregational space from the altar space in parish churches in the Latin West; although as noted above, a curtain might be drawn across the altar at specific points in the Mass. Following the exposition of the doctrine of transubstantiation at the fourth Lateran Council of 1215, clergy were required to ensure that the reserved sacrament was to be kept protected from irreverent access or abuse; and accordingly some form of permanent screen came to be seen as essential, as the parish nave was commonly kept open and used for a wide range of secular purposes. Hence the origin of the chancel screen was independent of the Great Rood; indeed most surviving early screens lack lofts, and do not appear ever to have had a rood cross mounted on them. Nevertheless, over time, the rood beam and its sculptures tended to become incorporated into the chancel screen in new or reworked churches. 
Over the succeeding three centuries, and especially in the latter period when it became standard for the screen to be topped by a rood loft facing the congregation, a range of local ritual practices developed which incorporated the rood and loft into the performance of the liturgy; especially in the Use of Sarum, the form of the missal that was most common in England. For example, during the 40 days of Lent the rood in England was obscured by the Lenten Veil, a large hanging suspended by stays from hooks set into the chancel arch, in such a way that it could be dropped abruptly to the ground on Palm Sunday, at the reading of Matthew 27:51 when the Veil of the Temple is torn asunder. Monastic rood screens The provisions of the Lateran Council had less effect on monastic churches and cathedrals in England, as these would have already been fitted with two transverse screens: a pulpitum screen separating off the ritual choir, and an additional rood screen one bay further west, delineating the area of the nave provided for lay worship (or, in monastic churches of the Cistercian order, delineating the distinct church area reserved for the worship of lay brothers). The monastic rood screen invariably had a nave altar set against its western face, which, from at least the late 11th century onwards, was commonly dedicated to the Holy Cross; as for example in Norwich Cathedral, and in Castle Acre Priory. In the later medieval period many monastic churches erected an additional transverse parclose screen, or fence screen, to the west of the nave altar; an example of which survives as the chancel screen in Dunstable Priory in Bedfordshire. Hence the Rites of Durham, a detailed account of the liturgical arrangements of Durham Cathedral Priory before the Reformation, describes three transverse screens: fence screen, rood screen and pulpitum; and the same triple arrangement is also documented in the collegiate church of Ottery St Mary. In the rest of Europe, this multiple screen arrangement was only found in Cistercian churches, as at Maulbronn Monastery in southern Germany, but many other major churches, such as Albi Cathedral in France, inserted transverse screens in the later medieval period, or reconstructed existing choir screens on a greatly increased scale. In Italy, massive rood screens incorporating an ambo or pulpit facing the nave appear to have been universal in the churches of friars; but not in parish churches, there being no equivalent in the Roman Missal for the ritual elaborations of the Use of Sarum. The screen and Tridentine worship The decrees of the Council of Trent (1545–1563) enjoined that the celebration of the Mass should be made much more accessible to lay worshippers; and this was widely interpreted as requiring the removal of rood screens as physical and visual barriers, even though the council had made no explicit condemnation of screens. Already in 1565, Duke Cosimo de' Medici ordered the removal of the tramezzi from the Florentine friary churches of Santa Croce and Santa Maria Novella in accordance with the principles of the council. In 1577 Carlo Borromeo published his instructions on church building and furnishings, making no mention of the screen and emphasizing the importance of making the high altar visible to all worshippers; and in 1584 the Church of the Gesù was built in Rome as a demonstration of the new principles of Tridentine worship, having an altar rail but conspicuously lacking either a central rood or screen. 
Almost all medieval churches in Italy were subsequently re-ordered following this model; and most screens that impeded the view of the altar were removed, or their screening effect reduced, in other Catholic countries, with exceptions like Toledo Cathedral, Albi Cathedral, the church of Brou in Bourg-en-Bresse; and also in monasteries and convents, where the screen was preserved to maintain the enclosure. In Catholic Europe, parochial rood screens survive in substantial numbers only in Brittany, such as those at Plouvorn, Morbihan and Ploubezre . Symbolic significance The rood screen was a physical and symbolic barrier, separating the chancel, the domain of the clergy, from the nave where lay people gathered to worship. It was also a means of seeing; often it was solid only to waist height and richly decorated with pictures of saints and angels. Concealment and revelation were part of the mediaeval Mass. When kneeling, the congregation could not see the priest, but might do so through the upper part of the screen, when he elevated the Host on Sundays. In some churches, 'squints' (holes in the screen) would ensure that everyone could see the elevation, as seeing the bread made flesh was significant for the congregation. Moreover, while Sunday Masses were very important, there were also weekday services which were celebrated at secondary altars in front of the screen (such as the "Jesus altar", erected for the worship of the Holy Name, a popular devotion in mediaeval times) which thus became the backdrop to the celebration of the Mass. The Rood itself provided a focus for worship according to the medieval Use of Sarum, most especially in Holy Week, when worship was highly elaborate. During Lent the Rood was veiled; on Palm Sunday it was revealed before the procession of palms and the congregation knelt before it. The whole Passion story would then be read from the Rood loft, at the foot of the crucifix by three priests. In the 1400s the rood screen in Dovercourt, UK, became a shrine when it gained a reputation for speaking. Post-Reformation, in England At the Reformation, the Reformers sought to destroy abused images, i.e. those statues and paintings which they alleged to have been the focus of superstitious adoration. Thus not a single mediaeval Rood survives in Britain. They were removed as a result of the 1547 Injunctions of Edward VI (some to be restored when Mary came to the throne and removed again under Elizabeth). Of original rood lofts, also considered suspect due to their association with superstitious veneration, very few are left; surviving examples in Wales being at the ancient churches in Llanelieu, Llanengan and Llanegryn. The rood screens themselves were sometimes demolished or cut down in height, but more commonly remained with their painted figures whitewashed and painted over with religious texts. Tympanums too were whitewashed. English cathedral churches maintained their choirs, and consequently their choir stalls and pulpitum screens; but generally demolished their rood screens entirely, although those of Peterborough and Canterbury survived into the 18th century. In the century following the English Reformation newly built Anglican churches were invariably fitted with chancel screens, which served the purpose of differentiating a separate space in the chancel for communicants at Holy Communion, as was required in the newly adopted Book of Common Prayer. 
In effect, these chancel screens were rood screens without a surmounting loft or crucifix, and examples survive at St John Leeds and at Foremark. New screens were also erected in many medieval churches where they had been destroyed at the Reformation, as at Cartmel Priory and Abbey Dore. From the early 17th century it became normal for screens or tympanums to carry the royal arms of England, good examples of which survive in two of the London churches of Sir Christopher Wren, and also at Derby Cathedral. However, Wren's design for the church of St James, Piccadilly, of 1684 dispensed with a chancel screen, retaining only rails around the altar itself, and this auditory church plan was widely adopted as a model for new churches from then on. In the 18th and 19th centuries hundreds of surviving medieval screens were removed altogether; today, in many British churches, the rood stair (which gave access to the rood loft) is often the only remaining trace of the former rood loft and screen. In the 19th century, the architect Augustus Pugin campaigned for the re-introduction of rood screens into Catholic church architecture. His screens survive in Macclesfield and Cheadle, Staffordshire, although others have been removed. In Anglican churches, under the influence of the Cambridge Camden Society, many medieval screens were restored; though until the 20th century, generally without roods or with only a plain cross rather than a crucifix. A nearly complete restoration can be seen at Eye, Suffolk, where the rood screen dates from 1480. Its missing rood loft was reconstructed by Sir Ninian Comper in 1925, complete with a rood and figures of saints and angels, and gives a good impression of how a full rood group might have appeared in a mediaeval English church - except that the former tympanum has not been replaced. Indeed, because tympanums, repainted with the royal arms, were erroneously considered post-medieval, they were almost all removed in the course of 19th-century restorations. For parish churches, the 19th-century Tractarians tended, however, to prefer an arrangement whereby the chancel was distinguished from the nave only by steps and a low-gated screen wall or septum (as at All Saints, Margaret Street), so as not to obscure the congregation's view of the altar. This arrangement was adopted for almost all new Anglican parish churches of the period. Painted rood screens occur rarely, but some of the best surviving examples are in East Anglia. Notable examples Britain The earliest known example of a parochial rood screen in Britain, dating to the mid-13th century, is to be found at Stanton Harcourt, Oxfordshire; and a notable early stone screen (14th century) is found at Ilkeston, Derbyshire. Both these screens lack lofts, as do all surviving English screens earlier than the 15th century. However, some early screens, now lost, may be presumed to have had a loft surmounted by the Great Rood, as the churches of Colsterworth and Thurlby in Lincolnshire preserve rood stairs which can be dated stylistically to the beginning of the 13th century, and these represent the earliest surviving evidence of parochial screens; effectively contemporary with the Lateran Council. The majority of surviving screens are no earlier than the 15th century, such as those at Trull in Somerset and Attleborough in Norfolk. 
In many East Anglian and Devonian parish churches, original painted decoration survives on wooden screen panels, having been whitewashed over at the Reformation; although almost all have lost their rood beams and lofts, and many have been sawn off at the top of the panelled lower section. Some of the painting and gilding is of a very high order, notably the work of the East Anglian Ranworth school of painters, examples of which can be found in Southwold and Blythburgh, as well as at Ranworth itself. The magnificent painted screen at St Michael and All Angels Church, Barton Turf in Norfolk is unique in giving an unusually complete view of the heavenly hierarchy, including nine orders of angels. Nikolaus Pevsner also identified the early-16th-century painted screen at Bridford, Devon, as being notable. The 16th-century screen at Charlton-on-Otmoor, said by Pevsner to be "the finest in Oxfordshire", has an unusual custom associated with it, where the rood cross is garlanded with flowers and foliage twice a year, and until the 1850s the cross (which at that time resembled a large corn dolly) was carried in a May Day procession. A particularly large example can be found at the Church of St Mary the Virgin, Uffculme, Devon, which is nearly 70 feet in length. See also References Notes Bibliography Further reading Williams, Michael Aufrère, 'Medieval Devon Roodscreens from the Fourteenth Century to the Present Day', The Devon Historian, 83, 2014, pp. 1–13. Williams, Michael Aufrère, 'The Iconography of Medieval Devon Roodscreens', The Devon Historian, 84, 2015, pp. 17–34. Williams, Michael Aufrère, 'Devon Roodscreens after the Reformation: Destruction and Survival', The Devon Historian, 87, 2018, pp. 11–24. External links Norfolkchurches.co.uk/screens Painted screens in Norfolk churches Norfolkchurches.co.uk/norfolkroods More about the painted rood screens of East Anglia Open Library of Francis Bond's standard work 'Screens and Galleries in English Churches' Hi-res images of Ranworth rood screen, Norfolk, UK Architectural elements Church architecture Catholic liturgy Christian religious objects Rood screens
Rood screen
Technology,Engineering
4,617
24,587,695
https://en.wikipedia.org/wiki/AN/MRN-1
The AN/MRN-1 was an instrument approach localizer used by the Army Air Force during and after World War II. It was standardized on 3 July 1942. It replaced the SCR-241, and was a component of SCS-51. Use The transmitter provides a signal to guide the RC-103 equipped aircraft to the centerline of a runway. The set radiates two intersecting field patterns, one of which is modulated at an audio frequency of 90 cycles per second, and the other at an audio frequency of 150 cycles per second. The shape of the radiated patterns is such that they intersect in a vertical plane called the "course", which can be oriented (by positioning the truck) to intersect the ground in a line which coincides with the center-line of a landing runway. The range of the equipment is a function of the elevation of the receiving antenna: approximately 40 miles at an elevation of 2,500 feet, 70 miles at 6,000 feet, and 100 miles at 10,000 feet. The transmitter, BC-751-A, has a frequency range from 108.3 to 110.3 Mc; power output is 25 watts. AN/CRN-3 is the same equipment without the K-53 truck, making it air transportable; its components are housed in a tent. Components The AN/MRN-1 is mounted in a K-53 truck and is made up of the following components. BC-915 control box BC-751 radio transmitter BC-752 modulator and bridge BC-753 course detector, fixed BC-754 course detector, portable BC-755 field intensity meter BC-777 indicator (alarm) RC-109 antenna (5 Alford loops in a horizontal plane) An SCR-610 is provided for ground communications; power is provided by a PE-141 generator (115 volts). Aircraft components The RC-103-A is an airborne localizer receiver used to indicate a landing course in conjunction with the AAF instrument approach system. Signals received from a transmitter, located at one end of the runway to be used, are fed into the cross-pointer indicator to indicate "on course", "fly right" or "fly left". Audio indication is also provided. Antenna system AS-27/ARN-5 is used with the dual installation of the localizer and glide path receivers. Antenna AN-100 is used when only the localizer receiver is installed in the aircraft. RC-103 components include Indicator I-101-C BC-732 control box BC-733 receiver with DM-53 dynamotor AN-100 antenna (localizer only) AS-27/ARN-5 antenna system (combination) See also AN/MRN-3 Instrument landing system List of military electronics of the United States List of U.S. Signal Corps vehicles LORAN Radio navigation SCR-277 SHORAN Signal Corps Radio References TM 11-227 Signal Communication Directory. Dated 10 April 1944. TM 11-487 Electrical Communication Systems Equipment. Dated 2 October 1944. Graphic Survey of Radio and Radar Equipment Used by the Army Airforce. Section 3, Radio Navigation Equipment. Dated May 1945. TO 30-100F-1. Dated 1943.
Further reading External links http://www.footnote.com/image/#46938757 exterior http://aafcollection.info/items/documents/view.php?file=000149-01-03.pdf TO 30-100F-1 1943 http://www.designation-systems.net/usmilav/jetds/an-c.html http://www.flightglobal.com/pdfarchive/view/1949/1949%20-%200728.html SCS-51 http://aafradio.org/NASM/RHAntennas.htm antenna systems http://jproc.ca/rrp/rrp3/argus_bc733d.jpg BC-733 History of air traffic control Military radio systems of the United States Equipment of the United States Air Force World War II American electronics Air traffic control Radio navigation Surveying Wireless locating History of radio in the United States Military equipment introduced from 1940 to 1944 Military electronics of the United States
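To illustrate the principle described above, in which the airborne receiver compares the strengths of the 90 and 150 cycle tones to drive the cross-pointer indicator, here is a brief Python sketch. It is an illustration only, not taken from the cited manuals: comparing the two modulation depths is the general localizer principle, but the sign convention (which tone corresponds to "fly left" or "fly right") and the numeric thresholds are assumptions made for the example.

```python
# Illustrative sketch only (not from the manuals cited above): deriving a
# cockpit indication from the two audio tones radiated by the localizer.
# The side convention and thresholds below are assumptions for illustration.

def course_indication(depth_90hz: float, depth_150hz: float,
                      full_scale: float = 0.155) -> str:
    """Return a cross-pointer style indication from the two tone depths.

    ddm > 0: more 90 Hz received, assumed here to mean the aircraft is
             left of course, so the indication is "fly right".
    ddm < 0: more 150 Hz received, assumed to mean right of course.
    """
    ddm = depth_90hz - depth_150hz          # difference in depth of modulation
    if abs(ddm) < 0.01:                     # assumed "on course" dead band
        return "on course"
    if ddm > 0:
        return "fly right" if ddm < full_scale else "fly right (full scale)"
    return "fly left" if -ddm < full_scale else "fly left (full scale)"

print(course_indication(0.20, 0.20))   # balanced tones -> on course
print(course_indication(0.25, 0.15))   # 90 Hz dominates -> fly right
```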
AN/MRN-1
Technology,Engineering
890
38,104,403
https://en.wikipedia.org/wiki/Polar%20code%20%28coding%20theory%29
In information theory, polar codes are linear block error-correcting codes. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to either have high reliability or low reliability (in other words, they polarize or become sparse), and the data bits are allocated to the most reliable channels. It is the first code with an explicit construction to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC) with polynomial dependence on the gap to capacity. Polar codes were developed by Erdal Arıkan, a professor of electrical engineering at Bilkent University. Notably, polar codes have modest encoding and decoding complexity, on the order of O(N log N) in the block length N, which renders them attractive for many applications. Moreover, the encoding and decoding energy complexity of generalized polar codes can reach the fundamental lower bounds for energy consumption of two-dimensional circuitry to within an O(N^ε) factor for any ε > 0. Industrial applications Polar codes have some limitations when used in industrial applications. Primarily, the original design of the polar codes achieves capacity when block sizes are asymptotically large with a successive cancellation decoder. However, with the block sizes used in industry, the performance of the successive cancellation is poor compared to well-defined and implemented coding schemes such as low-density parity-check code (LDPC) and turbo code. Polar performance can be improved with successive cancellation list decoding, but its usability in real applications is still questionable due to very poor implementation efficiencies caused by the iterative approach. In October 2016, Huawei announced that it had achieved 27 Gbit/s in 5G field trial tests using polar codes for channel coding. The improvements have been introduced so that the channel performance has now almost closed the gap to the Shannon limit, which sets the bar for the maximum rate for a given bandwidth and a given noise level. In November 2016, 3GPP agreed to adopt polar codes for the eMBB (Enhanced Mobile Broadband) control channels for the 5G NR (New Radio) interface. At the same meeting, 3GPP agreed to use LDPC for the corresponding data channel. PAC codes In 2019, Arıkan suggested employing a convolutional pre-transformation before polar coding. These pre-transformed polar codes were dubbed polarization-adjusted convolutional (PAC) codes. It was shown that the pre-transformation can effectively improve the distance properties of polar codes by reducing the number of minimum-weight and in general small-weight codewords, resulting in the improvement of block error rates under near maximum likelihood (ML) decoding algorithms such as Fano decoding and list decoding. Fano decoding is a tree search algorithm that determines the transmitted codeword by utilizing an optimal metric function to efficiently guide the search process. PAC codes are also equivalent to post-transforming polar codes with certain cyclic codes. At short blocklengths, such codes outperform both convolutional codes and CRC-aided list decoding of conventional polar codes. Neural Polar Decoders Neural Polar Decoders (NPDs) are an advancement in channel coding that combines neural networks (NNs) with polar codes, providing unified decoding for channels with or without memory, without requiring an explicit channel model.
They use four neural networks to approximate the functions of polar decoding: the embedding (E) NN, the check-node (F) NN, the bit-node (G) NN, and the embedding-to-LLR (H) NN. The weights of these NNs are determined by estimating the mutual information of the synthetic channels. By the end of training, the weights of the NPD are fixed and can then be used for decoding. The computational complexity of NPDs is determined by the parameterization of the neural networks, unlike successive cancellation (SC) trellis decoders, whose complexity is determined by the channel model and which are typically used for finite-state channels (FSCs). The computational complexity of NPDs scales with the number of hidden units in the neural networks, the dimension of the embedding, and the block length; in contrast, the computational complexity of SC trellis decoders grows with the size of the state space of the channel model. NPDs can be integrated into SC decoding schemes such as SC list decoding and CRC-aided SC decoding. They are also compatible with non-uniform and i.i.d. input distributions by integrating them into the Honda-Yamamoto scheme. This flexibility allows NPDs to be used in various decoding scenarios, improving error correction performance while maintaining manageable computational complexity. References External links AFF3CT home page: A Fast Forward Error Correction Toolbox for high speed polar code simulations in software Error detection and correction Coding theory Capacity-achieving codes Capacity-approaching codes
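As a concrete illustration of the recursive construction and the roughly N log N encoding cost described above, the following Python sketch computes the reliabilities of the synthetic channels for a binary erasure channel and applies the recursive Arıkan kernel to encode a short block. It is an illustration under stated assumptions, not code from the article or from any cited implementation; the choice of a binary erasure channel, the parameters and the function names are mine.

```python
# Illustrative sketch: channel polarization for a binary erasure channel (BEC)
# and the recursive polar transform. Names and parameters are chosen for the
# example; this is not a production encoder.

import numpy as np

def bec_polarization(erasure_prob: float, n_levels: int) -> np.ndarray:
    """Erasure probabilities of the 2**n_levels synthetic channels of a BEC.

    One step maps an erasure probability z to the pair (2z - z**2, z**2):
    the first synthetic channel is worse, the second better. After enough
    levels the values cluster near 0 or 1 -- they polarize.
    """
    z = np.array([erasure_prob])
    for _ in range(n_levels):
        z = np.concatenate([2 * z - z**2, z**2])
    return z

def polar_transform(u: np.ndarray) -> np.ndarray:
    """Encode a length-N bit vector with the kernel [[1, 0], [1, 1]] applied
    recursively (butterfly structure, about (N/2) * log2(N) XOR operations)."""
    x = u.copy()
    n = len(x)
    step = 1
    while step < n:
        for start in range(0, n, 2 * step):
            for i in range(start, start + step):
                x[i] ^= x[i + step]   # combine pair: (u1 + u2 mod 2, u2)
        step *= 2
    return x

if __name__ == "__main__":
    z = bec_polarization(0.5, n_levels=10)          # 1024 synthetic channels
    print("fraction nearly noiseless:", np.mean(z < 1e-3))
    print("fraction nearly useless:  ", np.mean(z > 1 - 1e-3))

    # Put data bits on the most reliable synthetic channels, freeze the rest.
    N, K = 8, 4
    reliab = np.argsort(bec_polarization(0.5, 3))   # lowest erasure prob first
    u = np.zeros(N, dtype=int)
    u[reliab[:K]] = np.random.randint(0, 2, K)      # information bits
    print("codeword:", polar_transform(u))
```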
Polar code (coding theory)
Mathematics,Engineering
1,022
17,699,541
https://en.wikipedia.org/wiki/Filmnet
FilmNet was the name used for several premium television channels in Europe during the 1980s, 1990s and 2000s. It was launched on 9 March 1985, broadcasting with a focus on Scandinavia, the Netherlands and the northern part of Belgium (Flanders). FilmNet channels were later launched in Poland and Greece. History FilmNet was founded by the Swedish company Esselte Video, a division of Swedish office supply manufacturer Esselte, and Dutch film producer Rob Houwer. They formed a partnership with ATN, a joint venture of the Dutch magazine publisher VNU and the European film distribution company United International Pictures, and the channel was launched across Scandinavia and the Benelux countries on 29 March 1985. FilmNet transmitted from the ECS-1 satellite, the same satellite used by cable operators. FilmNet failed to make a profit and was sold to NetHold, a joint venture of the South African MultiChoice company and Richemont, in 1996. The channels were sold to the French Groupe Canal+ on 1 August 1997. The deal did not include the Greek channels, which continued using the FilmNet name until 2008. Although the brand no longer exists, most of its subsidiaries in the different countries live on in some way: Scandinavia: The channel had 3 timeslots: Morning Club for morning, noon and early afternoon programs, Royal Club for late afternoon and evening programs and Night Club for night and after-midnight programs, each broadcasting for 8 hours. As it added more non-film content, such as sport, entertainment and music, the channel was renamed FilmNet Plus. To provide a channel aimed squarely at film lovers, a second FilmNet channel, called The Complete Movie Channel: FilmNet, was launched on most cable networks, featuring only movies. Later, the channels were renamed FilmNet 1 and FilmNet 2. The channels were renamed Canal+ and Canal+ Gul on September 1, 1997. Canal+ sold the company to Nordic Capital and Baker Capital in 2003 and the company was renamed C More Entertainment (although still using the Canal+ name in marketing until 2012). They went on to sell to the SBS Broadcasting Group in 2005, which in turn was merged with ProSiebenSat.1 Media in 2007. In 2008, a deal was entered to sell the company to the Swedish TV4 Group. In October 2012, C More launched a subscription online streaming service under the Filmnet name, which was moved to the main C More websites on 30 June 2015. Netherlands: FilmNet was rebranded as Canal+ in 1997. Canal+ sold the channels in 2005 to Liberty Global, and they were renamed Sport1 and Film1 in February 2006. Sport1 changed its name to Ziggo Sport Totaal in November 2015. Film1 was sold to Sony Pictures Television in 2015. Belgium: The Belgian subsidiary was one of the most successful: it was profitable from mid-1988, and by early 1995 it had 186,000 subscribers. The channels were bought by Canal+ and renamed in 1997. In 2004, they were sold to Telenet and are now known as Play More. The office of FilmNet was located in Brussels. Poland: FilmNet was launched in 1995 and merged into the existing Canal+ channel in February 1997. It continues to exist to this day, under the name Canal+ Premium. Greece: The FilmNet brand came to Greece in 1994, replacing ITA 8. The second channel was called FilmSat, but during 2002 it was renamed FilmNet 2, and there was also a third channel, called FilmNet 3. MultiChoice finally sold its Greek pay-TV business to Forthnet in April 2008.
The FilmNet brand disappeared on June 1, 2008, when the Greek channels were renamed Nova Cinema. Programming FilmNet mainly broadcast films and series, as well as gossip news from E!. In the 1990s, FilmNet also started broadcasting football and other sport events in countries such as Belgium and the Netherlands. K-T.V. K-T.V. was a programming block on FilmNet, featuring various cartoons and original shows with children as presenters. See also K-T.V. SuperSport References Multimedia Television channels and stations established in 1985 Television channels and stations disestablished in 1997 Television channels and stations disestablished in 2008 Defunct television channels in Greece Greek-language television stations 1985 establishments in Europe Defunct television channels in the Netherlands
Filmnet
Technology
888
128,608
https://en.wikipedia.org/wiki/Population%20density
Population density (in agriculture: standing stock or plant density) is a measurement of population per unit land area. It is mostly applied to humans, but sometimes to other living organisms too. It is a key geographical term. Biological population densities Population density is population divided by total land area, sometimes including seas and oceans, as appropriate. Low densities may cause an extinction vortex and further reduce fertility. This is called the Allee effect after the scientist who identified it. Examples of the causes of reduced fertility in low population densities are: Increased problems with locating sexual mates Increased inbreeding Human densities Population density is the number of people per unit of area, usually expressed as the number per square kilometer or per square mile, which may include or exclude, for example, areas of water or glaciers. Commonly this is calculated for a county, city, country, another territory or the entire world. The world's population is around 8,000,000,000 and the Earth's total area (including land and water) is about 510,000,000 km2. Therefore, the worldwide human population density is approximately 8,000,000,000 ÷ 510,000,000 = 16 per km2. However, if only the Earth's land area of roughly 150,000,000 km2 is taken into account, then human population density is over 50 per km2. This includes all continental and island land area, including Antarctica. If Antarctica is excluded, population density rises to nearly 60 per km2. The European Commission's Joint Research Centre (JRC) has developed a suite of (open and free) data and tools named the Global Human Settlement Layer (GHSL) to improve the science for policy support to the European Commission Directorate Generals and Services and as support to the United Nations system. Several of the most densely populated territories in the world are city-states, microstates and urban dependencies. In fact, 95% of the world's population is concentrated on just 10% of the world's land. These territories have a relatively small area and a high urbanization level, with an economically specialized city population drawing also on rural resources outside the area, illustrating the difference between high population density and overpopulation. Deserts have very limited potential for growing crops as there is not enough rain to support them. Thus, their population density is generally low. However, some cities in the Middle East, such as Dubai, have been increasing in population and infrastructure growth at a fast pace. Cities with high population densities are, by some, considered to be overpopulated, though this will depend on factors like quality of housing and infrastructure and access to resources. Very densely populated cities are mostly in Asia (particularly Southeast Asia); Africa's Lagos, Kinshasa, and Cairo; South America's Bogotá, Lima, and São Paulo; and Mexico City and Saint Petersburg also fall into this category. City population and especially area are, however, heavily dependent on the definition of "urban area" used: densities are almost invariably higher for the center only than when suburban settlements and intervening rural areas are included, as in the agglomeration or metropolitan area (the latter sometimes including neighboring cities). In comparison, based on a world population of 8 billion, the world's inhabitants, if conceptualized as a loose crowd occupying just under one square meter per person (cf. Jacobs Method), would occupy an area of about 8,000 km2, a little less than the land area of Puerto Rico (roughly 9,100 km2).
Countries and dependent territories Other methods of measurement Although the arithmetic density is the most common way of measuring population density, several other methods have been developed to provide alternative measures of population density over a specific area. Arithmetic density: The total number of people / area of land Physiological density: The total population / area of arable land Agricultural density: The total rural population / area of arable land Residential density: The number of people living in an urban area / area of residential land Urban density: The number of people inhabiting an urban area / total area of urban land Ecological optimum: The density of population that can be supported by the natural resources Living density: Population density at which the average person lives See also Distance sampling Demography Human geography Idealised population List of population concern organizations Plant density Population dynamics Population decline Population growth Population genetics Population health Population momentum Population pyramid Rural transport problem Significant figures Small population size Global Human Settlement Layer Lists of entities by population density List of Australian suburbs by population density List of countries by population density List of cities by population density List of city districts by population density List of English districts by population density List of European Union cities proper by population density List of islands by population density List of states and territories of the United States by population density Explanatory notes References External links Selected Current and Historic City, Ward & Neighborhood Density Environmental controversies Geography terminology Human overpopulation Population ecology Demography Social science indices
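As a small worked illustration of the measures listed above (a sketch, not part of the article; the area figures are rounded, commonly cited approximations and the names are invented for the example), the following Python snippet shows that the various densities differ only in which population total and which land area are divided.

```python
# Illustrative sketch: the density measures above divide a population by a
# land area; only the choice of numerator and denominator changes.
# Area figures are approximate, commonly cited values.

WORLD_POPULATION      = 8_000_000_000
EARTH_TOTAL_AREA_KM2  = 510_000_000   # land and water, as used in the text
EARTH_LAND_AREA_KM2   = 149_000_000   # approximate land area, incl. Antarctica
WORLD_ARABLE_AREA_KM2 = 14_000_000    # approximate arable land area

def density(population: float, area_km2: float) -> float:
    """People per square kilometer (arithmetic density when area is all land)."""
    return population / area_km2

print(f"arithmetic density (total surface): {density(WORLD_POPULATION, EARTH_TOTAL_AREA_KM2):.1f} per km2")
print(f"arithmetic density (land only):     {density(WORLD_POPULATION, EARTH_LAND_AREA_KM2):.1f} per km2")
print(f"physiological density (arable):     {density(WORLD_POPULATION, WORLD_ARABLE_AREA_KM2):.1f} per km2")
```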
Population density
Environmental_science
972
326,776
https://en.wikipedia.org/wiki/Online%20service%20provider
An online service provider (OSP) can, for example, be an Internet service provider, an email provider, a news provider (press), an entertainment provider (music, movies), a search engine, an e-commerce site, an online banking site, a health site, an official government site, social media, a wiki, or a Usenet newsgroup. In its original more limited definition, it referred only to a commercial computer communication service in which paid members could dial via a computer modem the service's private computer network and access various services and information resources such as bulletin board systems, downloadable files and programs, news articles, chat rooms, and electronic mail services. The term "online service" was also used in references to these dial-up services. The traditional dial-up online service differed from the modern Internet service provider in that they provided a large degree of content that was only accessible by those who subscribed to the online service, while ISP mostly serves to provide access to the Internet and generally provides little if any exclusive content of its own. In the U.S., the Online Copyright Infringement Liability Limitation Act (OCILLA) portion of the U.S. Digital Millennium Copyright Act has expanded the legal definition of online service in two different ways for different portions of the law. It states in section 512(k)(1): (A) As used in subsection (a), the term "service provider" means an entity offering the transmission, routing, or providing of connections for digital online communications, between or among points specified by a user, of material of the user's choosing, without modification to the content of the material as sent or received. (B) As used in this section, other than subsection (a), the term "service provider" means a provider of online services or network access, or the operator of facilities therefore, and includes an entity described in subparagraph (A). These broad definitions make it possible for numerous web businesses to benefit from the OCILLA. History The first commercial online services went live in 1979. CompuServe (owned in the 1980s and 1990s by H&R Block) and The Source (for a time owned by The Reader's Digest) are considered the first major online services created to serve the market of personal computer users. Utilizing text-based interfaces and menus, these services allowed anyone with a modem and communications software to use email, chat, news, financial and stock information, bulletin boards, special interest groups (SIGs), forums and general information. Subscribers could exchange email only with other subscribers of the same service. (For a time a service called DASnet carried mail among several online services, and CompuServe, MCI Mail, and other services experimented with X.400 protocols to exchange email until the Internet rendered these outmoded.) Other text-based online services followed such as Delphi, GEnie and MCI Mail. The 1980s also saw the rise of independent Computer Bulletin Boards, or BBSes. (Online services are not BBSes. An online service may contain an electronic bulletin board, but the term "BBS" is reserved for independent dialup, microcomputer-based services that are usually single-user systems.) The commercial services used pre-existing packet-switched (X.25) data communications networks, or the services' own networks (as with CompuServe). In either case, users dialed into local access points and were connected to remote computer centers where information and services were located. 
As with telephone service, subscribers paid by the minute, with separate day-time and evening/weekend rates. As the use of computers that supported color and graphics, such as the Atari 8-bit computers, Commodore 64, TI-99/4A, Apple II, and early IBM PC compatibles, increased, online services gradually developed framed or partially graphical information displays. Early services such as CompuServe added increasingly sophisticated graphics-based front end software to present their information, though they continued to offer text-based access for those who needed or preferred it. In 1985 Viewtron, which began as a Videotex service requiring a dedicated terminal, introduced software allowing home computer owners access. Beginning in the mid-1980s, graphics-based online services such as PlayNET, Prodigy, and Quantum Link (aka Q-Link) were developed. Quantum Link, which was based on Commodore-only Playnet software, later developed AppleLink Personal Edition, PC-Link (based on Tandy's DeskMate), and Promenade (for IBM), all of which (including Q-Link) were later combined as America Online. These online services presaged the web browser that would change global online life 10 years later. Before Quantum Link, Apple Computer had developed its own service, called AppleLink, which was mostly a support network targeted at Apple dealers and developers. Later, Apple offered the short-lived eWorld, targeted at Mac consumers and based on the Mac version of the America Online software. Beginning in 1992, the Internet, which had previously been limited to government, academic, and corporate research settings, was opened to commercial entities. The first online service to offer Internet access was DELPHI, which had developed TCP/IP access much earlier, in connection with an environmental group that rated Internet access. The explosion of popularity of the World Wide Web in 1994 accelerated the development of the Internet as an information and communication resource for consumers and businesses. The sudden availability of low- to no-cost email and the appearance of free independent web sites broke the business model that had supported the rise of the early online service industry. CompuServe, BIX, AOL, DELPHI, and Prodigy gradually added access to Internet e-mail, Usenet newsgroups, ftp, and web sites. At the same time, they moved from usage-based billing to monthly subscriptions. Similarly, companies that paid to have AOL host their information or early online stores began to develop their own web sites, putting further stress on the economics of the online industry. Only the largest services like AOL (which later acquired CompuServe, just as CompuServe had acquired The Source) were able to make the transition to the Internet-centric world. A new class of online service provider arose to provide access to the Internet: the internet service provider, or ISP. Internet-only service providers like UUNET, The Pipeline, Panix, Netcom, the World, EarthLink, and MindSpring provided no content of their own, concentrating their efforts on making it easy for nontechnical users to install the various software required to "get online" before consumer operating systems came internet-enabled out of the box. In contrast to the online services' multitiered per-minute or per-hour rates, many ISPs offered flat-fee, unlimited access plans. Independent companies sprang up to offer access and packages to compete with the big networks (e.g., the-wire.com in Toronto in 1994 and bway.net in New York in 1995).
These providers first offered access through telephone and modem, just as did the early online service providers. By the early 2000s, these independent ISPs had largely been supplanted by high speed and broadband access through cable and phone companies, as well as wireless access. The online services industry was vital in "paving the road" for the information superhighway. When Mosaic and Netscape were released in 1994, they had a ready audience of more than 10 million people who were able to download their first web browser through an online service. Though ISPs quickly began offering software packages with setup to their customers, this brief period gave many users their first online experience. Two online services in particular, Prodigy and AOL, are often confused with the Internet, or the origins of the Internet. Prodigy's Chief Technical Officer said in 1999: "Eleven years ago, the Internet was just an intangible dream that Prodigy brought to life. Now it is a force to be reckoned with." Despite that statement, neither service provided the backbone for the Internet, nor did either start the Internet. Online service interfaces The first online services used a simple text-based interface in which content was largely text only and users made choices via a command prompt. This allowed just about any computer with a modem and terminal communications program to access these text-based online services. CompuServe would later offer, with the advent of the Apple Macintosh and Microsoft Windows-based PCs, a GUI front-end program for its service. This provided a very rudimentary GUI interface. CompuServe continued to offer text-only access for those needing it. Services like Prodigy and AOL were developed around a GUI from the start and thus, unlike CompuServe's early GUI-based software, provided a more robust graphical interface. Early GUI-based online service interfaces offered little in the way of detailed graphics such as photographs or pictures; largely they were limited to simple icons, buttons and text. As modem speeds increased it became more feasible to offer images and other more complicated graphics to users, giving the services a more polished look. Common resources provided by online services Some of the resources and services online services have provided access to include message boards, chat services, electronic mail, file archives, current news and weather, online encyclopedias, airline reservations, and online games. Major online service providers like CompuServe also served as a way for software and hardware manufacturers to provide online support for their products via forums and file download areas within the online service provider's network. Prior to the advent of the web, such support had to be done either via an online service or a private bulletin board system run by the company and accessed over a direct phone line. Responsibility Depending on the jurisdiction there may be rules exempting an OSP from responsibility for content provided by users, but with a "notice and take down" (NTD) obligation to remove unacceptable content as soon as it is noticed.
See also Videotex Online service provider law Terminal emulator Pre-World Wide Web online services Service provider NSFNet Shell account Connect Business Information Network References External links Online services history Computer-mediated communication Network access Providers
Online service provider
Technology,Engineering
2,113
11,080,599
https://en.wikipedia.org/wiki/Mallee%20%28habit%29
Mallee are trees or shrubs, mainly certain species of eucalypts, which grow with multiple stems springing from an underground lignotuber, usually to a height of no more than . The term is widely used for trees with this growth habit across southern Australia, in the states of Western Australia, South Australia, New South Wales and Victoria, and has given rise to other uses of the term, including the ecosystems where such trees predominate, specific geographic areas within some of the states and as part of various species' names. Etymology The word is thought to originate from the word mali, meaning water, in the Wemba Wemba language, an Aboriginal Australian language of southern New South Wales and Victoria. The word is also used in the closely related Woiwurrung language and other Aboriginal languages of Victoria, South Australia, and southern New South Wales. Overview The term mallee is used to describe various species of trees or woody plants, mainly of the genus Eucalyptus, which grow with multiple stems springing from an underground bulbous woody structure called a lignotuber, or mallee root, usually to a height of no more than . The term is widely used for trees with this growth habit across southern Australia, in the states of Western Australia, South Australia, New South Wales and Victoria. The term is also applied to other eucalypts with a similar growth habit, in particular those in the closely related genera Corymbia and Angophora. Some of the species grow as single-stemmed trees initially, but recover in mallee form if burnt to the ground by bushfire. Over 50 per cent of eucalypt species are mallees, and they are mostly slow-growing and tough. The lignotuber enables the plant to regenerate after fire, wind damage or other types of trauma. Range Mallees are the dominant vegetation throughout semi-arid areas of Australia with reliable winter rainfall. Within this area, they form extensive woodlands and shrublands covering over in New South Wales, north-western Victoria, southern South Australia and southern Western Australia, with the greatest extent being in South Australia (). There are also some species found in the Northern Territory, namely Eucalyptus gamophylla (blue mallee), Eucalyptus pachycarpa and Eucalyptus setosa. Farming on mallee land Grubbing the land of mallee stumps for agricultural purposes was difficult for early settler farmers, as the land could not be easily ploughed and sown even after the trees were removed. In the colony of South Australia in the late 19th century, legislation which encouraged closer settlement made it even tougher for farmers to make a living. Grubbing the mallee lands was a laborious and expensive task estimated at £2–7 per acre, and the government offered a £200 reward for the invention of an effective machine that would remove the stumps. To assist with the challenges of farming on mallee lands, some settlers turned their minds to the invention of technologies that could make some of the tasks easier. First the scrub or mallee roller was invented, which flattened the stumps and other vegetation, after which it would all be burnt and crops sown. The technique became known as "mullenising", as the invention of the device was attributed to a farmer called Mullen. A few years later the stump jump plough was invented on the Yorke Peninsula by Richard Bowyer Smith and perfected by his brother, Clarence Herbert Smith.
This machine had individually movable ploughshares, enabling the whole plough to move over stumps rather than having to steer around them, and proved a great success. Uses of the term The term is applied to both the tree itself and the whole plant community in which it predominates, giving rise to the classification of mallee woodlands and shrublands as one of Australia's major vegetation groups. Several common names of eucalypt species have "mallee" in them, such as the Blue Mountains mallee (Eucalyptus stricta) and blue mallee (E. gamophylla and E. polybractea). The term is used in the phrase strong as a mallee bull, and is used colloquially for any remote or isolated area, or as a synonym for outback. Species Widespread mallee species include: E. dumosa (white mallee) E. socialis (red mallee) E. gracilis (yorrell) E. oleosa (red mallee) E. incrassata (ridge-fruited mallee) E. diversifolia (soap mallee) The following four Western Australian species can be found in the Waite Arboretum in Adelaide, and are suitable for gardens: Eucalyptus pleurocarpa, or tallerack Eucalyptus pyriformis, or dowerin rose Eucalyptus preissiana, or bell-fruited mallee Eucalyptus grossa, or coarse-leaved mallee See also Coppice References Further reading Eucalyptus Habit Mediterranean forests, woodlands, and scrub in Australia Flora of Australia Plant common names Plant life-forms Plant morphology Australian Aboriginal words and phrases
Mallee (habit)
Biology
1,037
40,107,286
https://en.wikipedia.org/wiki/Digital%20scholarship
Digital scholarship is the use of digital evidence, methods of inquiry, research, publication and preservation to achieve scholarly and research goals. Digital scholarship can encompass both scholarly communication using digital media and research on digital media. An important aspect of digital scholarship is the effort to establish digital media and social media as credible, professional and legitimate means of research and communication. Digital scholarship has a close association with digital humanities, often serving as the umbrella term for discipline-agnostic digital research methods. Digital scholarship may also include born-digital means of scholarly communication that are more traditional, like online journals and databases, e-mail correspondence and the digital or digitized collections of research and academic libraries. Since digital scholarship is often concerned with the production and distribution of digital media, discussions about copyright, fair use and digital rights management (DRM) frequently accompany academic analysis of the topic. Combined with open access, digital scholarship is offered as a more affordable and open model for scholarly communication. Development of the concept The concept of digital scholarship (DS) emerged early in the 21st century. DS is described as "discipline-based scholarship produced with digital tools and presented in digital form". It is also considered a research agenda concerned with the impact of Internet and digital technologies that are transforming scholarly practices. These include social and technological factors. In the 2010s, research took mainly two approaches to the subject: the impact of digital infrastructures, in particular the internet, on scholarship, where Christine L. Borgman is a key researcher and where the role of the academic library has been discussed in particular; and the impact of digital scholarship on the institutions and organization of academia. According to Ernest L. Boyer in Scholarship Reconsidered, discovery, integration, application, and teaching are the four main aspects of scholarship. The growth of digital media means that the main areas of scholarship can each benefit from expansion in their own way thanks to the infinite shareability of digital content. In education, the main areas of relevance are science, technology, engineering and math. It is said that students learn best in a classroom when they are actively engaged. The emergence of digital scholarship and digital media allows for another means for students to become engaged. In academia, digital media is chiefly used to illustrate concepts, model displays and reinforce 21st-century skills. Critics cite concerns about the legitimacy, accessibility and verifiability of digital scholarship and the erosion of authors' rights as reasons to be concerned about digital scholarship. As scholarly communication evolves, controversy over the definition and value of the term "digital scholarship" is likely to continue. Digital scholarship must take on cultural, economic, personal, and institutional responsibilities if it is to take its position as academic scholarship and to realize the core purpose of higher education with the possibilities of our time. Digital scholarship also opens new opportunities for collaboration between scholars from different fields through digital platforms. This makes it easier to work together on complex research questions that might be difficult to tackle within traditional academic structures.
Also, digital scholarship's global reach allows a more diverse range of voices and perspectives to contribute to academic conversations. For example, initiatives like Digital Gujarat demonstrate how digital tools can facilitate educational support and collaboration. Intellectual property Concerns with how to regulate digital scholarship have arisen at universities across the world. The explosion in availability and creation of scholarly works has led many universities to adjust their policies on how they will manage scholarship in the future. These universities feel pressured to take action because digital technologies have led to the easy reproduction and commodification of these creations. Many universities are unclear how to address the copyrighting of online classes and media presentations. Current law does not cover these specific areas of media produced in the academic world. In the past any printed work done by professors was considered their intellectual property, but now the question stands as to who owns these different forms of multimedia. One of the main concerns of faculty is that universities will soon take ownership of this digital media. Universities have taken a growing interest in creations that have revenue-generating potential, like online classes or lecture slides, while also showing concern for products that may be used by comparable institutions, potentially reducing their competitive advantage. In order to stay on top of others academically, universities have sought to keep the intellectual property created within the university away from other schools. Not only are universities using digital scholarship to make money and stay ahead, but they also have interests in protecting their brand. While universities attempt to protect digital scholarship, it is in many professors' best interests for their creations to be seen by the world, so as to grow their brand and acclaim as professors. Laws that may apply to digital scholarship are largely outdated, but professors would like to use the argument of faculty ownership of traditional works as historical practice and as practice compatible with the mission of higher education as a public good. Professors argue that it took time and serious effort to make the presentations, slides and media. To date professors have rarely been questioned about whether they have the right to bring their course outlines, lecture outlines, and lecture notes with them if they decide to leave the university where they created them. Change in the ability of professors to take digital scholarship with them is expected, as universities have begun to take notice and assert copyrights. Professors will argue that since they are the creators and authors of the product they are the owners according to law. As of now most copyright laws in America indicate that the author or creator holds the copyright, but the same cannot be said for digital scholarship. The law explicitly states that if the work is within the scope of his or her employment then the work is the property of the employer. Since the employer here would be the university, professors are technically creating work for hire. While university faculty may appear not to be credited for their work, the primary reason for a university to take ownership of a faculty member's work is that the member created the work largely using university funds.
Solutions to the lack of clear laws regarding ownership of digital scholarship are not currently being created but many universities have created written contracts with professors over who owns future work or what they can do with previous work. For example, in the U.S. Supreme Court case Stanford v. Roche, the court decided that Roche, a former Stanford researcher, was a co-owner with Stanford of patents for testing kits to detect HIV. While this case does not deal with digital scholarship directly, it deals with the ownership of intellectual property of university employees when they leave. This case will set a precedent for future decisions with online classes, lecture notes, and outlines. National Education Association policy The National Education Association, the largest professional educational association in the United States, updated its policy on digital learning in 2013. The policy stresses that students need to develop "advanced critical thinking and information literacy skills and master new digital tools", as well as "the initiative to become self-directed learners while adapting to the ever-changing digital information landscape". The National Education Association also believes that digital learning creates an environment in which learning can be more individualized to meet the needs of each student. It mandates that all public schools must do their best to acquire necessary modern technologies and constantly revise teaching plans to incorporate technology where viable to best prepare students for the 21st century. The National Education Association's digital learning policy also states that technology must be used in an adaptive manner as to not become a distraction and to remain a tool, as well as that technology should not become a replacement for instructors, merely a supplement. References External links Open University course on "The digital scholar" Educational technology Digital humanities Scholarly communication Research
Digital scholarship
Technology
1,508
67,333,282
https://en.wikipedia.org/wiki/Foltx
Foltx is a vitamin supplement containing a combination of vitamin B6 (pyridoxine), vitamin B12 (cyanocobalamin), and folic acid (folacin). It may be used to treat hyperhomocysteinemia, a medical condition. References B vitamins
Foltx
Chemistry
66
51,087,701
https://en.wikipedia.org/wiki/Penny%20%28unit%29
In the United States, the length of a nail is designated by its penny size, written with a number and the abbreviation d for penny; for example, 10d for a ten-penny nail. A larger number indicates a longer nail, shown in the table below. Diameter of the nail also varies based on penny size, depending on nail type. Nails under inch, often called brads, are sold mostly in small packages with only a length designation or with length and wire gauge designations; for example, 1″ 18 ga. or ″ 16 ga. Penny sizes originally referred to the price for a hundred (100) or long hundred (120) nails in England in the 15th century: the larger the nail, the higher the cost per long hundred. The system remained in use in England into the 20th century, but is obsolete there today. Nails are still designated in penny sizes in the United States. In Canada, nails are specified by the type and length and are still manufactured to Imperial dimensions. Nail diameter is specified by gauge number (British Imperial Standard). The gauge is the same as the wire diameter used in the manufacture of the nail. The d is an abbreviation for denarius, a Roman coin similar to a penny; this was the abbreviation for the monetary penny in the United Kingdom before decimalisation. References Obsolete units of measurement Units of length Nail (fastener)
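As an illustration of the relationship between penny size and nail length, the short sketch below encodes commonly quoted nominal lengths for modern US common nails. These values are a widely used approximation supplied for the example only; they are not a reproduction of any particular reference or manufacturer's table.

```python
# Illustrative sketch: commonly quoted nominal penny-size-to-length values for
# modern US common nails. Approximate values chosen for the example.

PENNY_LENGTH_INCHES = {
    2: 1.0, 3: 1.25, 4: 1.5, 5: 1.75, 6: 2.0,
    8: 2.5, 10: 3.0, 12: 3.25, 16: 3.5, 20: 4.0,
}

def nail_length(penny: int) -> float:
    """Return the nominal length in inches for a given penny size (e.g. 10 for 10d)."""
    return PENNY_LENGTH_INCHES[penny]

print(f"10d nail: {nail_length(10)} in")   # a ten-penny nail is nominally 3 inches
```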
Penny (unit)
Mathematics
279
66,824,873
https://en.wikipedia.org/wiki/Cold-stunning
Cold-stunning, also known as hypothermic stunning, is a hypothermic reaction experienced by marine reptiles, notably sea turtles, when exposed to cold water for prolonged periods, which causes them to become weak and inactive. Cold-stunned sea turtles may float to the surface and be further exposed to cold temperatures, which can cause them to drown. A water temperature threshold of 8–10 °C has been associated with mass turtle stunning events. After cold-stunning has taken place, there is only a very short period of time when sea turtles can be safely rescued. One study indicates that ocean warming has led to an increase in cold-stunning events in the northwest Atlantic. Notable instances In 2016, 1,700 turtles were cold-stunned in North Carolina, following "an unusually temperate fall and early winter". In 2021, nearly 5,000 cold-stunned turtles were rescued in Texas during a winter storm; it has been called the largest cold-stunning event to be documented in the state. See also Physiology of aquatic reptiles References Animal physiology Animal welfare Thermoregulation
Cold-stunning
Biology
221
11,121,146
https://en.wikipedia.org/wiki/Jet%20noise
In aeroacoustics, jet noise is the field that focuses on the noise generation caused by high-velocity jets and the turbulent eddies generated by shearing flow. Such noise is known as broadband noise and extends well beyond the range of human hearing (100 kHz and higher). Jet noise is also responsible for some of the loudest sounds ever produced by mankind. Sources of jet noise The primary sources of jet noise for a high-speed air jet (meaning when the exhaust velocity exceeds about 100 m/s; 360 km/h; 225 mph) are "jet mixing noise" and, for supersonic flow, shock associated noise. Acoustic sources within the "jet pipe" also contribute to the noise, mainly at lower speeds, which include combustion noise, and sounds produced by interactions of a turbulent stream with fans, compressors, and turbine systems. The jet mixing sound is created by the turbulent mixing of a jet with the ambient fluid, in most cases, air. The mixing initially occurs in an annular shear layer, which grows with the length of the nozzle. The mixing region generally fills the entire jet at four or five diameters from the nozzle. The high-frequency components of the sound are mainly stationed close to the nozzle, where the dimensions of the turbulence eddies are small. Further down the jet, where the eddy size is similar to the jet diameter, is where lower frequency begins. In supersonic or choked jets there are cells through which the flow continuously expands and contracts. Several of these "shock cells" can be seen extending up to ten jet diameters from the nozzle and are responsible for two additional components of jet noise, screech tones, and broadband shock associated noises. Screech is produced by a feedback mechanism in which a disturbance convecting in the shear layer generates sound as it traverses the standing system of shock waves in the jet. Even though screech is a side effect of the jet's flight, it can be suppressed by an appropriate design for a nozzle. Aircraft noise is also sometimes called jet noise when emanating from jet aircraft, regardless of the mechanism of noise production. See also Lighthill's eighth power law QTOL Stealth aircraft References Works cited Khavaran, Abbas. (2012). Acoustic Investigation of Jet Mixing Noise in Dual Stream Nozzles. Cleveland, OH: National Aeronautics and Space Administration, Glenn Research Center. Aircraft noise Fluid dynamics
Jet noise
Chemistry,Engineering
497
64,120,117
https://en.wikipedia.org/wiki/Robodebt%20scheme
The Robodebt scheme was an unlawful method of automated debt assessment and recovery implemented in Australia under the Liberal-National Coalition governments of Tony Abbott, Malcolm Turnbull, and Scott Morrison, and employed by the Australian government agency Services Australia as part of its Centrelink payment compliance program. Put in place in July 2016 and announced to the public in December of the same year, the scheme aimed to replace the formerly manual system of calculating overpayments and issuing debt notices to welfare recipients with an automated data-matching system that compared Centrelink records with averaged income data from the Australian Taxation Office. The scheme has been the subject of considerable controversy, having been criticised by media, academics, advocacy groups, and politicians due to allegations of false or incorrectly calculated debt notices being issued, concerns over impacts on the physical and mental health of debt notice recipients, and questions around the lawfulness of the scheme. Robodebt has been the subject of an investigation by the Commonwealth Ombudsman, two Senate committee inquiries, several legal challenges, and a royal commission, Australia's highest form of public inquiry. In May 2020, the Morrison government announced that it would scrap the debt recovery scheme, with 470,000 wrongly-issued debts to be repaid in full. Amid enormous public pressure, Prime Minister Scott Morrison stated during Question Time that "I would apologise for any hurt or harm in the way that the Government has dealt with that issue and to anyone else who has found themselves in those situations." However, the Morrison government never offered a formal apology before it was voted out of office in 2022. The Australian government lost a 2019 lawsuit over the legality of the income averaging process and settled a class-action lawsuit in 2020. The scheme was further condemned by Federal Court Justice Bernard Murphy in his June 2021 ruling against the government, where he approved a A$1.8 billion settlement, including repayments of debts paid, wiping of outstanding debts, and legal costs. Going into the 2022 Australian federal election, Australian Labor Party (ALP) leader Anthony Albanese pledged to hold a royal commission into the Robodebt scheme if his party was elected. After winning the election, the Albanese government officially commenced the Royal Commission into the Robodebt Scheme in August 2022. The commission handed down its report in July 2023, which called the scheme a "costly failure of public administration, in both human and economic terms", and referred several individuals to law enforcement agencies for prosecution. The report also specifically criticised former Prime Minister Scott Morrison, who oversaw the introduction of the scheme when he was the Minister for Social Services, for misleading Cabinet and failing in his ministerial duties. In October 2022, the Albanese government effectively forgave the debts of 197,000 people that were still under review. In August 2023, the Albanese government passed a formal motion of apology in the House of Representatives, apologising for the scheme on behalf of the Parliament. Origins Background Since the late 1970s, the Australian Tax Office (ATO) has used data-matching systems to compare income data received from external sources with income reported by taxpayers, to ensure taxation compliance. 
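The core of the scheme described above was comparing fortnightly income reported to Centrelink against annual ATO income averaged evenly across the 26 fortnights of a year. The sketch below is a purely hypothetical illustration of why that averaging can manufacture a "debt" for someone whose income was concentrated in part of the year; the free area and taper rate used are invented for the example and are not the actual Centrelink rules.

```python
# Hypothetical person: earns $15,000 in the second half of the year and correctly
# reports zero income for the 13 fortnights in which they received benefits.
FORTNIGHTS = 26
annual_ato_income = 15_000.0
actual_fortnightly = [0.0] * 13 + [annual_ato_income / 13] * 13

averaged = annual_ato_income / FORTNIGHTS   # what income averaging assumes per fortnight

def entitlement_reduction(income: float) -> float:
    """Toy payment reduction: 50 cents per dollar earned above a $100 free area."""
    return max(0.0, income - 100.0) * 0.5

# Overpayment across the 13 benefit fortnights, computed two ways:
using_real_income = sum(entitlement_reduction(x) for x in actual_fortnightly[:13])
using_averaged    = entitlement_reduction(averaged) * 13

print(f"averaged fortnightly income:      ${averaged:,.2f}")
print(f"overpayment from real fortnights: ${using_real_income:,.2f}")   # $0.00
print(f"'debt' produced by averaging:     ${using_averaged:,.2f}")      # over $3,000
```

The gap between the last two figures is the kind of discrepancy that, under the manual IMS process described below, an officer would have investigated before any debt was raised.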
In 2001, Services Australia (then the Department of Human Services) piloted a program that compared a customer’s Centrelink income details with ATO data, to identify discrepancies in the information provided to Centrelink. Where there was a discrepancy, Services Australia would decide if the customer had been overpaid and had a debt that should be recovered. This program (known as the Income Matching System, or IMS) was fully rolled out in 2004. The IMS identified roughly 300,000 possible discrepancies per year. Services Australia would identify and investigate roughly 20,000 of the highest risk discrepancies per year, but were unable to investigate the remaining discrepancies, due to the costs and resources involved in manually investigating and raising debts. The IMS continued largely unchanged until the introduction of the Robodebt scheme in 2016. Creation and announcement In April 2015, measures to create budgetary savings by increasing the pursuit of outstanding debts and investigation of cases of fraud in the Australian welfare system were first flagged by the Minister for Social Services Scott Morrison and the Minister for Human Services Marise Payne, and formally announced by the Abbott government in the 2015 Australian federal budget. Initial estimates in the 2015 budget projected that the scheme would recoup A$1.5 billion for the government. In 2015, the Department of Human Services conducted a two-stage pilot of the Robodebt scheme, targeting debts of selected welfare recipients that were accrued between 2011–2013. Following the 2015 Liberal Party Leadership Spill and 2016 Australian federal election, the Turnbull government implemented an overhaul of the federal welfare budget in an effort to crack down on Centrelink overpayments believed to have occurred between 2010 and 2013 under the Gillard government. On 20 September 2015, Prime Minister Malcolm Turnbull announced that Christian Porter would replace Scott Morrison as Social Services Minister as part of a Cabinet overhaul. In July 2016, the manual system began to be replaced with the Online Compliance Intervention, an automated data-matching technique with less human oversight, capable of identifying and issuing computer-generated debt notices to welfare recipients who had potentially been overpaid. The new system was fully online by September 2016. In December 2016, Minister for Social Services Christian Porter publicly announced the implementation of this new automated debt recovery scheme – which was given the colloquial name "Robodebt" by the media – was estimated to be capable of issuing debt notices at a rate of 20,000 a week. Operation and public reaction Iterations and official names The scheme went through several iterations and formal names, including: PAYG Manual Compliance Intervention program, from 1 July 2015 to 1 July 2016, including the associated pilot programs from early 2015 to 30 June 2015. Online Compliance Intervention from 1 July 2016 to 10 February 2017. Employment Income Confirmation from 11 February 2017 to 30 September 2018. Check and Update Past Income from 30 September 2018 to 29 May 2020. Debt recovery efforts In early January 2017, six months after the commencement of automated debt recovery, it was announced that the scheme had issued 169,000 debt notices and recovered . Based on these figures, it was suggested that a similar automated debt recovery system would be applied to the Aged Pension and Disability Pension, in order to potentially recover a further . 
The 2018 Australian federal budget indicated that the Robodebt data matching scheme would be extended into 2021 with the aim of recovering an additional from welfare recipients. Services Australia announced in September 2019 that expenditure on the Robodebt program was while recouping . Reactions and critiques Opponents of the Robodebt scheme said that errors in the system were leading to welfare recipients paying non-existent debts or debts that were larger than what they actually owed, whilst some welfare recipients had been required to make payments while contesting their debts. In some cases, the debts being pursued dated back further than the ATO requests that Australians retain their documentation. Particular criticism focused on the burden of proof being moved from Centrelink needing to verify the information, to being on the individual to prove they did not owe the funds, with human interaction being very limited in the dispatch of the debt letters. Politicians from the Australian Labor Party, Australian Greens, Pauline Hanson's One Nation, and Independent Andrew Wilkie criticized the scheme and its automated debt calculation methods. The scheme was also criticized by advocacy groups for people affected by poverty, disadvantage, and inequality, including the Australian Council of Social Services (ACOSS) and the Saint Vincent de Paul Society. Allegations of misconduct Allegations levelled against the scheme by the media, former and current welfare recipients, advocacy groups, politicians and relatives of welfare recipients include: Welfare recipients' suicide after receiving automated debt recovery notices for significant sums. Debt notices were issued to deceased people. Issuing debt notices to disability pensioners. Revelations that debt notices were issued to 663 vulnerable people (people with complex needs like mental illness and abuse victims) who died soon after. Initial investigations Commonwealth Ombudsman investigation After the Turnbull government implemented the Robodebt scheme, many recipients of debt notices filed complaints with the Commonwealth Ombudsman. This led to the agency investigating the scheme, with the final report and recommendations delivered in April 2017. The ombudsman recommended that the Department of Human Services (DHS) should: reassess the debts raised by the scheme improve the clarity of debt notices and give customers better information inform customers that their ATO income will be averaged across the relevant period if they do not enter their income information notify welfare recipients that debts based on averaged ATO income may be less accurate help welfare recipients to gather evidence with which to effectively respond to debt notices. The ombudsman also recommended that before expanding the scheme, the DHS should undertake a comprehensive evaluation of the scheme in its current form, and consider how to mitigate the risk of possible over-recovery of debts. First Senate committee inquiry The Robodebt scheme was the subject of a Senate committee inquiry beginning in 2017. The inquiry had a number of findings and made a number of recommendations, including: "That a lack of procedural fairness is evident in every stage of the program, which should be put on hold until all procedural fairness flaws are addressed". "That the Robodebt scheme disempowered people, causing emotional trauma, stress and shame". 
"That the Department of Human Services has a fundamental conflict of interest – the harder it is for people to navigate this system and prove their correct income data, the more money the department recoups". "That the Department of Human Services should resume full responsibility for calculating verifiable debts (including manual checking) relating to income support overpayments, which are based on actual fortnightly earnings and not an assumed average; and provide those issued debt notices with the debt calculation data required to be assured any debts are correct". Legal challenges In February 2019, Legal Aid Victoria announced a federal court challenge of the scheme's calculations used to estimate debt, stating that the calculations assumed that people are working regular, full-time hours when calculating income. In November 2019, the federal government agreed to orders by the Federal Court of Australia in Amato v the Commonwealth that the averaging process using ATO income data to calculate debts was unlawful, and announced that it would no longer raise debts without first gathering evidence – such as payslips – to prove a person had underreported their earnings to Centrelink. In September 2019 Gordon Legal announced their intention of filing a class action suit challenging the legal foundations of the Robodebt system. On 16 November 2020, the day before the trial was due to begin, the Australian government announced that it had struck a deal with Gordon Legal, to settle out-of-court. The deal saw 400,000 victims of Robodebt share in an additional compensation, on top of the additional 470,000 Robodebts (totalling around ) that the Commonwealth government had already agreed to refund or cease pursuing. Demise and further investigations Demise On 29 May 2020, Stuart Robert, Minister for Government Services announced that the Robodebt debt recovery scheme was to be scrapped by the Government, with 470,000 wrongly-issued debts to be repaid in full. Initially, the total sum of the repayments was estimated to be . However, in November 2020 this figure expanded to after the Australian government settled a class-action lawsuit before it could go to trial. On 31 May 2020, Attorney-General Christian Porter, who was Minister for Social Services when the Robodebt system was first implemented, and who had previously defended the scheme, conceded that the use of averaged income data to calculate welfare overpayments was unlawful, stating that there was "no lawful basis for it". After weeks of criticism from the Opposition, in June 2020, Prime Minister Scott Morrison, in response to a question from the opposition concerning a particular victim of the scheme, stated in parliament that "I would apologise for any hurt or harm in the way that the Government has dealt with that issue and to anyone else who has found themselves in those situations". As of 31 July 2020, it was announced that had been repaid to more than 145,000 welfare recipients. On 11 June 2021, the Federal Court approved a A$1.872 billion settlement incorporating repayment of A$751 million, wiping of all remaining debts, and the legal costs running to A$8.4 million. In ruling against the scheme, Justice Bernard Murphy described it as a "shameful chapter in the administration of the commonwealth" and "a massive failure of public administration”. The Federal Treasurer Josh Frydenberg said the government accepted the settlement, but distanced himself from the suicides and mental health issues surrounding the administration of the scheme. 
Services Australia has stated they will commence repayments in 2022 to people who have overpaid according to debt recalculations. In October 2022, the Albanese Government effectively forgave the debts of 197,000 people who were still under review. Second Senate committee inquiry The scheme was again the subject of a Senate committee inquiry, which began in 2019. In the July 2020 hearing, Kathryn Campbell (former head of Services Australia) denied that the scheme had led to welfare recipients suiciding after receiving debt notices, despite allegations from Centrelink staff and the family members of welfare recipients who took their own lives. Senator O'Neill in the August 2020 hearing, read two letters from mothers whose sons died by suicide following the receiving of a Robodebt notice. Initially meant to report its findings in December 2019, the inquiry's deadline was extended six times, with the Senate committee delivering its final report in May 2022. The five interim reports made several findings, including: "That the Robodebt scheme indiscriminately targeted some of Australia’s most vulnerable people, causing significant and widespread harm to their psychological and financial wellbeing". That the use of technology by Government must be supported by appropriate safeguards to protect vulnerable people That the Government had not applied the necessary rigour to ensure that people are always treated fairly That the program had ignored warnings that began within months of the July 2016 start of the scheme, and had continued to issue debt notices that had no basis in law. That the government was still withholding critical information about the Income Compliance Program and the committee had been hindered in producing its final report due to "entrenched resistance and opacity" from ministers and departments. That the Australian public, especially Robodebt victims, deserve to know what advice was provided to Government and how this advice informed decision-making. The sixth and final report made a single recommendation: "That the Commonwealth Government should establish a Royal Commission into the Robodebt scheme". Royal Commission and aftermath In June 2020, the Greens and Labor called for a Royal Commission into Robodebt, to "determine those responsible for the scheme, and its impact on Australians". These calls have been reiterated by university academics, and by ACOSS, which stated that "although some restitution has been delivered to victims of Robodebt, they have not received justice". In May 2022, the sixth and final report from the second Senate inquiry into the scheme recommended a Royal Commission, "to completely understand how the failures of the Income Compliance Program came to pass, and why they were allowed to continue for so long despite the dire impacts on people issued with debts". In June 2020 Labor had stated that only a Royal Commission would be able to obtain the truth about Robodebt. Labor subsequently budgeted $30M in its election costings for the 2022 election for a Royal Commission into the Robodebt Scheme. ACOSS chief executive Cassandra Goldie welcomed this saying "The Robodebt affair was not just a maladministration scandal, it was a human tragedy that resulted in people taking their lives". Following Labor’s election win, Prime Minister Anthony Albanese announced the Royal Commission into the Robodebt Scheme, with Letters Patent issued on 25 August 2022. 
The Royal Commission was chaired by former Queensland Supreme Court Justice Catherine Holmes and was expected to conclude on 18 April 2023. The deadline was extended twice, first until 30 June and later until 7 July 2023. In November 2022 it was disclosed that legal advice before the scheme started was that it did not comply with legislation. Commissioner Catherine Holmes asked DSS lawyer Anne Pulford, "You get an advice in draft, and if it's not favourable you just leave it that way?"; Pulford responded "Yes, Commissioner". The final report of the Royal Commission was released on 7 July 2023. Along with 57 recommendations, a sealed section referred several unnamed individuals for further investigation or action, to four separate bodies. Kathryn Campbell, then working on the AUKUS program at the Department of Defence, was suspended without pay from her role on 20 July. Kathryn Campbell resigned from the Department of Defence effective 21 July 2023. Colleen Taylor, a former employee of the department, received a 2024 King's Birthday Honour for her efforts to expose the scheme. Taylor had tried to raise concerns internally in 2017, and had testified at the Royal Commission. National Anti-Corruption Commission In June 2024, the National Anti-Corruption Commission (NACC) decided not to pursue investigations of six individuals referred to it by the Royal Commission. The NACC stated it was unlikely to obtain new evidence and noted that five out of the six were already under investigation by the Australian Public Service Commission. A former NSW Supreme Court judge, Anthony Whealy, stated that the NACC's refusal to investigate the individuals meant that it had "betrayed its core obligation and failed to carry out its primary statutory duty". The NACC's decision received over 1200 complaints, sparking an independent inquiry into the decision by the Inspector of the NACC, Ms Gail Furness SC. The Inspector obtained documents relating to the decision, and requested submissions from the NACC by October. The Inspector found that Commissioner Paul Brereton had a perceived conflict on interest due to a "close association" with one of the individuals involved, and should have recused himself from the decision. The NACC appointed an independent person to reconsider the decision not to investigate. Australian Public Service Commission In September 2024, the Australian Public Service Commission announced that its investigation into the individuals had concluded, leading to several fines and demotions. No individuals were fired from their role. Following the findings of public service misconduct, lawyers representing the class action announced they would appeal their previous $1.8B settlement, seeking compensation for the further breaches uncovered. See also Dutch childcare benefits scandal British Post Office scandal References Welfare in Australia Public policy in Australia Australia Political controversies in Australia Government by algorithm
Robodebt scheme
Engineering
3,829
70,780,393
https://en.wikipedia.org/wiki/Three%20Women%20%28TV%20series%29
Three Women is an American television limited series based on the 2019 book of the same name by Lisa Taddeo. The series was initially set to premiere on Showtime, but on January 30, 2023, Deadline reported that Showtime had decided not to air the completed series. Starz picked up the series a week later. All 10 episodes of season one premiered on February 16, 2024, in Australia on Stan. In May 2024, Starz announced that the show would premiere on September 13, 2024, with a weekly release. Premise A writer convinces three women, all of whom are on a course to radically change their lives, to tell her their stories. Cast and characters Main Shailene Woodley as Gia DeWanda Wise as Sloane Betty Gilpin as Lina Gabrielle Creevy as Maggie Blair Underwood as Richard John Patrick Amedori as Jack Recurring Ravi Patel as Dr. Henry Austin Stowell as Aidan Lola Kirke as Lily Jason Ralph as Aaron Knodel Blair Redford as Will Fred Savage as Rody Jess Gabor as Billie Brían F. O'Byrne as Mark Wilkin Heather Goldenhersh as Arlene Wilkin Zane Pais as David Wilkin Tony D. Head as Stephen Episodes Production Development In July 2019, Showtime acquired rights to Three Women by Lisa Taddeo, with Taddeo attached to write and executive produce. On January 30, 2023, Deadline reported that Showtime had decided not to air the completed series, amid a reorganization at parent company Paramount Global and a review of Showtime's programming slate, but was being shopped to other services. The series was later picked up by Starz in early February 2023. The series premiered on September 13, 2024. Casting In July 2021, Shailene Woodley and DeWanda Wise joined the cast of the series. In September 2021, Betty Gilpin joined the cast in a series regular capacity, with Ravi Patel joining in recurring capacity. In October 2021, Blair Underwood and Gabrielle Creevy joined the cast in series regular capacity, with Austin Stowell and Lola Kirke joining in recurring capacity. In November 2021, Jason Ralph, Blair Redford and Jess Gabor joined in recurring roles. In December 2021, John Patrick Amedori joined the cast in a series regular capacity. In February 2022, Brían F. O'Byrne and Heather Goldenhersh joined the cast in recurring capacity. Filming Principal photography began by October 2021, taking place in Long Island, New York. In November 2021, scenes were shot in Schenectady, New York. Reception On the review aggregator website Rotten Tomatoes, 33% of 15 critics' reviews are positive, with an average rating of 6.2/10. The website's critics consensus reads, "Betty Gilpin shines in this otherwise disappointingly didactic series, where the diverging story stands never cohere into a satisfying whole." Metacritic, which uses a weighted average, assigned a score of 52 out of 100, based on 11 critics, indicating "mixed or average" reviews. Kylie Northover from The Sydney Morning Herald wrote: "This was never going to be an easy book to adapt, and [the show] feels bloated at 10 episodes, but Maggie and Lina’s stories are compelling; Sloane and her husband (Blair Underwood) are the least relatable, a lot larger than life than they were portrayed in the book." Accolades References External links American English-language television shows Television shows based on non-fiction books Works about sex 2024 American television series debuts Starz original programming
Three Women (TV series)
Biology
722
16,782,463
https://en.wikipedia.org/wiki/HD%20196885%20Ab
HD 196885 Ab (also referred to as HD 196885 b) is a Jovian planet with a minimum mass 2.96 times the mass of Jupiter. This planet was discovered on October 23, 2007. In 2022, the planet's inclination and true mass were measured via astrometry, showing it to be about . References External links Exoplanets discovered in 2007 Giant planets Delphinus Exoplanets detected by radial velocity Exoplanets detected by astrometry
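Radial-velocity detections such as this one yield only a minimum mass, m sin i; once the orbital inclination i is known (here from astrometry), the true mass follows by dividing out sin i. A minimal sketch of that relation, using a made-up inclination rather than the measured value for this planet:

```python
import math

m_sin_i = 2.96                      # minimum mass in Jupiter masses (from the article)
inclination_deg = 60.0              # hypothetical inclination, for illustration only

true_mass = m_sin_i / math.sin(math.radians(inclination_deg))
print(f"true mass ≈ {true_mass:.2f} Jupiter masses")   # ≈ 3.42 for i = 60°
```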
HD 196885 Ab
Astronomy
99
24,969,353
https://en.wikipedia.org/wiki/Plasma%20lamp
Plasma lamps are a type of electrodeless gas-discharge lamp energized by radio frequency (RF) power. They are distinct from the novelty plasma lamps that were popular in the 1980s. The internal-electrodeless lamp was invented by Nikola Tesla after his experimentation with high-frequency currents in evacuated glass tubes for the purposes of lighting and the study of high voltage phenomena. The first practical plasma lamps were the sulfur lamps manufactured by Fusion Lighting. This lamp suffered several practical problems and did not prosper commercially. Plasma lamps with an internal phosphor coating are called external electrode fluorescent lamps (EEFL); these external electrodes or terminal conductors provide the radio frequency electric field. Description Modern plasma lamps are a family of light sources that generate light by exciting plasma inside a closed transparent burner or bulb using radio frequency (RF) power. Typically, such lamps use a noble gas or a mixture of these gases and additional materials such as metal halides, sodium, mercury or sulfur. In modern plasma lamps, a waveguide is used to constrain and focus the electrical field into the plasma. In operation, the gas is ionized, and free electrons, accelerated by the electrical field, collide with gas and metal atoms. Some atomic electrons circling around the gas and metal atoms are excited by these collisions, bringing them to a higher energy state. When the electron falls back to its original state, it emits a photon, resulting in visible light or ultraviolet radiation, depending on the fill materials. The first commercial plasma lamp was an ultraviolet curing lamp with a bulb filled with argon and mercury vapor developed by Fusion UV. That lamp led Fusion Lighting to the development of the sulfur lamp, a bulb filled with argon and sulfur that is bombarded with microwaves through a hollow waveguide. The bulb had to be spun rapidly to prevent the sulfur from burning through. Fusion Lighting did not prosper commercially, but other manufacturers continue to pursue sulfur lamps. Sulfur lamps, though relatively efficient, have had several problems, chiefly: Limited life – Magnetrons had limited lives. Large size Heat – The sulfur burnt through the bulb wall unless it was rotated rapidly. High power demand – They were not able to sustain a plasma in powers under 1000 W. Limited life In the past, the life of the plasma lamps was limited by the magnetron used to generate the microwaves. Solid-state RF chips can be used and give long lives. However, using solid-state chips to generate RF is currently an order of magnitude more expensive than using a magnetron and so only appropriate for high-value lighting niches. It has recently been shown by Dipolar of Sweden to be possible to extend the life of magnetrons to over 40,000 hours, making low-cost plasma lamps possible. Heat and power The use of a high-dielectric waveguide allowed the sustaining of plasmas at much lower powers—down to 100 W in some instances. It also allowed the use of conventional gas-discharge lamp fill materials which removed the need to spin the bulb. The only issue with the ceramic waveguide was that much of the light generated by the plasma was trapped inside the opaque ceramic waveguide. High-efficiency plasma (HEP) High-efficiency plasma lighting is the class of plasma lamps that have system efficiencies of 90 lumens per watt or more. Lamps in this class are potentially the most energy-efficient light source for outdoor, commercial, and industrial lighting. 
This is due not only to their high system efficiency but also to the small light source they present enabling very high luminaire efficiency. Luminaire Efficacy Rating (LER) is the single figure of merit the National Electrical Manufacturers Association has defined to help address problems with lighting manufacturers' efficiency claims and is designed to allow robust comparison between lighting types. It is given by the product of luminaire efficiency (EFF) times total rated lamp output in lumens (TLL) times ballast factor (BF), divided by the input power in watts (IP): LER = EFF × TLL × BF / IP The "system efficiency" for a high-efficiency plasma lamp is given by the last three variables, that is, it excludes the luminaire efficiency. Though plasma lamps do not have a ballast, they have an RF power supply that fulfills the equivalent function. In electrodeless lamps, the inclusion of the electrical losses, or "ballast factor", in lumens per watt claimed can be particularly significant as the conversion of electrical power to radio frequency (RF) power can be a highly inefficient process. Many modern plasma lamps have very small light sources—far smaller than HID bulbs or fluorescent tubes—leading to much higher luminaire efficiencies also. High-intensity discharge lamps have typical luminaire efficiencies of 55%, and fluorescent lamps of 70%. Plasma lamps typically have luminaire efficiencies exceeding 90%. Applications Plasma lamps have been used in high bay and street lighting applications, as well as in stage lighting. They were briefly used in some projection televisions. References Gas discharge lamps Types of lamp Plasma technology and applications
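As a worked illustration of the LER formula quoted above, here are made-up but plausible figures that do not describe any particular lamp; the last three factors give the "system efficiency" discussed in the article, and multiplying by luminaire efficiency gives the LER:

```python
tll = 28_000   # total rated lamp output in lumens (hypothetical)
bf  = 0.92     # "ballast factor": efficiency of the RF power supply (hypothetical)
ip  = 265      # input power in watts (hypothetical)
eff = 0.90     # luminaire efficiency, the ~90% figure cited for plasma lamps

system_efficacy = tll * bf / ip      # lumens per watt, excluding the luminaire
ler = eff * tll * bf / ip            # LER = EFF x TLL x BF / IP
print(f"system efficacy ≈ {system_efficacy:.0f} lm/W, LER ≈ {ler:.0f} lm/W")
```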
Plasma lamp
Physics
1,051
5,234,734
https://en.wikipedia.org/wiki/Logic%20Trunked%20Radio
Logic Trunked Radio (LTR) is a radio system developed in the late 1970s by the E. F. Johnson Company. LTR is distinguished from some other common trunked radio systems in that it does not have a dedicated control channel. LTR systems are limited to 20 channels (repeaters) per site and each site stands alone (not linked). Each repeater has its own controller and all of these controllers are coordinated together. Even though each controller monitors its own channel, one of the channel controllers is assigned to be a master and all the other controllers report to it. Typically on LTR systems, each of these controllers periodically sends out a data burst (approximately every 10 seconds on LTR Standard systems) so that the subscriber units know that the system is there and which channels are in use or available. The idle data burst can be turned off if desired by the system operator. Some systems will broadcast idle data bursts only on channels used as home channels and not on those used for "overflow" conversations. To a listener, the idle data burst sounds like a short blip of static, as if someone keyed up and unkeyed a radio within about 1/4 second. This data burst is not sent at the same time by all the channels but happens randomly throughout all the system channels. References External links Logic Trunked System article from 'Monitoring Times' E.F. Johnson Company website LTR description page at the MRA Company Website Radio electronics Radio resource management Radio networks
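A toy model of the arrangement described above, with one controller per repeater channel, one designated master, and unsynchronised idle data bursts roughly every 10 seconds, is sketched below. It illustrates the timing behaviour only and is not a representation of the actual over-the-air LTR data format.

```python
import random

NUM_CHANNELS = 20          # LTR sites are limited to 20 repeater channels
BURST_INTERVAL_S = 10.0    # approximate idle-burst period on LTR Standard systems
random.seed(1)

class ChannelController:
    def __init__(self, channel: int, is_master: bool):
        self.channel = channel
        self.is_master = is_master
        self.busy = False                                         # carrying a conversation?
        self.next_burst = random.uniform(0.0, BURST_INTERVAL_S)   # bursts are not synchronised

    def tick(self, now: float) -> bool:
        """Return True if this controller emits an idle data burst at time `now`."""
        if not self.busy and now >= self.next_burst:
            self.next_burst = now + BURST_INTERVAL_S
            return True
        return False

controllers = [ChannelController(ch, is_master=(ch == 1)) for ch in range(1, NUM_CHANNELS + 1)]
t = 0.0
while t < 12.0:                        # simulate 12 seconds in quarter-second steps
    for c in controllers:
        if c.tick(t):
            role = "master" if c.is_master else "slave"
            print(f"t={t:5.2f}s  idle burst on channel {c.channel:2d} ({role})")
    t += 0.25
```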
Logic Trunked Radio
Engineering
306
37,017,517
https://en.wikipedia.org/wiki/Nilcurve
In mathematics, a nilcurve is a pointed stable curve over a finite field with an indigenous bundle whose p-curvature is square nilpotent. Nilcurves were introduced by Shinichi Mochizuki as a central concept in his theory of p-adic Teichmüller theory. The nilcurves form a stack over the moduli stack of stable genus g curves with r marked points in characteristic p, of degree p^(3g−3+r). References Algebraic geometry
Nilcurve
Mathematics
98
24,447,073
https://en.wikipedia.org/wiki/Tahoe-LAFS
Tahoe-LAFS (Tahoe Least-Authority File Store) is a free and open, secure, decentralized, fault-tolerant, distributed data store and distributed file system. It can be used as an online backup system, or to serve as a file or Web host similar to Freenet, depending on the front-end used to insert and access files in the Tahoe system. Tahoe can also be used in a RAID-like fashion using multiple disks to make a single large Redundant Array of Inexpensive Nodes (RAIN) pool of reliable data storage. The system is designed and implemented around the "principle of least authority" (POLA), described by Brian Warner (one of the project's original founders) as the idea "that any component of the system should have as little power of authority as it needs to get its job done". Strict adherence to this convention is enabled by the use of cryptographic capabilities that grant requesting agents only the minimum set of privileges necessary to perform a given task. A RAIN array acts as a storage volume; the storage servers do not need to be trusted for the confidentiality or integrity of the stored data. History Tahoe-LAFS was started in 2006 at the online backup services company All My Data and has been actively developed since 2007. In 2008, Brian Warner and Zooko Wilcox-O'Hearn published a paper on Tahoe at the 4th ACM international workshop on Storage security and survivability. When All My Data closed in 2009, Tahoe-LAFS became a free software project under the GNU General Public License or the Transitive Grace License, which allows owners of the code twelve months to profit from their work before releasing it. In 2010, Tahoe-LAFS was mentioned as a tool against censorship by the Electronic Frontier Foundation. In 2013, it was one of the hackathon projects at the GNU 30th anniversary. Functionality The Tahoe-LAFS client sends an unencrypted file via a web API to the HTTPS server. The HTTPS server passes the file off to the Tahoe-LAFS storage client, which encrypts the file and then uses erasure coding to store fragments of the file on multiple storage drives. Tahoe-LAFS features "provider-independent security", in that the integrity and confidentiality of the files are guaranteed by the algorithms used on the client, independent of the storage servers, which may fail or may be operated by untrusted entities. Files are encrypted using AES, then split up using erasure coding, such that only a subset K of the original N servers storing the file chunks need to be available in order to recreate the original file. The default parameters are K=3, N=10, so each file is shared across 10 different servers, and accessing it requires only 3 of those servers to be functioning correctly. Tahoe provides very little control over which nodes the data is stored on. Fork A patched version of Tahoe-LAFS has existed since 2011, made to run on anonymous networks such as I2P, with support for multiple introducers. There is also a version for Microsoft Windows. It is distributed from a site within the I2P network. In contrast to normal Tahoe-LAFS operation, when I2P and Tahoe-LAFS are used together the locations of the nodes are disguised. This allows anonymous distributed grids to be formed.
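With the default K=3-of-N=10 encoding, each file costs N/K ≈ 3.3 times its size in raw storage, and it remains retrievable as long as any 3 of the 10 servers holding shares are reachable. A back-of-the-envelope availability estimate, assuming (purely for illustration) that servers fail independently with the probabilities shown:

```python
from math import comb

def file_availability(n: int, k: int, p_up: float) -> float:
    """Probability that at least k of n independently available servers are up."""
    return sum(comb(n, i) * p_up**i * (1 - p_up)**(n - i) for i in range(k, n + 1))

N, K = 10, 3   # Tahoe-LAFS defaults
print(f"storage expansion factor: {N / K:.2f}x")
for p in (0.80, 0.90, 0.95):
    print(f"per-server availability {p:.2f} -> file availability {file_availability(N, K, p):.6f}")
```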
See also CephFS (file system) Coda (file system) Comparison of distributed file systems Freenet GlusterFS Moose File System LizardFS iFolder List of distributed file systems Lustre (file system) Parallel Virtual File System XtreemFS IPFS References External links Distributed file systems Userspace file systems Free network-related software Free file sharing software Free software programmed in Python File sharing software File sharing software for Linux Virtualization software for Linux Cross-platform software Cross-platform free software Cloud infrastructure Cloud storage Free software for cloud computing I2P
Tahoe-LAFS
Technology
810
73,165,446
https://en.wikipedia.org/wiki/Fling%20%28social%20network%29
Fling was a social media app available for iOS and Android. It was founded in 2014 by Marco Nardone and was taken offline in August 2016. Overview In 2012, Marco Nardone founded the startup Unii and launched Unii.com, a social network intended for students in the UK. While working on this service, Nardone had the idea, in January 2014, for a messaging service where pictures could be sent to strangers. The app Fling was then developed and released between March and July 2014. After a month, it already had 375,000 downloads and 180,000 active users on iOS. Users were able to take pictures inside the app and send them to 50 random people all over the world. The recipient could then choose to answer via chat or reply by sending a picture themselves. The app was used by many users as a medium to exchange sexually explicit pictures and for sexting with strangers. This led to the app being removed from the App Store in June 2015. In the 19 days that followed, Fling's developers rewrote the app almost completely from scratch, working around the clock. The feature to message random strangers was removed, and the app was readmitted into the App Store as a messaging app resembling Snapchat. But the redesigned application did not have the success of its predecessor. The funding ran out and the parent company Unii went bankrupt. The company was no longer able to pay its content moderation team, leading to a new surge of pornographic content on the app. Shortly after that, the social network was taken offline in August 2016. It has been inactive since. During the two years Fling was online, $21 million was raised from investors while generating no revenue at all. Of this $21 million (£16.5m), £5 million came from Nardone's father. Allegations against CEO Former employees made multiple allegations against Marco Nardone, the founder and CEO of Unii and Fling. According to these claims, he behaved erratically and abusively, throwing "things across the office". He hired his girlfriend as the head of human resources to handle issues between him and his staff. Employees who left the company often had "some part of their pay held back". According to the reports, he also spent the money raised from investors irresponsibly, having no clear concept of a budget. Some of that money was used on expensive restaurants in London, a luxurious office for CEO Nardone and advertisements for Fling on Twitter and Facebook. Nardone also spent time partying in Ibiza with two employees, while the developer team in London frantically tried to get Fling back online after it had been removed from the App Store. In December 2017 he pleaded guilty at a domestic violence court to assaulting his girlfriend. References Social media Defunct social networking services Mobile applications Instant messaging clients
Fling (social network)
Technology
573
32,679,816
https://en.wikipedia.org/wiki/ELFV%20dehydrogenase
In molecular biology, the ELFV dehydrogenase family of enzymes includes glutamate, leucine, phenylalanine and valine dehydrogenases. These enzymes are structurally and functionally related. They contain a Gly-rich region containing a conserved Lys residue, which has been implicated in the catalytic activity, in each case a reversible oxidative deamination reaction. Glutamate dehydrogenases (GluDH) are enzymes that catalyse the NAD- and/or NADP-dependent reversible deamination of L-glutamate into alpha-ketoglutarate. GluDH isozymes are generally involved with either ammonia assimilation or glutamate catabolism. Two separate enzymes are present in yeasts: the NADP-dependent enzyme, which catalyses the amination of alpha-ketoglutarate to L-glutamate; and the NAD-dependent enzyme, which catalyses the reverse reaction - this form links the L-amino acids with the Krebs cycle, which provides a major pathway for metabolic interconversion of alpha-amino acids and alpha-keto acids. Leucine dehydrogenase (LeuDH) is an NAD-dependent enzyme that catalyses the reversible deamination of leucine and several other aliphatic amino acids to their keto analogues. Each subunit of this octameric enzyme from Bacillus sphaericus contains 364 amino acids and folds into two domains, separated by a deep cleft. The nicotinamide ring of the NAD+ cofactor binds deep in this cleft, which is thought to close during the hydride transfer step of the catalytic cycle. Phenylalanine dehydrogenase (PheDH) is an NAD-dependent enzyme that catalyses the reversible deamination of L-phenylalanine into phenylpyruvate. Valine dehydrogenase (ValDH) is an NADP-dependent enzyme that catalyses the reversible deamination of L-valine into 3-methyl-2-oxobutanoate. These enzymes contain two domains, an N-terminal dimerisation domain, and a C-terminal domain. References Protein domains
ELFV dehydrogenase
Biology
494
63,694,803
https://en.wikipedia.org/wiki/Master%20mix%20%28PCR%29
A master mix is a mixture containing precursors and enzymes used as an ingredient in polymerase chain reaction (PCR) techniques in molecular biology. Such mixtures contain dNTPs (required as a substrate for the building of new DNA strands), MgCl2, Taq polymerase (the enzyme that builds the new DNA strands) and a pH buffer, and come premixed in nuclease-free water. Master mixes for real-time PCR include a fluorescent compound (frequently SYBR green), and the choice of mix also influences test sensitivity and consistency. Differences in the choice of master mix can sometimes explain differences in experimental results, a particular case being the measurement of telomere length. References DNA sequencing
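In practice a master mix is prepared in bulk and aliquoted, so per-reaction volumes are scaled by the number of reactions, usually with some excess for pipetting losses. The figures below are hypothetical and only illustrate the bookkeeping; they are not a validated recipe, and primers and template are added separately to each reaction.

```python
per_reaction_ul = {            # hypothetical per-reaction volumes, in microlitres
    "10x PCR buffer":      2.5,
    "MgCl2 (25 mM)":       1.5,
    "dNTP mix (10 mM)":    0.5,
    "Taq polymerase":      0.2,
    "nuclease-free water": 17.3,
}
n_reactions = 24
scale = n_reactions * 1.10     # prepare 10% extra to cover pipetting losses

for component, vol in per_reaction_ul.items():
    print(f"{component:22s} {vol * scale:7.1f} µL")
print(f"{'total master mix':22s} {sum(per_reaction_ul.values()) * scale:7.1f} µL")
```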
Master mix (PCR)
Chemistry,Biology
144
63,990,850
https://en.wikipedia.org/wiki/Cone-saturated
In mathematics, specifically in order theory and functional analysis, if C is a cone at 0 in a vector space X such that 0 ∈ C, then a subset S ⊆ X is said to be C-saturated if S = [S]_C, where [S]_C := (S + C) ∩ (S − C). Given a subset S ⊆ X, the C-saturated hull of S is the smallest C-saturated subset of X that contains S. If 𝒮 is a collection of subsets of X then [𝒮]_C := { [S]_C : S ∈ 𝒮 }. If 𝒯 is a collection of subsets of X and if ℱ is a subset of 𝒯, then ℱ is a fundamental subfamily of 𝒯 if every T ∈ 𝒯 is contained as a subset of some element of ℱ. If 𝒢 is a family of subsets of a TVS X then a cone C in X is called a 𝒢-cone if the family of closures { cl [G]_C : G ∈ 𝒢 } is a fundamental subfamily of 𝒢, and C is a strict 𝒢-cone if { [G]_C : G ∈ 𝒢 } is a fundamental subfamily of 𝒢. C-saturated sets play an important role in the theory of ordered topological vector spaces and topological vector lattices. Properties If X is an ordered vector space with positive cone C then: The map S ↦ [S]_C is increasing; that is, if R ⊆ S then [R]_C ⊆ [S]_C. If S is convex then so is [S]_C. When X is considered as a vector space over the field of real numbers, if S is balanced then so is [S]_C. If ℱ is a filter base (resp. a filter) in X then the same is true of [ℱ]_C := { [F]_C : F ∈ ℱ }. See also References Bibliography Functional analysis
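A small worked example, assuming the definition of the saturated hull as [S]_C = (S + C) ∩ (S − C) given above:

```latex
% In X = R^2, take C to be the positive quadrant and S a two-point set.
\[
  X=\mathbb{R}^2,\qquad C=[0,\infty)^2,\qquad S=\{(0,0),\,(1,1)\}.
\]
\[
  S+C=[0,\infty)^2,\qquad S-C=(-\infty,1]^2,
\]
\[
  [S]_C=(S+C)\cap(S-C)=[0,1]^2,
\]
% i.e. the C-saturated hull of the two points is exactly the order interval
% [(0,0),(1,1)] determined by the cone C.
```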
Cone-saturated
Mathematics
219
1,774,909
https://en.wikipedia.org/wiki/Trailer%20%28computing%29
In information technology, a trailer or footer refers to supplemental data (metadata) placed at the end of a block of data being stored or transmitted, which may contain information for the handling of the data block, or simply mark the block's end. In data transmission, the data following the end of the header and preceding the start of the trailer is called the payload or body. It is vital that trailer composition follow a clear and unambiguous specification or format, to allow for parsing. If a trailer is not removed properly, or if part of the payload is mistakenly stripped off as though it were a trailer, the block can be misinterpreted or corrupted. Addressing information such as a packet's destination is normally carried in the header rather than the trailer; trailers more commonly carry error-detection data, such as a frame check sequence, or simply mark the end of the block. Examples In data transfer, the OSI model's data link layer adds a trailer at the end of frames of the data encapsulation. References Computer data
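A minimal sketch of one common pattern, a fixed-size trailer holding an error-detecting code over the payload, much as the frame check sequence at the end of an Ethernet frame does; the 4-byte CRC-32 layout used here is invented for the example:

```python
import struct
import zlib

def add_trailer(payload: bytes) -> bytes:
    """Append a 4-byte big-endian CRC-32 of the payload as the trailer."""
    return payload + struct.pack(">I", zlib.crc32(payload))

def strip_trailer(block: bytes) -> bytes:
    """Remove and verify the trailer, returning only the payload."""
    payload, trailer = block[:-4], block[-4:]
    (crc,) = struct.unpack(">I", trailer)
    if zlib.crc32(payload) != crc:
        raise ValueError("trailer CRC mismatch -- corrupted block or wrong trailer format")
    return payload

block = add_trailer(b"hello, payload")
assert strip_trailer(block) == b"hello, payload"

# Removing the wrong number of bytes (a malformed trailer specification) leaves
# part of the payload in the trailer position and the check fails:
try:
    strip_trailer(block[:-1])
except ValueError as e:
    print(e)
```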
Trailer (computing)
Technology
195
41,648,302
https://en.wikipedia.org/wiki/Shift%20plan
The shift plan, rota or roster (esp. British) is the central component of a shift schedule in shift work. The schedule includes considerations of shift overlap, shift change times and alignment with the clock, vacation, training, shift differentials, holidays, etc. The shift plan determines the sequence of work (W) and free (F) days within a shift system. Notation A notation used often identifies day (D), swing (S) and night (N) shifts for the W days and O (off) for rest days. W work days D day shift, 1st shift, early shift This shift often occurs from either 06:00 or 07:00 to either 14:00 or 15:00 for eight-hour shifts, and from 06:00 to 18:00 for twelve-hour shifts. S swing shift, 2nd shift, late shift, back shift, afternoon shift This shift often occurs from either 14:00 or 15:00 to either 22:00 or 23:00 for eight-hour shifts, and is not used with twelve-hour shifts. N night shift, 3rd shift, graveyard shift This shift often occurs from either 22:00 or 23:00 to either 06:00 or 07:00 for eight-hour shifts, and from 18:00 to 06:00 for twelve-hour shifts. F free days O days off This is defined as a day on which a shift does not begin. A~F work teams (starts from A as first team) Note that a worker transitioning from N to O works for the first six or seven hours of the first day "off". Thus, when days off follow night shifts, the first one or more days "off" are, in fact, days of recovery from lack of nighttime sleep. This daily notation refers to the start of a shift. If a shift starts at 23:00, then this is a W day even though only one hour is worked. The day after this shift is an F day if no shift starts on this day, though many hours have been worked from midnight on. One shift system may allow many shift plans. For example, the twelve-hour, 2nW:2nF system with n = 1 allows twelve different plans in three serially-identical sets. Within a set, DONO has the same sequence as NODO. DNOO is the preferred sequence because days off follow night work and there are two consecutive days off. 3-day shift plans Prior to 2014, the U.S. Navy used a three shift system with an 18-hour day instead of a 24-hour day. The 24-hour period was divided into four shifts: 00:00-06:00, 06:00-12:00, 12:00-18:00, and 18:00-00:00. A sailor stood watch on their shift. During the off shift there is time to perform maintenance, study for qualifications, and handle collateral duties. During off time the sailor has time to sleep, relax, and perform personal tasks, such as laundry. With sufficient personnel, a given watchstation may benefit from a fourth man (the midnight cowboy or "Balls-to-6"). He would stand the same 6-hour watch in a given 24-hour period, usually from midnight to 06:00 (hence the midnight portion of the name, often shortened to cowboy) and the normal watchstander would then be free. This gave rise to a schedule of six on, twelve off, six on, thirty off, six on, twelve off. Beginning in 2014, the Submarine Force began shifting to a 24-hour day, with watches split into 8 hours on, 16 hours off. This does have the side effect of sailors assigned to a certain shift having the same meals every day, and so the shifts are periodically rotated in order to provide variety. The Surface Fleet began its shift in 2017, transitioning from their "five and dimes" approach of 5 hours on, 10 hours off. This does not apply to the attached air wing, which will work a 12 on, 12 off schedule 7 days a week. 
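The notation above is easy to expand mechanically. The sketch below (illustrative only) unrolls the twelve-hour DNOO plan across four teams, each offset by one day, showing that day and night are covered every day and computing the 42-hour average week such 2W:2F twelve-hour plans produce:

```python
# Expand the DNOO pattern (day, night, off, off) for four teams A-D.
PATTERN = "DNOO"
SHIFT_HOURS = 12

teams = {team: [PATTERN[(day - offset) % len(PATTERN)] for day in range(8)]
         for offset, team in enumerate("ABCD")}

for team, days in teams.items():
    print(team, " ".join(days))          # each column has exactly one D and one N

work_days_per_cycle = sum(1 for c in PATTERN if c != "O")
avg_weekly_hours = work_days_per_cycle * SHIFT_HOURS * 7 / len(PATTERN)
print(f"average hours per week: {avg_weekly_hours:.0f}")   # 2 * 12 * 7 / 4 = 42
```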
4-day shift plans In the 12/24/12/48 or 12/24 plan, employees work in shifts of 12 hours; first a "daily shift" (e.g. 06:00 to 18:00), followed by 24 hours' rest, then a "nightly shift" (18:00 to 06:00), finishing with 48 hours' rest. This plan needs four teams for full coverage, and makes an average 42-hour workweek. The pattern repeats in a 4-week cycle, i.e. over 28 days, and has 14 shifts per employee therein. 5-day shift plans In four on, one off the employee only gets one day off after a work streak of four days. There are 28 shifts per employee in a five-week cycle (i.e. 35 days). This adds up to an average of 42 hours worked per week with 7½-hour shifts. This plan is mainly adopted by industries in which companies prefers to work for all days of the week, often with four (overlapping) shifts per day, and where laws do not let employees work for 12 hours a day for several days. Five groups of employees are needed to cover a specific shift on all days, where each group gets a different day off. 6-day shift plans In four on, two off the employee gets two days off. There are 28 shifts per employee in a six-week cycle (i.e. 42 days), this adds up to an average of 56 hours worked per week with 12-hour shifts, or hours per week with 8-hour shifts. Three groups are needed for each time span, i.e. to cover the whole day and week a company needs 6 groups for 12-hour shifts or 9 groups for 8-hour shifts. This plan is mainly adopted by industries in which employees do not engage in much physical activity. Week shift plans Three-shifts The three-shift system is the most common plan for five 24-hour days per week. The "first shift" often runs from 06:00 to 14:00, "second shift" or "swing shift" from 14:00 to 22:00 and a "third shift" or "night shift" from 22:00 to 06:00, but shifts may also have different length to accommodate for workload, e.g. 7, 8 and 9 or 6, 8 and 10 hours. To provide coverage 24/7, employees have their days off ("weekends") on different days. All of the shifts have desirable and less desirable qualities. First shift has very early starts, so time in the evening before is heavily cut short. The second shift occupies the times during which many people finish work and socialize. The third shift creates a situation in which the employee must sleep during the day; it may be preferred for night owls, for whom this is a desired sleep pattern. To provide an overlap in shifts, some employers may require one of the shifts to work four 10-hour shifts per week (as opposed to five 8-hour shifts, both are 40 hours per week). In that scenario, the night shift might extend from 21:00 to 07:00, but the night- shift would have nearly four days off (86 hours) between work weeks. This change, along with first shift moving a half-hour later, or second moving a half-hour earlier, ensures at least a half-hour overlap between shifts, which might be desirable if the business is open to the public to ensure that customers continue to be served during a shift change. Some U.S. states, such as California, accommodate this arrangement by allowing the employee to be paid at their regular rate (as opposed to time-and-a-half, or an overtime rate, that would normally be required for any time past 8 hours) for the 10-hour shift, calling this an "alternative workweek". Four on, three off In four on, three off, each employee works four days and gets a three-day weekend. For some types of manufacturing, this is a win-win arrangement. 
For example, a paint company had been making 3 batches of paint per day, Monday through Friday (3 × 5 = 15). They changed to making 4 batches of paint, Monday through Thursday (4 × 4 = 16). Total worker hours remained the same, but profits increased. In exchange for two additional hours of work per day, over 4 days, workers got an additional day off every week. See also the book, 4 Days, 40 Hours. Continental plan Continental plan, adopted primarily in central Europe, is a rapidly changing three-shift system that is usually worked for seven days straight, after which employees are given time off, e.g. 3 mornings, 2 afternoons and then 2 nights. 24*7 shifts In the 24*7 plan there are 24 consecutive shifts of 7 hours per week, hence covering 24/7. With 4 groups and 6 shifts per group, the work time is 42 hours per week. Several sub-patterns are possible, but usually each group is responsible for one of four time slots per day. Each of these is 6 hours long and if a shift begins in their time slot, a group has to work it. This way there are 14, 21 or 42 hours of rest between shifts, every group gets one whole day off. Shifts can be swapped to make double-shifts and increase the minimum time of rest. Split shift Split shift is used primarily in the catering, transport, hotel, and hospitality industry. Waiters and chefs work for four hours in the morning (to prepare and serve Lunch), then four hours in the evening (for an Evening meal). The average working day of a chef on split shifts could be 10:00 to 14:00 and then 17:00 to 21:00 Earlies and lates Earlies and lates is used primarily in industries such as customer service (help desk, phone-support), convenience stores, child care (day nurseries), and other businesses that require coverage greater than the average 09:00 to 17:00 working day in the UK, but no 24/7 coverage either. Employees work in two shifts that largely overlap, such as early shift from 08:00 to 16:00 and late shift from 10:00 to 18:00 In businesses where two shifts are necessary to cover the day, earlies and lates may be combined with one double shift per week per worker. Six 7-hour shifts in five days and seven 6-hour shifts in six days both result in 42 hours per week. 28-hour day The 6-day week with 28 hours per "day" is a general concept for full week coverage where the 168 hours of the week are grouped differently. It can be used as a base for several shift plans, e.g. four 7-hour shifts per day where every employee works six shifts for a total of 42 hours per week. 21-hour day The 8-day week with 21 hours per "day" is a general concept for full week coverage where the 168 hours of the week are grouped differently. It can be used as a base for several shift plans, e.g. three 7-hour shifts per day where every employee works six shifts for a total of 42 hours per week, but to get whole days off groups work alternating double shifts. 8-day shift plans Four on, four off is a shift plan that is being heavily adopted in the United Kingdom and in some parts of the United States. An employee works for four days, usually in 12-hour shifts (7:00 to 7:00) then has four days off. While this creates a "48-hour week" (42-hour average over the year) with long shifts, it may be preferred because it shrinks the work week down to four days, and then gives the employee four days' rest—double the time of a usual weekend. Due to the plan, employees effectively work an eight-day week, and the days they work vary by "week". 
As with three-shift systems, most employees stay with the same shift rather than cycling through them. A variation of the four on, four off plan is the two days, two nights, four off plan of working, or 2-2-4. Like the previous example it requires four separate teams to maintain 24/7 coverage. The difference is that all employees work both day and night shifts. Usually employees have to work 12-hour shifts from 06:00 to 18:00 on day shifts and from 18:00 to 06:00 on nights. This plan is currently in use in the UK by HM Coastguard and some ambulance services. A similar shift pattern is used by fire services such as London Fire Brigade, where the night shifts are longer than the day shifts. This may be referred to as a ten-fourteen roster, if the day shift lasts for ten hours and the night shift lasts fourteen. Extended night shifts such as these are often a double edged sword; on one hand crews on slower weeknight shifts, or those in areas of low demand will receive excellent levels of rest (when there are no calls for emergency services, crews are encouraged to rest if required). Conversely, those employed on high demand days such as weekend nights, or in particularly high demand areas, will often be required to be awake or working for their entire rostered shift. However, due to the scheduled nature, most ambulance and fire employees can attempt to obtain sufficient rest before or after a particularly busy 14-hour night shift. 10-day shift plans The 6 on, 4 off plan is commonly used in British police forces. The pattern worked consists of 2 early shifts, 2 late shifts, 2 night shifts and 4 days off. Shifts last 9–10 hours, creating some overlap between the 5 teams. 12-day shift plans The 6 on, 6 off plan consists of 3 days and 3 nights of work, then 6 days off. These will alternate between other crews, also known as teams, for a full 24/7 operation. The 12-day pattern repeats in a cycle of twelve weeks, i.e. 84 days. Fortnight shift plans Panama Schedule The Panama plan follows a 2-2-3 pattern throughout a fortnight, in which shift workers generally are allowed every other Friday, Saturday, and Sunday off, with two additional days off during the week, although this may differ depending on organization and industry. The most common form utilizes four shifts, each working twelve hours, with two shifts generally paired together: A working days and B working nights while C and D are off, and vice versa. It is not uncommon for shifts to rotate between days and nights, most often with six months spent on nights and six on days. This shift is sometimes known as the 2-2-3 or "two, two and three". 7-day fortnight plan In the 7-day fortnight plan or 2-3-2 plan, employees work their allotted hours within 7 days rather than 10 in a fortnight, i.e. fourteen days and nights. Therefore, 41 hours per week equate to 82 hours per fortnight, which is worked in seven days, at 11–12 hours per shift. This shift structure is used in the broadcast television industry, as well as many law enforcement agencies, as well as health care fields such as nursing and clinical laboratories in the US. One of the advantages of using this plan is each shift pair, for example A and B, will get time off on weekends alternatively, because the schedule is fixed and does not drift. 10-day fortnight plan A 10-day fortnight plan uses six shifts. Each shift works for seven days straight for their first week. 
On their "off week", they can choose three days to come in, to support other non-shifted departments, fill gaps in coverage, or participate in training. Five and two The five and two or 3-2-2 plan provides 24/7 coverage using 4 crews and 12-hour shifts over a fortnight. Average hours is 42 per week but contains a 60-hour week which can be challenging. 5/4/9s 5/4/9s or Five/Four Nines is a mix of 5-day and 4-day work weeks. Employees work in two-week cycles. Week 1, the employee works 4 days of 9 hours followed by 1 day of 8 hours with 2 days off (i.e. 44 hours). Week 2, the employee works 4 days of 9 hours with 3 days off (i.e. 36 hours). Like 8 hours a day for 5 days a week, this plan works to 80-hours in a two-week pay-period. Since employees work on nine days per cycle, this plan is also referred to as 9/80. The benefit to working an extra hour a day gives you a normal 2-day weekend followed by a long 3-day weekend the next. Typical working hours for this type of shift would be 06:00 to 15:30 (9 hours with 30 minutes lunch) and 06:00 to 14:30 (8 hours with 30 minutes lunch) on the 8-hour work day. Often the employer will alter the starting times (e.g., start at 07:00 or 08:00). A variation, early weekend or 4½-day week, has the employees work every Friday, but only for 4 hours each. Their weekend thus starts with the Friday lunch break. Long-term shift plans DuPont 12-hour rotating plan The DuPont 12-hour rotating plan provides 24/7 coverage using 4 crews and 12-hour shifts while providing a week off. Average hours is 42 per week but contains a 72-hour week which can be challenging. It is used in several manufacturing industries in the US. Companies that have gone to this schedule have noticed a decrease in accidents plus more rest for employees, less call ins, and more coverage when crews are short handed. In all the schedule is designed to improve safety. A particular advantage of this plan is that it can readily be slewed to fit business requirements. For example, if less coverage is required on a Sunday, stand-alone shifts are avoided by scheduling the fourth night and first day of four on that day. This also has the additional benefit of the quick turnaround day between three shift days and nights also falling on a Sunday. To balance pay into 36- and 48-hour weeks, many US companies shift the DuPont Schedule so the seven-day rest period ends on Friday night. To allow 3 full days off following a shift of nights, the day off between three days and three nights is removed. This example allows for a recovery day after 3 nights before a weekend off and for some workers more appropriately balances work/life. Seven-day eight-hour rotating plan The seven-day eight-hour rotating plan provides 24/7 coverage using 8-hour shifts with 5 crews. It consists of a "morning shift" from 07:00 to 15:00, a "swing shift" from 15:00 to 22:30 and a "night shift" from 22:30 to 07:30. Each shift works for five days straight. The 8-hour shifts allow vacations and absences to be covered by splitting shifts or working double shifts. The run of day shifts is 56 hours, but the 8-hour shift provides time for some socializing after work. This plan was once common in the pulp and paper industry in the Western United States but has been largely replaced by an 8 days, 8 swing, 5 nights, 9 off, 8-hour rotation. Graveyard shift Graveyard shift, night shift, or third shift means a shift of work running through the early hours of the morning, especially shifts starting around midnight. 
The origin of this phrase is uncertain. According to Michael Quinion it is an "evocative term for the night shift … when … your skin is clammy, there's sand behind your eyeballs, and the world is creepily silent, like the graveyard." In 2007, the World Health Organization (WHO) announced that working the graveyard shift would be listed as a "probable" cause of cancer. On-call Employees who work on an on-call basis have no regular schedule. They agree as a condition of employment to report to work when they are called, 24 hours a day, 7 days a week. This is particularly common in American railroad employment, especially for train crews. Other groups of workers may be on-call from home for some days and working normal shifts for others, or will work during normal business hours and then remain on-call from home for the rest of that night until the following morning (this working pattern is common for senior doctors, for example). Firefighting schedules In many North American fire departments, firefighters work 24-hour shifts. They are authorized to sleep in the fire station during the time spent on night shift. Most departments split the 168-hour-long week between 3 or 4 work groups (sometimes referred to as 'shifts' or 'platoon groups'), resulting in a 56- or 42-hour workweek, respectively. Some departments reduce the average workweek by scheduling an extra day off for each firefighter in the work group, frequently reducing a 56-hour workweek to a 48-hour workweek by scheduling a 24-hour "Kelly Day" every three weeks. Departments have many options for scheduling firefighters for coverage. One option is 24 on/48 off, where a firefighter will work 24 hours and have 48 hours off, regardless of the day of the week or the holidays. Often they will be scheduled in an A–B–C pattern. Thus, a firefighter will be assigned to A, B or C shift and work whenever that letter is on the calendar. Most departments have found that a 24-hour work shift, with opportunistic sleeping between calls for service, is a valid means of avoiding some of the health and cognitive problems associated with shift work. Three-platoon schedules The most basic three-platoon schedule is a straight rotation of 24-hour shifts among three platoon groups. This rotation limits time off to 48 hours in a row, less than 66 hours off in a row most workers get each weekend. Workers on this schedule only get one short weekend off every three weeks. Twenty-four hours off-duty is also the minimum required to completely recover from a period of acute sleep deficit. Another option is known as a California roll, where some shifts will be close together but allow for several days off. This option gives a 96-hour break every 9th day, which is contiguous to the conventional weekend on two of nine weekends, with a third weekend providing a break that starts on Saturday morning. There is an opportunity to accumulate sleep debt over the three days of work, however this debt should be completely cleared over the four-day break. The nine-day rota that is repeated to fill the calendar. A firefighter will work 24 hours on, 24 off, 24 on, 24 off, 24 on, 96 hours (4 days) off. This rotation reduces the chronic sleep deficit accrued over the first two work days at the expense of a shorter long break. This schedule's long break coincides with a standard weekend exactly once every nine weeks. The four-day break could be retained by working a fourth day in the rotation - XOXOOXOXOOOO. 
A firefighter will work one day, off one, work one, off two, work one, off four days. A number of departments have investigated further work consolidation by allowing for a 48-hour work shift. Careful demand management would be required to avoid acute sleep deficit, however, firefighters should return to work fully recovered from the previous shift. Kenneth B. Ellerbe chief of the District of Columbia Fire and Emergency Medical Services Department has proposed a schedule where firefighters work three-day shifts, followed by three night shifts, followed by three days off. It is likely that such a schedule would impact all four alertness factors associated with shift work, and result in a threat to public safety. It would result in exactly one break coinciding with the standard weekend every nine weeks. DDDNNNOOODDDNNNOOO Four-platoon schedules The most basic four-platoon schedule is a straight rotation of 24-hour work shifts between four work groups or platoons. This schedule works 48 hours per week for three weeks and 24 hours the fourth week, averaging 42 hours per week. Another variation of the 24-hour shift schedule is a 4-platoon system, averaging 42 hours/week. Thus, the schedule is 24 on, 48 off, 24 on, 96 off, on a 4-day rotation. Split day and night shifts In other fire departments, firefighters work shorter shifts, such as a mix of 10-hour day shifts and 14-hour night shifts. The advantage is that firefighters have shorter working hours. The disadvantage is that they may sometimes have only 12 hours to recover between one night shift and the next. The 2005 Canadian Firefighter study comparing two models with 24-hour shifts with three models requiring at least three consecutive night shifts, found that consecutive nights were shown to be more deleterious to performance than a single, long shift. Performance effectiveness 75% after two consecutive nights and lower after three, compared to 78% for a 24-hour shift. If the schedule induces sleep deficit in a subsequent day shift, this performance would be worse. On the 2-2-4 schedule, firefighters work two 10-hour days, two 14-hour nights, and then have four days off. This schedule's long break aligns with the conventional weekend for exactly two weeks out of eight. The majority of Australian fire brigades use this schedule (which is locally referred to as the '10/14' or '4 on, 4 off' roster) The rota is: DDNNOOOO. The 2-2-3 schedule is also known as the Panama Schedule, however, when firefighters work it, the shifts rotate from day to night between every break. Since the firefighters have a two-day break before any nights worked, they do not start the series of nights with an employment-related sleep deficit. They do work three nights in a row, which would result in chronic sleep deficit if alarms are received on each night, however, the third night is always a Sunday night, which is often less busy than other nights of the week. This schedule allows for a long break every other weekend. The rota is: DDOONNN OODDOOO NNOODDD OONNOOO See also Eight-hour day Soviet calendar, which first used 5-day and later 6-day plans Split shift Gantt chart On call shift References Further reading Burr, Douglas Scott (2009) 'The Schedule Book', ''. Miller, James C. (2013) 'Fundamentals of Shiftwork Scheduling, 3rd Edition: Fixing Stupid', Smashwords, . Working time Circadian rhythm Labor rights
Shift plan
Biology
5,550
5,047,118
https://en.wikipedia.org/wiki/Range%20%28particle%20radiation%29
In passing through matter, charged particles ionize and thus lose energy in many steps, until their energy is (almost) zero. The distance to this point is called the range of the particle. The range depends on the type of particle, on its initial energy and on the material through which it passes. For example, if the ionising particle passing through the material is a positive ion like an alpha particle or proton, it will collide with atomic electrons in the material via Coulombic interaction. Since the mass of the proton or alpha particle is much greater than that of the electron, there will be no significant deviation from the radiation's incident path and very little kinetic energy will be lost in each collision. As such, it will take many successive collisions for such heavy ionising radiation to come to a halt within the stopping medium or material. Maximum energy loss will take place in a head-on collision with an electron. Since large angle scattering is rare for positive ions, a range may be well defined for that radiation, depending on its energy and charge, as well as the ionisation energy of the stopping medium. Since the nature of such interactions is statistical, the number of collisions required to bring a radiation particle to rest within the medium will vary slightly with each particle (i.e., some may travel further and undergo fewer collisions than others). Hence, there will be a small variation in the range, known as straggling. The energy loss per unit distance (and hence, the density of ionization), or stopping power also depends on the type and energy of the particle and on the material. Usually, the energy loss per unit distance increases while the particle slows down. The curve describing this fact is called the Bragg curve. Shortly before the end, the energy loss passes through a maximum, the Bragg Peak, and then drops to zero (see the figures in Bragg Peak and in stopping power). This fact is of great practical importance for radiation therapy. The range of alpha particles in ambient air amounts to only several centimeters; this type of radiation can therefore be stopped by a sheet of paper. Although beta particles scatter much more than alpha particles, a range can still be defined; it frequently amounts to several hundred centimeters of air. The mean range can be calculated by integrating the inverse stopping power over energy. Scaling The range of a heavy charged particle is approximately proportional to the mass of the particle and the inverse of the density of the medium, and is a function of the initial velocity of the particle. See also Stopping power (particle radiation) Attenuation length Radiation length Further reading Particle physics Radiation es:Alcance de la radiación
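The integration mentioned above can be written compactly. In the continuous-slowing-down approximation, with S(E) denoting the stopping power, the mean range of a particle with initial kinetic energy E0 is the integral of the inverse stopping power over energy:

```latex
R(E_0) = \int_0^{E_0} \frac{\mathrm{d}E}{S(E)}, \qquad S(E) = -\frac{\mathrm{d}E}{\mathrm{d}x}
```

This is only a restatement of the relation given in the text; tabulated ranges additionally account for straggling and large-angle scattering, which this expression ignores.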
Range (particle radiation)
Physics,Chemistry
541
1,153,898
https://en.wikipedia.org/wiki/KVM%20switch
A KVM switch (with KVM being an abbreviation for "keyboard, video, and mouse") is a hardware device that allows a user to control multiple computers from one or more sets of keyboards, video monitors, and mice. Name Switches to connect multiple computers to one or more peripherals have had multiple names. The earliest name was Keyboard Video Switch (KVS). With the advent of the mouse, the Keyboard, Video and Mouse (KVM) switch became popular. The name was introduced by Remigius Shatas, the founder of Cybex (now Vertiv), a peripheral switch manufacturer, in 1995. Some companies call their switches Keyboard, Video, Mouse and Peripheral (KVMP). Types USB keyboards, mice, and I/O devices are the most common devices connected to a KVM switch. The classes of KVM switches discussed below are based on different types of core technologies, which vary in how the KVM switch handles USB I/O devices—including keyboards, mice, touchscreen displays, etc. (USB-HID = USB Human Interface Device) USB Hub Based KVM Also called an Enumerated KVM switch or USB switch selector, a connected/shared USB device must go through the full initiation process (USB enumeration) every time the KVM is switched to another target system/port. The switching to different ports is similar to the process of physically plugging and unplugging a USB device into the targeted system. Emulated USB KVM Dedicated USB console port(s) are assigned to emulate special sets of USB keyboard or mouse switching control information to each connected/targeted system. Emulated USB provides an instantaneous and reliable switching action that makes keyboard hotkeys and mouse switching possible. However, this class of KVM switch only uses generic emulations and consequently has only been able to support the most basic keyboard and mouse features. There are also USB KVM devices that allow cross-platform operating systems and basic keyboard and mouse sharing. Semi-DDM USB KVM Dedicated USB console port(s) work with all USB-HID devices (including keyboards and mice), but do not maintain the connected devices' presence to all of the targeted systems simultaneously. This class of KVM takes advantage of DDM (Dynamic Device Mapping) technology. DDM USB KVM Dedicated dynamic-device-mapping USB console port(s) work with all USB-HID devices (including keyboards and mice) and maintain the connected devices' special functions and characteristics to each connected/targeted system. This class of KVM switch overcomes the limitations of an Emulated USB class KVM by emulating the true characteristics of the connected devices to all the computers simultaneously. This means that the extra function keys, wheels, buttons, and controls commonly found on modern keyboards and mice can be used. KVM+Dock A KVM switch with a built-in docking station. It combines two devices, a KVM switch and a docking station. Customer expectations for this kind of product have increased due to the rising number of work-from-home setups that need to share user I/O devices between a personal PC and a work laptop as a consequence of COVID pandemic restrictions. Some KVM+Dock models pair a dual DisplayPort 1.4 KVM switch with a Thunderbolt 4 dock to provide full-bandwidth DisplayPort 1.4 sharing for 4K 144 Hz gaming monitors.
Use A KVM switch is a hardware device used in data centers that allows the control of multiple computers from a single keyboard, monitor and mouse (KVM). The switch allows data center personnel to connect to any server in the rack. A common example of home use is to enable the use of the full-size keyboard, mouse and monitor of the home PC with a portable device such as a laptop, tablet PC or PDA, or a computer using a different operating system. KVM switches offer different methods of connecting the computers. Depending on the product, the switch may present native connectors on the device where standard keyboard, monitor and mouse cables can be attached. Another method was to have a single DB25 or similar connector that aggregated the connections at the switch, with three independent keyboard, monitor and mouse cables running to the computers. Subsequently, these were replaced by a special KVM cable which combined the keyboard, video and mouse cables in a single wrapped extension cable. The advantage of the last approach is in the reduction of the number of cables between the KVM switch and connected computers. The disadvantage is the cost of these cables. The method of switching from one computer to another depends on the switch. The original peripheral switches (Rose, circa 1988) used a rotary switch while active electronic switches (Cybex, circa 1990) used push buttons on the KVM device. In both cases, the KVM aligns operation between different computers and the users' keyboard, monitor and mouse (user console). In 1992–1993, Cybex Corporation engineered keyboard hot-key commands. Today, most KVMs are controlled through non-invasive hot-key commands. Hot-key switching is often complemented with an on-screen display system that displays a list of connected computers. KVM switches differ in the number of computers that can be connected. Traditional switching configurations range from 2 to 64 possible computers attached to a single device. Enterprise-grade devices interconnected via daisy-chained and/or cascaded methods can support a total of 512 computers equally accessed by any given user console. Video bandwidth While HDMI, DisplayPort, and DVI switches have been manufactured, VGA is still the most common video connector found with KVM switches for industrial and manufacturing applications, although many switches are now compatible with HDMI and DisplayPort connectors. Analogue switches can be built with varying capacities for video bandwidth, affecting the unit's overall cost and quality. A typical consumer-grade switch provides up to 200 MHz bandwidth, allowing for high-definition resolutions at 60 Hz. For analog video, resolution and refresh rate are the primary factors in determining the amount of bandwidth needed for the signal. The method of converting these factors into bandwidth requirements is a point of ambiguity, in part because it is dependent on the analogue nature and state of the hardware. The same piece of equipment may require more bandwidth as it ages due to increased degradation of the source signal. Most conversion formulas attempt to approximate the amount of bandwidth needed, including a margin of safety. As a rule of thumb, switch circuitry should provide up to three times the bandwidth required by the original signal specification, as this allows most instances of signal loss to be contained outside the range of the signal that is pertinent to picture quality.
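As a rough illustration of how resolution and refresh rate translate into a bandwidth requirement, the sketch below multiplies the active pixel rate by a typical blanking overhead, takes roughly half of that as the highest analogue video frequency, and applies the three-times rule of thumb mentioned above. The blanking overhead and the factor of one half are simplifying assumptions, not figures from the text.

```python
def recommended_switch_bandwidth_mhz(width: int, height: int, refresh_hz: float,
                                     blanking_overhead: float = 1.2,
                                     safety_factor: float = 3.0) -> float:
    """Very rough analogue-video bandwidth estimate for a KVM switch, in MHz."""
    pixel_clock_hz = width * height * refresh_hz * blanking_overhead
    signal_bandwidth_hz = pixel_clock_hz / 2   # highest video frequency is roughly half the pixel clock
    return signal_bandwidth_hz * safety_factor / 1e6

# 1920x1080 at 60 Hz lands near the ~200 MHz figure quoted above for
# consumer-grade switches handling high definition at 60 Hz.
print(round(recommended_switch_bandwidth_mhz(1920, 1080, 60)))  # ~224
```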
As CRT-based displays are dependent on refresh rate to prevent flickering, they generally require more bandwidth than comparable flat panel displays. High-resolution and high-refresh-rate monitors have become standard setups for advanced high-end KVM switches, especially with gaming PCs. As of 2023, the highest resolutions and refresh rates supported by advanced DDM-class DisplayPort 1.4 KVM switches are 4K at 144 Hz, 5K at 120/240 Hz, and 8K at 60 Hz (with DSC). Monitor A monitor uses DDC and EDID, transmitted through specific pins, to identify itself to the system. KVM switches may have different ways of handling these data transmissions: None: the KVM switch lacks the circuitry to handle this data, and the monitor is not "visible" to the system. The system may assume a generic monitor is attached and defaults to safe settings. Higher resolutions and refresh rates may need to be manually unlocked through the video driver as a safety precaution. However, certain applications (especially games) that depend on retrieving DDC/EDID information will not be able to function correctly. Fake: the KVM switch generates its own DDC/EDID information that may or may not be appropriate for the monitor that is attached. Problems may arise if there is an inconsistency between the KVM's specifications and the monitor's, such as not being able to select desired resolutions. Pass-through: the KVM switch attempts to make communication between the monitor and the system transparent. However, it may fail to do so in the following ways: generating Hot Plug Detect (HPD) events for monitor arrival or removal upon switching, or not passing monitor power states - may cause the OS to re-detect the monitor and reset the resolution and refresh rate, or may cause the monitor to enter or exit power-saving mode; not passing or altering MCCS commands - may result in incorrect orientation of the display or improper color calibration. Microsoft guidelines recommend that KVM switches pass unaltered any I2C traffic between the monitor and the PC hosts, and do not generate HPD events upon switching to a different port while maintaining a stable, noise-free signal on inactive ports. Monitors with built-in KVM switch functions More monitors now include a built-in KVM switch so that two computer systems (two upstream system connections) can share the monitor. However, most current monitors with KVM switch functions include only a hub-class KVM switch, so there is no HID emulation and no EDID emulation/feeding to all connected systems. In addition, they are limited to two connected systems, and the built-in switch can only control one monitor (the monitor itself); multi-monitor switching and control through the built-in KVM switch is not supported. Passive and active (electronic) switches KVM switches were originally passive, mechanical devices based on multi-pole switches and some of the cheapest devices on the market still use this technology. Mechanical switches usually have a rotary knob to select between computers. KVMs typically allow sharing of two or four computers, with a practical limit of about twelve machines imposed by limitations on available switch configurations. Modern hardware designs use active electronics rather than physical switch contacts with the potential to control many computers on a common system backbone. One limitation of mechanical KVM switches is that any computer not currently selected by the KVM switch does not 'see' a keyboard or mouse connected to it.
In normal operation this is not a problem, but while the machine is booting up it will attempt to detect its keyboard and mouse and either fail to boot or boot with an unwanted (e.g. mouseless) configuration. Likewise, a failure to detect the monitor may result in the computer falling back to a low resolution such as (typically) 640x480. Thus, mechanical KVM switches may be unsuitable for controlling machines which can reboot automatically (e.g. after a power failure). Another problem encountered with mechanical devices is the failure of one or more switch contacts to make firm, low-resistance electrical connections, often necessitating some wiggling or adjustment of the knob to correct patchy colors on screen or unreliable peripheral response. Gold-plated contacts improve that aspect of switch performance, but add cost to the device. Most active (electronic rather than mechanical) KVM devices provide peripheral emulation, sending signals to the computers that are not currently selected to simulate a keyboard, mouse and monitor being connected. These are used to control machines which may reboot in unattended operation. Peripheral emulation services embedded in the hardware also provide continuous support where computers require constant communication with the peripherals. Some types of active KVM switches do not emit signals that exactly match the physical keyboard, monitor, and mouse, which can result in unwanted behavior of the controlled machines. For example, the user of a multimedia keyboard connected to a KVM switch may find that the keyboard's multimedia keys have no effect on the controlled computers. Software alternatives There are software alternatives to some of the functionality of a hardware KVM switch, such as Multiplicity, Synergy, and Barrier, which do the switching in software and forward input over standard network connections. This has the advantage of reducing the number of wires needed. Screen-edge switching allows the mouse to function across the monitors of two computers. Remote KVM devices There are two types of remote KVM devices that are best described as local remote and KVM over IP. Local remote (including KVM over USB) Local remote KVM device design allows users to control computer equipment some distance away from the user consoles (keyboard, monitor and mouse). They always need a direct cable connection from the computer to the KVM switch to the console and include support for standard category 5 cabling between computers and users interconnected by the switch device. In contrast, USB-powered KVM devices are able to control computer equipment using a combination of USB, keyboard, mouse and monitor cables. KVM over IP (IPKVM) KVM switch over IP devices use a dedicated micro-controller and potentially specialized video capture hardware to capture the video, keyboard, and mouse signals, compress and convert them into packets, and send them over an Ethernet link to a remote console application that unpacks and reconstitutes the dynamic graphical image. A KVM over IP subsystem is typically connected to a system's standby power plane so that it is available during the entire BIOS boot process. These devices allow multiple computers to be controlled locally or globally with the use of an IP connection. There are performance issues related to LAN/WAN hardware, standard protocols and network latency, so user management is commonly referred to as "near real time".
Access to most remote or "KVM" over IP devices today use a web browser, although many of the stand-alone viewer software applications provided by many manufacturers are also reliant on ActiveX or Java. Whitelisting Some KVM chipsets or manufacturers require the "whitelisting" or authority to connect to be implicitly enabled. Without the whitelist addition, the device will not work. This is by design and required to connect non-standard USB devices to KVMs. This is completed by noting the device's ID (usually copied from the Device manager in Windows), or documentation from the manufacturer of the USB device. Generally all HID or consumer grade USB peripherals are exempt, but more exotic devices like tablets, or digitisers or USB toggles require manual addition to the white list table of the KVM. Implementation In comparison to conventional methods of remote administration (for example in-band Virtual Network Computing or Terminal Services), a KVM switch has the advantage that it doesn't depend on a software component running on the remote computer, thus allowing remote interaction with base level BIOS settings and monitoring of the entire booting process before, during, and after the operating system loads. Modern KVM over IP appliances or switches typically use at least 128-bit data encryption securing the KVM configuration over a WAN or LAN (using SSL). KVM over IP devices can be implemented in different ways. With regards to video, PCI KVM over IP cards use a form of screen scraping where the PCI bus master KVM over IP card would access and copy out the screen directly from the graphics memory buffer, and as a result it must know which graphics chip it is working with, and what graphics mode this chip is currently in so that the contents of the buffer can be interpreted correctly as picture data. Newer techniques in OPMA management subsystem cards and other implementations get the video data directly using the DVI bus. Implementations can emulate either PS/2 or USB based keyboards and mice. An embedded VNC server is typically used for the video protocol in IPMI and Intel AMT implementations. Computer sharing devices KVM switches are called KVM sharing devices because two or more computers can share a single set of KVM peripherals. Computer sharing devices function in reverse compared to KVM switches; that is, a single PC can be shared by multiple monitors, keyboards, and mice. A computer sharing device is sometimes referred to as a KVM Splitter or reverse KVM switch. While not as common, this configuration is useful when the operator wants to access a single computer from two or more (usually close) locations - for example, a public kiosk machine that also has a staff maintenance interface behind the counter, or a home office computer that doubles as a home theater PC. See also Console server Intel Active Management Technology Intelligent Platform Management Interface Remote graphics unit Dynamic device mapping Display Control Channel Reverse DDM Synergy (software) References Computer peripherals Input/output Out-of-band management Computer connectors
KVM switch
Technology
3,454
35,585,549
https://en.wikipedia.org/wiki/CheShift
CheShift-2 (pronounced /tʃeʃɪft/) is an application created to compute 13Cα and 13Cβ protein chemical shifts and to validate protein structures. It is based on quantum-mechanical computations of 13Cα and 13Cβ chemical shifts as a function of the torsional angles (φ, ψ, ω and χ1, χ2) of the 20 amino acids. CheShift-2 can return a list of theoretical chemical shift values from a PDB file. It can also display a 3D protein model based on an uploaded PDB file and chemical shift values. The 3D protein model is colored using a five-color code indicating the differences between the theoretical and the observed chemical shift values. The differences between observed and predicted 13Cα and 13Cβ chemical shifts can be used as a sensitive probe with which to detect possible local flaws in protein structures. If both 13Cα and 13Cβ observed chemical shifts are provided, CheShift-2 will attempt to provide a list of alternative χ1 and χ2 side-chain torsional angles that reduce the differences between observed and computed chemical shifts; these values can be used to repair flaws in protein structures. CheShift-2 can be accessed online at http://www.cheshift.com, or via a PyMOL plugin. See also Structure validation Bioinformatics Computational biology Related software WHAT IF software PROCHECK PSVS External links CheShift PyMOL plugin GitHub repository References Bioinformatics software
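The comparison between observed and predicted shifts described above can be illustrated with a generic sketch. This is not the CheShift-2 code or its interface; the residue labels, shift values and the 2 ppm threshold below are hypothetical and chosen only to show the idea of flagging residues whose observed 13Cα shifts deviate strongly from the predicted ones.

```python
def flag_outliers(predicted: dict, observed: dict, threshold_ppm: float = 2.0):
    """Return residues whose |observed - predicted| chemical shift exceeds threshold_ppm."""
    flagged = []
    for residue, pred in predicted.items():
        obs = observed.get(residue)
        if obs is not None and abs(obs - pred) > threshold_ppm:
            flagged.append((residue, round(obs - pred, 2)))
    return flagged

# Made-up 13Ca values in ppm, purely for illustration.
predicted_ca = {"ALA12": 52.8, "VAL13": 61.5, "GLY14": 45.2}
observed_ca  = {"ALA12": 53.0, "VAL13": 58.1, "GLY14": 45.0}
print(flag_outliers(predicted_ca, observed_ca))  # [('VAL13', -3.4)]
```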
CheShift
Biology
317
16,806,577
https://en.wikipedia.org/wiki/Base-richness
In ecology, base-richness is the level of chemical bases in water or soil, such as calcium or magnesium ions. Many organisms prefer base-rich environments. Chemical bases are alkalis, hence base-rich environments are either neutral or alkaline. Because acid-rich environments have few bases, they are dominated by environmental acids (usually organic acids). There is a positive correlation between base-richness and pH, calcium (Ca), magnesium (Mg), and carbonates (HCO3), and a negative correlation with iron (Fe), manganese (Mn), and aluminum (Al). However, the relationship between base-richness and acidity is not a rigid one – changes in the levels of acids (such as dissolved carbon dioxide) may significantly change acidity without affecting base-richness. Base-rich terrestrial environments are characteristic of areas where underlying rocks (below soil) are limestone. Seawater is also base-rich, so maritime and marine environments are themselves base-rich. Base-poor environments are characteristic of areas where underlying rocks (below soil) are sandstone or granite, or where the water is derived directly from rainfall (ombrotrophic). There is no correlation between base-richness and availability of nitrogen (N), phosphorus (P), and potassium (K). Examples of base-rich environments Calcareous grassland Fen Limestone pavement Maquis shrubland Yew woodland Examples of base-poor environments Bog Heath (habitat) Poor fen Moorland Pine woodland Tundra See also Soil Calcicole Calcifuge References Ecology Soil chemistry
Base-richness
Chemistry,Biology
328
38,233,680
https://en.wikipedia.org/wiki/Peatland
A peatland is a type of wetland whose soils consist of organic matter from decaying plants, forming layers of peat. Peatlands arise because of incomplete decomposition of organic matter, usually litter from vegetation, due to water-logging and subsequent anoxia. Peatlands are unusual landforms that derive mostly from biological rather than physical processes, and can take on characteristic shapes and surface patterning. The formation of peatlands is primarily controlled by climatic conditions such as precipitation and temperature, although terrain relief is a major factor as waterlogging occurs more easily on flatter ground and in basins. Peat formation typically initiates as paludification of mineral soil forests, terrestrialisation of lakes, or primary peat formation on bare soils in previously glaciated areas. A peatland that is actively forming peat is called a mire. All types of mires share the common characteristic of being saturated with water at least seasonally, actively forming peat, while having their own ecosystem. Peatlands are the largest natural carbon store on land. Covering around 3 million km2 globally, they sequester 0.37 gigatons (Gt) of carbon dioxide (CO2) a year. Peat soils store over 600 Gt of carbon, more than the carbon stored in all other vegetation types, including forests. This substantial carbon storage represents about 30% of the world's soil carbon, underscoring their critical importance in the global carbon cycle. In their natural state, peatlands provide a range of ecosystem services, including minimising flood risk and erosion, purifying water and regulating climate. Peatlands are under threat from commercial peat harvesting, drainage and conversion for agriculture (notably palm oil in the tropics) and fires, which are predicted to become more frequent with climate change. The destruction of peatlands results in release of stored greenhouse gases into the atmosphere, further exacerbating climate change. Types For botanists and ecologists, the term peatland is a general term for any terrain dominated by peat to a depth of at least , even if it has been completely drained (i.e., a peatland can be dry). A peatland that is still capable of forming new peat is called a mire, while drained and converted peatlands might still have a peat layer but are not considered mires as the formation of new peat has ceased. There are two types of mire: bog and fen. A bog is a mire that, due to its raised location relative to the surrounding landscape, obtains all its water solely from precipitation (ombrotrophic). A fen is located on a slope, flat, or in a depression and gets most of its water from the surrounding mineral soil or from groundwater (minerotrophic). Thus, while a bog is always acidic and nutrient-poor, a fen may be slightly acidic, neutral, or alkaline, and either nutrient-poor or nutrient-rich. All mires are initially fens when the peat starts to form, and may turn into bogs once the height of the peat layer reaches above the surrounding land. A quagmire is a floating (quaking) mire, bog, or any peatland in a stage of hydrosere or hydrarch (hydroseral) succession, in which pond-filling vegetation yields underfoot (floating mats). Ombrotrophic types of quagmire may be called quaking bog (quivering bog). Minerotrophic types can be named with the term quagfen. Some swamps can also be peatlands (e.g.: peat swamp forest), while marshes are generally not considered to be peatlands.
Swamps are characterized by their forest canopy or the presence of other tall and dense vegetation like papyrus. Like fens, swamps are typically of higher pH level and nutrient availability than bogs. Some bogs and fens can support limited shrub or tree growth on hummocks. A marsh is a type of wetland within which vegetation is rooted in mineral soil. Global distribution Peatlands are found around the globe, although they are at their greatest extent at high latitudes in the Northern Hemisphere. Peatlands are estimated to cover around 3% of the globe's surface, although estimating the extent of their cover worldwide is difficult due to the varying accuracy and methodologies of land surveys from many countries. Mires occur wherever conditions are right for peat accumulation: largely where organic matter is constantly waterlogged. Hence the distribution of mires is dependent on topography, climate, parent material, biota and time. The type of mire—bog, fen, marsh or swamp—depends also on each of these factors. The largest accumulation of mires constitutes around 64% of global peatlands and is found in the temperate, boreal and subarctic zones of the Northern Hemisphere. Mires are usually shallow in polar regions because of the slow rate of accumulation of dead organic matter, and often contain permafrost and palsas. Very large swathes of Canada, northern Europe and northern Russia are covered by boreal mires. In temperate zones mires are typically more scattered due to historical drainage and peat extraction, but can cover large areas. One example is blanket bog, where precipitation is very high, i.e., in maritime climates inland near the coasts of the north-east and south Pacific, and the north-west and north-east Atlantic. In the sub-tropics, mires are rare and restricted to the wettest areas. Mires can be extensive in the tropics, typically underlying tropical rainforest (for example, in Kalimantan, the Congo Basin and Amazon basin). Tropical peat formation is known to occur in coastal mangroves as well as in areas of high altitude. Tropical mires largely form where high precipitation is combined with poor conditions for drainage. Tropical mires account for around 11% of peatlands globally (more than half of which can be found in Southeast Asia), and are most commonly found at low altitudes, although they can also be found in mountainous regions, for example in South America, Africa and Papua New Guinea. Indonesia, particularly on the islands of Sumatra, Kalimantan and Papua, has one of the largest peatlands in the world, with an area of about 24 million hectares. These peatlands play an important role in global carbon storage and have very high biodiversity. However, peatlands in Indonesia also face major threats from deforestation and forest fires. In the early 21st century, the world's largest tropical mire was found in the Central Congo Basin, covering 145,500 km2 and storing up to 10^13 kg of carbon. The total area of mires has declined globally due to drainage for agriculture, forestry and peat harvesting. For example, more than 50% of the original European mire area, which exceeded 300,000 km2, has been lost. Some of the largest losses have been in Russia, Finland, the Netherlands, the United Kingdom, Poland and Belarus. A catalog of the peat research collection at the University of Minnesota Duluth provides references to research on worldwide peat and peatlands. Biochemical processes Peatlands have unusual chemistry that influences, among other things, their biota and water outflow.
Peat has very high cation-exchange capacity due to its high organic matter content: cations such as Ca2+ are preferentially adsorbed onto the peat in exchange for H+ ions. Water passing through peat declines in nutrients and pH. Therefore, mires are typically nutrient-poor and acidic unless the inflow of groundwater (bringing in supplementary cations) is high. Generally, whenever the inputs of carbon into the soil from dead organic matter exceed the carbon outputs via organic matter decomposition, peat is formed. This occurs due to the anoxic state of water-logged peat, which slows down decomposition. Peat-forming vegetation is typically also recalcitrant (poorly decomposing) due to high lignin and low nutrient content. Topographically, accumulating peat elevates the ground surface above the original topography. Mires can reach considerable heights above the underlying mineral soil or bedrock: peat depths of above 10 m have been commonly recorded in temperate regions (many temperate and most boreal mires were removed by ice sheets in the last Ice Age), and above 25 m in tropical regions.[7] When the absolute decay rate of peat in the catotelm (the lower, water-saturated zone of the peat layer) matches the rate of input of new peat into the catotelm, the mire will stop growing in height.[8] Carbon storage and methanogenesis Despite accounting for just 3% of Earth's land surfaces, peatlands are collectively a major carbon store containing between 500 and 700 billion tonnes of carbon. Carbon stored within peatlands equates to over half the amount of carbon found in the atmosphere. Peatlands interact with the atmosphere primarily through the exchange of carbon dioxide, methane and nitrous oxide, and can be damaged by excess nitrogen from agriculture or rainwater. The sequestration of carbon dioxide takes place at the surface via the process of photosynthesis, while losses of carbon dioxide occur through living plants via autotrophic respiration and from the litter and peat via heterotrophic respiration. In their natural state, mires are a small atmospheric carbon dioxide sink through the photosynthesis of peat vegetation, which outweighs their release of greenhouse gases. On the other hand, most mires are generally net emitters of methane and nitrous oxide. Due to the continued sequestration over millennia, and because of the longer atmospheric lifespan of the molecules compared with methane and nitrous oxide, peatlands have had a net cooling effect on the atmosphere. The water table position of a peatland is the main control of its carbon release to the atmosphere. When the water table rises after a rainstorm, the peat and its microbes are submerged under water inhibiting access to oxygen, reducing release via respiration. Carbon dioxide release increases when the water table falls lower, such as during a drought, as this increases the availability of oxygen to the aerobic microbes thus accelerating peat decomposition. Levels of methane emissions also vary with the water table position and temperature. A water table near the peat surface gives the opportunity for anaerobic microorganisms to flourish. Methanogens are strictly anaerobic organisms and produce methane from organic matter in anoxic conditions below the water table level, while some of that methane is oxidised by methanotrophs above the water table level. Therefore, changes in water table level influence the size of these methane production and consumption zones. 
Increased soil temperatures also contribute to increased seasonal methane flux. A study in Alaska found that methane may vary by as much as 300% seasonally with wetter and warmer soil conditions due to climate change. Peatlands are important for studying past climate because they are sensitive to changes in the environment and can reveal levels of isotopes, pollutants, macrofossils, metals from the atmosphere and pollen. For example, carbon-14 dating can reveal the age of the peat. The dredging and destruction of a peatland will release the carbon dioxide that could reveal irreplaceable information about the past climatic conditions. Many kinds of microorganisms inhabit peatlands, due to the regular supply of water and abundance of peat forming vegetation. These microorganisms include but are not limited to methanogens, algae, bacteria, zoobenthos, of which Sphagnum species are most abundant. Humic substances Peat contains substantial organic matter, where humic acid dominates. Humic materials can store substantial amounts of water, making them an essential component in the peat environment, contributing to increased carbon storage due to the resulting anaerobic condition. If the peatland is dried from long-term cultivation and agricultural use, it will lower the water table, and the increased aeration will release carbon. Upon extreme drying, the ecosystem can undergo a state shift, turning the mire into a barren land with lower biodiversity and richness. Humic acid formation occurs during the biogeochemical degradation of vegetation debris and animal residue. The loads of organic matter in the form of humic acid is a source of precursors of coal. Prematurely exposing the organic matter to the atmosphere promotes the conversion of organics to carbon dioxide to be released in the atmosphere. Use by humans Records of past human behaviour and environments can be contained within peatlands. These may take the form of human artefacts, or palaeoecological and geochemical records. Peatlands are used by humans in modern times for a range of purposes, the most dominant being agriculture and forestry, which accounts for around a quarter of global peatland area. This involves cutting drainage ditches to lower the water table with the intended purpose of enhancing the productivity of forest cover or for use as pasture or cropland. Agricultural uses for mires include the use of natural vegetation for hay crop or grazing, or the cultivation of crops on a modified surface. In addition, the commercial extraction of peat for energy production is widely practiced in Northern European countries, such as Russia, Sweden, Finland, Ireland and the Baltic states. Tropical peatlands comprise 0.25% of Earth's terrestrial land surface but store 3% of all soil and forest carbon stocks. The use of this land by humans, including draining and harvesting of tropical peat forests, results in the emission of large amounts of carbon dioxide into the atmosphere. In addition, fires occurring on peatland dried by the draining of peat bogs release even more carbon dioxide. The economic value of a tropical peatland was once derived from raw materials, such as wood, bark, resin and latex, the extraction of which did not contribute to large carbon emissions. In Southeast Asia, peatlands are drained and cleared for human use for a variety of reasons, including the production of palm oil and timber for export in primarily developing nations. 
This releases stored carbon dioxide and prevents the system from sequestering carbon again. Tropical peatlands The global distribution of tropical peatlands is concentrated in Southeast Asia, where agricultural use of peatlands has increased in recent decades. Large areas of tropical peatland have been cleared and drained for the production of food and cash crops such as palm oil. Large-scale drainage of these plantations often results in subsidence, flooding, fire and deterioration of soil quality. Small-scale encroachment, on the other hand, is linked to poverty and is so widespread that it also negatively impacts these peatlands. The biotic and abiotic factors controlling Southeast Asian peatlands are interdependent. Their soil, hydrology and morphology are created by the present vegetation through the accumulation of its own organic matter, building a favorable environment for this specific vegetation. This system is therefore vulnerable to changes in hydrology or vegetation cover. These peatlands are mostly located in developing regions with impoverished and rapidly growing populations. These lands have become targets for commercial logging, paper pulp production and conversion to plantations through clear-cutting, drainage and burning. Drainage of tropical peatlands alters the hydrology and increases their susceptibility to fire and soil erosion, as a consequence of changes in physical and chemical compositions. The change in soil strongly affects the sensitive vegetation, and forest die-off is common. The short-term effect is a decrease in biodiversity, but the long-term effect, since these encroachments are hard to reverse, is a loss of habitat. Poor knowledge about peatlands' sensitive hydrology and lack of nutrients often lead to failing plantations, resulting in increasing pressure on remaining peatlands. Biology and peat characteristics Tropical peatland vegetation varies with climate and location. Three broad types can be distinguished: mangrove woodlands present in the littoral zones and deltas of salty water, followed inland by swamp forests. These forests occur on the margin of peatlands and have a palm-rich flora, with trees 70 m tall and 8 m in girth, accompanied by ferns and epiphytes. The third, padang, from the Malay and Indonesian word for forest, consists of shrubs and tall thin trees and appears in the center of large peatlands. The diversity of woody species, like trees and shrubs, is far greater in tropical peatlands than in peatlands of other types. Peat in the tropics is therefore dominated by woody material from trunks of trees and shrubs and contains little to none of the sphagnum moss that dominates in boreal peatlands. It is only partly decomposed, and the surface consists of a thick layer of leaf litter. Forestry in peatlands leads to drainage and rapid carbon losses since it decreases inputs of organic matter and accelerates decomposition. In contrast to temperate wetlands, tropical peatlands are home to several species of fish. Many new, often endemic, species have been discovered, but many of them are considered threatened. Greenhouse gases and fires The tropical peatlands in Southeast Asia cover only 0.2% of Earth's land area, but their CO2 emissions are estimated to be 2 Gt per year, equal to 7% of global fossil fuel emissions. These emissions increase with the drainage and burning of peatlands, and a severe fire can release up to 4,000 t of CO2/ha.
Burning events in tropical peatlands are becoming more frequent due to large-scale drainage and land clearance and in the past ten years, more than 2 million hectares was burnt in Southeast Asia alone. These fires last typically for 1–3 months and release large amounts of CO2. Indonesia is one of the countries suffering from peatland fires, especially during years with ENSO-related drought, an increasing problem since 1982 as a result of developing land use and agriculture. During the El Niño-event in 1997–1998 more than 24,400 km2 of peatland was lost to fires in Indonesia alone from which 10,000 km2 was burnt in Kalimantan and Sumatra. The output of CO2 was estimated to 0.81–2.57 Gt, equal to 13–40% of that year's global output from fossil fuel burning. Indonesia is now considered the third-biggest contributor to global CO2 emissions, caused primarily by these fires. The 2015 El Niño event further exacerbated the condition of these peatlands, as wildfires burned approximately 3 million hectares of forests and peatlands on the east coast of Sumatra and in Central Kalimantan, emitting 11.3 teragrams of CO2 per day during the months of September and October that year. With a warming climate, these burnings are expected to increase in intensity and number. This is a result of a dry climate together with an extensive rice farming project, called the Mega Rice Project, started in the 1990s, which converted 1 Mha of peatlands to rice paddies. Forest and land was cleared by burning and 4000 km of channels drained the area. Drought and acidification of the lands led to bad harvest and the project was abandoned in 1999. Similar projects in China have led to immense loss of tropical marshes and fens due to rice production. Drainage, which also increases the risk of burning, can cause additional emissions of CO2 by 30–100 t/ha/year if the water table is lowered by only 1 m. The draining of peatlands is likely the most important and long-lasting threat to peatlands globally, but is especially prevalent in the tropics. Peatlands release the greenhouse gas methane which has strong global warming potential. However, subtropical wetlands have shown high CO2 binding per mol of released methane, which is a function that counteracts global warming. Tropical peatlands are suggested to contain about 100 Gt carbon, corresponding to more than 50% of the carbon present as CO2 in the atmosphere. Accumulation rates of carbon during the last millennium were close to 40 g C/m2/yr. Northern peatlands Northern peatlands are associated with boreal and subarctic climates. Northern peatlands were mostly built up during the Holocene after the retreat of Pleistocene glaciers, but in contrast tropical peatlands are much older. Total northern peat carbon stocks are estimated to be 1055 Gt of carbon. Of all northern circumpolar countries, Russia has the largest area of peatlands, and contains the largest peatland in the world, The Great Vasyugan Mire. Nakaikemi Wetland in southwest Honshu, Japan is more than 50,000 years old and has a depth of 45 m. The Philippi Peatland in Greece has probably one of the deepest peat layers with a depth of 190 m. Impacts on global climate According to the IPCC Sixth Assessment Report, the conservation and restoration of wetlands and peatlands has large economic potential to mitigate greenhouse gas emissions, providing benefits for adaptation, mitigation and biodiversity. 
Wetlands provide an environment where organic carbon is stored in living plants, dead plants and peat, as well as converted to carbon dioxide and methane. Three main factors give wetlands the ability to sequester and store carbon: high biological productivity, high water table and low decomposition rates. Suitable meteorological and hydrological conditions are necessary to provide an abundant water source for the wetland. Fully water-saturated wetland soils allow anaerobic conditions to manifest, storing carbon but releasing methane. Wetlands make up about 5-8% of Earth's terrestrial land surface but contain about 20-30% of the planet's 2500 Gt soil carbon stores. Peatlands contain the highest amounts of soil organic carbon of all wetland types. Wetlands can become sources of carbon, rather than sinks, as the decomposition occurring within the ecosystem emits methane. Natural peatlands do not always have a measurable cooling effect on the climate in a short time span as the cooling effects of sequestering carbon are offset by the emission of methane, which is a strong greenhouse gas. However, given the short "lifetime" of methane (12 years), it is often said that methane emissions are unimportant within 300 years compared to carbon sequestration in wetlands. Within that time frame or less, most wetlands become both net carbon and radiative sinks. Hence, peatlands do result in cooling of the Earth's climate over a longer time period as methane is oxidised quickly and removed from the atmosphere whereas atmospheric carbon dioxide is continuously absorbed. Throughout the Holocene (the past 12,000 years), peatlands have been persistent terrestrial carbon sinks and have had a net cooling effect, sequestering 5.6 to 38 grams of carbon per square metre per year. On average, it has been estimated that today northern peatlands sequester 20 to 30 grams of carbon per square metre per year. Peatlands insulate the permafrost in subarctic regions, thus delaying thawing during summer, as well as inducing the formation of permafrost. As the global climate continues to warm, wetlands could become major carbon sources as higher temperatures cause higher carbon dioxide emissions. Compared with untilled cropland, wetlands can sequester around two times the carbon. Carbon sequestration can occur in constructed wetlands as well as natural ones. Estimates of greenhouse gas fluxes from wetlands indicate that natural wetlands have lower fluxes, but man-made wetlands have a greater carbon sequestration capacity. The carbon sequestration abilities of wetlands can be improved through restoration and protection strategies, but it takes several decades for these restored ecosystems to become comparable in carbon storage to peatlands and other forms of natural wetlands. Studies highlight the critical role of peatlands in biodiversity conservation and hydrological stability. These ecosystems are unique habitats for diverse species, including specific insects and amphibians, and act as natural water reservoirs, releasing water during dry periods to sustain nearby freshwater ecosystems and agriculture. Drainage for agriculture and forestry The exchange of carbon between the peatlands and the atmosphere has been of current concern globally in the field of ecology and biogeochemical studies. The drainage of peatlands for agriculture and forestry has resulted in the emission of extensive greenhouse gases into the atmosphere, most notably carbon dioxide and methane. 
By allowing oxygen to enter the peat column within a mire, drainage disrupts the balance between peat accumulation and decomposition, and the subsequent oxidative degradation results in the release of carbon into the atmosphere. As such, drainage of mires for agriculture transforms them from net carbon sinks to net carbon emitters. Although the emission of methane from mires has been observed to decrease following drainage, the overall greenhouse gas emissions from drained peatlands are often greater than from intact ones, as rates of peat accumulation are low. Peatland carbon has been described as "irrecoverable", meaning that, if lost due to drainage, it could not be recovered within time scales relevant to climate mitigation. When undertaken in such a way that preserves the hydrological state of a mire, the anthropogenic use of mires' resources can avoid significant greenhouse gas emissions. However, continued drainage will result in increased release of carbon, contributing to global warming. As of 2016, it was estimated that drained peatlands account for around 10% of all greenhouse gas emissions from agriculture and forestry. Palm oil plantations Palm oil has increasingly become one of the world's largest crops. In comparison to alternatives, palm oil is considered to be among the most efficient sources of vegetable oil and biofuel, requiring only 0.26 hectares of land to produce 1 ton of oil. Palm oil has therefore become a popular cash crop in many low-income countries and has provided economic opportunities for communities. With palm oil as a leading export in countries such as Indonesia and Malaysia, many smallholders have found economic success in palm oil plantations. However, the land selected for plantations is typically a substantial carbon store that supports biodiverse ecosystems. Palm oil plantations have replaced much of the forested peatlands in Southeast Asia. Estimates now state that 12.9 Mha, or about 47%, of peatlands in Southeast Asia were deforested by 2006. In their natural state, peatlands are waterlogged, with high water tables that make the soil poorly suited to cultivation. To create viable soil for plantation, the mires in tropical regions of Indonesia and Malaysia are drained and cleared. The peatland forests harvested for palm oil production serve as above- and below-ground carbon stores, containing at least 42,069 million metric tonnes (Mt) of soil carbon. Exploitation of this land raises many environmental concerns, namely increased greenhouse gas emissions, risk of fires and a decrease in biodiversity. Greenhouse gas emissions for palm oil planted on peatlands are estimated at between the equivalent of 12.4 t CO2/ha (best case) and 76.6 t CO2/ha (worst case). Tropical peatland converted to palm oil plantation can remain a net source of carbon to the atmosphere even after 12 years. In their natural state, peatlands are resistant to fire. Drainage of peatlands for palm oil plantations creates a dry layer of flammable peat. As peat is carbon-dense, fires occurring in compromised peatlands release extreme amounts of both carbon dioxide and toxic smoke into the air. These fires add to greenhouse gas emissions while also causing thousands of deaths every year. Decreased biodiversity due to deforestation and drainage makes these ecosystems more vulnerable and less resilient to change. Homogeneous ecosystems are at increased risk from extreme climate conditions and are less likely to recover from fires. Fires Some peatlands are being dried out by climate change.
Drainage of peatlands due to climatic factors may also increase the risk of fires, presenting a further risk of carbon dioxide and methane being released into the atmosphere. Due to their naturally high moisture content, pristine mires have a generally low risk of fire ignition. Drying of this waterlogged material leaves the carbon-dense vegetation vulnerable to fire. In addition, due to the oxygen-deficient nature of the vegetation, peat fires can smolder beneath the surface, causing incomplete combustion of the organic matter and resulting in extreme emissions events. In recent years, the occurrence of wildfires in peatlands has increased significantly worldwide, particularly in tropical regions. This can be attributed to a combination of drier weather and changes in land use that involve the drainage of water from the landscape. The resulting loss of biomass through combustion has led to significant emissions of greenhouse gases in both tropical and boreal/temperate peatlands. Fire events are predicted to become more frequent with the warming and drying of the global climate. Management and rehabilitation The United Nations Convention on Biological Diversity highlights peatlands as key ecosystems to be conserved and protected. The convention requires governments at all levels to present action plans for the conservation and management of wetland environments. Wetlands are also protected under the 1971 Ramsar Convention. Often, restoration is done by blocking drainage channels in the peatland and allowing natural vegetation to recover. Rehabilitation projects undertaken in North America and Europe usually focus on the rewetting of peatlands and revegetation of native species. This acts to mitigate carbon release in the short term before the new growth of vegetation provides a new source of organic litter to fuel the peat formation in the long term. UNEP is supporting peatland restoration in Indonesia. Peat extraction has been forbidden in Chile since April 2024. Global Peatlands Initiative References External links Peatlands Environmental terminology Fluvial landforms Freshwater ecology Pedology Types of soil Wetlands
Peatland
Environmental_science
5,941
3,113,332
https://en.wikipedia.org/wiki/Upsilon2%20Eridani
Upsilon2 Eridani (υ² Eridani, abbreviated Upsilon2 Eri, υ2 Eri), officially named Theemin, is a star in the constellation of Eridanus. It is visible to the naked eye with an apparent visual magnitude of 3.8. Based upon parallax measurements obtained during the Hipparcos mission, it is approximately 66 parsecs (214 light-years) from the Sun. It is an evolved red clump giant star with a stellar classification of G8+ III. The measured angular diameter is 2.21 mas. At the star's distance, this yields a physical size of around 16 times the radius of the Sun. It radiates 138 times the solar luminosity from its outer atmosphere at an effective temperature of 5074 K. Nomenclature υ2 Eridani (Latinised to Upsilon2 Eridani) is the star's Bayer designation. It bore the traditional name Theemin (also written as Theemim and Beemin). In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalogue and standardize proper names for stars. The WGSN approved the name Theemin for this star on February 1, 2017, and it is now included in the List of IAU-approved Star Names. In the Almagest, Ptolemy called it hē kampē, "the bend in the river"; Arab writers corrupted this to bhmn, which later became beemin and beemun in the West. Subsequently, its etymology was incorrectly derived from Hebrew תאומים (te'omim), meaning "twins," producing Theemin. In Chinese, the asterism meaning Celestial Orchard consists of Upsilon2 Eridani, Chi Eridani, Phi Eridani, Kappa Eridani, HD 16754, HD 23319, Theta Eridani, HD 24072, HD 24160, Upsilon4 Eridani, Upsilon3 Eridani and Upsilon1 Eridani. Consequently, Upsilon2 Eridani itself bears a corresponding Chinese name as a member of this asterism. References G-type giants Horizontal-branch stars Eridanus (constellation) Theemin Eridani, Upsilon2 CD-30 01901 Eridani, 52 029291 021393 01464
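The statement that a 2.21 mas angular diameter at about 66 pc corresponds to roughly 16 solar radii can be checked with a short calculation. The sketch below is a back-of-the-envelope verification using standard astronomical constants; it is not from the source article.

```python
# Convert an angular diameter (milliarcseconds) and a distance (parsecs)
# into a physical radius expressed in solar radii.
import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)   # milliarcseconds to radians
PC_TO_M = 3.0857e16                          # metres per parsec
R_SUN_M = 6.957e8                            # solar radius in metres

def radius_in_solar_units(ang_diam_mas: float, distance_pc: float) -> float:
    """Physical radius (in solar radii) from angular diameter and distance."""
    diameter_m = ang_diam_mas * MAS_TO_RAD * distance_pc * PC_TO_M
    return (diameter_m / 2) / R_SUN_M

print(round(radius_in_solar_units(2.21, 66), 1))  # ~15.7, i.e. about 16 solar radii
```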
Upsilon2 Eridani
Astronomy
518
24,663,324
https://en.wikipedia.org/wiki/PICMG%202.2
PICMG 2.2 is a specification by PICMG that standardizes pin assignments for PICMG 2.0 CompactPCI to include VME64 Extensions. Status: Adopted 9/9/1998. Current revision: 1.0. References Open standards PICMG standards
PICMG 2.2
Technology
54
36,659,903
https://en.wikipedia.org/wiki/Statistical%20area%20%28United%20States%29
The United States federal government defines and delineates the nation's metropolitan areas for statistical purposes, using a set of standard statistical area definitions. As of 2023, the U.S. Office of Management and Budget (OMB) defined and delineated 393 metropolitan statistical areas (MSAs) and 542 micropolitan statistical areas (μSAs) in the United States and Puerto Rico. Many of these 935 MSAs and μSAs are, in turn, components of larger combined statistical areas (CSAs) consisting of adjacent MSAs and μSAs that are linked by commuting ties; 582 metropolitan and micropolitan areas are components of the 184 defined CSAs. Metropolitan and micropolitan statistical areas are defined as consisting of one or more adjacent counties or county equivalents with at least one urban core area meeting relevant population thresholds, plus adjacent territory that has a high degree of social and economic integration with the core, as measured by commuting ties. A metropolitan statistical area has at least one core with a population of at least 50,000. In a micropolitan statistical area, the largest core has a population of at least 10,000 but less than 50,000. Maps Types and distribution The sortable table below shows the number of combined, metropolitan and micropolitan statistical areas in each of the U.S. states, the District of Columbia, and Puerto Rico as of 2023. For each jurisdiction, it lists: Total number of delineated areas wholly or partially in the named jurisdiction The number of CSAs wholly or partially in the jurisdiction The number of core-based statistical areas (i.e., MSAs and μSAs) wholly or partially in the jurisdiction The number of MSAs wholly or partially in the jurisdiction The number of μSAs wholly or partially in the jurisdiction See also United States of America Outline of the United States Index of United States-related articles Demographics of the United States United States Census Bureau List of U.S. states and territories by population List of metropolitan areas of the United States List of United States cities by population List of U.S. cities by adjusted per capita personal income List of United States counties and county-equivalents United States Office of Management and Budget Statistical area (United States) Combined statistical area Core-based statistical area (list) Metropolitan statistical area Micropolitan statistical area Notes References External links United States Government United States Census Bureau 2010 United States Census USCB population estimates United States Office of Management and Budget Statistical Area Of The United States
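The population thresholds described above (at least 50,000 for a metropolitan core; 10,000 to just under 50,000 for a micropolitan core) can be expressed as a simple classification rule. The sketch below uses made-up place names and populations purely for illustration.

```python
# Classify a core-based statistical area by the population of its largest urban core,
# following the thresholds described in the text.

def classify_cbsa(largest_core_population: int) -> str:
    if largest_core_population >= 50_000:
        return "metropolitan statistical area"
    if largest_core_population >= 10_000:
        return "micropolitan statistical area"
    return "not a core-based statistical area"

for name, pop in [("Example City", 120_000), ("Example Town", 18_500), ("Example Village", 4_000)]:
    print(f"{name}: {classify_cbsa(pop)}")
```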
Statistical area (United States)
Mathematics
508
24,987,602
https://en.wikipedia.org/wiki/Waterborne%20Disease%20and%20Outbreak%20Reporting%20System
The Waterborne Disease and Outbreak Surveillance System (WBDOSS) is a national surveillance system maintained by the U.S. Centers for Disease Control and Prevention (CDC). The WBDOSS receives data about waterborne disease outbreaks and single cases of waterborne diseases of public health importance (for example, primary amebic meningoencephalitis (PAM)) in the United States and then disseminates information about these diseases, outbreaks, and their causes. WBDOSS was initiated in 1971 by CDC, the Council of State and Territorial Epidemiologists (CSTE), and the Environmental Protection Agency (EPA). Data are reported by public health departments in individual states, territories, and the Freely Associated States (composed of the Republic of the Marshall Islands, the Federated States of Micronesia and the Republic of Palau; formerly parts of the U.S.-administered Trust Territories of the Pacific Islands). Although initially designed to collect data about drinking water outbreaks in the United States, WBDOSS now includes outbreaks associated with recreational water, as well as outbreaks associated with water that is not intended for drinking (non-recreational) and water for which the intended use is unknown. Definition of a Waterborne Disease Outbreak Waterborne disease outbreaks may be associated with recreational water, water intended for drinking, water not intended for drinking (non-recreational water, for example, from cooling towers or ornamental fountains) and water of unknown intent. In order for a waterborne disease outbreak to be included in WBDOSS, there must be an epidemiologic link between two or more persons that includes a location of water exposure, a clearly defined time period for the water exposure, and one or more waterborne illnesses caused by pathogens such as bacteria, parasites and viruses, or by chemicals/toxins. Common routes of exposure to waterborne pathogens include swallowing contaminated water, inhaling water droplets or airborne chemicals from the water, and direct physical contact with contaminated water. Epidemiologic evidence must implicate water or volatile compounds from the water that have entered the air as the probable source of the illness. WBDOSS outbreaks are further evaluated and classified based on the strength of evidence in the outbreak report that implicates water as the source of the outbreak. Waterborne disease outbreaks that have both strong epidemiologic data and comprehensive water-quality testing data are assigned a higher class than outbreaks with weak epidemiologic data and little or no water-quality testing data. Data Sources for WBDOSS Public health departments investigate waterborne disease outbreaks in states, territories, and Freely Associated States and are essential contributors to the WBDOSS. The primary reporting tool for WBDOSS prior to 2009 was the CDC 52.12 waterborne disease outbreak reporting form. Beginning in 2009, this form was replaced by the electronic National Outbreak Reporting System (NORS). Secondary data sources include case reports of water-associated cases of PAM caused by Naegleria fowleri infections, case reports for chemical/toxin poisoning and wound infections (reported sporadically), data about recreational water-associated Vibrio cases from the Cholera and Other Vibrio Surveillance System, and case reports for pool chemical-related health events not associated with recreational water (reported sporadically). Data Use CDC has published WBDOSS surveillance summaries on an annual or biennial basis since 1971.
Summary statistics and descriptions of waterborne disease outbreaks were published in CDC reports until 1984 and have been published in the Morbidity and Mortality Weekly Report (MMWR) since 1985. Public health researchers and policy makers use the data to understand and reduce waterborne disease and outbreaks. WBDOSS data are available to support EPA efforts to improve drinking water quality and to provide direction for CDC’s recreational water activities, such as the Healthy Swimming program. See also National Outbreak Reporting System (NORS) References External links Waterborne Disease and Outbreak Surveillance System (WBDOSS) Healthy Swimming at the U.S. Centers for Disease Control and Prevention (CDC) about swimming and recreational water-related information Healthy Water at the U.S. Centers for Disease Control and Prevention (CDC) Council of State and Territorial Epidemiologists OutbreakNet Team at the United States Centers for Disease Control and Prevention. Public health in the United States Centers for Disease Control and Prevention Water treatment
Waterborne Disease and Outbreak Reporting System
Chemistry,Engineering,Environmental_science
894
39,263,516
https://en.wikipedia.org/wiki/Nora%20Rubashova
Nora Rubashova (12 March 1909 – 12 May 1987) was a Belarusian Catholic nun who converted from Judaism to the Russian Greek Catholic Church. Her monastic name was Catherine. Biography Nora Rubashova was born in Minsk, Belarus, into a wealthy Orthodox Jewish family. In April 1926, under the influence of her high school teacher Tamara Sapozhnikova, she converted to Catholicism of the Byzantine Rite and took vows as a nun of the community of Sisters founded by Mother Catherine Abrikosova. Rubashova adopted the monastic name Catherine after Catherine of Siena. According to Fr. Georgii Friedman, Rubashova's parents were heartbroken by her conversion and entrance into the Dominican Order. Her father, though, eventually came to terms with the fact. He used to joke whenever his daughter visited along with her fellow nuns, "Here come my in-laws!" She studied at the Faculty of History and Philology of Moscow State University. Rubashova was a parishioner of former Russian Symbolist poet Fr. Sergei Solovyov, who offered the Divine Liturgy in the Old Church Slavonic liturgical language at the side altar dedicated to Our Lady of Ostrabrama inside what is now the Immaculate Conception Cathedral in Moscow. Rubashova later recalled, "Father Sergey said [Divine Liturgy] each day at this altar, and on the eve of major feasts he observed the All Night Vigil. Rarely would one ever see so beautiful a Liturgy. The Church was large, tall, and unheated. Father Sergey's lips became bloodied from touching them every day to the freezing cold metal of the chalice." Following the Great Turn, a government crackdown on religious practice made life in the parish ever more difficult. Nora Rubashova later recalled, "When [the Liturgy] in Slavonic was no longer permitted in the Church, Father Sergei continued to say [the Liturgy] in his friends' apartments. He also gave papers in their apartments; I remember his works on Sts Sergius of Radonezh, Serafim of Sarov, the unification of the Churches, and other theological themes. He has an excellent command of language, both in conversation and in scholarly works; his thinking was always original and deep, his speech was artistically gifted." On 15 February 1931, she was arrested for belonging to the Russian Greek Catholic Church. On 18 August 1931, she was sentenced to 5 years of labor camps in the Mariinsky District; she was released in 1936 and sent into exile in Michurinsk. She later lived in Bryansk and in October 1937 moved to Maloyaroslavets, where she joined the sisters, the remnant of Anna Abrikosova's Dominican community. During World War II, Maloyaroslavets was occupied by Nazi Germany and, along with fellow Soviet Jewish Sister Theresa Kugel, Nora Rubashova survived the Holocaust in Russia by working as a nurse in a German military hospital. Whenever possible, both sisters attended the Masses offered by Wehrmacht military chaplains and knelt at the Communion Rail alongside German soldiers who were fully aware of their Jewish ancestry. Many years later, Secular Tertiary Ivan Lupandin asked Rubashova why one of the Catholic chaplains, whom she jokingly called a Hochdeutsch for his staunch belief in German nationalism, never reported her or Sister Theresa's Jewishness to the Gestapo or the SS. Rubashova replied, "Well, he was a Catholic priest. He was nationalistic, but not that nationalistic."
In May 1944, Maloyaroslavets was liberated by the Red Army and Rubashova traveled to Novo Shulba near Semipalatinsk to help Sister Stephanie Gorodets, who was there in exile. Meanwhile, Sister Theresa Kugel, despite her Jewishness, was arrested by the NKVD on charges of collaboration with Nazi Germany. According to Ivan Lupandin, the NKVD's logic was that Sister Theresa must have been a collaborator because, "how else could she have worked in a hospital and not been shot by the Nazis?" In 1947, together with Sister Stephanie, Rubashova returned to Maloyaroslavets, and in the summer of 1948 moved to Kaluga. On 30 November 1948, she was re-arrested for belonging to the Russian Catholic Church and, on 29 October 1949, was sentenced to 15 years of labor camps. Rubashova was sent to Vorkuta Gulag and, in 1954, to Karlag, staying there until May 1956. After her release from the Gulag during the Khrushchev thaw, Rubashova went to Moscow. Mother Stephania Gorodets soon joined her and they lived together in a small flat in a communal apartment building near the University Station of the Moscow metro. Nora Rubashova got a job at the State Historical Library, where she worked until retirement. She attended the Church of Saint Louis, and united around her the surviving community of Russian Catholics. Her room became a meeting place for the sisters and the spiritual center of the new community, which later attracted young people, Moscow State University students, and Soviet dissidents. Visitors included the poet Arseny Tarkovsky, Sergey Averintsev, and Anna Godiner. Furthermore, because Nobel Prize-winning Soviet dissident Alexander Solzhenitsyn interviewed Rubashova in Moscow during his research process, Mother Catherine Abrikosova and the persecution of her monastic community are mentioned briefly in the first volume of The Gulag Archipelago. Beginning in October 1979, the Moscow Catholic community arranged clandestine offerings of the Byzantine Rite Divine Liturgy by Fr. Georgii Friedman, a visiting Greek-Catholic priest from Leningrad who had received illegal seminary training and secret ordination from an underground bishop of the Ukrainian Greek Catholic Church. The May 1981 attempt on the life of Pope John Paul II by Mehmet Ali Ağca was devastating for Rubashova, who often prayed afterwards to the Sacred Heart for the Pope's healing and protection. In her final years, Rubashova rejoiced in the beginning of glasnost and perestroika, but often said cautiously and in Gulag slang about Soviet leader Mikhail Gorbachev, "I can believe any beast, but as for him -- I'll wait a bit." Towards the end of her life, Rubashova often confided in fellow Dominican tertiary Anna Godiner, "I am alone a lot, and I simply sit and timidly talk with God." Compounding her loneliness was the fact that Rubashova's brother and relatives emigrated to the United States under the Jackson-Vanik Amendment and settled in Brighton Beach. Sister Nora Rubashova died on 12 May 1987 in Moscow, Russia, and was buried at the Khovanskoye cemetery near Moscow. A Byzantine Rite funeral Liturgy, or Panikhida, was secretly offered for the repose of her soul by Fr Georgii Friedman. Sources I. Osipova 1996, S. 195; I. Osipova 1999, S. 337; the investigative case of S. M. Soloviev et al., 1931 // TsA FSB RF; investigation case of A. B. Ott et al. // TsA FSB RF; Sokolovsky D.C., S. 174.
References 1909 births 1987 deaths 20th-century Eastern Catholic nuns Belarusian Eastern Catholics Converts to Eastern Catholicism from Judaism Eastern Catholic Dominican nuns Gulag detainees Karlag detainees People from Minsk People of Belarusian-Jewish descent Persecution of Catholics during the pontificate of Pope Pius XII Persecution of Christians in the Eastern Bloc People from Moscow Governorate Belarusian people imprisoned in the Soviet Union Religious persecution by communists Rescue of Jews during the Holocaust Russian Eastern Catholics Russian Jews Soviet dissidents
Nora Rubashova
Biology
1,580
19,439,206
https://en.wikipedia.org/wiki/NGC%205754
NGC 5754 is a barred spiral galaxy located 218 million light years away in the constellation Boötes. It was discovered by German-British astronomer William Herschel on 16 May 1787. NGC 5754 is listed in the Atlas of Peculiar Galaxies as Arp 297, an interacting galaxy group consisting of NGC 5752, NGC 5753, NGC 5754 and NGC 5755. Along with NGC 2718 and UGC 12158, NGC 5754 is often considered a twin of the Milky Way. One supernova has been observed in NGC 5754: SN2021mnj (type II, mag. 18.8). See also List of NGC objects (5001–6000) References External links Distance Image NGC 5754 SIMBAD data Boötes 5754 09505 052686 Interacting galaxies +07-30-061 Barred spiral galaxies 17870516 Discoveries by William Herschel 297
NGC 5754
Astronomy
186
41,081,924
https://en.wikipedia.org/wiki/3D%20sound%20localization
3D sound localization refers to an acoustic technology that is used to locate the source of a sound in a three-dimensional space. The source location is usually determined by the direction of the incoming sound waves (horizontal and vertical angles) and the distance between the source and sensors. It involves the structural arrangement of the sensors and the signal processing techniques applied to their outputs. Most mammals (including humans) use binaural hearing to localize sound, by comparing the information received from each ear in a complex process that involves a significant amount of synthesis. It is difficult to localize using monaural hearing, especially in 3D space. Technology Sound localization technology is used in some audio and acoustics fields, such as hearing aids, surveillance and navigation. Existing real-time passive sound localization systems are mainly based on the time-difference-of-arrival (TDOA) approach, limiting sound localization to two-dimensional space, and are not practical in noisy conditions. Applications Applications of sound source localization include sound source separation, sound source tracking, and speech enhancement. Sonar uses sound source localization techniques to identify the location of a target. 3D sound localization is also used for effective human-robot interaction. With the increasing demand for robotic hearing, some applications of 3D sound localization, such as human-machine interfaces, aids for people with disabilities, and military systems, are being explored. Cues for sound localization Localization cues are features that help localize sound. Cues for sound localization include binaural and monaural cues. Monaural cues can be obtained via spectral analysis and are generally used in vertical localization. Binaural cues are generated by the difference in hearing between the left and right ears. These differences include the interaural time difference (ITD) and the interaural intensity difference (IID). Binaural cues are used mostly for horizontal localization. How does one localize sound? The first clue our hearing uses is the interaural time difference. Sound from a source directly in front of or behind us will arrive simultaneously at both ears. If the source moves to the left or right, the sound from the same source arrives at both ears, but with a certain delay. Put another way, the two ears pick up different phases of the same signal. Methods There are many different methods of 3D sound localization. For instance: Different types of sensor structure, such as microphone arrays and binaural-hearing robot heads. Different techniques for optimal results, such as neural networks, maximum likelihood and multiple signal classification (MUSIC). Real-time methods using an Acoustic Vector Sensor (AVS) array Scanning techniques Offline methods (according to timeliness) Microphone Array Approach Steered Beamformer Approach This approach utilizes eight microphones combined with a steered beamformer enhanced by the Reliability Weighted Phase Transform (RWPHAT). The final results are filtered through a particle filter that tracks sources and prevents false directions. The motivation for using this method comes from previous research: it can track and localize multiple sound sources, whereas earlier approaches applied only to a single sound source.
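The interaural time difference cue described above can be illustrated with a simple far-field, free-field model in which the delay between two sensors is d·sin(azimuth)/c. This is a minimal sketch under that idealized assumption (no head shadowing or near-field effects); the sensor spacing and the angles are example values, not taken from the article.

```python
# Illustrative ITD calculation for a far-field source and two sensors spaced d metres apart.
import math

SPEED_OF_SOUND = 343.0  # m/s, roughly at room temperature

def itd_seconds(mic_spacing_m: float, azimuth_deg: float) -> float:
    """Time difference of arrival between two sensors for a far-field source."""
    return mic_spacing_m * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

for az in (0, 30, 90):
    print(f"azimuth {az:3d} deg -> ITD {itd_seconds(0.2, az) * 1e6:7.1f} microseconds")
```

As the model suggests, a source straight ahead (0 degrees) gives zero delay, while a source to the side gives the largest delay the geometry allows.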
Beamformer-based Sound Localization The aim is to maximize the output energy of a delay-and-sum beamformer, i.e., to find the maximum output of a beamformer steered in all possible directions. Using the Reliability Weighted Phase Transform (RWPHAT) method, the output energy of an M-microphone delay-and-sum beamformer is E = K + 2 Σ_{m1=0}^{M−1} Σ_{m2=0}^{m1−1} R_{m1,m2}(τ_{m1} − τ_{m2}), where E indicates the energy, K is a nearly constant term, and R_{ij}(τ) is the microphone-pair cross-correlation defined by the Reliability Weighted Phase Transform: R_{ij}^{RWPHAT}(τ) = Σ_{k=0}^{L−1} [ζ_i(k) X_i(k) ζ_j(k) X_j*(k) / (|X_i(k)| |X_j(k)|)] e^{j2πkτ/L}. The weighting factor ζ_i(k) reflects the reliability of each frequency component and is defined as the Wiener filter gain ζ_i(k) = ξ_i(k) / (ξ_i(k) + 1), where ξ_i(k) is an estimate of the a priori SNR at microphone i, at the current time frame, for frequency k, computed using the decision-directed approach. X_m(k) is the spectrum of the signal from microphone m and τ_m is the delay of arrival for that microphone. The more specific procedure of this method is proposed by Valin and Michaud. The advantage of this method is that it detects the direction of the sound and derives the distance of sound sources. The main drawback of the beamforming approach is the imperfect nature of sound localization accuracy and capability, versus the neural network approach, which uses moving speakers. Collocated Microphone Array Approach This method relates to the technique of real-time sound localization utilizing an Acoustic Vector Sensor (AVS) array, which measures all three components of the acoustic particle velocity, as well as the sound pressure, unlike conventional acoustic sensor arrays that only utilize the pressure information and delays in the propagating acoustic field. Exploiting this extra information, AVS arrays are able to significantly improve the accuracy of source localization. Acoustic Vector Array • Contains three orthogonally placed acoustic particle velocity sensors (shown as X, Y and Z array) and one omnidirectional acoustic microphone (O). • Commonly used both in air and underwater. • Can be used in combination with the Offline Calibration Process to measure and interpolate the impulse response of X, Y, Z and O arrays, to obtain their steering vector. A sound signal is first windowed using a rectangular window, and each resulting segment is treated as a frame. Four parallel frames are obtained from the XYZO array and used for DOA estimation. The four frames are split into small blocks of equal size, and a Hamming window and FFT are then used to convert each block from the time domain to the frequency domain. The output of the system is the horizontal and vertical angles of the sound sources, which are found from the peaks in the combined 3D spatial spectrum. The advantages of this array, compared with earlier microphone arrays, are that it performs well even when the aperture is small, and that it can localize multiple low-frequency and high-frequency wideband sound sources simultaneously. Adding the O sensor makes more acoustic information available, such as amplitude and time differences. Most importantly, the XYZO array performs well despite its tiny size. The AVS is one kind of collocated microphone array; a multiple-array approach estimates the sound directions with several arrays and then finds the source locations from where the directions estimated by the different arrays cross. Motivation of the Advanced Microphone array Sound reflections always occur in an actual environment and microphone arrays cannot avoid observing those reflections.
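The steered-beamformer idea above can be sketched in code: compute weighted cross-correlations between microphone pairs and sum them at the delays implied by each candidate direction, keeping the direction with maximum energy. The sketch below is a simplification, assuming plain PHAT weighting instead of the reliability (Wiener-gain) weighting of RWPHAT and a far-field linear-array geometry; signal values, spacings and angles are made up for the example.

```python
# Simplified steered-response-power localization with PHAT-weighted cross-correlations.
import numpy as np

SPEED_OF_SOUND = 343.0

def phat_cross_correlation(x, y):
    """GCC-PHAT cross-correlation of two equal-length signals."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    spec = X * np.conj(Y)
    spec /= np.maximum(np.abs(spec), 1e-12)   # whiten: keep only phase information
    return np.fft.irfft(spec, n)

def steered_response_power(signals, mic_positions, fs, angles_deg):
    """Return the candidate azimuth (degrees) with maximum summed correlation."""
    pairs = [(i, j) for i in range(len(signals)) for j in range(i)]
    correlations = {p: phat_cross_correlation(signals[p[0]], signals[p[1]]) for p in pairs}
    best_angle, best_energy = None, -np.inf
    for angle in angles_deg:
        direction = np.array([np.cos(np.radians(angle)), np.sin(np.radians(angle))])
        energy = 0.0
        for (i, j), corr in correlations.items():
            # Far-field inter-microphone delay (in samples) for this candidate direction.
            tau = (mic_positions[j] - mic_positions[i]).dot(direction) / SPEED_OF_SOUND
            energy += corr[int(np.rint(tau * fs)) % len(corr)]
        if energy > best_energy:
            best_angle, best_energy = angle, energy
    return best_angle

if __name__ == "__main__":
    fs, true_angle = 48000, 60.0
    mics = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]])  # 3-microphone linear array (metres)
    rng = np.random.default_rng(0)
    src = rng.standard_normal(fs)                          # 1 s of white noise as the source
    direction = np.array([np.cos(np.radians(true_angle)), np.sin(np.radians(true_angle))])
    signals = []
    for m in mics:
        delay = int(np.rint(m.dot(direction) / SPEED_OF_SOUND * fs))
        signals.append(np.roll(src, -delay) + 0.05 * rng.standard_normal(fs))
    # Should recover an angle close to 60 degrees.
    print(steered_response_power(signals, mics, fs, np.arange(0, 181, 5)))
```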
This multiple-array approach was tested using fixed arrays in the ceiling; the performance of the moving scenario still needs to be tested. Learning how to apply a multiple microphone array: Angle uncertainty (AU) arises when estimating direction, and the resulting position uncertainty (PU) worsens with increasing distance between the array and the source, growing roughly as PU ≈ r · AU, where r is the distance from the array center to the source and AU is the angle uncertainty. A distance measurement is used to judge whether two estimated directions cross at some location or not. The minimum distance between two direction lines is d = |(p1 − p2) · (v1 × v2)| / |v1 × v2|, where the two lines represent the two detected directions, v1 and v2 are vectors parallel to the detected directions, and p1 and p2 are the positions of the arrays. If this distance is small enough (below a threshold derived from the position uncertainty), the two lines are judged as crossing. When two lines are crossing, the sound source location can be computed as a weighted combination x̂ = Σ_i w_i p_i, where x̂ is the estimate of the sound source position, p_i is the point where each direction line meets the segment of minimum distance, and w_i are weighting factors determined from the distance between each array and the line of minimum distance. Scanning Techniques Scan-based techniques are a powerful tool for localizing and visualizing time-stationary sound sources, as they only require the use of a single sensor and a position tracking system. One popular method for achieving this is through the use of an Acoustic Vector Sensor (AVS), also known as a 3D Sound Intensity Probe, in combination with a 3D tracker. The measurement procedure involves manually moving the AVS sensor around the sound source while a stereo camera is used to extract the instantaneous position of the sensor in three-dimensional space. The recorded signals are then split into multiple segments and assigned to a set of positions using a spatial discretization algorithm. This allows for the computation of a vector representation of the acoustic variations across the sound field, using combinations of the sound pressure and the three orthogonal acoustic particle velocities. The results of the AVS analysis can be presented over a 3D sketch of the tested object, providing a visual representation of the sound distribution around a 3D mesh of the object or environment. This can be useful for localizing sound sources in a variety of fields, such as architectural acoustics, noise control, and audio engineering, as it allows for a detailed understanding of the sound distribution and its interactions with the surrounding environment. Learning method for binaural hearing Binaural hearing learning is a bionic method. The sensor is a robot dummy head with two microphones and an artificial pinna (reflector). The robot head has two rotation axes and can rotate horizontally and vertically. The reflector changes the spectrum of an incoming white-noise sound wave into a characteristic pattern, and this pattern is used as the cue for vertical localization. The cue for horizontal localization is ITD. The system makes use of a learning process using neural networks by rotating the head relative to a fixed white-noise sound source and analyzing the spectrum. Experiments show that the system can identify the direction of the source well within a certain range of angles of arrival. It cannot identify sounds coming from outside this range because the reflector's spectral pattern collapses there. Binaural hearing uses only two microphones and is capable of concentrating on one source among multiple noise sources.
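The crossing test and source estimate described above can be sketched with standard closest-point geometry between two lines. The example below is a simplification: it uses the midpoint of the shortest connecting segment rather than the weighted combination mentioned in the text, and the array positions, directions and threshold are made-up values.

```python
# Triangulate a source from two arrays' direction estimates by finding the closest
# points on the two direction lines and checking whether the lines nearly cross.
import numpy as np

def closest_points_on_lines(p1, v1, p2, v2):
    """Closest points on lines p1 + t*v1 and p2 + s*v2, plus their separation distance."""
    v1, v2 = v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)
    w0 = p1 - p2
    a, b, c = v1 @ v1, v1 @ v2, v2 @ v2
    d, e = v1 @ w0, v2 @ w0
    denom = a * c - b * b                 # zero only if the lines are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1, q2 = p1 + t * v1, p2 + s * v2
    return q1, q2, np.linalg.norm(q1 - q2)

if __name__ == "__main__":
    source = np.array([2.0, 3.0, 1.5])                                    # hidden true source
    arrays = [np.array([0.0, 0.0, 2.5]), np.array([5.0, 0.0, 2.5])]       # e.g. ceiling-mounted
    directions = [source - p for p in arrays]   # ideal, noise-free direction estimates
    q1, q2, dist = closest_points_on_lines(arrays[0], directions[0], arrays[1], directions[1])
    if dist < 0.5:   # crossing threshold, chosen arbitrarily for the example
        print("estimated source position:", (q1 + q2) / 2)
```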
Head-related Transfer Function (HRTF) In real sound localization, the robot head and torso play a functional role in addition to the two pinnae. This acts as spatial linear filtering, and the filtering is usually quantified in terms of the head-related transfer function (HRTF). The HRTF approach also uses the robot-head sensor of the binaural hearing model. The HRTF can be derived based on various cues for localization. Sound localization with HRTFs filters the input signal with filters designed from the HRTF. Instead of using neural networks, a head-related transfer function is used and the localization is based on a simple correlation approach. See more: Head-related transfer function. Cross-power spectrum phase (CSP) analysis The CSP method is also used for the binaural model. The idea is that the angle of arrival can be derived through the time delay of arrival (TDOA) between two microphones, and the TDOA can be estimated by finding the maximum coefficient of the CSP. CSP coefficients are derived by csp_ij(k) = IDFT[ X_i(ω) X_j*(ω) / (|X_i(ω)| |X_j(ω)|) ], where X_i(ω) and X_j(ω) are the Fourier transforms of the signals entering microphones i and j respectively. The time delay of arrival τ (in samples) can then be estimated by τ = argmax_k csp_ij(k). The sound source direction is θ = cos⁻¹(v·τ / (d_max·f_s)), where v is the sound propagation speed, f_s is the sampling frequency and d_max is the distance between the two microphones, corresponding to the maximum possible time delay. The CSP method does not require the system impulse-response data that the HRTF method needs. An expectation-maximization algorithm is also used to localize several sound sources and reduce the localization errors. The system is capable of identifying several moving sound sources using only two microphones. 2D sensor line array In order to estimate the location of a source in 3D space, two line sensor arrays can be placed horizontally and vertically. An example is a 2D line array used for underwater source localization. By processing the data from two arrays using the maximum likelihood method, the direction, range and depth of the source can be identified simultaneously. Unlike the binaural hearing model, this method is similar to the spectral analysis method. The method can be used to localize a distant source. Self-rotating Bi-Microphone Array The rotation of the two-microphone array (also referred to as a bi-microphone array) leads to a sinusoidal inter-channel time difference (ICTD) signal for a stationary sound source present in a 3D environment. The phase shift of the resulting sinusoidal signal can be directly mapped to the azimuth angle of the sound source, and the amplitude of the ICTD signal can be represented as a function of the elevation angle of the sound source and the distance between the two microphones. In the case of multiple sources, the ICTD signal has data points forming multiple discontinuous sinusoidal waveforms. Machine learning techniques such as random sample consensus (RANSAC) and density-based spatial clustering of applications with noise (DBSCAN) can be applied to identify phase shifts (mapping to azimuths) and amplitudes (mapping to elevations) of each discontinuous sinusoidal waveform in the ICTD signal. Hierarchical Fuzzy Artificial Neural Networks Approach The Hierarchical Fuzzy Artificial Neural Networks Approach sound localization system was modeled on biological binaural sound localization. Some primitive animals with two ears and small brains can perceive 3D space and process sounds, although the process is not fully understood. Some animals experience difficulty in 3D sound location due to small head size.
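The self-rotating bi-microphone idea above can be sketched under a simple far-field geometric model: as the array rotates by angle ψ, the inter-channel time difference follows ICTD(ψ) = (d/c)·cos(elevation)·cos(ψ − azimuth), so the phase of a fitted sinusoid yields the azimuth and its amplitude yields the elevation. The model, spacing, angles and noise level below are illustrative assumptions, not details from the cited work.

```python
# Fit a sinusoid to simulated ICTD samples from a rotating two-microphone array
# and recover azimuth (phase) and elevation (amplitude).
import numpy as np

SPEED_OF_SOUND, MIC_SPACING = 343.0, 0.2  # m/s, metres

def fit_ictd_sinusoid(psi, ictd):
    """Least-squares fit ictd ~ a*cos(psi) + b*sin(psi); return (azimuth, elevation) in degrees."""
    design = np.column_stack([np.cos(psi), np.sin(psi)])
    (a, b), *_ = np.linalg.lstsq(design, ictd, rcond=None)
    azimuth = np.arctan2(b, a)            # phase shift of the sinusoid -> azimuth
    amplitude = np.hypot(a, b)            # amplitude -> elevation via (d/c)*cos(elevation)
    elevation = np.arccos(np.clip(amplitude * SPEED_OF_SOUND / MIC_SPACING, -1, 1))
    return np.degrees(azimuth), np.degrees(elevation)

if __name__ == "__main__":
    true_az, true_el = np.radians(40.0), np.radians(25.0)
    psi = np.linspace(0, 2 * np.pi, 50)   # rotation angles of the array
    ictd = (MIC_SPACING / SPEED_OF_SOUND) * np.cos(true_el) * np.cos(psi - true_az)
    ictd += 1e-6 * np.random.default_rng(1).standard_normal(psi.size)  # small measurement noise
    print(fit_ictd_sinusoid(psi, ictd))   # approximately (40.0, 25.0)
```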
Additionally, the wavelength of communication sound may be much larger than their head diameter, as is the case with frogs. Based on previous binaural sound localization methods, a hierarchical fuzzy artificial neural network system combines interaural time difference (ITD-based) and interaural intensity difference (IID-based) sound localization methods, achieving an accuracy similar to that of humans. Hierarchical fuzzy artificial neural networks were used with the goal of matching the sound localization accuracy of human ears. IID-based or ITD-based sound localization methods have a main problem called front-back confusion. In this hierarchical-neural-network-based system, IID estimation is combined with ITD estimation to solve this issue. This system was used for broadband sounds and can be deployed in non-stationary scenarios. 3D sound localization for monaural sound source Typically, sound localization is performed by using two (or more) microphones. By using the difference of arrival times of a sound at the two microphones, one can mathematically estimate the direction of the sound source. However, the accuracy with which an array of microphones can localize a sound (using the interaural time difference) is fundamentally limited by the physical size of the array. If the array is too small, then the microphones are spaced too closely together so that they all record essentially the same sound (with ITD near zero), making it extremely difficult to estimate the orientation. Thus, it is not uncommon for microphone arrays to range from tens of centimeters in length (for desktop applications) to many tens of meters in length (for underwater localization). However, microphone arrays of this size then become impractical to use on small robots. Even for large robots, such microphone arrays can be cumbersome to mount and to maneuver. In contrast, the ability to localize sound using a single microphone (which can be made extremely small) holds the potential of significantly more compact, as well as lower cost and power, devices for localization. Conventional HRTF approach A general way to implement 3D sound localization is to use the HRTF (head-related transfer function). First, compute HRTFs for the 3D sound localization, by formulating two equations; one represents the signal of a given sound source and the other indicates the signal output from the robot head microphones for the sound transferred from the source. Monaural input data are processed by these HRTFs, and the results are output from stereo headphones. The disadvantage of this method is that many parametric operations are necessary for the whole set of filters to realize the 3D sound localization, resulting in high computational complexity. DSP implementation of 3D sound localization A DSP-based implementation of a real-time 3D sound localization approach with the use of an embedded DSP can reduce the computational complexity. The implementation procedure of this real-time algorithm is divided into three phases: (i) frequency division, (ii) sound localization, and (iii) mixing. In the case of 3D sound localization for a monaural sound source, the audio input data are divided into left and right channels, and the time-series input data are processed one after another. A distinctive feature of this approach is that the audible frequency band is divided into three so that a distinct procedure of 3D sound localization can be exploited for each of the three subbands.
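The array-size limitation described above can be quantified: the maximum possible delay between two microphones is spacing/c, so at a given sampling rate a small array spans only a fraction of a sample of delay. The sketch below uses example spacings and an assumed sampling rate to illustrate the point.

```python
# Maximum inter-microphone delay (in samples) for several array sizes.
SPEED_OF_SOUND = 343.0  # m/s

def max_itd_in_samples(spacing_m: float, sample_rate_hz: float) -> float:
    return spacing_m / SPEED_OF_SOUND * sample_rate_hz

for spacing in (0.01, 0.05, 0.2, 1.0):   # from 1 cm (tiny robot) up to 1 m
    samples = max_itd_in_samples(spacing, 16_000)
    print(f"spacing {spacing * 100:5.1f} cm -> max delay {samples:6.2f} samples")
```

At 1 cm spacing the whole range of possible delays is under one sample, which is why such arrays struggle to resolve direction from timing alone.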
Single microphone approach Monaural localization is made possible by the structure of the pinna (outer ear), which modifies the sound in a way that is dependent on its incident angle. A machine learning approach is adapted for monaural localization using only a single microphone and an “artificial pinna” (that distorts sound in a direction-dependent way). The approach models the typical distribution of natural and artificial sounds, as well as the direction-dependent changes to sounds induced by the pinna. The experimental results also show that the algorithm is able to fairly accurately localize a wide range of sounds, such as human speech, dog barking, waterfall, thunder, and so on. In contrast to microphone arrays, this approach also offers the potential of significantly more compact, as well as lower cost and power, devices for sound localization. See also 3D sound reconstruction Acoustic source localization Binaural recording Head-related transfer function Perceptual-based 3D sound localization Sound localization Vertical sound localization References External links 3-D Localization of Virtual Sound Sources 3-D Acoustic Vector Sensor (air) Acoustics Hearing
3D sound localization
Physics
3,574
47,329,480
https://en.wikipedia.org/wiki/One-to-one%20%28data%20model%29
In systems analysis, a one-to-one relationship is a type of cardinality that refers to the relationship between two entities (see also entity–relationship model) A and B in which one element of A may only be linked to one element of B, and vice versa. In mathematical terms, there exists a bijective function from A to B. For instance, think of A as the set of all human beings, and B as the set of all their brains. Any person from A can and must have only one brain from B, and any human brain in B can and must belong to only one person that is contained in A. In a relational database, a one-to-one relationship exists when one row in a table may be linked with only one row in another table and vice versa. It is important to note that a one-to-one relationship is not a property of the data, but rather of the relationship itself. A list of mothers and their children may happen to describe mothers with only one child, in which case one row of the mothers table will refer to only one row of the children table and vice versa. The real-world relationship that the data models is not one-to-one, because mothers may have more than one child, thus forming a one-to-many relationship. See also One-to-many (data model) Many-to-many (data model) External links Design pattern: many-to-many (order entry), Tomjewett.com Data modeling
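The bijection constraint described above can be expressed as a small check on a mapping. The sketch below uses made-up identifiers; the person/passport pairing is an illustrative one-to-one relationship, while the child-to-mother mapping shows data that is not one-to-one (one mother, many children).

```python
# Check whether a mapping between two sets of keys behaves as a one-to-one relationship.
def is_one_to_one(mapping: dict) -> bool:
    """True if the dict represents a bijection (no two keys share a value)."""
    return len(set(mapping.values())) == len(mapping)

person_to_passport = {"alice": "P-100", "bob": "P-200", "carol": "P-300"}
child_to_mother = {"child_a": "mother_1", "child_b": "mother_1"}  # two children, one mother

print(is_one_to_one(person_to_passport))  # True: usable as a one-to-one relationship
print(is_one_to_one(child_to_mother))     # False: this data is one-to-many, not one-to-one
```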
One-to-one (data model)
Engineering
312
1,573,974
https://en.wikipedia.org/wiki/Ken%20Alibek
Kanatzhan "Kanat" Baizakovich Alibekov (born 1950), known as Kenneth "Ken" Alibek since 1992, is a Kazakh-American microbiologist, bioweaponeer, and biological warfare administrative management expert. He was the first deputy director of Biopreparat. During his career in Soviet bioweaponry development in the late 1970s and 1980s, Alibekov managed projects that included weaponizing glanders and Marburg hemorrhagic fever, and created Russia's first tularemia bomb. His most prominent accomplishment was the creation of a new "battle strain" of anthrax, known as "Strain 836", later described by the Los Angeles Times as "the most virulent and vicious strain of anthrax known to man". In 1992, he defected to the United States; he has since become an American citizen and made his living as a biodefense consultant, speaker, and entrepreneur. He had actively participated in the development of biodefense strategy for the U.S. government, and between 1998 and 2005 he testified several times before the U.S. Congress and other governments on biotechnology issues, saying he was “convinced that Russia’s biological weapons program has not been completely dismantled”. In 1994, Alibek received a congressional award, a bronze Barkley medal awarded in recognition of distinguished public service and his contribution to world peace. In 2002, Alibek told United Press International that there is concern that monkeypox could be engineered into a biological weapon. Ohio-based Locus Fermentation Solutions hired Alibek in 2015 as executive vice president for research and development of biologically active molecules for different applications. Early life and education Alibek was born Kanat Alibekov in Kauchuk, in the Kazakh SSR of the Soviet Union (present-day Kazakhstan), to a Kazakh family. He grew up in Almaty, the republic's former capital. He is a certified oncologist, a doctor of science, doctor of philosophy and a doctor of medicine. Career Alibek's academic performance while studying military medicine at the Tomsk Medical Institute and his family's noted patriotism led to his selection to work for Biopreparat, the secret biological weapons program overseen by the Soviet Union's Council of Ministers. His first assignment in 1975 was to the Eastern European Branch of the Institute of Applied Biochemistry (IAB) near Omutninsk, a combined pesticide production facility and reserve biological weapons production plant intended for activation in a time of war. At Omutninsk, Alibek mastered the art and science of formulating and evaluating nutrient media and cultivation conditions for the optimization of microbial growth. While there, he expanded his medical school laboratory skills into the complex skill set required for industrial-level production of microorganisms and their toxins. After a year at Omutninsk, Alibek was transferred to the Siberian Branch of the IAB near Berdsk (another name of the branch was the Berdsk scientific and production base). With the assistance of a colleague, he designed and constructed a microbiology research and development laboratory that worked on techniques to optimize the production of biological formulations. After several promotions, Alibek was transferred back to Omutninsk, where he rose to the position of deputy director. He was soon transferred to the Kazakhstan Scientific and Production Base in Stepnogorsk (another reserve BW facility) to become the new director of that facility. 
Officially, he was deputy director of the Progress Scientific and Production Association, a manufacturer of fertilizer and pesticide. At Stepnogorsk, Alibek created an efficient industrial-scale assembly line for biological formulations. In a time of war, the assembly line could be used to produce weaponized anthrax. Continued successes in science and biotechnology led to more promotions, which resulted in a transfer to Moscow. Biopreparat In Moscow, Alibek began his service as deputy chief of the biosafety directorate at Biopreparat. He was promoted in 1988 to first deputy director of Biopreparat, where he not only oversaw the biological weapons facilities but also the significant number of pharmaceutical facilities that produced antibiotics, vaccines, sera, and interferon for the public. In response to a Spring 1990 announcement that the Ministry of Medical and Microbiological Industry was to be reorganized, Alibek drafted and forwarded a memo to then General Secretary Mikhail Gorbachev proposing the cessation of Biopreparat's biological weapons work. Gorbachev approved the proposal, but an additional paragraph was secretly inserted into Alibek's draft, resulting in a presidential decree that ordered the end of Biopreparat's biological weapons work but also required them to remain prepared for future bioweapons production. Alibek used his position at Biopreparat and the authority granted to him by the first part of the decree to begin the destruction of biological weapons and the dismantling of biological weapons production and testing capabilities at a number of research and development facilities, including Stepnogorsk, Kol'tsovo, Obolensk, and others. He also negotiated a concurrent appointment to a Biopreparat facility called Biomash. Biomash designed and produced technical equipment for microbial cultivation and testing. He planned to increase the proportion of its products sent to hospitals and civilian medical laboratories beyond the 40% allocated at the time. Following the dissolution of the Soviet Union in December 1991, Alibek was placed in charge of intensive preparations for inspections of Soviet biological facilities by a joint American and British delegation. But when he participated in the subsequent Soviet inspection of American facilities, his suspicion that the U.S. did not have an offensive bioweapons program was confirmed before his return to Russia. In January 1992, not long after his return from the U.S., Alibek protested against Russia's continuation of bioweapons work and resigned from both the Russian Army and Biopreparat. Immigration to the United States In October 1992, Alibek and his family emigrated to the United States. After moving to the U.S., Alibek provided the government with a detailed accounting of the former Soviet biological weapons program. During a CIA debriefing, Alibek described the Soviet efforts to weaponize a particularly virulent smallpox strain, producing hundreds of tons of the virus that could be disseminated with bombs or ballistic missiles. Information about the Soviet biological weapons program had already been provided in 1989 by Vladimir Pasechnik, a scientist who had defected. Alibek has testified before the U.S. Congress several times and has provided guidance to U.S. intelligence, policy, national security, and medical communities.
He was the impetus behind the creation of a biodefense graduate program at the Schar School of Policy and Government at George Mason University, serving as Distinguished Professor of Medical Microbiology and the program's Director of Education. He also developed the plans for the university's biosafety level three (BSL-3) research facility and secured $40 million of grants from the federal and state governments for its construction. From 1993 to 1999, Alibek took on multiple R&D roles, including a visiting scientist at the National Institutes of Health, researching novel antigenic, potentially immunogenic substances for the development of a tuberculosis vaccine; project manager at SRS Technologies, where he researched, analyzed and developed detailed synthesis reports regarding the biotechnological capabilities of foreign countries; and program manager at Battelle Memorial Institute, overseeing research projects in medical biotechnology, biosynthesis and fermentation equipment. In 1999, Alibek published an autobiographical account of his work in the Soviet Union and his defection. Reporting the prospect of Iraq gaining the ability to get hold of smallpox or anthrax, Alibek said, "there is no doubt that Saddam Hussein has weapons of mass destruction." However, no biological weapons were later found in Iraq. Entrepreneur and research administrator Alibek was president, chief scientific officer, and chief executive officer at AFG Biosolutions, Inc. in Gaithersburg, Maryland, where he and his scientific team continued their development of advanced solutions for antimicrobial immunity. Motivated by the lack of affordable anti-cancer therapies available in Eastern Europe and Central Asia, AFG was using Alibek's biotechnology experience to plan, build, and manage a new pharmaceutical production facility designed specifically to address this problem. Alibek created a new pharmaceutical production company, MaxWell Biocorporation (MWB), in 2006 and served as its chief executive officer and president. Based in Washington, D.C., with several subsidiaries and affiliates in the U.S. and Ukraine, MWB's main stated goal is to create a new, large-scale, high-technology, ultra-modern pharmaceutical fill-and-finish facility in Ukraine. Off-patent generic pharmaceuticals produced at this site are intended to target severe oncological, cardiological, immunological, and chronic infectious diseases. Construction of the Boryspil facility began in April 2007 and was completed in March 2008; initial production was scheduled to begin in 2008. The stated intention was that high-quality pharmaceuticals would be produced and become an affordable source of therapy for millions of underprivileged people who currently have no therapeutic options. Alibek stepped down as President of MWB in the summer of 2008 shortly after the facility opened. Alibek's main research focus was developing novel forms of therapy for late-stage oncological diseases and other chronic degenerative pathologies and disorders. He focuses on the role of chronic viral and bacterial infections in causing age-related diseases and premature aging. Additionally, he develops and implements novel systemic immunotherapy methods for late-stage cancer patients. Throughout his career, Alibek has published nine research articles on the role of infectious diseases in cancer.
Work in Kazakhstan In 2010, Alibek was invited to begin working in Kazakhstan as head of the Department of Chemistry and Biology at the School of Science and Technology of Nazarbayev University in Astana, where he was engaged in the development of anti-cancer drugs and life-prolonging drugs, and was chairman of the board of the Republican Scientific Center for Emergency Medical Care and headed the National Scientific Center for Oncology and Transplantation. During his stay, he published a number of articles in research journals and taught various courses in various fields of biology and medicine. He focused on a possible role of chronic infections, metabolic disorders, and immunosuppression in cancer development. In 2011, he was awarded a prize from the Deputy Prime Minister for his contribution to the development of the educational system in Kazakhstan. In 2014, he was awarded a medal by the Minister of Education and Science of Kazakhstan for his contribution to research in Kazakhstan. He continues his work as an administrative manager in research and medicine and as a professor. However, after seven years, no significant scientific results emerged from Alibek's work. During these seven years, Alibek received more than 1 billion Tenge from the budget for the "New Systemic Therapy for Cancer Tumors" project he tried to implement. The promising Swedish technique remained only a general concept, and a panacea for cancer treatment did not appear. Three patent applications submitted by Alibek were rejected by the National Institute of Intellectual Property of the Ministry of Justice of the Republic of Kazakhstan for lack of novelty. In 2016, Alibek was chosen as one of the nominees in the "Science" category of the national project El Tulgasy, which was designed to select the most significant citizens of Kazakhstan who are associated with national achievements. More than 350,000 people voted in this project, and Alibek placed 10th in his category. COVID-19 Alibek has experience in vaccine development for pandemics. In 2006, his article on new principles for developing these vaccines was published in Future Medicine. In January 2020, Alibek issued a warning about COVID-19 and its potential as a global problem. His research on safe methods of protection against the virus ahead of a vaccine was later published in the journal Research Ideas and Outcomes (RIO). He also wrote two chapters on methods to protect against the COVID-19 pandemic in the book Defending Against Biothreats: What We Can Learn from the Coronavirus Pandemic to Enhance U.S. Defenses Against Pandemics and Biological Weapons. In 2021, Alibek offered a free seminar on antiviral biodefence in a world of epidemic uncertainties. Autism research Starting in 2007, Alibek began researching autism based on his background as a board-certified oncologist and his own personal connection to the disorder through his daughter, Mary. He supports the idea that the disorder is the result of prenatal viral and bacterial infections. Multiple studies have been conducted with patients who have autism spectrum disorder, including a 2018-2019 study with 57 patients, a 2021-2023 study with 142 patients and a 2023 study with 32 patients. In addition, more than 1,000 children have been treated using the protocol. His patients are located predominantly in nations of the former Soviet Union and Ukraine, and he consults mainly using free telemedicine services.
During these studies, specific inflammation markers, along with biochemical and neuropsychiatric parameters, were identified as objective measures of improvement and symptom reduction. Alibek has published six studies in peer-reviewed journals about the causes and treatment of autism, and has one issued U.S. patent and three U.S. patent filings on his novel approach to treatment. Criticism In a September 2003 news release, Alibek and another professor suggested, based on their laboratory research, that the smallpox vaccination might increase a person's resistance to HIV. The work was rejected after peer review by the Journal of the American Medical Association and The Lancet and is no longer being pursued. According to smallpox expert and former White House science advisor Donald Henderson, "This is a theory that... does not hold up at all, and it does not make any sense from a biologic point of view...This idea...was straight off the wall. I would put no credence in it at all." In 2010, an article coauthored by Alibek appeared in the scientific journal BMC Immunology, outlining the results of their research showing that prior immunization with the vaccine Dryvax may confer resistance to HIV replication. Alibek has also promoted "Dr. Ken Alibek's Immune System Support Formula," a dietary supplement sold over the Internet. It contains vitamins, minerals, and a proprietary bacterial mix that purportedly "bolsters the immune system". Personal life Alibek has a wife and five children (two sons and three daughters); one of his daughters is autistic. Publications Books Alibek, Ken and Steven Handelman (1999), Biohazard: The Chilling True Story of the Largest Covert Biological Weapons Program in the World – Told from Inside by the Man Who Ran It, Random House. "The Anthrax Vaccine: Is It Safe? Does It Work?" (2002), Reviewer. National Academy Press, Washington, D.C., Institute of Medicine. Biological Threats and Terrorism: Assessing the Science and Response Capabilities (2002), Workshop Summary, Contributor. National Academy Press, Washington, D.C., Institute of Medicine. Weinstein, R.S. and K. Alibek (2003), Biological and Chemical Terrorism: A Guide for Healthcare Providers and First Responders, Thieme Medical Publishing, New York. Alibek, K., et al. (2003), Biological Weapons, Bio-Prep, Louisiana. Fong, I. and K. Alibek (2005), Bioterrorism and Infectious Agents: A New Dilemma for the 21st Century, Springer. Fong, I. and K. Alibek (2006), New and Evolving Infections of the 21st Century, Springer. Fleitz, Alibek, Bryen, Rosett, Chang, DeSutter, Elliott, Faddis, Geraghty, Gibson (2020), Defending Against Biothreats: What We Can Learn from the Coronavirus Pandemic to Enhance U.S. Defenses Against Pandemics and Biological Weapons Book chapters "Firepower in the Lab: Automation in the Fight Against Infectious Diseases and Bioterrorism" (2001), Chapter 15 of Biological Weapons: Past, Present, and Future, National Academy Press, Washington, D.C., Institute of Medicine. Jane's Chem-Bio Handbook (2002), Second Edition, F. R. Sidell, W. C. Patrick, T. R. Dashiell, K. Alibek, Jane's Information Group, Alexandria, VA. K. Alibek, C. Lobanova, "Modulation of Innate Immunity to Protect Against Biological Weapon Threat" (2006). In: Microorganisms and Bioterrorism, Springer. Op-Eds The New York Times "Russia's Deadly Expertise", March 27, 1998. "Smallpox Could Still Be a Danger", May 24, 1999. The Wall Street Journal "Russia Retains Biological Weapons Capability", February, 2000. 
"Bioterror: A Very Real Threat", October, 2001. The Washington Post "Anthrax under the Microscope", with Matthew Meselson, November 5, 2002. Selected Congressional Testimony Testimony before the Joint Economic Committee, May 1998: "Terrorist and Intelligence Operations: Potential Impact on the US Economy" Testimony before the Senate Select Committee on Intelligence, June, 1999 Testimony before the House Armed Services Committee, October, 1999 Testimony before the House Armed Services Committee, May, 2000 Testimony before the House Subcommittee on National Security, Veterans Affairs, and International Relations of the Committee on Government Reform, October 12, 2001: "Combating Terrorism: Assessing the Threat of a Biological Weapons Attack", House Serial No. 107-103 Testimony before the House Committee on International Relations, December, 2001: "Russia, Iraq, and Other Potential Sources of Anthrax, Smallpox, and Other Bioterrorist Weapons" Testimony before the Senate Subcommittee on Departments of Labor, Health and Human Services, Education, and Related Agencies of the Committee on Appropriations, November, 2001 Testimony before the Subcommittee on Prevention of Nuclear and Biological Attack, Committee on Homeland Security, US House of Representatives, July 28, 2005: "Implementing a National Biodefense Strategy" House Permanent Select Committee on Intelligence, March 1999 Biological Warfare Threats Testimony before the House Subcommittee on Prevention of Nuclear and Biological Attack, July 13, 2005: "Engineering Bio-terror Agents: Lessons Learned from the Offensive US and Russian Biological Weapons Programs" Awards, Presentations and Distinctions 2014 Kazakhstan Government's Prime Minister Award “For Distinguished Contribution to Science” 2011 Kazakhstan Government's Vice Prime Minister Award “For the Development of Kazakhstan Education System” 2007 “Panacea Award” for Innovations in Medical and Pharmaceutical Industries, Kyiv (Ukraine) 2005 Lecturer for on a “Russian-American Security Program of Harvard University’s John Kennedy Center for Government Studies 2005 Senior Fellow, Center for Advanced Defense Studies, Washington DC 2004 Outstanding Faculty Member Award, George Mason University 2002 Business Forward Magazine Award: “Deals of the Year” for One of the Largest Federal Research Contracts for Small Businesses 2000 (Davos, Switzerland), 2002 (New York) Invited speaker to the World Economic Forum 1994 Congressional Award: Bronze medal named after Albane W. Barkley - Awarded by the U.S. Government in Recognition of Distinguished Public Service 1989 A Colonel of Medical Services, awarded by the USSR's Minister of Defense 1983 Medal “For Combat Merits” by the USSR's Minister of Defense References Further reading "Interview Dr. Ken Alibek", Journal of Homeland Security, September 18, 2000 External links 1950 births Living people 20th-century American biologists 2001 anthrax attacks American people of Kazakhstani descent Kazakhstani emigrants to the United States Kazakhstani scientists People from Almaty Region People related to biological warfare Siberian State Medical University alumni Soviet biological weapons program Soviet microbiologists Soviet military doctors The Heritage Foundation
Ken Alibek
Biology
4,134
57,820,758
https://en.wikipedia.org/wiki/HD%2089345%20b
HD 89345 b is a Neptune-like exoplanet that orbits a G-type star. It is also called K2-234b. Its mass is equivalent to 35.7 Earths; it takes 11.8 days to complete one orbit and lies 0.105 AU from its star. It was discovered by a team of 43 astrophysicists, one of whom was V. Van Eylen, and was announced in 2018. Overview The exoplanet HD 89345 b, which has a mass of 0.1 and a radius of 0.61 , was assigned to the class of ocean planets. The parent star of the planet, which is about 5.3 billion years old, belongs to the spectral class of G5V-G6V. It is 66 percent larger and 12 percent more massive than the Sun, and is located 413 light-years away. The effective temperature of the star is 5609 K. Considering that HD 89345 b makes one revolution around the star in 11.8 days at a distance of 0.11 AU, the planet was described by researchers as a warm sub-Saturn with an equilibrium temperature of 1059 K. Discovery HD 89345 b, a Saturn-sized exoplanet orbiting the slightly evolved star HD 89345, was discovered in 2018 using the transit photometry method, the process that detects distant planets by measuring the minute dimming of a star as an orbiting planet passes between it and the Earth. It is the only known planet orbiting HD 89345, a G5-class star situated in the constellation of Leo, 413 light-years from the Sun. The star is about 9.4 billion years old. HD 89345 b orbits its star in about 12 Earth days in an elliptical orbit. The orbit is closer to the star than the inner limit of the habitable zone. The planet has a low density and may be composed largely of gas. Its parent star, HD 89345, is a bright star (apparent magnitude 9.3) observed by the K2 mission with one-minute time sampling. It exhibits solar-like oscillations. These data enable asteroseismology, which makes it possible to determine the parameters of the star, including its mass and radius. Its mass is 1.12 and its mean radius is 1.657 . The star appears to have recently left the main sequence, based on the inferred age, 9.4 Gyr, and the non-detection of mixed modes. The star hosts a "warm Saturn" with an orbital period of approximately 11.8 days and a radius of . Radial-velocity follow-up observations performed with the FIES, HARPS, and HARPS-N spectrographs show that the planet has a mass of . The data also show that the planet's orbit is eccentric (). See also List of potentially habitable exoplanets List of exoplanet firsts List of exoplanetary host stars List of exoplanets discovered using the Kepler spacecraft List of planets observed during Kepler's K2 mission List of nearest terrestrial exoplanet candidates References Transiting exoplanets Exoplanets discovered in 2018 Leo (constellation)
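The orbital figures quoted above are mutually consistent with Kepler's third law. The short sketch below checks this; it assumes that the stellar mass of 1.12 quoted in the article is in solar masses (the unit itself does not appear in the text), and the physical constants used are standard textbook values rather than figures taken from the article.

```python
# Consistency check (illustrative, not from the article): Kepler's third law links the
# 11.8-day orbital period and the host star's mass to the 0.105 AU orbital distance.
# Assumption: the stellar mass of 1.12 quoted in the article is in solar masses;
# the unit itself does not appear in the text.

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

period_s = 11.8 * 86400.0        # orbital period from the article, in seconds
m_star = 1.12 * M_SUN            # assumed unit: solar masses

# Kepler's third law for a small companion: a^3 = G * M * P^2 / (4 * pi^2)
a_m = (G * m_star * period_s ** 2 / (4 * math.pi ** 2)) ** (1.0 / 3.0)

print(f"semi-major axis ≈ {a_m / AU:.3f} AU")   # prints ≈ 0.105 AU, matching the article
```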
HD 89345 b
Astronomy
670
67,528,686
https://en.wikipedia.org/wiki/Telkom-1
Telkom-1 was a geosynchronous communications satellite built by Lockheed Martin (Sunnyvale, California) for Indonesia's state-owned telecommunications company, PT Telekomunikasi Indonesia Tbk (PT Telkom). It operated for almost 18 years, more than two years past its design lifetime of 15 years. Launch Telkom-1 was successfully launched on 12 August 1999 by an Ariane-42P H10-3 from Centre Spatial Guyanais, pad ELA-2, Kourou, French Guiana, at 22:52 UTC and positioned in geostationary orbit at 108° East to replace Palapa-B2R. Satellite description Based on the Lockheed Martin A2100A satellite bus, Telkom-1 carried 24 C-band and 12 Enhanced C-band transponders. The new spacecraft replaced the on-orbit Palapa-B2R satellite, improved communications coverage across Indonesia, and allowed PT Telkom to expand its coverage area into Southeast Asia and the Indian subcontinent. The launch had been delayed because of problems with the satellite's manufacturing. Telkom-1 was a successor to the Palapa series of satellites, the first of which (Palapa-A1) was launched in 1976. The mass of Telkom-1 was at launch, and in geostationary orbit (GEO). Mission Telkom-1 developed problems with its south solar panel drive due to a manufacturing error. The satellite was planned to be decommissioned in 2018 and to be replaced by Telkom-4. On 25 August 2017, Telkom-1 lost contact and suffered a massive debris-shedding event; it was retired without being able to move itself into a graveyard orbit. References External links Telkom-2 Telkom-1 Communications satellites Satellites using the A2100 bus Satellites in geosynchronous orbit Lockheed Martin satellites and probes Satellites of Indonesia
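As an illustrative aside (not drawn from the article), the geostationary slot at 108° East mentioned above sits at the standard geostationary orbital radius, which follows from Kepler's third law using textbook constants for the Earth:

```python
# Illustrative sketch using textbook values (not figures from the article): the altitude
# of a geostationary slot such as Telkom-1's 108° East position follows from requiring a
# circular orbit whose period equals one sidereal day.

import math

MU_EARTH = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6378.137e3        # Earth's equatorial radius, m
SIDEREAL_DAY = 86164.1      # length of one sidereal day, s

# Kepler's third law for a circular orbit: T = 2*pi*sqrt(a^3 / mu)  =>  a = (mu*(T/2pi)^2)^(1/3)
a = (MU_EARTH * (SIDEREAL_DAY / (2 * math.pi)) ** 2) ** (1.0 / 3.0)

print(f"orbital radius ≈ {a / 1e3:,.0f} km")                     # ≈ 42,164 km
print(f"altitude above equator ≈ {(a - R_EARTH) / 1e3:,.0f} km") # ≈ 35,786 km
```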
Telkom-1
Astronomy,Technology
402
22,217,168
https://en.wikipedia.org/wiki/Cyber%20espionage
Cyber espionage, cyber spying, or cyber-collection is the act or practice of obtaining secrets and information without the permission and knowledge of the holder of the information, using methods on the Internet, networks or individual computers through the use of proxy servers, cracking techniques and malicious software, including Trojan horses and spyware. Cyber espionage can be used to target various actors, including individuals, competitors, rivals, groups, governments, and others, in order to obtain personal, economic, political or military advantages. It may be perpetrated wholly online from the computer desks of professionals on bases in faraway countries, or it may involve infiltration at home by computer-trained conventional spies and moles; in other cases, it may be the criminal handiwork of amateur malicious hackers and software programmers. History Cyber spying started as far back as 1996, when widespread deployment of Internet connectivity to government and corporate systems gained momentum. Since that time, there have been numerous cases of such activities. Details Cyber spying typically involves the use of such access to secrets and classified information, or control of individual computers or whole networks, for a strategic advantage and for psychological, political and physical subversion activities and sabotage. More recently, cyber spying involves analysis of public activity on social networking sites like Facebook and Twitter. Such operations, like non-cyber espionage, are typically illegal in the victim country while fully supported by the highest level of government in the aggressor country. The ethical situation likewise depends on one's viewpoint, particularly one's opinion of the governments involved. Platforms and functionality Cyber-collection tools have been developed by governments and private interests for nearly every computer and smart-phone operating system. Tools are known to exist for Microsoft, Apple, and Linux computers and iPhone, Android, BlackBerry, and Windows phones. Major manufacturers of commercial off-the-shelf (COTS) cyber-collection technology include Gamma Group from the UK and Hacking Team from Italy. Bespoke cyber-collection tool companies, many offering COTS packages of zero-day exploits, include Endgame, Inc. and Netragard of the United States and Vupen from France. State intelligence agencies often have their own teams to develop cyber-collection tools, such as Stuxnet, but require a constant source of zero-day exploits in order to insert their tools into newly targeted systems. Specific technical details of these attack methods often sell for six-figure sums. Common functions of cyber-collection systems include: Data scan: local and network storage are scanned to find and copy files of interest; these are often documents, spreadsheets, design files such as AutoCAD files, and system files such as the passwd file. Capture location: GPS, WiFi, network information and other attached sensors are used to determine the location and movement of the infiltrated device. Bug: the device microphone can be activated in order to record audio. Likewise, audio streams intended for the local speakers can be intercepted at the device level and recorded. Hidden Private Networks that bypass the corporate network security. 
A computer that is being spied upon can be plugged into a legitimate corporate network that is heavily monitored for malware activity while at the same time belonging to a private Wi-Fi network outside the company network that leaks confidential information off an employee's computer. A computer like this can easily be set up by a double agent working in the IT department, who installs a second wireless card in the computer along with special software to remotely monitor it through this second interface card, without the user being aware of the side-band communication channel pulling information off the machine. Camera: the device cameras can be activated in order to covertly capture images or video. Keylogger and Mouse Logger: the malware agent can capture each keystroke, mouse movement and click that the target user makes. Combined with screen grabs, this can be used to obtain passwords that are entered using a virtual on-screen keyboard. Screen Grabber: the malware agent can take periodic screen capture images. In addition to showing sensitive information that may not be stored on the machine, such as e-banking balances and encrypted web mail, these can be used in combination with the key and mouse logger data to determine access credentials for other Internet resources. Encryption: Collected data is usually encrypted at the time of capture and may be transmitted live or stored for later exfiltration. Likewise, it is common practice for each specific operation to use specific encryption and polymorphic capabilities of the cyber-collection agent in order to ensure that detection in one location will not compromise others. Bypass Encryption: Because the malware agent operates on the target system with all the access and rights of the user account of the target or system administrator, encryption is bypassed. For example, interception of audio using the microphone and audio output devices enables the malware to capture both sides of an encrypted Skype call. Exfiltration: Cyber-collection agents usually exfiltrate the captured data in a discreet manner, often waiting for high web traffic and disguising the transmission as secure web browsing. USB flash drives have been used to exfiltrate information from air gap protected systems. Exfiltration systems often involve the use of reverse proxy systems that anonymize the receiver of the data. Replicate: Agents may replicate themselves onto other media or systems; for example, an agent may infect files on a writable network share or install itself onto USB drives in order to infect computers protected by an air gap or otherwise not on the same network. Manipulate Files and File Maintenance: Malware can be used to erase traces of itself from log files. It can also download and install modules or updates as well as data files. This function may also be used to place "evidence" on the target system, e.g. to insert child pornography onto the computer of a politician or to manipulate votes on an electronic vote counting machine. Combination Rules: Some agents are very complex and are able to combine the above features in order to provide very targeted intelligence collection capabilities. For example, the use of GPS bounding boxes and microphone activity can be used to turn a smart phone into a smart bug that intercepts conversations only within the office of a target. Compromised cellphones. 
Since modern cellphones are increasingly similar to general-purpose computers, they are vulnerable to the same cyber-collection attacks as computer systems, and can leak extremely sensitive conversational and location information to an attacker. Leaking of cellphone GPS location and conversational information to an attacker has been reported in a number of recent cyberstalking cases in which the attacker was able to use the victim's GPS location to call nearby businesses and police authorities and make false allegations against the victim. Depending on the victim's location, this can range from giving restaurant staff information with which to tease the victim to bearing false witness against the victim. For instance, if the victim is parked in a large parking lot, the attackers may call and state that they saw drug or violent activity going on, with a description of the victim and directions to their GPS location. Infiltration There are several common ways to infect or access the target: An Injection Proxy is a system that is placed upstream from the target individual or company, usually at the Internet service provider, that injects malware into the target's system. For example, an innocent download made by the user can be injected with the malware executable on the fly so that the target system is then accessible to the government agents. Spear Phishing: A carefully crafted e-mail is sent to the target in order to entice them to install the malware via a Trojan document or a drive-by attack hosted on a web server compromised or controlled by the malware owner. Surreptitious Entry may be used to infect a system. In other words, the spies carefully break into the target's residence or office and install the malware on the target's system. An Upstream monitor or sniffer is a device that can intercept and view the data transmitted by a target system. Usually this device is placed at the Internet service provider. The Carnivore system developed by the U.S. FBI is a famous example of this type of system. Based on the same logic as a telephone intercept, this type of system is of limited use today due to the widespread use of encryption during data transmission. A wireless infiltration system can be used in proximity of the target when the target is using wireless technology. This is usually a laptop-based system that impersonates a WiFi or 3G base station to capture the target systems and relay requests upstream to the Internet. Once the target systems are on the network, the system then functions as an Injection Proxy or as an Upstream Monitor in order to infiltrate or monitor the target system. A USB Key preloaded with the malware infector may be given to or dropped at the target site. Cyber-collection agents are usually installed by payload delivery software constructed using zero-day attacks and delivered via infected USB drives, e-mail attachments or malicious web sites. State-sponsored cyber-collection efforts have used official operating system certificates instead of relying on security vulnerabilities. In the Flame operation, Microsoft stated that the Microsoft certificate used to impersonate a Windows Update was forged; however, some experts believe that it may have been acquired through HUMINT efforts. Examples of operations Stuxnet Flame Duqu Bundestrojaner Rocra Operation High Roller Cozy Bear: a well-resourced, highly dedicated and organized cyber espionage group that F-Secure believes has been working for the Russian Federation since at least 2008. 
See also Chaos Computer Club Chinese intelligence operations in the United States Computer security Computer surveillance Cyber-security regulation Cyber spying on universities Cyber threat intelligence Cyberwarfare Employee monitoring software GhostNet Industrial espionage Proactive Cyber Defence Stalkerware Surveillance Titan Rain Vulkan files leak References Sources External links Congress to Investigate Google Charges Of Chinese Internet Spying (AHN) Archive of Information Warfare Monitor - Tracking Cyberpower (University of Toronto, Canada/Munk Centre) Cybercrime Cyberwarfare Spyware Types of espionage Military intelligence collection Computer security procedures Hacking (computer security) Information sensitivity Mass intelligence-gathering systems National security Sabotage Security engineering Social engineering (security) Computing terminology
Cyber espionage
Technology,Engineering
2,071
1,393,456
https://en.wikipedia.org/wiki/Landscape%20maintenance
Landscape maintenance (or groundskeeping) is the art and vocation of keeping a landscape healthy, clean, safe and attractive, typically in a garden, yard, park, institutional setting or estate. Using tools, supplies, knowledge, physical exertion and skills, a groundskeeper may plan or carry out annual plantings and harvestings, periodic weeding and fertilizing, other gardening, lawn care, snow removal, driveway and path maintenance, shrub pruning, topiary, lighting, fencing, swimming pool care, runoff drainage, and irrigation, and other jobs for protecting and improving the topsoil, plants, and garden accessories. Groundskeepers may also deal with local animals (including birds, rodents, reptiles, insects, and domestic animals or pets), and create means to attract or repel them, as desired or necessary. A garden may also be designed to include exotic animals, for example in a koi pond. In larger estates, groundskeepers may be responsible for providing and maintaining habitat for wild animals. Landscape maintenance industry According to IBISWorld, which published an article on the US landscape industry in September 2019, the landscaping industry is worth $98.8 billion. From 2014 to 2019, the industry had annual growth of 4.4%, but it is estimated that from 2019 to 2024 growth will slow to only 1.5% per year. The industry is expected to see 1.2% growth in the number of businesses and has low entry barriers for new companies. Due to the continuous and steady growth of this industry, competition among businesses is high. In May 2017, the U.S. Bureau of Labor Statistics (BLS) estimated that 912,360 people held jobs under the title "Landscape and Groundskeeping Workers". These workers have an average annual pay of $29,700, paired with a mean hourly wage of about $14.28. Hourly rates range from $9.59, which equals annual pay of about $19,960, to $20.61, which corresponds to annual pay of about $42,870. The exact description of this job can vary depending on the company that has posted it, but according to the BLS, these workers "Landscape or maintain grounds of property using hand or power tools or equipment. Workers typically perform a variety of tasks, which may include any combination of the following: sod laying, mowing, trimming, planting, watering, fertilizing, digging, raking, sprinkler installation, and installation of mortarless segmental concrete masonry wall units." The BLS also notes that this job title excludes "Farmworkers and Laborers, Crop, Nursery, and Greenhouse (45-2092)." Demand for landscaping and pool installation work increased during the COVID-19 pandemic due to the increased number of remote workers spending time in their homes. See also Landscape architecture List of domesticated animals List of domesticated plants Property manager References Landscape architecture
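As a quick illustrative cross-check (not from the article), the hourly and annual pay figures quoted above are consistent with one another if one assumes the usual BLS convention of a 2,080-hour work year (40 hours a week for 52 weeks); that convention is an assumption here, since the article does not state it.

```python
# Illustrative cross-check of the wage figures quoted above. Assumption: annual pay is the
# hourly wage multiplied by a 2,080-hour work year (40 hours/week * 52 weeks), which is the
# usual BLS convention but is not stated in the article itself.

HOURS_PER_YEAR = 40 * 52   # 2,080 hours

for hourly in (9.59, 14.28, 20.61):
    annual = hourly * HOURS_PER_YEAR
    print(f"${hourly:5.2f}/hour  ->  about ${annual:,.0f}/year")

# Prints roughly $19,947, $29,702 and $42,869, close to the $19,960, $29,700 and $42,870 quoted.
```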
Landscape maintenance
Engineering
620
2,417,230
https://en.wikipedia.org/wiki/Sound%20recording%20and%20reproduction
Sound recording and reproduction is the electrical, mechanical, electronic, or digital inscription and re-creation of sound waves, such as spoken voice, singing, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording. Acoustic analog recording is achieved by a microphone diaphragm that senses changes in atmospheric pressure caused by acoustic sound waves and records them as a mechanical representation of the sound waves on a medium such as a phonograph record (in which a stylus cuts grooves on a record). In magnetic tape recording, the sound waves vibrate the microphone diaphragm and are converted into a varying electric current, which is then converted to a varying magnetic field by an electromagnet, which makes a representation of the sound as magnetized areas on a plastic tape with a magnetic coating on it. Analog sound reproduction is the reverse process, with a larger loudspeaker diaphragm causing changes to atmospheric pressure to form acoustic sound waves. Digital recording and reproduction converts the analog sound signal picked up by the microphone to a digital form by the process of sampling. This lets the audio data be stored and transmitted by a wider variety of media. Digital recording stores audio as a series of binary numbers (zeros and ones) representing samples of the amplitude of the audio signal at equal time intervals, at a sample rate high enough to convey all sounds capable of being heard. A digital audio signal must be reconverted to analog form during playback before it is amplified and connected to a loudspeaker to produce sound. Early history Long before sound was first recorded, music was recorded—first by written music notation, then also by mechanical devices (e.g., wind-up music boxes, in which a mechanism turns a spindle, which plucks metal tines, thus reproducing a melody). Automatic music reproduction traces back as far as the 9th century, when the Banū Mūsā brothers invented the earliest known mechanical musical instrument, in this case, a hydropowered (water-powered) organ that played interchangeable cylinders. According to Charles B. Fowler, this "... cylinder with raised pins on the surface remained the basic device to produce and reproduce music mechanically until the second half of the nineteenth century." Carvings in the Rosslyn Chapel from the 1560s may represent an early attempt to record the Chladni patterns produced by sound-in-stone representations, although this theory has not been conclusively proved. In the 14th century, a mechanical bell-ringer controlled by a rotating cylinder was introduced in Flanders. Similar designs appeared in barrel organs (15th century), musical clocks (1598), barrel pianos (1805), and music boxes (). A music box is an automatic musical instrument that produces sounds by the use of a set of pins placed on a revolving cylinder or disc so as to pluck the tuned teeth (or lamellae) of a steel comb. The fairground organ, developed in 1892, used a system of accordion-folded punched cardboard books. The player piano, first demonstrated in 1876, used a punched paper scroll that could store a long piece of music. The most sophisticated of the piano rolls were hand-played, meaning that they were duplicates from a master roll that had been created on a special piano, which punched holes in the master as a live performer played the song. 
Thus, the roll represented a recording of the actual performance of an individual, not just the more common method of punching the master roll through transcription of the sheet music. This technology to record a live performance onto a piano roll was not developed until 1904. Piano rolls were in continuous mass production from 1896 to 2008. A 1908 U.S. Supreme Court copyright case noted that, in 1902 alone, there were between 70,000 and 75,000 player pianos manufactured, and between 1,000,000 and 1,500,000 piano rolls produced. Phonautograph The first device that could record actual sounds as they passed through the air (but could not play them back; the purpose was only visual study) was the phonautograph, patented in 1857 by Parisian inventor Édouard-Léon Scott de Martinville. The earliest known recordings of the human voice are phonautograph recordings, called phonautograms, made in 1857. They consist of sheets of paper with sound-wave-modulated white lines created by a vibrating stylus that cut through a coating of soot as the paper was passed under it. An 1860 phonautogram of "Au Clair de la Lune", a French folk song, was played back as sound for the first time in 2008 by scanning it and using software to convert the undulating line, which graphically encoded the sound, into a corresponding digital audio file. Phonograph Thomas Edison's work on two other innovations, the telegraph and the telephone, led to the development of the phonograph. Edison was working on a machine in 1877 that would transcribe telegraphic signals onto paper tape, which could then be transferred over the telegraph again and again. The phonograph was made in both cylinder and disc forms. Cylinder On April 30, 1877, French poet, humorous writer and inventor Charles Cros submitted a sealed envelope containing a letter to the Academy of Sciences in Paris fully explaining his proposed method, called the paleophone. Though no trace of a working paleophone was ever found, Cros is remembered by some historians as an early inventor of a sound recording and reproduction machine. The first practical sound recording and reproduction device was the mechanical phonograph cylinder, invented by Thomas Edison in 1877 and patented in 1878. The invention soon spread across the globe, and over the next two decades the commercial recording, distribution, and sale of sound recordings became a growing new international industry, with the most popular titles selling millions of units by the early 1900s. A process for mass-producing duplicate wax cylinders by molding instead of engraving them was put into effect in 1901. The development of mass-production techniques enabled cylinder recordings to become a major new consumer item in industrial countries, and the cylinder was the main consumer format from the late 1880s until around 1910. Disc The next major technical development was the invention of the gramophone record, generally credited to Emile Berliner and patented in 1887, though others had demonstrated similar disk apparatus earlier, most notably Alexander Graham Bell in 1881. Discs were easier to manufacture, transport and store, and they had the additional benefit of being marginally louder than cylinders. Sales of the gramophone record overtook the cylinder ca. 1910, and by the end of World War I the disc had become the dominant commercial recording format. Edison, who was the main producer of cylinders, created the Edison Disc Record in an attempt to regain his market. 
The double-sided (nominally 78 rpm) shellac disc was the standard consumer music format from the early 1910s to the late 1950s. In various permutations, the audio disc format became the primary medium for consumer sound recordings until the end of the 20th century. Although there was no universally accepted speed, and various companies offered discs that played at several different speeds, the major recording companies eventually settled on a de facto industry standard of nominally 78 revolutions per minute. The specified speed was 78.26 rpm in America and 77.92 rpm throughout the rest of the world. The difference in speeds was due to the difference in the cycle frequencies of the AC electricity that powered the stroboscopes used to calibrate recording lathes and turntables. The nominal speed of the disc format gave rise to its common nickname, the seventy-eight (though not until other speeds had become available). Discs were made of shellac or similar brittle plastic-like materials, played with needles made from a variety of materials including mild steel, thorn, and even sapphire. Discs had a distinctly limited playing life that varied depending on how they were manufactured. Earlier, purely acoustic methods of recording had limited sensitivity and frequency range. Mid-frequency range notes could be recorded, but very low and very high frequencies could not. Instruments such as the violin were difficult to transfer to disc. One technique to deal with this involved using a Stroh violin, which uses a conical horn connected to a diaphragm that is in turn connected to the violin bridge. The horn was no longer needed once electrical recording was developed. The long-playing 33 rpm microgroove LP record was developed at Columbia Records and introduced in 1948. The short-playing but convenient 45 rpm microgroove vinyl single was introduced by RCA Victor in 1949. In the US and most developed countries, the two new vinyl formats completely replaced 78 rpm shellac discs by the end of the 1950s, but in some corners of the world, the 78 lingered on far into the 1960s. Vinyl was much more expensive than shellac, one of the several factors that made its use for 78 rpm records very unusual, but with a long-playing disc the added cost was acceptable. The compact 45 format required very little material. Vinyl offered improved performance, both in stamping and in playback. Vinyl records were, over-optimistically, advertised as "unbreakable". They were not, but they were much less fragile than shellac, which had itself once been touted as unbreakable compared to wax cylinders. Electrical Sound recording began as a purely mechanical process. Except for a few crude telephone-based recording devices with no means of amplification, such as the telegraphone, it remained so until the 1920s. Between the invention of the phonograph in 1877 and the first commercial digital recordings in the early 1970s, arguably the most important milestone in the history of sound recording was the introduction of what was then called electrical recording, in which a microphone was used to convert the sound into an electrical signal that was amplified and used to actuate the recording stylus. This innovation eliminated the horn sound resonances characteristic of the acoustical process, produced clearer and more full-bodied recordings by greatly extending the useful range of audio frequencies, and allowed previously unrecordable distant and feeble sounds to be captured. 
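As an illustrative aside on the mains-frequency explanation of the 78.26 rpm and 77.92 rpm speeds quoted earlier in this section: the account usually given is that a two-pole synchronous motor, geared down to the platter, yields exactly those speeds on 60 Hz and 50 Hz mains. The gear ratios used in the sketch below (46:1 and 38.5:1) are the commonly cited values and are an assumption here, since the article itself states only the resulting speeds.

```python
# Illustration of the mains-frequency account of the two 78 rpm standards quoted above.
# Assumption: a two-pole synchronous motor (one revolution per mains cycle) drives the
# platter through the commonly cited gear ratios of 46:1 (60 Hz countries) and 38.5:1
# (50 Hz countries); the article itself states only the resulting speeds.

def platter_speed_rpm(mains_hz: float, gear_ratio: float) -> float:
    """Platter speed for a two-pole synchronous motor geared down by gear_ratio."""
    motor_rpm = mains_hz * 60.0   # one motor revolution per mains cycle
    return motor_rpm / gear_ratio

print(f"60 Hz mains, 46:1 gearing   -> {platter_speed_rpm(60.0, 46.0):.2f} rpm")   # 78.26
print(f"50 Hz mains, 38.5:1 gearing -> {platter_speed_rpm(50.0, 38.5):.2f} rpm")   # 77.92
```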
During this time, several radio-related developments in electronics converged to revolutionize the recording process. These included improved microphones and auxiliary devices such as electronic filters, all dependent on electronic amplification to be of practical use in recording. In 1906, Lee De Forest invented the Audion triode vacuum tube, an electronic valve that could amplify weak electrical signals. By 1915, it was in use in long-distance telephone circuits that made conversations between New York and San Francisco practical. Refined versions of this tube were the basis of all electronic sound systems until the commercial introduction of the first transistor-based audio devices in the mid-1950s. During World War I, engineers in the United States and Great Britain worked on ways to record and reproduce, among other things, the sound of a German U-boat for training purposes. Acoustical recording methods of the time could not reproduce the sounds accurately. The earliest results were not promising. The first electrical recording issued to the public, with little fanfare, was of the November 11, 1920, funeral service for the Unknown Warrior in Westminster Abbey, London. The recording engineers used microphones of the type used in contemporary telephones. Four were discreetly set up in the abbey and wired to recording equipment in a vehicle outside. Although electronic amplification was used, the audio was weak and unclear, though that was all that was possible in those circumstances. For several years, this little-noted disc remained the only issued electrical recording. Several record companies and independent inventors, notably Orlando Marsh, experimented with equipment and techniques for electrical recording in the early 1920s. Marsh's electrically recorded Autograph Records were already being sold to the public in 1924, a year before the first such offerings from the major record companies, but their overall sound quality was too low to demonstrate any obvious advantage over traditional acoustical methods. Marsh's microphone technique was idiosyncratic and his work had little if any impact on the systems being developed by others. Telephone industry giant Western Electric had research laboratories with material and human resources that no record company or independent inventor could match. They had the best microphone, a condenser type developed there in 1916 and greatly improved in 1922, and the best amplifiers and test equipment. They had already patented an electromechanical recorder in 1918, and in the early 1920s, they decided to intensively apply their hardware and expertise to developing two state-of-the-art systems for electronically recording and reproducing sound: one that employed conventional discs and another that recorded optically on motion picture film. Their engineers pioneered the use of mechanical analogs of electrical circuits and developed a superior rubber line recorder for cutting the groove into the wax master in the disc recording system. By 1924, such dramatic progress had been made that Western Electric arranged a demonstration for the two leading record companies, the Victor Talking Machine Company and the Columbia Phonograph Company. Both soon licensed the system and both made their earliest published electrical recordings in February 1925, but neither actually released them until several months later. 
To avoid making their existing catalogs instantly obsolete, the two long-time archrivals agreed privately not to publicize the new process until November 1925, by which time enough electrically recorded repertory would be available to meet the anticipated demand. During the next few years, the lesser record companies licensed or developed other electrical recording systems. By 1929 only the budget label Harmony was still issuing new recordings made by the old acoustical process. Comparison of some surviving Western Electric test recordings with early commercial releases indicates that the record companies artificially reduced the frequency range of recordings so they would not overwhelm non-electronic playback equipment, which reproduced very low frequencies as an unpleasant rattle and rapidly wore out discs with strongly recorded high frequencies. Optical and magnetic In the 1920s, Phonofilm and other early motion picture sound systems employed optical recording technology, in which the audio signal was graphically recorded on photographic film. The amplitude variations comprising the signal were used to modulate a light source which was imaged onto the moving film through a narrow slit, allowing the signal to be photographed as variations in the density or width of a sound track. The projector used a steady light and a photodetector to convert these variations back into an electrical signal, which was amplified and sent to loudspeakers behind the screen. Optical sound became the standard motion picture audio system throughout the world and remains so for theatrical release prints despite attempts in the 1950s to substitute magnetic soundtracks. Currently, all release prints on 35 mm movie film include an analog optical soundtrack, usually stereo with Dolby SR noise reduction. In addition, an optically recorded digital soundtrack in Dolby Digital or Sony SDDS form is likely to be present. An optically recorded timecode is also commonly included to synchronize CDROMs that contain a DTS soundtrack. This period also saw several other historic developments including the introduction of the first practical magnetic sound recording system, the magnetic wire recorder, which was based on the work of Danish inventor Valdemar Poulsen. Magnetic wire recorders were effective, but the sound quality was poor, so between the wars, they were primarily used for voice recording and marketed as business dictating machines. In 1924, a German engineer, Kurt Stille, improved the Telegraphone with an electronic amplifier. The following year, Ludwig Blattner began work that eventually produced the Blattnerphone, which used steel tape instead of wire. The BBC started using Blattnerphones in 1930 to record radio programs. In 1933, radio pioneer Guglielmo Marconi's company purchased the rights to the Blattnerphone, and newly developed Marconi-Stille recorders were installed in the BBC's Maida Vale Studios in March 1935. The tape used in Blattnerphones and Marconi-Stille recorders was the same material used to make razor blades, and not surprisingly the fearsome Marconi-Stille recorders were considered so dangerous that technicians had to operate them from another room for safety. Because of the high recording speeds required, they used enormous reels about one meter in diameter, and the thin tape frequently broke, sending jagged lengths of razor steel flying around the studio. 
Tape Magnetic tape recording uses an amplified electrical audio signal to generate analogous variations of the magnetic field produced by a tape head, which impresses corresponding variations of magnetization on the moving tape. In playback mode, the signal path is reversed, the tape head acting as a miniature electric generator as the varyingly magnetized tape passes over it. The original solid steel ribbon was replaced by a much more practical coated paper tape, but acetate soon replaced paper as the standard tape base. Acetate has fairly low tensile strength and if very thin it will snap easily, so it was in turn eventually superseded by polyester. This technology, the basis for almost all commercial recording from the 1950s to the 1980s, was developed in the 1930s by German audio engineers who also rediscovered the principle of AC biasing (first used in the 1920s for wire recorders), which dramatically improved the frequency response of tape recordings. The K1 Magnetophon was the first practical tape recorder, developed by AEG in Germany in 1935. The technology was further improved just after World War II by American audio engineer John T. Mullin with backing from Bing Crosby Enterprises. Mullin's pioneering recorders were modifications of captured German recorders. In the late 1940s, the Ampex company produced the first tape recorders commercially available in the US. Magnetic tape brought about sweeping changes in both radio and the recording industry. Sound could be recorded, erased and re-recorded on the same tape many times, sounds could be duplicated from tape to tape with only minor loss of quality, and recordings could now be very precisely edited by physically cutting the tape and rejoining it. Within a few years of the introduction of the first commercial tape recorder—the Ampex 200 model, launched in 1948—American musician-inventor Les Paul had invented the first multitrack tape recorder, ushering in another technical revolution in the recording industry. Tape made possible the first sound recordings totally created by electronic means, opening the way for the bold sonic experiments of the Musique Concrète school and avant-garde composers like Karlheinz Stockhausen, which in turn led to the innovative pop music recordings of artists such as the Beatles and the Beach Boys. The ease and accuracy of tape editing, as compared to the cumbersome disc-to-disc editing procedures previously in some limited use, together with tape's consistently high audio quality finally convinced radio networks to routinely prerecord their entertainment programming, most of which had formerly been broadcast live. Also, for the first time, broadcasters, regulators and other interested parties were able to undertake comprehensive audio logging of each day's radio broadcasts. Innovations like multitracking and tape echo allowed radio programs and advertisements to be produced to a high level of complexity and sophistication. The combined impact with innovations such as the endless loop broadcast cartridge led to significant changes in the pacing and production style of radio program content and advertising. Stereo and hi-fi In 1881, it was noted during experiments in transmitting sound from the Paris Opera that it was possible to follow the movement of singers on the stage if earpieces connected to different microphones were held to the two ears. This discovery was commercialized in 1890 with the Théâtrophone system, which operated for over forty years until 1932. 
In 1931, Alan Blumlein, a British electronics engineer working for EMI, designed a way to make the sound of an actor in a film follow his movement across the screen. In December 1931, he submitted a patent application including the idea, and in 1933 this became UK patent number 394,325. Over the next two years, Blumlein developed stereo microphones and a stereo disc-cutting head, and recorded a number of short films with stereo soundtracks. In the 1930s, experiments with magnetic tape enabled the development of the first practical commercial sound systems that could record and reproduce high-fidelity stereophonic sound. The experiments with stereo during the 1930s and 1940s were hampered by problems with synchronization. A major breakthrough in practical stereo sound was made by Bell Laboratories, who in 1937 demonstrated a practical system of two-channel stereo, using dual optical sound tracks on film. Major movie studios quickly developed three-track and four-track sound systems, and the first stereo sound recording for a commercial film was made by Judy Garland for the MGM movie Listen, Darling in 1938. The first commercially released movie with a stereo soundtrack was Walt Disney's Fantasia, released in 1940. The 1941 release of Fantasia used the Fantasound sound system. This system used a separate film for the sound, synchronized with the film carrying the picture. The sound film had four double-width optical soundtracks, three for left, center, and right audio—and a fourth as a control track with three recorded tones that controlled the playback volume of the three audio channels. Because of the complex equipment this system required, Disney exhibited the movie as a roadshow, and only in the United States. Regular releases of the movie used standard mono optical 35 mm stock until 1956, when Disney released the film with a stereo soundtrack that used the Cinemascope four-track magnetic sound system. German audio engineers working on magnetic tape developed stereo recording by 1941. Of 250 stereophonic recordings made during WW2, only three survive: Beethoven's 5th Piano Concerto with Walter Gieseking and Arthur Rother, a Brahms Serenade, and the last movement of Bruckner's 8th Symphony with Von Karajan. Other early German stereophonic tapes are believed to have been destroyed in bombings. Not until Ampex introduced the first commercial two-track tape recorders in the late 1940s did stereo tape recording become commercially feasible. Despite the availability of multitrack tape, stereo did not become the standard system for commercial music recording for some years, and remained a specialist market during the 1950s. EMI (UK) was the first company to release commercial stereophonic tapes. They issued their first Stereosonic tape in 1954. Others quickly followed, under the His Master's Voice (HMV) and Columbia labels. 161 Stereosonic tapes were released, mostly classical music or lyric recordings. RCA imported these tapes into the USA. Although some HMV tapes released in the USA cost up to $15, two-track stereophonic tapes were more successful in America during the second half of the 1950s. The history of stereo recording changed after the late 1957 introduction of the Westrex stereo phonograph disc, which used the groove format developed earlier by Blumlein. Decca Records in England came out with FFRR (Full Frequency Range Recording) in the 1940s, which became internationally accepted as a worldwide standard for higher-quality recording on vinyl records. 
The Ernest Ansermet recording of Igor Stravinsky's Petrushka was key in the development of full frequency range records and alerting the listening public to high fidelity in 1946. Until the mid-1960s, record companies mixed and released most popular music in monophonic sound. From mid-1960s until the early 1970s, major recordings were commonly released in both mono and stereo. Recordings originally released only in mono have been rerendered and released in stereo using a variety of techniques from remixing to pseudostereo. 1950s to 1980s Magnetic tape transformed the recording industry. By the early 1950s, most commercial recordings were mastered on tape instead of recorded directly to disc. Tape facilitated a degree of manipulation in the recording process that was impractical with mixes and multiple generations of directly recorded discs. An early example is Les Paul's 1951 recording of How High the Moon, on which Paul played eight overdubbed guitar tracks. In the 1960s Brian Wilson of The Beach Boys, Frank Zappa, and The Beatles (with producer George Martin) were among the first popular artists to explore the possibilities of multitrack recording techniques and effects on their landmark albums Pet Sounds, Freak Out!, and Sgt. Pepper's Lonely Hearts Club Band. The next important innovation was small cartridge-based tape systems, of which the compact cassette, commercialized by the Philips electronics company in 1964, is the best known. Initially a low-fidelity format for spoken-word voice recording and inadequate for music reproduction, after a series of improvements it entirely replaced the competing consumer tape formats: the larger 8-track tape (used primarily in cars). The compact cassette became a major consumer audio format and advances in electronic and mechanical miniaturization led to the development of the Sony Walkman, a pocket-sized cassette player introduced in 1979. The Walkman was the first personal music player and it gave a major boost to sales of prerecorded cassettes. A key advance in audio fidelity came with the Dolby A noise reduction system, invented by Ray Dolby and introduced into professional recording studios in 1966. It suppressed the background of hiss, which was the only easily audible downside of mastering on tape instead of recording directly to disc. A competing system, dbx, invented by David Blackmer, also found success in professional audio. A simpler variant of Dolby's noise reduction system, known as Dolby B, greatly improved the sound of cassette tape recordings by reducing the especially high level of hiss that resulted from the cassette's miniaturized tape format. The compact cassette format also benefited from improvements to the tape itself as coatings with wider frequency responses and lower inherent noise were developed, often based on cobalt and chrome oxides as the magnetic material instead of the more usual iron oxide. The multitrack audio cartridge had been in wide use in the radio industry, from the late 1950s to the 1980s, but in the 1960s the pre-recorded 8-track tape was launched as a consumer audio format by the Lear Jet aircraft company. Aimed particularly at the automotive market, they were the first practical, affordable car hi-fi systems, and could produce sound quality superior to that of the compact cassette. 
The smaller size and greater durability, augmented by the ability to create home-recorded music mixtapes (8-track recorders were rare), saw the cassette become the dominant consumer format for portable audio devices in the 1970s and 1980s. There had been experiments with multi-channel sound for many years, usually for special musical or cultural events, but the first commercial application of the concept came in the early 1970s with the introduction of Quadraphonic sound. This spin-off development from multitrack recording used four tracks (instead of the two used in stereo) and four speakers to create a 360-degree audio field around the listener. Following the release of the first consumer 4-channel hi-fi systems, a number of popular albums were released in one of the competing four-channel formats; among the best known are Mike Oldfield's Tubular Bells and Pink Floyd's The Dark Side of the Moon. Quadraphonic sound was not a commercial success, partly because of competing and somewhat incompatible four-channel sound systems (e.g., CBS, JVC, Dynaco and others all had systems) and the generally poor quality of the released music, even when played as intended on the correct equipment. It eventually faded out in the late 1970s, although this early venture paved the way for the eventual introduction of domestic surround sound systems in home theatre use, which gained popularity following the introduction of the DVD. Audio components The replacement of the relatively fragile vacuum tube by the smaller, rugged and efficient transistor also accelerated the sale of consumer high-fidelity sound systems from the 1960s onward. In the 1950s, most record players were monophonic and had relatively low sound quality. Few consumers could afford high-quality stereophonic sound systems. In the 1960s, American manufacturers introduced a new generation of modular hi-fi components: separate turntables, pre-amplifiers and amplifiers (or the two combined as integrated amplifiers), tape recorders, and other ancillary equipment like the graphic equalizer, which could be connected together to create a complete home sound system. These developments were rapidly taken up by major Japanese electronics companies, which soon flooded the world market with relatively affordable, high-quality transistorized audio components. By the 1980s, corporations like Sony had become world leaders in the music recording and playback industry. Digital The advent of digital sound recording and later the compact disc (CD) in 1982 brought significant improvements in the quality and durability of recordings. The CD initiated another massive wave of change in the consumer music industry, with vinyl records effectively relegated to a small niche market by the mid-1990s. The record industry fiercely resisted the introduction of digital systems, fearing wholesale piracy on a medium able to produce perfect copies of original released recordings. The most recent and revolutionary developments have been in digital recording, with the development of various uncompressed and compressed digital audio file formats, processors capable and fast enough to convert the digital data to sound in real time, and inexpensive mass storage. This generated new types of portable digital audio players. The minidisc player, using ATRAC compression on small, re-writeable discs, was introduced in the 1990s, but became obsolescent as solid-state non-volatile flash memory dropped in price. 
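As a minimal, self-contained sketch of the digital (PCM) recording principle described in this article's introduction, the following snippet samples a sine tone at equal time intervals, quantizes each sample to a 16-bit binary number, and writes the result as a WAV file using only the Python standard library. The 440 Hz tone, the 44,100 Hz sample rate and the output file name are illustrative choices, not details taken from the article.

```python
# Minimal sketch of the PCM principle described in the article's introduction: the signal's
# amplitude is measured at equal time intervals and each measurement is stored as a binary
# number. The 440 Hz tone, 44,100 Hz sample rate and file name are illustrative choices.

import math
import struct
import wave

SAMPLE_RATE = 44_100     # samples per second (the CD-quality rate)
DURATION_S = 1.0         # one second of audio
FREQ_HZ = 440.0          # concert A

frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION_S)):
    t = n / SAMPLE_RATE
    amplitude = math.sin(2 * math.pi * FREQ_HZ * t)   # idealized "analog" value in [-1, 1]
    sample = int(amplitude * 32767)                   # quantize to a signed 16-bit integer
    frames += struct.pack("<h", sample)               # little-endian 16-bit PCM

with wave.open("tone_440hz.wav", "wb") as wav_file:
    wav_file.setnchannels(1)           # mono
    wav_file.setsampwidth(2)           # 2 bytes = 16 bits per sample
    wav_file.setframerate(SAMPLE_RATE)
    wav_file.writeframes(bytes(frames))
```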
As technologies that increase the amount of data that can be stored on a single medium, such as Super Audio CD, DVD-A, Blu-ray Disc, and HD DVD, became available, longer programs of higher quality fit onto a single disc. Sound files are readily downloaded from the Internet and other sources, and copied onto computers and digital audio players. Digital audio technology is now used in all areas of audio, from casual use of music files of moderate quality to the most demanding professional applications. New applications such as internet radio and podcasting have appeared. Technological developments in recording, editing, and consuming have transformed the record, movie and television industries in recent decades. Audio editing became practicable with the invention of magnetic tape recording, but technologies like MIDI, sound synthesis and digital audio workstations allow greater control and efficiency for composers and artists. Digital audio techniques and mass storage have reduced recording costs such that high-quality recordings can be produced in small studios. Today, the process of making a recording is separated into tracking, mixing and mastering. Multitrack recording makes it possible to capture signals from several microphones, or from different takes, to tape, disc or mass storage, allowing previously unavailable flexibility in the mixing and mastering stages. Software There are many different digital audio recording and processing programs running under several computer operating systems for all purposes, ranging from casual users and serious amateurs working on small projects to professional sound engineers who are recording albums and film scores and doing sound design for video games. Digital dictation software for recording and transcribing speech has different requirements; intelligibility and flexible playback facilities are priorities, while a wide frequency range and high audio quality are not. Cultural effects The development of analog sound recording in the nineteenth century and its widespread use throughout the twentieth century had a huge impact on the development of music. Before analog sound recording was invented, most music was experienced only as a live performance. Throughout the medieval, Renaissance, Baroque, Classical, and through much of the Romantic music era, the main way that songs and instrumental pieces were recorded was through music notation. While notation indicates the pitches of the melody and their rhythm, many aspects of the performance are left undocumented. Indeed, in the Medieval era, Gregorian chant did not indicate the rhythm of the chant. In the Baroque era, instrumental pieces often lacked a tempo indication, and usually none of the ornaments were written down. As a result, each performance of a song or piece would be slightly different. With the development of analog sound recording, though, a performance could be permanently fixed in all of its elements: pitch, rhythm, timbre, ornaments and expression. This meant that many more elements of a performance would be captured and disseminated to other listeners. The development of sound recording also enabled a much larger proportion of people to hear famous orchestras, operas, singers and bands, because even if a person could not afford to hear the live concert, they might be able to hear the recording. The availability of sound recording thus helped to spread musical styles to new regions, countries and continents. The cultural influence went in a number of directions. 
Sound recordings enabled Western music lovers to hear actual recordings of Asian, Middle Eastern and African groups and performers, increasing awareness of non-Western musical styles. At the same time, sound recordings enabled music lovers outside the West to hear the most famous North American and European groups and singers. As digital recording developed, so did a controversy commonly known as the analog versus digital controversy. Audio professionals, audiophiles, consumers, musicians alike contributed to the debate based on their interaction with the media and the preferences for analog or digital processes. Scholarly discourse on the controversy came to focus on concern for the perception of moving image and sound. There are individual and cultural preferences for either method. While approaches and opinions vary, some emphasize sound as paramount, others focus on technology preferences as the deciding factor. Analog fans might embrace limitations as strengths of the medium inherent in the compositional, editing, mixing, and listening phases. Digital advocates boast flexibility in similar processes. This debate fosters a revival of vinyl in the music industry, as well as analog electronics, and analog type plug-ins for recording and mixing software. Legal status In copyright law, a phonogram or sound recording is a work that results from the fixation of sounds in a medium. The notice of copyright in a phonogram uses the sound recording copyright symbol, which the Geneva Phonograms Convention defines as ℗ (the letter P in a full circle). This usually accompanies the copyright notice for the underlying musical composition, which uses the ordinary © symbol. The recording is separate from the song, so copyright for a recording usually belongs to the record company. It is less common for an artist or producer to hold these rights. Copyright for recordings has existed since 1972, while copyright for musical composition, or songs, has existed since 1831. Disputes over sampling and beats are ongoing. United States United States copyright law defines "sound recordings" as "works that result from the fixation of a series of musical, spoken, or other sounds" other than an audiovisual work's soundtrack. Prior to the Sound Recording Amendment (SRA), which took effect in 1972, copyright in sound recordings was handled at the state level. Federal copyright law preempts most state copyright laws but allows state copyright in sound recordings to continue for one full copyright term after the SRA's effective date, which means 2067. United Kingdom Since 1934, copyright law in Great Britain has treated sound recordings (or phonograms) differently from musical works. The Copyright, Designs and Patents Act 1988 defines a sound recording as (a) a recording of sounds, from which the sounds may be reproduced, or (b) a recording of the whole or any part of a literary, dramatic or musical work, from which sounds reproducing the work or part may be produced, regardless of the medium on which the recording is made or the method by which the sounds are reproduced or produced. It thus covers vinyl records, tapes, compact discs, digital audiotapes, and MP3s that embody recordings. See also International Association of Sound and Audiovisual Archives Notes References Further reading Barlow, Sanna Morrison. Mountain Singing: the Story of Gospel Recordings in the Philippines. Hong Kong: Alliance Press, 1952. 352 p. 
Coleman, Mark, Playback: from the Victrola to MP3, 100 years of music, machines, and money, Da Capo Press, 2003. Gronow, Pekka, "The Record Industry: The Growth of a Mass Medium", Popular Music, Vol. 3, Producers and Markets (1983), pp. 53–75, Cambridge University Press. Gronow, Pekka, and Saunio, Ilpo, "An International History of the Recording Industry", [translated from the Finnish by Christopher Moseley], London; New York: Cassell, 1998. Lipman, Samuel, "The House of Music: Art in an Era of Institutions", 1984. See the chapter on "Getting on Record", pp. 62–75, about the early record industry and Fred Gaisberg and Walter Legge and FFRR (Full Frequency Range Recording). Millard, Andre J., "America on record: a history of recorded sound", Cambridge; New York: Cambridge University Press, 1995. Millard, Andre J., "From Edison to the iPod", UAB Reporter, 2005, University of Alabama at Birmingham. Milner, Greg, "Perfecting Sound Forever: An Aural History of Recorded Music", Faber & Faber; 1 edition (June 9, 2009). Cf. p. 14 on H. Stith Bennett and "recording consciousness". Read, Oliver, and Walter L. Welch, From Tin Foil to Stereo: Evolution of the Phonograph, Second ed., Indianapolis, Ind.: H.W. Sams & Co., 1976. N.B.: This is an historical account of the development of sound recording technology. pbk. Read, Oliver, The Recording and Reproduction of Sound, Indianapolis, Ind.: H.W. Sams & Co., 1952. N.B.: This is a pioneering engineering account of sound recording technology. St-Laurent, Gilles, "Notes on the Degradation of Sound Recordings", National Library [of Canada] News, vol. 13, no. 1 (Jan. 1991), p. 1, 3–4. McWilliams, Jerry. The Preservation and Restoration of Sound Recordings. Nashville, Tenn.: American Association for State and Local History, 1979. Weir, Bob, et al. Century of Sound: 100 Years of Recorded Sound, 1877-1977. Executive writer, Bob Weir; project staff writers, Brian Gorman, Jim Simons, Marty Melhuish. [Toronto?]: Produced by Studio 123, cop. 1977. N.B.: Published on the occasion of an exhibition commemorating the centennial of recorded sound, held at the fairground of the annual Canadian National Exhibition, Toronto, Ont., as one of the C.N.E.'s 1977 events. Without ISBN External links History of Sound Recording – Maintained by the British Library Noise in the Groove – A podcast about the history of the phonograph, gramophone, and sound recording/reproduction. Recorded Music at A History of Central Florida Podcast Millard, Andre, "Edison's Tone Tests and the Ideal of Perfect Reproduction", Lost and Found Sound, interview on National Public Radio. Audio engineering Mass media technology Recording and reproduction
Sound recording and reproduction
Technology,Engineering
7,905
3,647,987
https://en.wikipedia.org/wiki/Allogamy
Allogamy or cross-fertilization is the fertilization of an ovum from one individual with the spermatozoa of another. By contrast, autogamy is the term used for self-fertilization. In humans, the fertilization event is an instance of allogamy. Self-fertilization occurs in hermaphroditic organisms where the two gametes fused in fertilization come from the same individual. This is common in plants (see Sexual reproduction in plants) and certain protozoans. In plants, allogamy is used specifically to mean the use of pollen from one plant to fertilize the flower of another plant and is usually synonymous with the term "cross-fertilization" or "cross-pollination" (outcrossing). The latter term can be used more specifically to mean pollen exchange between different plant strains or even different plant species (where the term cross-hybridization can be used) rather than simply between different individuals. Allogamy is achieved through the use of external pollinating factors. The process of allogamy involves two types of external pollinating agents, known as abiotic agents and biotic agents. The abiotic agents are water and wind. The biotic agents are insects and animals, which include bees, butterflies, snails, and birds. Wind pollination is referred to as anemophily, and water pollination is referred to as hydrophily. Insect pollination is referred to as entomophily, bird pollination is referred to as ornithophily, and snail pollination is referred to as malacophily. Autogamy, by contrast, can lead to homozygosity. After reaching homozygosity, the species develops homozygous balance and fails to exhibit inbreeding depression. Mechanisms that promote self-pollination include homogamy, bisexuality, cleistogamy, the position of anthers, and chasmogamy. Allogamy promotes genetic diversity and reduces the risk of inbreeding depression. The persistent prevalence of allogamy throughout different species implies that this strategy provides selective advantages concerning adaptation to changing environments and sustaining fitness. Parasites having complex life cycles can pass through alternate stages of allogamous and autogamous reproduction, and the description of a hitherto unknown allogamous stage can be a significant finding with implications for human disease. Avoidance of inbreeding depression Allogamy ordinarily involves cross-fertilization between unrelated individuals, leading to the masking of deleterious recessive alleles in progeny. By contrast, close inbreeding, including self-fertilization in plants and automictic parthenogenesis in hymenoptera, tends to lead to the harmful expression of deleterious recessive alleles (inbreeding depression). In dioecious plants, the stigma may receive pollen from several different potential donors. As multiple pollen tubes from the different donors grow through the stigma to reach the ovary, the receiving maternal plant may carry out pollen selection favoring pollen from less related donor plants. Thus post-pollination selection may occur in order to promote allogamy and avoid inbreeding depression. Also, seeds may be aborted selectively depending on donor–recipient relatedness. See also Heterosis Outcrossing Self-incompatibility in plants References Reproduction
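The contrast between self-fertilization driving a lineage toward homozygosity and cross-fertilization maintaining diversity can be made concrete with the standard result that strict selfing halves the expected heterozygosity at a locus each generation. The short sketch below is an illustrative calculation only; the starting heterozygosity value is an arbitrary assumption, not data from this article.

```python
# Minimal sketch: expected heterozygosity at a single locus under strict
# self-fertilization (autogamy), which halves heterozygosity each generation,
# versus idealised random outcrossing (allogamy) in a large population,
# where heterozygosity is maintained. The starting value of 0.5 is an assumption.

def heterozygosity_under_selfing(h0: float, generations: int) -> float:
    """Expected fraction of heterozygotes after repeated self-fertilization."""
    return h0 * (0.5 ** generations)

if __name__ == "__main__":
    h0 = 0.5  # assumed initial heterozygote frequency
    for t in (1, 5, 10):
        print(f"After {t} generations of selfing: {heterozygosity_under_selfing(h0, t):.4f}")
    # Under random outcrossing in a large population, the expected value stays near h0.
```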
Allogamy
Biology
696
28,663,133
https://en.wikipedia.org/wiki/DEME
Dredging, Environmental and Marine Engineering NV (DEME) is an international group of specialised companies in the fields of capital and maintenance dredging, land reclamation, port infrastructure development, offshore related services for the oil & gas industry, offshore windfarm installation, and environmental remediation. The group is based in Zwijndrecht, Belgium, and has current operations on five continents. History DEME was established as a holding company of two Belgian dredging contractors: Dredging International and Baggerwerken Decloedt. Two industrial and financial Groups currently control the share capital: Ackermans & van Haaren, a publicly listed Antwerp-based industrial investment Group; and CFE, a publicly listed civil contractor controlled by the French Vinci-Group. Baggerwerken Decloedt has been active in maintenance and capital dredging along the Belgian coast since 1875. Dredging International in itself is a merger of two dredging companies in September 1974: Ackermans & van Haaren, and Société Générale de Dragage (SGD). Ackermans & van Haaren was established in 1852, Société Générale de Dragage was incorporated in 1930. Both companies, Baggerwerken Decloedt and Dredging International have been involved in the construction of the main Belgian ports and the deepening and maintenance of their navigation channels in the North Sea and the river Scheldt. Since their early beginnings they have operated on a worldwide basis, on all continents simultaneously. General DEME's tagline is Creating Land for the Future. The company colours are blue and green, representing the boundary between water (blue) and land (green). The corporate logo features the head of a cutter-suction dredger (CSD), underlined with a blue and green bar. The group is made up of 71 different companies, subsidiaries, branches and representative offices worldwide. In 2009 DEME companies were operating in 42 countries. Fleet In 2009 DEME had a fleet of almost 300 vessels, including over 80 main dredging and hydraulic engineering vessels. At the end of 2009 DEME owned and operated 25 trailing suction hopper dredgers (TSHDs) such as the , with a capacity from 1,635 to 30,000 m3. They also owned the 24,130 m3 Pearl River which was the first megatrailer in the world when she was commissioned in 1994. The group owned and operated 20 cutter suction dredgers (CSDs), with installed power between 441 kW and 28,200 kW. The CSD fleet includes the world's heaviest self-propelled and ocean-going rock breaker, the 28,200 kW d'Artagnan. D'Artagnan was built for DEME's French subsidiary Société de Dragage International (SDI) and launched in 2005. Other ships include backhoe dredgers (e.g. the 2,600 kW Pinocchio); lifting vessels (e.g. the 3,300 tons Rambiz); self-elevating drilling platforms (e.g. the 1,600 tons Goliath); fallpipe vessels (e.g. the 19,000 tons Flintstone); and auxiliary equipment. In 2010 DEME appointed Wilhelmsen Ships Service as their supplier for maintenance products and technical gases globally. Operating companies Operating companies of DEME share a common flag, colours and tagline. However, each keeps their own identity, operational autonomy, and legal structure. 
Some of the operating companies of DEME include: Dredging International Dredging International is one of the major operating companies of DEME and focuses on these core activities: capital and maintenance dredging; deepening and maintaining navigation channels; major port development; reclamation of new industrial or residential areas, artificial islands or beaches; and coastal protection. It was founded in Antwerp, Belgium where its constituent companies began capital and maintenance dredging on the Schelde at the end of the 19th century. Dredging International has subsidiaries, branches and representative offices in Spain, Portugal, the United Kingdom, Russia, Mexico, Uruguay, India, Nigeria, Bahrain, Panama, Venezuela, Singapore, Australia, China, Taiwan, Vietnam, Brazil, Ghana, Luxembourg, Finland, Dubai, Abu Dhabi, Angola, Saudi Arabia and Latvia. Dredging International is an associated company in Middle East Dredging Company QSC (MEDCO). Baggerwerken Decloedt & Zoon Baggerwerken Decloedt & Zoon works on capital and maintenance dredging; deepening and maintaining navigation channels; major port development; reclamation of new industrial or residential areas, artificial islands or beaches; and coastal protection. Baggerwerken Decloedt & Zoon started its business with capital and maintenance dredging in the coastal areas of Belgium. Today Baggerwerken Decloedt & Zoon operate globally with assignments on all continents. Until 2000 the De Cloedt family was a major shareholder in Baggerwerken Decloedt & Zoon. DEME Offshore DEME Offshore was founded in 2019 with the merging of the three former offshore companies of DEME: GeoSea, Tideway, and A2Sea. The company focuses on the renewables and oil and gas sectors. The portfolio of services for the renewables sector includes foundation, turbine and substation transport and installation, cable laying, operations and maintenance activities, engineering, procurement, construction and installation, and Balance of Plant contracts. For oil and gas, services includes landfalls and offshore civil works, rock placement, heavy lifting, subsea construction, umbilical laying, and the installation and decommissioning of offshore platforms. Through its subsidiaries Cathie Associates and G-tec, DEME Offshore also offers geoscience services. DEME Offshore operates these vessels: heavy-lift vessel (Orion), jack-up vessels (Innovation, Apollo, Sea Installer, Sea Challenger, Neptune, Thor, Goliath), rock-installation vessels (Flintstone, Rolling Stone, Seahorse), cable-lay vessel (Living Stone) and drilling vessel (Omalius). DEME Environmental Contractors (DEC) DEME Environmental Contractors (DEC) was incorporated in 1999 as a merger of various DEME companies that were established in the 1980s: NV Soils was a specialised company for soil washing and in-situ soil remediation techniques; NV Silt had expertise in silt recycling and sludge treatment; and NV Bitumar focused on bituminous materials for hydraulic engineering and fibrous stone asphalt for coastal engineering. DEC is specialised in groundwater and soil remediation; sediment treatment; recycling and landfill techniques; environmental dredging; and the redevelopment of brownfields. DEC owns and operates seven permitted soil and sediment recycling centres in Belgium, including in the ports of Antwerp, Zeebrugge, and Ghent. DEC is part of DEME-controlled Ecoterres Holding, where all environmental activities of DEME are brought together. 
Other DEME-environmental companies under Ecoterres Holding include de Vries & van de Wiel (Netherlands); Ecoterres (Wallonia, Belgium); and Extract-Ecoterres (France). Two new DEC-subsidiaries were established in October 2010: Purazur provides water treatment and industrial wastewater treatment services and Terrenata provides purchasing, remediation and redevelopment of brownfield land. Terrenata is business partnering with BPI and Extensa, the project development companies of the two DEME shareholders. Joint ventures DEME is part of several joint ventures with other companies such as Middle East Dredging Company (MEDCO), a partnership of DEME with the Qatari United Development Company (UDC) and the Qatari government. MEDCO focuses on dredging and land reclamation projects in the Gulf. In 2004, International Seaport Dredging (ISD) was incorporated in India as a joint venture with Larsen & Toubro. ISD focuses on port development and land reclamation in India. Scaldis Salvage & Marine Contractors is another joint venture partly owned by DEME. Scaldis is involved with wreck removal and heavy lifting. DEME has a stake of 55 per cent in Scaldis. DEME is also invested in C-Power, the constructors of Thorntonbank Wind Farm off the Belgian coast. DEME Blue Energy (DBE) DEME established DEME Blue Energy (DBE). DBE focuses on wave energy and tidal energy, including the development of prototype equipment that could be used to generate electricity. In 2010, a consortium of industrial partners named Flanders Electricity from the Sea (FlanSea), submitted a research project for developing a wave energy converter (WEC) which will undergo an instrumented test in the summer of 2012. DBE is a member of the FlanSea consortium, together with Ghent University, the port of Oostende (Ostend), and others. If successful, the new technology could be installed in between the offshore wind turbines of the Thorntonbank Wind Farm, linking two different sources of renewable energy. In September 2010, the Flemish Agency for Innovation by Science and Technology (IWT) granted a €2.4 million subsidy for the FlanSea project. The FlanSea wave energy converter will be a point absorber, modelled after the B1-device off Southeast Norway which was developed under the European funded SEEWEC project, co-ordinated by Prof. Julien De Rouck of the Department of Coastal Engineering at Ghent University, Belgium. Besides the FlanSea project, DBE is involved in other initiatives. DBE is a founding member of Friends of the Supergrid, established in London on March 8, 2010, with the objective to develop a pan-European offshore super grid for renewable energy. DBE is also a partner and shareholder in Renewable Energy Base Oostende (REBO), founded on October 28, 2010, for servicing offshore wind farms in Northwest Europe. Global Sea Mineral Resources (GSR) DEME has founded a deep-sea exploration and exploitation subsidiary company for underwater mining of polymetallic nodules: Global Sea Mineral Resources (GSR). GSR works on the recovery of manganese nodules rich in cobalt, nickel and copper from the deep seabed. Exploitation of polymetallic nodules has been studied since the 1960s, but was not considered economically profitable until the early 2000s. Now that lower grade nickel ores must be exploited on land, underwater mining activities are a viable business, to be balanced with concern over the environmental destruction of the deep ocean. 
In 2013, GSR and the International Seabed Authority (ISA) signed a 15-year contract for the prospection and exploration of polymetallic nodules. GSR acquired exploration rights over 75,000 square kilometers of the seabed in the eastern part of the Clarion Clipperton fracture zone (CCZ) of the Central Pacific Ocean. GSR and research institutions collect baseline data to assess the environmental impact of deep sea mining. Projects Over the decades, DEME activities have significantly diversified. New businesses were developed such as soil remediation; silt recycling; offshore services for the oil and gas industry; fluvial & marine aggregates; installation of near- and farshore wind farms; marine salvage, wreck removal and heavy lifting; 'tidal' blue energy; and financial engineering. In 2009, capital and maintenance dredging represented 67 per cent of consolidated turnover. Assignments in European Union countries stood for 35 per cent of consolidated turnover, with 18 per cent in Africa and 13 per cent in Asia. Historic realizations For more than a century, DEME companies have been involved in capital and maintenance dredging in the maritime approaches to the Belgian coastal ports as well as the fairway between Vlissingen and Antwerp. Between 1894 and 1911, Ackermans & van Haaren dredged some 25 million m3 when deepening the Belgian and Dutch stretches of the Western Scheldt, a high volume given the technology available at the time. After the Second World War, DEME companies extended the port of Antwerp, both on the right bank of the Schelde during the major ten-year infrastructure programme (1956–1967) and on the left bank, for an entirely new port. In the Port of Antwerp DEME built the 500 × 68 m Berendrecht Lock, the largest lock in the world at the time, which was completed in 1989. In the 1970s and 80s, several DEME companies took the lead in building the outer port of Zeebrugge. DEME companies have worked abroad too. Starting in 1903, Ackermans & van Haaren almost continuously executed infrastructure projects in Latin America. For ten years, the company worked on the port of Rosario (Santa Fe) in Argentina. In 1912, total volume handled at Rosario was estimated at 9.5 million m3 of dredging and 5.7 million m3 of reclamation. Other dredging works in Argentina in those years were executed at La Plata, Bahía Blanca, Puerto Belgrano, San Nicolas, Ensenada, Sorento, Quequén, the Paraná Delta, and the Matschwitz canal. From 1910 until 1913, Ackermans & van Haaren built a 1150 m long tunnel in Buenos Aires, supplying water from the Rio de la Plata. Other Latin American countries where Ackermans & van Haaren was active before the First World War include Uruguay, where the company built the port of Montevideo and carried out gravel dredging near Colón, and Brazil, where 8.5 million m3 were dredged in the port of Rio Grande do Sul between 1908 and 1916. In the Russian city of Saint Petersburg, Ackermans & van Haaren built the military harbour, known as Emperor Peter the Great, between 1913 and 1917. In 1916, the company started construction of docks for the Russian navy at Sveaborg near Helsingfors (Helsinki), the capital of Finland. In the Interbellum, DEME companies built the new Baltic port of Gdynia for which a total of 36 million m3 was dredged in what Richardson calls "quite heroic circumstances". 
In the mid-1930s, DEME companies worked in the Persian port of Now-Chahr and in Phnom Penh, the capital of the then French protectorate of Cambodia, where Ackermans & van Haaren's flagship Antwerpen III was assigned for dredging the Mekong river. Richardson claims "the maritime history of France may be written by way of the involvement of Ackermans & van Haaren in building and dredging its Atlantic and Mediterranean ports." DEME companies have been involved in all successive phases of the extension of the Port of Le Havre since 1904 – the latest phase being Le Havre Port 2000. Another DEME operating company, Baggerwerken Decloedt & Zoon, has been dredging the maritime access channels to Belgian sea ports for over a century. Among the company's early assignments in Asia, three projects stand out: construction of the port of Bluff on New Zealand's South Island between October 1956 and October 1960 (in a joint-venture with another constituent company of DEME, Société Générale de Dragage/Algemene Baggermaatschappij); the very first reclamation in Malaysia's Port Klang; and the 1969–1970 extension of the first runway at Kingsford-Smith Airport in Sydney. In 1994, DEME also built the parallel runway there, which extends on reclaimed land in Botany Bay. Orders in the 2000s Driven by the constant need for new infrastructure, population growth, climate change, expanding maritime trade, further containerisation, and a dramatic increase of scale (both in navigation and port facilities), the need for capital and maintenance dredging, land reclamation and port construction has generated ocean engineering projects on an unsurpassed scale. DEME continued maintenance dredging throughout the 2000s in the Schelde access channel to Antwerp; the North Sea access lanes to the Belgian sea ports; the Elbe river between Cuxhaven and Hamburg in Germany; the Orinoco river in Venezuela; and the mouths of the Niger Delta in Nigeria. In France, DEME companies completed the Port 2000 extension project in the Port of Le Havre. The €218 million contract involved a total dredged volume of over 45 million m3; construction of 10 km of breakwaters; and 78 ha of land reclamation. Marine works were executed in tides of up to 8 m and strong currents of up to 5 knots. On the Mediterranean coast of France, DEME finished the Fos2XL extension at Fos-sur-Mer. The €400 million contract was awarded by the Port of Marseille Authority (PMA) to a consortium of DEME-companies which carried out the dredging and marine works. In Vuosaari, Finland, DEME dredged hard rock for the construction of a new container terminal. In Sepetiba and Itaguaí, both in Brazil, DEME's TSHD Breydel dredged the access channel and a basin for port extensions. In Dhamra in the state of Orissa on the eastern coast of India, DEME deepened a 19 km long access channel and reclaimed 130 ha for a new port, assigning a water injection dredger. The €100 million contract in Dhamra was executed by International Seaport Dredging (ISD), in which DEME is partnered with the Indian company Larsen & Toubro. Since 2005, DEME has been active in four successive construction phases of the Russian port Ust-Luga, about 120 km west of Saint Petersburg. Ust-Luga will be the final point of the Second Baltic Pipeline. On a visit to Ust-Luga in January 2006, Russian president Vladimir Putin declared that the new port was "extremely important for us. It is one of the largest infrastructure projects of the decade."
In the Gulf state of Qatar, DEME delivered the artificial island Pearl of the Gulf before the Asian Games in Doha under the leadership of Hedwig Vanlishout, Roland Durie, and Jasper Verstreepen. The island, shaped like a seahorse, called for the excavation of approximately 18 million m3 of material, reclamation of an area of approximately 4.2 million m2, around 180,000 m3 of concrete quay walls, and approximately 45 linear kilometers of rock revetment and sandy beaches. The residential and touristic development project takes into account projected sea level rise over the next one hundred years. Together with its partners United Development Company (UDC) and the Qatari government, DEME created a 22 km2 platform for the new Doha airport, which required 62 million m3 of sand and rock to be removed. On 4 April 2008, the Panama Canal Authority (ACP) awarded the contract to dredge the Pacific sea entrance of the Panama Canal to DEME operating company Dredging International. The US$177.5 million project widened the Canal's 14 km approach, access, and navigation channels to a minimum of 218m and deepened them to a maximum level of -15.5m. DEME removed a total of 9.07 million m3. Dredging operations took place close to port activity at Balboa and Rodman. A follow-on contract involved dredging and deepening at the Rodman quay of Panama International Terminal. For ACP's Fresh Water Dredging and Excavation Project for the Canal Expansion, the Panama Canal Authority further awarded a US$40 million contract to DEME for widening and deepening the existing channel by dredging 4.6 million m3 in the northern reaches of Gatun Lake. Offshore assignments by Tideway Offshore Contractors included trench dredging, construction of landfalls, and protection and stabilisation of the Enagás submarine Balearic gas pipeline (between the Spanish city of Denia and the Balearic Islands of Ibiza and Mallorca), where a depth record was achieved at -987m. Tideway had established an earlier depth record in 2000, at a depth of -780m at the Malampaya development project in the Philippines. With proprietary fall pipe vessels, Tideway was continuously involved in rock placement and protection works for the oil industry in North Sea oil projects. Media reported other assignments, such as the Encana Deep Panuke project in Nova Scotia, Canada, and the P9 project for Woodside's Pluto LNG development in Australia. At the end of 2007, Tideway installed the 580 km HVDC submarine power cable known as NorNed, which links the electricity grids of Norway and the Netherlands. DEME companies worked on construction of the first phase of C-Power's Thorntonbank Wind Farm, including offshore soil investigation, transport and placement of the gravity-based structures, erosion protection, cable-laying, and directional drilling. DEME-controlled Scaldis Salvage & Marine Contractors was involved in the successful wreck removal of the MV Tricolor car carrier, lost at sea off Dunkirk. In the field of environmental remediation, DEC (in its home country Belgium) executed the remediation of acid tar basins for Total in Ertvelde; the remediation of the former Carcoke Coking Works site in Zeebrugge; and the remediation and redevelopment of the 42 ha brownfield 't Eilandje in Zwijnaarde. Abroad, DEC was involved in remediation works at the cyanide-contaminated former Gas Works site in Dublin Dockland, Ireland; at the Avenue Coking Works site near Chesterfield, UK; and the decontamination of the London Olympics 2012 site in Stratford.
In the 2000s, DEC was active in Sweden, cleaning the mercury contamination in the Svartsjö lakes near Hultsfred; removing mercury and dioxine from a site in Bengtsfors; and further in Favernik, Söderhamn and Gävle. Orders in 2010 On 16 March 2010 DEME started major dredging works for DP World's London Gateway, the UK's new deep sea port and logistics park about 25 miles east of Central London on the river Thames. DEME is dredging a 300m wide channel to a depth between 14.6m and 16.5m, while the estuary is currently around 11m deep. Over a distance of 100 km to the sea some 29 million m3 will be dredged; in addition DEME is reclaiming 18 million m3 in the Thames. The £400 million contract must be finished by mid-2014. For the civil works, DEME partnered with British contractor Laing O'Rourke. On 31 March 2010, DEME completed the widening and deepening works in the port of Durban, South Africa that began in mid-2007. The existing northern breakwater was demolished and rebuilt; the existing southern breakwater was strengthened; and the port entrance channel was widened from 120m to 220m and deepened from 12.8m to 19m in the outer channel and 17m in the inner port. A total of more than 10 million m3 of material was dredged, part of which was used for the foundation and reinforcement of the breakwaters. Apart from the mega-dipper Pinocchio and two split barges, DEME assigned the trailing suction hopper dredgers Marieke, Krankeloon, Orwell and Pallieter to the €220 million project. DEME was the managing partner in a consortium that included South African civil contractor Group Five. In the Middle East, DEME is executing the dredging and reclamation package for the Ruwais Refinery Expansion project of Takreer in Abu Dhabi. The work was awarded to Dredging International in May 2009 and started on 7 June 2009. A total of 42 million m3 is being dredged, pumped and reclaimed by the heavy-duty cutter suction dredger Al Mahar and trailing suction hopper dredgers. In Latin America DEME is active in a major remediation project in the Port of Santos, Brazil. The €75 million turnkey project calls for the remediation of the illegal dump site Lixao da Alemoa at the edge of a bay. A total of 680,000 m3 of domestic and industrial waste is being processed on an area covering 45 ha, where Brasil Terminal Portuario is building a container terminal. Most of this waste will be recycled and reused; a fraction, between 10,000 and 50,000 m3, will have to be stored. DEME executes this contract through its subsidiary DEC, which also arranged project financing. The Santos contract was said to be "a breakthrough for DEME in Latin America." During a mission to Brazil, Belgian crown prince Philippe, Duke of Brabant visited the DEC turnkey project in Santos. Together with its partner Larsen & Toubro in International Seaport Dredging (ISD), DEME finished a 10 km access channel, turning basin, and berthing foreground in the Indian port of Kakinada, state of Andhra Pradesh, in 2010. Dredging a total of 6 million m3 deepened the sea port from -11.5m to between -13.5 and -14.5m. A second contract for dredging another 5 million m3 was to be completed by March 2011. Since 1999 DEME has already executed several capital dredging campaigns at Kakinada, both directly and through ISD. On 18 August 2010, DEME was awarded a €105 million contract for dredging and reclamation works to prepare the Imeretinskaya lowland area at Sochi, Russia, where the Olympic Village for the 2014 Olympic Winter Games will be built.
In an execution period of at most 14 months, a total of 8 million m3 of sand must be dredged and transported over a distance of 120 km to reclaim a 412 ha swamp area to 2.5m above the Black Sea zero level. DEME has assigned its TSHDs Brabo and Nile River to the Sochi project. In Hayle, off the coast of Cornwall, UK, DEME subsidiary Tideway was involved in precision rock placement for protection of the wave hub and a 16 km power cable, linking the tidal energy park with land. Tideway's fall pipe vessel Rollingstone placed some 100,000 tons at a depth between 25m and 35m by way of digital terrain modelling and a remotely operated vehicle (ROV). Investment programme Between 2002 and 2007, ten major dredgers were added to the DEME fleet. This included the world's largest heavy-duty seagoing cutter suction dredger, the 28,200 kW d'Artagnan. With the construction of Pearl River in 1994, DEME became notable in the industry for operating "the first 'jumbo' trailer suction dredger." On 1 August 2002, the enlargement of Pearl River was started in Singapore. In the process, the hopper capacity of Pearl River increased from 17,000 m3 to 24,146 m3. A deep dredging installation was also fitted to the Pearl River, allowing the jumbo trailer to dredge to depths of 120 m. A further fleet investment is scheduled to take place in the late 2010s, beginning with the dredgers Minerva and Scheldt River, which were delivered in 2017, followed by Spartacus, the largest and most powerful cutter suction dredger in the world, in 2019, and two trailing suction hopper dredgers and two split barges in 2020. Innovation According to Mort J. Richardson in The Dynamics of Dredging, Ackermans & van Haaren has long held a leading position in technological innovation within the field. In 1895 Ackermans & van Haaren helped create hydraulic dredging techniques and designed a suction dredger capable of unloading by itself. In the same year, the first such vessel was built, Schelde II. Société Générale de Dragage/Algemene Baggermaatschappij (SGD) applied the first submersible pump in dredging, fixed on the drag head of TSHD Maas. At the time of her commissioning in September 1994, DEME's 17,000 m3 flagship Pearl River became, according to Richardson, "the very first suction hopper dredge of a completely new generation – featuring twice as much capacity as its biggest successor." In 2005 DEME's French subsidiary Société de Dragage International (SDI) launched the world's largest heavy-duty and ocean-going cutter suction dredger d'Artagnan (28,200 kW installed power). DEME's proprietary DRACULA technique uses high-pressure waterjets to excavate seabed material. DRACULA is an acronym for "Dredging, And Cutting Using Liquid Action". Various improvements of navigation and dredging software have led DEME to develop and practise the one-man-operated bridge. A purpose-built drill barge with ten drilling towers, Yuan Dong 007, was designed and constructed for the 2009 expansion project of the Panama Canal. The performance and efficiency of the Hong Kong-built drill barge were a decisive factor in winning the 2008 contract for improving the Pacific side of the Canal. Shareholders DEME's share capital is held 12.11% by VINCI, 60.82% by Ackermans & van Haaren, and 27.07% as public shares. References External links Companies based in Antwerp Province Dredging companies Underwater mining
DEME
Engineering
6,023
699,722
https://en.wikipedia.org/wiki/R%C3%B8mer%20scale
The Rømer scale (notated as °Rø), also known as Romer or Roemer, is a temperature scale named after the Danish astronomer Ole Christensen Rømer, who developed it for his own use in around 1702. It is based on the freezing point of pure water being 7.5 degrees and the boiling point of water being 60 degrees. Degree measurements There is no solid evidence as to why Rømer assigned the value of 7.5 degrees to water's freezing point. One proposed explanation is that Rømer initially intended the 0-degree point of his scale to correspond to the eutectic temperature of ammonium chloride brine, which was the coldest easily-reproducible temperature at the time and had already been used as the lower fiducial point for multiple temperature scales. The boiling point of water was defined as 60 degrees. Rømer then saw that the freezing point of pure water was roughly one eighth of the way (about 7.5 degrees) between these two points, so he redefined the lower fixed point to be the freezing point of water at precisely 7.5 degrees. This did not greatly change the scale but made it easier to calibrate by defining it by reference to pure water. Thus the unit of this scale, a Rømer degree, is 100/52.5 = 40/21 of a kelvin or Celsius degree. The symbol is sometimes given as °R, but since that is also sometimes used for the Réaumur and Rankine scales, the other symbol °Rø is to be preferred. Historical significance Rømer's scale would have been lost to history had Rømer's notebook, Adversaria, not been found and published in 1910, and had letters of correspondence between Daniel Gabriel Fahrenheit and Herman Boerhaave not been uncovered in 1936. These documents demonstrate the important influence Rømer's work had on Fahrenheit, a young maker and seller of barometers and thermometers. Fahrenheit visited Rømer in Copenhagen in 1708 and, while there, became familiar with Rømer's work with thermometers. Rømer also told Fahrenheit that demand for accurate thermometers was high. The visit ignited a keen interest in Fahrenheit to try to improve thermometers. By 1713, Fahrenheit was creating his own thermometers with a scale heavily borrowed from Rømer that ranged from 0 to 24 degrees but with each degree divided into quarters. At some point, the quarter degrees became whole degrees and Fahrenheit made other adjustments to Rømer's scale, modifying the freezing point from 7.5 degrees to 8, which, when multiplied by four, corresponds to 32 degrees on Fahrenheit's scale. The 22.5-degree point would have become 90 degrees; however, Fahrenheit rounded this up to 24 degrees (96 when multiplied by 4) in order to make calculations easier. After Fahrenheit perfected the crafting of his accurate thermometers, their use became widespread and the Fahrenheit scale is still used today in the United States and a handful of other countries. See also Outline of metrology and measurement Comparison of temperature scales Notes and references Obsolete units of measurement Scales of temperature Danish inventions 1700s in science 1700s establishments in Denmark
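Because the scale is fixed by water freezing at 7.5 °Rø and boiling at 60 °Rø, conversion to and from Celsius is a simple linear mapping, with one Rømer degree equal to 40/21 Celsius degrees. The sketch below only illustrates that arithmetic; the function names are mine, not part of the article.

```python
# Minimal sketch: converting between the Rømer and Celsius scales, using the
# fixed points given in the article (water freezes at 7.5 °Rø, boils at 60 °Rø).
# One Rømer degree therefore equals 100/52.5 = 40/21 Celsius degrees.

def celsius_to_romer(c: float) -> float:
    return c * 21 / 40 + 7.5

def romer_to_celsius(ro: float) -> float:
    return (ro - 7.5) * 40 / 21

if __name__ == "__main__":
    print(celsius_to_romer(0))             # 7.5  (freezing point of water)
    print(celsius_to_romer(100))           # 60.0 (boiling point of water)
    print(round(romer_to_celsius(30), 2))  # 42.86
```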
Rømer scale
Physics,Mathematics
689
4,117,074
https://en.wikipedia.org/wiki/Acoustic%20cleaning
Acoustic cleaning is a maintenance method used in material-handling and storage systems that handle bulk granular or particulate materials, such as grain elevators, to remove the buildup of material on surfaces. An acoustic cleaning apparatus, usually built into the material-handling equipment, works by generating powerful sound waves which shake particulates loose from surfaces, reducing the need for manual cleaning. History and design An acoustic cleaner consists of a sound source similar to an air horn found on trucks and trains, attached to the material-handling equipment, which directs a loud sound into the interior. It is powered by compressed air rather than electricity so there is no danger of sparking, which could set off an explosion. It consists of two parts: The acoustic driver. In the driver, compressed air escaping past a diaphragm causes it to vibrate, generating the sound. It is usually made from solid machined stainless steel. The diaphragm, the only moving part, is usually manufactured from special aerospace grade titanium to ensure performance and longevity. The bell, a flaring horn, usually made from spun 316 grade stainless steel. The bell serves as a sound resonator, and its flaring shape couples the sound efficiently to the air, increasing the volume of sound radiated. The overall length of acoustic cleaner horns range from 430 mm to over 3 metres long. The device can operate from a pressure range of 4.8 to 6.2 bars or 70 to 90 psi. The resultant sound pressure level will be around 200 dB. There are generally 4 ways to control the operation of an acoustic cleaner: The most common is by a simple timer Supervisory control and data acquisition (SCADA) Programmable logic controller (PLC) Manually by ball valve An acoustic cleaner will typically sound for 10 seconds and then wait for a further 500 seconds before sounding again. This ratio for on/off is approximately proportional to the working life of the diaphragm. Provided the operating environment is between −40 °C and 100 °C, a diaphragm should last between 3 and 5 years. The wave generator and the bell have a much longer life span and will often outlast the environment in which they operate. The older bells which were made from cast iron were susceptible to rusting in certain environments. The new bells made from 316 spun steel have no problem with rust and are ideal for sterile environments such as found in the food industry or in pharmaceutical plants. Acoustic cleaning began in the early 1970s with experiments using ship horns or air raid sirens. The first acoustic cleaners were made from cast iron. From 1990 onwards the technology became commercially viable and began to be used in dry processing, storage, transport, power generation and manufacturing industries. The latest technology uses 316 spun stainless steel to ensure optimum performance. Operation and performance The majority of acoustic cleaners operate in the audio frequency range from 60 hertz up to 420 Hz. However a few operate in the infrasonic range, below 40 Hz, which is mostly below the human hearing range, to satisfy strict noise control requirements. There are three scientific fields which converge in the understanding of acoustic cleaning technology. Sound propagation. This relates to an understanding of the nature of the sound waves, how they vary and how they will interact with the environment. Mathematics of the environment. Materials science, surface friction, distance and areas familiar to a mechanical engineer. Chemical engineering. 
The chemical properties of the powder or substance to be debonded. Especially the auto adhesive properties of the powder. An acoustic cleaner will create a series of very rapid and powerful sound induced pressure fluctuations which are then transmitted into the solid particles of ash, dust, granules or powder. This causes them to move at differing speeds and debond from adjoining particles and the surface that they are adhering to. Once they have been separated then the material will fall off due to gravity or it will be carried away by the process gas or air stream. The key features which determine whether or not an acoustic cleaner will be effective for any given problem are the particle size range, the moisture content and the density of the particles as well as how these characteristics will change with temperature and time. Typically particles between 20 micrometres and 5 mm with moisture content below 8.5% are ideal. Upper temperature limits are dependent upon the melting point of the particles and acoustic cleaners have been employed at temperatures above 1000 °C to remove ash build-up in boiler plants. It is important to match the operating frequencies to the requirements. Higher frequencies can be directed more accurately whilst lower frequencies will carry further, and are generally used for more demanding requirements. A typical selection of frequencies available would be as follows: 420 Hz for a small acoustic cleaner which might be used to clear bridging at the base of a silo. 350 Hz will be more powerful and this frequency can be used to unblock material build-up in ID (induced draft) fans, filters, cyclones, mixers, dryers and coolers. 230 Hz. At this frequency, the power involved is sufficient to use in most electricity generation applications. 75 Hz and 60 Hz. These are generally the most powerful acoustic cleaners and are often used in large vessels and silos. Health and safety The introduction of acoustic cleaners has been a significant improvement in many areas of health and safety. For instance in silo cleaning - the previous solutions tended to be intrusive or destructive. Air cannons, soot blowers, external vibrators, hammering or costly man entry are all superseded by noninvasive sonic horns. An acoustic cleaner requires no down time and will operate during normal usage of the site. Taking the example of silo cleaning a little further, there are two typical problems. Bridging This is when the silo blocks at the outlet. Previously the problem was addressed by manual cleaning from underneath the silo which in its turn introduced significant risk from falling material when the blockage was cleared. An acoustic cleaner is able to operate from the top of a silo through in situ material to clear the blockage at the base. Rat holing Compaction on the side of a silo. This not only reduces the operating volume in a silo but it also compromises quality control by disrupting the first in first out cycle. Older material compacted on the side of a silo can also start to degrade and produce dangerous gases. An acoustic cleaner will produce sound waves which will make the compacted material resonate at a different rate to the surrounding environment resulting in debonding and clearance. Advantages of acoustic cleaners Repetitive use during operations means that there are fewer unscheduled shut downs. Improved material flow by the elimination of hang-ups, blocking and bridging. Minimisation of cross contamination by ensuring complete emptying of the environment. 
Improved cleaning and reduction of health and safety risks. Increased energy efficiency. Reducing the buildup on heat exchange surfaces results in lower energy usage. Extended plant life. Aggressive cleaning regimes are avoided. Ease of operation. It is easy to automate the horns either at regular intervals or to tie the sounding in to changes in their environment such as pressure or flow rates. Importantly, they prevent the material buildup problem from occurring in the first place. These advantages mean that the financial payback is often very quick. It is also possible to compare acoustic cleaners directly to alternative solutions. Air cannons. These are well established but are expensive with limited coverage, thus requiring multi-unit purchase. They are also noise-intrusive and have a high compressed air consumption. Vibrators. These are easy to fit to an empty silo but can cause structural damage as well as contributing to powder compaction. Low-friction linings. These are very quiet but are expensive to install. Also they are prone to erosion and can then contaminate the environment or product. Inflatable pads and liners. Again these are easy to install in an empty silo. They help with side-wall buildup but have no impact on bridging. They are also hard to maintain and can cause compaction. Fluidisation through a one-way membrane. This can help already compacted material. However they are expensive and difficult to install and maintain. They can also contribute to mechanical interlocking and bridging. Specific applications Boilers. Cleaning of the heat transfer surfaces. Electrostatic precipitators. Acoustic cleaners are being used for cleaning hoppers, turning vanes, distribution plates, collecting plates and electrode wires. Super heaters, economisers and air heaters. Duct work. Filters. Acoustic cleaners are used on reverse air, pulse jet and shaker units. They are effective in reducing pressure drop across the collection surface, which will increase bag life and prevent hopper pluggage. Generally they can totally replace both the reverse air fans and shaker units and significantly reduce the compressed air requirement on pulse jet filters. ID fans. Acoustic cleaning helps to provide a uniform cleaning pattern even for inaccessible parts of the fan. This maintains the balance of the fan. Kiln inlet. Acoustic cleaners help to prevent particulate buildup at the kiln inlet and this will minimise nose ring formation. Mechanical pre-collectors. Acoustic cleaners help prevent buildup around the impellers and between the tubes. Mills. Acoustic cleaners help maintain material flow and also prevent blockages in the pre-grind silos. They also help prevent material buildup in the downstream separators and fans. Planetary coolers. Acoustic cleaners help prevent bridging and ensure complete evacuation. Precipitator. Acoustic cleaners help clean the turning vanes, distribution plates, collecting plates and electrode wires. They can either assist or replace the mechanical rapping systems. They also prevent particulate buildup in the under hoppers which would otherwise result in opacity spiking. Pre-heaters. Used in towers, gas risers, cyclones and fans. Ship cargo holds. Used both to clean and de-aerate current loads. Silos and hoppers. To prevent bridging and rat holing. Static cyclones. Acoustic cleaners will work both within the cyclone and with the associated duct work. See also Ultrasonic cleaner - Cleaning using higher frequencies than those found in acoustic cleaners.
Sonic soot blowers Ultrasonic homogenizer References External links Acoustics Audio engineering Cleaning tools Cleaning methods
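The simplest of the four control options described earlier in this article is a fixed timer that repeats the roughly 10-seconds-on, 500-seconds-off cycle. The sketch below only illustrates that duty cycle; the valve functions are hypothetical placeholders, since a real installation would drive a PLC output or pneumatic valve rather than print statements.

```python
import time

# Minimal sketch: the simple timer control described in the article, sounding an
# acoustic cleaner for 10 seconds and then waiting 500 seconds before sounding
# again. open_air_valve/close_air_valve are placeholders for whatever I/O layer
# (e.g. a PLC or relay driver) actually actuates the compressed-air supply.

SOUND_SECONDS = 10
PAUSE_SECONDS = 500

def open_air_valve():
    print("valve open: horn sounding")   # placeholder action

def close_air_valve():
    print("valve closed: horn silent")   # placeholder action

def run_cleaning_cycle(cycles: int) -> None:
    for _ in range(cycles):
        open_air_valve()
        time.sleep(SOUND_SECONDS)
        close_air_valve()
        time.sleep(PAUSE_SECONDS)

if __name__ == "__main__":
    run_cleaning_cycle(cycles=3)
```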
Acoustic cleaning
Physics,Engineering
2,089
63,486,013
https://en.wikipedia.org/wiki/Snout%E2%80%93vent%20length
Snout–vent length (SVL) is a morphometric measurement taken in herpetology from the tip of the snout to the most posterior opening of the cloacal slit (vent). It is the most common measurement taken in herpetology, being used for all amphibians, lepidosaurs, and crocodilians (for turtles, carapace length (CL) and plastral length (PL) are used instead). The SVL differs depending on whether the animal is struggling or relaxed (if alive), or various other factors if it is a preserved specimen. For fossils, an osteological correlate such as precaudal length must be used. When combined with weight and body condition, SVL can help deduce age and sex. Advantages Because tails are often missing or absent, especially in juveniles, SVL is seen as more invariant than total length. Even in the case of crocodiles, tail tips may be missing. Methods The measurements may be taken with dial calipers or digital calipers. Various devices are used to position the animal while the measurement is being taken, such as a snake tube, "Mander Masher", or a "Salamander Stick". References Further reading Herpetology Measurement
Snout–vent length
Physics,Mathematics
264
54,439,518
https://en.wikipedia.org/wiki/Junior%20Solar%20Sprint
Junior Solar Sprint (JSS) is a competitive program for 5th- to 8th-grade students to create a small solar-powered vehicle. JSS competitions are sponsored by the Army Educational Outreach Program (AEOP), and administered by the Technology Student Association (TSA). Objectives of JSS are to create the fastest, most interesting, or best crafted vehicle. Skills in science, technology, engineering, and mathematics (STEM) are fostered when designing and constructing the vehicles, as well as principles of alternative fuels, engineering design, and aerodynamics. History Junior Solar Sprint was created in the 1980s by the National Renewable Energy Laboratory (NREL) to teach younger children about the importance and challenges of using renewable energy. The project also teaches students how the engineering process is applied, and how solar panels, transmission, and aerodynamics can be used in practice. Since 2001, the AEOP has funded JSS events. TSA began hosting competitions in 2011, and it became a middle school-level event in 2014. In association with TSA, Pitsco Education has sold recommended materials for the project. Competition Since Junior Solar Sprint became a TSA event, the rules for creating the vehicle have been defined in the TSA rulebook. At the conference, the total cost of creating each car must be less than US$50. The team must also document their process in a notebook. During the time trials section, each car is raced three times down a lane long, on a hard surface like a tennis court. To keep the vehicle pointed straight ahead, a guide wire is run across every lane attached by an eyelet. When the cars are racing, they must remain attached to the wire with no external control. During the race, no modifications are allowed, though anyone may watch. The fastest time of the three trials is used for qualification to the semifinal round. In the next stage, a single- or double-elimination tournament, cars are raced against each other at the same time, until one of the 16 semifinalists is determined the fastest. Junior Solar Sprint competitions are held at the national-, state-, and some regional-level TSA conferences, as well as other AEOP-hosted locations. In the event that the site is overcast, and the solar panels won't work, two 1.5 volt AA batteries will be given to each team. Judges determine the three best vehicles for each category: speed, craftsmanship, and appearance. The 2017 national TSA conference was held June 21–25, in Orlando, Florida, and many middle school students across the country traveled to compete with their vehicles. The team from Joan MacQueen Middle School in Alpine, California, won first place. References External links Junior Solar Sprint from the Army Educational Outreach Program Junior Solar Sprint from the Technology Student Association Engineering competitions Engineering education in the United States Technology Student Association American military youth groups
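The qualification logic described above (each car's best of three timed runs determines seeding, and the fastest sixteen advance to the elimination bracket) is easy to express in code. The sketch below is only an illustration of that rule; the team names and times are invented, and the data layout is my own rather than anything from the TSA rulebook.

```python
# Minimal sketch: seeding Junior Solar Sprint semifinalists from time trials.
# Each car runs three timed trials; its best (fastest) time is used, and the
# top 16 cars advance to the elimination rounds, as described above.
# All data below is invented for illustration.

from typing import Dict, List

def seed_semifinalists(trials: Dict[str, List[float]], spots: int = 16) -> List[str]:
    """Return team names ordered by their fastest trial time, limited to `spots`."""
    best_times = {team: min(times) for team, times in trials.items() if times}
    ranked = sorted(best_times, key=best_times.get)
    return ranked[:spots]

if __name__ == "__main__":
    example = {
        "Team A": [8.2, 7.9, 8.4],
        "Team B": [9.1, 8.8, 9.0],
        "Team C": [7.5, 7.7, 8.0],
    }
    print(seed_semifinalists(example))  # ['Team C', 'Team A', 'Team B']
```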
Junior Solar Sprint
Technology
585
3,676,597
https://en.wikipedia.org/wiki/Gabazine
Gabazine (SR-95531) is a drug that acts as an antagonist at GABAA receptors. It is used in scientific research and has no role in medicine, as it would be expected to produce convulsions if used in humans. Gabazine binds to the GABA recognition site of the receptor-channel complex and acts as an allosteric inhibitor of channel opening. The net effect is to reduce GABA-mediated synaptic inhibition by inhibiting chloride flux across the cell membrane, and thus inhibiting neuronal hyperpolarization. While phasic (synaptic) inhibition is gabazine-sensitive, tonic (extrasynaptic) inhibition is relatively gabazine-insensitive. Gabazine has been found to bind to and antagonize α4βδ subunit-containing GABAA receptors, which may represent the GHB receptor. References GABAA receptor antagonists GABAA-rho receptor antagonists GHB receptor antagonists Convulsants Pyridazines 4-Methoxyphenyl compounds Carboxylic acids
Gabazine
Chemistry
233
53,089,397
https://en.wikipedia.org/wiki/Photolabile%20protecting%20group
A photolabile protecting group (PPG; also known as a photoremovable, photosensitive, or photocleavable protecting group) is a chemical modification to a molecule that can be removed with light. PPGs enable high degrees of chemoselectivity as they allow researchers to control spatial, temporal and concentration variables with light. Control of these variables is valuable as it enables multiple PPG applications, including orthogonality in systems with multiple protecting groups. As the removal of a PPG does not require chemical reagents, the photocleavage of a PPG is often referred to as a "traceless reagent process", and is often used in biological model systems and multistep organic syntheses. Since their introduction in 1962, numerous PPGs have been developed and utilized in a variety of wide-ranging applications from protein science to photoresists. Due to the large number of reported protecting groups, PPGs are often categorized by their major functional group(s); three of the most common classifications are detailed below. Historical introduction The first reported use of a PPG in the scientific literature was by Barltrop and Schofield, who in 1962 used 253.7 nm light to release glycine from N-benzylglycine. Following this initial report, the field rapidly expanded throughout the 1970s as Kaplan and Epstein studied PPGs in a variety of biochemical systems. During this time, a series of standards for evaluating PPG performance was compiled. An abbreviated list of these standards, which are commonly called the Lester rules or Sheehan criteria, is summarized below: In biological systems, the protected substrate, as well as the photoproducts, should be highly soluble in water; in synthesis, this requirement is not as strict The protected substrate, as well as the photoproducts, should be stable in the photolysis environment Separation of the PPG should exhibit a quantum yield greater than 0.10 Separation of the PPG should occur through a primary photochemical process The chromophore should absorb incident light with reasonable absorptivity The excitation wavelength of light should be greater than 300 nm The media and photoproducts should not absorb the incident light A general, high-yield synthetic procedure should exist for attaching the PPG to an unprotected substrate The protected substrate and the photoproducts should be easily separated Main classifications Nitrobenzyl-based PPGs Norrish Type II mechanism Nitrobenzyl-based PPGs are often considered the most commonly used PPGs. These PPGs are traditionally identified as a Norrish Type II reaction, as their mechanism was first described by Norrish in 1935. Norrish elucidated that an incident photon (200 nm < λ < 320 nm) breaks the N=O π-bond in the nitro-group, bringing the protected substrate into a diradical excited state. Subsequently, the nitrogen radical abstracts a proton from the benzylic carbon, forming the aci-nitro compound. Depending on pH, solvent and the extent of substitution, the aci-nitro intermediate decays at a rate of roughly 10²–10⁴ s⁻¹. Following resonance of the π-electrons, a five-membered ring is formed before the PPG is cleaved, yielding 2-nitrosobenzaldehyde and a carboxylic acid. Overall, nitrobenzyl-based PPGs are highly general. The list of functional groups that can be protected includes, but is not limited to, phosphates, carboxylates, carbonates, carbamates, thiolates, phenolates and alkoxides.
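The quoted decay rates imply that the aci-nitro intermediate disappears on a sub-millisecond to roughly ten-millisecond timescale; assuming simple first-order kinetics, its half-life is ln(2)/k. The sketch below only illustrates that arithmetic and is an assumption of first-order behaviour for the purpose of the example, not a statement from the article.

```python
import math

# Minimal sketch: half-life and remaining fraction for a first-order decay,
# applied to the aci-nitro intermediate decay rates (roughly 1e2 to 1e4 per
# second) quoted above. Assumes simple first-order kinetics for illustration.

def half_life(k_per_s: float) -> float:
    return math.log(2) / k_per_s

def fraction_remaining(k_per_s: float, t_s: float) -> float:
    return math.exp(-k_per_s * t_s)

if __name__ == "__main__":
    for k in (1e2, 1e4):
        print(f"k = {k:.0e} /s  ->  half-life = {half_life(k)*1e3:.3f} ms, "
              f"remaining after 10 ms = {fraction_remaining(k, 0.010):.3g}")
```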
Additionally, while the rate varies with a number of variables, including choice of solvent and pH, the photodeprotection has been exhibited both in solution and in the solid state. Under optimal conditions, the photorelease can proceed with >95% yield. Nevertheless, the photoproducts of this PPG are known to undergo imine formation when irradiated at wavelengths above 300 nm. This side product often competes for incident radiation, which may lead to decreased chemical and quantum yields. Common modifications In attempts to raise the chemical and quantum yields of nitrobenzyl-based PPGs, several beneficial modifications have been identified. The largest increase in quantum yield and reaction rate can be achieved through substitution at the benzylic carbon. However, potential substitutions must leave one hydrogen atom so the photodegradation can proceed uninhibited. Additional modifications have targeted the aromatic chromophore. Specifically, multiple studies have confirmed that the use of a 2,6-dinitrobenzyl PPG increases reaction yield. Additionally, depending on the leaving group, the presence of a second nitro group may nearly quadruple the quantum yield (e.g. Φ = 0.033 to Φ = 0.12 when releasing a carbonate at 365 nm). While one may credit the increase in efficiency to the electronic effects of the second nitro group, this is not the case. Analogous systems with a 2-cyano-6-nitrobenzyl PPG exhibit similar electron-withdrawing effects, but do not provide such a large increase in efficiency. Therefore, the increase in efficiency is likely due to the increased probability of achieving the aci-nitro state; with two nitro groups, an incoming photon will be twice as likely to promote the compound into an excited state. Finally, changing the excitation wavelength of the PPG may be advantageous. For example, if two PPGs have different excitation wavelengths, one group may be removed while the other is left in place. To this end, several nitrobenzyl-based PPGs display additional functionality. Common modifications include the use of 2-nitroveratryl (NV) or 6-nitropiperonylmethyl (NP). Both of these modifications induce red-shifting in the compounds' absorption spectra. Carbonyl-based PPGs Phenacyl PPGs The phenacyl PPG is the archetypal example of a carbonyl-based PPG. Under this motif, the PPG is attached to the protected substrate at the α-carbon, and can exhibit varied photodeprotection mechanisms based on the phenacyl skeleton, substrate identity and reaction conditions. Overall, phenacyl PPGs can be used to protect sulfonates, phosphates, carboxylates and carbamates. As with nitrobenzyl-based PPGs, several modifications are known. For example, the 3',5'-dimethoxybenzoin PPG (DMB) contains a 3,5-dimethoxyphenyl substituent on the carbonyl's α-carbon. Under certain conditions, DMB has exhibited quantum yields as high as 0.64. Additionally, the p-hydroxyphenacyl PPG (pHP) has been designed to react through a photo-Favorskii rearrangement. This mechanism yields the carboxylic acid as the exclusive photoproduct; the key benefit of the pHP PPG is the lack of secondary photoreactions and the significantly different UV absorption profiles of the products and reactants. While the quantum yield of the p-hydroxyphenacyl PPG is generally in the 0.1–0.4 range, it can increase to near unity when releasing a good leaving group such as a tosylate.
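To make the quantum-yield discussion above concrete, the short sketch below estimates how fast a caged substrate is released in an optically thin, well-stirred solution, using the relation k = σ·F·Φ between the per-molecule absorption cross-section σ, the incident photon flux F and the release quantum yield Φ. The two Φ values are the ones quoted above for a caged carbonate at 365 nm; the absorption coefficient and lamp flux are round numbers assumed purely for illustration, not values from this article.

```python
# Illustrative sketch (assumed numbers noted below): pseudo-first-order photolysis
# kinetics for an optically thin solution of a nitrobenzyl-caged substrate.
# k = sigma * F * phi, with sigma the absorption cross-section per molecule (cm^2),
# F the photon flux (photons cm^-2 s^-1) and phi the release quantum yield.
import math

N_A = 6.022e23

def cross_section_cm2(epsilon_M_cm):
    # convert a decadic molar absorption coefficient (M^-1 cm^-1) to cm^2 per molecule
    return math.log(10) * 1000.0 * epsilon_M_cm / N_A

def release_half_life_s(epsilon_M_cm, photon_flux_cm2_s, phi):
    k = cross_section_cm2(epsilon_M_cm) * photon_flux_cm2_s * phi  # s^-1
    return math.log(2) / k

epsilon = 5000.0   # M^-1 cm^-1 at 365 nm -- assumed, not from the article
flux = 1.0e16      # photons cm^-2 s^-1, roughly a few mW/cm^2 at 365 nm -- assumed

for label, phi in [("o-nitrobenzyl carbonate", 0.033), ("2,6-dinitrobenzyl carbonate", 0.12)]:
    t_half = release_half_life_s(epsilon, flux, phi)
    print(f"{label}: t1/2 ≈ {t_half:.0f} s")
```

Under these assumptions, roughly quadrupling Φ cuts the irradiation time needed for a given degree of deprotection by the same factor, which is one practical reason the 2,6-dinitro modification is attractive.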
The photoextrusion of the leaving group from the pHP PPG is so effective that it releases even poor nucleofuges such as amines (with the quantum yield in the 0.01–0.5 range, and dependent on solution pH). Additionally, photorelease occurs on the nanosecond timeframe, with krelease > 10⁸ s⁻¹. The o-hydroxyphenacyl PPG has been introduced as an alternative with an absorption band shifted closer to the visible region; however, it has slightly lower quantum yields of deprotection (generally 0.1–0.3) due to excited-state proton transfer being available as an alternative deactivation pathway. The phenacyl moiety itself contains one chiral carbon atom in the backbone. The protected group (leaving group) is not directly attached to this chiral carbon atom; nevertheless, the moiety has been shown to work as a chiral auxiliary, directing the approach of a diene to a dienophile in a stereoselective thermal Diels–Alder reaction. The auxiliary is then removed simply upon irradiation with UV light. Photoenolization through γ-hydrogen abstraction Another family of carbonyl-based PPGs exists that is structurally like the phenacyl motif, but which reacts through a separate mechanism. As the name suggests, these PPGs react through abstraction of the carbonyl's γ-hydrogen. The compound is then able to undergo a photoenolization, which is mechanistically like a keto-enol tautomerization. From the enol form, the compound can finally undergo a ground-state transformation that releases the substrate. The quantum yield of this mechanism directly corresponds to the ability of the protected substrate to be a good leaving group. For good leaving groups, the rate-determining step is either hydrogen abstraction or isomerization; however, if the substrate is a poor leaving group, release is the rate-determining step. Benzyl-based PPGs Since Barltrop and Schofield first demonstrated the use of a benzyl-based PPG, structural variations have focused on substitution on the benzene ring, as well as extension of the aromatic core. For example, insertion of a m,m’-dimethoxy substituent was shown to increase the chemical yield ~75% due to what has been termed the “excited state meta effect.” However, this substitution is only able to release good leaving groups such as carbamates and carboxylates. Additionally, the addition of an o-hydroxy group enables the release of alcohols, phenols and carboxylic acids due to the proximity of the phenolic hydroxy to the benzylic leaving group. Finally, the carbon skeleton has been expanded to include PPGs based on naphthalene, anthracene, phenanthrene, pyrene and perylene cores, resulting in varied chemical and quantum yields, as well as irradiation wavelengths and times. Applications Use in total synthesis Despite their many advantages, the use of PPGs in total syntheses is relatively rare. Nevertheless, PPGs’ "orthogonality" to common synthetic reagents, as well as the possibility of conducting a "traceless reagent process", has proven useful in natural product synthesis. Two examples include the syntheses of ent-Fumiquinazoline and (-)-diazonamide A. The syntheses required irradiation at 254 and 300 nm, respectively. Photocaging Protecting a substrate with a PPG is commonly referred to as "photocaging." This term is especially popular in biological systems. For example, Ly et al. developed a p-iodobenzoate-based photocaged reagent, which would experience a homolytic photocleavage of the C–I bond.
They found that the reaction could occur with excellent yields, and with a half-life of 2.5 minutes when a 15 W 254 nm light source was used. The resulting biomolecular radicals are necessary in many enzymatic processes. As a second example, researchers synthesized a cyclopropene-modified glutamate photocaged with a 2-nitroveratryl-based PPG. As it is an excitatory amino acid neurotransmitter, the aim was to develop a bioorthogonal probe for glutamate in vivo. In a final example, Venkatesh et al. demonstrated the use of a PPG-based photocaged therapeutic. Their prodrug, which released one equivalent of caffeic acid and chlorambucil upon phototriggering, showed reasonable biocompatibility, cellular uptake and photoregulated drug release in vitro. Photoresists During the 1980s, AT&T Bell Laboratories explored the use of nitrobenzyl-based PPGs as photoresists. Over the course of the decade, they developed a deep-UV positive-tone photoresist where the protected substrate was added to a copolymer of poly(methyl methacrylate) and poly(methacrylic acid). Initially, the blend was insoluble. However, upon exposure to 260 ± 20 nm light, the PPG would be removed, yielding 2-nitrosobenzaldehyde and a carboxylic acid that was soluble in aqueous base. Surface modification When covalently attached to a surface, PPGs do not exhibit any surface-induced properties (i.e. they behave like PPGs in solution, and do not exhibit any new properties because of their proximity to a surface). Consequently, PPGs can be patterned on a surface and removed in a manner analogous to lithography to create a multifunctionalized surface. This process was first reported by Solas in 1991; protected nucleotides were attached to a surface and spatially resolved single-stranded polynucleotides were generated in a step-wise "grafting from" method. In separate studies, there have been multiple reports of using PPGs to enable the selective separation of blocks within block copolymers to expose fresh surfaces. Furthermore, this surface patterning method has since been extended to proteins. Caged etching agents (such as hydrogen fluoride protected with 4-hydroxyphenacyl) allow only the surfaces exposed to light to be etched. Gels Various PPGs, often featuring the 2-nitrobenzyl motif, have been used to generate numerous gels. In one example, researchers incorporated PPGs into a silica-based sol-gel. In a second example, a hydrogel was synthesized to include protected Ca2+ ions. Finally, PPGs have been utilized to cross-link numerous photodegradable polymers, which have featured linear, multi-dimensional network, dendrimer, and branched structures. References Protecting groups Photochemistry
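The 2.5-minute half-life quoted above implies simple first-order behaviour under constant illumination; the few lines below just turn that half-life into a rate constant and the fraction cleaved for a handful of illustrative exposure times (the times themselves are arbitrary).

```python
# Simple first-order arithmetic for the photocleavage half-life quoted above
# (2.5 min under a 15 W, 254 nm source); exposure times chosen for illustration.
import math

t_half_s = 2.5 * 60.0
k = math.log(2) / t_half_s          # ~4.6e-3 s^-1

for t_min in (1, 2.5, 5, 10):
    frac = 1.0 - math.exp(-k * t_min * 60.0)
    print(f"{t_min:>4} min irradiation -> {frac*100:.0f}% cleaved")
```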
Photolabile protecting group
Chemistry
2,977
10,536,327
https://en.wikipedia.org/wiki/NGC%206027a
NGC 6027a is a spiral galaxy that is part of Seyfert's Sextet, a compact group of galaxies, which is located in the constellation Serpens. In optical wavelengths, it has a strong resemblance to Messier 104, the Sombrero Galaxy, with which it shares a near equivalent orientation to observers on Earth. See also NGC 6027 NGC 6027b NGC 6027c NGC 6027d NGC 6027e Seyfert's Sextet References External links HubbleSite NewsCenter: Pictures and description Spiral galaxies 6027A 56576 10116 NED02 Serpens Peculiar galaxies
NGC 6027a
Astronomy
132
19,544,011
https://en.wikipedia.org/wiki/Pintle%20injector
The pintle injector is a type of propellant injector for a bipropellant rocket engine. Like any other injector, its purpose is to ensure an appropriate flow rate and intermixing of the propellants as they are forcibly injected under high pressure into the combustion chamber, so that an efficient and controlled combustion process can happen. A pintle-based rocket engine can have a greater throttling range than one based on regular injectors, and will very rarely present acoustic combustion instabilities, because a pintle injector tends to create a self-stabilizing flow pattern. Therefore, pintle-based engines are especially suitable for applications that require deep, fast, and safe throttling, such as landers. Pintle injectors began as early laboratory experimental apparatuses, used by Caltech's Jet Propulsion Laboratory in the mid-1950s, to study the mixing and combustion reaction times of hypergolic liquid propellants. The pintle injector was reduced to practice and developed by Space Technology Laboratories (STL), then a division of Ramo-Wooldridge Corp., later TRW, starting in 1960. There have been pintle-based engines built ranging from a few newtons of thrust up to several million, and the pintle design has been tested with all the common and many exotic propellant combinations, including gelled propellants. Pintle-based engines were first used on a crewed spacecraft during the Apollo Program in the Lunar Excursion Module's Descent Propulsion System; however, it was not until October 1972 that the design was made public, when a patent was granted to its inventor, Gerard W. Elverum Jr. Description Working principle A pintle injector is a type of coaxial injector. It consists of two concentric tubes and a central protrusion. Propellant A (usually the oxidizer, represented with blue in the image) flows through an outer tube, coming out as a cylindrical stream, while propellant B (usually the fuel, represented with red in the image) flows within an inner tube and impinges on a central pintle-shaped protrusion (similar in shape to a poppet valve like those found on four-stroke engines), spraying out in a broad cone or a flat sheet that intersects the cylindrical stream of propellant A. In the typical pintle-based engine design, only a single central injector is used, differing from "showerhead" injector plates which use multiple parallel injector ports. Throttleability can be obtained by placing valves before the injector, by moving the inner pintle or outer sleeve, or both. Many people have experienced throttleable pintle sprayers in the form of standard garden hose-end sprayers. Variants In pintle engines that do not require throttling, the pintle is fixed in place, and propellant valves for startup and shutdown are placed elsewhere. A movable pintle allows for throttleability, and, if the moving part is the sleeve, the pintle itself can act as the propellant valve. This is called a Face Shutoff pintle. A fast-moving sleeve allows the engine to be operated in pulses, and this is usually done in pintle-based RCS thrusters and missile divert thrusters. In a variant of the Face Shutoff pintle, the pintle itself is hydraulically actuated by the fuel via a pilot valve, and no extra valves are required between the engine and tanks. This is called an FSO (Face Shutoff Only) pintle.
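As a rough illustration of the working principle described above, the sketch below sizes the two streams with nothing more than incompressible continuity (v = ṁ/ρA) and compares their momenta; a radial-to-axial momentum ratio near unity is often quoted in the pintle literature as a useful spray-shaping parameter, though that rule of thumb, like every number below, is an assumption chosen for illustration rather than data from this article or any specific engine.

```python
# Back-of-the-envelope sketch of pintle injector flow, assuming simple incompressible
# continuity (v = mdot / (rho * A)); propellants, flow rates and areas are invented
# round numbers. The radial-to-axial momentum ratio is treated here as a generic
# spray-shaping parameter, not as any particular engine's design value.
def injection_velocity(mdot_kg_s, density_kg_m3, area_m2):
    return mdot_kg_s / (density_kg_m3 * area_m2)

# Outer annulus: propellant A (oxidizer), axial stream
mdot_ox, rho_ox, area_annulus = 10.0, 1140.0, 2.0e-4   # kg/s, kg/m^3 (LOX-like), m^2
# Inner pintle slots: propellant B (fuel), radial sheet
mdot_fuel, rho_fuel, area_slots = 4.5, 800.0, 5.0e-5   # kg/s, kg/m^3 (kerosene-like), m^2

v_ox = injection_velocity(mdot_ox, rho_ox, area_annulus)
v_fuel = injection_velocity(mdot_fuel, rho_fuel, area_slots)
momentum_ratio = (mdot_fuel * v_fuel) / (mdot_ox * v_ox)

print(f"axial (oxidizer) velocity  : {v_ox:6.1f} m/s")
print(f"radial (fuel) velocity     : {v_fuel:6.1f} m/s")
print(f"radial/axial momentum ratio: {momentum_ratio:.2f}")
```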
In some variants the pintle has grooves or orifices cut into it to produce radial jets in the flow of propellant B, this allows for extra unburned fuel to impinge on the walls of the combustion chamber, and provide fuel film cooling. The pintle pictured here is of this type. Advantages and disadvantages Advantages Compared to some injector designs, pintle injectors allow greater throttling of bipropellant flow rates, although throttling rocket engines in general is still very difficult. If only one central injector is used, the mass flow inside the combustion chamber will have two main recirculation zones which decrease acoustic instability without necessarily requiring acoustic cavities or baffles. The pintle injector design can deliver high combustion efficiency (typically 96–99%). If fuel is chosen for the inner flow (which is the case in most pintle-based engines), the injector can be tuned so that any excess fuel which is not reacted immediately as it passes through the oxidizer stream is projected onto the combustion chamber walls and cools them through evaporation, thus providing fuel film cooling to the combustion chamber walls, without incurring the mass penalty of a dedicated coolant subsystem. While pintle injectors have been developed for applications in rocket propulsion, due to their relative simplicity, they could easily be adapted for industrial fluid handling processes requiring high flowrate and thorough mixing. A given injector's performance can be easily optimized by varying the geometries of the outer propellant's annular gap and the central propellant slots (and/or continuous gap, if used). As this requires only two new pieces to be made, trying variations is usually cheaper and less time-consuming than with regular injectors. Disadvantages Because combustion tends to occur in the surface of a frustum, peak thermal stresses are localized on the combustion chamber wall rather than a more evenly distributed combustion across the chamber section and more even heating. This has to be contemplated when designing the cooling system, or it might cause burn-through. The pintle injector is known to have caused throat-erosion problems in the early ablatively cooled Merlin engines due to uneven mixing causing hot streaks in the flow, however, as of 2021, it is not clear whether this is a problem that applies to all pintle-based engines, or this was a design problem of the Merlin. Pintle injectors work very well with liquid propellants and can be made to work with gelled propellants, but for gas–liquid or gas–gas applications, conventional injectors remain superior in performance. The pintle injector is desirable for engines that have to be throttled or restarted repeatedly, but it does not deliver optimal efficiency for fuel and oxidizer mixing at any given throttle rate. History 1950s In 1957, Gerard W. Elverum Jr. was employed by the Jet Propulsion Laboratory, and working under the supervision of Art Grant to characterize the reaction rates of new rocket propellants by using a device consisting of two concentric tubes, through which propellants were fed at a known flowrate, and a set of thermocouples to measure their reaction rates. The device encountered problems, because as the propellants were flowing parallel to each other, not much mixing was happening. Elverum then placed a tip at the end of the innermost tube, attached to an internal support, which forced the inner propellant to flow outwards and mix with the outer propellant. 
This device worked fine for low energy propellants, but when high energy combinations started being tested, it proved impractical due to nearly instantaneous reaction times at the mixing point. In order to keep the device from blowing itself apart during high energy tests, the outer tube was retracted, thus constituting a primitive pintle injector. Peter Staudhammer, under the supervision of Program Manager Elverum, had a technician cut multiple slots across the end of an available inner tube and subsequent tests of this new configuration showed a substantial improvement in mixing efficiency. 1960s By 1960, Elverum, Grant, and Staudhammer had moved to the newly-formed Space Technology Laboratories, Inc. (Later TRW, Inc.) to pursue development of monopropellant and bipropellant rocket engines. By 1961, the pintle injector was developed into a design usable in rocket engines, and subsequently, the pintle injector design was matured and developed by a number of TRW employees, adding such features as throttling, rapid pulsing capability, and face shutoff. Throttling was tested in the 1961 MIRA 500, at 25 to 500 lbf (111 to 2,224 N) and its 1962 successor, the MIRA 5000, at 250 to 5,000 lbf (1,112 to 22,241 N). In 1963, TRW introduced the MIRA 150A as a backup for the Thiokol TD-339 vernier thruster to be used in the Surveyor probes, and started development of the Apollo Lunar Excursion Module's Descent Propulsion System. Near this time, a pintle injector was considered for simplicity and lower cost on the Sea Dragon. In parallel with those projects, TRW continued development of other pintle engines, including by 1966 the URSA (Universal Rocket for Space Applications) series. These were bipropellant engines offered at fixed thrusts of 25, 100, or 200 lbf, (111, 445, or 890 N) with options for either ablative or radiatively cooled combustion chambers. These engines were capable of pulsing at 35 Hz, with pulse widths as small as .02 seconds, but also had design steady state firing life in excess of 10,000 seconds (with radiatively-cooled chambers). In 1967 the Apollo Descent Propulsion System was qualified for flight. From 1968 to 1970, a 250,000 lbf (1,112,055 N) engine was tested. 1970s In 1972 the Apollo Descent Propulsion System ended production, but starting in 1974, and continuing through 1988, the TR-201, a simplified, low cost derivative of it, featuring ablative cooling and fixed thrust, was used in the second stage of the Delta 2914 and 3914 launch vehicles. In October 1972, the pintle injector design was patented and made public. 1980s In the early 1980s, a series of design refinements were applied to the pintle injector obtaining exceptionally fast and repeatable pulses on command and linear throttling capability. By enabling shutoff of propellants at their injection point into the combustion chamber, the pintle injector provided excellent pulse response by eliminating injector "dribble volume" effects. Starting in 1981, a very compact, 8,200 lbf N2O4/MMH engine employing this feature was developed as a pitch and yaw thruster for the army's SENTRY missile program. This engine could throttle over a 19:1 thrust range and deliver repeatable "on" pulses as small as 8 milliseconds at any thrust level. A further refinement of the face shutoff injector was used on the Army Strategic Defense Command's Exoatmospheric Reentry-vehicle Interceptor Subsystem (ERIS). In its 900 lbf lateral divert engines the injector shutoff element provided the only control of propellant flow. 
The large bipropellant valve normally required in such engines was replaced by a small pilot valve that used high pressure fuel (MMH) to hydraulically actuate the moveable injector sleeve. This feature, called FSO (Face Shutoff Only) greatly improved overall thruster response and significantly reduced engine size and mass. Another design challenge from the mid 1980s and early 1990s was that of obtaining miniaturization of rocket engines. As part of the Air Force Brilliant Pebbles program, TRW developed a very small 5 lbf (22 N) N2O4/hydrazine thruster using a pintle injector. This radiatively-cooled engine weighed 0.3 lb (135 grams) and was successfully tested in August 1993, delivering over 300 seconds Isp with a 150:1 nozzle expansion ratio. The pintle diameter was (1.6764 mm) and scanning electron microscopy was needed to verify the dimensions on the ± (0.0762 mm ±0.00762 mm) radial metering orifices. 1990s The preceding technology innovations enabled the first exoatmospheric kinetic kill of a simulated reentry warhead off Kwajalein atoll on 28 January 1991 on the first flight of ERIS. In the late '90s, FSO pintle injectors were used with gelled propellants, which have a normal consistency like that of smooth peanut butter. Gelled propellants typically use either aluminum powder or carbon powder to increase the energy density of the liquid fuel base (typically MMH) and they use additives to rheologically match the oxidizer (typically IRFNA based) to the fuel. For gelled propellants to be used on a rocket, face shutoff is mandatory to prevent dry-out of the base liquid during off times between pulses, which would otherwise result in the solids within the gels plugging the injector passages. FSO pintle injectors were used on a variety of programs, the McDonnell Douglas Advanced Crew Escape Seat – Experimental (ACES-X) program and its successor, the Gel Escape System Propulsion (GESP) program. Another major design adaptation in this time period was the use of pintle injectors with cryogenic liquid hydrogen fuel. Beginning in 1991, TRW joined with McDonnell Douglas and NASA Lewis (now Glenn) Research Center to demonstrate that TRW's pintle engine could use direct injection of liquid hydrogen to simplify the design of high-performance booster engines. Attempts to use direct injection of cryogenic hydrogen in other types of injectors had until then consistently resulted in the onset of combustion instabilities. In late 1991 and early 1992, a 16,000 lbf (71,172 N) LOX/LH2 test engine was successfully operated with direct injection of liquid hydrogen and liquid oxygen propellants. A total of 67 firings were conducted, and the engine demonstrated excellent performance and total absence of combustion instabilities. Subsequently, this same test engine was adapted for and was successfully tested with LOX/LH2 at 40,000 lbf (177,929 N) and with LOX/RP-1 at 13,000 and 40,000 lbf. (57,827 and 177,929 N). At the same time, TR-306 liquid apogee engines were used on the Anik E-1/E-2 and Intelsat K spacecraft. In August 1999 the dual mode TR-308 was used to place NASA's Chandra spacecraft on its final orbit. The early FSO injector and gel propellant development work of the late 1980s and early 1990s led to the world's first missile flights using gelled oxidizer and gelled fuel propellants on the Army's/AMCOM's Future Missile Technology Integration (FMTI) program, with the first flight in March 1999 and the second flight in May 2000. 
2000s In the early 2000s TRW continued development of large LOX/LH2 pintle engines, and test-fired the TR-106 at NASA's John C. Stennis Space Center. This was a 650,000 lbf (2,892,000 N) engine, a 16:1 scale-up from the largest previous LOX/LH2 pintle engine and about a 3:1 scale-up from the largest previous pintle engine ever tested. This injector's pintle diameter was , by far the largest built to date. In 2002 the larger TR-107 was designed. Tom Mueller, who had worked on the TR-106 and TR-107, was hired by SpaceX and started development of the Merlin and Kestrel engines. 2010s The Merlin engine was the only pintle injector engine in operation, used for all SpaceX Falcon 9 and Falcon Heavy flights. 2020s In the early 2020s, the Merlin engine continued to be used on the Falcon 9 and Falcon Heavy. The pintle injector was also used on the Reaver 1 engine by Firefly Aerospace. Engines known to use pintle injectors References Rocket engines
Pintle injector
Technology
3,341
185,370
https://en.wikipedia.org/wiki/Drooling
Drooling, or slobbering, is the flow of saliva outside the mouth. Drooling can be caused by excess production of saliva, inability to retain saliva within the mouth (incontinence of saliva), or problems with swallowing (dysphagia or odynophagia). There are some frequent and harmless cases of drooling – for instance, when the mouth is numbed by benzocaine or after a visit to the dentist's office. Isolated drooling in healthy infants and toddlers is normal and may be associated with teething. It is unlikely to be a sign of disease or complications. Drooling in infants and young children may be exacerbated by upper respiratory infections and nasal allergies. Some people with drooling problems are at increased risk of inhaling saliva, food, or fluids into the lungs, especially if drooling is secondary to a neurological problem. However, if the body's normal reflex mechanisms (such as gagging and coughing) are not impaired, this is not life-threatening. Causes Drooling or sialorrhea can occur during sleep. It is often the result of open-mouth posture from CNS depressant intake or sleeping on one's side. Sometimes while sleeping, saliva does not build up at the back of the throat and does not trigger the normal swallow reflex, leading to the condition. Freud conjectured that drooling occurs during deep sleep, and within the first few hours of falling asleep, since those who are affected by the symptom experience the most severe harm while napping, rather than during overnight sleep. A sudden onset of drooling may indicate poisoning – especially by pesticides or mercury – or a reaction to snake or insect venom. Excess capsaicin can cause drooling as well, an example being the ingestion of particularly high Scoville unit chili peppers. Some neurological problems cause drooling. Medication can cause drooling, either due to primary action or side effects; for example, the pain-relief medication Orajel can numb the mucosa. Causes include: exercise (especially cardiovascular exercise); stroke and other neurological pathologies; intellectual disability; adenoid hypertrophy; cerebral palsy; amyotrophic lateral sclerosis; tumors of the upper aerodigestive tract; Parkinson's disease; rabies; and mercury poisoning. Drooling associated with fever or trouble swallowing may be a sign of an infectious disease, including: retropharyngeal abscess; peritonsillar abscess; tonsillitis; mononucleosis; strep throat; obstructive diseases (tumors, stenosis); and inability to swallow due to neurodegenerative diseases (amyotrophic lateral sclerosis). Treatment A comprehensive treatment plan depends on the cause and incorporates several stages of care: correction of reversible causes, behavior modification, medical treatment, and surgical procedures. Atropine sulfate tablets are used in some circumstances to reduce salivation. Anticholinergic drugs can likewise be of benefit because they decrease the activity of the acetylcholine muscarinic receptors and can result in decreased salivation. They may be prescribed by doctors in conjunction with behavior modification strategies. Other drugs used are glycopyrrolate and botulinum toxin A – botox injection in the salivary glands to diminish saliva production. In general, surgical procedures are considered after clear diagnosis of the cause and evaluation of non-invasive treatment options. Severe cases can sometimes be treated by surgical intervention – salivary duct relocalization, or in extreme cases resection of the salivary glands.
Popular culture The scope of the meaning of the term drool in popular use has expanded to include any occasion wherein someone highly desires something. See also Slobbers Salivary microbiome References External links NIH site on drooling Ethology Diseases of oral cavity, salivary glands and jaws Symptoms Excretion Saliva
Drooling
Biology
809
60,765,961
https://en.wikipedia.org/wiki/Eugene%20Catalan%20Prize
The Eugene Catalan Prize (Prix Eugène-Catalan) is awarded every five years by the Royal Academies for Science and the Arts of Belgium to recognize a scholar who has made important progress in pure mathematics. The prize, created in honor of the mathematician Eugène Charles Catalan, was first given in 1969; the original criteria specified Belgian or French scholars but European Union citizens are now eligible. Recipients The recipients of the Eugene Catalan Prize are: 2020: Antoine Gloria 2015: Pierre Bieliavsky 2010: 2005: Didier Smets 2000: Jean-Michel Coron 1995: Jean-Pierre Tignol 1990: Haïm Brezis 1979: Roger Apéry 1974: J. Goffar-Lombet 1969: Gilbert Crombez See also List of mathematics awards References Mathematics awards
Eugene Catalan Prize
Technology
158
382,139
https://en.wikipedia.org/wiki/Interferon%20beta-1a
Interferon beta-1a (also interferon beta 1-alpha) is a cytokine in the interferon family used to treat multiple sclerosis (MS). It is produced by mammalian cells, while interferon beta-1b is produced in modified E. coli. Some research indicates that interferon injections may result in an 18–38% reduction in the rate of MS relapses. Interferon beta has not been shown to slow the advance of disability. Interferons are not a cure for MS (there is no known cure); the claim is that interferons may slow the progress of the disease if started early and continued for the duration of the disease. Medical uses Clinically isolated syndrome The earliest clinical presentation of relapsing-remitting multiple sclerosis is the clinically isolated syndrome (CIS), that is, a single attack of a single symptom. During a CIS, there is a subacute attack suggestive of demyelination which should be included in the spectrum of MS phenotypes. Treatment with interferons after an initial attack decreases the risk of developing clinical definite MS. Relapsing-remitting MS Medications are modestly effective at decreasing the number of attacks in relapsing-remitting multiple sclerosis and in reducing the accumulation of brain lesions, which is measured using gadolinium-enhanced magnetic resonance imaging (MRI). Interferons reduce relapses by approximately 30% and their safe profile make them the first-line treatments. Nevertheless, not all the patients are responsive to these therapies. It is known that 30% of MS patients are non-responsive to Beta interferon. They can be classified in genetic, pharmacological and pathogenetic non-responders. One of the factors related to non-respondance is the presence of high levels of interferon beta neutralizing antibodies. Interferon therapy, and specially interferon beta 1b, induces the production of neutralizing antibodies, usually in the second 6 months of treatment, in 5 to 30% of treated patients. Moreover, a subset of RRMS patients with specially active MS, sometimes called "rapidly worsening MS" are normally non-responders to interferon beta 1a. While more studies of the long-term effects of the drugs are needed, existing data on the effects of interferons indicate that early-initiated long-term therapy is safe and it is related to better outcomes. Side effects Interferon beta-1a is available only in injectable forms, and can cause skin reactions at the injection site that may include cutaneous necrosis. Skin reactions with interferon beta are more common with subcutaneous administration and vary greatly in their clinical presentation. They usually appear within the first month of treatment albeit their frequence and importance diminish after six months of treatment. Skin reactions are more prevalent in women. Mild skin reactions usually do not impede treatment whereas necroses appear in around 5% of patients and lead to the discontinuation of the therapy. Also over time, a visible dent at the injection site due to the local destruction of fat tissue, known as lipoatrophy, may develop, however, this rarely occurs with interferon treatment. Interferons, a subclass of cytokines, are produced in the body during illnesses such as influenza in order to help fight the infection. They are responsible for many of the symptoms of influenza infections, including fever, muscle aches, fatigue, and headaches. 
Many patients report influenza-like symptoms hours after taking interferon beta that usually improve within 24 hours, being such symptoms related to the temporary increase of cytokines. This reaction tends to disappear after 3 months of treatment and its symptoms can be treated with over-the-counter nonsteroidal anti-inflammatory drugs, such as ibuprofen, that reduce fever and pain. Another common transient secondary effect with interferon-beta is a functional deterioration of already existing symptoms of the disease. Such deterioration is similar to the one produced in MS patients due to heat, fever or stress (Uhthoff's phenomenon), usually appears within 24 hours of treatment, is more common in the initial months of treatment, and may last several days. A symptom specially sensitive to worsening is spasticity. Interferon-beta can also reduce numbers of white blood cells (leukopenia), lymphocytes (lymphopenia) and neutrophils (neutropenia), as well as affect liver function. In most cases these effects are non-dangerous and reversible after cessation or reduction of treatment. Nevertheless, recommendation is that all patients should be monitored through laboratory blood analyses, including liver function tests, to ensure safe use of interferons. To help prevent injection-site reactions, patients are advised to rotate injection sites and use an aseptic injection technique. Injection devices are available to optimize the injection process. Side effects are often onerous enough that many patients ultimately discontinue taking interferons (or glatiramer acetate, a comparable disease-modifying therapy requiring regular injections). Mechanism of action Interferon beta balances the expression of pro- and anti-inflammatory agents in the brain, and reduces the number of inflammatory cells that cross the blood brain barrier. Overall, therapy with interferon beta leads to a reduction of neuron inflammation. Moreover, it is also thought to increase the production of nerve growth factor and consequently improve neuronal survival. In vitro, interferon beta reduces production of Th17 cells which are a subset of T lymphocytes believed to have a role in the pathophysiology of MS. Society and culture Brand names Avonex Avonex was approved in the US in 1996, and in the European Union in 1997, and is registered in more than 80 countries worldwide. It is the leading MS therapy in the US, with around 40% of the overall market, and in the European Union, with around 30% of the overall market. It is produced by the Biogen biotechnology company, originally under competition protection in the US under the Orphan Drug Act. Avonex is sold in three formulations, a lyophilized powder requiring reconstitution, a pre-mixed liquid syringe kit, and a pen; it is administered via intramuscular injection. Rebif Rebif is a disease-modifying drug (DMD) used to treat multiple sclerosis in cases of clinically isolated syndromes as well as relapsing forms of multiple sclerosis and is similar to the interferon beta protein produced by the human body. It is co-marketed by Merck Serono and Pfizer in the US under an exception to the Orphan Drug Act. It was approved in the European Union in 1998, and in the US in 2002; it has since been approved in more than 90 countries worldwide including Canada and Australia. EMD Serono has had sole rights to Rebif in the US since January 2016. Rebif is administered via subcutaneous injection. 
Cinnovex Cinnovex is the brand name of recombinant Interferon beta-1a, which is manufactured as biosimilar/biogeneric in Iran. It is produced in a lyophilized form and sold with distilled water for injection. Cinnovex was developed at the Fraunhofer Society in collaboration with CinnaGen, and is the first therapeutic protein from a Fraunhofer laboratory to be approved as biogeneric / biosimilar medicine. There are several clinical studies to prove the similarity of CinnoVex and Avonex. A more water-soluble variant is currently being investigated by the Vakzine Projekt Management (VPM) GmbH in Braunschweig, Germany. Plegridy Plegridy is a brand name of a pegylated form of Interferon beta-1a. Plegridy's advantage is it only needs injecting once every two weeks. Betaferon (interferon beta-1b) Closely related to interferon beta-1a is interferon beta-1b, which is also indicated for MS, but is formulated with a different dose and administered with a different frequency. Each drug has a different safety/efficacy profile. Interferon beta-1b is marketed only by Bayer in the US as Betaseron, and outside the US as Betaferon. Economics In the United States, , the cost is between US$1,284 and US$1,386 per 30 mcg vial. As of 2020, the National Average Drug Acquisition Cost (NADAC) in the United States for Avonex was $6,872.94 for a 30 mcg kit. Avonex and Rebif are on the top ten best-selling multiple sclerosis drugs of 2013. It is an example of a specialty drug that would only be available through a specialty pharmacy. This is because it requires a refrigerated chain of distribution and costs $17,000 a year. Research COVID-19 Interferon beta-1a administered subcutaneously or intravenously was investigated since March 2020 as a potential treatment in patients hospitalized with COVID-19 in a multinational Solidarity trial (initially in combination with lopinavir) but it did not reduce in-hospital mortality compared to local standard of care. SNG001, an inhalation formulation of interferon beta-1a, is being developed as a treatment for COVID-19 by Synairgen. A pilot trial in hospitalized patients showed higher odds of clinical improvement with SNG001 compared to placebo and in January 2021 a phase 3 trial in this population started. References External links Cytokines Drugs developed by Pfizer Drugs developed by Merck Immunostimulants Specialty drugs
Interferon beta-1a
Chemistry,Biology
2,058
65,291,136
https://en.wikipedia.org/wiki/Tulane%20National%20Primate%20Research%20Center
The Tulane National Primate Research Center (TNPRC) is a federally funded biomedical research facility affiliated with Tulane University. The TNPRC is one of seven National Primate Research Centers which conduct biomedical research on primates. The TNPRC is situated in 500 acres of land in Covington, Louisiana, and originally opened as the Delta Regional Primate Center in 1964. The center uses five types of non-human primates in its research: cynomolgus macaques, African green monkeys, mangabeys, pig-tailed macaques and rhesus macaques. The TNPRC employs over three hundred people and has an estimated economic impact of $70.1 million a year. Research The TNPRC has four divisions: Comparative Pathology, Microbiology, Immunology, and Veterinary Medicine. The center investigates diseases including HIV/AIDS, celiac disease, Krabbe disease, leukemia, Lyme disease, respiratory syncytial virus (RSV), rotavirus, tuberculosis, varicella zoster virus (VZV), and Zika virus. Facilities The TNPRC is located on 500 acres of land, in unincorporated St. Tammany Parish, Louisiana, with a Covington, Louisiana postal address. In addition to its research facilities, the center has an on-site, Biosafety Level 3 biocontainment laboratory. The TNPRC also operates a large breeding colony of non-human primates. Breeding colony The TNPRC operates an on-site breeding colony of 5,000 non-human primates. Incidents and controversies In 1998, two dozen rhesus macaques escaped from their cage into the surrounding area of the TNPRC. In 2005, over 50 monkeys escaped from their cage into the surrounding area of the TNPRC. Four of the primates died or were never found. In 2006, thirteen baboons were killed after being placed in a crowded chute. In September 2012, a rhesus macaque was inadvertently left in an unattended vehicle for approximately 22 hours. As a result, the macaque was dehydrated and later died. In September 2014, a USDA inspection report revealed that several of the animal cages had been kept in unclean and unsanitary conditions. In November 2014, three macaques in the TNPRC's breeding colony were affected by a biosecurity breach due to staff members not following proper procedure. As a result, the animals were euthanized. In September 2015, a USDA inspection revealed that personnel at the TNPRC were not following appropriate procedures regarding the criteria for euthanizing animals. References External links TNPRC home page Primate research centers Animal testing on non-human primates Tulane University Medical research institutes in the United States Biomedical research foundations 1964 establishments in Louisiana Research institutes in Louisiana Covington, Louisiana Buildings and structures in St. Tammany Parish, Louisiana
Tulane National Primate Research Center
Engineering,Biology
604
5,735,440
https://en.wikipedia.org/wiki/Virial%20stress
In mechanics, virial stress is a measure of stress on an atomic scale for homogeneous systems. The name is derived from Latin: "Virial is then derived from Latin as well, stemming from the word vires (plural of vis) meaning forces." The expression of the (local) virial stress can be derived as the functional derivative of the free energy of a molecular system with respect to the deformation tensor. Volume averaged Definition The instantaneous volume averaged virial stress is given by τ_ij = (1/Ω) Σ_{k∈Ω} [ −m^(k) (u_i^(k) − ū_i)(u_j^(k) − ū_j) + (1/2) Σ_{l∈Ω} (x_i^(l) − x_i^(k)) f_j^(kl) ], where k and l are atoms in the domain, Ω is the volume of the domain, m^(k) is the mass of atom k, u_i^(k) is the i-th component of the velocity of atom k, ū_j is the j-th component of the average velocity of atoms in the volume, x_i^(k) is the i-th component of the position of atom k, and f_i^(kl) is the i-th component of the force applied on atom k by atom l. At zero kelvin, all velocities are zero so we have τ_ij = (1/(2Ω)) Σ_{k,l∈Ω} (x_i^(l) − x_i^(k)) f_j^(kl). This can be thought of as follows. The τ_11 component of stress is the force in the 1-direction divided by the area of a plane perpendicular to that direction. Consider two adjacent volumes separated by such a plane. The 11-component of stress on that interface is the sum of all pairwise forces between atoms on the two sides. The volume averaged virial stress is then the ensemble average of the instantaneous volume averaged virial stress. In a three-dimensional, isotropic system, at equilibrium the "instantaneous" atomic pressure is usually defined as the average over the diagonals of the negative stress tensor: P_inst = −(1/3)(τ_11 + τ_22 + τ_33). The pressure then is the ensemble average of the instantaneous pressure, P = ⟨P_inst⟩. This pressure is the average pressure in the volume Ω. Equivalent definition It is worth noting that some articles and textbooks use a slightly different but equivalent version of the equation, τ_ij = (1/Ω) Σ_{k∈Ω} [ −m^(k) (u_i^(k) − ū_i)(u_j^(k) − ū_j) − (1/2) Σ_{l∈Ω} x_i^(kl) f_j^(kl) ], where x_i^(kl) is the i-th component of the vector oriented from the l-th atom to the k-th, calculated via the difference x^(kl) = x^(k) − x^(l). Both equations being strictly equivalent, the definition of the vector can still lead to confusion. Derivation The virial pressure can be derived using the virial theorem and splitting forces between particles and the container or, alternatively, via direct application of the defining equation and using scaled coordinates in the calculation. Inhomogeneous systems If the system is not homogeneous in a given volume, the above (volume averaged) pressure is not a good measure for the pressure. In inhomogeneous systems the pressure depends on the position and orientation of the surface on which the pressure acts. Therefore, in inhomogeneous systems a definition of a local pressure is needed. As a general example for a system with inhomogeneous pressure, one can think of the pressure in the atmosphere of the Earth, which varies with height. Instantaneous local virial stress The (local) instantaneous virial stress is given by an analogous expression evaluated pointwise rather than averaged over the whole volume. Measuring the virial pressure in molecular simulations The virial pressure can be measured via the formulas above or using volume rescaling trial moves. See also Virial theorem References External links Physical Interpretation of the volume averaged Virial Stress 2017 edition (second): Python and Fortran code examples for Computer Simulation of Liquids Continuum mechanics
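A minimal numerical sketch of the volume-averaged definition above is given below: it evaluates the kinetic and pairwise terms directly for a handful of particles interacting through a Lennard-Jones force. The particle positions, velocities, masses and the averaging volume are made-up reduced-unit values, and the code deliberately omits everything a real molecular dynamics code would add (periodic boundaries, cutoffs, ensemble averaging).

```python
# Minimal sketch: instantaneous volume-averaged virial stress for a small cluster
# of Lennard-Jones particles. All inputs are assumed reduced-unit values.
import numpy as np

def lj_force(r_vec, eps=1.0, sigma=1.0):
    """Force on atom k due to atom l, with r_vec = x_k - x_l."""
    r2 = np.dot(r_vec, r_vec)
    sr6 = (sigma**2 / r2) ** 3
    return 24.0 * eps * (2.0 * sr6**2 - sr6) / r2 * r_vec

def virial_stress(x, v, m, volume, force=lj_force):
    n, dim = x.shape
    u_bar = np.average(v, axis=0, weights=m)          # average (streaming) velocity
    tau = np.zeros((dim, dim))
    for k in range(n):
        du = v[k] - u_bar
        tau -= m[k] * np.outer(du, du)                # kinetic contribution
        for l in range(n):
            if l == k:
                continue
            f_kl = force(x[k] - x[l])                 # force on k from l
            tau += 0.5 * np.outer(x[l] - x[k], f_kl)  # pair (configurational) term
    return tau / volume

# Eight particles on the corners of a small cube, spacing 1.1 sigma (assumed)
x = 1.1 * np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)], dtype=float)
v = np.random.default_rng(0).normal(0.0, 0.1, size=(8, 3))   # small thermal velocities
m = np.ones(8)
tau = virial_stress(x, v, m, volume=2.2**3)
pressure = -np.trace(tau) / 3.0                                # average of -diagonal
print(tau)
print("instantaneous pressure:", pressure)
```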
Virial stress
Physics
612
1,037,401
https://en.wikipedia.org/wiki/Picatinny%20rail
The 1913 rail (MIL-STD-1913 rail) is an American rail integration system designed by Richard Swan that provides a mounting platform for firearm accessories. It forms part of the NATO standard STANAG 2324 rail. It was originally used for mounting of telescopic sights atop the receivers of larger caliber rifles. Once established as United States Military Standard, its use expanded to also attaching other accessories, such as: iron sights, tactical lights, laser sights, night-vision devices, reflex sights, holographic sights, foregrips, bipods, slings and bayonets. An updated version of the rail is adopted as a NATO standard as the STANAG 4694 NATO Accessory Rail. History Attempts to standardize the Weaver rail mount designs date from work by the A.R.M.S. company and Richard Swanson in the early 1980s. Specifications for the M16A2E4 rifle and the M4E1 carbine received type classification generic in December 1994. These were the M16A2 and the M4 modified with new upper receivers where rails replaced hand guards. The MIL-STD-1913 rail is commonly called the "Picatinny Rail", in reference to the Picatinny Arsenal in New Jersey. Picatinny Arsenal works as a contracting office for small arms design (they contracted engineers to work on the M4). Picatinny Arsenal requested Swan's help in developing the rail, but did not draft blueprints or request paperwork for a patent. That credit goes to Richard Swanson of A.R.M.S., who conducted research and development and acquired a patent for the rail in 1995. Swan has litigated in civil court against Colt and Troy industries regarding patent infringement. The courts found that Troy had developed rifles with rail mounting systems nearly identical to the MIL-STD-1913 rail. A metric-upgraded version of the 1913 rail, the STANAG 4694 NATO Accessory Rail, was designed in conjunction with weapon manufacturers like Aimpoint, Beretta, Colt, FN Herstal and Heckler & Koch, and was approved by the NATO Army Armaments Group (NAAG), Land Capability Group 1 Dismounted Soldier (LCG1-DS) on May 8, 2009. Many firearm manufacturers include a MIL-STD-1913 rail system from the factory, such as the Ruger Mini-14 Ranch Rifle. Design The rail consists of a strip undercut to form a "flattened T" with a hexagonal top cross-section, with cross slots interspersed with flats that allow accessories to be slid into place from the end of the rail and then locked in place. It is similar in concept to the earlier commercial Weaver rail mount used to mount telescopic sights, but is taller and has wider slots at regular intervals along the entire length. The MIL-STD-1913 locking slot width is . The spacing of slot centres is and the slot depth is . Comparison to Weaver rail The only significant difference between the MIL-STD-1913 rail and the similar Weaver rail mount are the size and shapes of the slots. Whereas the earlier Weaver rail is modified from a low, wide dovetail rail and has rounded slots, the 1913 rail has a more pronounced angular section and square-bottomed slots. This means that an accessory designed for a Weaver rail will fit onto a MIL-STD-1913 rail whereas the opposite might not be possible, unless the slots in the Weaver rail are modified to have square bottoms. While some accessories are designed to fit on both Weaver and 1913 rails, most 1913 compatible devices will not fit on Weaver rails. From May 2012, most mounting rails are cut to MIL-STD-1913 standards. Many accessories can be secured to a rail with a single spring-loaded retaining pin. 
Designed to mount heavy sights of various kinds, a great variety of accessories and attachments are now available and the rails are no longer confined to the rear upper surface (receiver) of long arms but are either fitted to or machine milled into the upper, side or lower surfaces of all manner of weapons from crossbows to pistols and long arms up to and including anti-materiel rifles. Impact Because of their many uses, 1913 rails and accessories have replaced iron sights in the design of many firearms and are available as aftermarket add-on parts for most actions that do not have them integrated, and they are also on the undersides of semi-automatic pistol frames and grips. Their usefulness has led to them being used in paintball, gel blasters and airsoft. See also Third Arm Weapon Interface System Warsaw Pact rail Zeiss rail References External links Picatinny Rail Specifications Firearm components Mechanical standards Military equipment introduced in the 1990s
Picatinny rail
Technology,Engineering
963
2,161,528
https://en.wikipedia.org/wiki/Yan%20tan%20tethera
Yan Tan Tethera or yan-tan-tethera is a sheep-counting system traditionally used by shepherds in Northern England and some other parts of Britain. The words are numbers taken from Brythonic Celtic languages such as Cumbric which had died out in most of Northern England by the sixth century, but they were commonly used for sheep counting and counting stitches in knitting until the Industrial Revolution, especially in the fells of the Lake District. Though most of these number systems fell out of use by the turn of the 20th century, some are still in use. Origin and development Sheep-counting systems ultimately derive from Brythonic Celtic languages, such as Cumbric; Tim Gay writes: “[Sheep-counting systems from all over the British Isles] all compared very closely to 18th-century Cornish and modern Welsh". It is impossible, given the corrupted form in which they have survived, to be sure of their exact origin. The counting systems have changed considerably over time. A particularly common tendency is for certain pairs of adjacent numbers to come to resemble each other by rhyme (notably the words for 1 and 2, 3 and 4, 6 and 7, or 8 and 9). Still, multiples of five tend to be fairly conservative; compare bumfit with Welsh , in contrast with standard English fifteen. Use in sheep counting Like most Celtic numbering systems, they tend to be vigesimal (based on the number twenty), but they usually lack words to describe quantities larger than twenty; this is not a limitation of either modernised decimal Celtic counting systems or the older ones. To count a large number of sheep, a shepherd would repeatedly count to twenty, placing a mark on the ground, or move a hand to another mark on a shepherd's crook, or drop a pebble into a pocket to represent each score (e.g. 5 score sheep = 100 sheep). Importance of keeping count In order to keep accurate records (e.g. of birth and death) and to be alert to instances of straying, shepherds must perform frequent head-counts of their flocks. Dating back at least to the medieval period, and continuing to the present in some areas like Slaidburn, farms were granted fell rights, allowing them access to common grazing land. To prevent overgrazing, it was vitally important for each farm to keep accurate, updated head-counts. Though fell rights are largely obsolete in modern agriculture except in upland areas, farms are often subsidised and taxed according to the quantity of their sheep. For this reason, accurate counts are still necessary, and must be performed frequently. Generally, a count is the first action performed in the morning and the last action performed at night. A count is made after moving the sheep from one pasture to another, and after any operation involving the sheep, such as shearing, tagging, foot-trimming, mulesing, etc., although sheep are far less likely to stray while being moved in a group rather than when grazing at large on open ground. Knitting Their use is also attested in a "knitting song" known to be sung around the middle of the nineteenth century in Wensleydale, Yorkshire, beginning "yahn, tayhn, tether, mether, mimph". Modern usage The counting system has been used for products sold within Northern England, such as prints, beers, alcoholic sparkling water (hard seltzer in U.S.), and yarns, as well as in artistic works referencing the region, such as Harrison Birtwistle's 1986 opera Yan Tan Tethera. 
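The score-based tallying described above is easy to state in a few lines of code; the sketch below uses twenty-to-a-score arithmetic and, purely as a sample, the handful of dialect words that appear in this article (regional word lists differ considerably, so the mapping is illustrative rather than authoritative).

```python
# Illustrative score-based sheep tally: count to twenty, drop a pebble (one "score"),
# and start again. The few dialect words shown are ones mentioned in this article;
# full regional word lists vary, so treat this mapping as a sample only.
WORDS = {1: "yan", 2: "tan", 3: "tethera", 4: "methera", 5: "pip", 15: "bumfit"}

def tally(total_sheep):
    """Return (pebbles dropped, sheep counted since the last pebble)."""
    return divmod(total_sheep, 20)

def total(scores, remainder):
    return 20 * scores + remainder

pebbles, left_over = tally(103)
word = WORDS.get(left_over, str(left_over))
print(f"103 sheep -> {pebbles} pebbles (scores) plus {left_over} ('{word}') still on the count")
assert total(pebbles, left_over) == 103
```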
Jake Thackray's song "Old Molly Metcalfe" from his 1972 album Bantam Cock uses the Swaledale "Yan Tan Tether Mether Pip" as a repeating lyrical theme. Garth Nix used the counting system to name the seven Grotesques in his novel Grim Tuesday. Yan or yen The word yan or yen for 'one' in Cumbrian, Northumbrian, and some Yorkshire dialects generally represents a regular development in Northern English in which the Old English long vowel <ā> was broken into , and so on. This explains the shift to yan and ane from the Old English , which is itself derived from the Proto-Germanic . Another example of this development is the Northern English word for 'home', hame, which has forms such as hyem, yem and yam all deriving from the Old English . Systems by region Yorkshire and Lancashire Lincolnshire, Derbyshire and County Durham Southwest England Cumberland, and Westmorland Wilts, Scots, Lakes, Dales and Welsh Note: Scots here means "Scots" not "Gaelic" Numerals in Brythonic Celtic languages See also Counting-out game References Further reading Rawnsley, Hardwicke Drummmond (1987) "Yan tyan tethera: counting sheep". Woolley: Fleece Press External links Breton numerals Carol Justus's use of this numbering system to explain pre-decimal counting systems The Sheep Counting Score – By Walter Skeat, 1910 Modern Welsh decimal system and older vigesimal system in full Sheep farming in the United Kingdom Languages of the United Kingdom British English Celtic words and phrases English words and phrases Numeral systems
Yan tan tethera
Mathematics
1,079
2,814,326
https://en.wikipedia.org/wiki/Effective%20atomic%20number%20%28compounds%20and%20mixtures%29
The atomic number of a material exhibits a strong and fundamental relationship with the nature of radiation interactions within that medium. There are numerous mathematical descriptions of different interaction processes that are dependent on the atomic number, Z. When dealing with composite media (i.e. a bulk material composed of more than one element), one therefore encounters the difficulty of defining Z. An effective atomic number in this context is equivalent to the atomic number but is used for compounds (e.g. water) and mixtures of different materials (such as tissue and bone). This is of most interest in terms of radiation interaction with composite materials. For bulk interaction properties, it can be useful to define an effective atomic number for a composite medium and, depending on the context, this may be done in different ways. Such methods include (i) a simple mass-weighted average, (ii) a power-law type method with some (very approximate) relationship to radiation interaction properties or (iii) methods involving calculation based on interaction cross sections. The latter is the most accurate approach (Taylor 2012), and the other more simplified approaches are often inaccurate even when used in a relative fashion for comparing materials. In many textbooks and scientific publications, the following - simplistic and often dubious - sort of method is employed. One such proposed formula for the effective atomic number, Zeff, is as follows: Zeff = (Σ_n f_n × Z_n^2.94)^(1/2.94), where f_n is the fraction of the total number of electrons associated with each element, and Z_n is the atomic number of each element. An example is that of water (H2O), made up of two hydrogen atoms (Z=1) and one oxygen atom (Z=8), the total number of electrons is 1+1+8 = 10, so the fraction of electrons for the two hydrogens is (2/10) and for the one oxygen is (8/10). So the Zeff for water is: Zeff = (0.2 × 1^2.94 + 0.8 × 8^2.94)^(1/2.94) ≈ 7.42. The effective atomic number is important for predicting how photons interact with a substance, as certain types of photon interactions depend on the atomic number. The exact formula, as well as the exponent 2.94, can depend on the energy range being used. As such, readers are reminded that this approach is of very limited applicability and may be quite misleading. This 'power law' method, while commonly employed, is of questionable appropriateness in contemporary scientific applications within the context of radiation interactions in heterogeneous media. This approach dates back to the late 1930s when photon sources were restricted to low-energy x-ray units. The exponent of 2.94 relates to an empirical formula for the photoelectric process which incorporates a ‘constant’ of 2.64 × 10⁻²⁶, which is in fact not a constant but rather a function of the photon energy. A linear relationship with Z^2.94 has been shown for a limited number of compounds for low-energy x-rays, but within the same publication it is shown that many compounds do not lie on the same trendline. As such, for polyenergetic photon sources (in particular, for applications such as radiotherapy), the effective atomic number varies significantly with energy. It is possible to obtain a much more accurate single-valued Zeff by weighting against the spectrum of the source. The effective atomic number for electron interactions may be calculated with a similar approach. The cross-section based approach for determining Zeff is obviously much more complicated than the simple power-law approach described above, and this is why freely-available software has been developed for such calculations.
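The worked water example above translates directly into a few lines of code; the function below implements the power-law estimate with the quoted exponent of 2.94 and reproduces the ≈7.42 figure for water. As the text stresses, this simple formula is energy-dependent and of limited validity, so the sketch is only a convenience for reproducing that style of estimate.

```python
# Power-law estimate of the effective atomic number (exponent 2.94, as quoted above).
# Element data here are just H (Z=1) and O (Z=8) for the water example.
def z_eff_power_law(electron_counts, m=2.94):
    """electron_counts: dict mapping atomic number Z -> number of electrons contributed."""
    total = sum(electron_counts.values())
    s = sum((n / total) * Z**m for Z, n in electron_counts.items())
    return s ** (1.0 / m)

water = {1: 2, 8: 8}   # H2O: two H electrons, eight O electrons
print(f"Zeff(water) ≈ {z_eff_power_law(water):.2f}")   # ≈ 7.42
```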
References Eisberg and Resnick, Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles. Atomic physics
Effective atomic number (compounds and mixtures)
Physics,Chemistry
733
23,019,319
https://en.wikipedia.org/wiki/Cucurbitin
Cucurbitin is an amino acid and a carboxypyrrolidine that is found in Cucurbita seeds. Cucurbitin causes degenerative changes in the reproductive organs of parasitic flatworms called flukes. References 2,3-Diaminopropionic acids Pyrrolidines Plant toxins
Cucurbitin
Chemistry
73
571,480
https://en.wikipedia.org/wiki/Absorbed%20dose
Absorbed dose is a dose quantity which is the measure of the energy deposited in matter by ionizing radiation per unit mass. Absorbed dose is used in the calculation of dose uptake in living tissue in both radiation protection (reduction of harmful effects), and radiology (potential beneficial effects, for example in cancer treatment). It is also used to directly compare the effect of radiation on inanimate matter such as in radiation hardening. The SI unit of measure is the gray (Gy), which is defined as one joule of energy absorbed per kilogram of matter. The older, non-SI CGS unit, the rad, is sometimes also used, predominantly in the USA. Deterministic effects Conventionally, in radiation protection, unmodified absorbed dose is only used for indicating the immediate health effects due to high levels of acute dose. These are tissue effects, such as in acute radiation syndrome, which are also known as deterministic effects. These are effects which are certain to happen in a short time. The time between exposure and vomiting may be used as a heuristic for quantifying a dose when more precise means of testing are unavailable. Effects of acute radiation exposure Radiation therapy Dose computation The absorbed dose is equal to the radiation exposure (ions or C/kg) of the radiation beam multiplied by the ionization energy of the medium to be ionized. For example, the ionization energy of dry air at 20 °C and 101.325 kPa of pressure is 33.97 J/C (33.97 eV per ion pair). Therefore, an exposure of 2.58 × 10^−4 C/kg (1 roentgen) would deposit an absorbed dose of 8.76 × 10^−3 J/kg (0.00876 Gy or 0.876 rad) in dry air at those conditions. When the absorbed dose is not uniform, or when it is only applied to a portion of a body or object, an absorbed dose representative of the entire item can be calculated by taking a mass-weighted average of the absorbed doses at each point. More precisely, the mass-averaged dose is D_T = (∫_T D(x,y,z) ρ(x,y,z) dV) / (∫_T ρ(x,y,z) dV), where D_T is the mass-averaged absorbed dose of the entire item T; T is the item of interest; D(x,y,z) is the absorbed dose as a function of location; ρ(x,y,z) is the density (mass per unit volume) as a function of location; and V is volume. Stochastic risk - conversion to equivalent dose For stochastic radiation risk, defined as the probability of cancer induction and genetic effects occurring over a long time scale, consideration must be given to the type of radiation and the sensitivity of the irradiated tissues, which requires the use of modifying factors to produce a risk factor in sieverts. One sievert carries with it a 5.5% chance of eventually developing cancer based on the linear no-threshold model. This calculation starts with the absorbed dose. To represent stochastic risk the dose quantities equivalent dose H_T and effective dose E are used, and appropriate dose factors and coefficients are used to calculate these from the absorbed dose. Equivalent and effective dose quantities are expressed in units of the sievert or rem which implies that biological effects have been taken into account. The derivation of stochastic risk is in accordance with the recommendations of the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU). The coherent system of radiological protection quantities developed by them is shown in the accompanying diagram. For whole-body radiation with gamma rays or X-rays, the modifying factors are numerically equal to 1, which means that in that case the dose in grays equals the dose in sieverts.
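A small numerical check of the air-exposure conversion in the dose computation above, in Python; the function and constant names are illustrative, and the value of 33.97 J/C is the dry-air figure quoted in the text:

```python
# Convert an X-ray/gamma exposure in dry air to absorbed dose in air.

W_OVER_E_AIR = 33.97               # J/C, energy deposited per unit charge liberated in dry air
C_PER_KG_PER_ROENTGEN = 2.58e-4    # an exposure of 1 roentgen expressed in SI units

def absorbed_dose_in_air_gray(exposure_c_per_kg):
    """Absorbed dose in dry air (Gy) for a given exposure (C/kg)."""
    return exposure_c_per_kg * W_OVER_E_AIR

dose_gy = absorbed_dose_in_air_gray(C_PER_KG_PER_ROENTGEN)
print(f"{dose_gy:.5f} Gy = {dose_gy * 100:.3f} rad")  # ~0.00876 Gy = 0.876 rad
```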
Development of the absorbed dose concept and the gray Wilhelm Röntgen first discovered X-rays on November 8, 1895, and their use spread very quickly for medical diagnostics, particularly broken bones and embedded foreign objects where they were a revolutionary improvement over previous techniques. Due to the wide use of X-rays and the growing realisation of the dangers of ionizing radiation, measurement standards became necessary for radiation intensity and various countries developed their own, but using differing definitions and methods. Eventually, in order to promote international standardisation, the first International Congress of Radiology (ICR) meeting in London in 1925, proposed a separate body to consider units of measure. This was called the International Commission on Radiation Units and Measurements, or ICRU, and came into being at the Second ICR in Stockholm in 1928, under the chairmanship of Manne Siegbahn. One of the earliest techniques of measuring the intensity of X-rays was to measure their ionising effect in air by means of an air-filled ion chamber. At the first ICRU meeting it was proposed that one unit of X-ray dose should be defined as the quantity of X-rays that would produce one esu of charge in one cubic centimetre of dry air at 0 °C and 1 standard atmosphere of pressure. This unit of radiation exposure was named the roentgen in honour of Wilhelm Röntgen, who had died five years previously. At the 1937 meeting of the ICRU, this definition was extended to apply to gamma radiation. This approach, although a great step forward in standardisation, had the disadvantage of not being a direct measure of the absorption of radiation, and thereby the ionisation effect, in various types of matter including human tissue, and was a measurement only of the effect of the X-rays in a specific circumstance; the ionisation effect in dry air. In 1940, Louis Harold Gray, who had been studying the effect of neutron damage on human tissue, together with William Valentine Mayneord and the radiobiologist John Read, published a paper in which a new unit of measure, dubbed the "gram roentgen" (symbol: gr) was proposed, and defined as "that amount of neutron radiation which produces an increment in energy in unit volume of tissue equal to the increment of energy produced in unit volume of water by one roentgen of radiation". This unit was found to be equivalent to 88 ergs in air, and made the absorbed dose, as it subsequently became known, dependent on the interaction of the radiation with the irradiated material, not just an expression of radiation exposure or intensity, which the roentgen represented. In 1953 the ICRU recommended the rad, equal to 100 erg/g, as the new unit of measure of absorbed radiation. The rad was expressed in coherent cgs units. In the late 1950s, the CGPM invited the ICRU to join other scientific bodies to work on the development of the International System of Units, or SI. It was decided to define the SI unit of absorbed radiation as energy deposited per unit mass which is how the rad had been defined, but in MKS units it would be J/kg. This was confirmed in 1975 by the 15th CGPM, and the unit was named the "gray" in honour of Louis Harold Gray, who had died in 1965. The gray was equal to 100 rad, the cgs unit. Other uses Absorbed dose is also used to manage the irradiation and measure the effects of ionising radiation on inanimate matter in a number of fields. 
Component survivability Absorbed dose is used to rate the survivability of devices such as electronic components in ionizing radiation environments. Radiation hardening The measurement of the dose absorbed by inanimate matter is vital in the process of radiation hardening, which improves the resistance of electronic devices to radiation effects. Food irradiation Absorbed dose is the physical dose quantity used to verify that irradiated food has received the dose required for effectiveness. Variable doses are used depending on the application and can be as high as 70 kGy. Radiation-related quantities The following table shows radiation quantities in SI and non-SI units: Although the United States Nuclear Regulatory Commission permits the use of the units curie, rad, and rem alongside SI units, the European Union's units of measurement directives required that their use for "public health ... purposes" be phased out by 31 December 1985. See also Kerma (physics) Mean glandular dose Units of radiation dose Notes References Literature External links Specific Gamma-Ray Dose Constants for Nuclides Important to Dosimetry and Radiological Assessment, Laurie M. Unger and D. K. Trubey, Oak Ridge National Laboratory, May 1982 - contains gamma-ray dose constants (in tissue) for approximately 500 radionuclides. Radioactivity quantities Radiobiology Radiation protection
Absorbed dose
Physics,Chemistry,Mathematics,Biology
1,726
55,805,806
https://en.wikipedia.org/wiki/NGC%20496
NGC 496, also occasionally referred to as PGC 5037, UGC 927 or GC 288, is a spiral galaxy in the constellation Pisces. It is located approximately 250 million light-years from the Solar System and was discovered on 12 September 1784 by astronomer William Herschel. Observation history The object was discovered by Herschel along with NGC 495 and NGC 499. He initially described the discovery as "Three [NGC 496 along with NGC 495 and 499], eS and F, forming a triangle". As he observed the trio again the next night, he was able to make out more detail: "Three, forming a [right triangle]; the [right angle] to the south NGC 499, the short leg preceding [NGC 496], the long towards the north [NGC 495]. Those in the legs [NGC 496 and 495] the faintest imaginable; that at the rectangle [NGC 499] a deal larger and brighter, but still very faint." NGC 496 was later also observed by Bindon Blood Stoney. This position is also noted in the New General Catalogue. See also Spiral galaxy List of NGC objects (1–1000) Pisces (constellation) References External links SEDS Spiral galaxies Pisces (constellation) 0496 005061 Astronomical objects discovered in 1784 Discoveries by William Herschel 927
NGC 496
Astronomy
289
11,548,124
https://en.wikipedia.org/wiki/Peziotrichum%20corticola
Peziotrichum corticola is an ascomycete fungus that is a plant pathogen. It was first discovered in India by Massee. Rhinocladium corticola is a known synonym. P. corticola causes black-band disease on the leaves and bark of mango trees. Black-band disease is little known, but highly infectious; it has caused significant damage to mango yield in India since 2009. References Fungal plant pathogens and diseases Nectriaceae Fungus species
Peziotrichum corticola
Biology
100
25,464,970
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20October%204%2C%202051
A partial solar eclipse will occur at the Moon's ascending node of orbit between Wednesday, October 4 and Thursday, October 5, 2051, with a magnitude of 0.6024. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth. The partial solar eclipse will be visible for parts of southeastern Australia, New Zealand, and Antarctica. Eclipse details Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2051 A partial solar eclipse on April 11. A total lunar eclipse on April 26. A partial solar eclipse on October 4. A total lunar eclipse on October 19. Metonic Preceded by: Solar eclipse of December 16, 2047 Followed by: Solar eclipse of July 24, 2055 Tzolkinex Preceded by: Solar eclipse of August 23, 2044 Followed by: Solar eclipse of November 16, 2058 Half-Saros Preceded by: Lunar eclipse of September 29, 2042 Followed by: Lunar eclipse of October 9, 2060 Tritos Preceded by: Solar eclipse of November 4, 2040 Followed by: Solar eclipse of September 3, 2062 Solar Saros 125 Preceded by: Solar eclipse of September 23, 2033 Followed by: Solar eclipse of October 15, 2069 Inex Preceded by: Solar eclipse of October 25, 2022 Followed by: Solar eclipse of September 13, 2080 Triad Preceded by: Solar eclipse of December 4, 1964 Followed by: Solar eclipse of August 5, 2138 Solar eclipses 2051–2054 Saros 125 Metonic series Tritos series Inex series References External links http://eclipse.gsfc.nasa.gov/SEplot/SEplot2051/SE2051Oct04P.GIF 2051 in science 2051 10 4 2051 10 4
Solar eclipse of October 4, 2051
Astronomy
539
71,364,289
https://en.wikipedia.org/wiki/RR%20Lyncis
RR Lyncis is a star system in the northern constellation of Lynx, abbreviated RR Lyn. It is an eclipsing binary of the Algol type, one of the closest in the northern sky, at an estimated distance of approximately 263 light years based on parallax measurements. The system is faintly visible to the naked eye with a combined apparent visual magnitude of 5.53. During the primary eclipse the brightness drops to 6.03, while it decreases to magnitude 5.90 with the secondary eclipse. The system is drifting closer to the Sun with a radial velocity of −12 km/s. This star was found to have a variable radial velocity by W. S. Adams, based on measurements taken in 1911, which suggested it is a spectroscopic binary system. At the time it was identified as Boss 1607 and Groom 1149. Orbital elements for the binary were first published in 1915 by W. E. Harper. In 1931, C. M. Huffer determined Boss 1607 to be an eclipsing binary, based on a light curve generated using photoelectric measurements. This showed a period of 9.9450 days with a magnitude difference of 1.20 between the components. N. G. Roman in 1949 found this to be a metallic-line star and a possible member of the Ursa Major stream. In 1960, C. and M. Jaschek published a spectral analysis of RR Lyn that showed hydrogen lines for a star of type A7, a K-line of type A3, and metallic lines of type F0 or cooler. Kh. F. Khaliullin and A. I. Khaliullina in 2002 found that the timing of the primary and secondary eclipses underwent quasi-periodic oscillations. This may be explained by a third body with a mass 90% that of the Sun in orbit around the pair. However, as of 2006 the presence of this object has not been confirmed through spectroscopic measurement. This is a detached double-lined spectroscopic binary with an orbital period of 9.95 days and a small eccentricity. The orbital plane is inclined at an angle of 87.5°, so both stars are seen to eclipse each other once per orbit. The primary component is a slightly-evolved Am star with 1.9 times the mass and 2.6 times the radius of the Sun. The secondary is an F-type main-sequence star with 1.5 times the mass of the Sun and 1.6 times the Sun's radius. The system exhibits pulsation behavior, most of which is attributed to the secondary. The higher frequency modes are Delta Scuti-type pulsations, while the intermediate frequencies are of the Gamma Doradus type. Lower frequency pulsations may be tidally-excited. The system is about one billion years old. References Further reading F-type main-sequence stars Am stars Delta Scuti variables Gamma Doradus variables Algol variables Spectroscopic binaries Lynx (constellation) Durchmusterung objects 2291 044691 030651 Lyncis, RR
RR Lyncis
Astronomy
635
65,611,509
https://en.wikipedia.org/wiki/List%20of%20plant%20genus%20names%20with%20etymologies%20%28L%E2%80%93P%29
Since the first printing of Carl Linnaeus's Species Plantarum in 1753, plants have been assigned one epithet or name for their species and one name for their genus, a grouping of related species. Many of these plants are listed in Stearn's Dictionary of Plant Names for Gardeners. William Stearn (1911–2001) was one of the pre-eminent British botanists of the 20th century: a Librarian of the Royal Horticultural Society, a president of the Linnean Society and the original drafter of the International Code of Nomenclature for Cultivated Plants. The first column below contains seed-bearing genera from Stearn and other sources as listed, excluding those names that no longer appear in more modern works, such as Plants of the World by Maarten J. M. Christenhusz (lead author), Michael F. Fay and Mark W. Chase. Plants of the World is also used for the family and order classification for each genus. The second column gives a meaning or derivation of the word, such as a language of origin. The last two columns indicate additional citations. Key Latin: = derived from Latin (otherwise Greek, except as noted) Ba = listed in Ross Bayton's The Gardener's Botanical Bu = listed in Lotte Burkhardt's Index of Eponymic Plant Names CS = listed in both Allen Coombes's The A to Z of Plant Names and Stearn's Dictionary of Plant Names for Gardeners G = listed in David Gledhill's The Names of Plants St = listed in Stearn's Dictionary of Plant Names for Gardeners Genera See also Glossary of botanical terms List of Greek and Latin roots in English List of Latin and Greek words commonly used in systematic names List of plant genera named for people: A–C, D–J, K–P, Q–Z List of plant family names with etymologies Notes Citations References See http://creativecommons.org/licenses/by/4.0/ for license. Further reading Available online at the Perseus Digital Library. Available online at the Perseus Digital Library. Systematic Greek words and phrases Systematic Systematic Taxonomy (biology) Glossaries of biology Gardening lists Genus names with etymologies (L-P) Etymologies, L Wikipedia glossaries using tables
List of plant genus names with etymologies (L–P)
Biology
477
32,421,458
https://en.wikipedia.org/wiki/Ophiuchus%20in%20Chinese%20astronomy
The modern constellation Ophiuchus lies across two of the quadrants symbolized by the Azure Dragon of the East (東方青龍, Dōng Fāng Qīng Lóng) and the Black Tortoise of the North (北方玄武, Běi Fāng Xuán Wǔ), and the Three Enclosures (三垣, Sān Yuán), which divide the sky in traditional Chinese uranography. The name of the western constellation in modern Chinese is 蛇夫座 (shé fū zuò), which means "the snake man constellation". Stars The map of the Chinese constellations in the area of the modern constellation Ophiuchus consists of: See also Traditional Chinese star names Chinese constellations References External links Ophiuchus – Chinese associations 香港太空館研究資源 中國星區、星官及星名英譯表 天象文學 台灣自然科學博物館天文教育資訊網 中國古天文 中國古代的星象系統 Astronomy in China Ophiuchus
Ophiuchus in Chinese astronomy
Astronomy
208
69,670,410
https://en.wikipedia.org/wiki/Iron%28III%29%20azide
Iron(III) azide, also called ferric azide, is a chemical compound with the formula Fe(N3)3. It is an extremely explosive, impact-sensitive, hygroscopic dark brown solid. This compound is used to prepare various azidoalkanes, such as n-butyl azide, from alkenes via formation of alkylboranes and subsequent anti-Markovnikov addition of an azide group. Preparation This compound is prepared by the reaction of sodium azide and iron(III) sulfate in methanol: Fe2(SO4)3 + 6 NaN3 → 2 Fe(N3)3 + 3 Na2SO4. Iron(III) azide can also be formed by pulse gamma-irradiation of a mixture of iron(II) perchlorate, sodium azide, and hydrogen peroxide. Under these conditions, a neutral N3 radical is formed, which oxidizes the iron(II) to iron(III); the iron(III) then promptly combines with azide ions. References Iron(III) compounds Azides
Iron(III) azide
Chemistry
201
8,384,010
https://en.wikipedia.org/wiki/List%20of%20fluid%20mechanics%20journals
This is a list of scientific journals related to the field of fluid mechanics. See also List of scientific journals List of physics journals List of materials science journals Fluid mechanics Fluid mechanics
List of fluid mechanics journals
Chemistry,Engineering
36
75,145,251
https://en.wikipedia.org/wiki/Crematogaster%20aurora
Crematogaster aurora is a valid species of myrmicine ant that lived in Baltic Europe about 46 million to 43 million years ago, during the Eocene epoch of the Cenozoic era. C. aurora is similar in appearance to ants of the genus Acanthomyrmex and shares some similarities with the ant genus Pristomyrmex. The only known fossil of C. aurora is a queen ant that is brown in coloration. It probably died by drowning in a lake approximately 46 million years ago. References Insects described in 2015 aurora Eocene insects of Europe Baltic amber Species known from a single specimen Fossil ant taxa
Crematogaster aurora
Biology
126
40,200,137
https://en.wikipedia.org/wiki/Comparison%20of%20crewed%20space%20vehicles
A number of different spacecraft have been used to carry people to and from outer space. Table code key Orbital and interplanetary space vehicles Suborbital space vehicles Footnotes See also Cargo spacecraft (robotic resupply spacecraft) Comparison of orbital launch systems Comparison of orbital rocket engines Comparison of space station cargo vehicles Human spaceflight References Technological comparisons
Comparison of crewed space vehicles
Technology
70
18,874,133
https://en.wikipedia.org/wiki/Polysubstance%20dependence
Polysubstance dependence refers to a type of substance use disorder in which an individual uses at least three different classes of substances indiscriminately and does not have a favorite substance that qualifies for dependence on its own. Although any combination of three substances can be used, studies have shown that alcohol is commonly used with another substance. One study on polysubstance use categorized participants who used multiple substances according to their substance of preference. The results of a longitudinal study on substance use led the researchers to observe that excessively using or relying on one substance increased the probability of excessively using or relying on another substance. Common combinations The three substances were cocaine, alcohol, and heroin, which implies that those three are very popular. Other studies have found that opiates, cannabis, amphetamines, hallucinogens, inhalants, and benzodiazepines are often used in combination as well. Presentation Associated cognitive impairments Cognition refers to what happens in the mind, such as mental functions like "perception, attention, memory, language, problem solving, reasoning, and decision making." Although many studies have looked at the cognitive impairments of individuals who are dependent on one substance, there are few researchers who have tried to determine the problems with cognitive functioning that are caused by dependence on multiple substances. Therefore, what is known about the effects of polysubstance dependence on mental abilities is based on the results of a few studies. Learning ability The effect of polysubstance dependence on learning ability is one area of interest to researchers. A study involving 63 polysubstance dependent women and 46 controls (participants who were not using substances) used the Benton Visual Retention Test (BVRT) and the California Verbal Learning Test (CVLT) to look at visual memory and verbal ability. This study showed that in polysubstance dependent women, verbal learning ability was significantly decreased, though visual memory was not affected. In addition, alcohol and cocaine use led to more severe issues with verbal learning, recall, and recognition. Memory, reasoning and decision making Sometimes studies about specific groups in the general population can be informative. One study decided to test the cognitive abilities of participants in rave parties who used multiple substances. To do this, they compared 25 rave party attenders with 27 control participants who were not using substances. The results of this study indicated that in general, the rave attender group did not perform as well on tasks that tested speed of information processing, working memory, knowledge of similarities between words, ability to attend to a task with interference in the background, and decision making. Certain substances were associated with particular mental functions, but the researchers suggested that the impairments for working memory and reasoning were caused by the misuse of multiple substances. Another study that tried to find differences between the effects of particular substances focused on people with polysubstance use who were seeking treatment for addictions to cannabis, cocaine, and heroin. They studied a group of people with polysubstance use and a group that was not dependent on any substances. 
Because alcohol was a common co-substance for nearly all of the people in the polysubstance use group, it was difficult to tell exactly which substances were affecting certain cognitive functions. The researchers found that, on executive function (higher-level cognitive processing) tasks, the polysubstance group consistently scored lower than the control group. In general, this meant that multiple substances negatively affected the polysubstance group's cognitive functioning. More specifically, the researchers found that the amount of cannabis and cocaine affected the verbal part of working memory, the reasoning task, and decision making, while cocaine and heroin had a similar negative effect on visual and spatial tasks, but cannabis particularly affected visual and spatial working memory. These results suggest that the combined use of cannabis, cocaine, and heroin impairs more cognitive functions more severely than if used separately. Alcohol's negative effects on learning, spatial abilities and memory have been shown in many studies. This raises a question: does using alcohol in combination with other substances impair cognitive functioning even more? One study decided to try to determine if people with polysubstance use who also recreationally use alcohol would display poorer performance on a verbal learning and memory test in comparison to those who consumed excessive amounts of alcohol specifically. The California Verbal Learning Test (CVLT) was used due to its ability to "quantify small changes in verbal learning and memory" by evaluating errors made during the test and the strategies used to make those errors. The results of this study showed that the group of people with polysubstance and alcohol use performed poorly on the CVLT recall and recognition tests compared to the group of people who consumed excessive alcohol only, which implies that polysubstance use impaired the memory and learning in a different way than the effects of alcohol alone can explain. Does length of abstinence matter? To examine whether abstinence for long periods of time helps people with polysubstance use recover their cognitive function, a group of researchers tested 207 polysubstance dependent men, of whom 73.4% were dependent on three or more substances. The researchers were interested in six areas of cognitive functioning, which included visual memory, verbal memory, knowledge of words, abstract reasoning, inhibition (interference), and attention. The study used the Benton Visual Retention Test (BVRT) for testing visual memory, the California Verbal Learning Test (CVLT) for verbal memory, the Wechsler Adult Intelligence Scale vocabulary portion for knowledge of words, the Booklet Category Test for abstract reasoning, the Stroop Neuropsychological Screening task for inhibition, and the Trail Making Test for attention. The results showed that neuropsychological ability did not improve with increases in the length of time abstinent. This suggests that polysubstance dependence leads to serious impairment which cannot be recovered much over the span of a year. Causes Biological There is data to support that some genes contribute to substance dependence. Some studies have focused on finding genes that predispose the person to be dependent on marijuana, cocaine, or heroin by studying genes that control a person's dopamine and opioid receptors, but no conclusive findings were reported.
Other researchers found a connection between dopamine receptor genes and dependency on a substance. A potential problem with this study was that alcohol is commonly used with another substance, so the results of the study may not have been caused by dependency on a single substance. This means that multiple substances may have been contributing to the results, but the researchers suggested that further research should be done. However, there are studies that have found evidence of the influence of genes on vulnerability to substance dependence. These studies often use genotype, or the genetic information found on a person's chromosomes, and phenotype, which consists of the visible features of a person, to look at genetic patterns. One study examined the phenotype and genotype of 1,858 participants from 893 families to look at differences in three nicotinic acetylcholine receptor genes found within these individuals. The experimenters found significant connections between receptor genes for nicotine and polysubstance dependence, which indicated that differences in these genes can create the risk of being dependent on multiple substances. Psychological A 1985 study conducted by Khantzian and Treece found that 65% of their opioid-dependent sample met criteria for a personality disorder diagnosis. In the same study, 93% of the sample had a comorbid disorder, implying that the comorbid disorder plays some role in the addiction. It has also been shown that depression and polysubstance dependence are often both present at the same time. If a person is genetically predisposed to be depressed, then they are at a higher risk of having polysubstance dependence. Possibly the most widely accepted cause of addictions is the self-medication hypothesis, which views substance addiction as a form of coping with stress through negative reinforcement, by temporarily alleviating awareness of or concerns over the stressor. People who use substances learn that the effects of each type of substance work to relieve or lessen painful states. They use substances as a form of self-medication to deal with difficulties of self-esteem, relationships, and self-care. Individuals with substance use disorders often are overwhelmed with emotions and painful situations and turn to substances as a coping method. Sociocultural The sociocultural causes are areas in a person's life that might have influenced their decision to start and continue using multiple substances. Sociocultural causes can be divided into social causes and cultural causes. Social Causes: Some studies have shown that adolescents have one of the highest rates of polysubstance dependence. According to one study, this population, ages 12–25, represents about half of the nation's population that uses illicit substances. Of these individuals, half have started using substances by the end of 12th grade. This could be attributed to social expectations of peers, peer pressure to fit in, or a way of numbing their emotions. Some of these young kids start trying different substances initially to fit in, but then after a while they start to develop a tolerance for these substances and experience withdrawal if they don't have enough substances in their system, and eventually become dependent on the substances' effects. With tolerance comes the craving for additional substances to get high; this constant need for that feeling is polysubstance dependence.
In the older generations, polysubstance dependence had been linked to additional considerations such as personality disorder, homelessness, bipolar disorder, major depressive disorder and so on. The expense and difficulty of obtaining long-term medical care have also been linked to polysubstance dependence. Those who need psychological help sometimes use multiple substances as a type of self-medication to help manage their mental illnesses. Comorbidity of mental disorders For most of these disorders, in relation to polysubstance dependence, there is a vicious cycle that those with a dependence go through. First, ingesting the substance creates a need for more, which creates a dopamine surge, which then creates pleasure. As the dopamine subsides, the pleasure adds to the emotional and physical pain and triggers stress transmitters, which in turn creates a craving, which must then be medicated, and thus the cycle begins again. However, the next time they use, more of the substance will need to be used to get to the same degree of intoxication. Depression Scientists have hypothesized that the use of a substance either causes a mood disorder such as depression or at least contributes to a pre-existing one. Additionally, the substances that people with depression use can be a misguided method of self-medication in order to manage their depression. This is the classic chicken-or-egg question: does the pre-existing condition cause dependence, or does dependence cause the condition? The underlying mental illness needs to be identified and treated in conjunction with treating the polysubstance dependence in order to increase the success rate of treatment and decrease the probability of relapse. One specific study focused on alcohol and depression, because they are so commonly inter-related. Researchers have discovered that depression continues for several weeks after a patient has been rehabilitated, and that those who relapsed developed depression again. This means that the onset of depression happens after alcohol dependence occurs, which means that alcohol is a major contributor to depression. Eating disorders One study showed that patients who are recovering from an addiction, who have had an eating disorder in the past, often use food to try to replace the substance that they are no longer getting. Or they obsess over controlling their weight and appearance. Some rehabilitation centers have licensed nutritionists to help patients develop healthy eating habits to help them cope while recovering from their addictions. It is important that those who have a former eating disorder be taught how to eat healthfully, so they don't continuously switch from one addiction back to another. Diagnosis According to the DSM-IV, a diagnosis of polysubstance dependence must include a person who has used at least three different substances (not including caffeine or nicotine) indiscriminately, but does not have a preference for any specific one. In addition, they must show a minimum of three of the following symptoms listed below, all within the past twelve months. There is a distinct difference between a person having three separate dependence issues and having polysubstance dependence: polysubstance dependence means that they are not specifically addicted to one particular substance. This is often confused with multiple specific dependences present at the same time.
To elaborate, if a person is addicted to three separate substances, such as cocaine, methamphetamine, and alcohol, and is dependent on all three, then they would be diagnosed with three separate dependence disorders existing together (cocaine dependence, methamphetamine dependence, and alcohol dependence), not polysubstance dependence. In addition to using three different substances without a preference for one, there has to be a certain level of dysfunction in a person's life to qualify for a diagnosis of polysubstance dependence. One of the bigger challenges that often occurs when trying to diagnose is the fact that people don't always report what they are taking because they are afraid of getting into legal trouble. When coding polysubstance dependence in the DSM-IV, it would be the multiaxial diagnosis "304.80 Polysubstance Dependence"; next to the classification, it is accompanied by a list of other types of substance dependence (e.g. "305.00 Alcohol Abuse" or "305.60 Cocaine Abuse"). The DSM-IV requires at least three of the following symptoms present during a 12-month period for a diagnosis of polysubstance dependence. Tolerance: Use of increasingly high amounts of a substance or they find the same amount less and less effective (the amount has to be at least 50% more than the original amount needed). Withdrawal: Either withdrawal symptoms when the substance stops being used or the substance is used to prevent withdrawal symptoms. Loss of control: Repeated use of more substance than was initially planned or use of the substances over longer periods of time than was planned. Inability to stop using: Either unsuccessful attempts to cut down or stop using the substances, or a persistent desire to stop using. Time: Spending a lot of time studying substances, obtaining substances, using substances, being under the influence of substances, and recovering from the effects of substances. Interference with activities: Giving up or reducing the amount of time involved in recreational activities, social activities, and/or occupational activities because of the use of substances. Harm to self: Continuous use of substances despite having a physical or psychological problem caused by or made worse by the use of substances. DSM-5 eliminated polysubstance disorder; in DSM-5 the substances must be specified, among other related changes. Treatment Treatment for polysubstance dependence has many critical aspects. Substance rehabilitation is a lengthy and difficult process. Treatment must be individualized and last a sufficient amount of time to ensure the patient has overcome the addictions and to prevent relapse. The most common forms of treatment for polysubstance dependence include inpatient and outpatient treatment centers, counseling and behavioral treatments, and medications. It is important that treatments be continued throughout the patient's life in order to prevent relapse. It is a good idea that recovering addicts continue to attend social support groups or meet with counselors to ensure they do not relapse. Inpatient treatment center Inpatient treatment centers are treatment centers where addicts move to the facility while they are undergoing treatment. Inpatient treatment centers offer a safe environment where patients will not be exposed to potentially harmful situations during their treatments as they would on the outside. Inpatients usually undergo the process of detoxification. Detox involves withdrawing the user (usually medically) from all substances of concern.
During their stay in the treatment facility, patients learn to identify and manage their substance addictions and to find alternate ways to cope with whatever is the cause of their addiction. Outpatient treatments Outpatient treatments include many of the same activities offered in an inpatient treatment facility, but the patient is not protected by the secure and safe environment of an inpatient treatment center. For this reason, they are significantly less effective. The patient usually continues to hold a job and goes to treatment nightly. Twelve-step programs Both in-patient and out-patient treatments can offer introductions to Twelve-step programs such as Alcoholics Anonymous and Narcotics Anonymous. They offer regular meetings where members can discuss their experiences in a non-judgmental and supportive place. Cognitive behavioral therapy Also offered to patients are one-on-one counseling sessions and cognitive behavioral therapy (CBT). When looked at through a cognitive-behavioral perspective, addictions are the result of learned behaviors developed through positive experiences. In other words, when an individual uses a substance and receives desired results (happiness, reduced stress, etc.), it may become the preferred way of attaining those results, leading to addictions. The goal of CBT is to identify the needs that the addictions are being used to meet and to develop skills and alternative ways of meeting those needs. The therapist will work with the patient to educate them on their addictions and give them the skills they need to change their cognitions and behaviors. Addicts will learn to identify and correct problematic behavior. They will be taught how to identify harmful thoughts and substance cravings. CBT is an effective treatment for addictions. Medications Medications can be very helpful in the long-term treatment of polysubstance dependence. Medications are a useful aid in helping to prevent or reduce substance cravings. Another benefit of medications is helping to prevent relapse. Since substance use disorders affect brain functioning, medications assist in returning to normal brain functioning. People who use multiple substances require medications for each substance they use, as the current medications do not treat all substance use disorders simultaneously. Medications are a useful aid in treatments, but are not effective when they are the sole treatment method. Substance use disorder medications Methadone: treatment for heroin addiction. Naltrexone: Reduces opiate and alcohol cravings. Disulfiram: Induces intense nausea after drinking alcohol. Acamprosate: Normalizes brain chemistry disrupted by alcohol withdrawal and aids alcohol abstinence. Buprenorphine/naloxone: The two medications together reduce cravings and block the pleasure from opiates. Epidemiology There are not very many studies that have examined how often polysubstance dependence occurs or how many people are dependent on multiple substances. However, according to a study that analyzed the results from the National Epidemiological Survey on Alcohol and Related Conditions, approximately 215.5 out of a total of 43,093 individuals in the United States (0.5%) met the requirements for polysubstance use disorder. Another study suggested that the number of new cases of polysubstance dependence has been going up. This idea was supported by a study that took place in Munich, Germany.
A group of researchers chose to look at responses to a survey using the M-Composite International Diagnostic Interview (M-CIDI). The M-CIDI is a version of the Composite International Diagnostic Interview (CIDI). The researchers collected data from 3,021 participants, all between the ages of 14 and 24, to estimate the prevalence, or total number of cases, of substance use and of polysubstance use/dependence. The results of this study indicated that of the 17.3% who said that they regularly used substances, 40% said that they used more than one substance, but 3.9% specifically reported using three or more substances, indicating that there is a lot of overlap in the use of different substances. The researchers compared their results to earlier German studies and found that substance dependence seems to be increasing, at least in Germany. Gender differences Women and men differ in various ways when it comes to addictions. Research has shown that women are more likely to be polysubstance dependent. It has been noted that a larger percentage of women use licit (legal) substances such as tranquilizers, sedatives, and stimulants. On the other hand, men are more likely to use illicit (illegal) substances such as cocaine and methamphetamine. Research suggests that women addicts more frequently have a family history of substance use. When asked to describe their onset of addictions, women more frequently describe their addictions as sudden, whereas men describe theirs as gradual. Females have a higher percentage of fatty tissues and a lower percentage of body water than men. Therefore, women absorb substances more slowly. This means these substances are at a higher concentration in a woman's bloodstream. Female addicts are known to be at greater risk for fatty liver disease, hypertension, anemia, and other disorders. See also Self-medication References External links A great resource for more information: http://www.nida.nih.gov/nidahome.html Clinical pharmacology
Polysubstance dependence
Chemistry
4,324
33,314,617
https://en.wikipedia.org/wiki/Radiation%20sensitivity
Radiation sensitivity is the susceptibility of a material to physical or chemical changes induced by radiation. Examples of radiation-sensitive materials are silver chloride, photoresists and biomaterials. Pine trees are more susceptible to radiation than birch trees because pine DNA is more complex than that of birch. Examples of radiation-insensitive materials are metals and ionic crystals such as quartz and sapphire. The radiation effect depends on the type of the irradiating particles, their energy, and the number of incident particles per unit volume. Radiation effects can be transient or permanent. The persistence of the radiation effect depends on the stability of the induced physical and chemical change. Physical radiation effects that depend on diffusion can be thermally annealed, whereby the original structure of the material is recovered. Chemical radiation effects usually cannot be recovered. See also Geochronometry - the quantitative measurement of geologic time Fission track dating - the radiometric dating technique based on fission fragments Radiosensitivity - the susceptibility of living cells, tissues, organs or organisms to the effects of ionizing radiation References Radiation effects
Radiation sensitivity
Physics,Materials_science,Engineering
221
69,470,433
https://en.wikipedia.org/wiki/Surplus%20sharing
Surplus sharing is a kind of a fair division problem where the goal is to share the financial benefits of cooperation (the "economic surplus") among the cooperating agents. As an example, suppose there are several workers such that each worker i, when working alone, can gain some amount ui. When they all cooperate in a joint venture, the total gain is u1+...+un+s, where s>0. This s is called the surplus of cooperation, and the question is: what is a fair way to divide s among the n agents? When the only available information is the ui, there are two main solutions: Equal sharing: each agent i gets ui+s/n, that is, each agent gets an equal share of the surplus. Proportional sharing: each agent i gets ui+(s*ui/Σui), that is, each agent gets a share of the surplus proportional to his external value (similar to the proportional rule in bankruptcy). In other words, ui is considered a measure of the agent's contribution to the joint venture. Kolm calls the equal sharing "leftist" and the proportional sharing "rightist". Chun presents a characterization of the proportional rule. Moulin presents a characterization of the equal and proportional rule together by four axioms (in fact, any three of these axioms are sufficient): Separability - the division of surplus within any coalition T should depend only on the total amount allocated to T, and on the opportunity costs of agents within T. No advantageous reallocation - no coalition can benefit from redistributing its ui among its members (this is a kind of strategyproofness axiom). Additivity - for each agent i, the allocation to i is a linear function of the total surplus s. Path independence - for each agent i, the allocation to i from surplus s is the same as allocating a part of s, updating the ui, and then allocating the remaining part of s. Any pair of these axioms characterizes a different family of rules, which can be viewed as a compromise between equal and proportional sharing. When there is information about the possible gains of sub-coalitions (e.g., it is known how much agents 1,2 can gain when they collaborate in separation from the other agents), other solutions become available, for example, the Shapley value. See also Bankruptcy problem - a similar problem in which the goal is to share losses (negative gains). Cost-sharing mechanism - a similar problem in which the goal is to share costs. Frederic G. Mather, Both sides of profit sharing: an 1896 article about the need to share the surplus of work fairly between employees and employers. References Fair division
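A minimal Python sketch of the two rules described above; the function names are illustrative, and it assumes the ui are non-negative with at least one positive value for the proportional rule:

```python
# Equal vs. proportional sharing of a cooperative surplus s among n agents.

def equal_sharing(u, s):
    """Each agent i receives u[i] + s/n."""
    n = len(u)
    return [ui + s / n for ui in u]

def proportional_sharing(u, s):
    """Each agent i receives u[i] + s * u[i] / sum(u)."""
    total = sum(u)
    return [ui + s * ui / total for ui in u]

u = [10.0, 30.0]   # stand-alone gains of two workers
s = 8.0            # surplus created by cooperating

print(equal_sharing(u, s))         # [14.0, 34.0]
print(proportional_sharing(u, s))  # [12.0, 36.0]
```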
Surplus sharing
Mathematics
564
43,717,830
https://en.wikipedia.org/wiki/Mac%20OS%20Romanian%20encoding
Mac OS Romanian is a character encoding used on Apple Macintosh computers to represent the Romanian language. It is a derivative of Mac OS Roman. IBM uses code page 1285 (CCSID 1285) for Mac OS Romanian. Character set Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as ASCII. References Character sets Romanian
Mac OS Romanian encoding
Technology
99
12,229,416
https://en.wikipedia.org/wiki/Eternity%20II%20puzzle
The Eternity II puzzle (E2 or E II) is an edge-matching puzzle launched on 28 July 2007. It was developed by Christopher Monckton and marketed and copyrighted by TOMY UK Ltd as a successor to the original Eternity puzzle. The puzzle was part of a competition in which a $2 million prize was offered for the first complete solution. The competition ended at noon on 31 December 2010, with no solution being found. Description The Eternity II puzzle is an edge-matching puzzle which involves placing 256 square puzzle pieces into a 16 × 16 grid, constrained by the requirement to match adjacent edges. It has been designed to be difficult to solve by brute-force computer search. Each puzzle piece has its edges on one side marked with different shape/colour combinations (collectively called "colours" here), each of which must match precisely with its neighbouring side on each adjacent piece when the puzzle is complete. The other side of each piece is blank apart from an identifying number, and is not used in the puzzle. Thus, each piece can be used in only 4 orientations. There are 22 colours, not including the grey edges. Five of the colours are found exclusively in the 60 edge-pairs ("diamonds") in the outermost ring, i.e. between the border and corner pieces, while the other 17 are used in the remaining 420 "interior" edge-pairs. The colours are used evenly, with each of the 5 border colours used in exactly 12 edge-pairs, and each of the 17 inner colours used for either 24 edge-pairs (5 colours) or 25 edge-pairs (12 colours). The total number of edge-pairs is 480. One of the five border colours is not found on any corner piece, while all of the 17 inner colours are used at least once on a border piece. There are 4 corner pieces (with two grey sides), 56 border pieces (with one grey side) and 14^2 = 196 inner pieces (with four coloured sides). Each piece has a unique arrangement of colours, and none of the pieces are rotationally symmetric, so each of the 256 × 4 = 1024 choices of piece and orientation results in a different pattern of edge colours. The puzzle differs from the first Eternity puzzle in that there is a non-optional starter piece (a mandatory hint) which must be placed in a specified position and orientation near the centre of the board. Two clue puzzles were available with the launch of the product, which, if solved, each give a piece position (hint) on the main 256-piece puzzle. Clue Puzzle 1 is a 36-piece square (6 × 6) puzzle and Clue Puzzle 2 is a 72-piece rectangular (12 × 6) puzzle. Two additional clue puzzles of the same dimensions were made available in 2008: the 36-piece Clue Puzzle 3 and the 72-piece Clue Puzzle 4. The rule book states that the puzzle can be solved without using the hints. Complexity The number of possible configurations for the Eternity II puzzle, assuming all the pieces are distinct, and ignoring the fixed pieces with pre-determined positions, is 256! × 4^256, roughly 1.15 × 10^661. A tighter upper bound to the possible number of configurations can be achieved by taking into account the fixed piece in the center and the restrictions set on the pieces on the edge: 1 × 4! × 56! × 195! × 4^195, roughly 1.12 × 10^557. A further upper bound can be obtained by considering the position and orientation of the hint pieces obtained through the clue puzzles. In this case the position and orientation of five pieces are known, giving an upper bound of 4! × 56! × 191! × 4^191 = 3.11 × 10^545, yielding a search space 3.70 × 10^115 times smaller than the first approximation.
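The configuration counts quoted above can be reproduced with exact integer arithmetic; a short Python sketch (variable names are illustrative, and the expressions follow the formulas given in this section):

```python
# Upper bounds on the number of Eternity II configurations, as quoted above.
from math import factorial, log10

def approx(x):
    """Format a huge integer as 'm.mm x 10^e' for comparison with the quoted figures."""
    e = int(log10(x))
    return f"{x / 10**e:.2f} x 10^{e}"

naive   = factorial(256) * 4**256                                   # all 256 pieces and orientations free
bounded = factorial(4) * factorial(56) * factorial(195) * 4**195    # starter piece fixed, border/corner restrictions
hinted  = factorial(4) * factorial(56) * factorial(191) * 4**191    # four clue-puzzle hint pieces also fixed

print(approx(naive))    # ~1.15 x 10^661
print(approx(bounded))  # ~1.12 x 10^557
print(approx(hinted))   # ~3.11 x 10^545
```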
To first approximation, the edge-matching constraint reduces the number of valid configurations by a factor of (1/5) for every border edge-pair and (1/17) for every inner edge-pair. The number of valid configurations is then approximated by 4! × 56! × 196! × 4^196 × (1/5)^60 × (1/17)^420 ≈ 16.4, which is very close to unity. This indicates the puzzle has likely been designed to have only one or a few solutions, which maximizes the difficulty: more solutions (looser constraints, e.g. fewer colours) would make it easier to find a solution (one of many), while tighter constraints decrease the search space, making it easier to locate the (unique) solution. Optimization of the number of colours has been investigated empirically for smaller puzzles, bearing out this observation. Competition and solution After the first scrutiny date on 31 December 2008 it was announced that no complete solution had been found. A prize of $10,000 was awarded to Louis Verhaard from Lund in Sweden for a partial solution with 467 matching edges out of 480. Verhaard published three more partial solutions with the same number of matching edges. As of 30 January 2011, the official Eternity II site announced that "The final date for the correct solution of the Eternity II puzzle passes without a winner, and the $2m Prize for a correct solution to the Eternity II puzzle goes unclaimed." No verified complete solution to the Eternity 2 puzzle has ever been published. This includes Christopher Monckton's intended solution, which remains unpublished. Several fake solutions are known to have been circulated online. History and design The original Eternity puzzle was a tiling puzzle with a million-pound prize, created by Monckton. Launched in June 1999, it was solved by a computer search algorithm designed by Alex Selby and Oliver Riordan, which exploited combinatorial weaknesses of the original puzzle design. The prize money was paid out in full to Selby and Riordan. A puzzle with striking similarities to both Eternity puzzles, the Diamond Dilemma, had a deadline in 1990, 10 years before that of the original Eternity puzzle. It has fewer puzzle pieces (160, compared with 209 and 256 for the first two Eternity puzzles respectively), and yet the Diamond Dilemma has not been solved in over 25 years. The Eternity II puzzle was designed by Monckton in 2005, this time in collaboration with Selby and Riordan, who designed a computer program that generated the final Eternity II design. According to the mathematical game enthusiast Brendan Owen, the Eternity II puzzle appears to have been designed to avoid the combinatorial flaws of the previous puzzle, with design parameters which appear to have been chosen to make the puzzle as difficult as possible to solve. In particular, unlike the original Eternity puzzle, there are likely only to be a very small number of possible solutions to the problem. Owen estimates that a brute-force backtracking search might take around 2 steps to complete. Monckton was quoted by The Times in 2005 as saying: "Our calculations are that if you used the world's most powerful computer and let it run from now until the projected end of the universe, it might not stumble across one of the solutions." Although it has been demonstrated that the class of edge-matching puzzles, of which Eternity II is a special case, is in general NP-complete, the same can be said of the general class of polygon packing problems, of which the original Eternity puzzle was a special case.
Like the original Eternity puzzle, it is easy to find large numbers of ways to place substantial numbers of pieces on the board whose edges all match, making it seem that the puzzle is easy. However, given the low expected number of possible solutions, it is presumably astronomically unlikely that any given partial solution will lead to a complete solution. See also TetraVex, a similar simpler (no piece rotation or border pieces) edge-matching puzzle game from the Microsoft Entertainment Pack, shown to be NP-complete. References External links Official website (archived) Flash demo of a 4x4 puzzle from the original (now defunct) website Online solution visualizer Eternity II discussion forum (Groups.io) Description of Eternity II and discussion of solvers Description of Louis Verhaard's Eternity II solver used by Anna Karlsson Software: Open Source Matlab Eternity II Solver Open Source Eternity II Editor/Solver software Open Source Eternity II puzzle software E2Lab : Free Eternity II Editor/Solver software E2Solver : Open Source Eternity II puzzle solver Android app for Eternity II type edge matching puzzles. iPhone and iPad app for Eternity II type edge matching puzzles. Tiling puzzles Puzzle competitions Products introduced in 2007
Eternity II puzzle
Physics,Mathematics
1,753
31,381,517
https://en.wikipedia.org/wiki/United%20States%20Department%20of%20State%20panic%20button%20software
The panic button software is an application being developed by the United States Department of State as part of its "Internet freedom" programming. The program, which is designed for mobile devices, will allow users to wipe the contacts in their address books, history, and text messages, and also send out an alert to all of their contacts. The application, which was first reported by Reuters, will be used by qualified social activists. The application has received criticism that it may be used against American law enforcement. The United States has trained over 5,000 activists and plans to control the application's distribution. References Mobile software
United States Department of State panic button software
Technology
122
222,434
https://en.wikipedia.org/wiki/Table%20of%20divisors
The tables below list all of the divisors of the numbers 1 to 1000. A divisor of an integer n is an integer m, for which n/m is again an integer (which is necessarily also a divisor of n). For example, 3 is a divisor of 21, since 21/3 = 7 (and therefore 7 is also a divisor of 21). If m is a divisor of n, then so is −m. The tables below only list positive divisors. Key to the tables d(n) is the number of positive divisors of n, including 1 and n itself σ(n) is the sum of the positive divisors of n, including 1 and n itself s(n) is the sum of the proper divisors of n, including 1 but not n itself; that is, s(n) = σ(n) − n a deficient number is greater than the sum of its proper divisors; that is, s(n) < n a perfect number equals the sum of its proper divisors; that is, s(n) = n an abundant number is less than the sum of its proper divisors; that is, s(n) > n a highly abundant number has a sum of positive divisors that is greater than that of any lesser number; that is, σ(n) > σ(m) for every positive integer m < n. Counterintuitively, the first seven highly abundant numbers are not abundant numbers. a prime number has only 1 and itself as divisors; that is, d(n) = 2 a composite number has more than just 1 and itself as divisors; that is, d(n) > 2 a highly composite number has a number of positive divisors that is greater than that of any lesser number; that is, d(n) > d(m) for every positive integer m < n. Counterintuitively, the first two highly composite numbers are not composite numbers. a superior highly composite number has a ratio between its number of divisors and itself raised to some positive power that equals or exceeds that of any other number; that is, there exists some ε > 0 such that d(n)/n^ε ≥ d(m)/m^ε for every other positive integer m a primitive abundant number is an abundant number whose proper divisors are all deficient numbers a weird number is an abundant number that is not semiperfect; that is, no subset of the proper divisors of n sums to n 1 to 100 101 to 200 201 to 300 301 to 400 401 to 500 501 to 600 601 to 700 701 to 800 801 to 900 901 to 1000 Sortable 1-1000 See also External links Divisor function Elementary number theory Mathematics-related lists Mathematical tables Number-related lists
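As a concrete illustration of the quantities defined in the key above, the following short Python sketch computes d(n), σ(n) and s(n) by trial division and applies the deficient/perfect/abundant classification. The function names are illustrative only and not part of any established library.

```python
def divisors(n: int) -> list[int]:
    """All positive divisors of n, found by trial division up to sqrt(n)."""
    divs = set()
    i = 1
    while i * i <= n:
        if n % i == 0:
            divs.add(i)
            divs.add(n // i)
        i += 1
    return sorted(divs)

def classify(n: int) -> str:
    divs = divisors(n)
    d = len(divs)        # d(n): number of positive divisors
    sigma = sum(divs)    # sigma(n): sum of positive divisors
    s = sigma - n        # s(n): sum of proper divisors
    if s < n:
        kind = "deficient"
    elif s == n:
        kind = "perfect"
    else:
        kind = "abundant"
    return f"n={n}: d={d}, sigma={sigma}, s={s} ({kind})"

for n in (6, 12, 21):
    print(classify(n))
```

For instance, it confirms that 6 is perfect (s(6) = 1 + 2 + 3 = 6), 12 is abundant (s(12) = 16 > 12) and 21 is deficient (s(21) = 11 < 21), matching the corresponding rows of the tables.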
Table of divisors
Mathematics
578
1,342,704
https://en.wikipedia.org/wiki/Hyphomicrobiales
The Hyphomicrobiales (synonym Rhizobiales) are an order of Gram-negative Alphaproteobacteria. The rhizobia, which fix nitrogen and are symbiotic with plant roots, appear in several different families. The four families Nitrobacteraceae, Hyphomicrobiaceae, Phyllobacteriaceae, and Rhizobiaceae contain at least several genera of nitrogen-fixing, legume-nodulating, microsymbiotic bacteria. Examples are the genera Bradyrhizobium and Rhizobium. Species of the Methylocystaceae are methanotrophs; they use methanol (CH3OH) or methane (CH4) as their sole energy and carbon sources. Other important genera are the human pathogens Bartonella and Brucella, as well as Agrobacterium, an important tool in genetic engineering. Taxonomy Accepted families Aestuariivirgaceae Li et al. 2019 Afifellaceae Hördt et al. 2020 Ahrensiaceae Hördt et al. 2020 Alsobacteraceae Sun et al. 2018 Amorphaceae Hördt et al. 2020 Ancalomicrobiaceae Dahal et al. 2018 Aurantimonadaceae Hördt et al. 2020 Bartonellaceae Gieszczykiewicz 1939 (Approved Lists 1980) Beijerinckiaceae Garrity et al. 2006 Blastochloridaceae Hördt et al. 2020 Boseaceae Hördt et al. 2020 Breoghaniaceae Hördt et al. 2020 Brucellaceae Breed et al. 1957 (Approved Lists 1980) Chelatococcaceae Dedysh et al. 2016 Cohaesibacteraceae Hwang and Cho 2008 Devosiaceae Hördt et al. 2020 Hyphomicrobiaceae Babudieri 1950 (Approved Lists 1980) Kaistiaceae Hördt et al. 2020 Lichenibacteriaceae Pankratov et al. 2020 Lichenihabitantaceae Noh et al. 2019 Methylobacteriaceae Garrity et al. 2006 Methylocystaceae Bowman 2006 Nitrobacteraceae corrig. Buchanan 1917 (Approved Lists 1980) Notoacmeibacteraceae Huang et al. 2017 Parvibaculaceae Hördt et al. 2020 Phreatobacteraceae Hördt et al. 2020 Phyllobacteriaceae Mergaert and Swings 2006 Pleomorphomonadaceae Hördt et al. 2020 Pseudoxanthobacteraceae Hördt et al. 2020 Rhabdaerophilaceae Ming et al. 2020 Rhizobiaceae Conn 1938 (Approved Lists 1980) Rhodobiaceae Garrity et al. 2006 Roseiarcaceae Kulichevskaya et al. 2014 Salinarimonadaceae Cole et al. 2018 Segnochrobactraceae Akter et al. 2020 Stappiaceae Hördt et al. 2020 Tepidamorphaceae Hördt et al. 2020 Xanthobacteraceae Lee et al. 2005 Unassigned Genera The following genus has not been assigned to a family: Flaviflagellibacter Dong et al. 2019 Provisional Taxa These taxa have been published, but have not been validated according to the Bacteriological Code: "Nordella" La Scola et al. 2004 "Propylenellaceae" Liu et al. 2021 "Propylenella" Liu et al. 2021 "Propylenella binzhouense" Liu et al. 2021 "Thermopetrobacter" Sislak 2013 Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature and the phylogeny is based on whole-genome sequences. Natural genetic transformation Natural genetic transformation has been reported in at least four Hyphomicrobiales species: Agrobacterium tumefaciens, Methylobacterium organophilum, Ensifer adhaerens, and Bradyrhizobium japonicum. Natural genetic transformation is a sexual process involving DNA transfer from one bacterial cell to another through the intervening medium, and the integration of the donor sequence into the recipient genome by homologous recombination. See also Lar1 Notes References Further reading Bacteria orders Soil biology
Hyphomicrobiales
Biology
911
580,039
https://en.wikipedia.org/wiki/Goods
In economics, goods are items that satisfy human wants and provide utility, for example, to a consumer making a purchase of a satisfying product. Economics focuses on the study of economic goods, or goods that are scarce; in other words, producing the good requires expending effort or resources. Economic goods contrast with free goods such as air, for which there is an unlimited supply. A consumer good or "final good" is any item that is ultimately consumed, rather than used in the production of another good. For example, a microwave oven or a bicycle that is sold to a consumer is a final good or consumer good, but the components that are sold to be used in those goods are intermediate goods. For example, textiles or transistors can be used to make some further goods. Commercial goods are construed as tangible products that are manufactured and then made available for supply to be used in an industry of commerce. Commercial goods could be tractors, commercial vehicles, mobile structures, airplanes, and even roofing materials. Commercial and personal goods as categories are very broad and cover almost everything a person sees from the time they wake up in their home, on their commute to work, to their arrival at the workplace. Commodities may be used as a synonym for economic goods but often refer to marketable raw materials and primary products. Although common goods are tangible, certain classes of goods, such as information, only take intangible forms. For example, among other goods an apple is a tangible object, while news belongs to an intangible class of goods and can be perceived only by means of an instrument such as a printer or a television. Utility and characteristics of goods The change in utility (pleasure or satisfaction) gained by consuming one unit of a good is called its marginal utility. Goods are commonly considered to have diminishing marginal utility, which means that each additional unit consumed provides less additional utility than the one before. Some things are useful, but not scarce enough to have monetary value, such as the Earth's atmosphere; these are referred to as free goods. The opposite of a good is a bad—in other words, a 'bad' is anything with a negative value to the consumer. A bad lowers a consumer's overall welfare. Types of goods Goods' diversity allows for their classification into different categories based on distinctive characteristics, such as tangibility and (ordinal) relative elasticity. A tangible good like an apple differs from an intangible good like information because a person cannot physically hold the latter, whereas the former occupies physical space. Intangible goods differ from services in that final (intangible) goods are transferable and can be traded, whereas a service cannot. Price elasticity also differentiates types of goods. An elastic good is one for which there is a relatively large change in quantity demanded due to a relatively small change in price, and therefore is likely to be part of a family of substitute goods; for example, as pen prices rise, consumers might buy more pencils instead. An inelastic good is one for which there are few or no substitutes, such as tickets to major sporting events, original works by famous artists, and prescription medicine such as insulin. Complementary goods are generally more inelastic than goods in a family of substitutes.
For example, if a rise in the price of beef results in a decrease in the quantity of beef demanded, it is likely that the quantity of hamburger buns demanded will also drop, despite no change in buns' prices. This is because hamburger buns and beef (in Western culture) are complementary goods. Goods considered complements or substitutes are relative associations and should not be understood in a vacuum. The degree to which a good is a substitute or a complement depends on its relationship to other goods, rather than an intrinsic characteristic, and can be measured as cross elasticity of demand by employing statistical techniques such as covariance and correlation. Bads A bad is the opposite of a good, because its consumption or presence lowers the customer's utility. With goods, a two-party transaction results in the exchange of money for some object, as when money is exchanged for a car. With a bad, however, both money and the object in question go the same direction, as when a household gives up both money and garbage to a waste collector, meaning the garbage has a negative price: the waste collector is receiving both garbage and money and thus is paying a negative amount for the garbage. Goods classified by exclusivity and competitiveness Fourfold model of goods Goods can be classified based on their degree of excludability and rivalry (competitiveness). Since excludability can be measured on a continuous scale, some goods do not fall neatly into one of the four common categories used. There are four types of goods based on the characteristics of rivalry in consumption and excludability: public goods, private goods, common resources, and club goods. These four types plus examples for anti-rivalry appear in the accompanying table. Public goods Goods that are both non-rival and non-excludable are called public goods. In many cases, renewable resources, such as land, are common commodities but some of them are contained in public goods. Public goods are non-exclusive and non-competitive, meaning that individuals cannot be stopped from using them and anyone can consume these goods without hindering the ability of others to consume them. Examples in addition to the ones in the matrix are national parks, or firework displays. It is generally accepted by mainstream economists that the market mechanism will under-provide public goods, so these goods have to be produced by other means, including government provision. Public goods can also suffer from the free-rider problem. Private goods Private goods are excludable goods, which prevent other consumers from consuming them. Private goods are also rivalrous, because a good in private ownership cannot be used by someone else. That is to say, consuming some goods will deprive another consumer of the ability to consume them. Private goods are the most common type of goods. They include most things that have to be bought from a store, for example food, clothing, cars, parking spaces, etc. An individual who consumes an apple denies another individual the chance to consume the same one. It is excludable because consumption is only offered to those willing to pay the price. Common-pool resources Common-pool resources are rival in consumption and non-excludable. An example is that of fisheries, which harvest fish from a shared common resource pool of fish stock. Fish caught by one group of fishermen are no longer accessible to another group, thus being rivalrous.
However, oftentimes, due to an absence of well-defined property rights, it is difficult to restrict access to fishermen who may overfish. Club goods Club goods are excludable but not rivalrous in consumption. That is, not everyone can use the good, but when one individual has claim to use it, they do not reduce the amount or the ability of others to consume the good. Club goods are obtained by joining a specific club or organization; as a result, people who are not members are excluded. Examples in addition to the ones in the matrix are cable television, golf courses, and any merchandise provided to club members. A large television service provider would already have infrastructure in place which would allow for the addition of new customers without infringing on existing customers' viewing abilities. This would also mean that marginal cost would be close to zero, which satisfies the criteria for a good to be considered non-rival. However, access to cable TV services is only available to consumers willing to pay the price, demonstrating the excludability aspect. Economists set out these categories for goods according to their impact on consumers. The government is usually responsible for public goods and common goods, and enterprises are generally responsible for the production of private and club goods, although this is not always the case. History of the fourfold model of goods In 1977, Nobel winner Elinor Ostrom and her husband Vincent Ostrom proposed additional modifications to the existing classification of goods so as to identify fundamental differences that affect the incentives facing individuals. Their definitions are presented on the matrix. Elinor Ostrom proposed additional modifications to the classification of goods to identify fundamental differences that affect the incentives facing individuals: Replacing the term "rivalry of consumption" with "subtractability of use". Conceptualizing subtractability of use and excludability to vary from low to high rather than characterizing them as either present or absent. Overtly adding a very important fourth type of good—common-pool resources—that shares the attribute of subtractability with private goods and difficulty of exclusion with public goods. Forests, water systems, fisheries, and the global atmosphere are all common-pool resources of immense importance for the survival of humans on this earth. Changing the name of a "club" good to a "toll" good since goods that share these characteristics are provided by small-scale public as well as private associations. Expansion of Fourfold model: Anti-rivalrous Consumption can be extended to include "anti-rivalrous" consumption. Expansion of Fourfold model: Semi-Excludable The additional definition matrix shows the four common categories alongside some examples of fully excludable goods, semi-excludable goods and fully non-excludable goods. Semi-excludable goods are goods or services that are mostly successful in excluding non-paying customers, but can still be consumed by non-paying consumers. Examples include movies, books, or video games that can be easily pirated and shared for free. Trading of goods Goods are capable of being physically delivered to a consumer. Goods that are economic intangibles can only be stored, delivered, and consumed by means of media. Goods, both tangibles and intangibles, may involve the transfer of product ownership to the consumer.
Services do not normally involve transfer of ownership of the service itself, but may involve transfer of ownership of goods developed or marketed by a service provider in the course of the service. For example, the sale of storage-related goods could consist of storage sheds, storage containers, and storage buildings as tangibles, or of storage supplies such as boxes, bubble wrap, tape, and bags, which are consumables. Distributing electricity among consumers, by contrast, is a service provided by an electric utility company. This service can only be experienced through the consumption of electrical energy, which is available in a variety of voltages and, in this case, is the economic good produced by the electric utility company. While the service (namely, the distribution of electrical energy) is a process that remains in its entirety in the ownership of the electric service provider, the good (namely, electric energy) is the object of ownership transfer. The consumer becomes an electric energy owner by purchase and may use it for any lawful purpose, just like any other good. See also Bad (economics) Commodification Fast-moving consumer goods Final goods Goods and services Intangible asset Intangible good List of economics topics Property Tangible property Service (economics) Notes References Bannock, Graham et al. (1997). Dictionary of Economics, Penguin Books. Milgate, Murray (1987), "goods and commodities," The New Palgrave: A Dictionary of Economics, v. 2, pp. 546–48. Includes historical and contemporary uses of the terms in economics. Vuaridel, R. (1968). Une définition des biens économiques. (A definition of economic goods). L'Année sociologique (1940/1948-), 19, 133–170. External links Utility Supply chain management Microeconomics
Goods
Physics
2,390
22,506,127
https://en.wikipedia.org/wiki/Inherent%20chirality
In chemistry, inherent chirality is a property of asymmetry in molecules arising, not from a stereogenic or chiral center, but from a twisting of the molecule in 3-D space. The term was first coined by Volker Boehmer in a 1994 review, to describe the chirality of calixarenes arising from their non-planar structure in 3-D space. This phenomenon was described as resulting from "the absence of a plane of symmetry or an inversion center in the molecule as a whole". Boehmer further explains this phenomenon by suggesting that if an inherently chiral calixarene macrocycle were opened up it would produce an "achiral linear molecule". There are two commonly used notations to describe a molecule's inherent chirality: cR/cS (arising from the notation used for classically chiral compounds, with c denoting curvature) and P/M. Inherently chiral molecules, like their classically chiral counterparts, can be used in chiral host–guest chemistry, enantioselective synthesis, and other applications. There are naturally occurring inherently chiral molecules as well. Retinal, a chromophore in rhodopsin, exists in solution as a racemic pair of enantiomers due to the curvature of an achiral polyene chain. History Calixarenes After creating a series of traditionally chiral calixarenes (through the addition of a chiral substituent group on the top or bottom rim of the macrocycle), the first inherently chiral calixarenes were synthesized in 1982, though the molecules were not yet described as such. The inherently chiral calixarenes featured an XXYZ or WXYZ substitution pattern, such that the planar representation of the molecule does not show any chirality, and if the macrocycle were to be broken open, this would produce an achiral linear molecule. The chirality in these calixarenes is instead derived from the curvature of the molecule in space. Definition Due to the lack of a formal definition after the term's initial conception, inherent chirality was used to describe a variety of chiral molecules that do not fall into other defined chirality types. The first fully formulated definition of inherent chirality was published in 2004 by Mandolini and Schiaffino (and later modified by Szumna): inherent chirality arises from the introduction of a curvature in an ideal planar structure that is devoid of perpendicular symmetry planes in its bidimensional representation. Inherent chirality has been known by a variety of names in the literature including bowl chirality (in fullerene fragments), intrinsic chirality, helicity (see section 3a), residual enantiomers (as applied to sterically hindered molecular propellers), and cyclochirality (though this is often considered to be a more specific example and cannot be applied to all inherently chiral molecules). A simple example of inherent chirality is that of corannulene, commonly referred to as "bowl chirality" in the literature. The chirality of an unsubstituted corannulene (containing no classic stereogenic centers) cannot be seen in a 2D representation, but becomes clear when a 3D representation is evoked, as the C5 symmetry of corannulene provides the molecule with a source of chirality. Racemization of these molecules is possible through an inversion of curvature, though some inherently chiral molecules have inversion barriers comparable to a classic chiral center. Molecular symmetry Chiral plane Some inherently chiral molecules contain chirality planes, or planes within a given molecule across which the molecule is dissymmetric.
Paracyclophanes often contain chiral planes if the bridge across the phenylene unit is short enough, or if the phenylene contains another substituent, not in the bridge, that hinders rotation of the phenylene unit. Chiral axis Similar to chirality planes, chirality axes arise from an axis about which the spatial arrangement of substituents creates chirality. This can be seen in helical molecules (see section 3a) as well as some alkenes. Other examples Spiro compounds (compounds with a twisted structure of two or more rings) can have inherent chirality at the spiroatom, due to the twisting of the achiral ring system. Inherently chiral alkenes have been synthesized through the use of a "buckle", wherein an achiral, linear alkene is forced into a chiral conformation. Alkenes have no classical chirality, so generally an external stereogenic center must be introduced. However, locking the alkene into a conformation through the use of an achiral buckle allows for the creation of an inherently chiral alkene. Inherently chiral alkenes have been synthesized through the use of dialkoxysilanes, with a large enough racemization barrier that enantiomers have been isolated. See also Chirality (chemistry) Planar chirality Axial chirality References Chirality Organic chemistry Stereochemistry
Inherent chirality
Physics,Chemistry,Biology
1,089
21,104,677
https://en.wikipedia.org/wiki/TB10Cs1H1%20snoRNA
TB10Cs1H1 is a member of the H/ACA-like class of non-coding RNA (ncRNA) molecules that guide the sites of modification of uridines to pseudouridines in substrate RNAs. It is known as a small nucleolar RNA (snoRNA), so named because of its cellular localization in the nucleolus of the eukaryotic cell. TB10Cs1H1 is predicted to guide the pseudouridylation of LSU3 ribosomal RNA (rRNA) at residue Ψ659. See also TB11Cs2H1 snoRNA TB3Cs2H1 snoRNA References Non-coding RNA
TB10Cs1H1 snoRNA
Chemistry
143
16,438,537
https://en.wikipedia.org/wiki/Hooker%20reaction
In the Hooker reaction (1936), the alkyl side chain of certain naphthoquinones (a phenomenon first observed in the compound lapachol) is shortened by one methylene unit, which is lost as carbon dioxide, in each potassium permanganate oxidation. Mechanistically, the oxidation causes ring cleavage at the alkene group and extrusion of carbon dioxide by decarboxylation, with subsequent ring closure. References Organic reactions Name reactions Degradation reactions Homologation reactions
Hooker reaction
Chemistry
97
45,253,066
https://en.wikipedia.org/wiki/Missing%20middle%20housing
Missing middle housing refers to a lack of medium-density housing in the North American context. The term describes an urban planning phenomenon in Canada, the United States, Australia and, in more recent developments, other industrialized and newly industrializing countries, arising from zoning regulations that favor social and racial separation and car-dependent suburban sprawl. Medium-density housing is characterized by a range of multi-family or clustered housing types that are still compatible in scale and heights with single-family or transitional neighborhoods. Multi-family housing facilitates walkable neighborhoods and affordable housing, and provides a response to changing demographics. Instead of increasing the number of units in a single structure, density can also be increased with building types such as duplexes, rowhouses, and courtyard apartments. The term "missing middle housing" was introduced by architect Daniel Parolek in 2010. Many forms of what is now described as "missing middle" housing were built before the 1940s, including two-flats in Chicago; rowhouses in Brooklyn, Baltimore, Washington, D.C., and Philadelphia; two-family homes or "triple-decker" homes in Boston and Worcester; and bungalow courts in California. Post-WWII, housing in the United States trended significantly toward single-family, with zoning making it difficult to build walkable medium-density housing in many areas and, therefore, reducing the supply of the now "missing" middle. History At the end of the 19th and beginning of the 20th century, Canadian and American cities, with few exceptions, most notably New York and Chicago which already had many tall buildings, were not dramatically different in form from their European counterparts. They had a relatively small physical footprint compared to their population size, and buildings were largely 3-7 stories tall surrounded by a relatively modest ring of streetcar suburbs. Most city dwellers who were in the lower to middle-income brackets lived in dense urban environments within a practical distance of their workplace. The less well-off typically lived on either the upper floors of multi-unit residential buildings, as most did not have elevators, or in tenements. Merchants frequently lived in a residential unit above their store. Those who were better-off may have lived in a rowhouse or terrace, and starting toward the end of the 19th century, perhaps in a streetcar suburb still relatively close to the city centre. Overall, the typical arrangement of urban spaces was one where communities were serviced by small-scale owner-operated shops and transport to non-walkable destinations was done by bicycle, bus, streetcar, or train. Traditionally those in the highest income brackets had typically lived in large houses outside of, but often near to, the city. They travelled to the city originally by horse carriage and later by automobile. For most people, the need to live close to their job significantly limited spatial social stratification beyond economic class. This situation collapsed in the wake of explosive expansion of post-war suburban sprawl, which enabled white flight. The early to mid-twentieth century implementation of the suburb was thoroughly informed by this social context, and it was not uncommon for policymakers to inappropriately conflate small residential unit size, insanity, and crime with the traditional urban form, while simultaneously idealizing “rural” and upper-class estate living as its cure-all.
The new car suburb was an affordable imitation of upper-class housing which became possible at such a vast scale when, after the war, factories could be turned over from producing military vehicles to consumer cars, helping to reduce the nominal price of a private automobile. Originally in the US, the legal rule was that "all persons have an equal right in the highway, and that in exercising the right each shall take due care not to injure other users of the way". Pro-automobile interests advocated for the removal of non-drivers from the road, and particularly targeted pedestrians with the invention and criminalization of “jaywalking.” Importantly, federal, state, and provincial governments undertook massive highway building programmes and also directly subsidized the purchasing of new suburban homes (Levittown being the prototype). These government policies helped to make cars a practical choice, fostered the wholesale adoption of the car by the middle classes by the 1960s, and helped to create the conditions for a decline in the quality, availability, and financial viability of public transportation. Increasingly, the prestige and influence of New York and Chicago, with their high land prices and abundant skyscrapers, fostered a sense among many Canadians and Americans that “real cities” have tall buildings and a "downtown" dominated by them, while European cities remained relatively medium-rise, dense, and pluricentric. With this in mind, it is possible to understand the factors considered most important by policy makers in the mid-twentieth century context and how they pursued policies which would no longer allow for the previously dominant medium-density building types. The resulting policies radically reformed cities into ones that typically have a unicentric urban core which is dominated by tall buildings built to be reliant on office uses, with the area often referred to as the central business district (CBD). This new "urban core" of stacked office uses is typically surrounded by swathes of sub-urban and peri-urban landscapes dominated by single-family homes with gardens serviced by the private automobile, car-centric retail destinations, and vast highway networks. Impacts Longer commuting patterns The loss of flexible middle-density development serviced by affordable and widely used public transportation has resulted in high commute times for commuters, which have remained stubbornly unaffected by further investment in new road capacity due to the nature of induced demand, a practical limit on the space required to move large volumes of people in relatively large vehicles, and greatly increased costs for both the vehicle owner and government due to the inherent inefficiency compared to previous modes of transport. Other problems include difficulty for low-income residents to find affordable accommodation within a reasonably affordable and practical commute of their place of employment. Negative environmental impacts Car-centric cities are less climate-friendly due to impacts relating to inefficient use of resources, volume of paved area contributing to flood risk, and potential loss of natural habitats to human development. Loss of small retail Without middle-density development to support them, cities have lost retailers not operating with substantial economies of scale. “Mom and pop shops” are replaced by big-box stores.
Loss of third spaces Cities without middle density have also seen the loss of third places, places where people spend time that are neither their private residence nor their place of work. These places are important for recreation, for meeting neighbours, for adults to make friends, and for community organization. The loss of these third places and small businesses is due to the need of both to rely on proximity to a large number of people for whom visiting them is easy, can be spontaneous, and does not require a special trip. Some have characterized the replacement of these “third places,” where historically people of all backgrounds in the neighbourhood gathered organically, by relatively fewer spaces where people must choose to drive, as a source of social filtering and potential source of social alienation. Some have suggested the loss of genuine “third spaces” as a contributing factor to a perceived reduction in a sense of belonging, inter-group social cohesion, and a rise in generalized loneliness. Environmental racism According to the environmental geographer Laura Pulido, the historical processes of suburbanization and urban decentralization have contributed to contemporary environmental racism. Causes The polarization of Canadian and American cities into ones dominated by low- and high-density development with little in between has been due to the implementation of strict single-use land-use zoning laws at the municipal level, which prioritize these use types while making new medium-density development illegal. This, combined with shifts in transportation planning at all levels, has helped to create a development paradigm which takes the private motor vehicle as its default mode of transportation and only then considers other modes like walking, cycling, buses, streetcars, and subways. Public transport, where it still exists, has typically also been built within this paradigm of car dependency. For example, the GO Transit rail service in the Greater Toronto Area is one of the few commuter-rail services in either Canada or the United States, but it is designed for commuters to drive to parking lots with a train platform, where the rail service takes passengers to the CBD in the morning and returns them to the parking lot in the afternoon; service has been unidirectional and operated only during rush hour. Possible Solutions Missing middle housing offers a greater choice in housing types that still blend into existing single-family neighborhoods, create more affordable housing options, and help reach sustainability goals. Missing middle housing units are usually smaller than single-family homes because they share a lot with other homes, which results in lower per-unit land costs and, therefore, lower housing costs. Missing middle housing types are also among the cheapest forms of housing to produce because they are typically low-rise, low-parking, wood-frame construction, which avoids expensive concrete podiums. Because the construction and building materials are comparatively less complicated than larger mid- and high-rise structures, a larger pool of small-scale and local home builders can participate in the creation of this form of housing. To support municipal budgets, the denser and more efficient use of land and infrastructure may be financially productive for municipalities, with more people paying taxes per acre for less infrastructure than large-lot single-family homes.
Increasing missing middle housing options may allow families of different sizes, types, and incomes to access quality housing. Missing middle housing tends to become naturally affordable housing as it ages, and provides a level of density that supports the shops, restaurants, and transit that are associated with walkable neighborhoods. Walkable neighborhoods may then support sustainability, health, and affordability goals by reducing reliance on personal vehicles. This would promote active transportation, reduce sprawl, reduce pollution, and reduce transportation costs by lessening the need for personal vehicles. Missing middle housing options may allow seniors to downsize without leaving their neighborhood. For example, accessory dwelling units can enable multi-generation households to have privacy while all living on the same property. Missing middle housing may enable a wider range of families to achieve homeownership by offering a wider range of housing options and prices. Additionally, missing middle housing types such as accessory dwelling units can support mortgages through the rents of those secondary units. Overall, missing middle housing options can create housing at a wide range of prices for a range of family types. Some property rights advocates believe that widely permitting missing middle housing expands property rights by allowing property owners more choice in how to use their property. Some equity advocates feel that permitting more diverse housing choices, such as missing middle housing, may reduce historic and modern inequities that keep less affluent people out of certain amenity-rich neighborhoods. Transit-oriented development (TOD) Increasingly from the 1990s onwards, transit-oriented development (TOD) has been put forward by many urban planners as a way to create more medium-density development. The idea is that creating communities of mixed-use development around public transport nodes will help to recreate demand for public transport and help to re-urbanize Canadian and American municipalities. TOD developments in Canada and the United States are typically near a public transport node, made up of large plots, each with one to a few buildings owned by one to a few owners, and typically dominated by tall buildings or buildings of intermediate size with a mixture of uses permitted within them. Uses often include shops at the ground floor with residential and office uses interspersed throughout the upper floors. Some critics of the way the TOD concept has been implemented in Canada and the United States point out that the large TODs fail to engage in placemaking, and the result is relatively large, highly controlled, characterless places not unlike the suburbs and strip malls they are meant to replace. These critics say that the problem is with trying to zone for what planners think a city looks like, rather than creating the transport and legal conditions to allow it to take shape organically. Nested Intensity Zoning However, it is worth noting that urban planning in Japan uses a zoning system and has not lost middle-density housing. Instead of single-use zoning, zones are defined by the "most intense" use permitted. Uses of lesser intensity are permitted in zones where higher intensity uses are permitted, but higher intensity uses are not allowed in lower intensity zones. This results in nested zoning, where the higher intensity zones are inclusive of related lower intensity ones. Zoning districts in Japan are classified into twelve use zones.
Each zone determines a building's shape and permitted uses. A building's shape is controlled by zonal restrictions on allowable building coverage ratios, floor area ratios, height (in absolute terms and in relation to adjacent buildings and roads), and minimum residential unit size. These controls are intended to allow adequate light and ventilation between buildings and on roads, and to ensure a decent quality of housing. In this system, rather than trying to plan for how and where to create dedicated districts of medium-density housing, planners are left to focus on creating the conditions necessary to encourage land owners to intensify the use of their plots, and on ensuring that new areas of medium density have the amenities they need to be successful. When discussing medium-density housing, it is important to explore the differences between the approach this type of zoning uses with respect to single-detached housing and that traditionally used in the United States and Canada. In the United States and Canada, single-detached homes typically require large setbacks for off-street car parking and yards/gardens; single-detached homes in Japan are in many cases similarly large but sit on small plots, taking up virtually the entirety of the plot and fronting directly onto the street, with no requirement for off-street car parking; instead, they assume a reliance on public transport rather than cars to meet daily needs. Roads in these areas are slow and drivers are aware they must legally share responsibility for mutual safety with all the other types of road users equally. This type of single-detached house can achieve medium housing density while fostering a sense of community, municipal fiscal viability, and good residential amenity. This is achieved while maintaining privacy and access to sunlight by regulating the direction of windows, the use of very small setbacks, much higher maximum building coverage ratios, higher floor area ratios, and other considerations discussed in the previous paragraph. It is also worth noting that Japanese houses offer, on average, larger living spaces than those of many wealthy European countries which have not lost their medium-density housing. This approach of not requiring car parking provision or private yards/gardens in areas with high degrees of connectivity is seen as desirable because: access to common outdoor green space is seen as sufficient for these needs or, at the very least, an acceptable tradeoff for the convenience of improved connectivity; the provision of sprawling lower-value land uses like private car parking and residential garden spaces in such locations is viewed as a poor return on investment by developers eager to maximize living space and plot utilization; urban planners are eager to avoid the imprudent use of limited public funds with respect to the large nominal and operational costs of public transportation, water, power, roads, etc., which usually increase over distances, while access costs for users do not; and urban planners seek to avoid wasteful and shortsighted opportunity costs. This approach to zoning gives the landowner more flexibility in using the land while still precluding harmful or inappropriate development and maintaining the benefit of remaining predictable and easy to understand.
The result is that when demand changes, as with new public transport investment, land-owners are able to, on an individual basis, redevelop their land to meet demand in a manner that is reactive to local demand and distributes risk across the local community; for example, the failure of a medium-sized building to find tenants may have a relatively small impact, whereas a large one failing to do so may hamper development of other types in the same community. This type of zoning may also help to foster a more organic and local character in communities, especially over time. Form-based code (FBC) A form-based code (FBC) is a means of regulating land development to achieve a specific urban form. Form-based codes foster predictable built results and a high-quality public realm by using physical form (rather than separation of uses) as the organizing principle, with less focus on land use, through municipal regulations. An FBC is a regulation, not a mere guideline, adopted into city, town, or county law and offers an alternative to conventional zoning regulation. Missing-middle housing comes in a variety of building types and densities but may be characterized by location in a walkable context, lower perceived density, small building footprints, smaller homes, reasonably low amounts of parking, simple construction, and focus on community. Forms of missing middle housing may include side-by-side duplexes, stacked duplexes, bungalow courts, accessory dwelling units (carriage houses, basement apartments, etc.), fourplexes, multiplexes, townhomes, courtyard apartments, and live/work units. These building types typically have a residential unit density in the range of 16 to 30 units per acre but are often perceived as being less dense because they are smaller in scale. Because of its scale, missing middle housing may mix into single-family neighborhoods, act as an end-grain of a single-family housing block, act as a transition between higher density housing and single-family housing, or act as a transition from a mixed-use area to a single-family area. The resulting density may support broader community desires, including walkable retail, amenities, public transportation, and increased "feet on the street". Barriers Many local governments do not allow the zoning necessary to build missing middle housing. Owning a studio, 1-bedroom, or 2-bedroom condominium of 600–1,000 sq ft in a multi-unit complex with a reasonable monthly HOA fee and a 1.5-car detached garage is not allowed in many areas because of zoning ordinances. Many 5-over-1 complexes were built starting in the 2010s, but primarily for leasing rather than owning. Recent Developments The resurgence of missing middle housing is due to many factors including resurgent market demand for this type of housing, demand for housing in amenity-rich walkable neighborhoods, the necessity of housing affordability, environmental efforts to support walkability, transit-oriented developments, and changing demographic trends. In 2014, the American Association of Retired Persons (AARP) released a report showing that more and more, Americans want to "age in place" and need easy access to services and amenities available in walkable, urban, transit-oriented communities. Millennials have been shown to drive less and seek housing choices in walkable neighborhoods close to transit. The number of automobile miles traveled increased each year between 1946 and 2004. In 2014, Americans drove less than in 2004, and no more per person than in 1996.
The decline in driving is most striking among young people aged 16 to 34, who drove 23% fewer miles on average in 2009 than their age group did in 2001. Research suggests that millennials prefer amenity-rich, transit-rich, and walkable neighborhoods. In 2015, Small Housing B.C. stated that "The structure of the traditional North American suburb has failed to live up to the expectations of many who settled in suburban neighborhoods, and new ways are being sought to re-engineer suburban living and re-build those settlement patterns." State-level examples Several American states have adopted or proposed legislation aimed at increasing the stock of missing middle housing. Most notably, Oregon adopted House Bill 2001 in 2019. The bill requires Oregon's medium-sized cities to allow duplexes on each lot or parcel zoned for residential use that allows for the development of single-family homes. Oregon's large cities, with a population over 25,000, and cities in the Portland Metro region, must allow duplexes, triplexes, quadplexes, cottage clusters, and townhouses in residential areas. The bill set aside funds for planning assistance to local governments to help draft local codes and allows municipalities to set reasonable design and infrastructure standards. In Massachusetts, H.5250 was adopted to require municipalities near the MBTA to reasonably allow duplex or multi-family housing near transit stations. The bill also created financial incentives for communities to zone for "smart growth" and made it easier for municipalities to adopt zoning ordinances or zoning amendments. In 2019, Washington State adopted E2SHB 1923, encouraging all cities under the Growth Management Act (GMA) to increase residential capacity by supporting many forms of missing middle housing. The State of Washington provided grant funds to help support code changes, housing action plans, and sub-area plans to support missing middle housing types. In 2022 Maine adopted bills LD2003 and LD201, which implement several affordable housing strategies including allowing accessory dwelling units and duplexes on residential lots statewide and permitting fourplexes in certain "growth areas". The states of Vermont, New Hampshire, and California have adopted a number of bills that promote accessory dwelling units and reduce regulatory barriers to accessory dwelling unit construction. State-level action has also occurred in Australia where, citing an effort to promote more 'missing middle' development, New South Wales launched the Low Rise Housing Diversity Code and Design Guides for Low Rise Housing Diversity. The State of Connecticut House and Senate approved legislation to reduce some zoning restrictions on missing middle housing types such as accessory dwelling units. Other states have considered but not adopted similar legislation to support missing middle housing types. Illinois considered HB4869, which would have required municipalities to permit and reasonably regulate accessory dwelling units. Virginia considered HB 152, which would have required municipalities to allow, and reasonably regulate, missing middle housing types (duplexes, cottages, etc.) on all lots currently zoned for single-family housing. Maryland considered HB1406, the "Planning for Modest Homes Act of 2020", which would have required census tracts that are affluent, transit-adjacent, and/or near a large number of jobs to allow missing middle housing types.
Nebraska considered LB794, which would have mandated that every city with more than 5,000 people allow missing middle housing in areas previously zoned exclusively for single-family detached residential use. Montana considered HB 134, which would have allowed duplex, triplex, and fourplex housing in certain municipalities. North Carolina considered House Bill 401 and Senate Bill 349, which would have allowed middle housing in any neighborhood zoned for detached, single-family homes. Municipal examples Many municipalities are updating their land-use and zoning regulations to better support missing middle housing. Changes to land use regulations to support missing middle housing may also include changes such as form-based codes, transit-oriented development, and other updates. In the United States, Portland, Oregon, has a number of historic missing middle housing types located throughout the city, most of which are duplexes, that were built before the 1920s, before the city's first zoning plan was approved. Zoning for single-family homes was expanded in the 1950s and the building of duplexes or triplexes largely became illegal in Portland. In the 2010s Portland began updating its zoning regulations to permit missing middle housing types. Missing middle zoning updates have spread through the Pacific Northwest and now include Seattle, Walla Walla, Lake Stevens, Orting, Wenatchee, Eugene, Olympia, Spokane, Bellingham, Tacoma, and Tigard, among others. Zoning updates to support missing middle housing are not just found in the Pacific Northwest. Notably, in Minnesota, the Minneapolis 2040 plan called for up-zoning the city to allow more missing middle housing types throughout the city. The new zoning in Minneapolis does not prohibit the construction of single-family homes, but no neighborhoods in the city are zoned exclusively for single-family housing. The city also eliminated mandatory parking minimums from its zoning regulations, allowing builders and business owners to choose the amount of parking they provide based on the market and their unique needs. In California, Sacramento voted to permit up to four housing units on all residential lots and reduce parking requirements in order to help the city alleviate its housing crisis and to achieve equity goals. The City of Berkeley, California has voted unanimously to zone for several missing middle housing types city-wide by 2022, citing equity and housing affordability as goals. Bryan, Texas implemented a pattern-zoning policy in which the city provides several pre-designed and pre-approved plans for missing middle housing types (with significantly reduced permitting procedures) in the "midtown" portion of the city. The goal of the program is to reduce housing costs caused by design fees and lengthy permitting procedures, reduce burdens on city staff, achieve public input and support for housing designs in advance, and ensure quality housing designs. Norfolk, VA also has a missing middle pattern book with free designs for missing middle housing types including duplexes and quadplexes. Many local governments across the United States have chosen to zone for missing middle housing types in significant portions of their zoning districts, including Grand Rapids, Michigan; Durham, North Carolina; Kirkland, Washington, with its cottage housing zoning; Montgomery County, Maryland, with its numerous housing studies; Bloomington, Indiana; and DeKalb County, Georgia.
Indianapolis, Indiana chose to permit missing middle housing types (in addition to higher density housing types) along bus rapid transit corridors. Indianapolis also included missing middle housing types in its residential infill guidelines. Other cities are making long-term plans to increase the supply of missing middle housing. Charlotte, NC added language in its comprehensive plan to allow duplexes and triplexes across the city. Citing missing middle housing as a component of a larger affordable housing strategy, Raleigh, NC voted to permit several missing middle housing types in most residential zones. While some communities have not adopted regulations to widely permit the full range of missing middle housing types, they have made changes to permit accessory dwelling units. Diverse examples include large cities such as Los Angeles, CA and the City of Chicago, IL, and smaller cities such as Lexington, KY, and Santa Cruz, CA. Outside of the United States, cities in both Australia and Canada have adopted missing middle housing reforms. Notable examples in Canada include Edmonton, Alberta's missing middle zoning reforms, and Vancouver, British Columbia's secondary unit zoning. Montréal, Québec is notable for its distinct architecture and urban planning that has historically included significant amounts of missing middle housing. Due to its unique history, many neighborhoods in Montreal include low-rise attached duplexes, triplexes, and apartments, often with exterior stair entry, minimal front setbacks, and small backyards. This creates a significant level of density without high-rises. In Australia, The 30-Year Plan for Greater Adelaide includes a focus on missing middle housing, as does Moreland's Medium Density Housing Review. See also Affordable housing Bicycle-friendly Duplex Form-based code Green building New Urbanism Stacked triplex Starter home Streetcar suburb Sustainable city Traditional neighborhood development Urban sprawl Zoning codes References Urban design Urban studies and planning terminology Real estate in the United States Zoning New Urbanism
Missing middle housing
Engineering
5,521
28,123,482
https://en.wikipedia.org/wiki/Decarbonylation
In chemistry, decarbonylation is a type of organic reaction that involves the loss of carbon monoxide (CO). It is often an undesirable reaction, since it represents a degradation. In the chemistry of metal carbonyls, decarbonylation describes a substitution process, whereby a CO ligand is replaced by another ligand. Organic chemistry In the absence of metal catalysts, decarbonylation (vs decarboxylation) is rarely observed in organic chemistry. One exception is the decarbonylation of formic acid: HCOOH → H2O + CO The reaction is induced by sulfuric acid, which functions as both a catalyst and a dehydrating agent. Via this reaction, formic acid is occasionally employed as a source of CO in the laboratory in lieu of cylinders of this toxic gas. With strong heating, formic acid and some of its derivatives may undergo decarbonylation, even without adding a catalyst. For instance, dimethylformamide ((CH3)2NCHO) slowly decomposes to give dimethylamine and carbon monoxide when heated to its boiling point (154 °C). Some derivatives of formic acid, like formyl chloride (HCOCl), undergo spontaneous decarbonylation at room temperature (or below). Reactions involving oxalyl chloride (e.g., hydrolysis, reaction with carboxylic acids, Swern oxidation, etc.) often liberate both carbon dioxide and carbon monoxide via a fragmentation process. α-Hydroxy acids, e.g. lactic acid and glycolic acid, undergo decarbonylation when treated with catalytic concentrated sulfuric acid. Silacarboxylic acids undergo decarbonylation upon heating or treatment with base and have been investigated as carbon monoxide-generating molecules. Aldehyde decarbonylation A common transformation involves the conversion of aldehydes to alkanes: RCHO → RH + CO Decarbonylation can be catalyzed by soluble metal complexes. These reactions proceed via the intermediacy of metal acyl hydrides. An example of this is the Tsuji–Wilkinson decarbonylation reaction using Wilkinson's catalyst. (Strictly speaking, the noncatalytic version of this reaction results in the formation of a rhodium carbonyl complex rather than free carbon monoxide.) This reaction is generally carried out on small scale in the course of a complex natural product total synthesis, because although this reaction is very efficient at slightly elevated temperatures (e.g., 80 °C) when stoichiometric rhodium is used, catalyst turnover via extrusion of CO requires dissociation of a very stable rhodium carbonyl complex and temperatures exceeding 200 °C are required. This conversion is of value in organic synthesis, where decarbonylation is an otherwise rare reaction. Decarbonylations are of interest in the conversions of sugars. Ketones and other carbonyl-containing functional groups are more resistant to decarbonylation than are aldehydes. Pericyclic reactions Some cyclic molecules containing a ketone undergo a cheletropic extrusion reaction, leaving new carbon–carbon π bonds on the remaining structure. This reaction can be spontaneous, as in the synthesis of hexaphenylbenzene. Cyclopropenones and cyclobutenediones can be converted to alkynes by elimination of one or two molecules of CO, respectively. Biochemistry Carbon monoxide is released in the degradation (catabolism) of heme by the action of O2, NADPH, and the enzyme heme oxygenase. Inorganic and organometallic synthesis Many metal carbonyls are prepared via decarbonylation reactions.
The CO ligand in Vaska's complex arises by the decarbonylation of dimethylformamide. The conversion of simple metal carbonyls to their many derivatives often involves decarbonylation; for example, decarbonylation accompanies the preparation of cyclopentadienyliron dicarbonyl dimer. Decarbonylation can be induced photochemically as well as using reagents such as trimethylamine N-oxide. References Chemical reactions Carbon monoxide
Decarbonylation
Chemistry
888
50,777,184
https://en.wikipedia.org/wiki/PTS%20glucose-glucoside%20%28Glc%29%20family
The PTS Glucose-Glucoside (Glc) family (TC# 4.A.1) includes porters specific for glucose, glucosamine, N-acetylglucosamine and a large variety of α- and β-glucosides, and is part of the PTS-GFL superfamily. Homology Not all β-glucoside PTS porters are in this class, as the PTS porter first described as the cellobiose β-glucoside porter is the diacetylchitobiose porter in the Lac family. The IIA, IIB and IIC domains of all of the group translocators listed below are demonstrably homologous. These porters (the IIC domains) show limited sequence similarity with, and are homologous to, members of the Fru family, and less with members of the Lac family. The IIC domains of the glucose and glucoside subfamilies are as distant from each other as they are from the Fru, Mtl and Lac families. As is true of other members of the PTS-GFL superfamily, the IIC domains of these permeases probably have a uniform 10 TMS topology. Structure and function The three-dimensional structures of the IIA and IIB domains of the Escherichia coli glucose porter have been elucidated. IIAglc has a complex β-sandwich structure while IIBglc is a split αβ-sandwich with a topology unrelated to the split αβ-sandwich structure of HPr. Some bacteria have many PTS transport systems belonging to different families. For example, the solventogenic Clostridium acetobutylicum ATCC 824 has 13 altogether, with 6 in the Glc family, 2 in the Fru family, 2 in the Lac family, 1 in the Gat family and 2 in the Man family. Several of the PTS porters in the Glc family lack their own IIA domains and instead use the glucose IIA protein (IIAglc or Crr). Most of these porters have the B and C domains linked together in a single polypeptide chain. A cysteinyl residue in the IIB domain is phosphorylated by direct phosphoryl transfer from IIAglc(his~P) or one of its homologues. Those porters which lack a IIA domain include the maltose, arbutin-salicin-cellobiose, trehalose, putative glucoside and sucrose porters of E. coli. Most, but not all, Scr porters of other bacteria also lack a IIA domain. BglF consists of a transmembrane domain which, in addition to TMSs, contains a large cytoplasmic loop. According to Yagur-Kroll et al., this loop, connecting TMS 1 to TMS 2, contains regions that alternate between facing-in and facing-out states and creates the sugar translocation channel. Yagur-Kroll et al. demonstrated spatial proximity between positions at the center of the big loop and the phosphorylation site, suggesting that the two regions come together to execute sugar phosphotransfer. References External links The PTS Glucose-Glucoside (Glc) Family (Transporter Classification Database, Saier Lab Group, UCSD) Protein families Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins
PTS glucose-glucoside (Glc) family
Biology
726