Dataset columns:
id: int64 (values 39 to 79M)
url: string (length 31 to 227)
text: string (length 6 to 334k)
source: string (length 1 to 150)
categories: list (1 to 6 items)
token_count: int64 (values 3 to 71.8k)
subcategories: list (0 to 30 items)
1,590,615
https://en.wikipedia.org/wiki/Water%20organ
The water organ or hydraulic organ (early types are sometimes called hydraulos, hydraulus or hydraula) is a type of pipe organ blown by air, where the power pushing the air is derived from water, whether from a natural source (e.g. a waterfall) or from a manual pump. Consequently, the water organ lacks a bellows, blower, or compressor. The hydraulic organ is often confused with the hydraulis. The hydraulis is the name of a Greek instrument created by Ctesibius of Alexandria. The hydraulis has a reservoir of air which is inserted into a cistern of water. The air is pushed into the reservoir with hand pumps, and exits the reservoir as pressurized air to blow through the pipes. The reservoir is open on the bottom, allowing water to maintain the pressure on the air as the air supply fluctuates from either the pumps pushing more air in, or the pipes letting air out. On the water organ, since the 15th century, the water is also used as a source of power to drive a mechanism similar to that of the barrel organ, which has a pinned barrel that contains a specific song to be played. The ancient Greek hydraulis is often imagined as an automatic organ, but there is no source evidence for this.
Hydraulis
A hydraulis is an early type of pipe organ that operated by converting the dynamic energy of water into air pressure to drive the pipes. Hence its name hydraulis, literally "water (driven) pipe (instrument)". It is attributed to the Hellenistic scientist Ctesibius of Alexandria, an engineer of the 3rd century BCE. The hydraulis was the world's first keyboard instrument and was the predecessor of the modern church organ. Unlike the instrument of the Renaissance period, which is the main subject of the article on the pipe organ, the ancient hydraulis was played by hand, not automatically by the water-flow; the keys were balanced and could be played with a light touch, as is clear from the reference in a Latin poem by Claudian (late 4th century), who uses this very phrase (magna levi detrudens murmura tactu... intonet, "let him thunder forth as he presses out mighty roarings with a light touch") (Paneg. Manlio Theodoro, 320–22).
Mechanics
Typically, water is supplied from some height above the instrument through a pipe, and air is introduced into the water stream by aspiration (using the Bernoulli effect) into the main pipe from a side-pipe with its top above the water source. Both water and air arrive together in the camera aeolis (wind chamber). Here, water and air separate, and the compressed air is driven into a wind-trunk on top of the camera aeolis to blow the organ pipes. Two perforated 'splash plates' or 'diaphragms' prevent water spray from getting into the organ pipes. The water, having been separated from the air, leaves the camera aeolis at the same rate as it enters. It then drives a water wheel, which in turn drives the musical cylinder and the movements attached. To start the organ, the tap above the entry pipe is turned on and, given a continuous flow of water, the organ plays until the tap is closed again. Many water organs had simple water-pressure regulating devices. At the Palazzo del Quirinale, the water flows from a hilltop spring (once abundant, now only sufficient to play the organ for about 30 minutes at a time), coursing through the palace itself into a stabilizing 'room' above the camera aeolis in the organ grotto. This drop provides sufficient wind to power the restored six-stop instrument.
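The "drop" mentioned above can be related to the available wind pressure by a simple hydrostatic estimate: a fall of height h metres can supply at most the pressure of a water column of the same height (p = ρgh), which is far more than the modest wind pressure (well under a metre of water gauge) that organ pipes typically need. The short sketch below only illustrates this arithmetic; the drop height and the target wind pressure are assumed, illustrative values, not figures from the Quirinale instrument, and the sketch does not model the aspiration mechanism itself.

```python
# Rough hydrostatic upper bound on the wind pressure available to a water organ:
# a water drop of height h can supply at most p = rho * g * h of air pressure.
# Illustrative only: the drop height and the target wind pressure below are
# assumed values, not measurements from any historical instrument.

RHO_WATER = 1000.0  # kg/m^3
G = 9.81            # m/s^2

def max_wind_pressure_pa(drop_m: float) -> float:
    """Hydrostatic pressure (in pascals) of a water column of height drop_m."""
    return RHO_WATER * G * drop_m

def mm_water_gauge(pressure_pa: float) -> float:
    """Convert pascals to millimetres of water gauge, the usual unit for organ wind."""
    return pressure_pa / (RHO_WATER * G) * 1000.0

if __name__ == "__main__":
    drop_m = 15.0     # assumed drop from the stabilizing room to the wind chamber
    target_mm = 80.0  # assumed wind pressure needed by a small pipe organ
    available = mm_water_gauge(max_wind_pressure_pa(drop_m))
    print(f"A {drop_m} m drop gives at most {available:.0f} mm water gauge "
          f"({'enough' if available >= target_mm else 'not enough'} for {target_mm:.0f} mm).")
```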
Among Renaissance writers on the water organ, Salomon de Caus is particularly informative. His book of 1615 includes a short treatise on making water organs, advice on tuning and registration, and many fine engravings showing the instruments, their mechanisms and scenes in which they were used. It also includes an example of suitable music for the water organ, the madrigal Chi farà fed' al cielo by Alessandro Striggio, arranged by Peter Philips.
History
Water organs were described in the writings of Ctesibius (3rd century BCE), Philo of Byzantium (3rd century BCE) and Hero of Alexandria (1st century CE). Like the water clocks (clepsydra) of Plato's time, they were not regarded as playthings but might have had a particular significance in Greek philosophy, which made use of models and simulacra of this type. Hydraulically blown organ pipes were used to imitate birdsong, and the musicologists Susi Jeans and Arthur W.J.G. Ord-Hume have suggested that they were used to create the sounds of the Vocal Memnon. For the latter, solar heat was used to syphon water from one closed tank into another, thereby producing compressed air for sounding the pipes. Characteristics of the hydraulis have been inferred from mosaics, paintings, literary references, and partial remains. In 1931, the remains of a hydraulis were discovered in Hungary, with an inscription dating it to 228 CE. The leather and wood of the instrument had decomposed, but the surviving metal parts made it possible to reconstruct a working replica, now in the Aquincum Museum in Budapest. The exact mechanism of wind production is debated, and almost nothing is known about the music played on the hydraulis, but the tone of the pipes can be studied. The Talmud mentions the instrument as having been played in the Jerusalem Temple. After its invention by the Greeks, the hydraulis continued to be used through antiquity in the Roman world. In the Middle Ages, the Eastern Roman (Byzantine) Empire, medieval Europe and the Muslim world further developed these instruments. A well-known instance of an early positive or portable organ of the 4th century occurs on the obelisk erected to the memory of Theodosius I on his death in 395 CE. Among the illuminated manuscripts of the British Museum there are many miniatures representing interesting varieties of the portable organ of the Middle Ages used in European churches. Pippin's organ of 757 was a hydraulic organ sent as a gift to the Carolingian empire by the Byzantine emperor Constantine V. A long-distance hydraulic organ that could be heard from sixty miles away was described in Arabic texts and attributed to an ancient Greek figure called Muristus; this individual's identity is unknown, but the name is sometimes suggested to be an Arabized version of Ctesibius. By the end of the 12th century hydraulic automata were often seen in Italy and the rest of Western Europe. During the Renaissance, water organs again acquired magical and metaphysical connotations among followers of the hermetic and esoteric sciences. Organs were placed in gardens, grottoes and conservatories of royal palaces and the mansions of rich patricians to delight onlookers not only with music but also with displays of automata – dancing figurines, wing-flapping birds and hammering cyclopes – all operated by projections on the musical cylinder.
Other types of water organ were played out of sight and were used to simulate musical instruments apparently being played by statues in mythological scenes such as 'Orpheus playing the viol', 'The contest between Apollo and Marsyas' and 'Apollo and the nine Muses'. The most famous water organ of the 16th century was at the Villa d'Este in Tivoli. Built about 1569–1572 by Lucha Clericho (Luc de Clerc; completed by Claude Venard), it stood about six metres high under an arch and was fed by a magnificent waterfall; it was described by Mario Cartaro in 1575 as playing 'madrigals and many other things'. G. M. Zappi (Annali e memorie de Tivoli, 1576) wrote: 'When somebody gives the order to play, at first one hears trumpets which play a while and then there is a consonance .... Countless gentlemen could not believe that this organ played by itself, according to the registers, with water, but they rather thought that there was somebody inside'. It is now known that, besides automatically playing at least three pieces of music, the organ was also provided with a keyboard. Other Italian gardens with water organs were at Pratolino, near Florence; Isola de Belvedere, Ferrara (before 1599); the Palazzo del Quirinale, Rome (built by Luca Biagi in 1598, restored 1990); Villa Aldobrandini, Frascati (1620); one of the Royal Palaces at Naples (1746); and Villa Doria Pamphili, Rome (1758–1759). Of these only the one at the Palazzo del Quirinale has survived. Kircher's illustration in Musurgia universalis (1650), long thought to be a fanciful representation of a hypothetical possibility, has been found to be accurate in every detail when compared to the organ grotto at the Quirinale, except that it was reversed left to right. There are still traces of the instrument at the Villa d'Este, but the mineral-rich water of the river which cascades through the organ grotto has caused accretions which have hidden most of the evidence from view. In the early 17th century, water organs were built in England; Cornelius Drebbel built one for King James I (Harstoffer, 1651), and Salomon de Caus built several at Richmond while in the service of Prince Henry. There was one in Bagnigge Vale, London, the summer home of Nell Gwynn (1650–1687), and Henry Winstanley (1644–1703), the designer of the Eddystone Lighthouse, is thought to have built one at his home in Saffron Walden, Essex. After the marriage of Princess Elizabeth to the Elector Palatine Prince Friedrich V, de Caus laid out for them the gardens at Heidelberg Castle, which became famous for their beautiful and intricate waterworks. A water organ survives in the gardens at Heilbronn, Württemberg, and parts of one at the Wilhelmshöhe gardens in Kassel. The brothers Francini constructed waterworks and organs at Saint Germain-en-Laye and Versailles, which reached new heights of splendour and extravagance. By the end of the 17th century, however, interest in water organs had waned. As their upkeep was costly they were left to decay and were soon forgotten; by 1920 not one survived (the so-called water organ at Hellbrunn Castle, Salzburg, is a pneumatic organ driven by hydraulically operated bellows). Their mechanism was subsequently misunderstood until the Dutch engineer Van Dijk pointed out in 1954 that air was supplied to the water organ by aspiration, the same method used in forges and smelting works in the 16th and 17th centuries. Aspiration is the process by which air is drawn into an opening into which water flows.
For the water organ, a small pipe is arranged so that one end is open to the air and the other extends into a larger pipe that contains flowing water supplied by a stream, pond or stabilizing reservoir. The longer the vertical drop of the water, the more forceful the suction will be and the greater the volume of air sucked in.
The hydraulis of Dion
In 1992, the remains of a 1st-century BCE pipe organ were found at Dion, an ancient Macedonian city near Mount Olympus, Greece, during excavations under Dimitrios Pandermalis. The instrument consisted of 24 open pipes of different heights with conical lower endings. The first 19 pipes differ in height, and their inner diameter gradually decreases from 2 to 1.5 cm. These 19 pipes correspond to the "perfect system" of ancient Greek music, which consisted of one chromatic and one diatonic scale. Pipes 20 to 24 are smaller and almost equal in height, and they seem to form an extension of the diatonic scale. The conical end of each pipe is inserted into a metal plate. At a point just before the narrowing part of every pipe there is an opening that produces the turbulence of the pressurized air, and hence the sound. The pipes are stabilized by two metal plates; the one facing outwards has decorative motifs. The instrument had one row of keys. The lower part of the organ, with the air-pressing system, was missing. In 1995 a reconstruction project started, and by 1999 a working replica of the hydraulis had been made, based on the archaeological find and on ancient descriptions. The remains of the ancient hydraulis are exhibited at the Archaeological Museum of Dion.
See also: Calliope, Hydraulophone, Muristus, Organ (music)
Water organ
[ "Environmental_science" ]
2,798
[ "Water", "Hydrology" ]
8,829,912
https://en.wikipedia.org/wiki/Operational%20historian
In manufacturing, an operational historian is a time-series database application that is developed for operational process data. Historian software is often embedded or used in conjunction with standard DCS and PLC control systems to provide enhanced data capture, validation, compression, and aggregation capabilities. Historians have been deployed in almost every industry and contribute to functions such as supervisory control, performance monitoring, quality assurance, and, more recently, machine learning applications which can learn from vast quantities of historical data. These systems were originally developed to capture instrumentation and control data, which led many to use the term "tag" for a stream of process data, referring to the physical "tags" which had been placed on instrumentation for manually capturing data. Raw data may be accessed via OPC HDA, SQL, or REST API interfaces.
Operational Support
Operational historians are typically used within the manufacturing facility by engineers and operators for supervisory functions and analysis. An operational historian will typically capture all instrumentation and control data, whereas an enterprise historian that is deployed to support business functions will take only a subset of the plant data. Typically, these applications offer data access through dedicated APIs (Application Programming Interfaces) and SDKs (Software Development Kits), which provide high-performance read and write operations through vendor-specific or custom applications. Front-end tools for trending process data over time are the most common interfaces to these databases. Because these applications are typically deployed next to or near the source of their process data, they are often marketed and sold as 'real-time database systems'. This distinction varies among vendors, who often have to make development choices between performance in capturing and presenting data and application and analysis functionality. The usual challenges an operational historian must address are as follows:
data collection from instrumentation and controls
storage and archiving of very large volumes of data
organization of data in the form of "tags" or "points"
limit monitoring (alarms) and validation
aggregation and interpolation
manual data entry (MDE)
Data access
As opposed to enterprise historians, the data access layer in the operational historian is designed to offer sophisticated data fetching modes without complex information analysis facilities. The following settings are typically available for data access operations (see the sketch below):
Data scope (single point or tag, history based on time range, history based on sample count)
Request modes (raw data, last-known value, aggregation, interpolation)
Sampling (single point, all points without sampling, all points with interval sampling)
Data omission (based on the sample quality, based on the sample value, based on the count)
Even though operational historians are rarely relational database management systems, they often offer SQL-based interfaces to query the database. In most such implementations, the dialect does not follow the SQL standard, in order to provide syntax for specifying the parameters of data access operations.
See also: Time series database, Relational database management system
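The request modes and interval sampling listed above can be illustrated with a toy, in-memory "tag" store. This is only a sketch under assumed names (the Tag class and its methods are invented for the example); production historians rely on specialised append-optimised storage, compression, and vendor APIs such as OPC HDA rather than Python lists.

```python
from bisect import bisect_right
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Tag:
    """A single process-data stream ('tag'): timestamped samples kept in time order."""
    name: str
    samples: List[Tuple[float, float]] = field(default_factory=list)  # (timestamp, value)

    def write(self, ts: float, value: float) -> None:
        self.samples.append((ts, value))
        self.samples.sort(key=lambda s: s[0])  # keep ordered; real historians append-optimise this

    def raw(self, start: float, end: float) -> List[Tuple[float, float]]:
        """Raw request mode: every stored sample inside the time range."""
        return [s for s in self.samples if start <= s[0] <= end]

    def last_known_value(self, ts: float) -> Optional[float]:
        """Last-known-value mode: the most recent sample at or before ts."""
        i = bisect_right([t for t, _ in self.samples], ts)
        return self.samples[i - 1][1] if i else None

    def interval_average(self, start: float, end: float, step: float) -> List[Tuple[float, float]]:
        """Aggregation mode with interval sampling: mean of the samples in each interval."""
        out = []
        t = start
        while t < end:
            window = [v for ts, v in self.samples if t <= ts < t + step]
            if window:  # empty intervals are simply omitted in this sketch
                out.append((t, sum(window) / len(window)))
            t += step
        return out

# Usage: a hypothetical temperature tag queried in three request modes.
temp = Tag("REACTOR_01.TEMP")
for ts, v in [(0, 20.0), (5, 21.5), (12, 23.0), (18, 22.4)]:
    temp.write(ts, v)
print(temp.raw(0, 10))                   # -> [(0, 20.0), (5, 21.5)]
print(temp.last_known_value(10))         # -> 21.5
print(temp.interval_average(0, 20, 10))  # -> [(0, 20.75), (10, 22.7)]
```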
Operational historian
[ "Technology" ]
573
[ "Data management", "Data" ]
8,830,237
https://en.wikipedia.org/wiki/Subspace%20theorem
In mathematics, the subspace theorem says that points of small height in projective space lie in a finite number of hyperplanes. It is a result obtained by Wolfgang M. Schmidt.
Statement
The subspace theorem states that if L1,...,Ln are linearly independent linear forms in n variables with algebraic coefficients and if ε > 0 is any given real number, then the non-zero integer points x with |L1(x)···Ln(x)| < |x|^(−ε) lie in a finite number of proper subspaces of Q^n. A quantitative form of the theorem, which determines the number of subspaces containing all solutions, was also obtained by Schmidt, and the theorem was later generalised to allow more general absolute values on number fields.
Applications
The theorem may be used to obtain results on Diophantine equations such as Siegel's theorem on integral points and the solution of the S-unit equation.
A corollary on Diophantine approximation
The following corollary to the subspace theorem is often itself referred to as the subspace theorem. If a1,...,an are algebraic numbers such that 1, a1,...,an are linearly independent over Q and ε > 0 is any given real number, then there are only finitely many rational n-tuples (x1/y,...,xn/y) with |ai − xi/y| < y^(−(1 + 1/n + ε)) for i = 1,...,n. The specialization n = 1 gives the Thue–Siegel–Roth theorem. One may also note that the exponent 1 + 1/n + ε is best possible by Dirichlet's theorem on Diophantine approximation.
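For reference, the two inequalities above can be written out in display form. This is a standard rendering of the statements (with the n = 1 case included to show how the Thue–Siegel–Roth theorem follows), not additional material from the source.

```latex
% Subspace theorem (Schmidt): for linearly independent linear forms L_1,\dots,L_n
% with algebraic coefficients and any \varepsilon > 0, the non-zero integer points x with
\[
  |L_1(x)\,L_2(x)\cdots L_n(x)| \;<\; \|x\|^{-\varepsilon}
\]
% lie in finitely many proper rational subspaces of Q^n.
%
% Approximation corollary: for algebraic a_1,\dots,a_n with 1, a_1,\dots,a_n
% linearly independent over Q, only finitely many rational tuples (x_1/y,\dots,x_n/y) satisfy
\[
  \left| a_i - \frac{x_i}{y} \right| \;<\; y^{-(1 + 1/n + \varepsilon)}, \qquad i = 1,\dots,n .
\]
% Setting n = 1 recovers the Thue--Siegel--Roth theorem: for an algebraic irrational a,
\[
  \left| a - \frac{x}{y} \right| \;<\; y^{-(2 + \varepsilon)}
  \quad\text{has only finitely many rational solutions } x/y .
\]
```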
Subspace theorem
[ "Mathematics" ]
326
[ "Mathematical theorems", "Theorems in number theory", "Mathematical relations", "Mathematical problems", "Diophantine approximation", "Approximations", "Number theory" ]
8,830,624
https://en.wikipedia.org/wiki/National%20minimum%20dataset
In health informatics, a national minimum dataset is a database of health encounters held by a central repository. "Minimum" implies that the data fields will be only those required to aggregate information for the purposes of administering the health system in the particular country and for reporting information required as a member country of the WHO.
See also: Minimum Data Set (MDS), US; National Minimum Data Set for Social Care (NMDS-SC), England; Nursing Minimum Data Set (NMDS), US
National minimum dataset
[ "Biology" ]
120
[ "Health informatics", "Medical technology" ]
8,831,010
https://en.wikipedia.org/wiki/Teneurin
Teneurins are a family of phylogenetically conserved single-pass transmembrane glycoproteins expressed during pattern formation and morphogenesis. The name refers to "ten-a" (from "tenascin-like protein, accessory") and "neurons", the primary site of teneurin expression. Ten-m refers to tenascin-like protein major. Teneurins are highly conserved between Drosophila, C. elegans and vertebrates. In each species, they are expressed by a subset of neurons as well as at sites of pattern formation and morphogenesis. In Drosophila, a teneurin known as ten-m or Odz is a pair-rule gene, and its expression is required for normal development. The knockdown of teneurin (ten-1) expression in C. elegans with RNAi leads to abnormal neuronal pathfinding and abnormal development of the gonads. The intracellular domain of some, if not all, teneurins can be cleaved and transported to the cell nucleus, where it is proposed to act as a transcription factor. A peptide derived from the terminus of the extracellular domain shares structural homology with certain neuropeptides. There are four teneurin genes in vertebrates, named teneurin-1 through -4. Other names found in the literature include Odz-1 through -4 and Tenm-1 through -4.
History
Originally discovered as ten-m and ten-a in Drosophila melanogaster, the teneurin family is conserved from Caenorhabditis elegans (ten-1) to vertebrates, in which four paralogs exist (teneurin-1 to -4, or odz-1 to -4). Their distinct protein domain architecture is highly conserved between invertebrate and vertebrate teneurins, particularly in the extracellular part. The intracellular domains of Ten-a, Ten-m/Odz and C. elegans TEN-1 are significantly different, both in size and structure, from the comparable domains of vertebrate teneurins, but the extracellular domains of all of these proteins are remarkably similar.
Function
Teneurins translocate to the nucleus, where they regulate transcriptional activity. Teneurins promote neurite outgrowth and cell adhesion. The intracellular domain interacts with DNA-binding transcriptional repressors and also regulates the activity of transcription factors. Additionally, teneurins are known to interact with the cytoskeletal adaptor protein CAP/ponsin, suggesting cell signalling roles and regulation of actin organisation. Teneurin-3 regulates the structural and functional wiring of retinal ganglion cells in the vertebrate visual system.
Structure
Ten-m1–4 exist as homodimers and undergo homophilic interactions in vertebrates.
C-terminal domain
The large C-terminal extracellular domain consists of eight EGF-like repeats, a region of conserved cysteines and unique YD-repeats.
N-terminal domain
The teneurin intracellular (IC) domain (~300–400 aa) is located at the N-terminus and contains a number of conserved putative tyrosine phosphorylation sites, two EF-hand-like calcium-binding motifs, and two polyproline domains. These proline-rich stretches are characteristic of SH3-binding sites. There is considerable divergence between the intracellular domains of invertebrate and vertebrate teneurins, as well as between different invertebrate proteins. This domain is found in the intracellular N-terminal region of the teneurin family.
Human genes
The human genes encoding teneurin domain proteins (TENM1–4) are listed in the infoboxes.
Teneurin
[ "Biology" ]
814
[ "Protein families", "Protein classification" ]
8,831,257
https://en.wikipedia.org/wiki/Fermented%20bean%20paste
Fermented bean paste is a category of fermented foods typically made from ground soybeans, and is indigenous to the cuisines of East, South and Southeast Asia. In some cases, such as the production of miso, other varieties of beans, such as broad beans, may also be used. The pastes are usually salty and savoury, but may also be spicy, and are used as a condiment to flavour foods such as stir-fries, stews, and soups. The colours of such pastes range from light tan to reddish brown and dark brown. The differences in colour are due to different production methods, such as the conditions of fermentation, the addition of wheat flour, pulverized mantou, rice, or sugar, and the presence of different microflora (bacteria or molds) used in production, as well as whether the soybeans are roasted (as in chunjang) or aged (as in tauco) before being ground. Fermented bean pastes are sometimes the starting material used in producing soy sauces, such as tamari, or an additional product created from the same fermented mass. The paste is also the main ingredient of hoisin sauce. Due to the protein content of the beans, the fermentation process releases a large amount of free amino acids, which, when combined with the large amounts of salt used in production, produces a highly umami product. This is particularly true of miso, which can be used as the primary ingredient in certain dishes, such as miso soup.
Types
Various types of fermented bean paste, all of which are based on soy and cereal grains, are made across the region.
See also: Bean dip, List of fermented soy products, Sweet bean paste
Fermented bean paste
[ "Technology" ]
380
[ "Food ingredients", "Components" ]
8,833,112
https://en.wikipedia.org/wiki/Albert%20Morris
Albert Morris (13 August 1886 in Bridgetown, South Australia – 9 January 1939, Broken Hill, New South Wales) was an acclaimed Australian botanist, landscaper, ecologist, conservationist and developer of arid-zone revegetation techniques that featured natural regeneration. Morris is particularly celebrated for his decisive role in the development of the Broken Hill regeneration area, a pioneering arid-zone natural regeneration project. The regeneration area project exhibited standards and principles characteristic of the contemporary environmental repair practice, ecological restoration. The work of Albert Morris, Margaret Morris and their restoration colleagues significantly influenced the development of New South Wales government soil erosion management policies in the 1940s.
First Nations communities
From time immemorial traditional owners, the Wilyakali people, cared for homelands that encompassed the extended Broken Hill and Barrier Ranges region, western New South Wales (hereafter NSW). They maintained relations with the Barkandji (aka Paakantyi) nation, of the Baaka (aka Darling River). From ca.1830 onwards, pastoralists forcibly dispossessed the Barkandji and Wilyakali communities, seizing homelands along the Baaka and steadily extending their influence to more distant regions. As well as being dispossessed of their spiritually significant homelands, First Nations communities of western NSW were for many decades subjected to various hardships: material deprivation; widespread ill health and epidemics; racism; confinement to government reserves and denial of civil liberties. Dedicated government rectification of these injustices only commenced in the latter decades of the twentieth century. In 2015, the Wilyakali community and the Barkandji nation, after eighteen years of challenging and protracted legal proceedings, were successful in establishing their Native title claim to traditional homelands along the Baaka and extensive areas of western NSW. Today, Australian First Nations communities assert that their homelands were never ceded to the Crown.
Early life of Albert and Margaret Morris
Albert was born in Bridgetown, South Australia, to parents Albert Joseph Morris and Emma Jane (Smith). Confronted by the economic depression that gripped South Australia in the late 1880s, Morris's father sought work in the new mines of far western NSW. He moved his family to Thackaringa, and then to nearby Broken Hill, to live. Broken Hill was to become Albert's permanent home. Early in life, Albert developed a keen interest in plants. Possibly a serious childhood injury to his foot, which prevented him from taking part in the bustle of childhood activity, contributed to his independence and self-containment, and to an increasing interest in botany. However, it is documented that his father, Joe Morris, was an "enthusiastic" botanist and young Albert was his "offsider", so this, along with innate talent and an interest in the subject, was a more likely source of his botanical interests. By the time he was undertaking technical school studies in metallurgy and assaying, Morris had developed a small garden and nursery, and contributed to the cost of his fees by selling plants (pepper trees) that he had grown. Morris took up work with the Central Mine in Broken Hill, eventually becoming chief assayer for the company. Albert Morris and Ellen Margaret Sayce (1887-1957) were married on 13 April 1909.
Margaret (Morris) was a dressmaker, and developed extensive interests and skills in art, botany, conservation and journalism. She was a member of the Society of Friends, (Quakers). Albert's formative years were spent as an Anglican, and "some years" after his marriage he converted to Quakerism. Albert and Margaret, with family assistance, built a cottage in Cornish Street, Railway Town, a western suburb of Broken Hill. The Broken Hill work of Albert Morris Erosion and early experiments 1900s By ca.1900, the previously well vegetated homelands of the Wilyakali community had progressively been exploited by overstocking on pastoralist stations (properties, or ranches), and further devastated by introduced animals such as rabbits, foxes and feral goats. The mining industry and the impacts of people and their stock had resulted in the Broken Hill region being stripped of trees such as Acacia aneura Mulga, Eucalyptus camaldulensis River Red Gum, and soil binding shrubs and ground cover plants. Natural recovery from these detrimental impacts was inhibited by the arid climate, which featured low average rainfall of 250 millimetres or less per annum, long dry periods and high summer temperatures. Exposed to the regular westerly winds, previously well vegetated and stable soils had been transformed into soil-drifts; severe dust storms were common. By the 1920s, these degraded vegetation and soil conditions were regarded as the norm. As early as 1908, newspaper comments indicated that the sheet erosion around Broken Hill had already begun. Morris described the degraded landscape in these terms: "The extending country stretched for miles without a vestige of any green thing and each stone or old tin had a streamer of sand tailing out from it. The fences were piled high with sand, inside and out and it looked as if the intended railway lines would just be buried every dusty day, which was every windy day". Albert and Margaret Morris were concerned about the detrimental impacts that wind erosion was inflicting on the amenity of their fellow citizens in Broken Hill, as houses, gardens, roads and public facilities were often smothered in sand. Albert lamented the loss of indigenous fauna species brought about by the destruction of their natural habitat, and the breakdown of local natural ecosystems and their beauty. He looked for ways to manage these issues. Several failures at establishing a barrier to the wind blown sand deposits in his exposed garden inspired Morris to search for plants that could be grown in the prevailing tough arid conditions, and which would control erosion by binding the exposed soils. He and Margaret began to acquire expertise with botanical taxonomy and systematics, and by the mid 1920s Albert was corresponding with other Australian botanists. He established a home nursery, purchasing adjoining land and expanding his garden. Barrier Field Naturalists Club 1920 In 1920, along with Margaret Morris and W.D.K. McGillivray (1868-1933), a local doctor and also a prominent Australian ornithologist and natural scientist, Albert helped establish the Broken Hill based Barrier Field Naturalists Club, serving as its secretary until his death in 1939. Margaret also served on the executive of the club. Members were interested in natural sciences such as botany and geology, and also history, conducting regular field trips and lecture series. 
Albert and Margaret were prominent members, participating in field trips to the country around Broken Hill, studying and collecting specimens of the indigenous flora and observing the local ecosystems. As well as Margaret's diverse contributions, it is important to note that throughout the 1920s and 1930s Albert's botanical, conservation, tree plantation and regeneration work were strongly stimulated and supported by the many talented members of the Field Naturalists Club, people such as Dr. William MacGillivray, his son Dr. Ian MacGillivray, Edmund Dow, Maurice Mawby and many others. Morris became widely recognised for his botanical expertise, urban tree plantation work, his propagation and contributions of plants to residents and civic bodies in Broken Hill, and for his firm belief in the possibility of revegetating the barren city landscapes. The influence of Professor T G Osborn 1920s University of Adelaide botanist and plant ecologist Professor T G Osborn had been concerned about the degradation of South Australia's arid-zone flora, and the resultant wind erosion, since approximately 1920. At the university's Koonamore research facility, Yunta, he studied the capacity of the flora to naturally regenerate under stock exclosure conditions. Osborn concluded that overstocking on pastoral stations was the primary cause of the vegetation degradation, and that natural regeneration of the flora was possible. He advised pastoralists to carefully manage station stocking levels, and to preserve the indigenous vegetation. South Australian pastoralists heeded Osborn's research work and advice. From approximately 1930, pastoralists developed "'flora reserves", which were fenced areas that excluded stock and allowed natural regeneration of the indigenous flora. The largest known flora reserve was approximately four hectares (ten acres). Other pastoralists undertook furrowing projects (a form of ploughing), a practice that facilitated the natural regeneration of the flora. Many of these projects were highly successful, and degraded, wind eroded soil-drifts and scalds (areas of eroded, hardened, water impervious soil) were revegetated and stabilised. Albert Morris was certainly aware of Professor Osborn's Koonamore research work by 1928, as the research was well publicised and Morris had engaged in botanical correspondence with Osborn. Quite possibly Morris visited the Koonamore research facility, as it was located only approximately 250 kilometres from Broken Hill. The restoration work of South Australian pastoralists also received some newspaper publicity, so it is quite possible that Morris's thinking on the restoration of degraded indigenous flora was influenced by the stock exclosure and natural regeneration research and projects conducted in South Australia. Botany, conservation, restoration 1930s Morris achieved national and international recognition as an expert on arid-zone Australian flora, and corresponded with many prominent Australian botanists. He, with Margaret, made a collection of about 8000 plant specimens, the bulk of which were donated to the Waite Institute in South Australia in 1944. This collection is now predominantly held by the State Herbarium of South Australia with some specimens held by other state collections, including the Royal Botanic Garden of NSW. He and Margaret were noted for their generosity and hospitality to fellow naturalists and others working at Broken Hill. 
Among those they befriended was the noted botanist and author Thistle Harris, who worked in Broken Hill as a teacher c.1930. By 1936 Albert Morris had acquired considerable expertise in the distinct fields of arid-zone tree plantation establishment, and arid-zone natural regeneration. His expertise in natural regeneration was based on the field knowledge that he had acquired on Barrier Field Naturalists Club outings into the surrounding countryside, and his deep botanical knowledge of arid-zone flora species. His own home nursery experiments with sand stabilising plants such as Atriplex spp. saltbushes, further enhanced his regeneration and restoration knowledge. Quite possibly the natural regeneration work of Professor Osborn and the South Australian pastoralists had influenced him. Broad acreage furrowing field trials, conducted in 1935-36 by Morris with local pastoralists on their pastoral stations, facilitated natural regeneration of the indigenous flora and must also have convinced him of the efficacy of natural regeneration as a means of restoring degraded lands. Albert was also possessed of extensive administrative and communication skills. His professional employment as an assayer involved responsible administrative duties, and he utilised this experience to good effect in his volunteer conservation work. As secretary of the Barrier Field Naturalists Club, he corresponded with and lobbied New South Wales state government ministers and other representatives of industry and government bodies, on conservation and restoration matters. In particular, in 1935, he wrote on behalf of the Barrier Field Naturalists to the New South Wales state government, urging the government to establish a fenced natural regeneration area around Broken Hill. In April 1936, Albert and other field naturalists presented detailed submissions on soil and flora conservation, and stock exclosure and natural regeneration techniques, to the New South Wales Erosion Committee. Tree plantations, regeneration reserves 1936 Equipped with evidence of the efficacy of stock exclosure and natural regeneration as a means of restoring eroded lands, in May 1936 Albert and club members commenced lobbying the state government to fence two water reservoir sites in Broken Hill, to exclude stock and rabbits and allow the indigenous flora there to naturally regenerate. Due to Albert's persistence, this work was approved in September 1937, and the fencing was done in April 1939, shortly after his death. However, Albert Morris is best remembered and celebrated for the natural regeneration area that now encircles Broken Hill, a project that is today referred to as the Broken Hill regeneration area. Displaying considerable initiative and management skills, Morris demonstrated to Broken Hill mining executives the botanical feasibility of his plans, and convinced them to financially back the project. The natural regeneration area project was conceived in the winter of 1936, and commenced in the spring of that year. The Zinc Corporation, another Broken Hill mining company, had developed extensive plans to commence construction in 1936 of a new mine complex on a bare, desert like piece of ground located along the south-west urban fringes of Broken Hill. The company engaged the honorary services of Albert Morris to advise on the establishment of tree plantations adjacent to the proposed new mining, office and residential complex, to protect the complex from sand-drifts and the strong local westerly winds. 
Construction of these tree plantations, which were to be irrigated with waste water and established by traditional planting methods, but using indigenous Australian vegetation including saltbushes, a method Morris had experimented with, commenced in May, 1936. The initial fencing of the main tree plantation site facilitated rapid and substantial natural regeneration within the still unplanted, and otherwise bare, fenced enclosure, of native grasses and forbs germinating from seed naturally stored in the soil. Crucially, this regrowth of indigenous vegetation persisted, as a result of foraging livestock and rabbits having been excluded by the new fencing. The knowledgeable Albert Morris had fully anticipated and predicted the natural regeneration that occurred within the fenced tree plantations adjacent to the new mining complex. As mentioned, he had already observed and confirmed this process in previous broad acreage field trials, and was aware of the ways in which arid-zone indigenous flora seed could be naturally dispersed by wind and stored in the soil, germinate, and thrive after relatively small amounts of rainfall. Although at this time natural regeneration of many indigenous flora species, such as Eucalyptus spp. (often referred to as gum trees), was a familiar concept to many settler Australians, Morris's knowledge of the viability of various arid plant species' seed, and his experience with the natural regeneration capabilities of the indigenous flora communities, were exceptional. Morris seized on this significant (approx. 22 acres; 9 hectares) demonstration of natural regeneration principles, and convinced the Zinc Corporation mine manager, A J Keast, to obtain the backing of senior Zinc Corp executive, W S Robinson, and other mining companies in Broken Hill, to undertake a new, separate project, the trial fencing of regeneration reserves to the south-west of the city. Morris intended that these reserves would primarily utilise natural regeneration, and limited, targeted amounts of planting, as their primary means of revegetation. Broken Hill regeneration area 1936-58 Work on the Zinc Corporation mining complex tree plantations continued, but Morris, also in an honorary capacity, was now additionally advising on the new Broken Hill regeneration area project, which consisted of a series of fenced regeneration reserves extending around the south-west perimeter of Broken Hill and covering hundreds of hectares. This work commenced in the spring of 1936 and was completed in February 1937. Further reserves were added between 1937 and 1939. Good rains fell, and substantial revegetation success was achieved across all of the reserves. The entire south and westward aspects of Broken Hill were now protected from wind driven sand-drifts by naturally regenerated indigenous vegetation of the type that naturally occurred on the site. It is important to note though, that the traditional owners of the lands of Broken Hill and the surrounding region, the dispossessed Wilyakali community, appear to have had no opportunities to consider contributing to the development of the regeneration area project, despite their long and deep physical and spiritual connections to these lands. Also, it is unlikely that their Traditional Ecological Knowledge was utilised, either directly or indirectly. Sadly, Albert Morris died in January 1939, after several months of illness, but he did live to see substantial evidence of the success of his regeneration vision and initiatives. 
Indeed, the successful vegetation regeneration within the initial set of regeneration reserves was highly praised by the visiting South Australian Erosion Committee in June 1937. Before he died, Albert was also aware that a Broken Hill community progress association had successfully obtained funds from the state government to finance the construction of a regeneration reserve to the south of the city in 1938-39. Unfortunately, Albert did not live to see the beneficial effect that the good rains of 1939 had on the reserves. The resource demands of the Second World War (1939–45) delayed the development of further regeneration reserves and the encirclement of the city with a protective belt of indigenous flora. During this challenging period Margaret Morris played an important role in the botanical management, study and documentation of the reserves. She successfully promoted their benefits with regular newspaper articles, and authored an influential article in the Australian Journal of Science. In her various articles, Margaret emphasised the natural regeneration of indigenous species, such as Acacia aneura Mulga, that had occurred in the reserves. She wrote of the natural resilience of the regeneration reserves, correctly predicting that they would survive the severe drought of 1940, and was unstinting in her generous acknowledgement of the contributions made by members of the Broken Hill community, the mining industry and Broken Hill Council. The Barrier Field Naturalists Club also continued its involvement with the reserves, with members conducting botanical surveys of the thriving natural flora and advocating for the extension of the regeneration area. The Mine Managers Association of Broken Hill financed the upkeep of the regeneration reserves, and Broken Hill Council managed this work. The citizens of Broken Hill suffered severely from the effects of the 1940 drought, and further prolonged dry periods in the early to mid-1940s, as enormous dust storms ravaged the city. Due to the success and popularity of the regeneration reserves, from 1946 the city administration lobbied the New South Wales government to complete the encirclement of the city with further regeneration reserves. Three new reserves were fenced to the north and east of Broken Hill between 1950 and 1958, and natural regeneration of the indigenous vegetation occurred. The regeneration reserves created between 1936 and 1958 now primarily comprise the current Broken Hill regeneration area, with minor adjustments having been made over the years.
Natural regeneration
It has in the past been, and still is, very often mistakenly assumed that planting techniques were predominantly utilised to initially establish the regeneration reserves, and that the regeneration area project was primarily an exercise in planting. It is correct that the Zinc Corporation tree plantations of 1936-37, quite separate and also small projects relative to the regeneration reserves, and located immediately adjacent to the urban area and piped water resources, were irrigated, and their vegetation established by the manual planting of thousands of trees, along with saltbushes; this was documented at the time.
However it is clear from Albert Morris's interest in natural regeneration, as already outlined in this article, and the historical documentation, that the regeneration reserves, as distinct from the tree plantations, primarily and intentionally utilised principles of stock exclosure (fencing to exclude stock) and natural regeneration, and not planting, to achieve the revegetation, with indigenous flora, of the hitherto barren reserves. Albert Morris was interested in achieving broad acreage arid-zone revegetation outcomes, both for amenity and conservation purposes, and as he realised, it would have been impossible to achieve this, given the prevailing dry, hot and often drought stricken conditions, by utilising a planting technique. To propagate, manually plant and then keep hydrated until they were established the tens of thousands of trees, shrubs, grasses and forbs necessary for such a project, conducted over many hundreds of rugged hectares, would have required extensive seed collection and plant propagation capabilities, and generous personnel resources and funding; it is unlikely that such a project would even be feasible today. There is no evidence of such a large planting project occurring at the time of the establishment of the regeneration reserves. It was clearly Morris's intention that the establishment of vegetation in the regeneration reserves was to be primarily left to the factors associated with natural regeneration: germination of existing, naturally deposited and wind dispersed seeds of the local flora, the regrowth of established but degraded in ground rootstocks, and the local rainfall of approximately 250mm per year. Crucially, fencing around the reserves excluded the livestock and rabbits that had previously decimated this indigenous flora. In fact, University of Sydney researchers Professor Eric Ashby and Ilma Pidgeon were drawn to the project in order to study the spectacular natural regeneration of the indigenous flora that had occurred following exclusion of stock, and concluded that ‘fencing the land has restored the vegetation’. Spreading of seed by hand, and the ploughing of moisture impermeable claypans (aka scalds), were techniques also contemplated by Morris. Relatively little or no tree or shrub planting was done in order to establish the regeneration reserves, except in an undefined section of regeneration reserve no. 2, which was also irrigated, as this reserve was adjacent to the small and irrigated tree plantation no. 1, now known as Albert Morris Park. Some planting was carried out by community members along water courses and in claypans, and extensive tree planting was carried out along some road verges from approximately 1939. Ecological restoration The historical regeneration area project exhibits principles of the contemporary environmental repair concept, ecological restoration. See National standards for the practice of ecological restoration in Australia. Substantial to full restoration of the indigenous flora was aspired to; appropriate levels of site intervention, predominantly in the form of fencing and small amounts of furrowing and planting were adopted, with re-establishment of the indigenous vegetation primarily left to natural regeneration; formal science and local ecological knowledge were utilised; the indigenous flora and fauna were conserved; the residents of Broken Hill came to appreciate and engage with the project. 
However, as noted, there is no record of traditional owners and Custodians of the regional lands, the Wilyakali community, being presented with opportunities to consider contributing to the project. Government erosion management policies and legislation 1940s The Broken Hill regeneration area project and its outcomes significantly influenced the development of NSW state government soil erosion management policies and legislation. NSW Soil Conservation Service (established 1938) director Sam Clayton, and researcher Noel Beadle, were impressed by the successful revegetation outcomes achieved within the regeneration area. Throughout the 1940s, they pushed for and implemented state government land management policies that aimed to revegetate, by stock exclosure and natural regeneration processes, those landscapes of western NSW that were in a degraded vegetation condition, but were still, fortunately, not yet wind or water eroded. To achieve this outcome, Beadle recognised that tree planting programs were completely unfeasible, given the extent of the problem and the arid conditions. Clayton and Beadle also targeted the revegetation of the twenty million hectares of western NSW that were in an eroded condition. To achieve both of these objectives, state legislation was passed in 1949, and stock exclosure and natural regeneration processes were codified as government land management techniques and policies; overstocking was outlawed. Remembrance and celebration The regeneration area still encircles Broken Hill today, providing the city with an attractive ring of natural vegetation. Broken Hill City Council manages the regeneration area, with the crucial support of Landcare Broken Hill and members of the Barrier Field Naturalists Club. The regeneration reserves were recognised as cultural heritage items by the New South Wales National Trust in 1991. In 2015 the City of Broken Hill was declared a place of national heritage values by the Australian government. As part of this recognition, Albert's achievements, and the Broken Hill regeneration reserves, were listed as heritage values of the city. The work of Albert Morris was valued and commemorated by the citizens of Broken Hill. In 1941 an impressive water fountain, dedicated to his memory and funded by public subscription, was installed outside the Technical College, Argent Street, Broken Hill. In 1944 Margaret Morris opened the Albert Morris Memorial Gates, which are now located in Wentworth Road, Broken Hill. The John Scougall Gates, named after Jack Scougall, a foreman of works on the regeneration reserves and later manager of the Zinc Corporation nursery, stand nearby. A consortium of Australian ecological restoration organisations initiated the Albert Morris Award for an Outstanding Ecological Restoration Project in 2017, to mark the eighty year anniversary of the completion of the first regeneration reserves in 1937. In 22–24 August 2017, the Australian Association of Bush Regenerators, Broken Hill City Council, The Barrier Field Naturalists Club, Landcare Broken Hill and Broken Hill Art Exchange, came together with many visitors and local residents in Broken Hill to mark this event, with field trips and an inaugural Albert Morris Ecological Restoration Award dinner. 
The Award dinner recognised the skills, dedication and community spirit of Albert Morris, Margaret Morris and their many colleagues in the Barrier Field Naturalists Club, the contributions of Broken Hill citizens and community members, and the contributions of the mining industry of Broken Hill, Broken Hill City Council and the New South Wales state government, to the regeneration area project. At the Award dinner the inaugural Albert Morris Award for an Outstanding Ecological Restoration Project was presented "to the Broken Hill Regeneration Reserves Project itself and all those who made it happen from 1936-1958 and those who are still making it happen". The actual award is a sculpture crafted by Badger Bates, a distinguished Barkandji (Paakantji) artist from Broken Hill. The sculpture is titled 'Regeneration' and is made from the wattle "Dead Finish", Acacia tetragonophylla. The 2018 Award was presented at the Society for Ecological Restoration Australasia Conference, held in Brisbane in September 2018, to Murray Local Land Services, recognising the Murray Riverina Travelling Stock Reserves Project.
The 1930s South Australian work of Albert Morris
In 1932 Essington Lewis, famed manager of the Australian industrial and mining corporation Broken Hill Proprietary Company (BHP), invited Albert Morris to visit South Australia and investigate the possibility of establishing tree plantations at the company's corporate towns of Whyalla and Iron Knob, for amenity purposes. During a series of visits between 1932 and 1937, Morris (the historical documentation does not record participation by Margaret Morris in the South Australian projects) successfully established an Australian flora plant nursery in Whyalla and developed plantations of Australian flora there and at Iron Knob. He also advised the municipal council of Port Pirie on the possibility of establishing plantations there, although no actual work appears to have been undertaken. Morris initiated two natural regeneration projects in Whyalla, approximately between 1935 and 1937 (precise dates unknown). At Hummock Hill, fencing of the bare site to exclude dairy cattle led to the regeneration of the indigenous flora. The second project was located on the current site of the Ada Ryan Gardens in Whyalla, and involved the management of invasive beach sand dunes by fencing to exclude rabbits and cattle, allowing the indigenous flora to recover. By 1939 both projects were being hailed as major successes, with tangible outcomes being evident.
Albert Morris
[ "Chemistry", "Engineering" ]
6,669
[ "Ecological restoration", "Environmental engineering" ]
8,833,837
https://en.wikipedia.org/wiki/Peter%20Cusack%20%28musician%29
Peter Cusack is an English artist and musician who is a member of CRiSAP (Creative Research in Sound Arts Practice), and is a research staff member and founding member of the London College of Communication at the University of the Arts London. He was a founding member and director of the London Musicians' Collective. He is best known as a member of the avant-garde musical quartet Alterations (1978–1986; with Steve Beresford, David Toop, and Terry Day), and as the creator of field and wildlife recording-based albums including: Where Is the Green Parrot? (1999) with tracks like "Toy Shop (Two Small Boys Go Shopping)" and "Siren", which are just as advertised. Day for Night (2000), with Max Eastley. This features "duets" between Eastley's kinetic sculpture and Cusack's field recordings. Baikal Ice (2003), featuring tracks like "Banging Holes In Ice" and "Floating Icicles Rocked By Waves" and "Falling In". Cusack has been involved in a wide range of projects throughout his career. Several of his pieces have been reviewed in Leonardo Music Journal, the annual music journal published by MIT Press. He has also curated an album for Leonardo Music Journal. He is currently a research fellow on the Engineering and Physical Sciences Research Council's multidisciplinary 'Positive Soundscapes Project'. Musical interests Cusack is particularly interested in environmental sound and acoustic ecology. He has examined the sound properties of areas such as Lake Baikal, Siberia, and the Azerbaijan oil fields, and is interested in how sounds change as people migrate and as technology changes. In 1998, Cusack started the "Your Favorite London Sound" project. The goal is to find out which London sounds people who live in the city find appealing. This was so popular that it has been repeated in Chicago, Beijing, and other cities. In October 2005 he was involved in the "Sound & The City" art project, which used sounds from Beijing. Cusack's Sounds From Dangerous Places is a project to collect sounds from sites which have sustained major environmental damage. Sites that Cusack is working on include Chernobyl, the Azerbaijan oil fields, and areas around controversial dams on the Tigris and Euphrates river systems in south east Turkey. Cusack's performances are a central part of the book Haunted Weather: Music, Silence, and Memory (Toop, 2004) by his old collaborator and respected music critic and author, David Toop. Toop investigates the use of environmental sound and electronic instruments in experimental music in his book. Other performances With clarinetist Simon Mayo, he formed the duo known as "A Touch of the Sun". His first "major" recording was part of Fred Frith's 1974 record, "Guitar Solos". He was one of the first to play the bouzouki in England, which gained him the respect of London's musical avant-garde. As a musician, he has collaborated with artists such as Clive Bell, Nic Collins, Alterations, Chris Cutler, Max Eastley, Evan Parker, Hugh Davies, Annette Krebs and Eastern Mediterranean singer Viv Corringham. A live performance with Nicolas Collins was released as "A Host, of Golden Daffodils" in 1999. Activities related to music In 1972 he co-founded an artist-owned record label called "Bead Records", which has released many previously unavailable pieces. As of 2007, it had released more than 30 albums. 
In 1975 Derek Bailey, Steve Beresford, Max Boucher, Paul Burwell, Jack Cooke, Peter Cusack, Hugh Davies, Madelaine and Martin Davidson, Richard Leigh, Evan Parker, John Russell, David Toop, Philipp Wachsmann and Colin Wood formed the journal MUSICS, later described as "an impromental experivisation arts magazine". Cusack produces the monthly radio program "Vermilion Sounds" with Isobel Clouter. Vermilion Sounds explores environmental sounds and is broadcast by Resonance FM in London. John Levack Drever, writing in Soundscape, comments: Of significant note is the work of Peter Cusack and Isobel Clouter (from the British Library Sound Archive who we now welcome onto the UKISC Management Committee), who have done a sterling job producing Vermilion Sounds—a weekly radio show for Resonance FM... Other projects Soundlines: City of London Festival educational project on music and environmental sound in East London schools (April to November 2003). Baku, 5 Quarters at the University of Baku, Azerbaijan. This was a collaboration with Swiss video artist Ursula Biemann in 2004. Urban Grime, exhibition at the Museum of London Sept 2003 to Jan 2004 Send+Receive Festival performance & workshops, Winnipeg, Canada 2004: LMC Guitar Festival performances, Museum of Garden History, London 2004 Frère Jacques et autres pièces à Francis: Expositions. 1997. Saint-Fons, with Ron Haselden, a British artist living in the French town of Brizard, in Brittany. This was a well-known interactive multimedia piece featuring the song Frère Jacques. Selected recordings Your Favourite London Sounds 1998–2001, Peter Cusack, Resonance (2002) Day For Night, Peter Cusack, Max Eastley, Paradigm (2000). The compilation of recordings from a 25-year collaboration. Interruptions, Terry Day, EMANEM 4125; Cusack plays on two tracks, recordings from 1978–1981. Voila Enough! 1979–1981 (Atavistic ALP239CD) – CD release of the group Alterations (Steve Beresford, Peter Cusack, Terry Day, David Toop) Baikal Ice, Peter Cusack, RER Megacorp / IODA (Spring 2003) Where is the Green Parrot?, Peter Cusack, RER Megacorp / IODA (1999) The Horse Was Alive, The Cow Was Dead, Peter Cusack album with 46 tracks Butlers Wharf, Peter Cusack Ghosts & Monsters: Technology & Personality in Contemporary Music, Composer: Robert Ashley, Frieder Butzmann, John Cage, Cornelius Cardew, Henning Christiansen, et al., Conductor: Christian von Borries, Guy Protheroe, Performers: Peter Cusack, Margaret Leng Tan, Jerry Hunt, Shelley Hirsch, Berliner Philharmoniker, Emf Media (2 May 2000) includes an extract from a Host, of Golden Daffodils – Nicolas Collins, Peter Cusack Haunted Weather, assorted artists, Staubgold Germany, 25 May 2004, includes "Flight Path Trace" by Peter Cusack (companion CD to Ghosts and Monsters: Technology and Personality in Contemporary Music, Leonardo Music Journal 8 (1998), Leonardo / MIT Press, 1998) Not Necessarily "English Music": A collection of experimental music from Great Britain, 1960–1977, curated by David Toop, Leonard Music Journal CD Series Volume 11, includes Geese recorded in 1974 by Peter Cusack and Simon Mayo (A Touch of the Sun), the companion CD to 2001 Volume of Leonardo Music Journal, MIT Press, 2001. 
Nightjars and Roe Deer, and Squabble (both from CD to Musicworks #59, Peter Cusack) included in Songs Soaring, (René van Peer, catalogue for festival Whistling in the Dark/Pfeifen in Walde, organised by Matthias Osterwold and Nicolas Collins in Podewil, Berlin (Germany), 9 to 18 September 1994, pub. by Volker Straebel and Matthias Osterwold, in association with Nicolas Collins, Valerian Maly and Elke Moltrecht. Distribution through Podewil and Maly Verlag, 1994.) TECHNO MIT STÖRUNGEN, an album recorded at festival "music unlimited" at Alter Schlachthof Wels, Austria, 11 November 1995. The album features Peter Cusack playing "bousouki & interactive birds" Operet, Peter Cusack and Viv Corringham, Rere121 Sounds from dangerous places book with audio CDs Curations Interpreting the Soundscape, curated by Peter Cusack, contributions by Tonya Wimmer, Andrea Polli and Joe Gilmore, Jacob Kirkegaard, Chris Watson, Rafal Flejter, Chris DeLaurenti, Christina Kubisch, Charles Stankievech, Sonic Postcards, Yannick Dauby and Pascal Battus. LMJ CD Series Volume 16 accompanying the 2006 Volume of Leonardo Music Journal, Leonardo Music Journal Volume 16 (2006), MIT press, 2006. Selected publications "Ghosts and Monsters": Contributors' Notes", Alexander Abramovitch Krejn, Christian von Borries, John Cage, Andrew Culver, John Tilbury, Paul de Marinis, Robert Ashley, Henning Christiansen, Alvin Lucier, Peter Cusack, Shelley Hirsch, Jerry Hunt, Michael Schell, Frieder Butzmann, Michael Snow, Leonardo Music Journal, Vol. 8, Ghosts and Monsters: Technology and Personality in Contemporary Music (1998), pp. 64–74, MIT Press, 1998. The Positive Soundscape Project: A re-evaluation of environmental sound, Mags Adams, Angus Carlyle, Peter Cusack, Bill Davies, Ken Hume, Paul Jennings, Chris Plack, Research Proposal. Dialogue, Peter Cusack, Soundscape—The Journal of Acoustic Ecology 1 (2) p8, 2000. References Haunted Weather: Music, Silence, and Memory, David Toop, Serpent's Tail, 1 July 2004, Notes External links Peter Cusack's official website Favourite sounds project's website Sounds from dangerous places project's website Peter Cusack entry in Allmusic, François Couture Alterations album reviews. This album was recorded in 1981 and released in 1999. Bead Records website, the record company Cusack co-founded in 1972, which has released more than 30 albums. Year of birth missing (living people) Living people English electronic musicians English experimental musicians Field recording People associated with the University of the Arts London
Peter Cusack (musician)
[ "Engineering" ]
2,080
[ "Audio engineering", "Field recording" ]
8,834,063
https://en.wikipedia.org/wiki/Cyphernetics
Cyphernetics Corporation was a commercial timesharing company founded in March 1969 and based in Ann Arbor, Michigan. The company had sales offices in most major American cities and many international locations, providing communications and technical support for clients. As was the case with a number of commercial timesharing operators in the 1970s, Cyphernetics utilized DECsystem-10 computer systems from Digital Equipment Corporation. Cyphernetics developed many products that were well ahead of their time, and whose concepts are contained in many of the most important PC applications, even today. Cyphernetics had an email system (called UTI:MEMO) in the early 1970s, as well as word processing (Cyphertext), spreadsheets (Cyphertab), project management, and time series data storage and analysis (TSAM). Despite the comparatively weak CPUs, very limited memory and storage, and slow communications networks of the time, the functional equivalents of most modern PC applications were available on the timesharing network at 300 to 1200 baud, running on a processor (much less powerful than a desktop PC is today) shared by over 50 simultaneous users. Cyphernetics was purchased by Automatic Data Processing in 1975 and renamed ADP Network Services. References 1969 establishments in Michigan 1975 disestablishments in Michigan 1975 mergers and acquisitions ADP (company) American companies established in 1969 American companies established in 1975 Companies based in Ann Arbor, Michigan Computer companies established in 1969 Computer companies established in 1975 Defunct computer companies of the United States Defunct computer hardware companies Software companies established in 1969 Software companies established in 1975 Time-sharing companies
Cyphernetics
[ "Technology" ]
331
[ "Computing stubs", "Computer company stubs" ]
8,834,198
https://en.wikipedia.org/wiki/Heptagonal%20tiling
In geometry, a heptagonal tiling is a regular tiling of the hyperbolic plane. It is represented by Schläfli symbol of {7,3}, having three regular heptagons around each vertex. Images Related polyhedra and tilings This tiling is topologically related as a part of sequence of regular polyhedra with Schläfli symbol {n,3}. From a Wythoff construction there are eight hyperbolic uniform tilings that can be based from the regular heptagonal tiling. Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms. Hurwitz surfaces The symmetry group of the tiling is the (2,3,7) triangle group, and a fundamental domain for this action is the (2,3,7) Schwarz triangle. This is the smallest hyperbolic Schwarz triangle, and thus, by the proof of Hurwitz's automorphisms theorem, the tiling is the universal tiling that covers all Hurwitz surfaces (the Riemann surfaces with maximal symmetry group), giving them a tiling by heptagons whose symmetry group equals their automorphism group as Riemann surfaces. The smallest Hurwitz surface is the Klein quartic (genus 3, automorphism group of order 168), and the induced tiling has 24 heptagons, meeting at 56 vertices. The dual order-7 triangular tiling has the same symmetry group, and thus yields triangulations of Hurwitz surfaces. See also Hexagonal tiling Tilings of regular polygons List of uniform planar tilings List of regular polytopes References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Isohedral tilings Regular tilings
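As a quick supplementary check (the standard angle-sum argument, not drawn from the references above), one can verify that {7,3} cannot be realised in the Euclidean plane and must be hyperbolic:
\[
\alpha_7 = \frac{(7-2)\cdot 180^\circ}{7} = \frac{900^\circ}{7} \approx 128.57^\circ,
\qquad
3\,\alpha_7 \approx 385.7^\circ > 360^\circ .
\]
Three Euclidean regular heptagons are too wide to fit around a vertex, so their corner angles must shrink to exactly 120°, which is possible only in the hyperbolic plane. Equivalently, a regular tiling {p,q} is hyperbolic precisely when (p-2)(q-2) > 4, and here (7-2)(3-2) = 5 > 4.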
Heptagonal tiling
[ "Physics" ]
447
[ "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Isohedral tilings", "Symmetry" ]
8,834,365
https://en.wikipedia.org/wiki/Order-7%20triangular%20tiling
In geometry, the order-7 triangular tiling is a regular tiling of the hyperbolic plane with a Schläfli symbol of {3,7}. Hurwitz surfaces The symmetry group of the tiling is the (2,3,7) triangle group, and a fundamental domain for this action is the (2,3,7) Schwarz triangle. This is the smallest hyperbolic Schwarz triangle, and thus, by the proof of Hurwitz's automorphisms theorem, the tiling is the universal tiling that covers all Hurwitz surfaces (the Riemann surfaces with maximal symmetry group), giving them a triangulation whose symmetry group equals their automorphism group as Riemann surfaces. The smallest of these is the Klein quartic, the most symmetric genus 3 surface, together with a tiling by 56 triangles, meeting at 24 vertices, with symmetry group the simple group of order 168, known as PSL(2,7). The resulting surface can in turn be polyhedrally immersed into Euclidean 3-space, yielding the small cubicuboctahedron. The dual order-3 heptagonal tiling has the same symmetry group, and thus yields heptagonal tilings of Hurwitz surfaces. Related polyhedra and tiling It is related to two star-tilings by the same vertex arrangement: the order-7 heptagrammic tiling, {7/2,7}, and heptagrammic-order heptagonal tiling, {7,7/2}. This tiling is topologically related as a part of sequence of regular polyhedra with Schläfli symbol {3,p}. This tiling is a part of regular series {n,7}: From a Wythoff construction there are eight hyperbolic uniform tilings that can be based from the regular heptagonal tiling. Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms. See also Order-7 tetrahedral honeycomb List of regular polytopes List of uniform planar tilings Tilings of regular polygons Triangular tiling Uniform tilings in hyperbolic plane References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Isohedral tilings Order-7 tilings Regular tilings Triangular tilings
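As an added consistency check (standard Euler-characteristic bookkeeping, not taken from the references listed above), the counts quoted for the Klein quartic do describe a genus-3 surface: with F = 56 triangles and V = 24 vertices, each triangle contributes 3 edges and each edge is shared by 2 triangles, so E = 3·56/2 = 84, and
\[
\chi = V - E + F = 24 - 84 + 56 = -4 = 2 - 2g \quad\Longrightarrow\quad g = 3 .
\]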
Order-7 triangular tiling
[ "Physics" ]
560
[ "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Isohedral tilings", "Symmetry" ]
8,834,393
https://en.wikipedia.org/wiki/Alfred%20Mathieu%20Giard
Alfred Mathieu Giard (8 August 1846 – 8 August 1908) was a French zoologist born in Valenciennes. He served as a professor of zoology at the Faculty of Sciences in Lille. He specialized in parasitology, and the genus Giardia was named after him by Johann Künstler in 1882. Biography Giard was born in Valenciennes to grocer Alfred François Émile and Jeanne Henriette Mortamais. At an early age he became interested in plants and insects. In 1867 he began his studies of natural sciences at the École Normale Supérieure, followed by work as préparateur de zoologie at the laboratory of Henri de Lacaze-Duthiers (1821–1901) in Paris and later with the teratologist Gabriel Dareste de la Chavanne. In 1872 he defended his doctoral thesis with a study on compound ascidians titled "Recherches sur les ascidies composées ou synascidies". From 1873 to 1882, he was professeur suppléant of natural history at the faculty of sciences in Lille, and in the meantime, was also affiliated with the Institut industriel du Nord. In 1874 he founded a biological station at Wimereux in order to familiarize his students with marine and terrestrial organisms. At Lille, he is credited with putting together an active school of zoology. He also popularized the study of animal behaviour among his students. He also lectured at the School of Medicine and Pharmacy in Lille. In 1887 he became a lecturer at the École Normale Supérieure, and he taught there from 1888 until his death. He became a full professor in 1892 at the faculty of sciences in Paris, holding the chair of "evolution of living organisms". Following his death, he was succeeded at the Wimereux station by Maurice Caullery (1868–1958). Among his numerous students and assistants was philosopher of science Félix Le Dantec (1869–1917). Giard was influenced by the work of Ernst Haeckel, and considered Lamarckism and Darwinism to be complementary theories. From 1904 to 1908 he was president of the Société de biologie. Giard married Annie Bond-Cooke in 1892 in Paris. He died in Orsay on 8 August 1908, his sixty-second birthday. Research He was especially interested in the relationship between host and parasite in nature (both plants and animals), and used the term "parasitic castration" to define sexual characteristic changes in the host as a result of the parasite, even when the sex glands of the host are not directly involved. He is credited with providing a description of Giardia lamblia, a gastrointestinal protozoan parasite that is named after himself and Czech physician Vilem Dusan Lambl (1824–1895). The illness associated with the parasite is sometimes called giardiasis. In 1877 he was the first scientist to describe the phylum Orthonectida (parasites of Ophiurida). In 1894 he introduced the term "anhydrobiosis" (the ability of organisms to survive extreme dehydration). In 1905 Giard coined the word poecilogonie (poecilogony) to describe a phenomenon in which similar adults develop from dissimilar larvae in marine invertebrates. Although Christian, Giard supported Darwinian ideas, which he called "transformism", and wrote about these ideas in the periodical Bulletin scientifique de France et de Belgique that he founded in 1888. He is remembered for his extensive research on crustaceans, particularly Epicaridea (parasitic isopods) and members of the family Bopyridae. Amongst his very numerous publications are 300 devoted to entomology. He was a figure of importance in applied entomology in France and a member of the Société entomologique de France. References Other sources Lhoste, J. 
1987 Les entomologistes français. 1750–1950. INRA (Institut National de la Recherche Agronomique), Paris. Mathieu Guerriaud, « Etudier à l'école de pharmacie de Lille avec Alfred Giard au XIXe siècle», Revue d'Histoire de la Pharmacie, vol. LXIII, no 386, 2015, p. 261–278 (ISSN 0035-2349, lire en ligne) Peyerimhoff, P. de 1932 La Société entomologique de France (1832–1931). Soc. Ent. France, Livre du Centenaire, Paris. Alfred Mathieu Giard @ Who Named It People from Valenciennes French zoologists Academic staff of the University of Paris Academic staff of the University of Lille Nord de France École Normale Supérieure alumni Science teachers 1846 births 1908 deaths French carcinologists French entomologists Presidents of the Société entomologique de France Lamarckism Members of the French Academy of Sciences Members of the Royal Academy of Belgium Knights of the Legion of Honour Members of the Ligue de la patrie française
Alfred Mathieu Giard
[ "Biology" ]
1,043
[ "Non-Darwinian evolution", "Biology theories", "Obsolete biology theories", "Lamarckism" ]
8,834,492
https://en.wikipedia.org/wiki/Triheptagonal%20tiling
In geometry, the triheptagonal tiling is a semiregular tiling of the hyperbolic plane, representing a rectified Order-3 heptagonal tiling. There are two triangles and two heptagons alternating on each vertex. It has Schläfli symbol of r{7,3}. Compare to trihexagonal tiling with vertex configuration 3.6.3.6. Images 7-3 Rhombille In geometry, the 7-3 rhombille tiling is a tessellation of identical rhombi on the hyperbolic plane. Sets of three and seven rhombi meet two classes of vertices. 7-3 rhombile tiling in band model Related polyhedra and tilings The triheptagonal tiling can be seen in a sequence of quasiregular polyhedrons and tilings: From a Wythoff construction there are eight hyperbolic uniform tilings that can be based from the regular heptagonal tiling. Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms. See also Trihexagonal tiling - 3.6.3.6 tiling Rhombille tiling - dual V3.6.3.6 tiling Tilings of regular polygons List of uniform tilings References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Isotoxal tilings Quasiregular polyhedra Semiregular tilings
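As a supplementary check (the same standard angle-sum argument used for regular tilings, not something cited in this article), the vertex configuration 3.7.3.7 also forces the hyperbolic plane: two Euclidean equilateral triangles and two regular heptagons around a vertex would need
\[
2\cdot 60^\circ + 2\cdot\frac{900^\circ}{7} \approx 120^\circ + 257.14^\circ = 377.14^\circ > 360^\circ,
\]
so the corner angles must be reduced, which is possible only with hyperbolic triangles and heptagons.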
Triheptagonal tiling
[ "Physics" ]
393
[ "Isotoxal tilings", "Semiregular tilings", "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Symmetry" ]
8,834,541
https://en.wikipedia.org/wiki/Frataxin
Frataxin is a protein that in humans is encoded by the FXN gene. It is located in the mitochondrion and Frataxin mRNA is mostly expressed in tissues with a high metabolic rate. The function of frataxin is not clear but it is involved in assembly of iron-sulfur clusters. It has been proposed to act as either an iron chaperone or an iron storage protein. Reduced expression of frataxin is the cause of Friedreich's ataxia. Structure X-ray crystallography has shown that human frataxin consists of a β-sheet that supports a pair of parallel α-helices, forming a compact αβ sandwich. Frataxin homologues in other species are similar, sharing the same core structure. However, the frataxin tail sequences, extending from the end of one helix, diverge in sequence and differ in length. Human frataxin has a longer tail sequence than frataxin found in bacteria or yeast. It is hypothesized that the purpose of the tail is to stabilize the protein. Like most mitochondrial proteins, frataxin is synthesized in cytoplasmic ribosomes as large precursor molecules with mitochondrial targeting sequences. Upon entry into mitochondria, the molecules are broken down by a proteolytic reaction to yield mature frataxin. Function Frataxin is localized to the mitochondrion. The function of frataxin is not entirely clear, but it seems to be involved in assembly of iron-sulfur clusters. It has been proposed to act as either an iron chaperone or an iron storage protein. Frataxin mRNA is predominantly expressed in tissues with a high metabolic rate (including liver, kidney, brown fat and heart). Mouse and yeast frataxin homologues contain a potential N-terminal mitochondrial targeting sequence, and human frataxin has been observed to co-localise with a mitochondrial protein. Furthermore, disruption of the yeast gene has been shown to result in mitochondrial dysfunction. Friedreich's ataxia is thus believed to be a mitochondrial disease caused by a mutation in the nuclear genome (specifically, expansion of an intronic GAA triplet repeat in the FXN gene, which encodes the protein frataxin.). Clinical significance Reduced expression of frataxin is the cause of Friedreich's ataxia (FRDA), a neurodegenerative disease. The reduction in frataxin gene expression may be attributable from either the silencing of transcription of the frataxin gene because of epigenetic modifications in the chromosomal entity or from the inability of splicing the expanded GAA repeats in the first intron of the pre-mRNA as seen in bacteria and Human cells or both. The expansion of intronic trinucleotide repeat GAA results in Friedreich's ataxia. This expanded repeat causes R-loop formation, and using a repeat-targeted oligonucleotide to disrupt the R-loop can reactivate frataxin expression. 96% of FRDA patients have a GAA trinucleotide repeat expansion in intron 1 of both alleles of their FXN gene. Overall, this leads to a decrease in frataxin mRNA synthesis and a decrease (but not absence) in frataxin protein in people with FRDA. (A subset of FRDA patients have GAA expansion in one chromosome and a point mutation in the FXN exon in the other chromosome.) In the typical case, the length of the allele with the shorter GAA expansion inversely correlates with frataxin levels. FRDA patients’ peripheral tissues typically have less than 10% of the frataxin levels exhibited by unaffected people. Lower levels of frataxin result in earlier disease onset and faster progression. FRDA is characterized by ataxia, sensory loss, and cardiomyopathy. 
The reason frataxin deficiency causes these symptoms is not entirely clear. On a cellular level, it is linked to iron accumulation in the mitochondria and increased oxidant sensitivity. For reasons that are not well understood, this primarily affects the tissue of the dorsal root ganglia, cerebellum, and heart muscle. Animal studies In mice, complete inactivation of the FXN homolog (Frda) is lethal in the early embryonic stage. Although nearly all organisms express a frataxin homologue, the GAA repeat in intron 1 only exists in humans and other primates, so the mutation that causes FRDA cannot occur naturally in other animals. Scientists have developed several options to model this disease in mice. One approach is to silence frataxin expression in just one specific tissue type of interest: the heart (mice modified this way are called MCK), all neurons (NSE), or just the spinal cord and cerebellum (PRP). Another approach involves inserting a GAA expansion into the first intron of the mouse FXN gene, which should inhibit frataxin production, as in humans. Mice that are homozygous for this modified gene are called KIKI (knock-in knock-in), and the compound heterozygotes formed by crossing KIKI mice with frataxin knockout mice are called KIKO (knock-in knock-out). However, even KIKO mice still express 25-36% of the normal frataxin level, and show very mild symptoms. The final approach involves creating transgenic mice with a GAA-expanded version of the human frataxin gene. These mice are called YG22R (one GAA sequence of 190 repeats) and YG8R (two GAA sequences of 90 and 190 repeats). These mice show symptoms similar to those of human patients. An overexpression of frataxin in Drosophila has shown an increase in antioxidant capability, resistance to oxidative stress insults and longevity, supporting the theory that the role of frataxin is to protect the mitochondria from oxidative stress and the ensuing cellular damage. Fibroblasts from a mouse model of FRDA and FRDA patient fibroblasts show increased levels of DNA double-strand breaks. A lentivirus gene delivery system was used to deliver the frataxin gene to the FRDA mouse model and human patient cells, and this resulted in long-term restored expression of frataxin mRNA and frataxin protein. This restored expression of the frataxin gene was accompanied by a substantial reduction in the number of DNA double-strand breaks. The impaired frataxin in FRDA cells appears to cause reduced capacity for repair of DNA damage and this may contribute to neurodegeneration. Interactions Frataxin has been shown to biologically interact with the enzyme PMPCB. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Friedreich Ataxia Proteins
Frataxin
[ "Chemistry" ]
1,416
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
8,834,676
https://en.wikipedia.org/wiki/Truncated%20triheptagonal%20tiling
In geometry, the truncated triheptagonal tiling is a semiregular tiling of the hyperbolic plane. There is one square, one hexagon, and one tetradecagon (14-sides) on each vertex. It has Schläfli symbol of Uniform colorings There is only one uniform coloring of a truncated triheptagonal tiling. (Naming the colors by indices around a vertex: 123.) Symmetry Each triangle in this dual tiling, order 3-7 kisrhombille, represent a fundamental domain of the Wythoff construction for the symmetry group [7,3]. Related polyhedra and tilings This tiling can be considered a member of a sequence of uniform patterns with vertex figure (4.6.2p) and Coxeter-Dynkin diagram . For p < 6, the members of the sequence are omnitruncated polyhedra (zonohedrons), shown below as spherical tilings. For p > 6, they are tilings of the hyperbolic plane, starting with the truncated triheptagonal tiling. From a Wythoff construction there are eight hyperbolic uniform tilings that can be based from the regular heptagonal tiling. Drawing the tiles colored as red on the original faces, yellow at the original vertices, and blue along the original edges, there are 8 forms. See also Tilings of regular polygons List of uniform planar tilings References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Hyperbolic tilings Isogonal tilings Semiregular tilings Truncated tilings
Truncated triheptagonal tiling
[ "Physics" ]
399
[ "Semiregular tilings", "Truncated tilings", "Isogonal tilings", "Tessellation", "Hyperbolic tilings", "Symmetry" ]
8,834,843
https://en.wikipedia.org/wiki/Lexicographic%20code
Lexicographic codes or lexicodes are greedily generated error-correcting codes with remarkably good properties. They were produced independently by Vladimir Levenshtein and by John Horton Conway and Neil Sloane. The binary lexicographic codes are linear codes, and include the Hamming codes and the binary Golay codes. Construction A lexicode of length n and minimum distance d over a finite field is generated by starting with the all-zero vector and iteratively adding the next vector (in lexicographic order) of minimum Hamming distance d from the vectors added so far. As an example, the length-3 lexicode of minimum distance 2 would consist of the vectors marked by an "X" in the following example: {| class="wikitable" |- ! Vector ! In code? |- | 000 | X |- | 001 | |- | 010 | |- | 011 | X |- | 100 | |- | 101 | X |- | 110 | X |- | 111 | |} Here is a table of all n-bit lexicode by d-bit minimal hamming distance, resulting of maximum 2m codewords dictionnary. For example, F4 code (n=4,d=2,m=3), extended Hamming code (n=8,d=4,m=4) and especially Golay code (n=24,d=8,m=12) shows exceptional compactness compared to neighbors. {| class="wikitable" |- ! n \ d ! 1 ! 2 ! 3 ! 4 ! 5 ! 6 ! 7 ! 8 ! 9 ! 10 ! 11 ! 12 ! 13 ! 14 ! 15 ! 16 ! 17 ! 18 |- ! 1 | 1 | | | | | | | | | | | | | | | | | |- ! 2 | 2 | 1 | | | | | | | | | | | | | | | | |- ! 3 | 3 | 2 | 1 | | | | | | | | | | | | | | | |- ! 4 | 4 | | 1 | 1 | | | | | | | | | | | | | | |- ! 5 | 5 | 4 | 2 | 1 | 1 | | | | | | | | | | | | | |- ! 6 | 6 | 5 | 3 | 2 | 1 | 1 | | | | | | | | | | | | |- ! 7 | 7 | 6 | 4 | 3 | 1 | 1 | 1 | | | | | | | | | | | |- ! 8 | 8 | 7 | 4 | | 2 | 1 | 1 | 1 | | | | | | | | | | |- ! 9 | 9 | 8 | 5 | 4 | 2 | 2 | 1 | 1 | 1 | | | | | | | | | |- ! 10 | 10 | 9 | 6 | 5 | 3 | 2 | 1 | 1 | 1 | 1 | | | | | | | | |- ! 11 | 11 | 10 | 7 | 6 | 4 | 3 | 2 | 1 | 1 | 1 | 1 | | | | | | | |- ! 12 | 12 | 11 | 8 | 7 | 4 | 4 | 2 | 2 | 1 | 1 | 1 | 1 | | | | | | |- ! 13 | 13 | 12 | 9 | 8 | 5 | 4 | 3 | 2 | 1 | 1 | 1 | 1 | 1 | | | | | |- ! 14 | 14 | 13 | 10 | 9 | 6 | 5 | 4 | 3 | 2 | 1 | 1 | 1 | 1 | 1 | | | | |- ! 15 | 15 | 14 | 11 | 10 | 7 | 6 | 5 | 4 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | | | |- ! 16 | 16 | 15 | 11 | 11 | 8 | 7 | 5 | 5 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | | |- ! 17 | 17 | 16 | 12 | 11 | 9 | 8 | 6 | 5 | 3 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | |- ! 18 | 18 | 17 | 13 | 12 | 9 | 9 | 7 | 6 | 3 | 3 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 |- ! 19 | 19 | 18 | 14 | 13 | 10 | 9 | 8 | 7 | 4 | 3 | 2 | 2 | 1 | 1 | 1 | 1 | 1 | 1 |- ! 20 | 20 | 19 | 15 | 14 | 11 | 10 | 9 | 8 | 5 | 4 | 3 | 2 | 2 | 1 | 1 | 1 | 1 | 1 |- ! 21 | 21 | 20 | 16 | 15 | 12 | 11 | 10 | 9 | 5 | 5 | 3 | 3 | 2 | 2 | 1 | 1 | 1 | 1 |- ! 22 | 22 | 21 | 17 | 16 | 12 | 12 | 11 | 10 | 6 | 5 | 4 | 3 | 2 | 2 | 1 | 1 | 1 | 1 |- ! 23 | 23 | 22 | 18 | 17 | 13 | 12 | 12 | 11 | 6 | 6 | 5 | 4 | 2 | 2 | 2 | 1 | 1 | 1 |- ! 24 | 24 | 23 | 19 | 18 | 14 | 13 | 12 | | 7 | 6 | 5 | 5 | 3 | 2 | 2 | 2 | 1 | 1 |- ! 25 | 25 | 24 | 20 | 19 | 15 | 14 | 12 | 12 | 8 | 7 | 6 | 5 | 3 | 3 | 2 | 2 | 1 | 1 |- ! 26 | 26 | 25 | 21 | 20 | 16 | 15 | 12 | 12 | 9 | 8 | 7 | 6 | 4 | 3 | 2 | 2 | 2 | 1 |- ! 27 | 27 | 26 | 22 | 21 | 17 | 16 | 13 | 12 | 9 | 9 | 7 | 7 | 5 | 4 | 3 | 2 | 2 | 2 |- ! 28 | 28 | 27 | 23 | 22 | 18 | 17 | 13 | 13 | 10 | 9 | 8 | 7 | 5 | 5 | 3 | 3 | 2 | 2 |- ! 29 | 29 | 28 | 24 | 23 | 19 | 18 | 14 | 13 | 11 | 10 | 8 | 8 | 6 | 5 | 4 | 3 | 2 | 2 |- ! 30 | 30 | 29 | 25 | 24 | 19 | 19 | 15 | 14 | 12 | 11 | 9 | 8 | 6 | 6 | 5 | 4 | 2 | 2 |- ! 31 | 31 | 30 | 26 | 25 | 20 | 19 | 16 | 15 | 12 | 12 | 10 | 9 | 6 | 6 | 6 | 5 | 3 | 2 |- ! 
32 | 32 | 31 | 26 | 26 | 21 | 20 | 16 | 16 | 13 | 12 | 11 | 10 | 7 | 6 | 6 | 6 | 3 | 3 |- ! 33 | ... | 32 | ... | 26 | ... | 21 | ... | 16 | ... | 13 | ... | 11 | ... | 7 | ... | 6 | ... | 3 |} Every lexicode with an odd minimum distance d is an exact copy of the lexicode with even minimum distance d + 1 with the last coordinate removed, so the odd-distance codes never contain anything new or more interesting than the even d + 1 codes above them. Since lexicodes are linear, they can also be constructed by means of their basis. Implementation The following C program generates a lexicographic code greedily; the parameters are set for the Golay code (N=24, D=8).

#include <stdio.h>
#include <stdlib.h>

#define N 24 /* codeword length in bits; 24 for the Golay code */
#define D 8  /* required minimum Hamming distance */

int main(void)
{
    int i, j, k;

    /* Precompute population counts (number of set bits) of all 16-bit values. */
    static int _pc[1 << 16];
    for (i = 0; i < (1 << 16); i++)
        for (j = 0; j < 16; j++)
            _pc[i] += (i >> j) & 1;
#define pc(X) (_pc[(X) & 0xffff] + _pc[((X) >> 16) & 0xffff])

    /* Codewords accepted so far; 2^N entries is a safe upper bound. */
    unsigned int *z = malloc(sizeof(unsigned int) << N);
    if (z == NULL)
        return 1;

    for (i = j = 0; i < (1 << N); i++) {   /* scan candidates in lexicographic order  */
        for (k = j - 1; k >= 0; k--)       /* compare against all accepted codewords; */
            if (pc(z[k] ^ i) < D)          /* scanning in reverse rejects unsuitable  */
                break;                     /* candidates faster in practice           */
        if (k == -1) {                     /* at distance >= D from every codeword:   */
            for (k = 0; k < N; k++)        /* accept it and print it, LSB first       */
                printf("%d", (i >> k) & 1);
            printf(" : %d\n", j);
            z[j++] = i;
        }
    }
    free(z);
    return 0;
}

Combinatorial game theory The theory of lexicographic codes is closely connected to combinatorial game theory. In particular, the codewords in a binary lexicographic code of distance d encode the winning positions in a variant of Grundy's game, played on a collection of heaps of stones, in which each move consists of replacing any one heap by at most d − 1 smaller heaps, and the goal is to take the last stone. Notes External links Bob Jenkins table of binary lexicodes On-line generator for lexicodes and their variants Error-Correcting Codes on Graphs: Lexicodes, Trellises and Factor Graphs Error detection and correction
Lexicographic code
[ "Engineering" ]
2,100
[ "Error detection and correction", "Reliability engineering" ]
8,835,245
https://en.wikipedia.org/wiki/Radio%20modem
Radio modems are modems that transfer data wirelessly across a range of up to tens of kilometres. Using radio modems is a modern way to create Private Radio Networks (PRN). Private radio networks are used in critical industrial applications, when real-time data communication is needed. Radio modems enable users to be independent of telecommunication or satellite network operators. In most cases users use licensed frequencies either in the UHF or VHF bands. In certain areas licensed frequencies may be reserved for a given user, thus ensuring that there is less likelihood of radio interference from other RF transmitters. Also licence free frequencies are available in most countries, enabling easy implementation, but at the same time other users may use the same frequency, thus making it possible that a given frequency is blocked. Typical users for radio modems are: Land survey differential GPS, fleet management applications, SCADA applications (utility distribution networks), automated meter reading (AMR), telemetry applications and many more. Since applications usually require high reliability of data transfer and very high uptime, radio performance plays a key role. Factors influencing radio performance are: antenna height and type, the sensitivity of the radio, the output power of the radio and the complete system design. See also Flow control (data) SATEL Racom References Wireless networking
Radio modem
[ "Technology", "Engineering" ]
263
[ "Wireless networking", "Computer networks engineering" ]
8,836,159
https://en.wikipedia.org/wiki/Alexander%20Braun
Alexander Carl Heinrich Braun (10 May 1805 – 29 March 1877) was a German botanist from Regensburg, Bavaria. His research centered on the morphology of plants and was a very influential teacher who worked as a professor of botany at the universities of Freiburg, Giessen, and Berlin at various times. He was also the director of the Berlin Botanical Garden. Biography Braun was born in Regensburg (Ratisbon) where his father Alexander was a tax inspector in the postal department. His mother Henriette was the daughter of a priest and mathematics professor. He studied at Karlsruhe and Freiburg (Breisgau) where his father was transferred. He went to the University of Heidelberg to study medicine. His teachers included Gottlieb Wilhelm Bischoff, Johann Heinrich Dierbach and Franz Joseph Schelver. At Heidelberg he studied with Louis Agassiz, Carl Schimper and George Engelmann. Agassiz would marry Braun's sister Cecilie while Schimper was engaged briefly to Braun's sister Emilie. He completed his studies at Paris and Munich. In 1833 he began teaching botany at the Polytechnic School of Karlsruhe, staying there until 1846. Afterwards he was a professor of botany in Freiburg (from 1846), Giessen (from 1850) and at the University of Berlin (1851), where he remained until 1877. While in Berlin, he was also director of the botanical garden. He designed the layout which was later documented by Paul Friedrich August Ascherson. In 1852, he was elected a foreign member of the Royal Swedish Academy of Sciences. With Gottlob Ludwig Rabenhorst (1806–1881) and Ernst Stizenberger (1827–1895), he was editor of the exsiccata series Die Characeen Europa's in getrockneten Exemplaren, unter Mitwirkung mehrerer Freunde der Botanik, gesammelt und herausgegeben von Prof. A. Braun, L. Rabenhorst und E. Stizenberger. Braun is largely known for his research involving plant morphology. He accepted evolution but was a critic of Darwinism. He was a proponent of vitalism, a popular 19th-century speculative theory that claimed that a regulative force existed within living matter in order to maintain functionality. Braun made important contributions in the field of cell theory. His students included August Wilhelm Eichler. From his 1830s analysis of the arrangement of scales on a pine cone he was a pioneer of mathematical phyllotaxis developing what is called the Schimper-Braun theory. In 1877, Wilhelm Philippe Schimper and Philipp Bruch named the plant genus Braunia in his honor. Also, a decorative plant known as "Braun's holly fern" (Polystichum braunii) commemorates his name. Published works 1831: Untersuchung über die Ordnung der Schuppen an den Tannenzapfen (Investigation on the order of shapes in pine cones). 1842: Nachträgliche Mitteilungen über die Gattungen Marsilia und Pilularia (Additional releases on the genera Marsilea and Pilularia). 1851: Betrachtungen über die Erscheinung der Verjüngung in der Natur, insbesondere in der Lebens- und Bildungsgeschichte der Pflanze (Leipzig, 198 pp.) (Reflections on the phenomenon of rejuvenation in nature, particularly in the life and developmental history of the plant). 1852: Über die Richtungsverhältnisse der Saftströme in den Zellen der Characeen. (on directional conditions involving juice flow in the cell of Characeae). 1853: Das Individuum der Pflanze in seinem Verhältnis zur Spezies etc. (The individual plant in its relation to species, etc.). 
1854: Über den schiefen Verlauf der Holzfaser und die dadurch bedingte Drehung der Stämme 1854: Über einige neue und weniger bekannte Krankheiten der Pflanzen, welche durch Pilze erzeugt werden (On new and lesser-known diseases of plants produced by fungi). 1854: Das Individuum der Species in seinem Verhältnis zur Pflanze (The individual of the species in its relationship to the plant). 1855: "Algarum unicellularium genera nova et minus cognita". 1856: Über Chytridium, eine Gattung einzelliger Schmarotzergewächse auf Algen und Infusorien (On Chytridium, a genus of unicellular parasites on algae and infusoria). 1857: Über Parthenogenesis bei Pflanzen (On parthenogenesis in plants) 1860: Über Polyembryonie und Keimung von Caelebogyne (Polyembryony and germination of Caelebogyne). 1861: Index seminum Horti Botanici Berolinensis: Appendix Plantarum Novrum et minus cognitarum quea in Horto region botanico Berolinensi coluntur. 1862: Über die Bedeutung der Morphologie (On the importance of morphology). 1862: Zwei deutsche Isoetesarten (Two German Isoëtes species). 1863: Über Isoetes (On quillworts). 1865: Beitrag zur Kenntnis der Gattung Selaginella (Contribution to the knowledge of the genus Selaginella). 1867: Die Characeen Afrikas (African Characeae). 1867: "Conspectus systematicus Characearum europaearum". 1870: Neuere Untersuchungen über die Gattungen Marsilia und Pilularia (Recent studies on the genera Marsilea and Pilularia). 1872: Über die Bedeutung der Entwicklung in der Naturgeschichte (On the importance of development in natural history). See also University of Freiburg Faculty of Biology Notes References This article is based on a translation of the equivalent article at the German Wikipedia. Biography at Deutsche Biographie. Further reading Alexander Braun. In: Leopoldina — On line: part 1, 1871–1872, p. 50–60 A. W. Eichler. Rede bei der Enthüllung des Denkmals von Alexander Braun'', 1879 External links 1805 births 1877 deaths 19th-century German botanists Biologists from the Kingdom of Prussia Non-Darwinian evolution Scientists from Regensburg Scientists from the Kingdom of Bavaria Heidelberg University alumni University of Paris alumni Ludwig Maximilian University of Munich alumni Academic staff of the Humboldt University of Berlin Academic staff of the University of Freiburg Academic staff of the University of Giessen Members of the Royal Swedish Academy of Sciences Foreign associates of the National Academy of Sciences Vitalists Expatriates in France
Alexander Braun
[ "Biology" ]
1,470
[ "Non-Darwinian evolution", "Biology theories" ]
8,836,418
https://en.wikipedia.org/wiki/120347%20Salacia
Salacia (minor-planet designation: 120347 Salacia) is a large trans-Neptunian object (TNO) in the Kuiper belt, approximately in diameter. It was discovered on 22 September 2004, by American astronomers Henry Roe, Michael Brown and Kristina Barkume at the Palomar Observatory in California, United States. Salacia orbits the Sun at an average distance that is slightly greater than that of Pluto. It was named after the Roman goddess Salacia and has a single known moon, Actaea. Brown estimated that Salacia is nearly certainly a dwarf planet. However, William Grundy et al. argue that objects in the size range of 400–1,000 km, with densities of ≈ 1.2 g/cm3 or less and albedos less than ≈ 0.2, have likely never compressed into fully solid bodies or been resurfaced, let alone differentiated or collapsed into hydrostatic equilibrium, and so are highly unlikely to be dwarf planets. Salacia is at the upper end of this size range and has a very low albedo, though Grundy et al. later found it to have the relatively high density of . Orbit Salacia is a non-resonant object with a moderate eccentricity (0.11) and large inclination (23.9°), making it a scattered–extended object in the classification of the Deep Ecliptic Survey and a hot classical Kuiper belt object in the classification system of Gladman et al., which may be the same thing if they are part of a single population that formed during the outward migration of Neptune. Salacia's orbit is within the parameter space of the Haumea collisional family, but Salacia is not part of it, as evidenced by its lack of the strong water-ice absorption bands. Physical characteristics As of 2019, the total mass of the Salacia–Actaea system is estimated at , with an average system density of ; Salacia itself is estimated to be around 846 km in diameter. Salacia has the lowest albedo of any known large trans-Neptunian object. According to the estimate from 2017 based on an improved thermophysical modelling, the size of Salacia is slightly larger at 866 km and its density therefore slightly lower (calculated at with the old mass estimate discussed below). Salacia was previously believed to have a mass of around , in which case it would also have had the lowest density (around ) of any known large TNO; William Grundy and colleagues proposed that this low density would imply that Salacia never collapsed into a solid body, in which case it would not be in hydrostatic equilibrium. Salacia's infrared spectrum is almost featureless, indicating an abundance of water ice of less than 5% on the surface. Near-infrared spectroscopy by the James Webb Space Telescope (JWST) in 2022 revealed the presence of water ice in Salacia's surface. No signs of volatile ices such as methane were detected in JWST's spectrum of Salacia. Its light-curve amplitude is only 3%. Satellite Salacia has one known natural satellite, Actaea, that orbits its primary every at a distance of and with an eccentricity of . It was discovered on 21 July 2006 by Keith Noll, Harold Levison, Denise Stephens and William Grundy with the Hubble Space Telescope. Actaea is magnitudes fainter than Salacia, implying a diameter ratio of 2.98 for equal albedos. Hence, assuming equal albedos, it has a diameter of According to the estimate from 2017 based on an improved modelling, the size of Actaea is slightly larger at . Actaea has the same color as Salacia (V−I = and , respectively), supporting the assumption of equal albedos. 
It has been calculated that the Salacia system should have undergone enough tidal evolution to circularize their orbits, which is consistent with the low measured eccentricity, but that the primary need not be tidally locked. The ratio of its semi-major axis to its primary's Hill radius is 0.0023, the tightest trans-Neptunian binary with a known orbit. Salacia and Actaea will next occult each other in 2067. Name This minor planet was named after Salacia (), the goddess of salt water and the wife of Neptune. The naming citation was published on 18 February 2011 (). The moon's name, Actaea , was assigned on the same date. Actaea is a nereid or sea nymph. Planetary symbols are no longer used much in astronomy, so Salacia never received a symbol in the astronomical literature. Denis Moskowitz, a software engineer who designed most of the dwarf planet symbols, proposed a stylised hippocamp (, formerly ) as the symbol for Salacia; this symbol is not widely used. See also List of Solar System objects by size Notes References External links (120347) Salacia at Johnston's Archive Salacia: As big as Ceres, but much farther away (Emily Lakdawalla – 2012/06/26) 120347 120347 Discoveries by Michael E. Brown Named minor planets Binary trans-Neptunian objects 20040922
120347 Salacia
[ "Physics", "Astronomy" ]
1,115
[ "Concepts in astronomy", "Unsolved problems in astronomy", "Possible dwarf planets" ]
8,836,590
https://en.wikipedia.org/wiki/Integrated%20Geo%20Systems
Integrated Geo Systems (IGS) is a computational architecture system developed for managing geoscientific data through systems and data integration. Geosciences often involve large volumes of diverse data which have to be processed by computer and graphics intensive applications. The processes involved in processing these large datasets are often so complex that no single applications software can perform all the required tasks. Specialized applications have emerged for specific tasks. To get the required results, it is necessary that all applications software involved in various stages of data processing, analysis and interpretation effectively communicate with each other by sharing data. IGS provides a framework for maintaining an electronic workflow between various geoscience software applications through data connectivity. The main components of IGS are: Geographic information systems as a front end. Format engine for data connectivity link between various geoscience software applications. The format engine uses Output Input Language (OIL), an interpreted language, to define various data formats. An array of geoscience relational databases for data integration. Data highways as internal data formats for each data type. Specialized geoscience applications software as processing modules. Geoscientific processing libraries External links Geological Society Books American Association of Petroleum Geologists Book Store Integrated Geo Systems Research Paper Computer systems
Integrated Geo Systems
[ "Technology", "Engineering" ]
250
[ "Computer engineering", "Computer systems", "Computer science", "Computing stubs", "Computers" ]
8,836,978
https://en.wikipedia.org/wiki/Calypso%20%28electronic%20ticketing%20system%29
Calypso is an international electronic ticketing standard for microprocessor contactless smart cards, originally designed by a group of transit operators from 11 countries including Belgium, Canada, France, Germany, Italy, Latvia, México, Portugal and others. It ensures multiple sources of compatible products, and allows for interoperability between several transport operators in the same area. History Calypso was born in 1993 from a partnership between the Paris transit operator RATP and Innovatron, a company owned by the French smartcard inventor, Roland Moreno. The key features of the scheme were patented by Innovatron. Most European transit operators from Belgium, Germany, France, Italy and Portugal eventually joined the group in the following years. The first use of the technology was in 1996. At the same time, the international standard ISO/IEC 14443 for contactless smart cards was being designed, and the actors of Calypso lobbied strongly to have their technology included in the standard, but Innovatron's patents (and the price of the related royalties) were not compliant with ISO's policy. Therefore, despite their closeness, there are a few significant differences between Calypso's historical contactless protocol and the ISO/IEC 14443 Type B international standard. The actors of Calypso also contributed to the European standard for ticketing data (EN1545). After a few years of trials, the system was rolled out widely in the early 2000s in major European cities such as Strasbourg, Paris, Venice and Lisbon, later followed by Turin, Porto, Marseille, Lyon, and many smaller cities. Calypso has since been extended to other countries such as Belgium, Israel, Canada, Mexico and Colombia. Technical aspects Calypso is based on two main technologies: The microprocessor smartcard, widely used in many monetary transactions; The contactless interface (improperly called RFID) ensuring both remote powering and communication between the reader and the card. A Calypso card, whatever its form (card, watch, mobile phone or other NFC object, etc.), has a microprocessor which contains all the information related to its owner's rights for the application, and which implements the Calypso authentication scheme for security. This distinguishes it from other e-ticketing systems, such as London's Oyster card, where the card is only a memory chip with no processing capabilities. Calypso Networks Association A not-for-profit association, Calypso Networks Association (CNA), was created to bring together the transit network operators using Calypso and the suppliers of Calypso-compliant equipment. This association promotes the standard to new operators and manufacturers, defines the certification policy to guarantee the compatibility of all current and future products, and governs the evolution of the standard. In practice, this technical work is performed mainly by a subcontractor, Spirtech. See also CIPURSE, open security standard for transit fare collection systems by Open Standard for Public Transportation (OSPT) Alliance References External links Calypso Networks Association Radio-frequency identification Contactless smart cards
Calypso (electronic ticketing system)
[ "Engineering" ]
637
[ "Radio-frequency identification", "Radio electronics" ]
8,837,004
https://en.wikipedia.org/wiki/Dihydroartemisinin
Dihydroartemisinin (also known as dihydroqinghaosu, artenimol or DHA) is a drug used to treat malaria. Dihydroartemisinin is the active metabolite of all artemisinin compounds (artemisinin, artesunate, artemether, etc.) and is also available as a drug in itself. It is a semi-synthetic derivative of artemisinin and is widely used as an intermediate in the preparation of other artemisinin-derived antimalarial drugs. It is sold commercially in combination with piperaquine and has been shown to be equivalent to artemether/lumefantrine. Medical use Dihydroartemisinin is used to treat malaria, generally as a combination drug with piperaquine. In a systematic review of randomized controlled trials, both dihydroartemisinin-piperaquine and artemether-lumefantrine are very effective at treating malaria (high quality evidence). However, dihydroartemisinin-piperaquine cures slightly more patients than artemether-lumefantrine, and it also prevents further malaria infections for longer after treatment (high quality evidence). Dihydroartemisinin-piperaquine and artemether-lumefantrine probably have similar side effects (moderate quality evidence). The studies were all conducted in Africa. In studies of people living in Asia, dihydroartemisinin-piperaquine is as effective as artesunate plus mefloquine at treating malaria (moderate quality evidence). Artesunate plus mefloquine probably causes more nausea, vomiting, dizziness, sleeplessness, and palpitations than dihydroartemisinin-piperaquine (moderate quality evidence). Pharmacology and mechanism The proposed mechanism of action of artemisinin involves cleavage of endoperoxide bridges by iron, producing free radicals (hypervalent iron-oxo species, epoxides, aldehydes, and dicarbonyl compounds) which damage biological macromolecules causing oxidative stress in the cells of the parasite. Malaria is caused by apicomplexans, primarily Plasmodium falciparum, which largely reside in red blood cells and itself contains iron-rich heme-groups (in the form of hemozoin). In 2015 artemisinin was shown to bind to a large number targets suggesting that it acts in a promiscuous manner. Recent mechanism research discovered that artemisinin targets a broad spectrum of proteins in the human cancer cell proteome through heme-activated radical alkylation. Chemistry Dihydroartemisinin has a low solubility in water of less than 0.1 g/L. Consequently, its use may result in side effects caused by minor, yet much more soluble, additives (excipients) such as Cremophor EL. The lactone of artemisinin could selectively be reduced with mild hydride-reducing agents, such as sodium borohydride, potassium borohydride, and lithium borohydride to dihydroartemisinin (a lactol) in over 90% yield. It is a novel reduction, because normally lactones cannot be reduced with sodium borohydride under the same reaction conditions (0–5 ˚C in methanol). Reduction with LiAlH4 leads to some rearranged products. It was surprising to find that the lactone was reduced, but that the peroxy group survived. However, the lactone of deoxyartemisinin resisted reduction with sodium borohydride and could only be reduced with diisobutylaluminium hydride to the lactol deoxydihydroartimisinin. These results show that the peroxy group assists the reduction of lactone with sodium borohydride to a lactol, but not to the alcohol which is the over-reduction product. No clear evidence for this reduction process exists. 
Society and culture In combination with piperaquine, brands include D-Artepp (GPSC), Artekin (Holleykin), Diphos (Genix Pharma), TimeQuin (Sami Pharma), Eurartesim (Sigma Tau; by Good Manufacturing Practices) and Duocotecxin (Holley Pharm). As a single agent it is sold as Cotecxin (Zhejiang Holley Nanhu Pharmaceutical Co.). Research Accumulating research suggests that dihydroartemisinin and other artemisinin-based endoperoxide compounds may display activity as experimental cancer chemotherapeutics. Recent pharmacological evidence demonstrates that dihydroartemisinin targets human metastatic melanoma cells with induction of NOXA-dependent mitochondrial apoptosis that occurs downstream of iron-dependent generation of cytotoxic oxidative stress. References Further reading Antimalarial agents Organic peroxides Trioxanes Chinese discoveries Oxygen heterocycles Heterocyclic compounds with 4 rings Tetracyclic compounds Lactols
Dihydroartemisinin
[ "Chemistry" ]
1,082
[ "Organic compounds", "Lactols", "Functional groups", "Organic peroxides" ]
8,837,050
https://en.wikipedia.org/wiki/Copernican%20heliocentrism
Copernican heliocentrism is the astronomical model developed by Nicolaus Copernicus and published in 1543. This model positioned the Sun at the center of the Universe, motionless, with Earth and the other planets orbiting around it in circular paths, modified by epicycles, and at uniform speeds. The Copernican model displaced the geocentric model of Ptolemy that had prevailed for centuries, which had placed Earth at the center of the Universe. Although Copernicus had circulated an outline of his own heliocentric theory to colleagues sometime before 1514, he did not decide to publish it until he was urged to do so later by his pupil Rheticus. Copernicus's challenge was to present a practical alternative to the Ptolemaic model by more elegantly and accurately determining the length of a solar year while preserving the metaphysical implications of a mathematically ordered cosmos. Thus, his heliocentric model retained several of the Ptolemaic elements, causing inaccuracies, such as the planets' circular orbits, epicycles, and uniform speeds, while at the same time using ideas such as: The Earth is one of several planets revolving around a stationary sun in a determined order. The Earth has three motions: daily rotation, annual revolution, and annual tilting of its axis. Retrograde motion of the planets is explained by the Earth's motion. The distance from the Earth to the Sun is small compared to the distance from the Sun to the stars. Background Antiquity Philolaus (4th century BCE) was one of the first to hypothesize movement of the Earth, probably inspired by Pythagoras' theories about a spherical, moving globe. In the 3rd century BCE, Aristarchus of Samos proposed what was, so far as is known, the first serious model of a heliocentric Solar System, having developed some of Heraclides Ponticus' theories (speaking of a "revolution of the Earth on its axis" every 24 hours). Though his original text has been lost, a reference in Archimedes' book The Sand Reckoner (Archimedis Syracusani Arenarius & Dimensio Circuli) describes a work in which Aristarchus advanced the heliocentric model, hypothesizing that the fixed stars and the Sun remain unmoved while the Earth revolves around the Sun. It is a common misconception that the heliocentric view was rejected by the contemporaries of Aristarchus. This is the result of Gilles Ménage's translation of a passage from Plutarch's On the Apparent Face in the Orb of the Moon. Plutarch reported that Cleanthes (a contemporary of Aristarchus and head of the Stoics), a worshiper of the Sun and opponent of the heliocentric model, was jokingly told by Aristarchus that he should be charged with impiety. Ménage, writing shortly after the trials of Galileo and Giordano Bruno, emended an accusative (the object of the verb) to a nominative (the subject of the sentence), and vice versa, so that the impiety accusation fell instead upon the proponent of heliocentrism. The resulting misconception of an isolated and persecuted Aristarchus is still transmitted today. Ptolemaic system The prevailing astronomical model of the cosmos in Europe in the 1,400 years leading up to the 16th century was the Ptolemaic System, a geocentric model created by the Roman citizen Claudius Ptolemy in his Almagest, dating from about 150 CE. Throughout the Middle Ages it was spoken of as the authoritative text on astronomy, although its author remained a little-understood figure frequently mistaken for one of the Ptolemaic rulers of Egypt. The Ptolemaic system drew on many previous theories that viewed Earth as a stationary center of the universe. 
Stars were embedded in a large outer sphere which rotated relatively rapidly, while the planets dwelt in smaller spheres between—a separate one for each planet. To account for apparent anomalies in this view, such as the apparent retrograde motion of the planets, a system of deferents and epicycles was used. The planet was said to revolve in a small circle (the epicycle) about a center, which itself revolved in a larger circle (the deferent) about a center on or near the Earth. A complementary theory to Ptolemy's employed homocentric spheres: the spheres within which the planets rotated could themselves rotate somewhat. This theory predated Ptolemy (it was first devised by Eudoxus of Cnidus; by the time of Copernicus it was associated with Averroes). Also popular with astronomers were variations such as eccentrics—by which the rotational axis was offset and not completely at the center. The planets were also observed to exhibit irregular motions that deviated from a uniform, circular path: over long periods of observation they appeared at times to reverse their course against the background stars, and accounting for this retrograde motion was the principal reason the epicyclic pathways were introduced. Ptolemy's unique contribution to this theory was the equant—a point about which the center of a planet's epicycle moved with uniform angular velocity, but which was offset from the center of its deferent. This violated one of the fundamental principles of Aristotelian cosmology—namely, that the motions of the planets should be explained in terms of uniform circular motion, and was considered a serious defect by many medieval astronomers. Aryabhata In 499 CE, the Indian astronomer and mathematician Aryabhata, influenced by Greek astronomy, propounded a planetary model that explicitly incorporated Earth's rotation about its axis, which he explained as the cause of the apparent westward motion of the stars. He also believed that the orbits of planets are elliptical. Aryabhata's followers were particularly strong in South India, where his principles of the diurnal rotation of Earth, among others, were followed and a number of secondary works were based on them. Middle Ages Islamic astronomers Several Islamic astronomers questioned the Earth's apparent immobility and centrality within the universe. Some accepted that the Earth rotates around its axis, such as Al-Sijzi, who invented an astrolabe based on a belief held by some of his contemporaries "that the motion we see is due to the Earth's movement and not to that of the sky". That others besides Al-Sijzi held this view is further confirmed by a reference from an Arabic work in the 13th century which states: "According to the geometers [or engineers] (muhandisīn), the earth is in constant circular motion, and what appears to be the motion of the heavens is actually due to the motion of the earth and not the stars". In the 12th century, Nur ad-Din al-Bitruji proposed a complete alternative to the Ptolemaic system (although not a heliocentric one). He declared the Ptolemaic system to be an imaginary model, successful at predicting planetary positions but not real or physical. Al-Bitruji's alternative system spread through most of Europe during the 13th century. 
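As a brief aside on the deferent-and-epicycle construction described earlier in this section, the geometry is simple enough to sample numerically. The following is a minimal sketch with arbitrary illustrative radii and angular speeds (they are not historical Ptolemaic parameters); it shows that the combined motion periodically makes the apparent geocentric angle run backwards, which is the retrograde effect the construction was designed to reproduce.

import math

def ptolemaic_position(t, R_deferent=10.0, r_epicycle=3.0,
                       w_deferent=1.0, w_epicycle=5.0):
    # Centre of the epicycle moves uniformly around the deferent,
    # which is centred on (or near) the Earth at the origin.
    cx = R_deferent * math.cos(w_deferent * t)
    cy = R_deferent * math.sin(w_deferent * t)
    # The planet moves uniformly around that moving centre.
    return (cx + r_epicycle * math.cos(w_epicycle * t),
            cy + r_epicycle * math.sin(w_epicycle * t))

# Sample the apparent geocentric angle and count the steps on which it
# decreases (apparent backward, i.e. retrograde, drift).
angles = []
for i in range(200):
    x, y = ptolemaic_position(i * 0.05)
    angles.append(math.atan2(y, x))
retrograde = sum(1 for a, b in zip(angles, angles[1:])
                 if (b - a + math.pi) % (2 * math.pi) - math.pi < 0)
print(f"{retrograde} of {len(angles) - 1} steps show retrograde drift")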
Mathematical techniques developed in the 13th to 14th centuries by the Arab and Persian astronomers Mu'ayyad al-Din al-Urdi, Nasir al-Din al-Tusi, and Ibn al-Shatir for geocentric models of planetary motions closely resemble some of the techniques used later by Copernicus in his heliocentric models. European astronomers post-Ptolemy Martianus Capella (5th century CE) expressed the opinion that the planets Venus and Mercury did not go about the Earth but instead circled the Sun. Capella's model was discussed in the Early Middle Ages by various anonymous 9th-century commentators, and Copernicus mentions him as an influence on his own work. Macrobius (420 CE) described a heliocentric model. John Scotus Eriugena (815–877 CE) proposed a model reminiscent of the one later put forward by Tycho Brahe. From the 13th century onward, European scholars were well aware of problems with Ptolemaic astronomy. The debate was precipitated by the reception of Averroes' criticism of Ptolemy, and it was revived again by the recovery of Ptolemy's text and its translation into Latin in the mid-15th century. Otto E. Neugebauer in 1957 argued that the debate in 15th-century Latin scholarship must also have been informed by the criticism of Ptolemy produced after Averroes by the Ilkhanid-era (13th to 14th centuries) Persian school of astronomy associated with the Maragheh observatory (especially the works of al-Urdi, al-Tusi and al-Shatir). It has been argued that Copernicus could have independently discovered the Tusi couple or taken the idea from Proclus's Commentary on the First Book of Euclid, which Copernicus cited. Another possible source for Copernicus' knowledge of this mathematical device is the Questiones de Spera of Nicole Oresme, who described how a reciprocating linear motion of a celestial body could be produced by a combination of circular motions similar to those proposed by al-Tusi. In Copernicus' day, the most up-to-date version of the Ptolemaic system was that of Georg von Peuerbach (1423–1461) and his student Regiomontanus (1436–1476). The state of the question as received by Copernicus is summarized in the Theoricae novae planetarum by Peuerbach, compiled from lecture notes by Regiomontanus in 1454 but not printed until 1472. Peuerbach attempts to give a new, mathematically more elegant presentation of Ptolemy's system, but he does not arrive at heliocentrism. Regiomontanus was the teacher of Domenico Maria Novara da Ferrara, who was in turn the teacher of Copernicus. There is a possibility that Regiomontanus had already arrived at a theory of heliocentrism before his death in 1476, as he paid particular attention to the heliocentric theory of Aristarchus in a late work and mentions the "motion of the Earth" in a letter. By 1470, the accuracy of observations by the Vienna school of astronomy, of which Peuerbach and Regiomontanus were members, was high enough to make the eventual development of heliocentrism inevitable; if Regiomontanus did arrive at an explicit theory of heliocentrism before his death, it would have been some 30 years before Copernicus. 
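The Tusi couple mentioned above admits a compact modern statement, given here for reference; this is the standard textbook form, not a transcription of al-Tusi's or Oresme's own presentations. A small circle of radius r rolls without slipping inside a fixed circle of radius R; a marked point on the rolling circle traces the hypocycloid

\[
  x(\theta) = (R - r)\cos\theta + r\cos\!\left(\frac{R - r}{r}\,\theta\right), \qquad
  y(\theta) = (R - r)\sin\theta - r\sin\!\left(\frac{R - r}{r}\,\theta\right),
\]
\[
  R = 2r \;\Longrightarrow\; x(\theta) = 2r\cos\theta, \quad y(\theta) = 0 .
\]

With R = 2r the curve degenerates to a straight line, so the marked point simply oscillates back and forth along a fixed diameter: a reciprocating linear motion obtained purely from uniform circular motions, which is the effect the passage above attributes to al-Tusi and Oresme.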
Copernican theory Copernicus' major work, De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres; first edition 1543 in Nuremberg, second edition 1566 in Basel), was a compendium of six books published during the year of his death, though he had arrived at his theory several decades earlier. The work marks the beginning of the shift away from a geocentric (and anthropocentric) universe with the Earth at its center. Copernicus held that the Earth is another planet revolving around the fixed Sun once a year and turning on its axis once a day. But while Copernicus put the Sun at the center of the celestial spheres, he did not put it at the exact center of the universe, but near it. Copernicus' system used only uniform circular motions, correcting what was seen by many as the chief inelegance in Ptolemy's system. The Copernican model replaced Ptolemy's equant circles with additional epicycles. The roughly 1,500 years of observations accumulated under Ptolemy's model gave Copernicus a more accurate record of the planets' motions to account for, and this is the main reason that his system contained even more epicycles than Ptolemy's. The additional epicycles yielded somewhat more accurate predictions of where the planets were truly positioned, "although not enough to get excited about". The Copernican system can be summarized in several propositions, as Copernicus himself did in his early Commentariolus, which he handed only to friends, probably in the 1510s. The "little commentary" was never printed. Its existence was only known indirectly until a copy was discovered in Stockholm around 1880, and another in Vienna a few years later. The major features of Copernican theory are: Heavenly motions are uniform, eternal, and circular or compounded of several circles (epicycles). The center of the universe is near the Sun. Around the Sun, in order, are Mercury, Venus, the Earth and Moon, Mars, Jupiter, Saturn, and the fixed stars. The Earth has three motions: daily rotation, annual revolution, and annual tilting of its axis. Retrograde motion of the planets is explained by the Earth's motion. The distance from the Earth to the Sun is small compared to the distance to the stars. Inspiration came to Copernicus not from observation of the planets, but from reading two authors, Cicero and Plutarch. In Cicero's writings, Copernicus found an account of the theory of Hicetas. Plutarch provided an account of the Pythagoreans Heraclides Ponticus, Philolaus, and Ecphantes. These authors had proposed a moving Earth, which did not revolve around a central Sun. Copernicus cited Aristarchus and Philolaus in an early manuscript of his book which survives, stating: "Philolaus believed in the mobility of the earth, and some even say that Aristarchus of Samos was of that opinion". For unknown reasons (although possibly out of reluctance to quote pre-Christian sources), Copernicus did not include this passage in the publication of his book. Copernicus used what is now known as the Urdi lemma and the Tusi couple in the same planetary models as found in Arabic sources. Furthermore, the exact replacement of the equant by two epicycles used by Copernicus in the Commentariolus was found in an earlier work by al-Shatir. Al-Shatir's lunar and Mercury models are also identical to those of Copernicus. This has led some scholars to argue that Copernicus must have had access to some yet-to-be-identified work on the ideas of those earlier astronomers. 
However, no likely candidate for this conjectured work has come to light, and other scholars have argued that Copernicus could well have developed these ideas independently of the late Islamic tradition. Nevertheless, Copernicus cited some of the Islamic astronomers whose theories and observations he used in De Revolutionibus, namely al-Battani, Thabit ibn Qurra, al-Zarqali, Averroes, and al-Bitruji. It has been suggested that the idea of the Tusi couple may have arrived in Europe leaving few manuscript traces, since it could have occurred without the translation of any Arabic text into Latin. One possible route of transmission may have been through Byzantine science; Gregory Chioniades translated some of al-Tusi's works from Arabic into Byzantine Greek. Several Byzantine Greek manuscripts containing the Tusi-couple are still extant in Italy. De revolutionibus orbium coelestium When Copernicus' compendium was published, it contained an unauthorized, anonymous preface by a friend of Copernicus, the Lutheran theologian Andreas Osiander. This cleric stated that Copernicus wrote his heliocentric account of the Earth's movement as a mathematical hypothesis, not as an account that contained truth or even probability. Since Copernicus' hypothesis was believed to contradict the Old Testament account of the Sun's movement around the Earth (Joshua 10:12-13), this was apparently written to soften any religious backlash against the book. However, there is no evidence that Copernicus himself considered the heliocentric model as merely mathematically convenient, separate from reality. Copernicus' actual compendium began with a letter from his (by then deceased) friend Nikolaus von Schönberg, Cardinal Archbishop of Capua, urging Copernicus to publish his theory. Then, in a lengthy introduction, Copernicus dedicated the book to Pope Paul III, explaining his ostensible motive in writing the book as relating to the inability of earlier astronomers to agree on an adequate theory of the planets, and noting that if his system increased the accuracy of astronomical predictions it would allow the Church to develop a more accurate calendar. At that time, a reform of the Julian Calendar was considered necessary and was one of the major reasons for the Church's interest in astronomy. The work itself is divided into six books: The first is a general vision of the heliocentric theory, and a summarized exposition of his idea of the World. The second is mainly theoretical, presenting the principles of spherical astronomy and a list of stars (as a basis for the arguments developed in the subsequent books). The third is mainly dedicated to the apparent motions of the Sun and to related phenomena. The fourth is a description of the Moon and its orbital motions. The fifth is a concrete exposition of the new system, including planetary longitude. The sixth is further concrete exposition of the new system, including planetary latitude. Early criticisms From publication until about 1700, few astronomers were convinced by the Copernican system, though the work was relatively widely circulated (around 500 copies of the first and second editions have survived, which is a large number by the scientific standards of the time). Few of Copernicus' contemporaries were ready to concede that the Earth actually moved. 
Even forty-five years after the publication of De Revolutionibus, the astronomer Tycho Brahe went so far as to construct a cosmology precisely equivalent to that of Copernicus, but with the Earth held fixed in the center of the celestial sphere instead of the Sun. It was another generation before a community of practicing astronomers appeared who accepted heliocentric cosmology. For his contemporaries, the ideas presented by Copernicus were not markedly easier to use than the geocentric theory and did not produce more accurate predictions of planetary positions. Copernicus was aware of this and could not present any observational "proof", relying instead on arguments about what would be a more complete and elegant system. The Copernican model appeared to be contrary to common sense and to contradict the Bible. Tycho Brahe's arguments against Copernicus are illustrative of the physical, theological, and even astronomical grounds on which heliocentric cosmology was rejected. Tycho, arguably the most accomplished astronomer of his time, appreciated the elegance of the Copernican system, but objected to the idea of a moving Earth on the basis of physics, astronomy, and religion. The Aristotelian physics of the time (modern Newtonian physics was still a century away) offered no physical explanation for the motion of a massive body like Earth, but could easily explain the motion of heavenly bodies by postulating that they were made of a different sort of substance called aether that moved naturally. So Tycho said that the Copernican system "... expertly and completely circumvents all that is superfluous or discordant in the system of Ptolemy. On no point does it offend the principle of mathematics. Yet it ascribes to the Earth, that hulking, lazy body, unfit for motion, a motion as quick as that of the aethereal torches, and a triple motion at that." Thus many astronomers accepted some aspects of Copernicus's theory at the expense of others. Copernican Revolution The Copernican Revolution, a paradigm shift from the Ptolemaic model of the heavens, which described the cosmos as having Earth as a stationary body at the center of the universe, to the heliocentric model with the Sun at the center of the Solar System, spanned over a century, beginning with the publication of Copernicus' De revolutionibus orbium coelestium and ending with the work of Isaac Newton. While not warmly received by his contemporaries, his model did have a large influence on later scientists such as Galileo and Johannes Kepler, who adopted, championed and (especially in Kepler's case) sought to improve it. However, in the years following publication of de Revolutionibus, for leading astronomers such as Erasmus Reinhold, the key attraction of Copernicus's ideas was that they reinstated the idea of uniform circular motion for the planets. During the 17th century, several further discoveries eventually led to the wider acceptance of heliocentrism: Using detailed observations by Tycho Brahe, Kepler discovered Mars's orbit was an ellipse with the Sun at one focus, and its speed varied with its distance from the Sun. This discovery was detailed in his 1609 book Astronomia nova along with the claim that all planets had elliptical orbits and non-uniform motion, stating "And finally... the sun itself... will melt all this Ptolemaic apparatus like butter". 
Using the newly invented telescope, in 1610 Galileo observed the four large moons of Jupiter (evidence that the Solar System contained bodies that did not orbit Earth), the phases of Venus (more observational evidence not properly explained by the Ptolemaic theory) and the rotation of the Sun about a fixed axis: as indicated by the apparent annual variation in the motion of sunspots; With a telescope, Giovanni Zupi saw the phases of Mercury in 1639; Isaac Newton in 1687 proposed universal gravity and the inverse-square law of gravitational attraction to explain Kepler's elliptical planetary orbits. Modern views Substantially correct From a modern point of view, the Copernican model has a number of advantages. Copernicus gave a clear account of the cause of the seasons: that the Earth's axis is not perpendicular to the plane of its orbit. In addition, Copernicus's theory provided a strikingly simple explanation for the apparent retrograde motions of the planets—namely as parallactic displacements resulting from the Earth's motion around the Sun—an important consideration in Johannes Kepler's conviction that the theory was substantially correct. In the heliocentric model the planets' apparent retrograde motions' occurring at opposition to the Sun are a natural consequence of their heliocentric orbits. In the geocentric model, however, these are explained by the ad hoc use of epicycles, whose revolutions are mysteriously tied to that of the Sun. Modern historiography Whether Copernicus' propositions were "revolutionary" or "conservative" has been a topic of debate in the historiography of science. In his book The Sleepwalkers: A History of Man's Changing Vision of the Universe (1959), Arthur Koestler attempted to deconstruct the Copernican "revolution" by portraying Copernicus as a coward who was reluctant to publish his work due to a crippling fear of ridicule. Thomas Kuhn argued that Copernicus only transferred "some properties to the Sun's many astronomical functions previously attributed to the earth." Historians have since argued that Kuhn underestimated what was "revolutionary" about Copernicus' work, and emphasized the difficulty Copernicus would have had in putting forward a new astronomical theory relying alone on simplicity in geometry, given that he had no experimental evidence. See also Copernican principle Notes References Further reading Analyses the varieties of argument used by Copernicus in De revolutionibus. External links Heliocentric Pantheon History of astronomy Nicolaus Copernicus Copernican Revolution
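Two of the technical claims in the article above can be made concrete with a short numerical sketch. Kepler's result is that each planet moves on an ellipse with the Sun at one focus, r = a(1 − e²)/(1 + e cos θ) in polar form, sweeping out equal areas in equal times; and the Copernican explanation of retrograde motion is that it is a parallactic effect of the Earth's own orbital motion. The sketch below is a deliberate simplification: it uses coplanar circular orbits with rounded values for the orbital radii and periods of Earth and Mars, which is already enough to make Mars's apparent longitude run backwards around opposition.

import math

# Coplanar circular orbits: (radius in AU, period in years).  Rounded,
# illustrative values; real orbits are Keplerian ellipses.
EARTH = (1.00, 1.00)
MARS = (1.52, 1.88)

def heliocentric(body, t):
    a, period = body
    ang = 2 * math.pi * t / period
    return a * math.cos(ang), a * math.sin(ang)

def geocentric_longitude(t):
    # Apparent ecliptic longitude of Mars as seen from the moving Earth.
    ex, ey = heliocentric(EARTH, t)
    mx, my = heliocentric(MARS, t)
    return math.atan2(my - ey, mx - ex)

# Scan two years in steps of about 1.8 days and count the steps on which
# the apparent longitude decreases: retrograde motion produced purely by
# the parallax of Earth's motion, with no epicycles anywhere in the model.
prev = geocentric_longitude(0.0)
retrograde = 0
for i in range(1, 400):
    cur = geocentric_longitude(i * 0.005)
    if (cur - prev + math.pi) % (2 * math.pi) - math.pi < 0:
        retrograde += 1
    prev = cur
print(f"Mars appears retrograde on {retrograde} of 399 sampled steps")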
Copernican heliocentrism
[ "Astronomy" ]
4,927
[ "Copernican Revolution", "History of astronomy" ]
8,837,155
https://en.wikipedia.org/wiki/Hakon%20Haugnes
Hakon Haugnes is one of the founders of the .name top-level domain founded and launched by Global Name Registry (GNR) in 2000/2001. Previously Mr Haugnes was a co-founder of Nameplanet.com, which provided personalized email addresses to 1 million users in March 2000. Hakon Haugnes is CFO/COO of Andurand Capital and responsible for all operational and financial (non-investment) aspects of the company. Hakon was previously Risk Manager for BlueGold Capital (2010-2012), reporting to the CFO and CIO on all risk management aspects of the hedge fund which at its peak managed over US$2 billion. Mr Haugnes also developed BlueGold's information systems and headed up the in-house development team. Hakon was Business Analyst for BlueGold from 2009 to 2010. Prior to BlueGold, Haugnes was co-founder and president of Global Name Registry, a private company which was sold in Q4 2008 to VeriSign Inc (NASDAQ:VRSN). He served with the Norwegian Armed Forces as Strategist and holds a master's degree (honours) in mathematical modelling from the Institute of Cybernetics at the Norwegian Institute of Science and Technology (NTNU) and studied engineering at Institut National des Sciences Appliquees (INSA) in Toulouse, France. External links Global Name Registry Nameplanet.com References People in information technology Living people Year of birth missing (living people) Place of birth missing (living people)
Hakon Haugnes
[ "Technology" ]
320
[ "People in information technology", "Information technology" ]
8,837,273
https://en.wikipedia.org/wiki/AHCC
Active hexose correlated compound (AHCC) is an alpha-glucan rich nutritional supplement produced from shiitake (Lentinula edodes). The product is a subject of research as a potential anti-cancer agent. AHCC is a popular alternative medicine in Japan. AHCC is a registered trademark of and manufactured by Amino Up Co., Ltd. in Sapporo City, Hokkaido, Japan. Development and chemical composition AHCC was developed by Amino Up Co., Ltd. and Toshihiko Okamoto (School of Pharmaceutical Sciences, University of Tokyo) in 1989. Polysaccharides form a large part of the composition of AHCC. These include beta-glucan (β-glucan) and partially acylated α-glucan. Partially acylated α-glucan, produced by the patented long-term culturing process, is unique to AHCC. Approximately 20% of the makeup of AHCC is α-glucans. Glucans are saccharides, of which some are known to have immune-stimulating effects. Potential mechanisms of action The manufacturer of AHCC, Amino Up Co., Ltd., states that the culturing process utilized in its manufacture favors the release of small bioactive molecules that act as nontoxic agonists for toll-like receptors (TLRs), specifically TLR-4, initiating a systemic anti-inflammatory response. AHCC is believed to bind to TLR-2 and TLR-4 and act as an immune modulator: immune cells such as CD4+ and CD8+ T cells and natural killer (NK) cells produce cytokines in response either to stimulation by dendritic cells or to ligand binding to TLRs. Use in integrative medicine AHCC is widely used around the world for general health maintenance and for the treatment of various diseases. It is often used as a complementary and alternative medicine (CAM) for immune support, as reports in animal and clinical settings have indicated that AHCC is associated with an enhanced response to infection and increased survival. AHCC is in some cases also used by those undergoing conventional cancer therapy (e.g. chemotherapy) for its reported immunomodulatory functions. In Japan, AHCC is the 2nd most popular complementary and alternative medicine used by cancer patients. Agaricus blazei supplements are the most popular, outpacing AHCC use by a factor of 7:1. Research Laboratory research suggests AHCC may have immunostimulatory effects. AHCC has been proposed as a treatment for cancer, but research into its effectiveness has produced only uncertain and inconclusive evidence. Detailed research is needed into the pharmacology of AHCC before any recommendation of its use as an adjuvant therapy can be made. Studies have suggested that AHCC supplementation may affect immune outcomes and immune cell populations, consistent with an anti-inflammatory effect. Moreover, available data suggest that AHCC may reduce symptoms, improve survival, and shorten recovery time in animal models infected with viruses, bacteria, and fungi. See also Alternative cancer treatments Agaricus blazei mushroom Medicinal mushrooms Shiitake References Immune system Cancer research Dietary supplements
AHCC
[ "Biology" ]
669
[ "Immune system", "Organ systems" ]
8,837,306
https://en.wikipedia.org/wiki/World%20Energy%20Outlook
The annual World Energy Outlook (WEO) is the International Energy Agency's (IEA) flagship publication on global energy projections and analysis. It contains medium- to long-term energy market projections, extensive statistics, analysis and advice for both governments and the energy business regarding energy security, environmental protection and economic development. The first WEO was published in 1977 and it has been an annual publication since 1998. The World Energy Outlook uses three scenarios to examine future energy trends. The Net Zero Emissions by 2050 Scenario is normative, in that it is designed to achieve specific outcomes – an emissions trajectory consistent with keeping the temperature rise in 2100 below 1.5 °C (with a 50% probability), universal access to modern energy services and major improvements in air quality – and shows a pathway to reach it. The Announced Pledges Scenario and the Stated Policies Scenario are exploratory, in that they define a set of starting conditions, such as policies and targets, and then see where they lead based on model representations of energy systems, including market dynamics and technological progress. The scenarios are not predictions but enable policy-makers and other readers to compare different possible versions of the future and the levers and actions that produce them, with the aim of stimulating insights about the future of global energy. Since 1993, the IEA has provided medium- to long-term energy projections using a continually evolving set of modelling tools. In 2021, the IEA adopted the Global Energy and Climate Model to develop the world's first comprehensive study of how to transition to an energy system with net zero CO2 emissions by 2050. This model is now the principal tool used to generate detailed sector-by-sector and region-by-region long-term scenarios for the World Energy Outlook and other IEA publications. World Energy Outlook Reports by Year See also World energy supply and consumption References External links The International Energy Agency Articles on energy in the OECD Observer World Energy Outlook 2007: "Everything is Getting Worse" Interview with Fatih Birol, IEA chief economist Energy policy Energy economics International Energy Agency
World Energy Outlook
[ "Environmental_science" ]
423
[ "Energy economics", "Environmental social science", "Energy policy" ]
8,837,430
https://en.wikipedia.org/wiki/Virtual%20Interface%20Architecture
The Virtual Interface Architecture (VIA) is an abstract model of a user-level zero-copy network, and is the basis for InfiniBand, iWARP and RoCE. Created by Microsoft, Intel, and Compaq, the original VIA sought to standardize the interface for high-performance network technologies known as System Area Networks (SANs; not to be confused with Storage Area Networks). Networks are a shared resource. With traditional network APIs such as the Berkeley socket API, the kernel is involved in every network communication. This presents a tremendous performance bottleneck when latency is an issue. One of the classic developments in computing systems is virtual memory, a combination of hardware and software that creates the illusion of private memory for each process. In the same school of thought, a virtual network interface protected across process boundaries could be accessed at the user level. With this technology, the "consumer" manages its own buffers and communication schedule while the "provider" handles the protection. Thus, the network interface card (NIC) provides a "private network" for a process, and a process is usually allowed to have multiple such networks. The virtual interface (VI) of VIA refers to this network and is merely the destination of the user's communication requests. Communication takes place over a pair of VIs, one on each of the processing nodes involved in the transmission. In "kernel-bypass" communication, the user manages its own buffers. Another facet of traditional networks is that arriving data is placed in a pre-allocated buffer and then copied to the user-specified final destination. Copying large messages can take a long time, and so eliminating this step is beneficial. Another classic development in computing systems is direct memory access (DMA), in which a device can access main memory directly while the CPU is free to perform other tasks. In a network with "remote direct memory access" (RDMA), the sending NIC uses DMA to read data in the user-specified buffer and transmit it as a self-contained message across the network. The receiving NIC then uses DMA to place the data into the user-specified buffer. There is no intermediary copying and all of these actions occur without the involvement of the CPUs, which has an added benefit of lower CPU utilization. For the NIC to actually access the data through DMA, the user's page must be in memory. In VIA, the user must "pin-down" its buffers before transmission, so as to prevent the OS from swapping the page out to the disk. This action—one of the few that involve the kernel—ties the page to physical memory. To ensure that only the process that owns the registered memory may access it, the VIA NICs require permission keys known as "protection tags" during communication. So essentially VIA is a standard that defines kernel bypassing and RDMA in a network. It also defines a programming library called "VIPL". It has been implemented, most notably in cLAN from Giganet (now Emulex). Mostly though, VIA's major contribution has been in providing a basis for the InfiniBand, iWARP and RoCE standards. External links Usenix Notes On VIA Distributed Enterprise Networks Virtual Interface Architecture, a book from Intel Supercomputing Computer networks engineering
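The data-path ordering described above (register and pin a buffer once, then post descriptors to a virtual interface without any kernel call or intermediate copy) can be sketched as a small mock. Every class and method name below is invented for illustration only; this is not the real VIPL API, and the "NIC" is just an object that prints what a VIA-capable adapter would do in hardware.

class MockViaNic:
    # Stands in for the descriptor-processing and DMA engine of a VIA NIC.
    def __init__(self):
        self._next_tag = 1
        self._regions = {}

    def register_memory(self, buf):
        # A real implementation would pin the pages (one kernel-mediated
        # step) and program the NIC's translation/protection tables,
        # returning a protection tag checked on every later access.
        tag = self._next_tag
        self._next_tag += 1
        self._regions[tag] = buf
        return tag

    def post_send(self, tag, offset, length):
        # Fast path: the user writes a descriptor naming a registered
        # region; the NIC validates the protection tag and DMA-reads the
        # data straight from the user buffer -- no kernel, no extra copy.
        buf = self._regions[tag]
        payload = bytes(buf[offset:offset + length])
        print(f"NIC DMA-read {length} bytes from registered region {tag}")
        return payload

nic = MockViaNic()
message = bytearray(b"hello over a virtual interface")
ptag = nic.register_memory(message)      # setup path (kernel involved once)
nic.post_send(ptag, 0, len(message))     # data path (kernel bypassed)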
Virtual Interface Architecture
[ "Technology", "Engineering" ]
677
[ "Supercomputing", "Computer engineering", "Computer network stubs", "Computer networks engineering", "Computing stubs" ]
8,837,864
https://en.wikipedia.org/wiki/Female%20bonding
In ethology and social science, female bonding is the formation of a close personal relationship and patterns of friendship, attachment, and cooperation in females. Examples Within the context of human relationships, the definition and display of female bonding can depend on multiple factors such as age, sexual orientation, culture, race and marital status. For example, some studies have shown relatively strong evidence of female bonding among single women. This cohort of women tends to see each other as lifelong confidants, owing to the absence of a lifelong commitment to a spouse; the lack of such a commitment allows women to develop and maintain strong ties with other single female friends. Female bonding can be further explored within the human context of relationships within the family. For example, the positive mother-daughter ties which develop have been described as providing immense emotional, financial and instrumental support, indicating that female bonding is present. In another study, a mother described her daughters as "more like sisters, communicating that equality...was an essential feature of their current relationships. They used the language of companionate ties..." In addition to mother-daughter ties, sibling ties can be examined for further examples of female bonding. There is much evidence that sister-sister ties are the strongest of the possible combinations of gendered sibling ties. In a recent study, an interviewee described the relationship shared with her sister as the most enduring and intimate of her life. This further illustrates the emotional sharing that is said to be the primary foundation of female bonding. There is also evidence from animal studies regarding a genetic basis for female bonding. A study that "investigated the social network structure of an embayment population of Indo-Pacific bottlenose dolphins, Tursiops aduncus, ... examined the impact of sex...in maintaining the cohesion of the social network." The results of this study indicated a "greater influence on female[s] than on male social relationships, as association strength was positively correlated with genetic relatedness between females". See also Affectional orientation Cross-sex friendship Feminine psychology Homosociality Human bonding Lesbian Male bonding Social connection Womance References Further reading Friendship Interpersonal relationships
Female bonding
[ "Biology" ]
470
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
8,837,975
https://en.wikipedia.org/wiki/ISO/TC%20215
The ISO/TC 215 is the International Organization for Standardization's (ISO) Technical Committee (TC) on health informatics. TC 215 works on the standardization of Health Information and Communications Technology (ICT), to allow for compatibility and interoperability between independent systems. Working Groups ISO TC 215 consists of several Working Groups (WG), each dealing with an aspect of Electronic Health Records (EHR). Leadership The Technical Committee Chairs are: Standards You can search for published standards and those under development here: https://www.iso.org/committee/54960/x/catalogue/p/0/u/1/w/0/d/0 See also Medical record Electronic medical record International Medical Informatics Association Canada Health Infoway European Institute for Health Records National Resource Center for Health Information Technology CEN/TC 251 (European Union) Data governance External links http://www.iso.org/iso/standards_development/technical_committees/list_of_iso_technical_committees/iso_technical_committee.htm?commid=54960 Electronic health records 215
ISO/TC 215
[ "Technology" ]
231
[ "Electronic health records", "Information technology" ]
8,838,269
https://en.wikipedia.org/wiki/Thermo-hygrograph
A thermo-hygrograph or hygrothermograph is a chart recorder that measures and records both temperature and humidity (or dew point). Similar devices that record only one parameter are a thermograph for temperature and hygrograph for humidity. Thermographs where the variations are recorded using photography were described by several scientists as early as 1845, including Francis Ronalds who was Honorary Director of the Kew Observatory. An updated model of the initial machine was deployed across the national observational network set up by the new UK Met Office in 1867 and coordinated by Kew Observatory. These instruments then saw extended use around the world. An alternative thermograph configuration has a pen that records temperature on a revolving cylinder. The pen is at the end of a lever that is controlled by a bi-metal strip of temperature-sensitive metal which bends as the temperature changes. A human hair bundle can be used for humidity in such machines. References Meteorological instrumentation and equipment
Thermo-hygrograph
[ "Technology", "Engineering" ]
206
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
8,838,606
https://en.wikipedia.org/wiki/Homocystine
Homocystine is the organosulfur compound with the formula C8H16N2O4S2. It is the disulfide derived from oxidation of homocysteine. Its relationship with homocysteine is analogous to the relationship between cystine and cysteine. References Organic disulfides Alpha-Amino acids
Homocystine
[ "Chemistry" ]
62
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
8,838,698
https://en.wikipedia.org/wiki/Commodore%20Power/Play
Commodore Power/Play was one of a pair of computer magazines published by Commodore Business Machines in the United States in support of their 8-bit home computer lines of the 1980s. The other was called Commodore Interface, changed to just Commodore in 1981, Commodore Microcomputer in 1983, and finally to Commodore Microcomputers in 1984 and for the rest of its run. The two magazines were published on an alternating, bimonthly schedule. History and profile Power/Play was started in 1982 as a quarterly publication. The magazine was targeted at the home computer user, emphasizing video games, educational and hobbyist uses of the Commodore 64/128 and VIC-20 models. Commodore Microcomputers initially served Commodore's business customers using the PET and CBM lines but as the business market segments standardized on CP/M and later MS-DOS, the coverage of the two magazines essentially overlapped, until the November 1986 issue, when both magazines were switched from a bi-monthly to a monthly schedule and retitled Commodore Magazine. References External links Oct/Nov 1985 issue 1982 establishments in Pennsylvania 1986 disestablishments in Pennsylvania Bimonthly magazines published in the United States Commodore 8-bit computer magazines Defunct computer magazines published in the United States Home computer magazines Magazines established in 1982 Magazines disestablished in 1986 Magazines published in Philadelphia Quarterly magazines published in the United States Defunct video game magazines published in the United States
Commodore Power/Play
[ "Technology" ]
286
[ "Computing stubs", "Computer magazine stubs" ]
8,839,340
https://en.wikipedia.org/wiki/Graph%20toughness
In graph theory, toughness is a measure of the connectivity of a graph. A graph G is said to be t-tough for a given real number t if, for every integer k > 1, G cannot be split into k different connected components by the removal of fewer than tk vertices. For instance, a graph is 1-tough if the number of components formed by removing a set of vertices is always at most as large as the number of removed vertices. The toughness of a graph is the maximum t for which it is t-tough; this is a finite number for all finite graphs except the complete graphs, which by convention have infinite toughness. Graph toughness was first introduced by Václav Chvátal in 1973. Since then there has been extensive work by other mathematicians on toughness; a more recent survey lists 99 theorems and 162 papers on the subject. Examples Removing k vertices from a path graph can split the remaining graph into as many as k + 1 connected components. The maximum ratio of components to removed vertices is achieved by removing one vertex (from the interior of the path) and splitting it into two components. Therefore, paths are 1/2-tough. In contrast, removing k vertices from a cycle graph leaves at most k remaining connected components, and sometimes leaves exactly k connected components, so a cycle is 1-tough. Connection to vertex connectivity If a graph is t-tough, then one consequence (obtained by setting k = 2) is that any set of fewer than 2t nodes can be removed without splitting the graph in two. That is, every t-tough graph is also 2t-vertex-connected. Connection to Hamiltonicity Chvátal observed that every cycle, and therefore every Hamiltonian graph, is 1-tough; that is, being 1-tough is a necessary condition for a graph to be Hamiltonian. He conjectured that the connection between toughness and Hamiltonicity goes in both directions: that there exists a threshold t0 such that every t0-tough graph is Hamiltonian. Chvátal's original conjecture, that t0 = 2, would have proven Fleischner's theorem but was later disproved. The existence of a larger toughness threshold for Hamiltonicity remains open, and is sometimes called Chvátal's toughness conjecture. Computational complexity Testing whether a graph is 1-tough is co-NP-complete. That is, the decision problem whose answer is "yes" for a graph that is not 1-tough, and "no" for a graph that is 1-tough, is NP-complete. The same is true for any fixed positive rational number q: testing whether a graph is q-tough is co-NP-complete. See also Strength of a graph, an analogous concept for edge deletions Tutte–Berge formula, a related characterization of the size of a maximum matching in a graph Harris graphs, a family of graphs that are tough, Eulerian, and non-Hamiltonian References Graph connectivity Graph invariants NP-complete problems
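The definition above is straightforward to check by brute force on small graphs. The sketch below assumes the networkx library is available (an assumption, not something the article specifies): it enumerates every vertex set whose removal disconnects the graph and takes the minimum of |S| / c(G − S), which for a non-complete graph is exactly the toughness. The exponential running time is consistent with the co-NP-completeness noted above, and for a complete graph the function returns infinity, matching the convention that complete graphs have infinite toughness.

from itertools import combinations
import networkx as nx

def toughness(G):
    # Brute-force toughness: min |S| / c(G - S) over all vertex sets S
    # whose removal leaves more than one connected component.
    best = float("inf")
    nodes = list(G.nodes)
    for size in range(1, len(nodes)):
        for S in combinations(nodes, size):
            H = G.copy()
            H.remove_nodes_from(S)
            components = nx.number_connected_components(H)
            if components > 1:
                best = min(best, size / components)
    return best

print(toughness(nx.path_graph(5)))   # 0.5: paths are 1/2-tough
print(toughness(nx.cycle_graph(6)))  # 1.0: cycles are 1-tough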
Graph toughness
[ "Mathematics" ]
566
[ "Graph connectivity", "Graph theory", "Computational problems", "Graph invariants", "Mathematical relations", "Mathematical problems", "NP-complete problems" ]
8,839,481
https://en.wikipedia.org/wiki/Plant%20health
Plant health includes the protection of plants, as well as scientific and regulatory frameworks for controlling plant pests or pathogens. Plant health is concerned with: Ecosystem health with a special focus on plants Tree health The control of plant pests The control of plant pathology See also Plant disease forecasting, predicting the occurrence or change in severity of plant diseases Animal and Plant Health Inspection Service American Phytopathological Society Plant Protection and Quarantine Agreement on the Application of Sanitary and Phytosanitary Measures Pest risk analysis Global Plant Clinic Medicinal plants References Botany Ecology
Plant health
[ "Biology" ]
114
[ "Ecology", "Plants", "Botany" ]
8,839,580
https://en.wikipedia.org/wiki/Artificial%20Minds
Artificial Minds: An Exploration of the Mechanisms of Mind is a book written by Stan Franklin and published in 1995 by MIT Press. The book is a wide-ranging tour of the development of artificial intelligence as of the time it was written. As well as discussing the theoretical and philosophical backgrounds of many approaches, it goes into some detail in explaining the workings of many of what the author considers to be the most promising examples of the era. References Causey, Robert L. (1998) Review of Artificial Minds by Stan Franklin. ACM SIGART Bulletin 9(1): 35–39. da Fontoura Costa, Luciano. (1999) "Franklin's New Infant Theory of Mind: Review of Artificial Minds: An Exploration of the Mechanisms of Mind by Stan Franklin." Psyche 5(29): n. pag. Wolpert, Seth. (1997) Review of Artificial Minds by Stan Franklin. Computers in Physics 11(3): 258–259. 1995 non-fiction books Non-fiction books about Artificial intelligence
Artificial Minds
[ "Technology" ]
210
[ "Computing stubs", "Computer book stubs" ]
8,840,064
https://en.wikipedia.org/wiki/DREAM%20%28protocol%29
DREAM is an ad hoc location-based routing protocol. DREAM stands for Distance Routing Effect Algorithm for Mobility. References "A distance routing effect algorithm for mobility (DREAM)" in Network protocols Ad hoc routing protocols
DREAM (protocol)
[ "Technology" ]
43
[ "Computing stubs", "Computer network stubs" ]
8,840,363
https://en.wikipedia.org/wiki/Simsapa%20tree
The Simsapa tree (Pali: ) is mentioned in ancient Buddhist discourses traditionally believed to have been delivered 2,500 years ago. The tree has been identified as either Dalbergia sissoo, a rosewood tree common to India and Southeast Asia, or Amherstia nobilis, another South Asian tree, of the family Caesalpiniaceae. Buddhist scriptural references In Buddhism's Pali Canon, there is a discourse entitled, "The Simsapa Grove" (Samyutta Nikaya 56.31). This discourse is described as having been delivered by the Buddha to monks while dwelling beneath a simsapa grove in the city of Kosambi. In this discourse, the Buddha compares a few simsapa leaves in his hand with the number of simsapa leaves overhead in the grove to illustrate what he teaches (in particular, the Four Noble Truths) and what he does not teach (things unrelated to the holy life). Elsewhere in the Pali Canon, simsapa groves are mentioned in the "Payasi Sutta" (Digha Nikaya 23) and the "Hatthaka Discourse" (Anguttara Nikaya 3.34). See also Ashoka tree Notes Sources Bodhi, Bhikkhu (trans., ed.) (2000). The Connected Discourses of the Buddha: A Translation of the Samyutta Nikaya. Boston: Wisdom Publications. . Rhys Davids, T.W. & William Stede (eds.) (1921-5). The Pali Text Society’s Pali–English Dictionary. Chipstead: Pali Text Society. A general online search engine for the PED is available at http://dsal.uchicago.edu/dictionaries/pali/. Thanissaro Bhikkhu (trans.) (1997). Simsapa Sutta: The Simsapa Leaves (SN 56.31). Retrieved 16 Nov 2008 from "Access to Insight" at http://www.accesstoinsight.org/tipitaka/sn/sn56/sn56.031.than.html. Thanissaro Bhikkhu (trans.) (1999). Hatthaka Sutta: To Hatthaka (on Sleeping Well in the Cold Forest) (excerpt) (AN 3.34). Retrieved 16 Nov 2008 from "Access to Insight" at http://www.accesstoinsight.org/tipitaka/an/an03/an03.034.than.html. Walshe, Maurice O'C. (trans.) (1985). Samyutta Nikaya: An Anthology (Part III) (Wheel Nos. 318-321). Kandy: Buddhist Publication Society. Retrieved 16 Nov 2008 from "Access to Insight" (2007) at http://www.accesstoinsight.org/lib/authors/walshe/wheel318.html. Walshe, Maurice (1987/1995). The Long Discourses of the Buddha: A Translation of the Digha Nikaya. Boston: Wisdom Publications. . Trees in Buddhism Plant common names
Simsapa tree
[ "Biology" ]
648
[ "Plant common names", "Common names of organisms", "Plants" ]
8,840,367
https://en.wikipedia.org/wiki/NGC%20602
NGC 602 is a young, bright open cluster of stars located in the Small Magellanic Cloud (SMC), a satellite galaxy to the Milky Way. It was discovered on 1 August 1826 by Scottish astronomer James Dunlop. It is embedded in a nebula known as N90. Radiation and shock waves from the stars of NGC 602 have pushed away much of the lighter surrounding gas and dust that is N90, and this in turn has triggered new star formation in the ridges (or "elephant trunks") of the nebula. These even younger, pre-main sequence stars are still enshrouded in dust but are visible to the Spitzer Space Telescope at infrared wavelengths. The cluster is of particular interest because it is located in the wing of the SMC leading to the Magellanic Bridge. Hence, while its chemical properties should be similar to those of the rest of the galaxy, it is relatively isolated and so easier to study. NGC 602 contains three main condensations of stars. The central core is NGC 602a, with the compact NGC 602b 100 arc-seconds to the NNW. NGC 602c is a looser grouping 11 arc-minutes to the NE, which includes the WO star AB8. NGC 602 includes many young O and B stars and young stellar objects, with few evolved stars. Ionisation in the nebula is dominated by Sk 183, an extremely hot O3 main sequence star visible as the bright isolated star at the centre of the Hubble image. A population of candidate brown dwarfs was found in NGC 602 in 2024. This was the first detection of brown dwarfs outside the Milky Way. A number of other, more distant galaxies also appear in the background of the Hubble Space Telescope images of NGC 602, making for a "tantalizing" and "grand" view. See also List of NGC objects (1–1000) References External links NGC 602: Taken Under the "Wing" of the Small Magellanic Cloud Jan 8, 2007 NASA/ESA HST news and photo release on N90 (at the heart of which lies NGC 602) NASA/ESA video 'Zooming on NGC 602' (Hubble Space Telescope) NGC 602 @ SEDS NGC objects pages\ "Progressive star formation in the young SMC cluster NGC 602" Carlson, L. R., et al., 2007 ApJL 665, 109 "The Initial Mass Function of the Stellar Association NGC 602 in the Small Magellanic Cloud with Hubble Space Telescope ACS Observations" Schmalzl, M., et al. 2008 ApJ 681, 290 "NGC 602 Environment, Kinematics and Origins" Nigra, L., et al., 2008 PASP 120, 972 0602 Open clusters Small Magellanic Cloud Hydrus 18260801 Discoveries by James Dunlop
NGC 602
[ "Astronomy" ]
592
[ "Hydrus", "Constellations" ]
8,840,590
https://en.wikipedia.org/wiki/Postelsia
Postelsia palmaeformis, also known as the sea palm (not to be confused with the southern sea palm) or palm seaweed, is a species of kelp and classified within brown algae. It is the only known species in the genus Postelsia. The sea palm is found along the western coast of North America, on rocky shores with constant waves. It is one of the few algae that can survive and remain erect out of the water; in fact, it spends most of its life cycle exposed to the air. It is an annual, and edible, though harvesting of the alga is discouraged due to the species' sensitivity to overharvesting. History The sea palm was known by the natives of California by the name of kakgunu-chale before any Europeans entered the region. Postelsia was first scientifically described by Franz Josef Ruprecht (1814–1870) in 1852 from a specimen found near Bodega Bay in California. Ruprecht, an Austro-Hungarian who became curator of botany at the Academy of Sciences in St. Petersburg in 1839, studied seaweed specimens collected by botanist Ilya Vosnesensky, and published a paper describing one seagrass and five seaweeds, one of which was Postelsia. The sea palm has been used by several textbooks, such as the Campbell–Reece Biology textbook, as an example of multicellular protists, as well as an example of the class Phaeophyceae. Etymology The generic name, Postelsia honors Alexander Philipov Postels, an Estonian-born geologist and artist who worked with Ruprecht, while the specific name, palmaeformis, describes the alga's superficial similarity in appearance to true palms. Fossil record Fossils from Monte Bolca, a lagerstätte near Verona, were originally named Zoophycos caput-medusae and previously thought to be trace fossils, but were later found to be plants instead and given the name Algarum by French zoologist Henri Milne-Edwards in 1866. The type specimen collected by Italian paleobotanist Abramo Bartolommeo Massalongo before 1855 is at the Natural History Museum of Verona and was preserved in a lithographic limestone upper and lower slab. When Italian botanist Achille Forti (1878–1937) worked on the specimens in 1926, they were reinterpreted as close relatives of Postelsia, now known to be a brown algae, which had lived in the coastal waters of the Eocene sea. Forti renamed the species Postelsiopsis caput-medusae commemorating the fossils' extreme similarity to the extant Postelsia palmaeformis. The appearance of the plant fossil is a holdfast on the bottom, with a stem-like stipe between there and the fronds which are about to . In life, the fronds would have been held vertically in the water column whenever the plant was submerged during high tide, and would have flopped over the stipe when the plant was exposed during low tide in a habitus similar to that of the living sea palm. Other specimens from this deposit collected and described by Massalongo in 1855 were actually trace fossils, and they remain assigned to Zoophycos; only the specimens of Z. caput-medusae have been assigned to Postelsiopsis, as those are fossils of the original plant, and not trace fossils. Morphology Postelsia has two distinct morphologies: one for its diploid, monoicous sporophyte stage, which is the dominant portion of the life cycle, and one for its smaller, haploid, dioecious gametophyte stage. Like all seaweeds, the sporophyte stage of Postelsia consists of a thallus, which is made up of a stem-like stipe topped with possibly over 100 leaf-like blades, and rests on a root-like holdfast. 
The holdfast anchors the organism to the rocks it lives on. The sea palm has no vascular system; the stipe is only for support of the organism and holds the fronds up over other organisms so they can receive more light. The stipe is merely a firm, hollow tube, able to withstand the open air of low tide conditions as well as the crashing waves of high tide. The blades are grooved, with the sporangia held within these grooves. The gametophyte stage is microscopic, consisting of only a few cells. The gametophytes produce sperm and eggs to create new sporophytes. Like all phaeophytes, sea palms use the pigments chlorophyll a, chlorophyll c, fucoxanthin, and carotenes in photosynthesis. Their cell walls are composed of alginate. They use laminarin and mannitol for storage. Life cycle and growth Like most brown algae, Postelsia goes through alternation of generations, and is an annual species. The diploid sporophyte produces, through meiosis, haploid spores, which drip down through the grooves in the blades onto the substrate, which may be mussels, barnacles, or bare rock. These spores develop, through mitosis, into small, multicellular haploid gametophytes, male and female. The male and female gametophytes create sperm and eggs, respectively. The sperm of the male reaches the female egg and fertilizes, resulting in a diploid zygote, which develops into a new sporophyte. Postelsia are green in color as juveniles, and change to a golden brown as they age, reaching a height of . As a Postelsia alga grows, its stipe thickens in the same manner as a tree's trunk. The cells beneath the epidermis, called the meristoderm, divide rapidly to form rings of growth, again, like a tree. However, the greater flexibility of Postelsia stipe over that of a woody tree makes for some distinct differences. Postelsia must be thicker than a tree of equal height in order to support itself. However, the stipe is very much more suited to the coastal habitat, as it allows the seaweed to bend with the constant wave action. Such an environment would cause the inflexible, woody tree to break. The blades of the new sporophyte grow from one or two initial blades by splitting. A tear forms in the middle of the blade at its base, which then continues along the entire length of the blade until it is split in two. Habitat Sea palms are found on the rocky shores of western North America, from as far north as Vancouver Island, to the southern central coast of California. They live in the middle to upper intertidal zones in very wavy areas. High wave action may increase nutrient availability and moves the blades of the thallus, allowing more sunlight to reach the organism so that it can photosynthesize. In addition, the constant wave action removes competitors, such as the California mussel. Recent studies have shown that Postelsia grows in greater numbers when such competition exists. A control group with no competition produced fewer offspring than an experimental group with mussels; from this it is thought that the mussels provide protection for the developing gametophytes. Alternatively, it is thought that the mussels may prevent the growth of competing algae such as Corallina or Halosaccion, allowing Postelsia to grow freely after wave action removes the mussels. When Postelsia release their spores, they tend to fall within a few meters of the parent sporophyte for two reasons. The first is that though spores are flagellated and can swim, they are often released at low tide and are deposited directly to the substrate below. 
Secondly, Postelsia gametophytes need to be close to each other in order for fertilization to occur. As such, sea palms tend to live very close to each other in large aggregations. Some juvenile sporophytes will grow on competing organisms, like mussels or barnacles, and rip them from the rocks when the waves come, gripping them with holdfasts of incredible strength. Epiphytes Two other, smaller brown algae, of the family Ectocarpaceae, Ectocarpus commensalis and Pylaiella gardneri, as well as the two red algae Microcladia borealis and Porphyra gardneri, are epiphytic on Postelsia. Pylaiella gardneri is an obligate epiphyte to Postelsia. As with all epiphytes, these algae are not harmful to Postelsia, and merely use the larger alga as a substrate to grow upon. Edibility The blades (and less often, the stipes) of Postelsia are sometimes used in certain dishes, usually in California. Postelsia is a protected species, however, and harvesting it is illegal throughout much of its range, as clipping the blades too low, below the meristem, prevents reproduction. Postelsia can regenerate blades cut above the meristem, but removing the blades can limit a sporophyte's ability to produce spores and contribute to subsequent populations. Postelsia has also been in danger of overharvesting at some points. It is illegal to harvest Postelsia in British Columbia, Washington and Oregon. In California, Postelsia is a partially protected species: recreational harvesting is illegal, but regulated, licensed commercial harvesting is legal. Between 2000 and 2001, an estimated 2 to 3 tons of Postelsia were harvested in California. The blades are eaten raw or are dried, and dried blades sell for up to US$45 per pound. Commercial harvesters of Postelsia must purchase a $100 license, pay a royalty to the State of California ($24 per wet ton of algae harvested), and submit a monthly harvest log. An experiment done to try to prove or disprove the claims of Postelsia harvesters that their gathering methods are sustainable yielded results stating that recovery from collection depended greatly on the season of collection. See also Algae Brown algae References External links Postelsia palmaeformis Ruprecht at AlgaeBase Laminariaceae Flora of the Pacific Marine biota of North America Flora of the West Coast of the United States Flora of California Edible algae Edible seaweeds Laminariales genera Monotypic brown algae genera Flora without expected TNC conservation status
Postelsia
[ "Biology" ]
2,169
[ "Edible algae", "Algae" ]
8,840,679
https://en.wikipedia.org/wiki/Reegle
reegle (lower-case) was a search engine specifically covering the fields of renewable energy, efficient energy use, and climate change issues. It was developed in 2005 by REEEP and REN21, with funding from several European government agencies, and launched in July 2006. At one point, it had 220,000 visitors per month. It was conceived as a public resource for governments, project developers, banks and finance institutions, NGOs, and international organisations, as well as the general public. The central function of the site was a search engine, which offered a "mind map" based search refinement function. Users were able to click on a map of the world and get information on renewable energy and energy efficiency in that specific country, including relevant government ministries, private companies, country energy statistics, and a sampling of clean energy development projects in that area. The website offered an online glossary covering about 4,000 terms from the clean energy and climate sector, with definitions from Open Data sources. Translations of many terms into additional languages were also available. As of 2021, the portal is no longer active. See also British Department for Environment, Food and Rural Affairs (Defra) Dutch Ministry of Housing, Spatial Planning and the Environment (VROM) International Energy Agency (IEA) Open energy system databases – database projects which collect, clean, and republish energy-related datasets OpenEI – a US website publishing open energy data Renewable energy commercialization Renewable Energy and Energy Efficiency Partnership (REEEP) Renewable Energy Policy Network for the 21st Century (REN21) References Internet search engines Renewable energy organizations Renewable energy policy Sustainability organizations
Reegle
[ "Engineering" ]
334
[ "Renewable energy organizations", "Energy organizations" ]
8,841,318
https://en.wikipedia.org/wiki/Sylvinite
Sylvinite is a sedimentary rock made of a mechanical mixture of the minerals sylvite (KCl, or potassium chloride) and halite (NaCl, or sodium chloride). Sylvinite is the most important source for the production of potash in North America, Russia and the UK. Most Canadian operations mine sylvinite containing approximately 31% KCl and 66% NaCl, with the balance being insoluble clays, anhydrite and, in some locations, carnallite. Other deposits of sylvinite are located in Belarus, Brazil, France, Germany, Kazakhstan, Slovakia and Spain. References Sedimentary rocks Evaporite Potash
Sylvinite
[ "Chemistry" ]
141
[ "Potash", "Salts" ]
9,506,857
https://en.wikipedia.org/wiki/History%20of%20general-purpose%20CPUs
The history of general-purpose CPUs is a continuation of the earlier history of computing hardware. 1950s: Early designs In the early 1950s, each computer design was unique. There were no upward-compatible machines or computer architectures with multiple, differing implementations. Programs written for one machine would run on no other kind, even other kinds from the same company. This was not a major drawback then because no large body of software had been developed to run on computers, so starting programming from scratch was not seen as a large barrier. The design freedom of the time was very important because designers were very constrained by the cost of electronics, and only starting to explore how a computer could best be organized. Some of the basic features introduced during this period included index registers (on the Ferranti Mark 1), a return address saving instruction (UNIVAC I), immediate operands (IBM 704), and detecting invalid operations (IBM 650). By the end of the 1950s, commercial builders had developed factory-constructed, truck-deliverable computers. The most widely installed computer was the IBM 650, which used drum memory onto which programs were loaded using either paper punched tape or punched cards. Some very high-end machines also included core memory which provided higher speeds. Hard disks were also starting to grow popular. A computer is an automatic abacus. The type of number system affects the way it works. In the early 1950s, most computers were built for specific numerical processing tasks, and many machines used decimal numbers as their basic number system; that is, the mathematical functions of the machines worked in base-10 instead of base-2 as is common today. These were not merely binary-coded decimal (BCD). Most machines had ten vacuum tubes per digit in each processor register. Some early Soviet computer designers implemented systems based on ternary logic; that is, a bit could have three states: +1, 0, or -1, corresponding to positive, zero, or negative voltage. An early project for the U.S. Air Force, BINAC attempted to make a lightweight, simple computer by using binary arithmetic. It deeply impressed the industry. As late as 1970, major computer languages were unable to standardize their numeric behavior because decimal computers had groups of users too large to alienate. Even when designers used a binary system, they still had many odd ideas. Some used sign-magnitude arithmetic (-1 = 10001), or ones' complement (-1 = 11110), rather than modern two's complement arithmetic (-1 = 11111). Most computers used six-bit character sets because they adequately encoded Hollerith punched cards. It was a major revelation to designers of this period to realize that the data word should be a multiple of the character size. They began to design computers with 12-, 24- and 36-bit data words (e.g., see the TX-2). In this era, Grosch's law dominated computer design: computer cost increased as the square of its speed. 1960s: Computer revolution and CISC One major problem with early computers was that a program for one would work on no others. Computer companies found that their customers had little reason to remain loyal to a given brand, as the next computer they bought would be incompatible anyway. At that point, the only concerns were usually price and performance. In 1962, IBM tried a new approach to designing computers. The plan was to make a family of computers that could all run the same software, but with different performances, and at different prices. 
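The sign-magnitude, ones' complement, and two's complement encodings mentioned for the 1950s above differ only in how a negative value is folded into the bit pattern. As a rough, purely illustrative sketch (modern C, not code for any machine of that era), the following prints -1 under each convention in the five-bit width used in the examples above:

#include <stdio.h>

/* Print the low 'bits' bits of v, most significant bit first. */
static void print_bits(unsigned v, int bits)
{
    for (int i = bits - 1; i >= 0; i--)
        putchar(((v >> i) & 1u) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    const int bits = 5;                  /* width used in the examples above */
    unsigned mag  = 1;                   /* magnitude of -1 */
    unsigned mask = (1u << bits) - 1;

    unsigned sign_magnitude  = (1u << (bits - 1)) | mag;   /* 10001 */
    unsigned ones_complement = ~mag & mask;                 /* 11110 */
    unsigned twos_complement = (~mag + 1u) & mask;          /* 11111 */

    printf("sign-magnitude   -1 = "); print_bits(sign_magnitude, bits);
    printf("ones' complement -1 = "); print_bits(ones_complement, bits);
    printf("two's complement -1 = "); print_bits(twos_complement, bits);
    return 0;
}

Two's complement eventually prevailed largely because addition and subtraction then need no special handling of the sign bit, and because there is only a single representation of zero.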
As users' needs grew, they could move up to larger computers, and still keep all of their investment in programs, data and storage media. To do this, they designed one reference computer named System/360 (S/360). This was a virtual computer, a reference instruction set, and abilities that all machines in the family would support. To provide different classes of machines, each computer in the family would use more or less hardware emulation, and more or less microprogram emulation, to create a machine able to run the full S/360 instruction set. For instance, a low-end machine could include a very simple processor for low cost. However, this would require the use of a larger microcode emulator to provide the rest of the instruction set, which would slow it down. A high-end machine would use a much more complex processor that could directly process more of the S/360 design, thus running a much simpler and faster emulator. IBM chose consciously to make the reference instruction set quite complex, and very capable. Even though the computer was complex, its control store holding the microprogram would stay relatively small and could be made with very fast memory. Another important effect was that one instruction could describe quite a complex sequence of operations. Thus the computers would generally have to fetch fewer instructions from the main memory, which could be made slower, smaller and less costly for a given mix of speed and price. As the S/360 was to be a successor to both scientific machines like the 7090 and data processing machines like the 1401, it needed a design that could reasonably support all forms of processing. Hence the instruction set was designed to manipulate simple binary numbers, and text, scientific floating-point (similar to the numbers used in a calculator), and the binary-coded decimal arithmetic needed by accounting systems. Almost all following computers included these innovations in some form. This basic set of features is now called complex instruction set computing (CISC, pronounced "sisk"), a term not invented until many years later, when reduced instruction set computing (RISC) began to get market share. In many CISCs, an instruction could access either registers or memory, usually in several different ways. This made the CISCs easier to program, because a programmer could remember only thirty to a hundred instructions, and a set of three to ten addressing modes rather than thousands of distinct instructions. This was called an orthogonal instruction set. The PDP-11 and Motorola 68000 architecture are examples of nearly orthogonal instruction sets. There was also the BUNCH (Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell) that competed against IBM at this time; however, IBM dominated the era with S/360. The Burroughs Corporation (which later merged with Sperry/Univac to form Unisys) offered an alternative to S/360 with their Burroughs large systems B5000 series. In 1961, the B5000 had virtual memory, symmetric multiprocessing, a multiprogramming operating system (Master Control Program (MCP)), written in ALGOL 60, and the industry's first recursive-descent compilers as early as 1964. 1970s: Microprocessor revolution The first commercial microprocessor, the binary-coded decimal (BCD) based Intel 4004, was released by Intel in 1971. In March 1972, Intel introduced a microprocessor with an 8-bit architecture, the 8008, an integrated pMOS logic re-implementation of the transistor–transistor logic (TTL) based Datapoint 2200 CPU. 
4004 designers Federico Faggin and Masatoshi Shima went on to design the 8008's successor, the Intel 8080, a slightly more minicomputer-like microprocessor, largely based on customer feedback on the limited 8008. Much like the 8008, it was used for applications such as terminals, printers, cash registers and industrial robots. However, the more able 8080 also became the original target CPU for an early de facto standard personal computer operating system called CP/M and was used for such demanding control tasks as cruise missiles, and many other uses. Released in 1974, the 8080 became one of the first really widespread microprocessors. By the mid-1970s, the use of integrated circuits in computers was common. The decade was marked by market upheavals caused by the shrinking price of transistors. It became possible to put an entire CPU on one printed circuit board. The result was that minicomputers, usually with 16-bit words, and 4K to 64K of memory, became common. CISCs were believed to be the most powerful types of computers, because their microcode was small and could be stored in very high-speed memory. The CISC architecture also addressed the semantic gap as it was then perceived. This was a defined distance between the machine language, and the higher level programming languages used to program a machine. It was felt that compilers could do a better job with a richer instruction set. Custom CISCs were commonly constructed using bit slice computer logic such as the AMD 2900 chips, with custom microcode. A bit slice component is a piece of an arithmetic logic unit (ALU), register file or microsequencer. Most bit-slice integrated circuits were 4 bits wide. By the early 1970s, the 16-bit PDP-11 minicomputer was developed, arguably the most advanced small computer of its day. In the late 1970s, wider-word superminicomputers were introduced, such as the 32-bit VAX. IBM continued to make large, fast computers. However, the definition of large and fast now meant more than a megabyte of RAM, clock speeds near one megahertz, and tens of megabytes of disk drives. IBM's System 370 was a version of the 360 tweaked to run virtual computing environments. The virtual computer was developed to reduce the chances of an unrecoverable software failure. The Burroughs large systems (B5000, B6000, B7000) series reached its largest market share. It was a stack computer whose OS was programmed in a dialect of Algol. All these different developments competed for market share. The first single-chip 16-bit microprocessor was introduced in 1975. Panafacom, a conglomerate formed by Japanese companies Fujitsu, Fuji Electric, and Matsushita, introduced the MN1610, a commercial 16-bit microprocessor. According to Fujitsu, it was "the world's first 16-bit microcomputer on a single chip". The Intel 8080 was the basis for the 16-bit Intel 8086, which is a direct ancestor to today's ubiquitous x86 family (including Pentium and Intel Core). Every instruction of the 8080 has a direct equivalent in the large x86 instruction set, although the opcode values are different in the latter. Early 1980s–1990s: Lessons of RISC In the early 1980s, researchers at UC Berkeley and IBM both discovered that most computer language compilers and interpreters used only a small subset of the instructions of complex instruction set computing (CISC). Much of the power of the CPU was being ignored in real-world use. They realized that by making the computer simpler and less orthogonal, they could make it faster and less costly at the same time. 
At the same time, CPU calculation became faster in relation to the time for needed memory accesses. Designers also experimented with using large sets of internal registers. The goal was to cache intermediate results in the registers under the control of the compiler. This also reduced the number of addressing modes and orthogonality. The computer designs based on this theory were called reduced instruction set computing (RISC). RISCs usually had larger numbers of registers, accessed by simpler instructions, with a few instructions specifically to load and store data to memory. The result was a very simple core CPU running at very high speed, supporting the sorts of operations the compilers were using anyway. A common variant on the RISC design employs the Harvard architecture, versus Von Neumann architecture or stored program architecture common to most other designs. In a Harvard Architecture machine, the program and data occupy separate memory devices and can be accessed simultaneously. In Von Neumann machines, the data and programs are mixed in one memory device, requiring sequential accessing which produces the so-called Von Neumann bottleneck. One downside to the RISC design was that the programs that run on them tend to be larger. This is because compilers must generate longer sequences of the simpler instructions to perform the same results. Since these instructions must be loaded from memory anyway, the larger code offsets some of the RISC design's fast memory handling. In the early 1990s, engineers at Japan's Hitachi found ways to compress the reduced instruction sets so they fit in even smaller memory systems than CISCs. Such compression schemes were used for the instruction set of their SuperH series of microprocessors, introduced in 1992. The SuperH instruction set was later adapted for ARM architecture's Thumb instruction set. In applications that do not need to run older binary software, compressed RISCs are growing to dominate sales. Another approach to RISCs was the minimal instruction set computer (MISC), niladic, or zero-operand instruction set. This approach realized that most space in an instruction was used to identify the operands of the instruction. These machines placed the operands on a push-down (last-in, first out) stack. The instruction set was supplemented with a few instructions to fetch and store memory. Most used simple caching to provide extremely fast RISC machines, with very compact code. Another benefit was that the interrupt latencies were very small, smaller than most CISC machines (a rare trait in RISC machines). The Burroughs large systems architecture used this approach. The B5000 was designed in 1961, long before the term RISC was invented. The architecture puts six 8-bit instructions in a 48-bit word, and was a precursor to very long instruction word (VLIW) design (see below: 1990 to today). The Burroughs architecture was one of the inspirations for Charles H. Moore's programming language Forth, which in turn inspired his later MISC chip designs. For example, his f20 cores had 31 5-bit instructions, which fit four to a 20-bit word. RISC chips now dominate the market for 32-bit embedded systems. Smaller RISC chips are even growing common in the cost-sensitive 8-bit embedded-system market. The main market for RISC CPUs has been systems that need low power or small size. 
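To make the zero-operand idea described above concrete, here is a minimal, purely illustrative stack-machine interpreter in C (a toy, not a model of any particular MISC or Burroughs design): only PUSH carries an operand, while the arithmetic instructions implicitly pop their inputs from the stack and push the result.

#include <stdio.h>

enum op { PUSH, ADD, MUL, PRINT, HALT };

struct insn { enum op op; int arg; };    /* only PUSH uses 'arg' */

int main(void)
{
    /* Computes (2 + 3) * 4 and prints 20. */
    struct insn program[] = {
        { PUSH, 2 }, { PUSH, 3 }, { ADD, 0 },
        { PUSH, 4 }, { MUL, 0 }, { PRINT, 0 }, { HALT, 0 },
    };

    int stack[16];
    int sp = 0;                          /* index of the next free slot */

    for (int pc = 0; ; pc++) {
        struct insn i = program[pc];
        switch (i.op) {
        case PUSH:  stack[sp++] = i.arg;               break;
        case ADD:   sp--; stack[sp - 1] += stack[sp];  break;
        case MUL:   sp--; stack[sp - 1] *= stack[sp];  break;
        case PRINT: printf("%d\n", stack[sp - 1]);     break;
        case HALT:  return 0;
        }
    }
}

Because ADD and MUL name no operands at all, each can be encoded in very few bits, which is where the compact code of stack-based instruction sets comes from.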
Even some CISC processors (based on architectures that were created before RISC grew dominant), such as newer x86 processors, translate instructions internally into a RISC-like instruction set. These numbers may surprise many, because the market is perceived as desktop computers. x86 designs dominate desktop and notebook computer sales, but such computers are only a tiny fraction of the computers now sold. Most people in industrialised countries own more computers in embedded systems in their car and house, than on their desks. Mid-to-late 1980s: Exploiting instruction level parallelism In the mid-to-late 1980s, designers began using a technique termed instruction pipelining, in which the processor works on multiple instructions in different stages of completion. For example, the processor can retrieve the operands for the next instruction while calculating the result of the current one. Modern CPUs may use over a dozen such stages. (Pipelining was originally developed in the late 1950s by International Business Machines (IBM) on their 7030 (Stretch) mainframe computer.) Minimal instruction set computers (MISC) can execute instructions in one cycle with no need for pipelining. A similar idea, introduced only a few years later, was to execute multiple instructions in parallel on separate arithmetic logic units (ALUs). Instead of operating on only one instruction at a time, the CPU will look for several similar instructions that do not depend on each other, and execute them in parallel. This approach is called superscalar processor design. Such methods are limited by the degree of instruction level parallelism (ILP), the number of non-dependent instructions in the program code. Some programs can run very well on superscalar processors due to their inherent high ILP, notably graphics. However, more general problems have far less ILP, thus lowering the possible speedups from these methods. Branching is one major culprit. For example, a program may add two numbers and branch to a different code segment if the number is bigger than a third number. In this case, even if the branch operation is sent to the second ALU for processing, it still must wait for the results from the addition. It thus runs no faster than if there was only one ALU. The most common solution for this type of problem is to use a type of branch prediction. To further the efficiency of multiple functional units which are available in superscalar designs, operand register dependencies were found to be another limiting factor. To minimize these dependencies, out-of-order execution of instructions was introduced. In such a scheme, the instruction results which complete out-of-order must be re-ordered in program order by the processor for the program to be restartable after an exception. Out-of-order execution was the main advance of the computer industry during the 1990s. A similar concept is speculative execution, where instructions from one direction of a branch (the predicted direction) are executed before the branch direction is known. When the branch direction is known, the predicted direction and the actual direction are compared. If the predicted direction was correct, the speculatively executed instructions and their results are kept; if it was incorrect, these instructions and their results are erased. Speculative execution, coupled with an accurate branch predictor, gives a large performance gain. 
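The add-then-branch dependence described above is easy to see in source form. A minimal C sketch (illustrative only; a real compiler and CPU may reorder, predict, or speculate around it):

#include <stdio.h>

/* The comparison cannot be resolved until the addition has produced 'sum',
   so a second ALU handling the branch still has to wait for the first. */
int classify(int a, int b, int threshold)
{
    int sum = a + b;            /* first instruction: produces sum      */
    if (sum > threshold)        /* branch: depends on the result above  */
        return 1;               /* taken path                           */
    return 0;                   /* fall-through path                    */
}

int main(void)
{
    printf("%d\n", classify(2, 3, 4));   /* prints 1 */
    return 0;
}

Branch prediction and speculative execution, as described above, let the processor guess the comparison's outcome and keep issuing instructions from the predicted path instead of idling until sum is available.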
These advances, which were originally developed from research for RISC-style designs, allow modern CISC processors to execute twelve or more instructions per clock cycle, when traditional CISC designs could take twelve or more cycles to execute one instruction. The resulting instruction scheduling logic of these processors is large, complex and difficult to verify. Further, higher complexity needs more transistors, raising power consumption and heat. In these, RISC is superior because the instructions are simpler, have less interdependence, and make superscalar implementations easier. However, as Intel has demonstrated, the concepts can be applied to a complex instruction set computing (CISC) design, given enough time and money. 1990 to today: Looking forward VLIW and EPIC The instruction scheduling logic that makes a superscalar processor is Boolean logic. In the early 1990s, a significant innovation was to realize that the coordination of a multi-ALU computer could be moved into the compiler, the software that translates a programmer's instructions into machine-level instructions. This type of computer is called a very long instruction word (VLIW) computer. Scheduling instructions statically in the compiler (versus scheduling dynamically in the processor) can reduce CPU complexity. This can improve performance, and reduce heat and cost. Unfortunately, the compiler lacks accurate knowledge of runtime scheduling issues. Merely changing the CPU core frequency multiplier will have an effect on scheduling. Operation of the program, as determined by input data, will have major effects on scheduling. To overcome these severe problems, a VLIW system may be enhanced by adding the normal dynamic scheduling, losing some of the VLIW advantages. Static scheduling in the compiler also assumes that dynamically generated code will be uncommon. Before the creation of Java and the Java virtual machine, this was true. It was reasonable to assume that slow compiles would only affect software developers. Now, with just-in-time compilation (JIT) virtual machines being used for many languages, slow code generation affects users also. There were several unsuccessful attempts to commercialize VLIW. The basic problem is that a VLIW computer does not scale to different price and performance points, as a dynamically scheduled computer can. Another issue is that compiler design for VLIW computers is very difficult, and compilers, as of 2005, often emit suboptimal code for these platforms. Also, VLIW computers optimise for throughput, not low latency, so they were unattractive to engineers designing controllers and other computers embedded in machinery. The embedded systems markets had often pioneered other computer improvements by providing a large market unconcerned about compatibility with older software. In January 2000, Transmeta Corporation took the novel step of placing a compiler in the central processing unit, and making the compiler translate from a reference byte code (in their case, x86 instructions) to an internal VLIW instruction set. This method combines the hardware simplicity, low power and speed of VLIW RISC with the compact main memory system and software reverse-compatibility provided by popular CISC. Intel's Itanium chip is based on what they call an explicitly parallel instruction computing (EPIC) design. This design supposedly provides the VLIW advantage of increased instruction throughput. 
However, it avoids some of the issues of scaling and complexity, by explicitly providing in each bundle of instructions information concerning their dependencies. This information is calculated by the compiler, as it would be in a VLIW design. The early versions are also backward-compatible with newer x86 software by means of an on-chip emulator mode. Integer performance was disappointing and despite improvements, sales in volume markets continue to be low. Multi-threading Current designs work best when the computer is running only one program. However, nearly all modern operating systems allow running multiple programs together. For the CPU to change over and do work on another program needs costly context switching. In contrast, multi-threaded CPUs can handle instructions from multiple programs at once. To do this, such CPUs include several sets of registers. When a context switch occurs, the contents of the working registers are simply copied into one of a set of registers for this purpose. Such designs often include thousands of registers instead of hundreds as in a typical design. On the downside, registers tend to be somewhat costly in chip space needed to implement them. This chip space might be used otherwise for some other purpose. Intel calls this technology "hyperthreading" and offers two threads per core in its current Core i3, Core i5, Core i7 and Core i9 Desktop lineup (as well as in its Core i3, Core i5 and Core i7 Mobile lineup), as well as offering up to four threads per core in high-end Xeon Phi processors. Multi-core Multi-core CPUs are typically multiple CPU cores on the same die, connected to each other via a shared L2 or L3 cache, an on-die bus, or an on-die crossbar switch. All the CPU cores on the die share interconnect components with which to interface to other processors and the rest of the system. These components may include a front-side bus interface, a memory controller to interface with dynamic random access memory (DRAM), a cache coherent link to other processors, and a non-coherent link to the southbridge and I/O devices. The terms multi-core and microprocessor unit (MPU) have come into general use for one die having multiple CPU cores. Intelligent RAM One way to work around the Von Neumann bottleneck is to mix a processor and DRAM all on one chip. The Berkeley IRAM Project eDRAM Computational RAM Memristor Reconfigurable logic Another track of development is to combine reconfigurable logic with a general-purpose CPU. In this scheme, a special computer language compiles fast-running subroutines into a bit-mask to configure the logic. Slower, or less-critical parts of the program can be run by sharing their time on the CPU. This process allows creating devices such as software radios, by using digital signal processing to perform functions usually performed by analog electronics. Open source processors As the lines between hardware and software increasingly blur due to progress in design methodology and availability of chips such as field-programmable gate arrays (FPGA) and cheaper production processes, even open source hardware has begun to appear. Loosely knit communities like OpenCores and RISC-V have recently announced fully open CPU architectures such as the OpenRISC which can be readily implemented on FPGAs or in custom produced chips, by anyone, with no license fees, and even established processor makers like Sun Microsystems have released processor designs (e.g., OpenSPARC) under open-source licenses. 
Asynchronous CPUs Yet another option is a clockless or asynchronous CPU. Unlike conventional processors, clockless processors have no central clock to coordinate the progress of data through the pipeline. Instead, stages of the CPU are coordinated using logic devices called pipe line controls or FIFO sequencers. Basically, the pipeline controller clocks the next stage of logic when the existing stage is complete. Thus, a central clock is unneeded. Relative to clocked logic, it may be easier to implement high performance devices in asynchronous logic: In a clocked CPU, no component can run faster than the clock rate. In a clockless CPU, components can run at different speeds. In a clocked CPU, the clock can go no faster than the worst-case performance of the slowest stage. In a clockless CPU, when a stage finishes faster than normal, the next stage can immediately take the results rather than waiting for the next clock tick. A stage might finish faster than normal because of the type of data inputs (e.g., multiplication can be very fast if it occurs by 0 or 1), or because it is running at a higher voltage or lower temperature than normal. Asynchronous logic proponents believe these abilities would have these benefits: lower power dissipation for a given performance highest possible execution speeds The biggest disadvantage of the clockless CPU is that most CPU design tools assume a clocked CPU (a synchronous circuit), so making a clockless CPU (designing an asynchronous circuit) involves modifying the design tools to handle clockless logic and doing extra testing to ensure the design avoids metastability problems. Even so, several asynchronous CPUs have been built, including the ORDVAC and the identical ILLIAC I (1951) the ILLIAC II (1962), then the fastest computer on Earth The Caltech Asynchronous Microprocessor, the world-first asynchronous microprocessor (1988) the ARM-implementing AMULET (1993 and 2000) the asynchronous implementation of MIPS Technologies R3000, named MiniMIPS (1998) the SEAforth multi-core processor from Charles H. Moore Optical communication In theory, an optical computer's components could directly connect through a holographic or phased open-air switching system. This would provide a large increase in effective speed and design flexibility, and a large reduction in cost. Since a computer's connectors are also its most likely failure points, a busless system may be more reliable. Further, as of 2010, modern processors use 64- or 128-bit logic. Optical wavelength superposition could allow data lanes and logic many orders of magnitude higher than electronics, with no added space or copper wires. Optical processors Another long-term option is to use light instead of electricity for digital logic. In theory, this could run about 30% faster and use less power, and allow a direct interface with quantum computing devices. The main problems with this approach are that, for the foreseeable future, electronic computing elements are faster, smaller, cheaper, and more reliable. Such elements are already smaller than some wavelengths of light. Thus, even waveguide-based optical logic may be uneconomic relative to electronic logic. As of 2016, most development effort is for electronic circuitry. Ionic processors Early experimental work has been done on using ion-based chemical reactions instead of electronic or photonic actions to implement elements of a logic processor. 
Belt machine architecture Relative to conventional register machine or stack machine architecture, yet similar to Intel's Itanium architecture, a temporal register addressing scheme has been proposed by Ivan Godard and company that is intended to greatly reduce the complexity of CPU hardware (specifically the number of internal registers and the resulting huge multiplexer trees). While somewhat harder to read and debug than general-purpose register names, it aids understanding to view the belt as a moving conveyor belt where the oldest values drop off the belt and vanish. It is implemented in the Mill architecture. Timeline of events 1964. IBM release the 32-bit IBM System/360 with memory protection. 1969. Intel 4004's initial design led by Intel's Ted Hoff and Busicom's Masatoshi Shima. 1970. Intel 4004's design completed by Intel's Federico Faggin and Busicom's Masatoshi Shima. 1971. IBM release the IBM System/370 successor to System/360. 1971. Intel release the 4-bit Intel 4004, the first commercial microprocessor. 1971. NEC release the μPD707 and μPD708, a two-chip 4-bit CPU. 1972. IBM announce "System/370 Advanced Function", adding support for virtual memory with demand paging 1972. NEC release single-chip 4-bit microprocessor, μPD700. 1973. NEC release 4-bit μCOM-4 (μPD751), combining the μPD707 and μPD708 into a single microprocessor. 1974. Intel release the Intel 8080, an 8-bit microprocessor, designed by Federico Faggin and Masatoshi Shima. 1975. MOS Technology release the 8-bit MOS Technology 6502, the first integrated processor to have an affordable price of $25 when the 6800 rival was $175. 1976. Zilog introduce the 8-bit Zilog Z80, designed by Federico Faggin and Masatoshi Shima. 1977. Digital Equipment Corporation introduced its first 32-bit VAX superminicomputer, the VAX-11/780. 1978. Intel introduces the Intel 8086 and Intel 8088, the first x86 chips. 1978. Fujitsu releases the MB8843 microprocessor. 1979. Zilog release the Zilog Z8000, a 16-bit microprocessor, designed by Federico Faggin and Masatoshi Shima. 1979. Motorola introduce the Motorola 68000, a 16/32-bit microprocessor. 1981. Stanford MIPS introduced, one of the first reduced instruction set computing (RISC) designs. 1982. Intel introduces the Intel 80286, which was the first Intel processor that could run all the software written for its predecessors, the 8086 and 8088. 1984. Motorola introduces the Motorola 68020, which enabled full 32-bit addressing, and the 68851 memory management unit, which supported demand paging. 1985. Intel introduces the Intel 80386, which adds a 32-bit instruction set to the x86 microarchitecture, and supports demand paging. 1985. ARM architecture introduced. 1989. Intel introduces the Intel 80486. 1992. Hitachi introduces SuperH architecture, which provides the basis for ARM's Thumb instruction set. 1993. Intel launches the original Pentium microprocessor, the first processor with a x86 superscalar microarchitecture. 1994. IBM introduce the first IBM mainframe models to use single-chip microprocessors as CPUs, the IBM System/390 9672 series. 1994. ARM's Thumb instruction set introduced, based on Hitachi's SuperH instruction set. 1995. Intel introduces the Pentium Pro which becomes the foundation for the Pentium II, Pentium III, Pentium M and Intel Core architectures. 2000. IBM introduce z/Architecture, the 64-bit version of their mainframe architecture. 2000. AMD announced x86-64 64-bit extension to the x86 microarchitecture. 2000. AMD hits 1 GHz with its Athlon microprocessor. 2000. 
Analog Devices introduces the Blackfin architecture. 2002. Intel released a Pentium 4 with hyper-threading, the first modern desktop processor to implement simultaneous multithreading (SMT). 2003. AMD released the Athlon 64, the first 64-bit consumer CPU. 2003. Intel introduced the Pentium M, a low power mobile derivative of the Pentium Pro architecture. 2005. AMD announced the Athlon 64 X2, their first x86 dual-core processor. 2006. Intel introduces the Core line of CPUs based on a modified Pentium M design. 2008. Over 10 billion Arm based CPUs shipped. 2010. Intel introduced the Core i3, i5, and i7, with 2, 4 and 4 cores respectively. 2011. ARM release ARMv8-A, supporting the 64-bit AArch64 architecture. 2011. AMD announced the world's first 8-core CPU for desktop PCs. 2017. AMD announced Ryzen processors based on the Zen architecture, with up to 16 cores. 2017. Intel 8th generation Core i3, Core i5, Core i7 and Core i9, increased to approximately 4, 6, 8 and 8 cores respectively. 2017. Over 100 billion Arm based CPUs shipped. 2020. Apple launched their own M1 ARMv8-based system-on-a-chip (SoC), significant in that they switched their devices away from Intel CPUs. 2021. ARM release ARMv9, the first major upgrade in a decade, since ARMv8 in 2011. 2021. Over 200 billion Arm based CPUs shipped. 2022. AMD 3rd Generation EPYC 64C processors power Frontier, the world's most powerful supercomputer. 2024. Apple launched the M4, their first SoC adopting the ARMv9 CPU architecture. See also Microprocessor chronology General-purpose computing on graphics processing units (GPGPU) References External links Great moments in microprocessor history by W. Warner, 2004 Great Microprocessors of the Past and Present (V 13.4.0) by: John Bayko, 2003 Bit by Bit: An Illustrated History of Computers, Stan Augarten, 1984. OCR with permission of the author Gallery of CPU and related PCBs (in Italian) Central processing unit General-purpose CPUs
History of general-purpose CPUs
[ "Technology" ]
7,090
[ "History of computing hardware", "Computers", "History of computing" ]
9,506,868
https://en.wikipedia.org/wiki/FTP%20Explorer
FTP Explorer is an FTP client application for the Microsoft Windows operating system which was originally developed in 1996 by Alan Chavis, founder of FTPx Corp. One of the first "explorer style" FTP clients, FTP Explorer was designed to look and feel very similar to the explorer file system view of the Windows user interface, with a tree view containing folders on the left and a list view containing files and folders on the right. FTP Explorer pioneered more advanced FTP features such as background downloading and multiple active connections and became popular. FTP Explorer has been mentioned in numerous publications and included on the CD-ROM inserts of many books. Licensing FTP Explorer is free for educational use. After a 15-day trial period, users are required to purchase a license for US$35.99. See also Comparison of FTP client software External links FTP Explorer Official Site FTP clients
FTP Explorer
[ "Technology" ]
187
[ "Computing stubs", "Computer network stubs" ]
9,507,357
https://en.wikipedia.org/wiki/WokFi
WokFi (a portmanteau derived from blending the words Wok + Wi-Fi) is a slang term for a style of homemade Wi-Fi antenna consisting of a crude parabolic antenna made with a low-cost Asian kitchen wok, spider skimmer or similar household metallic dish. The dish forms a directional antenna which is pointed at the wireless access point antenna, allowing reception of the wireless signal at greater distances than standard omnidirectional Wi-Fi antennas. Description WokFi antennas are fabricated out of commonly available concave metal kitchen dishes or dish covers (which need not be perfectly parabolic); Asian woks are favored because they have shapes closest to parabolic. A commercial Wi-Fi antenna, usually a USB Wi-Fi dongle, is suspended in front of the dish, attached by cable to the computer. The WokFi antenna is considered simpler and cheaper than other home-built antenna projects (such as the popular cantenna), but is a very effective method to boost the Wi-Fi connection quality, audit access point coverage, and even quickly establish WLAN viability – perhaps if a more professional setup is eventually intended. Advantages A significant advantage is that with a USB modem the RF signal is converted to a conventional digital signal at the antenna. Therefore, by using standard USB extension cables, the antenna can be located at a distance from the computer of five meters or more, with no concerns over microwave signal losses that would occur in an RF coaxial cable feedline of that length used to attach a conventional antenna to the RF input of a computer modem. By chaining active USB repeaters, it is possible to locate the antenna at much greater distances from the computer, which is especially useful when line-of-sight (LOS) obstacles (such as vegetation and walls) require the antenna to be located on a roof, for example. If using mesh reflectors, usually with a grid under 5 mm, the antenna will be lighter and present a smaller wind-load than larger dishes. Performance WokFi gains are typically 10+ dB, with range boosts that can be 16-32 times that of a bare USB adapter's antenna. Ranges (LOS) are typically 3–5 km (2 to 3 miles), although an aligned pair of similar point-to-point transceiver setups may approach 10 km (6 miles) over a clear path. In addition, certain improved WokFi antennas, and antennas made using 60 to 90 cm (2-3 ft) diameter round or oval satellite TV dishes, allow even far greater range, up to 20 km (12 miles). Interference from nearby 2.4 GHz signals (perhaps from cordless phones, AV links, leaky microwave ovens, other APs or Bluetooth) can be nulled out—a useful feature in this increasingly crowded part of the RF spectrum. The performance of abundant, low-powered Wi-Fi "dongles", typically selling for approximately US$15–20, but of only 30–40 mW transmitter power and modest receiver sensitivity, can easily be boosted with little more than cheap cookware or pot lids. The "sweet spot" on such ad hoc reflectors can readily be found by taping a small (~2.5 cm, or 1 in) mirror on the surface of the dish, to see where the sun's rays focus. See also Cantenna References External links USB adaptors & DIY antenna = "Poor Man's WiFi" ? — Kiwi Stan Swan's site, where the whole WokFi thing sparked Radio frequency antenna types Antennas (radio) Wi-Fi
WokFi
[ "Technology" ]
749
[ "Wireless networking", "Wi-Fi" ]
9,508,138
https://en.wikipedia.org/wiki/Ideal%20sheaf
In algebraic geometry and other areas of mathematics, an ideal sheaf (or sheaf of ideals) is the global analogue of an ideal in a ring. The ideal sheaves on a geometric object are closely connected to its subspaces. Definition Let X be a topological space and A a sheaf of rings on X. (In other words, (X, A) is a ringed space.) An ideal sheaf J in A is a subobject of A in the category of sheaves of A-modules, i.e., a subsheaf of A viewed as a sheaf of abelian groups such that Γ(U, A) · Γ(U, J) ⊆ Γ(U, J) for all open subsets U of X. In other words, J is a sheaf of A-submodules of A. General properties If f: A → B is a homomorphism between two sheaves of rings on the same space X, the kernel of f is an ideal sheaf in A. Conversely, for any ideal sheaf J in a sheaf of rings A, there is a natural structure of a sheaf of rings on the quotient sheaf A/J. Note that the canonical map Γ(U, A)/Γ(U, J) → Γ(U, A/J) for open subsets U is injective, but not surjective in general. (See sheaf cohomology.) Algebraic geometry In the context of schemes, the importance of ideal sheaves lies mainly in the correspondence between closed subschemes and quasi-coherent ideal sheaves. Consider a scheme X and a quasi-coherent ideal sheaf J in OX. Then, the support Z of OX/J is a closed subspace of X, and (Z, OX/J) is a scheme (both assertions can be checked locally). It is called the closed subscheme of X defined by J. Conversely, let i: Z → X be a closed immersion, i.e., a morphism which is a homeomorphism onto a closed subspace such that the associated map i#: OX → i⋆OZ is surjective on the stalks. Then, the kernel J of i# is a quasi-coherent ideal sheaf, and i induces an isomorphism from Z onto the closed subscheme defined by J. A particular case of this correspondence is the unique reduced subscheme Xred of X having the same underlying space, which is defined by the nilradical of OX (defined stalk-wise, or on open affine charts). For a morphism f: X → Y and a closed subscheme Z′ ⊆ Y defined by an ideal sheaf J, the preimage Z′ ×Y X is defined by the ideal sheaf f⋆(J)OX = im(f⋆J → OX). The pull-back of an ideal sheaf J to the subscheme Z defined by J contains important information; it is called the conormal bundle of Z. For example, the sheaf of Kähler differentials may be defined as the pull-back of the ideal sheaf defining the diagonal X → X × X to X. (Assume for simplicity that X is separated so that the diagonal is a closed immersion.) Analytic geometry In the theory of complex-analytic spaces, the Oka-Cartan theorem states that a closed subset A of a complex space is analytic if and only if the ideal sheaf of functions vanishing on A is coherent. This ideal sheaf also gives A the structure of a reduced closed complex subspace. References Éléments de géométrie algébrique H. Grauert, R. Remmert: Coherent Analytic Sheaves. Springer-Verlag, Berlin 1984 Scheme theory Sheaf theory
Ideal sheaf
[ "Mathematics" ]
785
[ "Topology", "Sheaf theory", "Mathematical structures", "Category theory" ]
9,508,189
https://en.wikipedia.org/wiki/Steven%20Feld
Steven Feld (born August 20, 1949) is an American ethnomusicologist, anthropologist, and linguist, who worked for many years with the Kaluli (Bosavi) people of Papua New Guinea. He earned a MacArthur Fellowship in 1991. Early life Feld was born in Philadelphia, Pennsylvania, on August 20, 1949. He graduated with a BA cum laude at Hofstra University in anthropology in 1971. He first went to the Bosavi territory in 1976, accompanied by anthropologist Edward L. Schieffelin, whose recordings of the Bosavi inspired him to pursue this work. His work there fulfilled his dissertation (later published as Sound and Sentiment) for his PhD from Indiana University in 1979 (in anthropology/linguistics/ethnomusicology). Career Feld later returned several times in the 1980s and 1990s to Papua New Guinea to research Bosavi song, rainforest ecology, and cultural poetics. He has also made briefer research visits to various locations in Europe. He has taught at Columbia University, New York University, University of California at Santa Cruz, University of Texas at Austin, and University of Pennsylvania. He is currently (since 2003) a professor of anthropology and music at the University of New Mexico. Since 2001, he has also held a visiting appointment at the Grieg Academy, University of Bergen, Norway, as a professor of world music. In 2002, he founded the VoxLox label, "documentary sound art advocates for human rights and acoustic ecology." His most recent book Jazz Cosmopolitanism in Accra (2012) is based on five years of research and collaboration in Accra, Ghana. He is also a musician, and he has been active in the New Mexican music scene since the 1970s. Some of Feld's recordings are sampled on the track, "Kaluli Groove" on the 2007 album Global Drum Project by Mickey Hart, Zakir Hussain, Sikiru Adepoju, and Giovanni Hidalgo. Academic work Schizophonic mimesis Schizophonic mimesis is a term coined by Steven Feld that describes the separation of a sound from its source, and the recontextualizing of that sound into a separate sonic context. The term in and of itself describes how sound recordings, split from their source through the chain of audio production, circulation, and consumption, stimulate and license renegotiations of identity in an ethnomusicological perspective. The term is composed of two parts: schizophonia and mimesis. Firstly, schizophonia, a term coined by Canadian composer R. Murray Schafer, refers to the split between an original sound and the reproduction/transmission of this sound, be it in a recording, a song, etc. For example, any sound recording, radio, and telephone is a machine of schizophonia, in that they all separate the sound from its original source; in the case of radio, the source of a New York radio show is from New York, but a listener in Los Angeles hears the noises from Los Angeles. Secondly, mimesis describes an imitation or representation of that separated sound into another context. For example, mimesis has occurred if one places a recording of a baby's gurgle into a song. Notable examples In 1969, ethnomusicologist Hugo Zemp recorded a Solomon Island woman named Afunakwa singing a popular Solomon Islands lullaby called "Rorogwela". Then, in 1992, on Deep Forest's album Boheme, a song called "Sweet Lullaby" samples Zemp's field recording of Rorogwela. Furthermore, in 1996, Norwegian saxophonist Jan Garbarek sampled the melody of "Rorogwela" in his song "Pygmy Lullaby" on his album Visual World. 
The field recording is an example of schizophonia, and the placing of this field recording into "Sweet Lullaby" is an instance of schizophonic mimesis. The sampling of the melody in "Pygmy Lullaby" demonstrates further schizophonic mimesis. In 1966, ethnomusicologist Simha Arom recorded a particular style of music from the Ba-Benzélé Pygmies called Hindewhu, which consists of making music with a single-pitch flute and the human voice. Soon after, Herbie Hancock adapted the Hindewhu style by using a beer bottle instead of a flute in his 1973 remake of "Watermelon Man". Then, Madonna's song "Sanctuary" from the 1994 album Bedtime Stories sampled Hancock's adaptation of Hindewhu. Again, the field recording is an example of schizophonia, and the use of the Hindewhu style in Hancock's adaptation and "Sanctuary" are examples of schizophonic mimesis. Works Jazz Cosmopolitanism in Accra: Five Musical Years in Ghana. Duke University Press, 2012 Sound and Sentiment: Birds, Weeping, Poetics, and Song in Kaluli expression. University of Pennsylvania Press, 1982, 2nd ed. 1990; based on dissertation (with Charles Keil) Music Grooves. University of Chicago Press, 1994 (with Keith Basso, as eds.) Senses of Place. School of American Research Press, 1996 (with Bambi B. Schieffelin and others) Bosavi-English-Tok Pisin Dictionary. Australian National University, Pacific Linguistics C-153, 1998 (with Dick Blau, Charles Keil, and Angeliki V. Keil) Bright Balkan Morning: Romani Lives and the Power of Greek Music in Macedonia. Wesleyan University Press, 2002 Website (with Virginia Ryan) Exposures: A White Woman in West Africa Voxlox Publication, 2006 (with Nicola Scaldaferri) When the trees resound - Collaborative Media Research on an Italian Festival, Nota, Udine, 2019 Recordings Music of the Kaluli. Institute of Papua New Guinea Studies, 1981 The Kaluli of Papua Nugini: Weeping and Song. Bärenreiter Musicaphon, 1985 Voices of the Rainforest. Rykodisc, 1991 Rainforest Soundwalks: Ambiences of Bosavi, Papua New Guinea. Earth Ear, 2001 Bosavi: Rainforest Music from Papua New Guinea. Smithsonian Folkways, 2001 Bells and Winter Festivals of Greek Macedonia. Smithsonian Folkways, 2002 For VoxLox The Time of Bells Vol. 1 & 2, 2004; Vol. 3 (with Nii Noi Nortey), 2005; Vol. 4, 2006 Suikinkutsu: A Japanese Underground Water Zither, 2006 The Castaways Project (with Virginia Ryan) 2006 Topographies of The Dark:2007 Notes External links UNM faculty website VoxLox label website Grieg Academy faculty website Research Reports for the Ear: Soundscape Art in Scientific Presentations by Jim Cummings, including some sound samples of Feld's and an analysis of his recording work Interview with Carlos Palombini American ethnomusicologists MacArthur Fellows Academic staff of the University of Bergen Hofstra University alumni 1949 births Living people Field recording
Steven Feld
[ "Engineering" ]
1,471
[ "Audio engineering", "Field recording" ]
9,508,206
https://en.wikipedia.org/wiki/Nicol%C3%B2%20Barattieri
Nicolò Barattieri was a Lombard engineer active in Venice during the 12th century. In 1180 he raised St Mark's Campanile to 200 feet. In around 1181, he built the first bridge across the Grand Canal, a pontoon bridge then called the Ponte della Moneta that was the first version of the Rialto Bridge. Barattieri also erected the columns of San Marco and San Todaro in the Piazzetta di San Marco. References Engineers from Venice Structural engineers Year of death unknown Year of birth unknown
Nicolò Barattieri
[ "Engineering" ]
111
[ "Structural engineering", "Structural engineers" ]
9,508,218
https://en.wikipedia.org/wiki/Mitogen-activated%20protein%20kinase%20kinase
Mitogen-activated protein kinase kinase (also known as MAP2K, MEK, MAPKK) is a dual-specificity kinase enzyme which phosphorylates mitogen-activated protein kinase (MAPK). MAP2K is classified under EC 2.7.12. There are seven genes: MAP2K1 (a.k.a. MEK1), MAP2K2 (a.k.a. MEK2), MAP2K3 (a.k.a. MKK3), MAP2K4 (a.k.a. MKK4), MAP2K5 (a.k.a. MKK5), MAP2K6 (a.k.a. MKK6), and MAP2K7 (a.k.a. MKK7). The activators of p38 (MKK3 and MKK6), JNK (MKK4 and MKK7), and ERK (MEK1 and MEK2) define independent MAP kinase signal transduction pathways. The acronym MEK derives from MAPK/ERK Kinase. Role in melanoma MEK is a member of the MAPK signaling cascade that is activated in melanoma. When MEK is inhibited, cell proliferation is blocked and apoptosis (controlled cell death) is induced. See also Signal transduction MAP kinase MAP kinase kinase kinase MAP kinase kinase kinase kinase References External links Protein kinases EC 2.7.12 Genes associated with cancer
Mitogen-activated protein kinase kinase
[ "Chemistry" ]
279
[ "Biochemistry stubs", "Protein stubs" ]
9,508,538
https://en.wikipedia.org/wiki/Eukaryotic%20initiation%20factor
Eukaryotic initiation factors (eIFs) are proteins or protein complexes involved in the initiation phase of eukaryotic translation. These proteins help stabilize the formation of ribosomal preinitiation complexes around the start codon and are an important input for post-transcription gene regulation. Several initiation factors form a complex with the small 40S ribosomal subunit and Met-tRNAiMet called the 43S preinitiation complex (43S PIC). Additional factors of the eIF4F complex (eIF4A, E, and G) recruit the 43S PIC to the five-prime cap structure of the mRNA, from which the 43S particle scans 5'-->3' along the mRNA to reach an AUG start codon. Recognition of the start codon by the Met-tRNAiMet promotes gated phosphate and eIF1 release to form the 48S preinitiation complex (48S PIC), followed by large 60S ribosomal subunit recruitment to form the 80S ribosome. There exist many more eukaryotic initiation factors than prokaryotic initiation factors, reflecting the greater biological complexity of eukaryotic translation. There are at least twelve eukaryotic initiation factors, composed of many more polypeptides, and these are described below. eIF1 and eIF1A eIF1 and eIF1A both bind to the 40S ribosome subunit-mRNA complex. Together they induce an "open" conformation of the mRNA binding channel, which is crucial for scanning, tRNA delivery, and start codon recognition. In particular, eIF1 dissociation from the 40S subunit is considered to be a key step in start codon recognition. eIF1 and eIF1A are small proteins (13 and 16 kDa, respectively, in humans) and are both components of the 43S PIC. eIF1 binds near the ribosomal P-site, while eIF1A binds near the A-site, in a manner similar to the structurally and functionally related bacterial counterparts IF3 and IF1, respectively. eIF2 eIF2 is the main protein complex responsible for delivering the initiator tRNA to the P-site of the preinitiation complex, as a ternary complex containing Met-tRNAiMet and GTP (the eIF2-TC). eIF2 has specificity for the methionine-charged initiator tRNA, which is distinct from other methionine-charged tRNAs used for elongation of the polypeptide chain. The eIF2 ternary complex remains bound to the P-site while the mRNA attaches to the 40S ribosome and the complex begins to scan the mRNA. Once the AUG start codon is recognized and located in the P-site, eIF5 stimulates the hydrolysis of eIF2-GTP, effectively switching it to the GDP-bound form via gated phosphate release. The hydrolysis of eIF2-GTP provides the conformational change needed to convert the scanning complex into the 48S initiation complex, with the initiator tRNA-Met anticodon base paired to the AUG. After the initiation complex is formed, eIF2, along with most of the initiation factors, dissociates from the complex, allowing the 60S subunit to bind. eIF1A and eIF5B-GTP remain bound to one another in the A site and must be hydrolyzed to be released and properly initiate elongation. eIF2 has three subunits, eIF2-α, β, and γ. The α-subunit is a target of regulatory phosphorylation and is of particular importance for cells that may need to turn off protein synthesis globally as a response to cell signaling events. When phosphorylated, it sequesters eIF2B (not to be confused with eIF2β), a GEF. Without this GEF, GDP cannot be exchanged for GTP, and translation is repressed. One example of this is the eIF2α-induced translation repression that occurs in reticulocytes when starved for iron. 
In the case of viral infection, protein kinase R (PKR) in many multicellular organisms phosphorylates eIF2α when dsRNA is detected, leading to cell death. The proteins eIF2A and eIF2D are both technically named 'eIF2', but neither is part of the eIF2 heterotrimer; instead, they appear to play distinct roles in specialized pathways, such as 'eIF2-independent' translation initiation and re-initiation, respectively. eIF3 eIF3 independently binds the 40S ribosomal subunit, multiple initiation factors, and cellular and viral mRNA. In mammals, eIF3 is the largest initiation factor, made up of 13 subunits (a-m). It has a molecular weight of ~800 kDa and controls the assembly of the 40S ribosomal subunit on mRNAs that have a 5' cap or an IRES. eIF3 may use the eIF4F complex or, alternatively, during internal initiation, an IRES to position the mRNA strand near the exit site of the 40S ribosomal subunit, thus promoting the assembly of a functional pre-initiation complex. In many human cancers, some eIF3 subunits are overexpressed (subunits a, b, c, h, i, and m) and others underexpressed (subunits e and f). One potential mechanism to explain this dysregulation comes from the finding that eIF3 binds a specific set of cell-proliferation regulator mRNA transcripts and regulates their translation. eIF3 also mediates cellular signaling through S6K1 and mTOR/Raptor to effect translational regulation. eIF4 The eIF4F complex is composed of three subunits: eIF4A, eIF4E, and eIF4G. Each subunit has multiple human isoforms and there exist additional eIF4 proteins: eIF4B and eIF4H. eIF4G is a 175.5-kDa scaffolding protein that interacts with eIF3 and the poly(A)-binding protein (PABP), as well as the other members of the eIF4F complex. eIF4E recognizes and binds to the 5' cap structure of mRNA, while eIF4G binds PABP, which binds the poly(A) tail, potentially circularizing and activating the bound mRNA. eIF4A, a DEAD-box RNA helicase, is important for resolving mRNA secondary structures. eIF4B contains two RNA-binding domains: one interacts non-specifically with mRNA, whereas the second specifically binds the 18S rRNA portion of the small ribosomal subunit. It acts as an anchor, as well as a critical co-factor for eIF4A. It is also a substrate of S6K, and when phosphorylated, it promotes the formation of the pre-initiation complex. In vertebrates, eIF4H is an additional initiation factor with similar function to eIF4B. eIF5, eIF5A and eIF5B eIF5 is a GTPase-activating protein, which helps the large ribosomal subunit associate with the small subunit. It is required for GTP hydrolysis by eIF2. eIF5A is the eukaryotic homolog of EF-P. It helps with elongation and also plays a role in termination. eIF5A contains the unusual amino acid hypusine. eIF5B is a GTPase, and is involved in assembly of the full ribosome. It is the functional eukaryotic analog of bacterial IF2. eIF6 eIF6 performs the same inhibition of ribosome assembly as eIF3, but binds to the large subunit. See also Eukaryotic translation Ded1/DDX3 DHX29 References Further reading External links Helicases Molecular biology Protein biosynthesis Gene expression
Eukaryotic initiation factor
[ "Chemistry", "Biology" ]
1,690
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
9,508,543
https://en.wikipedia.org/wiki/Bacterial%20initiation%20factor
A bacterial initiation factor (IF) is a protein that stabilizes the initiation complex for polypeptide translation. Translation initiation is essential to protein synthesis and regulates mRNA translation fidelity and efficiency in bacteria. The 30S ribosomal subunit, initiator tRNA, and mRNA form an initiation complex for elongation. This complex process requires three essential protein factors in bacteria – IF1, IF2, and IF3. These factors bind to the 30S subunit and promote correct initiation codon selection on the mRNA. IF1, the smallest factor at 8.2 kDa, blocks elongator tRNA binding at the A-site. IF2 is the major component that transports initiator tRNA to the P-site. IF3 checks P-site codon-anticodon pairing and rejects incorrect initiation complexes. The orderly mechanism of initiation starts with IF3 attaching to the 30S subunit and changing its shape. IF1 joins next, followed by mRNA binding and positioning of the start codon in the P-site. IF2 enters with the initiator tRNA and places it on the start codon. GTP hydrolysis by IF2 releases IF2 and IF3, enabling 50S subunit joining. The coordinated binding and activities of IF1, IF2, and IF3 are essential for the rapid and precise translation initiation in bacteria. They facilitate start codon selection and assemble an active, protein-synthesis-ready 70S ribosome. IF1 Bacterial initiation factor 1 associates with the 30S ribosomal subunit in the A site and prevents an aminoacyl-tRNA from entering. It modulates IF2 binding to the ribosome by increasing its affinity. It may also prevent the 50S subunit from binding, stopping the formation of the 70S ribosome. It also contains a β-domain fold common for nucleic acid-binding proteins. It is a homolog of eIF1A. Initiation factor IF-1 is the smallest translation factor at only 8.2 kDa. Beyond blocking the A-site, it affects the dynamics of ribosome association and dissociation. IF-1 enhances dissociation together with IF-3, likely by inducing conformational changes in the 30S subunit. It also increases the binding affinity of IF-2 to the 30S subunit, possibly by altering the subunit configuration. Though IF-1 occupies the A-site, it does so in a way that is distinct from tRNA binding. Structural studies show IF-1 inserts a loop into the minor groove of helix 44 of 16S rRNA, flipping out bases A1492 and A1493. This insertion repositions nucleotides of helix 44, transmitting a conformational change over a 70Å distance and rotating the head of the 30S subunit. IF-1 mutants can exhibit cold-sensitive phenotypes, indicating a role for the factor in cold shock adaptation. Certain mutations also alter the expression of certain genes at low temperatures, suggesting IF-1 is involved in gene regulation. IF-1 actively modifies ribosome structure and dynamics during initiation, in addition to just blocking the A-site. IF2 The IF2 initiation factor is a crucial component in the process of protein synthesis. The largest among the three indispensable translation initiation factors is IF-2, which possesses a molecular mass of 97 kDa. The protein has many domains, including an N-terminal domain, a GTPase domain, a linker region, C1, C2, and C-terminal domains. The GTPase domain encompasses the G1-G5 motifs, which are responsible for the binding and hydrolysis of GTP. The activity of IF2 is regulated by conformational changes induced by the binding and hydrolysis of GTP. The primary function of IF-2 is to transport the initiator fMet-tRNA to the P-site of the 30S ribosomal subunit. 
The C2 domain of IF2 has a unique recognition and binding affinity towards the initiator tRNA. The IF-2 protein has been observed to form a ternary complex when interacting with GTP and fMet-tRNA. This complex has been found to interact with the 30S subunit. The initiation of mRNA translation involves the placement of the start codon in the P-site through codon-anticodon base pairing with the tRNA anticodon. IF2 regulates start codon selection accuracy and inhibits the binding of elongator tRNAs by selectively binding to fMet-tRNA. Additionally, it relocates the initiator tRNA on the 30S subunit to enhance contact with the P-site. Furthermore, IF2 exhibits RNA chaperone activity, which enables it to rectify misfolded RNA structures. In general, the IF2 protein plays a crucial role in coordinating many steps of translation initiation, including the positioning of the mRNA start codon and fMet-tRNA in the P-site, the joining of subunits, and GTPase activation. IF3 Initiation factor IF3 is a small protein of 21 kDa containing two compact α/β domains (IF3C and IF3N) connected by a flexible lysine-rich linker. Most IF3 functions are mediated by the IF3C domain, while IF3N regulates 30S subunit binding. Bacterial initiation factor 3 (infC) is not found in all bacterial species, but in E. coli it is required for the 30S subunit to bind to the initiation site in mRNA. IF3 is required by the small subunit to form initiation complexes, but has to be released to allow the 50S subunit to bind. IF3 attaches to the platform side of the 30S subunit, close to helices 23, 24, 25, 26 and 45 of 16S rRNA, as well as ribosomal proteins S7, S11, and S12. The IF3C domain interacts with the 30S subunit via its conserved basic residues R99, R116, R147 and R168. A major function of IF3 is inspecting codon-anticodon pairing at the P-site during start codon selection. It accelerates the dissociation of non-canonical initiation complexes containing mismatched or incorrect tRNAs. IF3 also inspects the initiator tRNA, rejecting elongator tRNAs, and it also promotes the dissociation of the 70S ribosome into subunits, providing a pool of free 30S subunits for initiation. Another key role of IF3 is repositioning mRNA on the 30S subunit from a standby site to the P-site decoding site for start codon selection. IF3 works cooperatively with IF1 and IF2 during initiation, modulating IF2 binding and enhancing the fidelity of start codon selection. References External links Protein biosynthesis Gene expression
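As a purely illustrative aside (not from the source article), the ordered sequence of events summarized above can be written down as a toy subsequence check; the step labels and the helper function are hypothetical and only encode the canonical order in which the factors act:

# Toy sketch of the ordered 30S initiation events summarized above.
# Step names are descriptive labels, not a real API.
CANONICAL_ORDER = [
    "IF3 binds 30S",                          # blocks premature 50S joining
    "IF1 binds 30S A-site",                   # blocks elongator tRNA
    "mRNA binds; start codon in P-site",
    "IF2-GTP delivers fMet-tRNA to P-site",
    "IF3 checks codon-anticodon pairing",
    "GTP hydrolysis; IF1, IF2, IF3 released",
    "50S joins to form the 70S complex",
]

def productive(events):
    """True only if all canonical steps occur, in order, within `events`."""
    it = iter(events)
    return all(step in it for step in CANONICAL_ORDER)

print(productive(CANONICAL_ORDER))        # True: complete, correctly ordered initiation
print(productive(CANONICAL_ORDER[1:]))    # False: the IF3-binding step is missing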
Bacterial initiation factor
[ "Chemistry", "Biology" ]
1,414
[ "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "Molecular biology", "Biochemistry" ]
9,508,664
https://en.wikipedia.org/wiki/EADAS
EADAS is an acronym for Engineering and Administrative Data Acquisition System created at Bell Laboratories in Columbus, Ohio and used in the Bell System during the 1970s. EADAS was an Operations Support System (OSS) developed for the AT&T long distance and local Bell System telephone networks. The EADAS system collected network traffic usage data, generated periodic management reports, provided network management code controls and routing controls, and provided automated trouble analysis support for network support staff. EADAS was developed by a collaborative effort between Wisconsin Bell, a major "trial site", and Bell Laboratories in both Holmdel, New Jersey and Columbus, Ohio. This OSS was deployed on DEC PDP-11 computer systems throughout the Bell System network of central offices. See the Bell System Technical Journal published during the mid-1970s for more details. References Network management Telecommunications systems
EADAS
[ "Technology", "Engineering" ]
170
[ "Computer networks engineering", "Telecommunications systems", "Network management" ]
9,509,341
https://en.wikipedia.org/wiki/Trunks%20Integrated%20Record%20Keeping%20System
Trunks Integrated Record Keeping System (TIRKS) is an operations support system from Telcordia Technologies (since acquired by Ericsson, Inc.), originally developed by the Bell System during the late 1970s. It was developed for inventory and order control management of interoffice trunk circuits that interconnect telephone switches. It grew to encompass and automate many functions required to build the ever-expanding data transport network. Supporting circuits from POTS and 150 baud modems up through T1, DS3, SONET and DWDM, it continues to evolve today, and unlike many software technologies today, provides complete backward compatibility. TIRKS was recently updated with a Java GUI, XML API, and WORD Sketch, which provides graphical views of the TIRKS Work Order Record and Details Document as well as SONET and DWDM networks. When TIRKS became a registered trademark in 1987, it became technically improper to use it as an acronym. TIRKS was one of many OSS technologies transferred to Bell Communications Research as part of the Modification of Final Judgment related to the AT&T divestiture on January 1, 1984. In the 1990s, the Facility and Equipment Planning System (FEPS) and Planning Workstation System (PWS) products were incorporated into the Telcordia TIRKS CE System. TIRKS is still in use at AT&T, Verizon, CenturyLink/Lumen Technologies, and altafiber. Telecommunications systems
Trunks Integrated Record Keeping System
[ "Technology" ]
294
[ "Telecommunications systems" ]
9,509,772
https://en.wikipedia.org/wiki/Suppression%20subtractive%20hybridization
Subtractive hybridization is a technology that allows for PCR-based amplification of only cDNA fragments that differ between a control (driver) and an experimental (tester) transcriptome. cDNA is produced from mRNA. Differences in relative abundance of transcripts are highlighted, as are genetic differences between species. The technique relies on the removal of dsDNA formed by hybridization between a control and test sample, thus eliminating cDNAs or gDNAs of similar abundance and retaining transcripts or genomic sequences that are differentially expressed or variable in sequence. Suppression subtractive hybridization has also been successfully used to identify strain- or species-specific DNA sequences in a variety of bacteria including Vibrio species (Metagenomics). See also Representational difference analysis External links Overview at evrogen.com Biotechnology
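As a conceptual illustration (not part of the source article), the subtraction step can be thought of as removing everything the tester sample shares with the driver; the snippet below reduces this to exact set operations, whereas the real method works through hybridization kinetics of cDNA populations rather than exact sequence matching:

# Conceptual sketch only: subtraction reduced to set arithmetic.
# Real SSH removes driver-tester hybrids (dsDNA) kinetically; the sequence
# identifiers here are hypothetical placeholders.
driver = {"cDNA_A", "cDNA_B", "cDNA_C"}              # control (driver) transcripts
tester = {"cDNA_A", "cDNA_B", "cDNA_C", "cDNA_X"}    # experimental (tester) transcripts

# Transcripts shared with the driver hybridize into dsDNA and are excluded
# from amplification; tester-specific transcripts remain and are PCR-amplified.
tester_specific = tester - driver
print(tester_specific)    # {'cDNA_X'}: the differentially present transcript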
Suppression subtractive hybridization
[ "Chemistry", "Biology" ]
164
[ "Biotechnology stubs", "Biotechnology", "Biochemistry stubs", "nan", "Biochemistry" ]
9,510,020
https://en.wikipedia.org/wiki/Jeff%20Moss%20%28hacker%29
Jeff Moss (born January 1, 1975), also known as Dark Tangent, is an American hacker, computer and internet security expert who founded the Black Hat and DEF CON computer security conferences. Early life and education Moss received his first computer at the age of 10. He became fascinated because he wasn't old enough to drive a car or vote, but he could engage in adult conversation with people all over the country. Moss graduated from Gonzaga University with a BA in Criminal Justice. He worked for Ernst & Young, LLP in their Information System Security division and was a director at Secure Computing Corporation where he helped establish the Professional Services Department in the United States, Asia, and Australia. Security conferences In 1993, he created the first DEF CON hacker convention, based around a party for members of a Fido hacking network in Canada. It slowly grew, and by 1999 was attracting major attention. In 1997, he created the Black Hat Briefings, a computer security conference that brings together a variety of people interested in information security. He sold Black Hat in 2005 to CMP Media, a subsidiary of UK-based United Business Media, for a reported $13.9 million USD. DEF CON was not included in the sale. In 2018, Jeff launched the first DEF CON hacker convention outside of the United States. Bearing the same name, DEF CON China was hosted in Beijing and co-hosted by Baidu. The first year of DEF CON China was labeled a beta year, and in 2019 the conference was formalized as DEF CON China 1.0. Later career Moss is a member and regular attendee of the Washington, D.C.-based Council on Foreign Relations (CFR), an independent, nonpartisan membership organization, think tank, and publisher. In 2009, Moss was sworn into the Homeland Security Advisory Council of the Barack Obama administration. On April 28, 2011, Jeff Moss was appointed ICANN Chief Security Officer. In July 2012, Secretary Janet Napolitano directed the Homeland Security Advisory Council to form the Task Force on CyberSkills in response to the increasing demand for the best and brightest in the cybersecurity field across industry, academia and government. The Task Force, co-chaired by Jeff Moss and Alan Paller, conducted extensive interviews with experts from government, the private sector, and academia in developing its recommendations to grow the advanced technical skills of the DHS cybersecurity workforce and expand the national pipeline of men and women with these cybersecurity skills. On October 1, the HSAC unanimously approved sending the Task Force recommendations to the Secretary. In October 2013, Jeff announced that he would be stepping down from his position at ICANN at the end of 2013. In 2013, Jeff was appointed as a Nonresident Senior Fellow at the Atlantic Council, associated with the Cyber Statecraft Initiative, within the Brent Scowcroft Center on International Security. In 2014, Jeff joined the Georgetown University School of Law Cybersecurity Advisory Committee. On 18 March 2016, Richemont announced his nomination for election to its Board of Directors. In 2017, Jeff was named a Commissioner at the Global Commission on the Stability of Cyberspace (GCSC). The GCSC is composed of 24 prominent independent Commissioners representing a wide range of geographic regions as well as government, industry, technical and civil society stakeholders with legitimacy to speak on different aspects of cyberspace. 
The Commission's stated aim is to develop proposals for norms and policies to enhance international security and stability and guide responsible state and non-state behavior in cyberspace. In 2017, Jeff spearheaded the creation of the DEF CON Voting Machine Village. Debuting at DEF CON 25, the Voting Machine Village allowed hackers to test the security of electronic voting machines, including several models still in active use in the US. The machines were all compromised over the course of the conference by DEF CON attendees, some within hours of the village's opening. The resulting media coverage of the vulnerability of all tested machines sparked a national conversation and inspired legislation in Virginia. In September 2017, the Voting Machine Village produced "DEF CON 25 Voting Machine Hacking Village: Report on Cyber Vulnerabilities in US Election Equipment, Databases and Infrastructure" summarizing its findings. The findings were publicly released at an event sponsored by the Atlantic Council and the paper went on to win an O'Reilly Defender Research Award. In March 2018, the DEF CON Voting Machine Hacking Village was awarded a Cybersecurity Excellence Award. The award cites both the spurring of a national dialog around securing the US election system and the release of the nation's first cybersecurity election plan. In December 2021, Moss was appointed as one of twenty-three members of a newly formed US DHS CISA cybersecurity advisory council. Other notable members include Alex Stamos, Steve Adler, Bobby Chesney, Thomas Fanning, Vijaya Gadde, Patrick Gallagher, and Alicia Tate-Nadeau. Current position Moss is currently based in Seattle, where he works as a security consultant for a company that is hired to test other companies' computer systems. He has been interviewed on issues including the internet situation between the United States and China, spoofing and other e-mail threats and the employment of hackers in a professional capacity, including in law enforcement. Film Moss was an Executive Producer on DEFCON: The Documentary (2013). The film follows the four days of the conference, events and people (attendees and staff), and covers history and philosophy behind DEF CON's success and unique experiences. He was also a cast member in the film Code 2600. Moss also works with the technical consulting team for the television series Mr. Robot. Popular culture references DEF CON was portrayed in The X-Files episode "Three of a Kind" featuring an appearance by the Lone Gunmen. DEF CON was portrayed as a United States government-sponsored convention instead of a civilian convention. Actor Will Smith visited DEF CON 21 to watch a talk by Apollo Robbins, the gentleman thief, and to study the DEF CON culture for an upcoming movie role. References External links Hackers Living people Gonzaga University alumni 1975 births Ernst & Young people Commissioners of the Global Commission on the Stability of Cyberspace Atlantic Council
Jeff Moss (hacker)
[ "Technology" ]
1,253
[ "Lists of people in STEM fields", "Hackers" ]
9,510,427
https://en.wikipedia.org/wiki/TGF%20beta%202
Transforming growth factor-beta 2 (TGF-β2) is a secreted protein known as a cytokine that performs many cellular functions and has a vital role during embryonic development (alternative names: Glioblastoma-derived T-cell suppressor factor, G-TSF, BSC-1 cell growth inhibitor, Polyergin, Cetermin). It is an extracellular glycosylated protein. It is known to suppress the effects of interleukin-dependent T-cell tumors. There are two named isoforms of this protein, created by alternative splicing of the same gene. Further reading Proteins TGFβ domain
TGF beta 2
[ "Chemistry" ]
144
[ "Proteins", "Biomolecules by chemical classification", "Molecular biology" ]
9,510,546
https://en.wikipedia.org/wiki/ID-MM7
ID-MM7 is a protocol developed and promoted by the Liberty Alliance, driven by major mobile operators such as Vodafone and Telefónica Móviles, to standardize identity-based web services interfaces to mobile messaging. The ID-MM7 specification adds significant value to existing web services. MM7 has long been used by operators for relaying MMS and SMS traffic. ID-MM7 enables an entirely new business model wherein content providers know their subscribers only pseudonymously, providing the capability to thwart spam, identity theft and fraud. Known implementations of the protocol include: Symlabs Federated Identity Platform ID-Messaging Computer access control protocols Mobile telecommunications standards Mobile web
ID-MM7
[ "Technology" ]
142
[ "Mobile telecommunications", "Wireless networking", "Mobile web", "Mobile telecommunications standards" ]
9,510,615
https://en.wikipedia.org/wiki/Focal%20infection%20theory
Focal infection theory is the historical concept that many chronic diseases, including systemic and common ones, are caused by focal infections. In present medical consensus, a focal infection is a localized infection, often asymptomatic, that causes disease elsewhere in the host, but focal infections are fairly infrequent and limited to fairly uncommon diseases. (Distant injury is focal infection's key principle, whereas in ordinary infectious disease, the infection itself is systemic, as in measles, or the initially infected site is readily identifiable and invasion progresses contiguously, as in gangrene.) Focal infection theory, rather, so explained virtually all diseases, including arthritis, atherosclerosis, cancer, and mental illnesses. An ancient concept that took modern form around 1900, focal infection theory was widely accepted in medicine by the 1920s. In the theory, the focus of infection might lead to secondary infections at sites particularly susceptible to such microbial species or toxin. Commonly alleged foci were diverse—appendix, urinary bladder, gall bladder, kidney, liver, prostate, and nasal sinuses—but most commonly were oral. Besides dental decay and infected tonsils, both dental restorations and especially endodontically treated teeth were blamed as foci. The putative oral sepsis was countered by tonsillectomies and tooth extractions, including of endodontically treated teeth and even of apparently healthy teeth, newly popular approaches—sometimes leaving individuals toothless—to treat or prevent diverse diseases. Drawing severe criticism in the 1930s, focal infection theory—whose popularity zealously exceeded consensus evidence—was discredited in the 1940s by research attacks that drew overwhelming consensus of this sweeping theory's falsity. Thereupon, dental restorations and endodontic therapy became again favored. Untreated endodontic disease retained mainstream recognition as fostering systemic disease. But only alternative medicine and later biological dentistry continued highlighting sites of dental treatment—still endodontic therapy, but, more recently, also dental implant, and even tooth extraction, too—as foci of infection causing chronic and systemic diseases. In mainstream dentistry and medicine, the primary recognition of focal infection is endocarditis, if oral bacteria enter blood and infect the heart, perhaps its valves. Entering the 21st century, scientific evidence supporting general relevance of focal infections remained slim, yet evolved understandings of disease mechanisms had established a third possible mechanism—altogether, metastasis of infection, metastatic toxic injury, and, as recently revealed, metastatic immunologic injury—that might occur simultaneously and even interact. Meanwhile, focal infection theory has gained renewed attention, as dental infections apparently are widespread and significant contributors to systemic diseases, although mainstream attention is on ordinary periodontal disease, not on hypotheses of stealth infections via dental treatment. Despite some doubts renewed in the 1990s by conventional dentistry's critics, dentistry scholars maintain that endodontic therapy can be performed without creating focal infections. Rise and popularity (1890s–1930s) Roots and dawn Germ theory Hippocrates, in ancient Greece, had reported cure of an arthritis case by tooth extraction. Yet focal infection, as such, appeared in modern medicine in 1877, when Karl Weigert reported "dissemination of 'tuberculosis poison' ". 
The prior year's breakthrough by Robert Koch, a fellow German, had launched medical bacteriology—a set of laboratory methods to isolate, culture, and multiply a single bacterium of one species—whereby Koch announced discovery of the "tubercle bacillus" in 1882, fully premising the modern principle of focal infection. In 1884, William Henry Welch, tasked to design the medical department at the newly forming Johns Hopkins University, imported the German model, "scientific medicine", to America. As progressively more diseases drew an infectious hypothesis that led to a pathogen discovery, conjectures grew that virtually all diseases are infectious. In 1890, German dentist Willoughby D Miller attributed a set of oral diseases to infections, and attributed a set of extraoral diseases—as of lung, stomach, brain abscesses, and other conditions—to the oral infections. In 1894, Miller became the first to identify bacteria in samples of tooth pulp. Miller advised root canal therapy. Yet ancient and folk concepts, entrenched as Galenic principles of humoral medicine, found a new outlet in medical bacteriology, a pillar of the new "scientific medicine". Around 1900, British surgeons, still knife-happy, were urging "surgical bacteriology". Autointoxication In 1877, French chemist Louis Pasteur adopted Robert Koch's bacteriology protocols, but soon directed them to developing the first modern vaccines, and ultimately introduced rabies vaccine in 1885. Its success funded Pasteur's formation of the globe's first biomedical research institute, the Pasteur Institute. In 1886, Pasteur welcomed to Paris the emigration from Russia by international scientific celebrity Elie Metchnikoff—discoverer of phagocytes, mediating innate immunity—whom Pasteur granted an entire floor of the Pasteur Institute, once it opened in 1888. Later the institute's director and a 1908 Nobelist, Metchnikoff believed, as did his German immunology rival Paul Ehrlich—theorist on antibody, mediating acquired immunity—and as did Pasteur, too, that nutrition influences immunity. Metchnikoff brought to France its first yogurt cultures for probiotic microorganisms to suppress the colon's putrefactive microorganisms, which allegedly fostered the colon's toxic seepage causing degenerative disease, the putative phenomenon termed autointoxication. Metchnikoff reasoned that the colon functions as a "vestigial cesspool" that stores waste but is unneeded. Abdominal surgery's pioneer, Sir Arbuthnot Lane, based in London, drew from Metchnikoff and clinical observation to identify "chronic intestinal stasis"—in lay terms, intractable constipation—presumably, "flooding of the circulation with filthy material". Reporting surgical treatment in 1908, Lane eventually offered total colon removal, but later favored simply surgical release of colonic "kinks", and in 1925, abandoning surgery, began promoting prevention and intervention by diet and lifestyle, which is how Lane secured his contemporary reputation as a crank. Since 1875, in the American state Michigan, physician John Harvey Kellogg had targeted "bowel sepsis"—an allegedly prime cause of degeneration and disease—at his health resort, Battle Creek Sanitarium. Having, in fact, coined the term sanitarium, Kellogg yearly received several thousand patients, including US Presidents and celebrities, at his huge resort, advertised as the "University of Health". 
But in the 1910s, as North American medical schools emulated the German model—that is, "scientific medicine"—medical doctors who recognized "focal infection" were hinting at a scientific basis versus the older, alleged "health faddists" like medical doctor Kellogg and like minister Sylvester Graham. Medical popularity Hunter on "oral sepsis" In 1900, British surgeon William Hunter blamed many disease cases on oral sepsis. In 1910, lecturing in Montreal at McGill University, Hunter declared, "The worst cases of anemia, gastritis, colitis, obscure fevers, nervous disturbances of all kinds from mental depression to actual lesions of the cord, chronic rheumatic infections, kidney diseases are those which owe their origin to or are gravely complicated by the oral sepsis produced by these gold traps of sepsis." Thus, he apparently indicted dental restorations. Incriminating their execution, rather, his American critics lobbied for stricter requirements on dentistry licensing. Still, Hunter's lecture—as later recalled—"ignited the fires of focal infection". Ten years later, he proudly accepted that credit. And yet, read carefully, his lecture asserts a sole cause of oral sepsis: dentists who instruct patients to never remove partial dentures. Billings & Rosenow Focal infection theory's modern era really began with physician Frank Billings, based in Chicago, and his case reports of tonsillectomies and tooth extractions that apparently cured infections of distant organs. Replacing Hunter's term oral sepsis with focal infection, Billings in November 1911 lectured at the Chicago Medical Society, and published the lecture in 1912 as an article for the American medical community. In 1916, Billings lectured in California at Stanford University Medical School, this time printed in book format. Billings thus popularized intervention by tonsillectomy and tooth extraction. A pupil of Billings, Edward Rosenow held that extraction alone was often insufficient, and urged teamwork by dentistry and medicine. Rosenow developed the principle of elective localization, whereby microorganisms have affinities for particular organs, and also espoused extreme pleomorphism, whereby a bacterium can drastically change form and perhaps evade conventional detection methods. Preeminent recognition Since 1889, in the American state Minnesota, brothers William Mayo and Charles Mayo had built an international reputation for surgical skill at their Mayo Clinic, by 1906 performing some 5,000 surgeries a year, over 50% intra-abdominal, a tremendous number at the time, with unusually low mortality and morbidity. Though originally distancing themselves from routine medicine and skeptical of laboratory data, they later recruited Edward Rosenow from Chicago to help improve Mayo Clinic's diagnosis and care and to enter basic research via experimental bacteriology. Rosenow influenced Charles Mayo, who by 1914 had published in support of focal infection theory alongside Rosenow. At Johns Hopkins University's medical school, launched in 1894 as America's first to teach "scientific medicine", the eminent Sir William Osler was succeeded as professor of medicine by Llewellys Barker, who became a prominent proponent of focal infection theory. Although many of the Hopkins medical faculty remained skeptics, Barker's colleague William Thayer cast support. As Hopkins' chief physician, Barker was a pivotal convert propelling the theory to the center of American routine medical practice. Russell Cecil, famed author of Cecil's Essentials of Medicine, too, lent support. 
In 1921, British surgeon William Hunter announced that oral sepsis was "coming of age". Although physicians had already interpreted pus within a bodily compartment as a systemic threat, pus from infected tooth roots often drained into the mouth and thereby was viewed as systemically inconsequential. Amid focal infection theory, it was concluded that that was often the case—while immune response prevented dissemination from the focus—but that immunity could fail to contain the infection, that dissemination from the focus could ensue, and that systemic disease, often neurological, could result. By 1930, excision of focal infections was considered a "rational form of therapy" undoubtedly resolving many cases of chronic diseases. Its inconsistent effectiveness was attributed to unrecognized foci—perhaps inside internal organs—that the clinicians had missed. Dental reception In 1923, upon some 25 years of researches, dentist Weston Andrew Price of Cleveland, Ohio, published a landmark book, then a related article in the Journal of the American Medical Association in 1925. Price concluded that after root canal therapy, teeth routinely host bacteria producing potent toxins. Transplanting the teeth into healthy rabbits, Price and his researchers duplicated heart and arthritic diseases. Although Price noted often seeing patients "suffering more from the inconvenience and difficulties of mastication and nourishment than they did from the lesions from which their physician or dentist had sought to give them relief", his 1925 debate with John P Buckley was decided in favor of Price's position: "practically all infected pulpless teeth should be extracted". As chairman of the American Dental Association's research division, Price was a leading influence on the dentistry profession's opinion. Into the late 1930s, textbook authors relied on Price's 1923 treatise. In 1911, the year that Frank Billings lectured on focal infection to the Chicago Medical Society, unsuspected periapical disease was first revealed by dental X-ray. Introduced by C. Edmund Kells, dental radiography came to feed the "mania of extracting devitalized teeth". Even Price was cited as an authoritative source espousing conservative intervention at focal infections. Kells, too, advocated conservative dentistry. Many dentists were "100 percenters", extracting every tooth exhibiting either necrotic pulp or endodontic treatment, and extracted apparently healthy teeth, too, as suspected foci, leaving many persons toothless. A 1926 report published by several authors in Dental Cosmos—a dentistry journal where Willoughby Miller had published in the 1890s—advocated extraction of known healthy teeth to prevent focal infection. Endodontics nearly vanished from American dental education. Some dentists held that root canal therapy should be criminalized and penalized with six months of hard labor. Psychiatric promulgation Near the turn of the 20th century, psychiatry's predominant explanations of schizophrenia's causation, besides heredity, were focal infection and autointoxication. In 1907, psychiatrist Henry Andrews Cotton became director of the psychiatric asylum at Trenton State Hospital in the American state New Jersey. Influenced by focal infection theory's medical popularity, Cotton identified focal infections as the main causes of dementia praecox (now schizophrenia) and of manic depression (now bipolar disorder). 
Cotton routinely prescribed surgery not only to clean the nasal sinuses and to extract the tonsils and the teeth, but also to remove the appendix, gall bladder, spleen, stomach, colon, cervix, ovaries, and testicles, while Cotton claimed up to an 85% cure rate. Despite Cotton's death rate of some 30%, his fame rapidly spread through America and Europe, and the asylum drew an influx of patients. The New York Times heralded "high hope". Cotton made a European lecture tour, and Princeton University Press and Oxford University Press simultaneously published his book in 1922. Despite skepticism in the profession, psychiatrists sustained pressure to match Cotton's treatments, as patients would ask why they were being denied curative treatment. Other patients were pressured or compelled into the treatment without their own consent. Cotton had his two sons' teeth extracted as preventive healthcare—although each later committed suicide. In the 1930s, however, focal infection fell from psychiatry as an explanation, Cotton having died in 1933. Criticism and decline (1930s–1950s) Early skepticism Addressing the Eastern Medical Society in December 1918, New York City physician Robert Morris had explained that focal infection theory had drawn much interest but that understanding was incomplete, while the theory was earning disrepute through the overzealousness of some advocates. Morris called for facts and explanation from scientists before physicians continued investing so steeply in a theory that was already triggering vigorous disputes and embittering divisions among clinicians as well as uncertainty among patients. In 1919, the American Dental Association's forerunner, the National Dental Association, held in New Orleans its annual meeting, where C Edmund Kells, the originator and pioneer of dental X-ray, delivered a lecture, published in 1920 in the association's journal, largely discussing focal infection theory, which Kells condemned as a "crime". Kells stressed that X-ray technology is to improve dentistry, not to enhance the "mania of extracting devitalized teeth". Kells urged dentists to reject physicians' prescriptions of tooth extractions. Focal infection theory's elegance suggested simple application, but the surgical removals brought a meager "cure" rate, occasional disease worsening, and inconsistent experimental results. Still, the lack of controlled clinical trials, a point of criticism today, was normal at the time—except in New York City. Around 1920, as Henry Cotton claimed up to 85% success treating schizophrenia and manic depression, Cotton's major critic was George Kirby, director of the New York State Psychiatric Institute on Ward's Island. As colleagues of Kirby, two researchers—bacteriologist Nicolas Kopeloff and psychiatrist Clarence Cheney—ventured from Ward's Island to Trenton, New Jersey, to investigate Cotton's practice. Research attacks In two controlled clinical trials with alternate allocation of patients, Nicolas Kopeloff, Clarence Cheney, and George Kirby concluded Cotton's psychiatric surgeries ineffective: those who improved were already so prognosed, and others improved without surgery. Publishing two papers, the team presented the findings at the American Psychiatric Association's 1922 and 1923 annual meetings. At Johns Hopkins University, Phyllis Greenacre questioned most of Cotton's data, and later helped steer American psychiatry into psychoanalysis. 
Antipsychotic colectomy vanished except in Trenton until Cotton—who used publicity and word of mouth, kept the 30% death rate unpublicized, and passed a 1925 investigation by the New Jersey Senate—died of a heart attack in 1933. By 1927, Weston Price's researches had been criticized for allegedly "faulty bacterial technique". In the 1930s and 1940s, researchers and editors dismissed the studies of Price and of Edward Rosenow as flawed by insufficient controls, by massive doses of bacteria, and by contamination of endodontically treated teeth during extraction. In 1938, Russell Cecil and D Murray Angevine reported 200 cases of rheumatoid arthritis, but no consistent cures by tonsillectomies or tooth extractions. They commented, "Focal infection is a splendid example of a plausible medical theory which is in danger of being converted by its enthusiastic supporters into the status of an accepted fact." Newly a critic, Cecil alleged that foci were "anything readily accessible to surgery". In 1939, E W Fish published landmark findings that would revive endodontics. Fish implanted bacteria into guinea pigs' jaws, and reported that four zones of reaction consequently developed. Fish reported that the first zone was the zone of infection, whereas the other three zones—surrounding the zone of infection—revealed immune cells or other host cells but no bacteria. Fish theorized that by removing the infectious nidus, dentists would permit recovery from the infection. This reasoning and conclusion by Fish became the basis for successful root-canal treatment. Still, endodontic therapy of the era indeed posed substantial risk of failure, and fear of focal infection crucially motivated endodontists to develop new and improved technology and techniques. End of the focal era The review and "critical appraisal" by Hobart A Reimann and W Paul Havens, published in January 1940, was perhaps the most influential criticism of focal infection theory. Recasting British surgeon William Hunter's landmark pronouncements of 30 years earlier as widely misinterpreted, they summarized that "the removal of infectious dental focal infections in the hope of influencing remote or general symptoms of disease must still be regarded as an experimental procedure not devoid of hazard". By 1940, Louis I Grossman's textbook Root Canal Therapy flatly rejected the methods and conclusions made earlier by Weston Price and especially by Edward Rosenow. Amid improvements in endodontics and medicine, including the release of sulfa drugs and antibiotics, a backlash to the "orgy" of tooth extractions and tonsillectomies ensued. K A Easlick's 1951 review in the Journal of the American Dental Association notes, "Many authorities who formerly felt that focal infection was an important etiologic factor in systemic disease have become skeptical and now recommend less radical procedures in the treatment of such disorders". A 1952 editorial in the Journal of the American Medical Association tolled the era's end by stating that "many patients with diseases presumably caused by foci of infection have not been relieved of their symptoms by removal of the foci", that "many patients with these same systemic diseases have no evident focus of infection", and that "foci of infection are as common in apparently healthy persons as in those with disease". Although some support extended into the late 1950s, focal infection vanished as the primary explanation of chronic, systemic diseases, and the theory was generally abandoned in the 1950s. 
Revival and evolution (1990s–2010s) Despite the general theory's demise, focal infection remained a formal, if rare, diagnosis, as in idiopathic scrotal gangrene and angioneurotic edema. Meanwhile, by way of continuing case reports claiming cures of chronic diseases like arthritis after extraction of infected or root-filled teeth, and despite lack of scientific evidence, "dental focal infection theory never died". In fact, severe endodontic disease resembles classic focal infection theory. In 1986, it was noted that, "in spite of a decline in recognition of the focal-infection theory, the association of decayed teeth with systemic disease is taken very seriously". Eventually, the theory of focal infection drew reconsideration. Conversely, attribution of endocarditis to dentistry has entered doubt via case-control studies, as the species usually involved is present throughout the human body. Stealth pathogens With the 1950s introduction of antibiotics, attempts to explain unexplained diseases via bacterial etiology seemed all the more unlikely. By the 1970s, however, it was established that antibiotics could trigger bacteria to switch to their L phase. Eluding detection by traditional methods of medical microbiology, bacterial L forms and the similar mycoplasma—and, later, viruses—became the entities expected in the theory of focal infection. Yet until the 1980s, such researchers were scarce, largely due to limited funding for such investigations. Despite the limited funding, research established that L forms can adhere to red blood cells and thereby disseminate from foci within internal organs such as the spleen, or from oral tissues and the intestines, especially during dysbiosis. Perhaps some of Weston Price's identified "toxins" in endodontically treated teeth were L forms, thought nonexistent by bacteriologists of his time and widely overlooked into the 21st century. Apparently, dental infections, including by uncultured or cryptic microorganisms, contribute to systemic diseases. Periodontal medicine Since the 1990s' emergence of epidemiological associations between dental infections and systemic diseases, American dentistry scholars have been cautious, some seeking successful intervention to confirm causality. Some American sources emphasized epidemiology's inability to determine causality, categorized the phenomena as progressive invasion of local tissues, and distinguished that from focal infection theory—which they assert was evaluated and disproved by the 1940s. Others have found focal infection theory's scientific evidence still slim, but have conceded that evolving science might establish it. Yet select American authors affirm the return of a modest theory of focal infection. European sources find it more certain that dental infections drive systemic diseases, at least by driving systemic inflammation, and probably, among other immunologic mechanisms, by molecular mimicry resulting in antigenic crossreaction with host biomolecules, while some seemingly find progressive invasion of local tissues compatible with focal infection theory. Acknowledging that beyond epidemiological associations, successful intervention is needed to establish causality, they emphasize that biological explanation is needed atop both, and the biological aspect is thoroughly established already, such that general healthcare, as for cardiovascular disease, must address prevalent periodontal disease, a stance matched in Indian literature. Thus, there has emerged the concept of periodontal medicine. 
Dental controversies During the 1980s, dentist Hal Huggins, sparking severe controversy, spawned biological dentistry, which claims that conventional tooth extraction routinely leaves within the tooth socket the periodontal ligament, which often becomes gangrenous and then forms a jawbone cavitation that seeps infectious and toxic material. Sometimes forming elsewhere in bones after injury or ischemia, jawbone cavitations are recognized as foci also in osteopathy and in alternative medicine, but conventional dentists generally conclude that they do not exist. Although the International Academy of Oral Medicine & Toxicology claims that the scientific evidence establishing existence of jawbone cavitations is overwhelming and even published in textbooks, the diagnosis and related treatment remain controversial, and allegations of quackery persist. Huggins and many biological dentists also espouse Weston Price's findings on endodontically treated teeth routinely being foci of infection, although these dentists have been accused of quackery. Conventional belief is that microorganisms within inaccessible regions of a tooth's roots are rendered harmless once entrapped by the filling material, although little evidence supports this. A H Rogers in 1976 and E H Ehrmann in 1977 had dismissed any relation between endodontics and focal infection. In response to dentist George Meinig's 1994 book, Root Canal Cover-Up, which discussed the researches of Rosenow and of Price, some dentistry scholars reasserted that the claims had been evaluated and disproved by the 1940s. Yet Meinig was but one of at least three authors who in the early 1990s independently renewed the concern; Boyd Haley and Curt Pendergrass reported finding especially high levels of bacterial toxins in root-filled teeth. Although such a possibility appears especially likely amid compromised immunity—as in individuals cirrhotic, asplenic, elderly, rheumatoid arthritic, or using steroid drugs—there remained a lack of carefully controlled studies definitively establishing adverse systemic effects. Conversely, a few studies have investigated the effects of systemic disease on root-canal therapy's outcomes, which tend to worsen with poor glycemic control, perhaps via impaired immune response, a factor largely ignored until recently, but now recognized as important. Still, even by 2010, "the potential association between systemic health and root canal therapy has been strongly disputed by dental governing bodies and there remains little evidence to substantiate the claims". The traditional root-filling material is gutta-percha, whereas a new material, Biocalex, drew initial optimism even in alternative dentistry, but Biocalex-filled teeth were later reported by Boyd Haley to likewise seep toxic byproducts of anaerobic bacterial metabolism. Seeking to sterilize the tooth interior, some dentists, both alternative and conventional, have applied laser technology. Although endodontic therapy can fail and eventually often does, dentistry scholars maintain that it can be performed without creating focal infections. And even by 2010, molecular methods had rendered no consensus reports of bacteremia traced to asymptomatic endodontic infection. In any event, the predominant view is that shunning endodontic therapy or routinely extracting endodontically treated teeth to treat or prevent systemic diseases remains unscientific and misguided. Footnotes Diseases of oral cavity, salivary glands and jaws Epidemiology
Focal infection theory
[ "Environmental_science" ]
5,437
[ "Epidemiology", "Environmental social science" ]
9,511,063
https://en.wikipedia.org/wiki/Borden%20Base%20Line
The Borden Base Line is a historic survey line (7.42 miles long) running north–south through Hatfield and South Deerfield, Massachusetts. It was completed in 1831. It was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1981. The baseline measurement was the first project of its kind undertaken in America, and essential for Massachusetts' pioneering Trigonometrical Survey, performed under chief engineer Robert Treat Paine. Its careful measurement was critical since the accuracy of the whole triangulation network depended on it. The baseline was measured with greater accuracy than previously possible by using a new measuring device invented by Simeon Borden, which employed a bi-metallic measuring instrument to provide constant readings despite temperature variations. His apparatus was long, enclosed in a tube, and employed with four compound microscopes. Borden was a highly competent engineer whose ability was widely recognized. Indeed, the entire project became generally known as the Borden Survey. He measured the baseline with a nominal accuracy of better than one part in 5 million. As Professor A. D. Butterfield has written, "The work performed and results obtained far surpassed in magnitude and attainment of any previous work of this kind in America." It appears that the north end of the baseline lies just south of the intersection of today's Route 116 and Route 5 in South Deerfield, Massachusetts. According to the Valley Historians, the south end is still marked by a copper plug set into a boulder, located in the back yard of the house at 30 Bridge Street, Hatfield, Massachusetts. See also Surveying External links American Society of Civil Engineers -Borden baseline 1831 landmark References Butterfield, A. D., "History and Development of Triangulation in Massachusetts", in The Journal of the Worcester Polytechnic Institute, Volume I, Nos. 3 and 4, pp. 285–299 and 335–355, 1898. Surveying Geography of the United States Historic Civil Engineering Landmarks Hatfield, Massachusetts Deerfield, Massachusetts
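For perspective on the quoted accuracy, a quick back-of-the-envelope calculation (using only the figures already given above, and assuming the nominal 7.42-mile length) shows how small a discrepancy "one part in 5 million" allows over the full baseline:

# Illustrative arithmetic only, based on the figures quoted in the article.
baseline_miles = 7.42
baseline_inches = baseline_miles * 5280 * 12          # feet per mile, inches per foot
error_inches = baseline_inches / 5_000_000            # one part in 5 million
print(round(baseline_inches))        # ~470131 inches in 7.42 miles
print(round(error_inches, 3))        # ~0.094 inch (about 2.4 mm) of allowable error

In other words, the stated accuracy corresponds to an uncertainty of roughly a tenth of an inch over the entire 7.42-mile line.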
Borden Base Line
[ "Engineering" ]
401
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
9,511,813
https://en.wikipedia.org/wiki/Zero%20waste%20agriculture
Zero waste agriculture is a type of sustainable agriculture which optimizes use of the five natural kingdoms, i.e. plants, animals, bacteria, fungi and algae, to produce biodiverse food, energy and nutrients in a synergistic integrated cycle of profit-making processes where the waste of each process becomes the feedstock for another process. History The integration of shallow oxidisation ponds of microalgae was demonstrated by Golueke & Oswald in the 1960s. The widespread global implementation of these systems can be largely credited to Prof George Lai Chan-Yu-Thim (2 March 1924, Mauritius - 8 October 2016, Mauritius) from ZERI. Zero waste agriculture is now practiced in China (ecological farming), Colombia (integrated food & waste management systems), Fiji (integrated farming systems), India (integrated biogas farming), South Africa (BEAT Coop & African Agroecological Biotechnology Initiative) and Mauritius. The Brazilian government has adopted integrated farming systems as a major social technology for the uplifting of marginalized and subsistence farmers through coordination with TECPAR. Zero waste agriculture combines mature ecological farming practices that deliver an integrated balance of job creation, poverty relief, food security, energy security, water conservation, climate change relief, land security & stewardship. Practice Zero waste agriculture is optimally practiced on small (1-5 ha) family-owned and managed farms, and it complements traditional farming & animal husbandry as practiced in most third world communities. Zero Waste Agriculture also preserves local indigenous systems and existing agrarian cultural values and practices. Zero waste agriculture presents a balance of economic, social and ecological benefits as it: optimizes food production in an ecologically sound manner; reduces water consumption through recycling and reduced evaporation; provides energy security through the harvesting of biomethane (biogas) and the extraction of biodiesel from micro-algae, as a by-product of food production; provides climate change relief through the substantial reduction in greenhouse gas emissions from both traditional agriculture practices and fossil fuel usage; and reduces the use of pesticides through biodiverse farming. Certification of such farming practices presents both a challenge and an opportunity. See also Agricultural technology a/k/a Agritech Integrated Multi-Trophic Aquaculture Miniwaste References Further reading Sustainable agriculture Waste Food waste
Zero waste agriculture
[ "Physics" ]
459
[ "Materials", "Waste", "Matter" ]
9,511,878
https://en.wikipedia.org/wiki/Depletion%20gilding
Depletion gilding is a method for producing a layer of nearly pure gold on an object made of gold alloy by removing the other metals from its surface. It is sometimes referred to as a "surface enrichment" process. Process Most gilding methods are additive: they deposit gold that was not there before onto the surface of an object. By contrast, depletion gilding is a subtractive process whereby material is removed to increase the purity of gold that is already present on an object's surface. In depletion gilding, other metals are etched away from the surface of an object composed of a gold alloy by the use of acids or salts, often in combination with heat. Since no gold is added, only an object made of an alloy that already contains gold can be depletion gilded. Depletion gilding relies on the fact that gold is highly resistant to oxidation or corrosion by most common chemicals, whereas many other metals are not. Depletion gilding is most often used to treat alloys of gold with copper or silver. Unlike gold, both copper and silver readily react with a variety of chemicals. For example, nitric acid is effective as an etching agent for both copper and silver. Under the proper circumstances, even ordinary table salt will react with either metal. The object to be gilded is coated, immersed, or packed in a suitable acid or salt, and usually heated to speed the process. These chemicals then attack the metallic copper and silver in the object's surface, transforming them into various copper and silver compounds. The resulting copper and silver compounds can be removed from the object's surface by a number of processes. Washing, chemical leaching, heating, or even physical absorption by porous materials such as brick dust have all been used historically. Meanwhile, the relatively inert gold is left unaffected. The result is a thin layer of nearly pure gold on the surface of the original object. There is no well-defined minimum gold content required to successfully depletion gild an object. However, the less gold that is present, the more other material must be etched away to produce the desired surface appearance. In addition, the removal of the other metals usually leaves the surface covered with microscopic voids and pits. This can make the surface soft and "spongy" with a dull or matte appearance. This effect becomes more pronounced as more base metal is removed. For this reason, most depletion gilded objects are burnished to make their surfaces more durable and give them a more attractive polished finish. Like other gilding processes, depletion gilding provides a way to produce the appearance of pure gold without its disadvantages: its cost and rarity, and its softness and density. By producing a layer of gold over a layer of copper or other metal, objects can be made that are lighter, sturdier, and cheaper while still appearing to be nearly pure gold. Variations The term depletion gilding usually refers to the production of a layer of gold. However, it can also be used to produce a layer that is an alloy of gold and silver, sometimes referred to as electrum. Certain chemicals, such as oxalic acid, attack copper but do not affect either silver or gold. Using such a chemical, it is possible to remove only the copper in an alloy, leaving both the silver and the gold behind. Thus, if the original object is composed of copper, silver, and gold, it can be given a gold surface by removing both silver and copper, or an electrum surface by removing only the copper. 
Likewise, with an appropriate chemical, a layer of nearly pure silver can be produced on an object made of copper and silver. For instance, sterling silver can be depleted—'depletion silvering'—to produce a fine silver surface, perhaps as preamble to application of gold, as in the Keum-boo technique. However, in the majority of cases depletion gilding is in fact used to produce a gold finish, rather than one of electrum or silver. Applications Depletion gilding is a decorative process, with no significant industrial applications. It is not widely used in modern times, having been superseded by electroplating. Some individual artisans and small shops continue to practice it. However, depletion gilding was widely used in antiquity. While it requires skill to execute it well, the process itself is technologically simple, and uses chemicals that were readily available to most ancient civilizations. Some form of depletion gilding has been used by nearly every culture that developed metalworking. The South American Sican culture in particular developed depletion gilding to a high art. Some ancient alloys, such as tumbaga, may have been developed specifically for use in depletion gilding. The technique was not known to be used by Anglo-Saxons until detailed examination with electron microscopes of treasures such as the Staffordshire Hoard revealed its use in the twenty-first century. Certain cultures are thought to have attached mythical or spiritual significance to the process. Gold was considered sacred in many early civilizations and was highly valued in nearly all of them, and anything relating to it had the potential to take on cultural importance. Moreover, the ability to turn what appeared to be an object made of copper into what seemed to be pure gold would be very impressive. There is some speculation that depletion gilding may have contributed to the concepts of alchemy, a major goal of which was to physically transform one metal into another. References External links The Ganoksin Project The Surface Enrichment of Carat Gold Alloys - Depletion Gilding (also used as a reference) Gilding Metallurgy
Depletion gilding
[ "Chemistry", "Materials_science", "Engineering" ]
1,155
[ "Metallurgy", "Materials science", "nan" ]
9,512,445
https://en.wikipedia.org/wiki/Manufacturing%20Engineering%20Centre
The Manufacturing Engineering Centre (MEC) is an international R&D Centre of Excellence for Advanced Manufacturing and Information Technology. The MEC was founded in 1996 under the directorship of Professor Duc Truong Pham. The Centre forms part of Cardiff University, which dates back to 1883 and is one of Britain's major civic universities. The MEC's purpose is to conduct research and development in all major areas of Advanced Manufacturing and use the output to promote the introduction of new manufacturing technology and practice to industry. It was the first autonomous research centre created by Cardiff University. Research The MEC conducts basic, strategic and applied research as well as technology transfer with partners from 22 countries in Europe, Asia and the Americas. The research spans a broad spectrum of subjects, including robotics and microsystems, sensor systems, high-speed automation and intelligent control, rapid manufacturing, micromanufacturing, nanotechnology, quality engineering, multimedia, virtual reality and enterprise information management. Since 1996, the Centre has received over £50 million in grants and contracts and has attracted hundreds of industrial partners. In 2004, the MEC won two EC 6th Framework Networks of Excellence contracts totalling 15M Euros in value. The two Networks of Excellence led by the MEC, I*PROMS and 4M, involve some 50 centres of excellence in the field of Advanced Manufacturing across the EU. As a Centre of Excellence for Technology and Industrial Collaboration (CETIC) sponsored by the Welsh Assembly Government (WAG) and the European Regional Development Fund (ERDF), the MEC has contributed significantly to the Welsh economy, having completed thousands of projects with local companies and helped to generate and safeguard jobs in the region. Awards Under Professor Pham's leadership, the MEC was awarded the DTI University/Industry First Prize by the Secretary of State for Trade and Industry for its success in building research partnerships with industry (March 1999), and the Queen's Anniversary Prize for Higher and Further Education in recognition of its contribution made to the economy (February 2001). References Cardiff University Engineering universities and colleges in the United Kingdom Industrial engineering Nanotechnology institutions
Manufacturing Engineering Centre
[ "Materials_science", "Engineering" ]
432
[ "Nanotechnology", "Nanotechnology institutions", "Industrial engineering" ]
9,512,449
https://en.wikipedia.org/wiki/Canonical%20units
A canonical unit is a unit of measurement agreed upon as default in a certain context. In astrodynamics In astrodynamics, canonical units are defined in terms of the orbit of some important object that serves as a reference. In this system, a reference mass, for example the Sun's, is designated as 1 "canonical mass unit" and the mean distance from the orbiting object to the reference object is taken as the "canonical distance unit". Canonical units are useful when the precise distances and masses of objects in space are not available. Moreover, by designating the mass of some chosen central or primary object to be "1 canonical mass unit" and the mean distance of the reference object to another object in question to be "1 canonical distance unit", many calculations can be simplified. Overview The canonical distance unit (DU) is defined to be the mean radius of the reference orbit. The canonical time unit (TU) is defined through the gravitational parameter mu = GM, where G is the gravitational constant and M is the mass of the central reference body. In canonical units, the gravitational parameter is given by mu = 1 DU^3/TU^2. Any triplet of numbers DU, TU and mu that satisfies the equation above is a "canonical" set. The value of the time unit can be expressed in another unit system (e.g. the metric system) if the mass and radius of the central body have been determined. Setting the two expressions for mu equal to each other and applying dimensional analysis gives GM = DU^3/TU^2. The time unit (TU) can therefore be converted to another unit system using TU = sqrt(DU^3/GM). For Earth-orbiting satellites, approximate unit conversions are as follows: 1 DU = 6378.1 km = 20,925,524.97 ft 1 DU/TU = 7.90538 km/s = 25,936.29 ft/sec 1 TU = 806.80415 s Astronomical Unit The astronomical unit (AU) is the canonical distance unit for the orbit around the Sun of the combined Earth-Moon system (based on the formerly best-known value). The corresponding time unit is the (sidereal) year, and the mass is the total mass of the Sun (the solar mass). See also Astronomical unit Conversion of units Footnotes References Astrodynamics Celestial mechanics
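The relation TU = sqrt(DU^3/GM) lends itself to a short numerical check. The following is a minimal sketch, not part of the article: it derives the Earth canonical time and speed units from a standard reference value of Earth's gravitational parameter, and the sample radius and speed at the end are hypothetical values chosen only to show the conversion back to metric units.

```python
# Minimal sketch (assumptions noted above) of Earth canonical units.
import math

GM_EARTH = 398_600.4418   # Earth's gravitational parameter, km^3/s^2 (standard reference value)
DU = 6378.1               # canonical distance unit = Earth's equatorial radius, km

# From mu = DU^3 / TU^2  =>  TU = sqrt(DU^3 / mu)
TU = math.sqrt(DU**3 / GM_EARTH)           # canonical time unit, seconds
speed_unit = DU / TU                       # canonical speed unit, km/s

print(f"1 TU    ~ {TU:.5f} s")             # ~806.8 s, as quoted in the article
print(f"1 DU/TU ~ {speed_unit:.5f} km/s")  # ~7.905 km/s

# Converting a state expressed in canonical units back to metric:
r_canonical = 1.5          # orbital radius in DU (hypothetical example value)
v_canonical = 0.8          # speed in DU/TU (hypothetical example value)
print(f"r = {r_canonical * DU:.1f} km, v = {v_canonical * speed_unit:.3f} km/s")
```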
Canonical units
[ "Physics", "Engineering" ]
452
[ "Astrodynamics", "Classical mechanics", "Astrophysics", "Aerospace engineering", "Celestial mechanics" ]
9,512,454
https://en.wikipedia.org/wiki/Valve%20guide
Valve guides are cylindrical metal bushes, pressed or integrally cast into the cylinder head of most types of reciprocating engines, to support the poppet valves so that each valve makes proper contact with its valve seat. Along with a corresponding valve spring, they are one component of an engine's valve train. Guides also serve to conduct heat from the combustion process out from the exhaust valve and into the cylinder head where it may be taken up by the cooling system. Bronze is commonly used, as are various iron alloys; a balance between stiffness and wear on the valve is essential to achieve a useful service life. The clearance between the inner diameter of the valve guide and the outer diameter of the poppet valve stem is critical for the proper performance of an engine. If there is too little clearance, the valve may stick as oil contaminants and thermal expansion become factors. If there is too much clearance, the valve may not seat properly and excessive oil consumption can occur. Oil seal The upper part of the valve stem, within the rocker box, is lubricated by oil. If this oil travels unchecked along the valve stem, engine hydrocarbon (HC) emissions will become excessive. To control this, an elastomeric seal is fitted over the top of the valve guide. These seals may wear or stiffen with age, so they are usually replaced whenever valves are removed for servicing. Wear Over time, the inner diameter of the valve guide and the outer diameter of the valve stem may become worn. Reaming In the 1980s, many U.S. production engine remanufacturers began reaming valve guides, rather than replacing them, as part of their remanufacturing process. They found that by reaming all the valve guides in a head to one standard size (typically 0.008 in. diametrically oversized), and installing remanufactured engine valves having stems that are also oversized, a typical engine head can be remanufactured in much less time. Since the reaming process leaves the valve guide with a much better surface finish and shape than typical replacement guides, and since the oversize valves often have chrome-plated stems, remanufacturers also discovered that valve train warranty issues are virtually eliminated. Studies have been conducted which show that through the proper selection of the reamer and reaming process, valve guides can be quickly and efficiently reamed to a consistently repeatable size. Replacement Valve guides are typically shaped as a tube with a flare at one end. Their replacement involves removing the worn part by driving it out with a hammer and a specially shaped punch. Installation may involve shrink-fitting: heating the cylinder head and cooling the valve guide so as to ease insertion, then driving the new guide in quickly with a press or a hammer. Once the parts return to room temperature, the new valve guide will be solidly in place and ready to be reamed and honed to the proper diameter. References Automobile engines Mechanisms (engineering) Engine valves
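The shrink-fitting step described above lends itself to a quick back-of-the-envelope estimate. The sketch below is an illustration only: the expansion coefficients are typical handbook values for an aluminium head and a bronze guide, and the guide diameter, interference, and temperature swings are hypothetical numbers chosen to show the calculation, not figures from this article.

```python
# Back-of-the-envelope sketch of shrink-fitting a valve guide: heating the head
# expands the bore, chilling the guide shrinks it, and each diameter changes by
# roughly delta_D = alpha * D * delta_T. All numbers are illustrative assumptions.

ALPHA_ALUMINIUM = 23e-6   # 1/degC, typical thermal expansion coefficient (head)
ALPHA_BRONZE    = 18e-6   # 1/degC, typical thermal expansion coefficient (guide)

guide_od_mm   = 12.0      # nominal guide outside diameter, mm (hypothetical)
interference  = 0.04      # press-fit interference at room temperature, mm (hypothetical)

head_heated_by  = 100.0   # degC above room temperature
guide_cooled_by = 200.0   # degC below room temperature (e.g. dry ice / liquid nitrogen)

bore_growth   = ALPHA_ALUMINIUM * guide_od_mm * head_heated_by
guide_shrink  = ALPHA_BRONZE    * guide_od_mm * guide_cooled_by
net_clearance = bore_growth + guide_shrink - interference

print(f"bore grows by    {bore_growth:.4f} mm")
print(f"guide shrinks by {guide_shrink:.4f} mm")
print(f"net assembly clearance: {net_clearance:+.4f} mm")  # positive => guide slips in easily
```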
Valve guide
[ "Technology", "Engineering" ]
615
[ "Mechanisms (engineering)", "Mechanical engineering", "Engines", "Automobile engines" ]
9,512,766
https://en.wikipedia.org/wiki/Extrinsic%20semiconductor
An extrinsic semiconductor is one that has been doped; during manufacture of the semiconductor crystal a trace element or chemical called a doping agent has been incorporated chemically into the crystal, for the purpose of giving it different electrical properties than the pure semiconductor crystal, which is called an intrinsic semiconductor. In an extrinsic semiconductor it is these foreign dopant atoms in the crystal lattice that mainly provide the charge carriers which carry electric current through the crystal. The doping agents used are of two types, resulting in two types of extrinsic semiconductor. An electron donor dopant is an atom which, when incorporated in the crystal, releases a mobile conduction electron into the crystal lattice. An extrinsic semiconductor that has been doped with electron donor atoms is called an n-type semiconductor, because the majority of charge carriers in the crystal are negative electrons. An electron acceptor dopant is an atom which accepts an electron from the lattice, creating a vacancy where an electron should be, called a hole, which can move through the crystal like a positively charged particle. An extrinsic semiconductor which has been doped with electron acceptor atoms is called a p-type semiconductor, because the majority of charge carriers in the crystal are positive holes. Doping is the key to the extraordinarily wide range of electrical behavior that semiconductors can exhibit, and extrinsic semiconductors are used to make semiconductor electronic devices such as diodes, transistors, integrated circuits, semiconductor lasers, LEDs, and photovoltaic cells. Sophisticated semiconductor fabrication processes such as photolithography, combined with ion implantation or diffusion, can place different dopant elements in different regions of the same semiconductor crystal wafer, creating semiconductor devices on the wafer's surface. For example, a common type of transistor, the n-p-n bipolar transistor, consists of an extrinsic semiconductor crystal with two regions of n-type semiconductor, separated by a region of p-type semiconductor, with metal contacts attached to each part. Conduction in semiconductors A solid substance can conduct electric current only if it contains charged particles, electrons, which are free to move about and not attached to atoms. In a metal conductor, it is the metal atoms that provide the electrons; typically each metal atom releases one of its outer orbital electrons to become a conduction electron which can move about throughout the crystal and carry electric current. Therefore, the number of conduction electrons in a metal is equal to the number of atoms, a very large number, making metals good conductors. Unlike in metals, the atoms that make up the bulk semiconductor crystal do not provide the electrons which are responsible for conduction. In semiconductors, electrical conduction is due to the mobile charge carriers, electrons or holes, which are provided by impurities or dopant atoms in the crystal. In an extrinsic semiconductor, the concentration of doping atoms in the crystal largely determines the density of charge carriers, which determines its electrical conductivity, as well as a great many other electrical properties. This is the key to semiconductors' versatility; their conductivity can be manipulated over many orders of magnitude by doping. Semiconductor doping Semiconductor doping is the process that changes an intrinsic semiconductor to an extrinsic semiconductor. During doping, impurity atoms are introduced to an intrinsic semiconductor. 
Impurity atoms are atoms of a different element than the atoms of the intrinsic semiconductor. Impurity atoms act as either donors or acceptors to the intrinsic semiconductor, changing the electron and hole concentrations of the semiconductor. Impurity atoms are classified as either donor or acceptor atoms based on the effect they have on the intrinsic semiconductor. Donor impurity atoms have more valence electrons than the atoms they replace in the intrinsic semiconductor lattice. Donor impurities "donate" their extra valence electrons to a semiconductor's conduction band, providing excess electrons to the intrinsic semiconductor. Excess electrons increase the electron carrier concentration (n0) of the semiconductor, making it n-type. Acceptor impurity atoms have fewer valence electrons than the atoms they replace in the intrinsic semiconductor lattice. They "accept" electrons from the semiconductor's valence band. This provides excess holes to the intrinsic semiconductor. Excess holes increase the hole carrier concentration (p0) of the semiconductor, creating a p-type semiconductor. Semiconductors and dopant atoms are defined by the column of the periodic table in which they fall. The column definition of the semiconductor determines how many valence electrons its atoms have and whether dopant atoms act as the semiconductor's donors or acceptors. Group IV semiconductors use group V atoms as donors and group III atoms as acceptors. Group III–V semiconductors, the compound semiconductors, use group VI atoms as donors and group II atoms as acceptors. Group III–V semiconductors can also use group IV atoms as either donors or acceptors. When a group IV atom replaces the group III element in the semiconductor lattice, the group IV atom acts as a donor. Conversely, when a group IV atom replaces the group V element, the group IV atom acts as an acceptor. Group IV atoms can act as both donors and acceptors; therefore, they are known as amphoteric impurities. The two types of semiconductor N-type semiconductors N-type semiconductors are created by doping an intrinsic semiconductor with an electron donor element during manufacture. The term n-type comes from the negative charge of the electron. In n-type semiconductors, electrons are the majority carriers and holes are the minority carriers. A common dopant for n-type silicon is phosphorus or arsenic. In an n-type semiconductor, the Fermi level is greater than that of the intrinsic semiconductor and lies closer to the conduction band than the valence band. Examples: phosphorus, arsenic, antimony, etc. P-type semiconductors P-type semiconductors are created by doping an intrinsic semiconductor with an electron acceptor element during manufacture. The term p-type refers to the positive charge of a hole. As opposed to n-type semiconductors, p-type semiconductors have a larger hole concentration than electron concentration. In p-type semiconductors, holes are the majority carriers and electrons are the minority carriers. A common p-type dopant for silicon is boron or gallium. For p-type semiconductors the Fermi level is below the intrinsic semiconductor and lies closer to the valence band than the conduction band. Examples: boron, aluminium, gallium, etc. Use of extrinsic semiconductors Extrinsic semiconductors are components of many common electrical devices. A semiconductor diode (devices that allow current in only one direction) consists of p-type and n-type semiconductors placed in junction with one another. 
Currently, most semiconductor diodes use doped silicon or germanium. Transistors (devices that enable current switching) also make use of extrinsic semiconductors. Bipolar junction transistors (BJTs), which amplify current, are one type of transistor. The most common BJTs are the NPN and PNP types. NPN transistors have two layers of n-type semiconductor sandwiching a p-type semiconductor. PNP transistors have two layers of p-type semiconductor sandwiching an n-type semiconductor. Field-effect transistors (FETs) are another type of transistor that amplifies current and is built from extrinsic semiconductors. As opposed to BJTs, they are called unipolar because they involve single-carrier-type operation, either N-channel or P-channel. FETs are divided into two families: junction gate FETs (JFETs), which are three-terminal devices, and insulated gate FETs (IGFETs), which are four-terminal devices. Other devices implementing extrinsic semiconductors: Lasers Solar cells Photodetectors Light-emitting diodes Thyristors See also Intrinsic semiconductor Doping (semiconductor) List of semiconductor materials References External links Howstuffworks: How Semiconductors Work Semiconductor material types
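The effect of donor doping on the two carrier populations described above can be illustrated numerically. The following is a minimal sketch, not part of the article: it assumes full dopant ionization and the standard mass-action law n*p = ni^2 at thermal equilibrium; the intrinsic carrier concentration used for silicon (about 1e10 cm^-3 near room temperature) is a commonly quoted approximate value, and the doping level is a hypothetical example.

```python
# Minimal sketch (assumptions noted above): carrier concentrations in
# phosphorus-doped (n-type) silicon, assuming full ionization and n * p = ni^2.

NI_SILICON = 1.0e10      # intrinsic carrier concentration of Si near 300 K, cm^-3 (approx.)

def carrier_concentrations(donor_density_cm3: float) -> tuple[float, float]:
    """Return (electron, hole) concentrations for an n-type sample, in cm^-3."""
    n = donor_density_cm3            # majority carriers ~ donor density (full ionization)
    p = NI_SILICON**2 / n            # minority carriers from the mass-action law
    return n, p

n, p = carrier_concentrations(1.0e16)   # a moderate, hypothetical doping level
print(f"electrons (majority): {n:.2e} cm^-3")
print(f"holes     (minority): {p:.2e} cm^-3")   # ~1e4 cm^-3, vastly outnumbered
```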
Extrinsic semiconductor
[ "Chemistry" ]
1,656
[ "Semiconductor material types", "Semiconductor materials" ]
9,512,856
https://en.wikipedia.org/wiki/Otto%20Scherzer
Otto Scherzer (9 March 1909 – 15 November 1982) was a German theoretical physicist who made contributions to electron microscopy. Education Scherzer studied physics at the Munich Technical University and the Ludwig Maximilians University of Munich (LMU) from 1927 to 1931. At LMU his thesis advisor was Arnold Sommerfeld, and he was granted his doctorate in 1931. His thesis was on the quantum theory of Bremsstrahlung. From 1932 to 1933, Scherzer was an assistant to Carl Ramsauer at the Allgemeine Elektrizitäts-Gesellschaft, an electric combine with headquarters in Berlin and Frankfurt-on-Main. There, he did research on electron optics. He completed his Habilitation in 1934, and he then became a Privatdozent at LMU and an assistant to Sommerfeld. Career In 1935, Scherzer moved to the Technische Hochschule Darmstadt. In 1936, he became an extraordinarius professor and director of the theoretical physics department. In a landmark 1936 paper, Scherzer proved that the spherical and chromatic aberrations of a rotationally symmetric, static, space-charge-free, dioptric lens for electron beams cannot be eliminated by skillful design, in contrast to the case for glass lenses. This was later called Scherzer's theorem and is the only named and well-established theorem in the field of charged particle optics. In 1947, Scherzer published a sequel to this paper proposing various corrected lenses, each dependent upon abandoning one or another of the requirements set forth in the 1936 paper. Scherzer's derivations contributed to the development of electron microscopy. From 1939 to 1945, Scherzer worked on radar at the communications research headquarters of the German Navy (Nachrichtenmittel-Versuchskommando der Kriegsmarine). In a communication with Sommerfeld, dated 2 December 1944, Scherzer reported war damage in Darmstadt and commented on his work on radar. From 1944 to 1945, Scherzer was head of radar research (Arbeitsbereich Funkmesstechnik) for the Reich Research Council (Reichsforschungsrat), which was the coordinating agency in the Reich Education Ministry (Reichserziehungsministerium) for the centralized planning of basic and applied research. In 1954, Scherzer became ordinarius professor at the Technische Hochschule Darmstadt, where he helped found the Society for Heavy Ion Research. A literature citation places Scherzer at Darmstadt as late as 1978. Scherzer died in Darmstadt. Awards 1983 – Microscopy Society of America, Distinguished Scientist Award, Physical Sciences Selected bibliography O. Scherzer, "Sphärische und chromatische Korrektur von Elektronenlinsen", Optik 2, 114–132 (1947), as cited in Peter Hawkes, Recent Advances in Electron Optics and Electron Microscopy. O. Scherzer (Signal Corps Engineering Laboratories, Fort Monmouth, New Jersey), "The Theoretical Resolution Limit of the Electron Microscope", Journal of Applied Physics 20 (1), 20–29 (1948). Received June 14, 1948. O. Scherzer, "Limitations for the resolving power of electron microscopes", Proceedings ICEM-9, Volume 3, 123–9 (1978), as cited in Peter Hawkes, The Long Road to Spherical Aberration Correction. Books E. Brüche and O. Scherzer, Geometrische Elektronenoptik: Grundlagen und Anwendungen (Springer, 1934) Notes References Klaus Hentschel (editor) and Ann M. 
Hentschel, (editorial assistant and translator), Physics and National Socialism: An Anthology of Primary Sources (Birkhäuser, 1996) 1909 births 1982 deaths People from Passau 20th-century German physicists Technical University of Munich alumni Microscopy
Otto Scherzer
[ "Chemistry" ]
807
[ "Microscopy" ]
9,513,764
https://en.wikipedia.org/wiki/Generic%20Authentication%20Architecture
Generic Authentication Architecture (GAA) is a standard made by 3GPP defined in TR 33.919. Taken from the document: "This Technical Report aims to give an overview of the different mechanisms that mobile applications can rely upon for authentication between server and client (i.e. the UE). Additionally it provides guidelines related to the use of GAA and to the choice of authentication mechanism in a given situation and for a given application". Related standards are Generic Bootstrapping Architecture (GBA) and Support for Subscriber Certificates (SSC). External links 3GPP Mobile telecommunications standards 3GPP standards
Generic Authentication Architecture
[ "Technology" ]
128
[ "Mobile telecommunications", "Mobile telecommunications standards" ]
9,514,440
https://en.wikipedia.org/wiki/Math%20Curse
Math Curse is a children's picture book written by Jon Scieszka and illustrated by Lane Smith. Published in 1995 through Viking Press, the book tells the story of a student who is cursed by the manner in which mathematics is connected to everyday life. In 2009, a film based on the book was released by Weston Woods Studios, Inc. Plot summary The nameless student begins with a seemingly innocent statement by her math teacher: "you know, almost everything in life can be considered a math problem." The next morning, the hero finds herself thinking of the time she needs to get up along the lines of algebra. Next comes the mathematical school of probability, followed by charts and statistics. As the narrator slowly turns into a "math zombie", everything in her life is transformed into a problem. A class treat of cupcakes becomes a study in fractions, while a trip to the store turns into a problem of money. Finally, she is left painstakingly calculating how many minutes of "math madness" will be in her life now that she is a "mathematical lunatic." Her sister asks her what her problem is, and she responds, "365 days x 24 hours x 60 minutes." Finally, she collapses on her bed, and dreams that she is trapped in a blackboard-room covered in math problems. Armed with only a piece of chalk, she must escape and she manages to do just that by breaking the chalk in half, because "two halves make a whole." She escapes through this "whole", and awakens the next morning with the ability to solve any problem. Her curse is broken...until the next day, when her science teacher mentions that in life, everything can be viewed as a science experiment. Math problems The book is full of actual math problems (and some rather unrelated questions, such as "What does this inkblot look like?"). Readers can try to solve the problems and check their answers, which are located on the back cover of the book. Stage adaptation The book was also adapted for the stage by Heath Corson and Kathleen Collins in 1997. It was first performed at the A Red Orchid Theatre in Chicago, Illinois, in 1997, with subsequent productions at other locations. Its West Coast premiere was in 2003 at the Powerhouse Theatre of Santa Monica, California. It was directed by Collins, and the cast included Kerry Lacy, Thomas Colby, Will Moran, Andrew David James, and Emily Marver. The play met with warm reviews and succeeded with its audiences as well as local school children. Awards The book was critically acclaimed, receiving a number of awards and accolades, including Maine's Student Favorite Book Award, the Texas Bluebonnet Award, and New Hampshire's The Great Stone Face Book Award. American picture books 1995 children's books Children's fiction books Mathematics fiction books Mathematics books
Math Curse
[ "Mathematics" ]
582
[ "Recreational mathematics", "Mathematics fiction books" ]
9,514,491
https://en.wikipedia.org/wiki/Sarcalumenin
Sarcalumenin is a protein that in humans is encoded by the SRL gene. Sarcalumenin is a calcium-binding protein found in the sarcoplasmic reticulum of striated muscle. Sarcalumenin is partially responsible for calcium buffering in the lumen of the sarcoplasmic reticulum and assists the calcium pump proteins. Additionally, sarcalumenin is necessary for maintaining a normal sinus rhythm during both aerobic and anaerobic exercise. Sarcalumenin is a calcium-binding glycoprotein composed of 473 acidic amino acids with a molecular weight of 160 kDa. Together with other luminal calcium buffer proteins, sarcalumenin plays an important role in the regulation of calcium uptake and release during excitation-contraction coupling (ECC) in muscle fibers. References External links Cell signaling Signal transduction
Sarcalumenin
[ "Chemistry", "Biology" ]
187
[ "Biotechnology stubs", "Signal transduction", "Biochemistry stubs", "Biochemistry", "Neurochemistry" ]
9,514,636
https://en.wikipedia.org/wiki/Parvalbumin
Parvalbumin (PV) is a calcium-binding protein with low molecular weight (typically 9-11 kDa). In humans, it is encoded by the PVALB gene. It is a member of the albumin family; it is named for its size (parv-, from Latin which means "small") and its ability to coagulate. It has three EF hand motifs and is structurally related to calmodulin and troponin C. Parvalbumin is found in fast-contracting muscles, where its levels are highest, as well as in the brain and some endocrine tissues. Parvalbumin is a small, stable protein containing EF-hand type calcium binding sites. It is involved in calcium signaling. Typically, this protein is broken into three domains, domains AB, CD and EF, each individually containing a helix-loop-helix motif. The AB domain houses a two amino-acid deletion in the loop region, whereas domains CD and EF contain the N-terminal and C-terminal, respectively. Calcium binding proteins like parvalbumin play a role in many physiological processes, namely cell-cycle regulation, second messenger production, muscle contraction, organization of microtubules and phototransduction. Therefore, calcium-binding proteins must distinguish calcium in the presence of high concentrations of other metal ions. The mechanism for the calcium selectivity has been extensively studied. Location and function In neural tissue Parvalbumin is present in some GABAergic interneurons in the nervous system, especially the reticular thalamus, and expressed predominantly by chandelier and basket cells in the cortex. In the cerebellum, PV is expressed in Purkinje cells and molecular layer interneurons. In the hippocampus, PV+ interneurons are subdivided into basket, axo-axonic, and bistratified cells, each subtype targeting distinct compartments of pyramidal cells. PV interneurons' connections are mostly perisomatic (around the cell body of neurons). Most of the PV interneurons are fast-spiking. They are also thought to give rise to gamma waves recorded in EEG. PV-expressing interneurons represent approximately 25% of GABAergic cells in the primate DLPFC. Other calcium-binding protein markers are calretinin (most abundant subtype in DLPFC, about 50%) and calbindin. Interneurons are also divided into subgroups by the expression of neuropeptides such as somatostatin, neuropeptide Y, cholecystokinin. In muscular tissue PV is known to be involved in relaxation of fast-twitch muscle fibers. This function is associated with the PV role in calcium sequestration. During muscle contraction, the action potential stimulates voltage-sensitive proteins in the T-tubule membrane. These proteins stimulate the opening of Ca2+ channels in the sarcoplasmic reticulum, leading to release of Ca2+ in the sarcoplasm. The Ca2+ ions bind to troponin, which causes the displacement of tropomyosin, a protein that prevents myosin walking along actin. The displacement of tropomyosin exposes the myosin-binding sites on actin, permitting muscle contraction. This way, while muscle contraction is driven by Ca2+ release, muscle relaxation is driven by Ca2+ removal from sarcoplasm. Along with Ca2+ pumps, PV contributes to Ca2+ removal from cytoplasm: PV binds to Ca2+ ions in the sarcoplasm, and then shuttles it to the sarcoplasmic reticulum. Clinical significance Decreased PV and GAD67 expression was found in PV+ GABAergic interneurons in schizophrenia. Parvalbumin has been identified as the major allergen causing fish meat allergy (but not shellfish allergy). 
Most bony fishes manifest β-parvalbumins as major allergens and cartilaginous fishes such as sharks and rays manifest α-parvalbumins as major allergens; allergenicity to bony fishes has a low cross-reactivity to cartilaginous fishes and also chicken meat. Parvalbumins, an ancient family of proteins Parvalbumins and their genes have only been found in jawed vertebrate species so far. From the evolutionary level of sharks, already three major lineages of parvalbumins can be distinguished: (1) α-parvalbumins, which include the above discussed human "parvalbumin"; (2) oncomodulins (sometimes called "β-1 parvalbumins"), which are also found in human and mouse; and (3) β-2 parvalbumins, which are the major allergens in most bony fish and were lost in human and mouse but conserved in some primitive mammals. All parvalbumins share a highly conserved structure (see the figure), which explains their high level of sequence conservation, resulting in the above-mentioned cross-reactivity in allergenic reactions against different bony fish species and even species from other animal clades such as chicken. Bony fishes have, depending on the species, combined for all three parvalbumin lineages between 7 and 22 genes. Although in most bony fishes the β-2 parvalbumins are the major allergens, in some bony fishes the α-parvalbumins are the highest expressed in muscle and were identified as the allergens. The allergen nomenclature is partly based on the order of allergen detection per species, and therefore identical allergen numbers in different fish species do not always refer to the same gene (see the table). History The protein was discovered in 1965 as a component of the fast-twitching white muscle of fish. It was described as a low molecular-weight "albumin". It is unknown who coined the term parvalbumin, but the word is already in use by 1967. References External links Molecular neuroscience EF-hand-containing proteins Articles containing video clips
Parvalbumin
[ "Chemistry" ]
1,280
[ "Molecular neuroscience", "Molecular biology" ]
9,515,578
https://en.wikipedia.org/wiki/Cellular%20manufacturing
Cellular manufacturing is a process of manufacturing which is a subsection of just-in-time manufacturing and lean manufacturing encompassing group technology. The goal of cellular manufacturing is to move as quickly as possible, make a wide variety of similar products, while making as little waste as possible. Cellular manufacturing involves the use of multiple "cells" in an assembly line fashion. Each of these cells is composed of one or multiple different machines which accomplish a certain task. The product moves from one cell to the next, each station completing part of the manufacturing process. Often the cells are arranged in a "U-shape" design because this allows for the overseer to move less and have the ability to more readily watch over the entire process. One of the biggest advantages of cellular manufacturing is the amount of flexibility that it has. Since most of the machines are automatic, simple changes can be made very rapidly. This allows for a variety of scaling for a product, minor changes to the overall design, and in extreme cases, entirely changing the overall design. These changes, although tedious, can be accomplished extremely quickly and precisely. A cell is created by consolidating the processes required to create a specific output, such as a part or a set of instructions. These cells allow for the reduction of extraneous steps in the process of creating the specific output, and facilitate quick identification of problems and encourage communication of employees within the cell in order to resolve issues that arise quickly. Once implemented, cellular manufacturing has been said to reliably create massive gains in productivity and quality while simultaneously reducing the amount of inventory, space and lead time required to create a product. It is for this reason that the one-piece-flow cell has been called "the ultimate in lean production." History Cellular manufacturing is derivative of principles of group technology, which were proposed by American industrialist Ralph Flanders in 1925 and adopted in Russia by the scientist Sergei Mitrofanov in 1933 (whose book on the subject was translated into English in 1959). Burbidge actively promoted group technology in the 1970s. "Apparently, Japanese firms began implementing cellular manufacturing sometime in the 1970s," and in the 1980s cells migrated to the United States as an element of just-in-time (JIT) production. One of the first English-language books to discuss cellular manufacturing, that of Hall in 1983, referred to a cell as a “U-line,” for the common, or ideal, U-shaped configuration of a cell—ideal because that shape puts all cell processes and operatives into a cluster, affording high visibility and contact. By 1990 cells had come to be treated as foundation practices in JIT manufacturing, so much so that Harmon and Peterson, in their book, Reinventing the Factory, included a section entitled, "Cell: Fundamental Factory of the Future". Cellular manufacturing was carried forward in the 1990s, when just-in-time was renamed lean manufacturing. Finally, when JIT/lean became widely attractive in the service sector, cellular concepts found their way into that realm; for example, Hyer and Wemmerlöv's final chapter is devoted to office cells. Cell design Cells are created in a workplace to facilitate flow. This is accomplished by bringing together operations or machines or people involved in a processing sequence of a products natural flow and grouping them close to one another, distinct from other groups. 
This grouping is called a cell. These cells are used to improve many factors in a manufacturing setting by allowing one-piece flow to occur. An example of one-piece flow would be the production of a metallic case part that arrives at the factory from the vendor in separate pieces, requiring assembly. First, the pieces would be moved from storage to the cell, where they would be welded together, then polished, then coated, and finally packaged. All of these steps would be completed in a single cell, so as to minimize various factors (called non-value-added processes/steps) such as the time required to transport materials between steps. Some common formats of single cells are: the U-shape (good for communication and quick movement of workers), the straight line, or the L-shape. The number of workers inside these formations depends on current demand and can be modulated to increase or decrease production. For example, if a cell is normally occupied by two workers and demand is doubled, four workers should be placed in the cell. Similarly, if demand halves, one worker will occupy the cell. Since cells have a variety of differing equipment, it is therefore a requirement that any employee is skilled at multiple processes. While there are many advantages to forming cells, some benefits are immediately obvious. It is quickly evident from observation of cells where inefficiencies lie, such as when an employee is too busy or relatively inactive. Resolving these inefficiencies can increase production and productivity by up to and above 100% in many cases. In addition to this, formation of cells consistently frees up floor space in the manufacturing/assembly environment (by having inventory only where it is absolutely required), improves safety in the work environment (due to smaller quantities of product/inventory being handled), improves morale (by imparting feelings of accomplishment and satisfaction in employees), reduces the cost of inventory, and reduces inventory obsolescence. When formation of a cell would be too difficult, a simple principle is applied in order to improve efficiencies and flow, that is, to perform processes in a specific location and gather materials to that point at a rate dictated by an average of customer demand (this rate is called the takt time). This is referred to as the Pacemaker Process. Despite the advantages of designing for one-piece flow, the formation of a cell must be carefully considered before implementation. Use of costly and complex equipment that tends to break down can cause massive delays in production and will ruin output until it can be brought back online. The short travel distances within cells serve to quicken the flows. Moreover, the compactness of a cell minimizes space that might allow build-ups of inventory between cell stations. To formalize that advantage, cells often have designed-in rules or physical devices that limit the amount of inventory between stations. Such a rule is known, in JIT/lean parlance, as kanban (from the Japanese), which establishes a maximum number of units allowable between a providing and a using work station. (Discussion and illustrations of cells in combinations with kanban are found in) The simplest form, kanban squares, are marked areas on floors or tables between work stations. The rule, applied to the producing station: "If all squares are full, stop. If not, fill them up." 
An office cell applies the same ideas: clusters of broadly trained cell-team members that, in concert, quickly handle all of the processing for a family of services or customers. A virtual cell is a variation in which all cell resources are not brought together in a physical space. In a virtual cell, as in the standard model, team members and their equipment are dedicated to a family of products or services. Although people and equipment are physically dispersed, as in a job shop, their narrow product focus aims for and achieves quick throughput, with all its advantages, just as if the equipment were moved into a cellular cluster. Lacking the visibility of physical cells, virtual cells may employ the discipline of kanban rules in order to tightly link the flows from process to process. A simple but rather complete description of cell implementation comes from a 1985 booklet of 96 pages by Kone Corp. in Finland, producer of elevators, escalators, and the like. Excerpts follow: Implementation process In order to implement cellular manufacturing, a number of steps must be performed. First, the parts to be made must be grouped by similarity (in design or manufacturing requirements) into families. Then a systematic analysis of each family must be performed, typically in the form of production flow analysis (PFA) for manufacturing families, or in the examination of design/product data for design families. This analysis can be time-consuming and costly, but is important because a cell needs to be created for each family of parts. Clustering of machines and parts is one of the most popular production flow analysis methods. The algorithms for machine-part grouping include Rank Order Clustering, Modified Rank Order Clustering, and Similarity coefficients (a small illustrative sketch of Rank Order Clustering is given after the reference list below). There are also a number of mathematical models and algorithms to aid in planning a cellular manufacturing center, which take into account a variety of important variables such as "multiple plant locations, multi-market allocations with production planning and various part mix." Once these variables are determined with a given level of uncertainty, optimizations can be performed to minimize factors such as "total cost of holding, inter-cell material handling, external transportation, fixed cost for producing each part in each plant, machine and labor salaries." Difficulties in creating flow The key to creating flow is continuous improvement to production processes. Upon implementation of cellular manufacturing, management commonly "encounters strong resistance from production workers". It will be beneficial to allow the change to cellular manufacturing to happen gradually in this process. It is also difficult to fight the desire to have some inventory on hand. It is tempting, since it would be easier to recover from an employee suddenly having to take sick leave. Unfortunately, in cellular manufacturing, it is important to remember the main tenets: "You sink or swim together as a unit" and "Inventory hides problems and inefficiencies." If the problems are not identified and subsequently resolved, the process will not improve. Another common set of problems stems from the need to transfer materials between operations. These problems include "exceptional elements, number of voids, machine distances, bottleneck machines and parts, machine location and relocation, part routing, cell load variation, inter and intracellular material transferring, cell reconfiguring, dynamic part demands, and operation and completion times." 
These difficulties need to be considered and addressed to create efficient flow in cellular manufacturing. Benefits and costs Cellular manufacturing brings scattered processes together to form short, focused paths in concentrated physical space. So constructed, by logic a cell reduces flow time, flow distance, floor space, inventory, handling, scheduling transactions, and scrap and rework (the latter because of quick discovery of nonconformities). Moreover, cells lead to simplified, higher validity costing, since the costs of producing items are contained within the cell rather than scattered in distance and the passage of reporting time. Cellular manufacturing facilitates both production and quality control. Cells that are underperforming in either volume or quality can be easily isolated and targeted for improvement. The segmentation of the production process allows problems to be easily located and it is more clear which parts are affected by the problem. There are also a number of benefits for employees working in cellular manufacturing. The small cell structure improves group cohesiveness and scales the manufacturing process down to a more manageable level for the workers. Workers can more easily see problems or possible improvements within their own cells and tend to be more self-motivated to propose changes. Additionally, these improvements that are instigated by the workers themselves cause less and less need for management, so over time overhead costs can be reduced. Furthermore, the workers often are able to rotate between tasks within their cell, which offers variety in their work. This can further increase efficiency because work monotony has been linked to absenteeism and reduced production quality. Case studies in just-in-time and lean manufacturing are replete with impressive quantitative measures along those lines. For example, BAE Systems, Platform Solutions (Fort Wayne, Ind.), producing aircraft engine monitors and controls, implemented cells for 80 percent of production, reducing customer lead time 90 percent, work-in-process inventory 70 percent, space for one product family from 6,000 square feet to 1,200 square feet, while increasing product reliability 300 percent, multi-skilling the union-shop work force, and being designated an Industry Week Best Plant for the year 2000. By five years later, rework and scrap had been cut 50 percent, new product introduction cycles 60 percent, and transactions 90 percent, while also increasing inventory turns three-fold and service turn times 30 percent, and being awarded a Shingo Prize for the year 2005. It appears to be difficult to isolate how much of those benefits accrue from cellular organization itself; among many case studies researched for this article few include attempts at isolating the benefits. One exception is the contention, at Steward, Inc. (Chattanooga, Tenn.), producing nickel zinc ferrite parts for electromagnetic interference suppression. According to case study authors, cells resulted in reductions of cycle time from 14 to 2 days, work-in-process inventories by 80 percent, finished inventories by 60 percent, lateness by 96 percent, and space by 56 percent. Another cellular case study includes quantitative estimates of the extent to which cells contributed to overall benefits. At Hughes Ground Systems Group (Fullerton, Calif.), producing circuit cards for defense equipment, the first cell, which began as a pilot project with 15 volunteers, was launched in 1987. 
One month later a second cell began, and by 1992 all production employees, numbering about 150, had been integrated into seven cells. Prior to cells, circuit card cycle time, from kit release to shipment to the customer, had been 38 weeks. After the cells had taken over the full production sequence (mechanical assembly, wave solder, thermal cycle, and conformal coat), cycle time had fallen by 30.5 weeks, of which production manager John Reiss attributed 20 weeks to use of a "WIP chart system" by the cell teams and the other 10.5 weeks to the cellular organization itself. Later, when it seemed that the cells were overly large and cumbersome, cell sizes were shrunk by two-thirds, resulting in "micro cells" that cut cycle time by another 1.5 weeks. Finally, by adopting certain other improvements, cycle times had decreased to four weeks. Other improvements included reducing work-in-process inventory from 6 or 7 days to one day and percent defective from 0.04 to 0.01. Switching from a functional (job-shop) layout to cells often has a negative net cost, inasmuch as the cell reduces costs of transport, work-in-process and finished inventory, transactions, and rework. When large, heavy, expensive pieces of equipment (sometimes called "monuments" in lean lingo) must be moved, however, the initial costs can be high to the point where cells are not feasible. There are a number of possible limitations to implementing cellular manufacturing. Some argue that cellular manufacturing can lead to a decrease in production flexibility. Cells are typically designed to maintain a specific flow volume of parts being produced. Should the demand or necessary quantity decrease, the cells may have to be realigned to match the new requirements, which is a costly operation, and one not typically required in other manufacturing setups. See also Cross-training (business) Lean manufacturing Production flow analysis References Further reading Anbumalar, V.; Raja Chandra Sekar, M (December 2015). "METHODS FOR SOLVING CELL FORMATION, STATIC LAYOUT AND DYNAMIC LAYOUT CELLULAR MANUFACTURING SYSTEM PROBLEMS: A REVIEW" (PDF). Asian Journal of Science and Technology. Black, J. T. (1991). The Design of the Factory with a Future, New York, NY: McGraw-Hill, Inc., 1991. Black, J. T. (2000). 'Lean Manufacturing Implementation', in Paul M. Swamidass (ed.), Innovations in competitive manufacturing, Boston, Mass.; London: Kluwer Academic, 177–86. Burbidge, J.L. (1978), The Principles of Production Control, MacDonald and Evans, England, . Brandon, John. (1996). Cellular Manufacturing: Integrating Technology and Management, Somerset, England: Research Studies Press LTD. Feld, William M., (2001). Lean Manufacturing: tools, techniques, and how to use them, Boca Raton, FL; Alexandria, VA: St. Lucie Press; Apics. Hyer, N.; Brown, K.A. 2003. Work cells with staying power: lessons for process-complete operations. California Management Review 46/1 (Fall): 37–52. Houshyar, A. Nouri; Leman, Z; Pakzad Moghadam, H; Sulaiman, R (August 2014). "Review on Cellular Manufacturing System and its Components". International Journal of Engineering and Advanced Technology (IJEAT). İşlier, Attila (2015-01-01). "Cellular Manufacturing Systems: Organization, Trends And Innovative Methods". Alphanumeric Journal 3 (2). ISSN 2148-2225 Irani, Shahrukh. (1999). Handbook of Cellular Manufacturing Systems, New York, NY: John Wiley & Sons, Inc., 1999. Kannan, V.R. 1996. A virtual cellular manufacturing approach to batch production. Decision Sciences. 27 (3), 519–539. 
McLean, C.R., H.M. Bloom, and T.H. Hopp. 1982. The virtual manufacturing cell. Proceedings of the Fourth IFAC/IFIP Conference on Information Control Problems in Manufacturing Technology. Gaithersburg, Md. (October). Singh, Nanua and Divakar Rajamani. (1996). Cellular Manufacturing Systems Design, Planning and Control, London, UK: Chapman & Hall. Schonberger, R.J. 2004. Make work cells work for you. Quality Progress 3/74 (April 2004): 58–63. Swamdimass, Paul M. and Darlow, Neil R. (2000). 'Manufacturing Strategy', in Paul M. Swamidass (ed.), Innovations in competitive manufacturing, Boston, Mass.; London: Kluwer Academic, 17–24. Lean manufacturing
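As a concrete illustration of the machine-part grouping step mentioned in the Implementation process section above, the sketch below applies Rank Order Clustering to a small 0/1 machine-part incidence matrix. It is an illustrative sketch rather than a production implementation, and the machine names, part names, and routings are made-up example data, not figures from this article.

```python
# Minimal sketch of Rank Order Clustering (ROC): rows (machines) and columns (parts)
# of a 0/1 incidence matrix are repeatedly re-sorted by their binary weights until
# the ordering stabilises, which tends to expose block-diagonal machine-part families.

def rank_order_clustering(matrix, machines, parts, max_iters=100):
    def binary_value(bits):
        # Read a 0/1 sequence as a binary number; leftmost/topmost bit is most significant.
        return sum(bit << i for i, bit in enumerate(reversed(bits)))

    for _ in range(max_iters):
        # Sort rows by descending binary value.
        row_order = sorted(range(len(matrix)), key=lambda r: binary_value(matrix[r]), reverse=True)
        matrix = [matrix[r] for r in row_order]
        machines = [machines[r] for r in row_order]

        # Sort columns by descending binary value.
        cols = [[row[c] for row in matrix] for c in range(len(matrix[0]))]
        col_order = sorted(range(len(cols)), key=lambda c: binary_value(cols[c]), reverse=True)
        matrix = [[row[c] for c in col_order] for row in matrix]
        parts = [parts[c] for c in col_order]

        # Stop once neither rows nor columns were reordered in this pass.
        if row_order == list(range(len(machines))) and col_order == list(range(len(parts))):
            break
    return matrix, machines, parts

machines = ["M1", "M2", "M3", "M4"]
parts = ["P1", "P2", "P3", "P4", "P5"]
incidence = [             # 1 = part visits machine (hypothetical routing data)
    [1, 0, 0, 1, 0],      # M1
    [0, 1, 1, 0, 0],      # M2
    [1, 0, 0, 1, 1],      # M3
    [0, 1, 1, 0, 0],      # M4
]

ordered, machine_order, part_order = rank_order_clustering(incidence, machines, parts)
print("parts:", part_order)
for name, row in zip(machine_order, ordered):
    print(name, row)
```

Run on this toy data, the reordered matrix groups machines M3 and M1 with parts P1, P4 and P5, and machines M2 and M4 with parts P2 and P3, which is exactly the family structure a cell designer looks for before dedicating machines to a cell.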
Cellular manufacturing
[ "Engineering" ]
3,674
[ "Lean manufacturing" ]
9,516,170
https://en.wikipedia.org/wiki/Polyanhydride
Polyanhydrides are a class of biodegradable polymers characterized by anhydride bonds that connect repeat units of the polymer backbone chain. Their main application is in the medical device and pharmaceutical industry. In vivo, polyanhydrides degrade into non-toxic diacid monomers that can be metabolized and eliminated from the body. Owing to their safe degradation products, polyanhydrides are considered to be biocompatible. Applications The characteristic anhydride bonds in polyanhydrides are water-labile (the polymer chain breaks apart at the anhydride bond). This results in two carboxylic acid groups which are easily metabolized and biocompatible. Biodegradable polymers, such as polyanhydrides, are capable of releasing physically entrapped or encapsulated drugs by well-defined kinetics and are a growing area of medical research. Polyanhydrides have been investigated as an important material for the short-term release of drugs or bioactive agents. The rapid degradation and limited mechanical properties of polyanhydrides render them ideal as controlled drug delivery devices. One example, Gliadel, is a device in clinical use for the treatment of brain cancer. This product is made of a polyanhydride wafer containing a chemotherapeutic agent. After removal of a cancerous brain tumor, the wafer is inserted into the brain, releasing a chemotherapy agent at a controlled rate proportional to the degradation rate of the polymer. Delivering the chemotherapy locally in this way limits systemic exposure, sparing the rest of the body, including the immune system, from the high drug doses that systemic treatment would require. Other applications of polyanhydrides include the use of unsaturated polyanhydrides in bone replacement, as well as polyanhydride copolymers as vehicles for vaccine delivery. Classes There are three main classes of polyanhydrides: aliphatic, unsaturated, and aromatic. These classes are determined by examining their R groups (the chemistry of the molecule between the anhydride bonds). Aliphatic polyanhydrides consist of R groups containing carbon atoms bonded in straight or branched chains. This class of polymers is characterized by a crystalline structure, a melting temperature range of 50–90 °C, and solubility in chlorinated hydrocarbons. They degrade and are eliminated from the body within weeks of being introduced to the bodily environment. Unsaturated polyanhydrides consist of organic R groups with one or more double bonds (or degrees of unsaturation). This class of polymers has a highly crystalline structure and is insoluble in common organic solvents. Aromatic polyanhydrides consist of R groups containing a benzene (aromatic) ring. Properties of this class include a crystalline structure, insolubility in common organic solvents, and melting points greater than 100 °C. They are very hydrophobic and therefore degrade slowly when in the bodily environment. This slow degradation rate makes aromatic polyanhydrides less suitable for drug delivery when used as homopolymers, but they can be copolymerized with the aliphatic class to achieve the desired degradation rate. Synthesis and characterization Polyanhydrides are synthesized using either melt condensation or solution polymerization. Depending on the synthesis method used, various characteristics of polyanhydrides can be altered to achieve the desired product. Characterization of polyanhydrides determines the structure, composition, molecular weight, and thermal properties of the molecule. These properties are determined by using various light-scattering and size-exclusion methods. 
Polymerization Polyanhydrides can be easily prepared using readily available, low-cost resources. The process can be varied to achieve desirable characteristics. Traditionally, polyanhydrides have been prepared by melt condensation polymerization, which results in high molecular weight polymers. Melt condensation polymerization involves reacting dicarboxylic acid monomers with excess acetic anhydride at a high temperature and under a vacuum to form the polymers. Catalysts may be used to achieve higher molecular weights and shorter reaction times. Generally, a one-step synthesis (a method involving only one reaction) is used which does not require purification. There are many other methods used to synthesize polyanhydrides. Some of the other methods include: microwave heating, high-throughput synthesis (synthesis of polymers in parallel), ring-opening polymerization (opening of cyclic monomers), interfacial condensation (condensation of two monomers at the interface between immiscible phases), dehydrative coupling agents (removing water from two carboxyl groups), and solution polymerization (reacting in solution). Chemical structure and composition analysis The chemical structure and composition of polyanhydrides can be determined using nuclear magnetic resonance (NMR) spectroscopy. The positions of peaks in proton NMR spectroscopy are determined by the class of polyanhydride (aromatic, aliphatic, or unsaturated), and so provide information regarding structural features of the polymer, including whether a copolymer has a random or block-like structure. Molecular weight and degradation rate can also be determined spectroscopically. Molecular weight analysis Aside from NMR, gel permeation chromatography (GPC) and viscosity measurements may also be used to determine a polyanhydride's molecular weight. Thermal properties Differential scanning calorimetry (DSC) is used to determine the thermal properties of polyanhydrides. Glass transition temperature, melting temperature, and heat of fusion can all be determined by DSC. Crystallinity of a polyanhydride can be determined using DSC, small angle X-ray scattering (SAXS), nuclear magnetic resonance (NMR), and X-ray diffraction. Degradation The erosion and degradation of a polymer describe how the polymer physically loses mass (degrades). The two common erosion mechanisms are surface and bulk erosion. Polyanhydrides are surface eroding polymers. Surface eroding polymers do not allow water to penetrate into the material. They erode layer by layer, like a lollipop. The hydrophobic backbone with hydrolytically labile anhydride linkages allows hydrolytic degradation to be controlled by manipulating the polymer composition. This manipulation can occur by adding a hydrophilic group to the polyanhydride to make a copolymer. Polyanhydride copolymers with hydrophilic groups exhibit bulk eroding characteristics. Bulk eroding polymers take in water like a sponge (throughout the material) and erode inside and on the surface of the polymer. Drug release from bulk eroding polymers is difficult to characterize because the primary mode of release from these polymers is diffusion. Unlike surface eroding polymers, bulk eroding polymers show a very weak relationship between the rate of polymer degradation and the rate of drug release. Therefore, the development of surface eroding polyanhydrides incorporated into the bulk eroding polymers is of increased importance. 
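The contrast between surface and bulk erosion described above can be made concrete with a toy model. This is an illustration under stated assumptions, not a model taken from the article: idealized surface erosion of a slab is treated as a constant recession of the exposed faces (so mass loss is roughly linear in time), while idealized bulk erosion is treated as first-order (exponential) decay of the whole mass; the thickness, recession rate, and rate constant are arbitrary example values.

```python
# Toy comparison (assumptions stated above) of idealized surface vs. bulk erosion.
import math

def surface_eroding_mass(t, initial_thickness=1.0, recession_rate=0.05):
    """Remaining mass fraction of a slab eroding from both faces at a constant rate."""
    thickness = max(0.0, initial_thickness - 2 * recession_rate * t)
    return thickness / initial_thickness

def bulk_eroding_mass(t, rate_constant=0.15):
    """Remaining mass fraction under idealized first-order (exponential) degradation."""
    return math.exp(-rate_constant * t)

for t in range(0, 11, 2):   # arbitrary time units
    print(f"t={t:2d}  surface: {surface_eroding_mass(t):.2f}  bulk: {bulk_eroding_mass(t):.2f}")
```

The linear, front-controlled mass loss of the surface-eroding case is why drug release from such a matrix tends to track the degradation rate, whereas the bulk-eroding case loses integrity throughout the material and releases drug mainly by diffusion.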
Biocompatibility Biocompatibility and toxicity of a polymeric material is evaluated by examining systemic toxic responses, local tissue responses, carcinogenic and mutagenic responses, and allergic responses to the material's degradation products. Animal studies are conducted to test the polymer’s effect on each of these negative responses. Polyanhydrides and their degradation products have not been found to cause significant harmful responses and are considered to be biocompatible. References Domb, A., Amselem, S., Langer, R., and Manair, M. “Chapter 3: Polyanhydrides as Carriers of Drugs.” Biomedical Polymers Designed –to –Degrade Systems. Hanser Publishers: Munich, Vienna, NY, 1994. Kumar, N., Langer, R., and Domb, A. “Polyanhydrides: an overview.” Advanced Drug Delivery Reviews, 2002. “Polyanhydride Synthesis Techniques.” Wyatt Technology Corp. Tamada, J. and Langer, R. “The development of polyanhydrides for drug delivery applications.” Journal of Biomaterials Science, Polymer Ed. Vol. 3, No. 4, pp. 315–353, 1992. Torres, M. P.; Determan, A. S.; Malapragada, S. K.; Narasimhan, B. “Polyanhydrides.” Encyclopedia of Chemical Processing. 2006. B.M. Vogel, S.K. Mallapragada, and B. Narasimhan, “Rapid Synthesis of Polyanhydrides By Microwave Polymerization”, Macromolecular Rapid Communications 25, 330-333, 2004. B.M. Vogel, S.K. Mallapragada, “Synthesis of Novel Biodegradable Polyanhydrides Containing Aromatic and Glycol Functionality for Tailoring of Hydrophilicity in Controlled Drug Delivery Devices”, Biomaterials, 26, 721-728, 2004. B.M. Vogel, Naomi Eidelman, S.K. Mallapragada and B. Narasimhan, “Parallel Synthesis and Dissolution Testing of Polyanhydride Random Copolymers”, Journal of Combinatorial Chemistry, 7, 921-928, 2005. B.M. Vogel and S.K. Mallapragada, “The Synthesis of Polyanhydrides”, in Handbook of Biodegradable Materials and their Applications, edited by S.K. Mallapragada and Balaji Narasimhan, ASP Publishers, Vol. 1, 1-19, 2005. P.Guruprasad Reddy and A.J.Domb, “Polyanhydride Chemistry”. Biomacromolecules, 2022, 23(12), 4959-4984. doi: 10.1021/acs.biomac.2c01180. Biomaterials Biological engineering Polymers
Polyanhydride
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
2,090
[ "Biomaterials", "Biological engineering", "Materials", "Polymer chemistry", "Polymers", "Matter", "Medical technology" ]
1,026,848
https://en.wikipedia.org/wiki/Weyl%20tensor
In differential geometry, the Weyl curvature tensor, named after Hermann Weyl, is a measure of the curvature of spacetime or, more generally, a pseudo-Riemannian manifold. Like the Riemann curvature tensor, the Weyl tensor expresses the tidal force that a body feels when moving along a geodesic. The Weyl tensor differs from the Riemann curvature tensor in that it does not convey information on how the volume of the body changes, but rather only how the shape of the body is distorted by the tidal force. The Ricci curvature, or trace component of the Riemann tensor contains precisely the information about how volumes change in the presence of tidal forces, so the Weyl tensor is the traceless component of the Riemann tensor. This tensor has the same symmetries as the Riemann tensor, but satisfies the extra condition that it is trace-free: metric contraction on any pair of indices yields zero. It is obtained from the Riemann tensor by subtracting a tensor that is a linear expression in the Ricci tensor. In general relativity, the Weyl curvature is the only part of the curvature that exists in free space—a solution of the vacuum Einstein equation—and it governs the propagation of gravitational waves through regions of space devoid of matter. More generally, the Weyl curvature is the only component of curvature for Ricci-flat manifolds and always governs the characteristics of the field equations of an Einstein manifold. In dimensions 2 and 3 the Weyl curvature tensor vanishes identically. In dimensions ≥ 4, the Weyl curvature is generally nonzero. If the Weyl tensor vanishes in dimension ≥ 4, then the metric is locally conformally flat: there exists a local coordinate system in which the metric tensor is proportional to a constant tensor. This fact was a key component of Nordström's theory of gravitation, which was a precursor of general relativity. Definition The Weyl tensor can be obtained from the full curvature tensor by subtracting out various traces. This is most easily done by writing the Riemann tensor as a (0,4) valence tensor (by contracting with the metric). The (0,4) valence Weyl tensor is then where n is the dimension of the manifold, g is the metric, R is the Riemann tensor, Ric is the Ricci tensor, s is the scalar curvature, and denotes the Kulkarni–Nomizu product of two symmetric (0,2) tensors: In tensor component notation, this can be written as The ordinary (1,3) valent Weyl tensor is then given by contracting the above with the inverse of the metric. The decomposition () expresses the Riemann tensor as an orthogonal direct sum, in the sense that This decomposition, known as the Ricci decomposition, expresses the Riemann curvature tensor into its irreducible components under the action of the orthogonal group. In dimension 4, the Weyl tensor further decomposes into invariant factors for the action of the special orthogonal group, the self-dual and antiself-dual parts C+ and C−. The Weyl tensor can also be expressed using the Schouten tensor, which is a trace-adjusted multiple of the Ricci tensor, Then In indices, where is the Riemann tensor, is the Ricci tensor, is the Ricci scalar (the scalar curvature) and brackets around indices refers to the antisymmetric part. Equivalently, where S denotes the Schouten tensor. Properties Conformal rescaling The Weyl tensor has the special property that it is invariant under conformal changes to the metric. That is, if for some positive scalar function then the (1,3) valent Weyl tensor satisfies . 
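As a numerical illustration of the trace subtraction described above, the sketch below builds the (0,4) Weyl tensor component by component from the Riemann tensor, the Ricci tensor, the scalar curvature and the metric, using the standard component formula C_iklm = R_iklm + (R_im g_kl - R_il g_km + R_kl g_im - R_km g_il)/(n-2) + s (g_il g_km - g_im g_kl)/((n-1)(n-2)). Sign and index conventions vary between references, so this should be read as a sketch under one common convention, with inputs computed in a matching convention; the function name and the constant-curvature test data are invented for the example.

```python
# Hedged numerical sketch of the Weyl tensor as the trace-free part of the Riemann
# tensor. All arrays are NumPy arrays indexed as (0,4) tensors; the component
# formula below assumes one common sign/index convention and the Riemann, Ricci
# and scalar inputs must be built consistently with it.

import numpy as np

def weyl_tensor(riemann, ricci, scalar, g):
    """Return C_{iklm} for an n-dimensional metric (n >= 4; n = 3 gives zero)."""
    n = g.shape[0]
    C = riemann.copy()
    for i in range(n):
        for k in range(n):
            for l in range(n):
                for m in range(n):
                    C[i, k, l, m] += (
                        ricci[i, m] * g[k, l] - ricci[i, l] * g[k, m]
                        + ricci[k, l] * g[i, m] - ricci[k, m] * g[i, l]
                    ) / (n - 2)
                    C[i, k, l, m] += scalar * (
                        g[i, l] * g[k, m] - g[i, m] * g[k, l]
                    ) / ((n - 1) * (n - 2))
    return C

# Consistency check: for a constant-curvature space, the Weyl tensor vanishes.
n, K = 4, 2.0
g = np.eye(n)
riemann = np.zeros((n, n, n, n))
for i in range(n):
    for k in range(n):
        for l in range(n):
            for m in range(n):
                riemann[i, k, l, m] = K * (g[i, l] * g[k, m] - g[i, m] * g[k, l])
ricci = (n - 1) * K * g          # contraction of the Riemann tensor above
scalar = n * (n - 1) * K
print(np.abs(weyl_tensor(riemann, ricci, scalar, g)).max())   # 0.0
```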
For this reason the Weyl tensor is also called the conformal tensor. It follows that a necessary condition for a Riemannian manifold to be conformally flat is that the Weyl tensor vanish. In dimensions ≥ 4 this condition is sufficient as well. In dimension 3 the vanishing of the Cotton tensor is a necessary and sufficient condition for the Riemannian manifold being conformally flat. Any 2-dimensional (smooth) Riemannian manifold is conformally flat, a consequence of the existence of isothermal coordinates. Indeed, the existence of a conformally flat scale amounts to solving the overdetermined partial differential equation In dimension ≥ 4, the vanishing of the Weyl tensor is the only integrability condition for this equation; in dimension 3, it is the Cotton tensor instead. Symmetries The Weyl tensor has the same symmetries as the Riemann tensor. This includes: In addition, of course, the Weyl tensor is trace free: for all u, v. In indices these four conditions are Bianchi identity Taking traces of the usual second Bianchi identity of the Riemann tensor eventually shows that where S is the Schouten tensor. The valence (0,3) tensor on the right-hand side is the Cotton tensor, apart from the initial factor. See also Curvature of Riemannian manifolds Christoffel symbols provides a coordinate expression for the Weyl tensor. Lanczos tensor Peeling theorem Petrov classification Plebanski tensor Weyl curvature hypothesis Weyl scalar Notes References . . Curvature tensors Riemannian geometry Tensors in general relativity
Weyl tensor
[ "Physics", "Engineering" ]
1,101
[ "Tensors", "Physical quantities", "Tensor physical quantities", "Curvature tensors", "Tensors in general relativity" ]
1,026,901
https://en.wikipedia.org/wiki/University%20of%20Virginia%20Darden%20School%20of%20Business
The Darden School of Business is the graduate business school of the University of Virginia, a public research university in Charlottesville, Virginia. The school offers MBA, PhD, and Executive Education programs. The school was founded in 1955 and named after Colgate Whitehead Darden Jr., a former Democratic congressman, governor of Virginia, and president of the University of Virginia. It is located on the grounds of the University of Virginia. Its faculty use the case method as their method of teaching courses. History The Darden School, a graduate business school in the Southern United States, was founded in 1955. The original business school was nestled in the central grounds of the University of Virginia before being moved to its current location on the North Grounds. Designed by the Driehaus Prize winner Robert A. M. Stern, the Darden school's buildings feature sand-struck Virginia brick, Chippendale balustrades and red-metal standing seam roofs. In 2018, the Sands Family Grounds was inaugurated by the Darden School in Arlington County, Virginia, in proximity to Washington D.C.'s central business district. The Sands Family Grounds occupy the top two floors of a 31-story skyscraper. Locations The full-time MBA program is located in Charlottesville, Virginia, at the UVA Darden Goodwin Family Grounds, which is roughly two hours from Washington, D.C. In 2017, it was announced that Darden would establish dedicated facilities in Rosslyn, formally introduced as the UVA Darden Sands Family Grounds in February 2019, as the new home base for the Executive MBA formats and the new M.S. in Business Analytics degree launched with the McIntire School of Commerce. MBA Designed for students who seek to strengthen their leadership, business and communication skills, Darden's two-year MBA program combines core and elective courses in Charlottesville, Virginia, with opportunities for every student to study abroad. Admissions Admission requirements for the MBA include an earned four-year bachelor's degree from an accredited U.S. institution or the international equivalent, completion of application forms and essays, a GMAT or GRE score, academic transcripts, two professional recommendations, and the payment of a fee. The MBA Class of 2023 has an average GMAT score of 716, an average GPA of 3.51, and an average age of 27. Of the 352 students enrolled, 41% are international students, 37% are women and 14% are domestic minority students. The school had an acceptance rate of 26% as of 2019. Study abroad Students are offered study abroad programs as well as Darden Worldwide Courses, international immersion courses funded by a $15 million gift from philanthropist and donor Frank Batten. Executive MBA formats Designed with a hybrid structure of online learning and in-person residencies at the new UVA Darden Sands Family Grounds in the Washington, D.C., area, two Executive MBA formats are offered, both of which confer the same MBA degree. The EMBA (Executive MBA) is designed for working professionals, and the GEMBA (Global Executive MBA) is an option that provides additional global residencies compared to the EMBA. Both formats have the same core curriculum over a period of twenty-one months, with all students entering in the same cohort each academic year. Global residencies include Brazil, Chile, China, Germany, Japan, Ghana, Israel, India, Estonia and Cuba, with changes in locations possible each year. 
Darden Executive Education The inaugural Executive Education program was offered in 1955. Darden Executive Education offers both short courses and custom solutions, as well as consortia, corporate university design and development, and industry specific partnerships. Short course focus areas include leadership, general management, strategy and decision-making, negotiation, growth and innovation, project management, sales and marketing, financial management and corporate aviation. Rankings Darden's current rankings are as follows: MBA rankings #3 Bloomberg Businessweek 2023 #10 U.S. News & World Report 2024 #13 Forbes 2019 #16 (North America) - The Economist 2019 #16 (Global) - The Economist 2019 MBA Specialty rankings #1 Best Professors - The Princeton Review 2019 #2 Best MBA For Consulting - The Princeton Review 2019 #2 Best MBA For Management - The Princeton Review 2019 #4 Best Campus Environment - The Princeton Review 2019 #6 Entrepreneurship - The Princeton Review for Entrepreneur magazine 2019 #1 Education Experience in United States - The Economist 2019 #1 Corporate Social Responsibility - Financial Times 2019 #1 General Management - Financial Times 2016 #2 Learning - Bloomberg Businessweek 2019 #11 Career Services Rank - Financial Times 2019 Executive Education rankings #1 Course Design (Global) - Financial Times 2016-2018 #1 Faculty (Global) - Financial Times 2004-2011 #7 Facilities (Global) - Financial Times 2019 #20 Open-Enrollment Programs (Global) - Financial Times 2019 #52 Custom Programs (Global) - Financial Times 2019 Notable alumni Darden's list of alumni includes: Leslie M. Baker Jr. (MBA '69), former CEO of Wachovia John H. Bryan (MBA '60), CEO and chairman of Sara Lee from 1976 to 2001 Eric Chewning (MBA '08), partner at McKinsey & Company; former chief of staff to the Secretary of Defense Robert Citrone (MBA '90) co-founder of Discovery Capital Management Guillaume M. Cuvelier (MBA '91), founder of Svedka vodka George David (MBA '67), CEO and chairman of United Technologies Corporation Helen Dragas, businesswoman; first woman to be rector for the University of Virginia Board of Visitors Jay Faison (MBA '95), founder of ClearPath Foundation Bill Hawkins (MBA '82), former President and CEO, Medtronic Inc.; CEO, Immucor Inc. Robert J. Hugin (MBA '85), CEO of Celgene Corporation Hal Lawton (MBA ‘00), President & CEO, Tractor Supply Doug Lebda (MBA '14), founder & CEO of LendingTree Carolyn Miles (MBA '88), former CEO of Save The Children Thomas Neir (MBA '88), businessman; founder of Pacific Coffee Company Michael E. O'Neill (MBA '74), former chairman of Citigroup Chris Patrick (MBA '06), general manager of the Washington Capitals Lewis F. Payne, Jr. (MBA '73), former Virginia congressman J. Michael Pearson (MBA '84), former CEO of Valeant Pharmaceuticals International Steven Reinemund (MBA '78), former CEO and Chairman of PepsiCo Hugo F. Rodriguez (MBA '00), United States Ambassador to Nicaragua Mark Sanford (MBA '88), former Governor of South Carolina Thomas A. Saunders III (MBA '67), former Morgan Stanley partner and Wall Street innovator Goli Sheikholeslami (MBA '94), CEO of POLITICO Marc Short (MBA '04), former chief of staff to Vice President Mike Pence John Strangfeld (MBA '77), Chairman and CEO, Prudential Financial Mark B. Templeton (MBA '78), President and CEO, Citrix Systems Inc. Henri Termeer (MBA '73), former CEO of Genzyme Steven C. Voorhees (MBA '80), former CEO of WestRock Roger L. 
Werner (MBA '77), former CEO of ESPN List of deans See also Economics Glossary of economics List of United States business school rankings List of business schools in the United States References Business schools in Virginia University of Virginia schools Universities and colleges established in 1954 1954 establishments in Virginia Life sciences industry New Classical architecture in the United States Robert A. M. Stern buildings
University of Virginia Darden School of Business
[ "Biology" ]
1,573
[ "Life sciences industry" ]
1,027,042
https://en.wikipedia.org/wiki/Medicago%20truncatula
Medicago truncatula, the barrelclover, strong-spined medick, barrel medic, or barrel medick, is a small annual legume native to the Mediterranean region that is used in genomic research. It is a low-growing, clover-like plant tall with trifoliate leaves. Each leaflet is rounded, long, often with a dark spot in the center. The flowers are yellow, produced singly or in a small inflorescence of two to five together; the fruit is a small, spiny pod. This species is studied as a model organism for legume biology because it has a small diploid genome, is self-fertile, has a rapid generation time and prolific seed production, is amenable to genetic transformation, and its genome has been sequenced. It forms symbioses with nitrogen-fixing rhizobia (Sinorhizobium meliloti and Sinorhizobium medicae) and arbuscular mycorrhizal fungi including Rhizophagus irregularis (previously known as Glomus intraradices). The model plant Arabidopsis thaliana does not form either symbiosis, making M. truncatula an important tool for studying these processes. It is also an important forage crop species in Australia. Sequencing of the genome The draft sequence of the genome of M. truncatula cultivar A17 was published in the journal Nature in 2011. The sequencing was carried out by an international partnership of research laboratories involving researchers from the University of Oklahoma (US), J. Craig Venter Institute (US), Genoscope (France), and Sanger Centre (UK). Partner institutions included the University of Minnesota (US), University of California-Davis (US), the National Center for Genomic Resources (US), John Innes Centre (UK), Institut National de Recherche Agronomique (France), Munich Information Center for Protein Sequences (Germany), Wageningen University (the Netherlands), and Ghent University (Belgium). The Medicago truncatula Sequencing Consortium began in 2001 with a seed grant from the Samuel Roberts Noble Foundation. In 2003, the National Science Foundation and the European Union 6th Framework Programme began providing most of the funding. By 2009, 84% of the genome assembly had been completed. The assembly of the genome sequence in M. truncatula was based on bacterial artificial chromosomes (BACs). This is the same approach used to sequence the genomes of humans, the fruitfly, Drosophila melanogaster, and the model plant, Arabidopsis thaliana. In July 2013, version 4.0 of the genome was released. This version combined sequences gained from shotgun sequencing with the BAC-based sequence assemblies, which has helped to fill in the gaps in the previously mapped sequences. A parallel group known as the International Medicago Gene Annotation Group (IMGAG) is responsible for identifying and describing putative gene sequences within the genome sequence. Symbioses with soil microorganisms Researcher Toby Kiers of VU University Amsterdam and associates used M. truncatula to study symbioses between plants and fungi – and to see whether the partners in the relationship could distinguish between good and bad traders/suppliers. By using labeled carbon to track the source of nutrient flowing through the arbuscular mycorrhizal system, the researchers have proven that the plants had indeed given more carbon to the more generous fungus species. By restricting the amount of carbon the plants gave to the fungus, the researchers also demonstrated that the fungi did pass along more of their phosphorus to the more generous plants. 
Proteome Proteomic investigation by mass spectrometry has been performed by Wienkoop et al 2004 and Larrainzar et al 2007. See also Genomics References Further reading External links The Medicago truncatula Consortium Medicago truncatula Hapmap Project TIGR's link to Genome Browser and Gene Index The Medicago Gene Expression Atlas at Samuel Roberts Noble Foundation Medicago truncatula eFP Browser Viewer for gene expression data from the Medicago Gene Expression Atlas project, at the Provart Lab's Bio-Array Resource website INRA Medicago truncatula Stock Center – France NCGR European Research Programmes on the model legume Medicago truncatula Why sequence medicago truncatula? Forages Flora naturalised in Australia Genomics truncatula Plant models Flora of Lebanon and Syria
Medicago truncatula
[ "Biology" ]
947
[ "Model organisms", "Plant models" ]
1,027,046
https://en.wikipedia.org/wiki/Psychic%20vampire
A psychic vampire is a creature in folklore said to feed off the "life force" of other living creatures. The term can also be used to describe a person who gets increased energy around other people, but leaves those other people exhausted or "drained" of energy. Psychic vampires are represented in the occult beliefs of various cultures and in fiction. Psychic energy Terms used to describe the substance or essence that psychic vampires take or receive from others include: energy, qi (or ch'i), life force, prana, and vitality. There is no scientific or medical evidence supporting the existence of the bodily or psychic energy they allegedly drain. Emotional vampires American author Albert Bernstein uses the phrase "emotional vampire" for people with various personality disorders who are often considered to drain emotional energy from others. Energy vampires The term "energy vampire" is also used metaphorically to refer to people whose influence leaves a person feeling exhausted, unfocused, and depressed, without ascribing the phenomenon to psychic interference. Dion Fortune wrote of psychic parasitism in relation to vampirism as early as 1930 in her book, Psychic Self-Defense. Fortune considered psychic vampirism a combination of psychic and psychological pathology, and distinguished between what she considered to be true psychic vampirism and mental conditions that produce similar symptoms. For the latter, she named folie à deux and similar phenomena. The term "psychic vampire" was popularized in the 1960s by Anton LaVey and his Church of Satan. LaVey wrote on the topic in his book, The Satanic Bible, and claimed to have coined the term. LaVey used psychic vampire to mean a spiritually or emotionally weak person who drains vital energy from other people. Adam Parfrey likewise attributed the term to LaVey in an introduction to The Devil's Notebook. The English singer-songwriter Peter Hammill credits his erstwhile Van der Graaf Generator colleague, violinist Graham Smith, with coining the term "energy vampires" in the 1970s in order to describe intrusive, over-zealous fans. Hammill included a song of the same name on his 1978 album The Future Now. In the 1982 horror movie One Dark Night, Karl “Raymar” Raymarseivich is the name of a Russian psychic vampire who gains power from the lifeforce of young victims by frightening them to death. This is done by demonstrations of telekinesis which emanates as visible electrical currents of bioenergy. How he dies is unclear, but his malevolence posthumously remains in his body. Effectively, Raymar is a poltergeist in the mausoleum he is interred in, opening crypts (including his own), sliding out the caskets to the floor and randomly exhuming his fellow corpses to terrify unfortunate teenagers who have chosen the wrong place to have an overnight initiation. The terms "energy vampire" and "psychic vampire" have been used as synonyms in Russia since the fall of the Soviet Union as part of an occult revival. The 2019 American comedy horror television series What We Do in the Shadows includes the character Colin Robinson, a metaphorical and literal "energy vampire" who drains people's life forces by being boring or frustrating. Vampire subculture Sociologists such as Mark Benecke and A. Asbjørn Jøn have identified a subculture of people who present themselves as vampires. Jon has noted that enthusiasts of the vampire subculture emulate traditional psychic vampires in that they describe 'prey[ing] upon life-force or 'pranic' energy'. 
Prominent figures in the subculture include Michelle Belanger, a self-described psychic vampire, who wrote a book titled The Psychic Vampire Codex: A Manual of Magick and Energy Work, published in 2004 by Weiser Books. Belanger details a vampiric approach to energy work which she believes psychic vampires can use to heal others, representing an attempt to disassociate the psychic vampire subculture from negative connotations of vampirism. Sexual vampires A related mythological creature is a sexual vampire, which is supposed to feed off sexual energy. Sexual vampires include succubi or incubi. See also Asura Huli Jing Hungry ghost Lifeforce (film) Doctor Sleep (2019 film) Obake Odic force Pranayama Rakshasa What We Do in the Shadows (TV series) References Further reading External links Energy Vampires(Band): Energy Vampires Llewellyn (Bookstore): Psychic Vampires Article on Identifying Energy Vampires In Our Life By Divya Toshniwal Church of Satan Magical terminology Psychics Vampires Vampirism Vitalism
Psychic vampire
[ "Biology" ]
940
[ "Non-Darwinian evolution", "Vitalism", "Biology theories" ]
1,027,061
https://en.wikipedia.org/wiki/Ensifer%20meliloti
Ensifer meliloti (formerly Rhizobium meliloti and Sinorhizobium meliloti) are an aerobic, Gram-negative, and diazotrophic species of bacteria. S. meliloti are motile and possess a cluster of peritrichous flagella. S. meliloti fix atmospheric nitrogen into ammonia for their legume hosts, such as alfalfa. S. meliloti forms a symbiotic relationship with legumes from the genera Medicago, Melilotus and Trigonella, including the model legume Medicago truncatula. This symbiosis promotes the development of a plant organ, termed a root nodule. Because soil often contains a limited amount of nitrogen for plant use, the symbiotic relationship between S. meliloti and their legume hosts has agricultural applications. These techniques reduce the need for inorganic nitrogenous fertilizers. Symbiosis Symbiosis between S. meliloti and its legume hosts begins when the plant secretes an array of betaines and flavonoids into the rhizosphere: 4,4′-dihydroxy-2′-methoxychalcone, chrysoeriol, cynaroside, 4′,7-dihydroxyflavone, 6′′-O-malonylononin, liquiritigenin, luteolin, 3′,5-dimethoxyluteolin, 5-methoxyluteolin, medicarpin, stachydrine, and trigonelline. These compounds attract S. meliloti to the surface of the root hairs of the plant where the bacteria begin secreting nod factors. This initiates root hair curling. The rhizobia then penetrate the root hairs and proliferate to form an infection thread. Through the infection thread, the bacteria move toward the main root. The bacteria develop into bacteroids within newly formed root nodules and perform nitrogen fixation for the plant. A S. meliloti bacterium does not perform nitrogen fixation until it differentiates into a endosymbiotic bacteroid. A bacteroid depends on the plant for survival. Leghemoglobin, produced by leguminous plants after colonization of S. meliloti, interacts with the free oxygen in the root nodule where the rhizobia reside. Rhizobia are contained within symbiosomes in the root nodules of leguminous plants. The leghemoglobin reduces the amount of free oxygen present. Oxygen disrupts the function of the nitrogenase enzyme in the rhizobia, which is responsible for nitrogen fixation. Genome The S. meliloti genome contains four genes coding for flagellin. These include fliC1C2–fliC3C4. The genome contains three replicons: a chromosome (~3.7 megabases), a chromid (pSymB; ~1.7 megabases), and a plasmid (pSymA; ~1.4 megabases). Individual strains may possess additional, accessory plasmids. Five S. meliloti genomes have been sequenced to date: Rm1021, AK83, BL225C, Rm41, and SM11 with 1021 considered to be the wild type. Indeterminate nodule symbiosis by S. meliloti is conferred by genes residing on pSymA. DNA repair The proteins encoded by E. meliloti genes uvrA, uvrB and uvrC are employed in the repair of DNA damages by the process of nucleotide excision repair. E. meliloti is a desiccation tolerant bacterium. However, E. meliloti mutants defective in either genes uvrA, uvrB or uvrC are sensitive to desiccation, as well as to UV light. This finding indicates that the desiccation tolerance of wild-type E. meliloti depends on the repair of DNA damages that can be caused by desiccation. 
Bacteriophage Several bacteriophages that infect Sinorhizobium meliloti have been described: Φ1, Φ1A, Φ2A, Φ3A, Φ4 (=ΦNM8), Φ5t (=ΦNM3), Φ6 (=ΦNM4), Φ7 (=ΦNM9), Φ7a, Φ9 (=ΦCM2), Φ11 (=ΦCM9), Φ12 (=ΦCM6), Φ13, Φ16, Φ16-3, Φ16a, Φ16B, Φ27, Φ32, Φ36, Φ38, Φ43, Φ70, Φ72, Φ111, Φ143, Φ145, Φ147, Φ151, Φ152, Φ160, Φ161, Φ166, Φ2011, ΦA3, ΦA8, ΦA161, ΦAL1, ΦCM1, ΦCM3, ΦCM4, ΦCM5, ΦCM7, ΦCM8, ΦCM20, ΦCM21, ΦDF2, Φf2D, ΦF4, ΦFAR, ΦFM1, ΦK1, ΦL1, ΦL3, ΦL5, ΦL7, ΦL10, ΦL20, ΦL21, ΦL29, ΦL31, ΦL32, ΦL53, ΦL54, ΦL55, ΦL56, ΦL57, ΦL60, ΦL61, ΦL62, ΦLO0, ΦLS5B, ΦM1, ΦM1, ΦM1-5, ΦM2, ΦM3, ΦM4, ΦM5, ΦM5 (=ΦF20), ΦM5N1, ΦM6, ΦM7, ΦM8, ΦM9, ΦM10, ΦM11, ΦM11S, ΦM12, ΦM14, ΦM14S, ΦM19, ΦM20S, ΦM23S, ΦM26S, ΦM27S, ΦMl, ΦMM1C, ΦMM1H, ΦMP1, ΦMP2, ΦMP3, ΦMP4, ΦN2, ΦN3, ΦN4, ΦN9, ΦNM1, ΦNM2, ΦNM6, ΦNM7, ΦP6, ΦP10, ΦP33, ΦP45, ΦPBC5, ΦRm108, ΦRmp26, ΦRmp36, ΦRmp38, ΦRmp46, ΦRmp50, ΦRmp52, ΦRmp61, ΦRmp64, ΦRmp67, ΦRmp79, ΦRmp80, ΦRmp85, ΦRmp86, ΦRmp88, ΦRmp90, ΦRmp145, ΦSP, ΦSSSS304, ΦSSSS305, ΦSSSS307, ΦSSSS308, and ΦT1. Of these, ΦM5, ΦM12, Φ16-3 and ΦPBC5 have been sequenced. As of March 2020 the International Committee on Taxonomy of Viruses (ICTV) has accepted the following species in its Master Species List 2019.v1 (#35): Realm: Duplodnaviria, Kingdom: Heunggongvirae, Phylum: Uroviricota Order: Caudovirales, Family: Myoviridae, Genus: Emdodecavirus (formerly M12virus) Species: Sinorhizobium virus M7 (alias ΦM7) Species: Sinorhizobium virus M12 (alias DNA phage ΦM12, type species) Species: Sinorhizobium virus N3 (alias ΦN3) References External links Sinorhizobium meliloti Genome Project Sinorhizobium meliloti 1021 Genome Page Further reading Model organisms Rhizobiaceae Bacteria described in 1994
Ensifer meliloti
[ "Biology" ]
1,740
[ "Model organisms", "Biological models" ]
1,027,166
https://en.wikipedia.org/wiki/Roman%20ring
In general relativity, a Roman ring (proposed by Matt Visser in 1997 and named after the "Roman arch", itself a concept proposed by Mike Morris and Kip Thorne in 1988 and named after physicist Tom Roman) is a configuration of wormholes in which no subset of the wormholes is close to chronology violation, though the combined system can be arbitrarily close to chronology violation. Examples For example, an Earth–Moon wormhole whose far end is 0.5 seconds in the "past" will not violate causality, since information sent to the far end via the wormhole and back through normal space will still arrive back on Earth (-0.5 + 1) = 0.5 seconds after it was transmitted; but an additional wormhole in the other direction will allow information to arrive back on Earth 1 second before it was transmitted (time travel). However, it is believed that in a ring structure the relative time between a signal entering one wormhole throat and emerging from the other end will remain the same, because the signal never violates local proper time: covering the distance between the mouths takes time whether the information goes the long way through normal space or through the wormhole. Chronology protection Semiclassical approaches to incorporating quantum effects into general relativity seem to show that the chronology protection conjecture postulated by physicist Stephen Hawking fails to prevent the formation of such rings, although Matt Visser feels that there are reasons to think the semiclassical approach is unreliable here, and that a full theory of quantum gravity will likely uphold chronology protection. Notes References General relativity Time travel Wormhole theory
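The timing bookkeeping in the example above can be written out explicitly. The following sketch only illustrates the arithmetic (it is not a physical simulation and is not drawn from the article's references): each leg of a closed signal path contributes its travel time through normal space minus the "past" offset of the wormhole it traverses, and a negative total would indicate a chronology-violating round trip.

```python
def net_delay(legs):
    """legs: list of (normal_space_travel_seconds, wormhole_past_offset_seconds)."""
    return sum(travel - offset for travel, offset in legs)

# One Earth-Moon wormhole with its far mouth 0.5 s in the past and a 1 s return
# trip through normal space: the signal still arrives 0.5 s after transmission.
print(net_delay([(1.0, 0.5)]))              # 0.5 -> no causality violation

# Add a second wormhole in the other direction with the same offset and negligible
# normal-space legs: the signal now returns 1 s before it was sent.
print(net_delay([(0.0, 0.5), (0.0, 0.5)]))  # -1.0 -> chronology violation
```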
Roman ring
[ "Physics", "Astronomy" ]
322
[ "Astronomical hypotheses", "Physical quantities", "Time", "Time travel", "General relativity", "Relativity stubs", "Theory of relativity", "Spacetime", "Wormhole theory" ]
1,027,207
https://en.wikipedia.org/wiki/Pay-per-click
Pay-per-click (PPC) is an internet advertising model used to drive traffic to websites, in which an advertiser pays a publisher (typically a search engine, website owner, or a network of websites) when the ad is clicked. Pay-per-click is usually associated with first-tier search engines (such as Google Ads, Amazon Advertising, and Microsoft Advertising). With search engines, advertisers typically bid on keyword phrases relevant to their target market and pay when ads (text-based search ads or shopping ads that are a combination of images and text) are clicked. In contrast, content sites commonly charge a fixed price per click rather than use a bidding system. PPC display advertisements, also known as banner ads, are shown on websites with related content that have agreed to show ads and are typically not pay-per-click advertising, but instead, usually charge on a cost per thousand impressions (CPM). Social networks such as Facebook, Instagram, LinkedIn, Reddit, Pinterest, TikTok, and Twitter have also adopted pay-per-click as one of their advertising models. The amount advertisers pay depends on the publisher and is usually driven by two major factors: the quality of the ad, and the maximum bid the advertiser is willing to pay per click measured against its competitors' bids. In general, the higher the quality of the ad, the lower the cost per click is charged, and vice versa. However, websites can offer PPC ads. Websites that utilize PPC ads will display an advertisement when a query (keyword or phrase) matches an advertiser's keyword list that has been added in different ad groups, or when a content site displays relevant content. Such advertisements are called sponsored links or sponsored ads, and appear adjacent to, above, or beneath organic results on search engine results pages (SERPs), or anywhere a web developer chooses on a content site. The PPC advertising model is open to abuse through click fraud, although Google and others have implemented automated systems to guard against abusive clicks by competitors or corrupt web developers. Purpose Pay-per-click, along with cost per impression (CPM) and cost per order, is used to assess the cost-effectiveness and profitability of internet marketing and drive the cost of running an advertisement campaign as low as possible while retaining set goals. In Cost Per Thousand Impressions (CPM), the advertiser only pays for every 1000 impressions of the ad. Pay-per-click (PPC) has an advantage over cost-per-impression in that it conveys information about how effective the advertising was. Clicks are a way to measure attention and interest. If the main purpose of an ad is to generate a click, or more specifically drive traffic to a destination, then pay-per-click is the preferred metric. The quality and placement of the advertisement will affect click through rates and the resulting total pay-per-click cost. Construction Cost-per-click (CPC) is calculated by dividing the advertising cost by the number of clicks generated by an advertisement. The basic formula is: Cost-per-click ($) = Advertising cost ($) / Ads clicked (#) There are two primary models for determining pay-per-click: flat-rate and bid-based. In both cases, the advertiser must consider the potential value of a click from a given source. This value is based on the type of individual the advertiser is expecting to receive as a visitor to their website, and what the advertiser can gain from that visit, which is usually short-term or long-term revenue. 
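The formula above is simple to compute; the snippet below merely restates it (together with the CPM figure mentioned earlier) using invented campaign numbers, purely as an illustration.

```python
# Trivial illustration of the CPC formula stated above, plus the related CPM figure.
# The spend, click and impression counts are made up.

def cost_per_click(ad_spend: float, clicks: int) -> float:
    """CPC ($) = advertising cost ($) / ads clicked (#)."""
    return ad_spend / clicks

def cost_per_mille(ad_spend: float, impressions: int) -> float:
    """CPM ($) = advertising cost per 1,000 impressions."""
    return ad_spend / impressions * 1000

spend, clicks, impressions = 450.0, 300, 120_000   # hypothetical campaign
print(f"CPC: ${cost_per_click(spend, clicks):.2f}")        # $1.50
print(f"CPM: ${cost_per_mille(spend, impressions):.2f}")   # $3.75
```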
As with other forms of advertising, targeting is key, and factors that often play into PPC campaigns include the target's interest (often defined by a search term they have entered into a search engine or the content of a page that they are browsing), intent (e.g., to purchase or not), location (for geo targeting), a device used (e.g. whether the user is searching from a desktop device or mobile) and the day and time that they are browsing. Flat-rate PPC In the flat-rate model, the advertiser and publisher agree upon a fixed amount that will be paid for each click. In many cases, the publisher has a rate card that lists the pay-per-click (PPC) within different areas of their website or network. These various amounts are often related to the content on pages, with content that generally attracts more valuable visitors having a higher cost per click than content that attracts less valuable visitors. However, in many cases, advertisers can negotiate lower rates, especially when committing to a long-term or high-value contract. The flat-rate model is particularly common on comparison shopping engines, which typically publish rate cards. However, these rates are sometimes minimal, and advertisers can pay more for greater visibility. These sites are usually neatly compartmentalized into product or service categories, allowing a high degree of targeting by advertisers. In many cases, the entire core content of these sites is paid ads. Bid-based PPC The advertiser signs a contract that allows them to compete against other advertisers in a private auction hosted by a publisher or, more commonly, an advertising network. Each advertiser informs the host of the maximum amount that he or she is willing to pay for a given ad spot (often based on a keyword), usually using online tools to do so. The auction plays out in an automated fashion every time a visitor triggers the ad spot. When the ad spot is part of a search engine results page (SERP), the automated auction takes place whenever a search for the keyword that is being bid upon occurs. All bids for the keyword that targets the searcher's Geo-location, the day and time of the search, etc. are then compared, and the winner is determined. All this happens in real-time, therefore this is called real-time-bidding or RTB, and in a fraction of a second. In situations where there are multiple ad spots, a common occurrence on SERPs, there can be multiple winners whose positions on the page are influenced by the amount each has bid and the quality of their ad. The bid and Quality Score are used to give each advertiser's advert an ad rank. The ad with the highest ad rank shows up first. The predominant three match types for both Google and Bing are Broad, Exact, and Phrase Match. Google Ads and Bing Ads also offer the Broad Match Modifier type (although Google retired it in July 2021) which differs from broad match in that the keyword must contain the actual keyword terms in any order and doesn't include relevant variations of the terms. In addition to ad spots on SERPs, the major advertising networks allow for contextual ads to be placed on the properties of 3rd-parties with whom they have partnered. These publishers sign up to host ads on behalf of the network. In return, they receive a portion of the ad revenue that the network generates, which can be anywhere from 50% to over 80% of the gross revenue paid by advertisers. 
These properties are often referred to as a content network and the ads on them as contextual ads because the ad spots are associated with keywords based on the context of the page on which they are found. In general, ads on content networks have a much lower click-through rate (CTR) and conversion rate (CR) than ads found on SERPs and consequently are less highly valued. Content network properties can include websites, newsletters, and e-mails. Advertisers pay for every single click they receive, with the actual amount paid based on the amount of bid. It is common practice amongst auction hosts to charge a winning bidder just slightly more (e.g. one penny) than the next highest bidder or the actual amount bid, whichever is lower. This avoids situations where bidders are constantly adjusting their bids by very small amounts to see if they can still win the auction while paying just a little bit less per click. In order to maximize success and achieve scale, automated bid management systems can be deployed. These systems can be used directly by the advertiser, though they are more commonly used by advertising agencies that offer PPC bid management as a service. These tools generally allow for bid management at scale, with thousands or even millions of PPC bids controlled by a highly automated system. The system generally sets each bid based on the goal that has been set for it, such as maximizing profit, maximizing traffic, getting the very targeted customer at break even, and so forth. The system is usually tied into the advertiser's website and fed the results of each click, which then allows it to set bids. The effectiveness of these systems is directly related to the quality and quantity of the performance data that they have to work with — low-traffic ads can lead to a scarcity of data problem that renders many bid management tools useless at worst, or inefficient at best. As a rule, the contextual advertising system (Google Ads, Yandex.Direct, etc.) uses an auction approach as the advertising payment system. History Several sites claim to be the first PPC model on the web, with many appearing in the mid-1990s. For example, in 1996, the first known and documented version of a PPC was included in a web directory called Planet Oasis. This was a desktop application featuring links to informational and commercial websites, and it was developed by Ark Interface II, a division of Packard Bell NEC Computers. The initial reactions from commercial companies to Ark Interface II's "pay-per-visit" model were skeptical, however. By the end of 1997, over 400 major brands were paying between $.005 to $.25 per click plus a placement fee. In February 1998 Jeffrey Brewer of Goto.com, a 25-employee startup company (later Overture, now part of Yahoo!), presented a pay per click search engine proof-of-concept to the TED conference in California. This presentation and the events that followed created the PPC advertising system. Credit for the concept of the PPC model is generally given to Idealab and Goto.com founder Bill Gross. Google started search engine advertising in December 1999. It was not until October 2000 that the AdWords system was introduced, allowing advertisers to create text ads for placement on the Google search engine. However, PPC was only introduced in 2002; until then, advertisements were charged at cost-per-thousand impressions or Cost per mille (CPM). 
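The bid-based mechanics described above (ad rank as bid times quality, with a winner charged just enough to stay ahead of the next bidder) can be sketched as a simplified generalized second-price auction. This is a hedged illustration only: real platforms use more elaborate ranking and pricing formulas, and the advertiser names, quality scores and one-cent increment below are invented.

```python
# Simplified generalized-second-price auction sketch: rank by ad rank = bid x quality,
# then charge each winner the smallest bid that would still beat the next ad rank.

def run_auction(bidders, increment=0.01):
    """bidders: list of (name, max_bid, quality_score). Returns ranked (name, price) pairs."""
    ranked = sorted(bidders, key=lambda b: b[1] * b[2], reverse=True)   # by ad rank
    results = []
    for pos, (name, bid, quality) in enumerate(ranked):
        if pos + 1 < len(ranked):
            _, next_bid, next_quality = ranked[pos + 1]
            # smallest bid that still beats the next ad rank, capped at the max bid
            price = min(bid, next_bid * next_quality / quality + increment)
        else:
            price = increment          # no competitor below: pay only the minimum
        results.append((name, round(price, 2)))
    return results

bidders = [("alpha", 2.00, 8.0), ("beta", 3.00, 4.0), ("gamma", 1.00, 9.0)]
print(run_auction(bidders))
# ad ranks: alpha 16, beta 12, gamma 9 -> alpha pays ~1.51, beta ~2.26, gamma 0.01
```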
Overture has filed a patent infringement lawsuit against Google, saying the rival search service overstepped its bounds with its ad placement tools. Although GoTo.com started PPC in 1998, Yahoo! did not start syndicating GoTo.com (later Overture) advertisers until November 2001. Prior to this, Yahoo's primary source of SERPs advertising included contextual IAB advertising units (mainly 468x60 display ads). When the syndication contract with Yahoo! was up for renewal in July 2003, Yahoo! announced its intent to acquire Overture for $1.63 billion. Today, companies such as adMarketplace, ValueClick and acknowledge offering PPC services, as an alternative to AdWords and AdCenter. Among PPC providers, Google Ads (formerly Google AdWords), Microsoft adCenter and Yahoo! Search Marketing had been the three largest network operators, all three operating under a bid-based model. For example, in the year 2014, PPC(AdWords) or online advertising contributed approximately US$45 billion of the total US$66 billion of Google's annual revenue In 2010, Yahoo and Microsoft launched their combined effort against Google, and Microsoft's Bing began to be the search engine that Yahoo used to provide its search results. Since they joined forces, their PPC platform was renamed AdCenter. Their combined network of third-party sites that allow AdCenter ads to populate banner and text ads on their site is called BingAds. PPC Statistics Customers are 50% more likely to purchase something after clicking a paid ad. SMEs spend $108,000 to $120,000 annually on PPC ads. 57.5% of users don't recognize paid ads when they see them. Click bots and fake traffic cost online advertisers $35 Billion Legal In 2012, Google was initially ruled to have engaged in misleading and deceptive conduct by the Australian Competition & Consumer Commission (ACCC) in possibly the first legal case of its kind. The ACCC ruled that Google was responsible for the content of its sponsored AdWords ads that had shown links to a car sales website Carsales. The ads had been shown by Google in response to a search for Honda Australia. The ACCC said the ads were deceptive, as they suggested Carsales was connected to the Honda company. The ruling was later overturned when Google appealed to the High Court of Australia. Google was found not liable for the misleading advertisements run through AdWords despite the fact that the ads were served up by Google and created using the company's tools. Click fraud A common concern amongst advertisers is the practice known as "click fraud". This takes two forms: Publishers who illegitimately click on or fraudulently arrange for clicks to be generated on adverts, in order to increase their own publisher revenues. In 2018, the FBI, in partnership with Google and other major industry ad platforms, cracked down on an illegal ad fraud scheme known as "3ve", which was estimated to have defrauded advertisers several millions of dollars in combined ad costs. The case highlighted the extent of ad fraud; as of 2018, ad fraud annual revenue was on track to be worth more than the illicit drug trade, with over $19 billion estimated to have been stolen by click fraudsters. Advertisers who attempt to derail competitors' adverts, by clicking on them in an effort to raise their competitors' costs in order to give themselves an unfair advantage in the advertising space. The Google Ads platforms claim to be able to identify such clicks, and label such traffic as "invalid clicks". 
See also Advertising Click-through rate Digital marketing Opportunity to see Pay-per-call advertising Paid to click Search engine marketing Search engine optimization References Compensation methods Pricing Online advertising methods Internet terminology Contexts for auctions Search engine optimization Digital marketing Marketing analytics
Pay-per-click
[ "Technology" ]
2,998
[ "Computing terminology", "Internet terminology" ]
1,027,229
https://en.wikipedia.org/wiki/Method%20of%20analytic%20tableaux
In proof theory, the semantic tableau (plural: tableaux), also called an analytic tableau, truth tree, or simply tree, is a decision procedure for sentential and related logics, and a proof procedure for formulae of first-order logic. An analytic tableau is a tree structure computed for a logical formula, having at each node a subformula of the original formula to be proved or refuted. Computation constructs this tree and uses it to prove or refute the whole formula. The tableau method can also determine the satisfiability of finite sets of formulas of various logics. It is the most popular proof procedure for modal logics. A method of truth trees contains a fixed set of rules for producing trees from a given logical formula, or set of logical formulas. Those trees accumulate further formulas on each branch, and in some cases a branch can come to contain both a formula and its negation, which is to say, a contradiction. In that case, the branch is said to close. If every branch in a tree closes, the tree itself is said to close. By virtue of the rules for construction of tableaux, a closed tree is a proof that the original formula, or set of formulas, used to construct it was itself self-contradictory, and therefore false. Conversely, a tableau can also prove that a logical formula is tautologous: if a formula is tautologous, its negation is a contradiction, so a tableau built from its negation will close. History In his Symbolic Logic Part II, Charles Lutwidge Dodgson (also known by his literary pseudonym, Lewis Carroll) introduced the Method of Trees, the earliest modern use of a truth tree. The method of semantic tableaux was invented by the Dutch logician Evert Willem Beth (Beth 1955) and simplified, for classical logic, by Raymond Smullyan (Smullyan 1968, 1995). Smullyan's simplification, "one-sided tableaux", is described here. Smullyan's method has been generalized to arbitrary many-valued propositional and first-order logics by Walter Carnielli (Carnielli 1987). Tableaux can be intuitively seen as sequent systems upside-down. This symmetrical relation between tableaux and sequent systems was formally established in (Carnielli 1991). Propositional logic Background A formula in propositional logic consists of letters, which stand for propositions, and connectives for conjunction, disjunction, conditionals, biconditionals, and negation. The truth or falsehood of a proposition is called its truth value. A formula, or set of formulas, is said to be satisfiable if there is a possible assignment of truth-values to the propositional letters such that the entire formula, which combines the letters with connectives, is itself true. Such an assignment is said to satisfy the formula. A tableau checks whether a given set of formulae is satisfiable or not. It can be used to check either validity or entailment: a formula is valid if its negation is unsatisfiable, and formulae A1, ..., An entail a formula B if the set {A1, ..., An, ¬B} is unsatisfiable. Notational variants for the logical connectives are common in the literature; this article uses one typical choice of symbols for conjunction, disjunction, negation, the conditional, and the biconditional. 
General method The main principle of propositional tableaux is to attempt to "break" complex formulae into smaller ones until complementary pairs of literals are produced or no further expansion is possible. The method works on a tree whose nodes are labeled with formulae. At each step, this tree is modified; in the propositional case, the only allowed changes are additions of a node as descendant of a leaf. The procedure starts by generating the tree made of a chain of all formulae in the set to prove unsatisfiability. Then, the following procedure may be repeatedly applied nondeterministically: Pick an open leaf node. (The leaf node in the initial chain is marked open). Pick an applicable node on the branch above the selected node. Apply the applicable node, which corresponds to expanding the tree below the selected leaf node based on some expansion rule (detailed below). For every newly created node that is both a literal/negated literal, and whose complement appears in a prior node on the same branch, mark the branch as closed. Mark all other newly created nodes as open. Eventually, this procedure will terminate, because at some point every applicable node gets applied, and the expansion rules guarantee that every node in the tree is simpler than the applicable node used to create it. The principle of tableau is that formulae in nodes of the same branch are considered in conjunction while the different branches are considered to be disjuncted. As a result, a tableau is a tree-like representation of a formula that is a disjunction of conjunctions. This formula is equivalent to the set to prove unsatisfiability. The procedure modifies the tableau in such a way that the formula represented by the resulting tableau is equivalent to the original one. One of these conjunctions may contain a pair of complementary literals, in which case that conjunction is proved to be unsatisfiable. If all conjunctions are proved unsatisfiable, the original set of formulae is unsatisfiable. And Whenever a branch of a tableau contains a formula that is the conjunction of two formulae, these two formulae are both consequences of that formula. This fact can be formalized by the following rule for expansion of a tableau: () If a branch of the tableau contains a conjunctive formula , add to its leaf the chain of two nodes containing the formulae and This rule is generally written as follows: A variant of this rule allows a node to contain a set of formulae rather than a single one. In this case, the formulae in this set are considered in conjunction, so one can add at the end of a branch containing . More precisely, if a node on a branch is labeled , one can add to the branch the new leaf . Or If a branch of a tableau contains a formula that is a disjunction of two formulae, such as , the following rule can be applied: () If a node on a branch contains a disjunctive formula , then create two sibling children to the leaf of the branch, containing the formulae and , respectively. This rule splits a branch into two, differing only for the final node. Since branches are considered in disjunction to each other, the two resulting branches are equivalent to the original one, as the disjunction of their non-common nodes is precisely . 
The rule for disjunction is generally formally written using the symbol for separating the formulae of the two distinct nodes to be created: If nodes are assumed to contain sets of formulae, this rule is replaced by: if a node is labeled , a leaf of the branch this node is in can be appended two sibling child nodes labeled and , respectively. Not The aim of tableaux is to generate progressively simpler formulae until pairs of opposite literals are produced or no other rule can be applied. Negation can be treated by initially making formulae in negation normal form, so that negation only occurs in front of literals. Alternatively, one can use De Morgan's laws during the expansion of the tableau, so that for example is treated as . Rules that introduce or remove a pair of negations (such as in ) are also used in this case (otherwise, there would be no way of expanding a formula like : Closure Every tableau can be considered as a graphical representation of a formula, which is equivalent to the set the tableau is built from. This formula is as follows: each branch of the tableau represents the conjunction of its formulae; the tableau represents the disjunction of its branches. The expansion rules transforms a tableau into one having an equivalent represented formula. Since the tableau is initialized as a single branch containing the formulae of the input set, all subsequent tableaux obtained from it represent formulae which are equivalent to that set (in the variant where the initial tableau is the single node labeled true, the formulae represented by tableaux are consequences of the original set.) The method of tableaux works by starting with the initial set of formulae and then adding to the tableau simpler and simpler formulae until contradiction is shown in the simple form of opposite literals. Since the formula represented by a tableau is the disjunction of the formulae represented by its branches, contradiction is obtained when every branch contains a pair of opposite literals. Once a branch contains a literal and its negation, its corresponding formula is unsatisfiable. As a result, this branch can be now "closed", as there is no need to further expand it. If all branches of a tableau are closed, the formula represented by the tableau is unsatisfiable; therefore, the original set is unsatisfiable as well. Obtaining a tableau where all branches are closed is a way for proving the unsatisfiability of the original set. In the propositional case, one can also prove that satisfiability is proved by the impossibility of finding a closed tableau, provided that every expansion rule has been applied everywhere it could be applied. In particular, if a tableau contains some open (non-closed) branches and every formula that is not a literal has been used by a rule to generate a new node on every branch the formula is in, the set is satisfiable. This rule takes into account that a formula may occur in more than one branch (this is the case if there is at least a branching point "below" the node). In this case, the rule for expanding the formula has to be applied so that its conclusion(s) are appended to all of these branches that are still open, before one can conclude that the tableau cannot be further expanded and that the formula is therefore satisfiable. Set-labeled tableau A variant of tableau is to label nodes with sets of formulae rather than single formulae. In this case, the initial tableau is a single node labeled with the set to be proved satisfiable. 
The formulae in a set are therefore considered to be in conjunction. The rules of expansion of the tableau can now work on the leaves of the tableau, ignoring all internal nodes. For conjunction, the rule is based on the equivalence of a set containing a conjunction with the set containing both and in place of it. In particular, if a leaf is labeled with , a node can be appended to it with label : For disjunction, a set is equivalent to the disjunction of the two sets and . As a result, if the first set labels a leaf, two children can be appended to it, labeled with the latter two formulae. Finally, if a set contains both a literal and its negation, this branch can be closed: A tableau for a given finite set X is a finite (upside down) tree with root X in which all child nodes are obtained by applying the tableau rules to their parents. A branch in such a tableau is closed if its leaf node contains "closed". A tableau is closed if all its branches are closed. A tableau is open if at least one branch is not closed. Below are two closed tableaux for the set Each rule application is marked at the right hand side. Both achieve the same effect, the first closes faster. The only difference is the order in which the reduction is performed. and second, longer one, with the rules applied in a different order: The first tableau closes after only one rule application while the second one misses the mark and takes a lot longer to close. Clearly, we would prefer to always find the shortest closed tableaux but it can be shown that one single algorithm that finds the shortest closed tableaux for all input sets of formulae cannot exist. The three rules , and given above are then enough to decide if a given set of formulae in negated normal form are jointly satisfiable: Just apply all possible rules in all possible orders until we find a closed tableau for or until we exhaust all possibilities and conclude that every tableau for is open. In the first case, is jointly unsatisfiable and in the second the case the leaf node of the open branch gives an assignment to the atomic formulae and negated atomic formulae which makes jointly satisfiable. Classical logic actually has the rather nice property that we need to investigate only (any) one tableau completely: if it closes then is unsatisfiable and if it is open then is satisfiable. But this property is not generally enjoyed by other logics. These rules suffice for all of classical logic by taking an initial set of formulae X and replacing each member C by its logically equivalent negated normal form C' giving a set of formulae X' . We know that X is satisfiable if and only if X' is satisfiable, so it suffices to search for a closed tableau for X' using the procedure outlined above. By setting we can test whether the formula A is a tautology of classical logic: If the tableau for closes then is unsatisfiable and so A is a tautology since no assignment of truth values will ever make A false. Otherwise any open leaf of any open branch of any open tableau for gives an assignment that falsifies A. Conditional Classical propositional logic usually has a connective to denote material implication. If we write this connective as ⇒, then the formula A ⇒ B stands for "if A then B". It is possible to give a tableau rule for breaking down A ⇒ B into its constituent formulae. Similarly, we can give one rule each for breaking down each of ¬(A ∧ B), ¬(A ∨ B), ¬(¬A), and ¬(A ⇒ B). 
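The decision procedure just described (convert to negation normal form, expand with the conjunction and disjunction rules, close a branch on complementary literals, and test tautology by refuting the negation) is small enough to sketch in code. The following is a minimal illustrative implementation rather than code from any of the works cited here; the tuple encoding of formulas and all function names are invented for the example, and no attempt is made to find the shortest closed tableau.

```python
# Minimal propositional tableau-style satisfiability checker (illustrative sketch).
# Formulas are nested tuples:
#   ("atom", "p"), ("not", F), ("and", F, G), ("or", F, G), ("implies", F, G)
# The checker rewrites a formula into negation normal form (NNF) and then expands
# it with the (and)/(or) rules, closing a branch as soon as it holds a literal
# together with its complement.

def nnf(f, negate=False):
    """Push negations inward so that 'not' only applies to atoms."""
    op = f[0]
    if op == "atom":
        return ("not", f) if negate else f
    if op == "not":
        return nnf(f[1], not negate)
    if op == "implies":                      # A -> B  ==  (not A) or B
        return nnf(("or", ("not", f[1]), f[2]), negate)
    if op == "and":
        new_op = "or" if negate else "and"   # De Morgan when negating
        return (new_op, nnf(f[1], negate), nnf(f[2], negate))
    if op == "or":
        new_op = "and" if negate else "or"
        return (new_op, nnf(f[1], negate), nnf(f[2], negate))
    raise ValueError(f"unknown connective: {op}")

def is_literal(f):
    return f[0] == "atom" or (f[0] == "not" and f[1][0] == "atom")

def branch_is_open(formulas, literals):
    """Expand the set-labeled branch; return True if some branch stays open."""
    if not formulas:
        return True                          # fully expanded, no contradiction
    f, rest = formulas[0], formulas[1:]
    if is_literal(f):
        complement = f[1] if f[0] == "not" else ("not", f)
        if complement in literals:
            return False                     # (id): close this branch
        return branch_is_open(rest, literals | {f})
    if f[0] == "and":                        # (and) rule: keep both conjuncts
        return branch_is_open([f[1], f[2]] + rest, literals)
    if f[0] == "or":                         # (or) rule: split into two branches
        return (branch_is_open([f[1]] + rest, literals)
                or branch_is_open([f[2]] + rest, literals))
    raise ValueError("formula not in NNF")

def satisfiable(formulas):
    return branch_is_open([nnf(f) for f in formulas], frozenset())

def tautology(f):
    """A is a tautology iff the tableau for {not A} closes."""
    return not satisfiable([("not", f)])

if __name__ == "__main__":
    p, q = ("atom", "p"), ("atom", "q")
    # {p or q, not p, not q} is unsatisfiable (the closed-tableau example above).
    print(satisfiable([("or", p, q), ("not", p), ("not", q)]))   # False
    print(tautology(("implies", ("and", p, q), p)))              # True
```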
Together these rules would give a terminating procedure for deciding whether a given set of formulae is simultaneously satisfiable in classical logic since each rule breaks down one formula into its constituents but no rule builds larger formulae out of smaller constituents. Thus we must eventually reach a node that contains only atoms and negations of atoms. If this last node matches (id) then we can close the branch, otherwise it remains open. But note that the following equivalences hold in classical logic where (...) = (...) means that the left hand side formula is logically equivalent to the right hand side formula: If we start with an arbitrary formula C of classical logic, and apply these equivalences repeatedly to replace the left hand sides with the right hand sides in C, then we will obtain a formula C' which is logically equivalent to C but which has the property that C' contains no implications, and ¬ appears in front of atomic formulae only. Such a formula is said to be in negation normal form and it is possible to prove formally that every formula C of classical logic has a logically equivalent formula C' in negation normal form. That is, C is satisfiable if and only if C' is satisfiable. Propositional tableau with unification The above rules for propositional tableau can be simplified by using uniform notation. In uniform notation, each formula is either of type (alpha) or of type (beta). Each formula of type alpha is assigned the two components , and each formula of type beta is assigned the two components . Formulae of type alpha can be thought of as being conjunctive, as both and are implied by being true. Formulae of type beta can be thought of as being disjunctive, as either or is implied by being true. The tables below show how to determine the type, and the components, of any given propositional formula: In each table, the left-most column shows all the possible structures for the formulae of type alpha or beta, and the right-most columns show their respective components. Alternatively, the rules for uniform notation can be expressed using signed formulae: When constructing a propositional tableau using the above notation, whenever one encounters a formula of type alpha, its two components are added to the current branch that is being expanded. Whenever one encounters a formula of type beta on some branch , one can split into two branches, one with the set {, } of formulae, and the other with the set {, } of formulae. First-order logic tableau Tableaux are extended to first-order predicate logic by two rules for dealing with universal and existential quantifiers, respectively. Two different sets of rules can be used; both employ a form of Skolemization for handling existential quantifiers, but differ on the handling of universal quantifiers. The set of formulae to check for validity is here supposed to contain no free variables; this is not a limitation as free variables are implicitly universally quantified, so universal quantifiers over these variables can be added, resulting in a formula with no free variables. First-order tableau without unification A first-order formula implies all formulae where is a ground term. The following inference rule is therefore correct: where is an arbitrary ground term Contrary to the rules for the propositional connectives, multiple applications of this rule to the same formula may be necessary. As an example, the set can only be proved unsatisfiable if both and are generated from .
Existential quantifiers are dealt with by means of Skolemization. In particular, a formula with a leading existential quantifier like generates its Skolemization , where is a new constant symbol. where is a new constant symbol The Skolem term is a constant (a function of arity 0) because the quantification over does not occur within the scope of any universal quantifier. If the original formula contained some universal quantifiers such that the quantification over was within their scope, these quantifiers have evidently been removed by the application of the rule for universal quantifiers. The rule for existential quantifiers introduces new constant symbols. These symbols can be used by the rule for universal quantifiers, so that can generate even if was not in the original formula but is a Skolem constant created by the rule for existential quantifiers. The above two rules for universal and existential quantifiers are correct, and so are the propositional rules: if a set of formulae generates a closed tableau, this set is unsatisfiable. Completeness can also be proved: if a set of formulae is unsatisfiable, there exists a closed tableau built from it by these rules. However, actually finding such a closed tableau requires a suitable policy of application of rules. Otherwise, an unsatisfiable set can generate an infinitely growing tableau. As an example, the set is unsatisfiable, but a closed tableau is never obtained if one unwisely keeps applying the rule for universal quantifiers to , generating for example . A closed tableau can always be found by ruling out this and similar "unfair" policies of application of tableau rules. The rule for universal quantifiers is the only non-deterministic rule, as it does not specify which term to instantiate with. Moreover, while the other rules need to be applied only once for each formula and each path the formula is in, this one may require multiple applications. Application of this rule can however be restricted by delaying it until no other rule is applicable, and by restricting it to ground terms that already appear in the path of the tableau. The variant of tableaux with unification shown below aims at solving the problem of non-determinism. First-order tableau with unification The main problem of tableau without unification is how to choose a ground term for the universal quantifier rule. Indeed, every possible ground term can be used, but clearly most of them might be useless for closing the tableau. A solution to this problem is to "delay" the choice of the term to the time when the consequent of the rule allows closing at least a branch of the tableau. This can be done by using a variable instead of a term, so that generates , and then allowing substitutions to later replace with a term. The rule for universal quantifiers becomes: where is a variable not occurring anywhere else in the tableau While the initial set of formulae is supposed not to contain free variables, a formula of the tableau may contain the free variables generated by this rule. These free variables are implicitly considered universally quantified. This rule employs a variable instead of a ground term. What is gained by this change is that these variables can then be given a value when a branch of the tableau can be closed, solving the problem of generating terms that might be useless.
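Since the closure rule stated below relies on computing a most general unifier of two literals, the following minimal Python sketch shows one way syntactic unification of first-order terms can be computed. The tuple-based term representation and the helper names are assumptions made for this illustration, not part of the tableau calculus itself.

def substitute(term, subst):
    # apply a substitution to a term, chasing chains of variable bindings
    if term[0] == "var":
        return substitute(subst[term[1]], subst) if term[1] in subst else term
    return (term[0],) + tuple(substitute(a, subst) for a in term[1:])

def occurs(var, term, subst):
    # occurs check: does the variable appear in the (substituted) term?
    term = substitute(term, subst)
    if term[0] == "var":
        return term[1] == var
    return any(occurs(var, a, subst) for a in term[1:])

def unify(s, t, subst=None):
    """Return a most general unifier of s and t (a dict), or None if none exists."""
    subst = dict(subst or {})
    s, t = substitute(s, subst), substitute(t, subst)
    if s == t:
        return subst
    if s[0] == "var":
        if occurs(s[1], t, subst):         # a variable cannot unify with a term containing it
            return None
        subst[s[1]] = t
        return subst
    if t[0] == "var":
        return unify(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):   # different function symbols or arities
        return None
    for a, b in zip(s[1:], t[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# Example: P(x, f(a)) and P(b, f(y)) unify with the substitution {x -> b, y -> a}.
x, y = ("var", "x"), ("var", "y")
print(unify(("P", x, ("f", ("a",))), ("P", ("b",), ("f", y))))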
If is the most general unifier of two literals and , where and the negation of occur in the same branch of the tableau, can be applied at the same time to all formulae of the tableau. As an example, can be proved unsatisfiable by first generating ; the negation of this literal is unifiable with , the most general unifier being the substitution that replaces with ; applying this substitution results in replacing with , which closes the tableau. This rule closes at least a branch of the tableau: the one containing the considered pair of literals. However, the substitution has to be applied to the whole tableau, not only to these two literals. This is expressed by saying that the free variables of the tableau are rigid: if an occurrence of a variable is replaced by something else, all other occurrences of the same variable must be replaced in the same way. Formally, the free variables are (implicitly) universally quantified and all formulae of the tableau are within the scope of these quantifiers. Existential quantifiers are dealt with by Skolemization. Contrary to the tableau without unification, Skolem terms may not be simple constants. Indeed, formulae in a tableau with unification may contain free variables, which are implicitly considered universally quantified. As a result, a formula like may be within the scope of universal quantifiers; if this is the case, the Skolem term is not a simple constant but a term made of a new function symbol and the free variables of the formula. where is a new function symbol and the free variables of This rule incorporates a simplification over a rule where are the free variables of the branch, not of alone. This rule can be further simplified by the reuse of a function symbol if it has already been used in a formula that is identical to up to variable renaming. The formula represented by a tableau is obtained in a way that is similar to the propositional case, with the additional assumption that free variables are considered universally quantified. As for the propositional case, formulae in each branch are conjoined and the resulting formulae are disjoined. In addition, all free variables of the resulting formula are universally quantified. All these quantifiers have the whole formula in their scope. In other words, if is the formula obtained by disjoining the conjunction of the formulae in each branch, and are the free variables in it, then is the formula represented by the tableau. The following considerations apply: The assumption that free variables are universally quantified is what makes the application of a most general unifier a sound rule: since means that is true for every possible value of , then is true for the term that the most general unifier replaces with. Free variables in a tableau are rigid: all occurrences of the same variable have to be replaced with the same term. Every variable can be considered a symbol representing a term that is yet to be decided. This is a consequence of free variables being assumed universally quantified over the whole formula represented by the tableau: if the same variable occurs free in two different nodes, both occurrences are in the scope of the same quantifier. As an example, if the formulae in two nodes are and , where is free in both, the formula represented by the tableau is something in the form . This formula implies that is true for any value of , but does not in general imply for two different terms and , as these two terms may in general take different values.
This means that cannot be replaced by two different terms in and . Free variables in a formula to check for validity are also considered universally quantified. However, these variables cannot be left free when building a tableau, because the tableau rules work on the converse of the formula but still treat free variables as universally quantified. For example, is not valid (it is not true in the model where , and the interpretation where ). Consequently, is satisfiable (it is satisfied by the same model and interpretation). However, a closed tableau could be generated with and , and substituting with would generate a closure. A correct procedure is to first make universal quantifiers explicit, thus generating . The following two variants are also correct. Applying to the whole tableau a substitution to the free variables of the tableau is a correct rule, provided that this substitution is free for the formula representing the tableau. In other words, applying such a substitution leads to a tableau whose formula is still a consequence of the input set. Using most general unifiers automatically ensures that the condition of freeness for the tableau is met. While in general every variable has to be replaced with the same term in the whole tableau, there are some special cases in which this is not necessary. Tableaux with unification can be proved complete: if a set of formulae is unsatisfiable, it has a tableau-with-unification proof. However, actually finding such a proof may be a difficult problem. Contrary to the case without unification, applying a substitution can modify the existing part of a tableau; while applying a substitution closes at least a branch, it may make other branches impossible to close (even if the set is unsatisfiable). A solution to this problem is delayed instantiation: no substitution is applied until one that closes all branches at the same time is found. With this variant, a proof for an unsatisfiable set can always be found by a suitable policy of application of the other rules. This method however requires the whole tableau to be kept in memory: the general method closes branches, which can then be discarded, while this variant does not close any branch until the end. The problem that some tableaux that can be generated are impossible to close even if the set is unsatisfiable is common to other sets of tableau expansion rules: even if some specific sequences of application of these rules allow constructing a closed tableau (if the set is unsatisfiable), some other sequences lead to tableaux that cannot be closed. General solutions for these cases are outlined in the "Searching for a closed tableau" section. Tableau calculi and their properties A tableau calculus is a set of rules that allows building and modification of a tableau. Propositional tableau rules, tableau rules without unification, and tableau rules with unification, are all tableau calculi. Some important properties a tableau calculus may or may not possess are completeness, destructiveness, and proof confluence. A tableau calculus is called complete if it allows building a tableau proof for every given unsatisfiable set of formulae. The tableau calculi mentioned above can be proved complete. A remarkable difference between tableau with unification and the other two calculi is that the latter two calculi only modify a tableau by adding new nodes to it, while the former one allows substitutions to modify the existing part of the tableau.
More generally, tableau calculi are classed as destructive or non-destructive depending on whether they only add new nodes to the tableau or not. Tableau with unification is therefore destructive, while propositional tableau and tableau without unification are non-destructive. Proof confluence is the property of a tableau calculus being able to obtain a proof for an arbitrary unsatisfiable set from an arbitrary tableau, assuming that this tableau has itself been obtained by applying the rules of the calculus. In other words, in a proof confluent tableau calculus, from an unsatisfiable set one can apply whatever set of rules and still obtain a tableau from which a closed one can be obtained by applying some other rules. Proof procedures A tableau calculus is simply a set of rules that prescribes how a tableau can be modified. A proof procedure is a method for actually finding a proof (if one exists). In other words, a tableau calculus is a set of rules, while a proof procedure is a policy of application of these rules. Even if a calculus is complete, not every possible choice of application of rules leads to a proof of an unsatisfiable set. For example, is unsatisfiable, but both tableaux with unification and tableaux without unification allow the rule for the universal quantifiers to be applied repeatedly to the last formula, while simply applying the rule for disjunction to the third one would directly lead to closure. For proof procedures, a definition of completeness has been given: a proof procedure is strongly complete if it allows finding a closed tableau for any given unsatisfiable set of formulae. Proof confluence of the underlying calculus is relevant to completeness: proof confluence is the guarantee that a closed tableau can always be generated from an arbitrary partially constructed tableau (if the set is unsatisfiable). Without proof confluence, the application of a 'wrong' rule may result in the impossibility of making the tableau complete by applying other rules. Propositional tableaux and tableaux without unification have strongly complete proof procedures. In particular, a complete proof procedure is that of applying the rules in a fair way. This is because the only way such calculi cannot generate a closed tableau from an unsatisfiable set is by not applying some applicable rules. For propositional tableaux, fairness amounts to expanding every formula in every branch. More precisely, for every formula and every branch the formula is in, the rule having the formula as a precondition has been used to expand the branch. A fair proof procedure for propositional tableaux is strongly complete. For first-order tableaux without unification, the condition of fairness is similar, with the exception that the rule for universal quantifiers might require more than one application. Fairness amounts to expanding every universal quantifier infinitely often. In other words, a fair policy of application of rules cannot keep applying other rules without expanding, once in a while, every universal quantifier in every branch that is still open. Searching for a closed tableau If a tableau calculus is complete, every unsatisfiable set of formulae has an associated closed tableau. While this tableau can always be obtained by applying some of the rules of the calculus, the problem of which rules to apply for a given formula still remains.
As a result, completeness does not automatically imply the existence of a feasible policy of application of rules that always leads to a closed tableau for every given unsatisfiable set of formulae. While a fair proof procedure is complete for ground tableau and tableau without unification, this is not the case for tableau with unification. A general solution for this problem is that of searching the space of tableaux until a closed one is found (if any exists, that is, the set is unsatisfiable). In this approach, one starts with an empty tableau and then recursively applies every possible applicable rule. This procedure visits an (implicit) tree whose nodes are labeled with tableaux, and such that the tableau in a node is obtained from the tableau in its parent by applying one of the valid rules. Since each branch can be infinite, this tree has to be visited breadth-first rather than depth-first. This requires a large amount of space, as the breadth of the tree can grow exponentially. A method that may visit some nodes more than once but works in polynomial space is to visit in a depth-first manner with iterative deepening: one first visits the tree depth first up to a certain depth, then increases the depth and performs the visit again. This particular procedure uses the depth (which is also the number of tableau rules that have been applied) for deciding when to stop at each step. Various other parameters (such as the size of the tableau labeling a node) have been used instead. Reducing search The size of the search tree depends on the number of (children) tableaux that can be generated from a given (parent) one. Reducing the number of such tableaux therefore reduces the required search. A way for reducing this number is to disallow the generation of some tableaux based on their internal structure. An example is the condition of regularity: if a branch contains a literal, using an expansion rule that generates the same literal is useless because the branch containing two copies of the literal would have the same set of formulae as the original one. This expansion can be disallowed because if a closed tableau exists, it can be found without it. This restriction is structural because it can be checked by looking only at the structure of the tableau to expand. Different methods for reducing search disallow the generation of some tableaux on the ground that a closed tableau can still be found by expanding the other ones. These restrictions are called global. As an example of a global restriction, one may employ a rule that specifies which of the open branches is to be expanded. As a result, if a tableau has for example two non-closed branches, the rule specifies which one is to be expanded, disallowing the expansion of the second one. This restriction reduces the search space because one possible choice is now forbidden; completeness is however not harmed, as the second branch will still be expanded if the first one is eventually closed. As an example, a tableau with root , child , and two leaves and can be closed in two ways: applying first to and then to , or vice versa. There is clearly no need to follow both possibilities; one may consider only the case in which is first applied to and disregard the case in which it is first applied to . This is a global restriction because what allows neglecting this second expansion is the presence of the other tableau, where expansion is applied to first and afterwards.
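The search procedures discussed in this section can be sketched generically in Python as follows. This is only a rough illustration under stated assumptions: expansions(t) is assumed to yield the tableaux obtainable from t by one rule application, is_closed(t) to recognise a closed tableau, and allowed(t) to encode a structural or global restriction such as regularity; all three are placeholders standing in for a concrete calculus.

def find_closed_tableau(initial, expansions, is_closed,
                        allowed=lambda t: True, max_depth=20):
    """Return a closed tableau reachable from `initial` within `max_depth`
    rule applications, or None if none is found within that bound."""
    def depth_limited(tableau, limit):
        if is_closed(tableau):
            return tableau
        if limit == 0:
            return None
        for successor in expansions(tableau):
            if not allowed(successor):       # pruned by the chosen restriction
                continue
            found = depth_limited(successor, limit - 1)
            if found is not None:
                return found
        return None

    # Iterative deepening: repeat the bounded depth-first visit with a growing
    # bound, so memory use stays polynomial although shallow nodes are revisited.
    for bound in range(max_depth + 1):
        result = depth_limited(initial, bound)
        if result is not None:
            return result
    return None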
Clause tableaux When applied to sets of clauses (rather than of arbitrary formulae), tableaux methods allow for a number of efficiency improvements. A first-order clause is a formula that does not contain free variables and such that each is a literal. The universal quantifiers are often omitted for clarity, so that for example actually means . Note that, if taken literally, these two formulae are not the same as for satisfiability: rather, the satisfiability is the same as that of . That free variables are universally quantified is not a consequence of the definition of first-order satisfiability; it is rather used as an implicit common assumption when dealing with clauses. The only expansion rules that are applicable to a clause are and ; these two rules can be replaced by their combination without losing completeness. In particular, the following rule corresponds to applying in sequence the rules and of the first-order calculus with unification. where is obtained by replacing every variable with a new one in When the set to be checked for satisfiability is only composed of clauses, this and the unification rules are sufficient to prove unsatisfiability. In other words, the tableau calculus composed of and is complete. Since the clause expansion rule only generates literals and never new clauses, the clauses to which it can be applied are only clauses of the input set. As a result, the clause expansion rule can be further restricted to the case where the clause is in the input set. where is obtained by replacing every variable with a new one in , which is a clause of the input set Since this rule directly exploits the clauses in the input set, there is no need to initialize the tableau to the chain of the input clauses. The initial tableau can therefore be initialized with the single node labeled ; this label is often omitted as implicit. As a result of this further simplification, every node of the tableau (apart from the root) is labeled with a literal. A number of optimizations can be used for clause tableaux. These optimizations are aimed at reducing the number of possible tableaux to be explored when searching for a closed tableau as described in the "Searching for a closed tableau" section above. Connection tableau Connection is a condition on tableaux that forbids expanding a branch using clauses that are unrelated to the literals that are already in the branch. Connection can be defined in two ways: strong connectedness: when expanding a branch, use an input clause only if it contains a literal that can be unified with the negation of the literal in the current leaf; weak connectedness: allow the use of clauses that contain a literal that unifies with the negation of a literal on the branch. Both conditions apply only to branches that consist of more than just the root. The second definition allows for the use of a clause containing a literal that unifies with the negation of a literal in the branch, while the first further constrains that literal to be the one in the leaf of the current branch. If clause expansion is restricted by connectedness (either strong or weak), its application produces a tableau in which a substitution can be applied to one of the new leaves, closing its branch. In particular, this is the leaf containing the literal of the clause that unifies with the negation of a literal in the branch (or the negation of the literal in the parent, in case of strong connection).
Both conditions of connectedness lead to a complete first-order calculus: if a set of clauses is unsatisfiable, it has a closed connected (strongly or weakly) tableau. Such a closed tableau can be found by searching in the space of tableaux as explained in the "Searching for a closed tableau" section. During this search, connectedness eliminates some possible choices of expansion, thus reducing search. In other words, while the tableau in a node of the tree can be in general expanded in several different ways, connection may allow only a few of them, thus reducing the number of resulting tableaux that need to be further expanded. This can be seen in the following (propositional) example. The tableau made of a chain for the set of clauses can be in general expanded using each of the four input clauses, but connection only allows the expansion that uses . This means that the tree of tableaux has four leaves in general but only one if connectedness is imposed: connectedness leaves only one tableau to try to expand, instead of the four to consider in general. In spite of this reduction of choices, the completeness theorem implies that a closed tableau can be found if the set is unsatisfiable. The connectedness conditions, when applied to the propositional (clausal) case, make the resulting calculus non-confluent. As an example, is unsatisfiable, but applying to generates the chain , which is not closed and to which no other expansion rule can be applied without violating either strong or weak connectedness. In the case of weak connectedness, confluence holds provided that the clause used for expanding the root is relevant to unsatisfiability, that is, it is contained in a minimally unsatisfiable subset of the set of clauses. Unfortunately, the problem of checking whether a clause meets this condition is itself a hard problem. In spite of non-confluence, a closed tableau can be found using search, as presented in the "Searching for a closed tableau" section above. While search is made necessary, connectedness reduces the possible choices of expansion, thus making search more efficient. Regular tableaux A tableau is regular if no literal occurs twice in the same branch. Enforcing this condition allows for a reduction of the possible choices of tableau expansion, as the clauses that would generate a non-regular tableau cannot be expanded. These disallowed expansion steps are however useless. If is a branch containing a literal , and is a clause whose expansion violates regularity, then contains . In order to close the tableau, one needs to expand and close, among others, the branch where , where occurs twice. However, the formulae in this branch are exactly the same as the formulae of alone. As a result, the same expansion steps that close also close . This means that expanding was unnecessary; moreover, if contained other literals, its expansion generated other leaves that needed to be closed. In the propositional case, the expansions needed to close these leaves are completely useless; in the first-order case, they may only affect the rest of the tableau because of some unifications; these can however be combined with the substitutions used to close the rest of the tableau. Tableaux for modal logics In a modal logic, a model comprises a set of possible worlds, each one associated with a truth evaluation; an accessibility relation specifies when a world is accessible from another one.
A modal formula may specify not only conditions over a possible world, but also over the ones that are accessible from it. As an example, is true in a world if is true in all worlds that are accessible from it. As for propositional logic, tableaux for modal logics are based on recursively breaking formulae into their basic components. Expanding a modal formula may however require stating conditions over different worlds. As an example, if is true in a world then there exists a world accessible from it where is false. However, one cannot simply add the following rule to the propositional ones. In propositional tableaux all formulae refer to the same truth evaluation, but the precondition of the rule above holds in one world while the consequence holds in another. Not taking this into account would generate incorrect results. For example, formula states that is true in the current world and is false in a world that is accessible from it. Simply applying and the expansion rule above would produce and , but these two formulae should not in general generate a contradiction, as they hold in different worlds. Modal tableaux calculi do contain rules similar to the one above, but include mechanisms to avoid the incorrect interaction of formulae referring to different worlds. Technically, tableaux for modal logics check the satisfiability of a set of formulae: they check whether there exists a model and world such that the formulae in the set are true in that model and world. In the example above, while states the truth of in , the formula states the truth of in some world that is accessible from and which may in general be different from . Tableaux calculi for modal logic take into account that formulae may refer to different worlds. This fact has an important consequence: formulae that hold in a world may imply conditions over different successors of that world. Unsatisfiability may then be proved from the subset of formulae referring to a single successor. This holds if a world may have more than one successor, which is true for most modal logics. If this is the case, a formula like is true if a successor where holds exists and a successor where holds exists. Conversely, if one can show unsatisfiability of in an arbitrary successor, the formula is proved unsatisfiable without checking for worlds where holds. At the same time, if one can show unsatisfiability of , there is no need to check . As a result, while there are two possible ways to expand , one of these two ways is always sufficient to prove unsatisfiability if the formula is unsatisfiable. For example, one may expand the tableau by considering an arbitrary world where holds. If this expansion leads to unsatisfiability, the original formula is unsatisfiable. However, it is also possible that unsatisfiability cannot be proved this way, and that the world where holds should have been considered instead. As a result, one can always prove unsatisfiability by expanding either only or only; however, if the wrong choice is made the resulting tableau may not be closed. Expanding either subformula leads to tableau calculi that are complete but not proof-confluent. Searching as described in the "Searching for a closed tableau" section may therefore be necessary. Depending on whether the precondition and consequence of a tableau expansion rule refer to the same world or not, the rule is called static or transactional.
While rules for propositional connectives are all static, not all rules for modal connectives are transactional: for example, in every modal logic including axiom T, it holds that implies in the same world. As a result, the relative (modal) tableau expansion rule is static, as both its precondition and consequence refer to the same world. Formula-deleting tableau A method for avoiding formulae referring to different worlds interacting in the wrong way is to make sure that all formulae of a branch refer to the same world. This condition is initially true as all formulae in the set to be checked for consistency are assumed to refer to the same world. When expanding a branch, two situations are possible: either the new formulae refer to the same world as the others in the branch, or not. In the first case, the rule is applied normally. In the second case, all formulae of the branch that do not also hold in the new world are deleted from the branch, and possibly added to all other branches that are still relative to the old world. As an example, in S5 every formula that is true in a world is also true in all accessible worlds (that is, in all accessible worlds both and are true). Therefore, when applying , whose consequence holds in a different world, one deletes all formulae from the branch, but can keep all formulae , as these hold in the new world as well. In order to retain completeness, the deleted formulae are then added to all other branches that still refer to the old world. World-labeled tableau A different mechanism for ensuring the correct interaction between formulae referring to different worlds is to switch from formulae to labeled formulae: instead of writing , one would write to make it explicit that holds in world . All propositional expansion rules are adapted to this variant by stating that they all refer to formulae with the same world label. For example, generates two nodes labeled with and ; a branch is closed only if it contains two opposite literals of the same world, like and ; no closure is generated if the two world labels are different, like in and . A modal expansion rule may have a consequence that refers to different worlds. For example, the rule for would be written as follows The precondition and consequent of this rule refer to worlds and , respectively. The various calculi use different methods for keeping track of the accessibility of the worlds used as labels. Some include pseudo-formulae like to denote that is accessible from . Some others use sequences of integers as world labels, this notation implicitly representing the accessibility relation (for example, is accessible from ). Set-labeling tableaux The problem of interaction between formulae holding in different worlds can be overcome by using set-labeling tableaux. These are trees whose nodes are labeled with sets of formulae; the expansion rules explain how to attach new nodes to a leaf, based only on the label of the leaf (and not on the label of other nodes in the branch). Tableaux for modal logics are used to verify the satisfiability of a set of modal formulae in a given modal logic. Given a set of formulae , they check the existence of a model and a world such that . The expansion rules depend on the particular modal logic used.
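For reference, the Kripke-style semantics that these satisfiability checks are defined against can be illustrated with a small Python sketch that evaluates a modal formula at a world of a finite model; the tuple-based encoding of formulae and the dictionaries used for the accessibility relation and the valuation are assumptions made for this example only.

def holds(formula, world, access, valuation):
    """True iff `formula` is true at `world` in the model (access, valuation)."""
    op = formula[0]
    if op == "atom":
        return formula[1] in valuation[world]
    if op == "not":
        return not holds(formula[1], world, access, valuation)
    if op == "and":
        return holds(formula[1], world, access, valuation) and holds(formula[2], world, access, valuation)
    if op == "or":
        return holds(formula[1], world, access, valuation) or holds(formula[2], world, access, valuation)
    if op == "box":   # true at `world` iff true at every accessible world
        return all(holds(formula[1], w, access, valuation) for w in access.get(world, ()))
    if op == "dia":   # true at `world` iff true at some accessible world
        return any(holds(formula[1], w, access, valuation) for w in access.get(world, ()))
    raise ValueError("unknown connective: %r" % (op,))

# Two worlds: w2 is accessible from w1; the atom p holds only in w2.
access = {"w1": ["w2"], "w2": []}
valuation = {"w1": set(), "w2": {"p"}}
print(holds(("box", ("atom", "p")), "w1", access, valuation))  # True: p holds in every successor of w1
print(holds(("box", ("atom", "p")), "w2", access, valuation))  # True vacuously: w2 has no successors
print(holds(("dia", ("atom", "p")), "w2", access, valuation))  # False: no accessible world where p holds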
A tableau system for the basic modal logic K can be obtained by adding to the propositional tableau rules the following one: Intuitively, the precondition of this rule expresses the truth of all formulae at all accessible worlds, and truth of at some accessible worlds. The consequence of this rule is a formula that must be true at one of those worlds where is true. More technically, modal tableaux methods check the existence of a model and a world that make a set of formulae true. If are true in , there must be a world that is accessible from and that makes true. This rule therefore amounts to deriving a set of formulae that must be satisfied in such . While the preconditions are assumed satisfied by , the consequences are assumed satisfied in : same model but possibly different worlds. Set-labeled tableaux do not explicitly keep track of the world where each formula is assumed true: two nodes may or may not refer to the same world. However, the formulae labeling any given node are assumed true at the same world. As a result of the possibly different worlds where formulae are assumed true, a formula in a node is not automatically valid in all its descendants, as every application of the modal rule corresponds to a move from a world to another one. This condition is automatically captured by set-labeling tableaux, as expansion rules are based only on the leaf where they are applied and not on its ancestors. Notably, does not directly extend to multiple negated boxed formulae such as in : while there exists an accessible world where is false and one in which is false, these two worlds are not necessarily the same. Unlike the propositional rules, states conditions over all its preconditions. For example, it cannot be applied to a node labeled by ; while this set is inconsistent and this could be easily proved by applying , this rule cannot be applied because of formula , which is not even relevant to the inconsistency. Removal of such formulae is made possible by the rule: The addition of this rule (thinning rule) makes the resulting calculus non-confluent: a tableau for an inconsistent set may be impossible to close, even if a closed tableau for the same set exists. Rule is non-deterministic: the set of formulae to be removed (or to be kept) can be chosen arbitrarily; this creates the problem of choosing a set of formulae to discard that is not so large it makes the resulting set satisfiable and not so small it makes the necessary expansion rules inapplicable. Having a large number of possible choices makes the problem of searching for a closed tableau harder. This non-determinism can be avoided by restricting the usage of so that it is only applied before a modal expansion rule, and so that it only removes the formulae that make that other rule inapplicable. This condition can also be formulated by merging the two rules into a single one. The resulting rule produces the same result as the old one, but implicitly discards all formulae that made the old rule inapplicable. This mechanism for removing has been proved to preserve completeness for many modal logics. Axiom T expresses reflexivity of the accessibility relation: every world is accessible from itself. The corresponding tableau expansion rule is: This rule relates conditions over the same world: if is true in a world, by reflexivity is also true in the same world. This rule is static, not transactional, as both its precondition and consequent refer to the same world.
This rule copies from the precondition to the consequent, in spite of this formula having been "used" to generate . This is correct, as the considered world is the same, so also holds there. This "copying" is necessary in some cases. It is for example necessary to prove the inconsistency of : the only applicable rules are in order , from which one is blocked if is not copied. Auxiliary tableaux A different method for dealing with formulae holding in alternate worlds is to start a different tableau for each new world that is introduced in the tableau. For example, implies that is false in an accessible world, so one starts a new tableau rooted by . This new tableau is attached to the node of the original tableau where the expansion rule has been applied; a closure of this tableau immediately generates a closure of all branches where that node is, regardless of whether the same node is associated with other auxiliary tableaux. The expansion rules for the auxiliary tableaux are the same as for the original one; therefore, an auxiliary tableau can in turn have other (sub-)auxiliary tableaux. Global assumptions The above modal tableaux establish the consistency of a set of formulae, and can be used for solving the local logical consequence problem. This is the problem of telling whether, for each model , if is true in a world , then is also true in the same world. This is the same as checking whether is true in a world of a model, under the assumption that is also true in the same world of the same model. A related problem is the global consequence problem, where the assumption is that a formula (or set of formulae) is true in all possible worlds of the model. The problem is that of checking whether, in all models where is true in all worlds, is also true in all worlds. Local and global assumptions differ on models where the assumed formula is true in some worlds but not in others. As an example, entails globally but not locally. Local entailment does not hold in a model consisting of two worlds making and true, respectively, and where the second is accessible from the first; in the first world, the assumptions are true but is false. This counterexample works because can be assumed true in a world and false in another one. If however the same assumption is considered global, is not allowed in any world of the model. These two problems can be combined, so that one can check whether is a local consequence of under the global assumption . Tableaux calculi can deal with a global assumption by a rule allowing its addition to every node, regardless of the world it refers to. Notations The following conventions are sometimes used. Uniform notation When writing tableaux expansion rules, formulae are often denoted using a convention, so that for example is always considered to be . The following table provides the notation for formulae in propositional, first-order, and modal logic. Each label in the first column is taken to be either formula in the other columns. An overlined formula such as indicates that is the negation of whatever formula appears in its place, so that for example in formula the subformula is the negation of . Since every label indicates many equivalent formulae, this notation allows writing a single rule for all these equivalent formulae. For example, the conjunction expansion rule is formulated as: Signed formulae A formula in a tableau is assumed true. Signed tableaux allow stating that a formula is false.
This is generally achieved by adding a label to each formula, where the label T indicates formulae assumed true and F those assumed false. A different but equivalent notation is to write formulae that are assumed true at the left of the node and formulae assumed false at its right. See also Resolution (logic) Notes References Reprinted in External links TABLEAUX: an annual international conference on automated reasoning with analytic tableaux and related methods JAR: Journal of Automated Reasoning The tableaux package: an interactive prover for propositional and first-order logic using tableaux Tree proof generator: another interactive prover for propositional and first-order logic using tableaux LoTREC: a generic tableaux-based prover for modal logics from IRIT/Toulouse University Logical calculi Automated theorem proving Methods of proof
Method of analytic tableaux
[ "Mathematics" ]
11,782
[ "Automated theorem proving", "Proof theory", "Mathematical logic", "Methods of proof", "Computational mathematics", "Logical calculi" ]
1,027,252
https://en.wikipedia.org/wiki/Windows%20Glyph%20List%204
Windows Glyph List 4, or more commonly WGL4 for short, also known as the Pan-European character set, is a character repertoire on Microsoft operating systems comprising 657 Unicode characters, two of them for private use. Its purpose is to provide an implementation guideline for producers of fonts for the representation of European natural languages; fonts that provide glyphs for the entire set of characters can claim WGL4 compliance and thus can expect to be compatible with a wide range of software. For some time, WGL4 characters were the only ones guaranteed to display correctly on Microsoft Windows. More recent versions of Windows display far more glyphs. Because many fonts are designed to fulfill the WGL4 set, this set of characters is likely to work (display as other than replacement glyphs) on many computer systems. For example, all the non-private-use characters in the table below are likely to display properly, compared to the many missing characters that may be seen in other articles about Unicode. Repertoire The repertoire, defined by Microsoft, encompasses all the characters found in Windows code pages 1252 (Windows Western), 1250 (Windows Central European), 1251 (Windows Cyrillic), 1253 (Windows Greek), 1254 (Windows Turkish), and 1257 (Windows Baltic), as well as characters from DOS code page 437. It does not cover the combining diacritics used by Vietnamese-related code page 1258, the Thai letters used in code page 874, Hebrew and Arabic letters covered by code pages 1255 and 1256, or the ideographic characters used by code pages 932, 936, 949 and 950. It also does not cover the Romanian letters Ș, ș, Ț, and ț (U+0218–B), which were added to several of Microsoft's fonts for Windows Vista (long after the WGL4 repertoire was originally defined). In version 1.5 of the OpenType Specification (May 2008) four Cyrillic characters were added to the WGL4 character set: Ѐ (U+0400), Ѝ (U+040D), ѐ (U+0450) and ѝ (U+045D). Character table Legend See also Adobe Glyph List World Glyph Set (W1G) Multilingual European Subsets MES-1 and MES-2 DIN 91379 Unicode subset for Europe OpenType: WGL4 was an appendix of the OpenType specification until 1.8.4 in November 2020 References External links https://www.ibm.com/docs/en/zos/2.3.0?topic=collection-worldtype-fonts Digital typography Microsoft Windows multimedia technology Character encoding Glyphs Articles with unsupported Private Use Area characters
Windows Glyph List 4
[ "Technology" ]
582
[ "Natural language and computing", "Character encoding" ]
1,027,276
https://en.wikipedia.org/wiki/ASCI%20Blue%20Pacific
ASCI Blue Pacific was a supercomputer installed at the Lawrence Livermore National Laboratory (LLNL) in Livermore, CA at the end of . It was a collaboration between IBM and LLNL. It was an IBM RS/6000 SP massively parallel processing system. It contained 5,856 PowerPC 604e microprocessors. Its theoretical top performance was 3.9 teraflops. It was built as a stage of the Accelerated Strategic Computing Initiative (ASCI) started by the U.S. Department of Energy and the National Nuclear Security Administration to build a simulator to replace live nuclear weapon testing following the moratorium on testing started by President George H. W. Bush in 1992 and extended by Bill Clinton in 1993. External links One-of-a-kind computers Lawrence Livermore National Laboratory IBM supercomputers
ASCI Blue Pacific
[ "Technology" ]
175
[ "Computing stubs", "Computer hardware stubs" ]
1,027,403
https://en.wikipedia.org/wiki/G-code
G-code (also RS-274) is the most widely used computer numerical control (CNC) and 3D printing programming language. It is used mainly in computer-aided manufacturing to control automated machine tools, as well as for 3D-printer slicer applications. The G stands for geometry. G-code has many variants. G-code instructions are provided to a machine controller (industrial computer) that tells the motors where to move, how fast to move, and what path to follow. The two most common situations are that, within a machine tool such as a lathe or mill, a cutting tool is moved according to these instructions through a toolpath, cutting away material to leave only the finished workpiece, and/or an unfinished workpiece is precisely positioned in any of up to nine axes around the three dimensions relative to a toolpath; either or both can move relative to each other. The same concept also extends to noncutting tools such as forming or burnishing tools, photoplotting, additive methods such as 3D printing, and measuring instruments. History The first implementation of a numerical control programming language was developed at the MIT Servomechanisms Laboratory in the 1950s. In the decades that followed, many implementations were developed by numerous organizations, both commercial and noncommercial. Elements of G-code had often been used in these implementations. The first standardized version of G-code used in the United States, RS-274, was published in 1963 by the Electronic Industries Alliance (EIA; then known as Electronic Industries Association). In 1974, EIA approved RS-274-C, which merged RS-273 (variable block for positioning and straight cut) and RS-274-B (variable block for contouring and contouring/positioning). A final revision of RS-274 was approved in 1979, as RS-274-D. In other countries, the standard ISO 6983 (finalized in 1982) is often used, but many European countries use other standards. For example, DIN 66025 is used in Germany, and PN-73M-55256 and PN-93/M-55251 were formerly used in Poland. During the 1970s through 1990s, many CNC machine tool builders attempted to overcome compatibility difficulties by standardizing on machine tool controllers built by Fanuc. Siemens was another market dominator in CNC controls, especially in Europe. In the 2010s, controller differences and incompatibility were mitigated with the widespread adoption of CAD/CAM applications that were capable of outputting machine operations in the appropriate G-code for a specific machine through a software tool called a post-processor (sometimes shortened to just a "post"). Syntax G-code began as a limited language that lacked constructs such as loops, conditional operators, and programmer-declared variables with natural-word names (or the expressions in which to use them). It was unable to encode logic but was just a way to "connect the dots" where the programmer figured out many of the dots' locations longhand. The latest implementations of G-code include macro language capabilities somewhat closer to a high-level programming language. Additionally, all primary manufacturers (e.g., Fanuc, Siemens, Heidenhain) provide access to programmable logic controller (PLC) data, such as axis positioning data and tool data, via variables used by NC programs. These constructs make it easier to develop automation applications.
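As a rough illustration of the word-address syntax described above, the following Python sketch emits a short milling-style program. The codes used (G21, G90, G0, G1, M30) are common to many controllers, but dialects vary, so a real post-processor would be written against the documentation of a specific machine; the coordinates and feed rates here are arbitrary example values.

def linear_move(x, y, z, feed=None, rapid=False):
    """Format one motion block; G0 is a rapid move, G1 a feed move."""
    word = "G0" if rapid else "G1"
    block = "%s X%.3f Y%.3f Z%.3f" % (word, x, y, z)
    if feed is not None and not rapid:
        block += " F%.1f" % feed
    return block

program = [
    "G21",                                   # millimetre units
    "G90",                                   # absolute coordinates
    linear_move(0, 0, 5, rapid=True),        # rapid to a safe height above the origin
    linear_move(0, 0, -1, feed=100),         # plunge into the material
    linear_move(50, 0, -1, feed=300),        # cut along X
    linear_move(50, 50, -1, feed=300),       # cut along Y
    linear_move(0, 0, 5, rapid=True),        # retract
    "M30",                                   # end of program
]
print("\n".join(program))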
Extensions and variations Extensions and variations have been added independently by control manufacturers and machine tool manufacturers, and operators of a specific controller must be aware of the differences between each manufacturer's product. One standardized version of G-code, known as BCL (Binary Cutter Language), is used only on very few machines. Developed at MIT, BCL was designed to control CNC machines in terms of straight lines and arcs. Some CNC machines use "conversational" programming, which is a wizard-like programming mode that either hides G-code or completely bypasses the use of G-code. Some popular examples are Okuma's Advanced One Touch (AOT), Southwestern Industries' ProtoTRAK, Mazak's Mazatrol, Hurco's Ultimax and Winmax, Haas' Intuitive Programming System (IPS), and Mori Seiki's CAPS conversational software. See also Canned cycle Direct Numerical Control LinuxCNC List of computer-aided manufacturing software References Bibliography External links CNC G-Code and M-Code Programming http://museum.mit.edu/150/86 Has several links (including history of MIT Servo Lab) Complete list of G-code used by most 3D printers at reprap.org Fanuc and Haas G-code Reference Fanuc and Haas G-code Tutorial Haas Milling Manual G Code For Lathe & Milling M Code for Lathe & Milling Computer-aided engineering Domain-specific programming languages Encodings Metalworking
G-code
[ "Engineering" ]
1,013
[ "Construction", "Industrial engineering", "Computer-aided engineering" ]
1,027,466
https://en.wikipedia.org/wiki/Xprize%20Foundation
XPRIZE Foundation is a non-profit organization that designs and hosts public competitions intended to encourage technological development. The XPRIZE mission is to bring about "radical breakthroughs for the benefit of humanity" through incentivized competition. It aims to motivate individuals, companies, and organizations to develop ideas and technologies. The Ansari X Prize relating to spacecraft development was awarded in 2004, intended to inspire research and development into technology for space exploration. Background The first XPRIZE, the Ansari XPRIZE, was inspired by the Orteig Prize, a $25,000 prize offered in 1919 by French hotelier Raymond Orteig for the first nonstop flight between New York City and Paris. In 1927, underdog Charles Lindbergh won the prize in a modified single-engine Ryan aircraft called the Spirit of St. Louis. In total, nine teams spent $400,000 in pursuit of the Orteig Prize. In 1996, entrepreneur Peter Diamandis offered a $10-million prize to the first privately financed team that could build and fly a three-passenger vehicle 100 kilometers into space twice within two weeks. The contest, later titled the Ansari XPRIZE for Suborbital Spaceflight, motivated 26 teams from seven nations to invest more than $100 million in pursuit of the $10 million purse. On October 4, 2004, the Ansari XPRIZE was won by Mojave Aerospace Ventures, who successfully completed the contest in their spacecraft SpaceShipOne. The prize was awarded in a ceremony at the Saint Louis Science Center in St. Louis, Missouri. The foundation has also created the XPRIZE Cup rocket challenge competition. XPRIZE unifying principles XPRIZES are monetary rewards to incentivize three primary goals: Attract investments from outside the sector that take new approaches to difficult problems. Create significant results that are real and meaningful. Competitions have measurable goals, and are created to promote adoption of innovation. Cross national and disciplinary boundaries to encourage teams around the world to invest the intellectual and financial capital required to solve difficult challenges. Other organizations such as the Nobel Prize committee award prizes and financial rewards to individuals or organizations that produce novel advances in science, medicine and technology. One difference between the XPRIZE foundation and other similar organizations is the awarding of prizes based on the first to achieve objective 'finish line' requirements rather than a selection committee discussing the relative merits of different endeavors. For instance, the Archon Genomics XPRIZE target was to sequence 100 human genomes in 10 days or less, with less than one error per 100,000 DNA base pairs, covering 98% of the genome and costing less than $10,000 per genome (this prize was canceled because it was outpaced by innovation). The prize can increase attention to endeavors that otherwise might not receive much publicity. XPRIZE is currently developing new prizes in Exploration (Space and Oceans), Life Sciences, Energy & Environment, Education and Global Development. The prizes will aim to help improve lives, create equity of opportunity and stimulate new, important discoveries. Prizes and events overseen Past contests 1996–2004 Ansari XPRIZE for Suborbital Spaceflight The Ansari XPRIZE for Suborbital Spaceflight was the first prize from the foundation. It successfully challenged teams to build private spaceships capable of carrying three people and fly two times within two weeks to open the space frontier. 
The first part of the Ansari XPRIZE requirements was fulfilled by Mike Melvill on September 29, 2004, on SpaceShipOne, a spacecraft designed by Burt Rutan and financed by Paul Allen, co-founder of Microsoft. On that ship, Melvill broke the 100-kilometer (62 mi) mark, internationally recognized as the boundary of outer space. Brian Binnie completed the second part of the requirements on October 4, 2004, winning the prize. As a result, US$10 million was awarded to the winner, but more than $100 million was invested in new technologies in pursuit of the prize. Awarding this first prize gave XPRIZE as much publicity as the winners themselves. After the 2004 success there was ample media coverage to afford both Scaled Composites and XPRIZE additional support for them to expand and continue to pursue their aims. Following this early success, several other XPRIZEs were announced. The Ansari XPRIZE won the Space Foundation's Douglas S. Morrow Public Outreach Award in 2005. The award is given annually to an individual or organization that has made significant contributions to public awareness of space programs. 2007–2010 Progressive Insurance Automotive XPRIZE The goal of the Progressive Insurance Automotive XPRIZE was to design, build and race super-efficient vehicles that achieve 100 MPGe (2.35 liter/100 kilometer) efficiency, produce less than 200 grams/mile well-to-wheel CO2 equivalent emissions, and could be manufactured for the mass market. The winners of the competition were announced on September 16, 2010. Team Edison2 won the $5 million Mainstream competition with its four-passenger Very Light Car, obtaining 102.5 MPGe running on E85 fuel. Team Li-Ion Motors won the $2.5 million Alternative Side-by-Side competition with their aerodynamic Wave-II electric vehicle achieving 187 MPGe. Team X-Tracer Switzerland won the $2.5 million Alternative Tandem competition with their 205.3 MPGe faired electric motorcycle. 2010–2011 Wendy Schmidt Oil Cleanup XCHALLENGE The Wendy Schmidt Oil Cleanup XCHALLENGE was introduced on July 29, 2010. The $1 million prize had a goal to inspire a new generation of innovative solutions that will speed the pace of cleaning up seawater surface oil resulting from spillage from ocean platforms, tankers, and other sources. The team of Elastec/American Marine won the challenge by developing a device that skims oil off water three times faster than previously existing technology. 2006–2009 Northrop Grumman Lunar Lander XCHALLENGE The Northrop Grumman Lunar Lander XCHALLENGE (NGLLXPC) was a competition (co-hosted by NASA) to build precise, efficient small rocket systems. It was introduced in 2006 and the US$1 million top prize was awarded on November 5, 2009 to Masten Space Systems, led by David Masten, while Armadillo Aerospace, led by id Software founder John Carmack, took home the second place prize of US$500,000, plus an additional $500,000 in 2008. 2012–2014 The Nokia Sensing XCHALLENGE The Nokia Sensing XCHALLENGE goal is accelerating the use of sensors and sensing technology to tackle health care problems and find ways for people to monitor and maintain their personal well-being. It was composed of two distinct Challenges held in 2013 and 2014. It was announced in 2012 and 12 finalists announced in 2013. On November 11, 2014, the winner was named to be team DMI, led by Eugene Y. Chan, MD, whose entry was the rHEALTH technology which used lasers and nanostrips to perform vast multiplexing on samples.
In this competition, prize purses totaling $2.25 million were awarded. 2013–2015 The Wendy Schmidt Ocean Health XPRIZE The Wendy Schmidt Ocean Health XPRIZE is a $2 million competition to improve our understanding of ocean acidification. On July 20, 2015, the winners of the challenge were announced. 2011–2017 Qualcomm Tricorder XPRIZE The Qualcomm Tricorder XPRIZE was announced on May 10, 2011, and is sponsored by Qualcomm Foundation. It was officially launched on January 10, 2012. The $10 million prize is awarded for creating a mobile device that can "diagnose patients better than or equal to a panel of board certified physicians". The name is taken from the tricorder device in Star Trek which can be used to instantly diagnose ailments. No team met all the requirements needed to win the full prize purse. Reduced prizes were made to the strongest performers (US$2.6 million for Final Frontier Medical Devices, US$1 million for Dynamical Biomarkers, and $100,000 for Cloud DX, named "Bold Epic Innovator"). For the first time at any XPRIZE, the leftover funds from the main prize purse were diverted for consumer testing for commercialization ($3.8 million) and for adapting tricorders for use in hospitals in developing countries ($1.6 million). 2016–2018 Anu & Naveen Jain Women's Safety XPRIZE The Anu & Naveen Jain Women's Safety XPRIZE was launched on October 24, 2016, and has a $1 million purse. The goal for competing teams is to develop a safety device for women that can autonomously and inconspicuously trigger an emergency alert while transmitting information to a network of community responders. On June 7, 2018, Leaf Wearables received the grand prize winner of the $1M. 2016–2018 Water Abundance XPRIZE On October 20, 2018, the XPRIZE Foundation awarded The Water Abundance XPRIZE, which launched on October 24, 2016, with a purse of $1.75 million provided by the Tata Group and Australian Aid, to the Skysource/Skywater Alliance based in Venice, California, who received a grand prize of $1.5 million. An additional award of $150,000 went to the second place team, JMCC WING, based in South Point, Hawaii, to acknowledge the team's ingenuity in developing a unique technological approach. Over a 24-hour period, the Skysource/Skywater Alliance successfully extracted over 2,000 liters of water using only renewable energy, at a cost of US$0.02 per liter. The team, led by architect David Hertz, intends to use the award to productize the system to address water scarcity in the developing world. 2014–2019 The Global Learning XPRIZE The Global Learning XPRIZE, launched in September 2014, is a $15-million prize to create mobile apps to improve reading, writing, and arithmetic in developing nations. Each application will be developed during an 18-month period and the top five teams will receive $1 million each, with each of the winning apps being made available under an open-source license. The finalist of the group, that then develops an app producing the highest performance gains, will win an additional $10 million top prize. On May 15, 2019, the grand prize winners were announced; there was a tie between Kitkit School from South Korea and the United States, and one billion from Kenya and the United Kingdom. 2015–2019 Shell Ocean Discovery XPRIZE On December 14, 2015, XPRIZE Founder Peter Diamandis announced the launch of a new $7 million prize that will be a three-year global competition that challenges researchers to build better technologies for mapping Earth's seafloor. 
On May 31, 2019, the grand prize winner, receiving a total of $4M, was GEBCO-NF Alumni, an international team based in the United States, while KUROSHIO, from Japan, claimed $1M as the runner-up. GEBCO-NF Alumni used the unmanned boat Maxlimer to autonomously map of seafloor. 2015–2019 Adult Literacy XPRIZE The challenge set was to find or create solutions for improving the literacy proficiency of adults in reading within a 12-month period. The challenge was announced on June 8, 2015, and awarded $7 million by Barbara Bush Foundation for Family Literacy and the Dollar General Literacy Foundation. The winners, Learning Upgrade and People ForWords were announced February 7, 2019. 2020 Next-Gen Mask Challenge The $1 million Next-Gen Mask Prize is open to only 16–24 year olds and was sponsored by Marc Benioff and Jim Cramer, the host of Mad Money on CNBC. On December 23, 2020, The Luminosity Lab was named the winning team with their anti-fog mask design, taking home $500,000. 2020–2021 Pandemic Response XPRIZE This was a four-month challenge focused on the development of AI-driven systems to predict COVID-19 infection rates and to prescribe intervention plans. The $500,000 award was funded by Cognizant. The winners, VALENCIA IA4COVID19 from Spain and JSI vs COVID from Slovenia were announced on March 9, 2021. 2018–2022 ANA Avatar XPRIZE The $10M ANA Avatar XPRIZE aimed to create avatar systems that can transport human presence to remote locations in real time. The participants of this competition developed robotic systems that allow operators to see, hear, and interact with a remote environment in a way that feels as if they are truly there. On the other hand, people in the remote environment were given the impression that the operator was present inside the avatar robot. At the competition finals, held in November 2022 in Long Beach, CA, USA, the avatar systems were evaluated on their support for remotely interacting with humans, exploring new environments, and employing specialized skills. The winners of the competition were: Team NimbRo, University of Bonn, Germany won the grand prize of $5,000,000 Pollen Robotics, France won $2,000,000 Team Northeastern, Northeastern University, USA won $1,000,000 Canceled contests 2006–2013 Archon Genomics XPRIZE The Archon Genomics XPRIZE, the second XPRIZE to be offered by the foundation, was announced on October 4, 2006. The goal of the Archon Genomics XPRIZE was to greatly reduce the cost and increase the speed of human genome sequencing to create a new era of personalized, predictive, and preventive medicine, eventually transforming medical care from reactive to proactive. The $10 million prize purse was promised to the first team that can build a device and use it to sequence 100 human genomes within 10 days or less, with an accuracy of no more than one error in every 100,000 bases sequenced, with sequences accurately covering at least 98% of the genome, and at a recurring cost of no more than $1,000 per genome. If more than one team attempted the competition at the same time, and more than one team fulfilled all the criteria, then teams would have been ranked according to the time of completion. No more than three teams would have been ranked and would have shared the purse in the following manner: $7.5 million to the winner and $2.5 million to the second place team if two teams were successful, or $7 million, $2 million and $1 million if three teams are successful. 
Actual competition events were originally scheduled to occur twice a year, with all eligible teams given the opportunity to attempt, starting at precisely the same time as the other teams. This was changed to a single competition scheduled for September 5, 2013, to October 1, 2013, which was canceled on August 22, 2013. The CEO articulated the rationale for the change, "companies can do this for less than $5,000 per genome, in a few days or less – and are moving quickly towards the goals we set for the prize. For this reason, we have decided to cancel an XPRIZE for the first time ever." A public debate concerning the validity and potential implications of the cancellation was published March 27, 2014. 2007–2018 Google Lunar XPRIZE The Google Lunar XPRIZE was introduced on September 13, 2007. The goal of the prize was similar to that of the Ansari XPRIZE, to inspire a new generation of private investment in space exploration and technology. The challenge called for teams to compete in successfully launching, landing, and operating a rover on the lunar surface. The prize would award $20 million to the first team to land a rover on the Moon that successfully roved more than 500 meters and transmitted back high-definition images and video. There was a $5 million second prize, as well as $5 million in potential bonus prizes for extra features such as roving long distances (greater than 5,000 meters), capturing images of man-made objects on the Moon, or surviving a lunar night. On January 23, 2018, the prize ended when no team could schedule, confirm, and pay for a launch attempt. The XPRIZE Foundation announced that "no team would be able to make a launch attempt to reach the Moon by the March 31, 2018 deadline... and the US $30 million Google Lunar XPRIZE will go unclaimed." 2019-2024 Rainforest XPRIZE On November 19, 2019, the $10 million Rainforest XPrize was announced. Registration opened in February 2020 and the first round will began in September 2020. On November 18, 2024, the winning teams were announced. Limelight Rainforest won the $5 million first place award, Map of Life Rapid Assessments won the $2 million second place award, the Brazilian Team won the $1 million third place award, and a special award of $250,000 for integration of technology and outreach was awarded to ETH BiodivX. Active contests 2014 IBM Watson A.I. XPRIZE The A.I. XPRIZE was announced as having the aim to use an artificial intelligence system to deliver a compelling TED talk. Diamandis hopes to contrast the benevolent value of AI against the dystopian point of view that sometimes enter AI conversations. The winning team of the contest, which is scheduled for 2020, will be determined by the audience. 2015 NRG Cosia Carbon XPRIZE On September 29, 2015, Peter Diamandis, chairman and CEO of X Prize, announced the launch of a $20 million prize for a 4.5-year competition on testing technologies that converts CO2 into products with the highest net value to reduce carbon dioxide emissions of either coal or a natural gas power plant. Round three began in April 2018 as the 27 semifinalists were cut down to ten finalists; each is receiving an equal share of $5 million milestone prize money. Five teams will compete at a coal-fired power plant in Gillette, Wyoming. The remaining five teams will compete at a natural gas-fired power plant in Alberta, Canada. In February 2020 this operational round will conclude and winners will be announced the following month. 
A delay occurred, and in April 2021, the winners were announced: CarbonCure Technologies (Canada) and CarbonBuilt (United States). 2020 Rapid Covid Testing XPRIZE Rapid Covid Testing is a $6 million, six-month competition to develop faster, cheaper, and easier to use COVID-19 testing methods at scale. 2020 Feed the next billion XPRIZE In 2020, the XPRIZE "Feed the next billion" challenge was launched as a $15 million 3-year competition with the goal of developing authentic chicken breast or fish filet alternatives, made from non-animal based ingredients. The challenge is currently in the final round, with 6 teams competing for the finals in July 2024. 2021 Gigaton Scale Carbon Removal Funded by Elon Musk and the Musk Foundation, the $100 million carbon removal competition is the so far largest incentive prize in history. It is aimed "to inspire and help scale efficient solutions to collectively achieve the 10 gigaton per year carbon removal target by 2050, to help fight climate change and restore the Earth’s carbon balance". In April 2022, XPRIZE and the Musk Foundation announced that in celebration of Earth Day, 15 teams had been designated as milestone winners in the $100 million XPRIZE carbon removal competition. The milestone winners have received $1 million each, with the overall winners to be awarded $80 million in 2025. 2023 XPRIZE Healthspan In November 2023, XPRIZE announced the largest prize to date of $101 million for medical interventions targeting the biology of aging that show a restoration of 10 or more years of function in muscle, cognitive, and immune clinical endpoints. Winners are planned to be announced at the end of 2030. See also DARPA Grand Challenge Elevator:2010 Global Security Challenge H-Prize Hutter Prize Inducement prize contest L Prize Methuselah prize Orteig Prize References External links Non-profit organizations based in California Scientific research foundations in the United States Challenge awards Transhumanism 1995 establishments in California Organizations based in Culver City, California
Xprize Foundation
[ "Technology", "Engineering", "Biology" ]
4,131
[ "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
1,027,467
https://en.wikipedia.org/wiki/Reaction%20control%20system
A reaction control system (RCS) is a spacecraft system that uses thrusters to provide attitude control and translation. Alternatively, reaction wheels can be used for attitude control. Use of diverted engine thrust to provide stable attitude control of a short-or-vertical takeoff and landing aircraft below conventional winged flight speeds, such as with the Harrier "jump jet", may also be referred to as a reaction control system. Reaction control systems are capable of providing small amounts of thrust in any desired direction or combination of directions. An RCS is also capable of providing torque to allow control of rotation (roll, pitch, and yaw). Reaction control systems often use combinations of large and small (vernier) thrusters, to allow different levels of response. Uses Spacecraft reaction control systems are used for: attitude control during different stages of a mission; station keeping in orbit; close maneuvering during docking procedures; control of orientation, or "pointing the nose" of the craft; a backup means of deorbiting; ullage motors to prime the fuel system for a main engine burn. Because spacecraft only contain a finite amount of fuel and there is little chance to refill them, alternative reaction control systems have been developed so that fuel can be conserved. For stationkeeping, some spacecraft (particularly those in geosynchronous orbit) use high-specific impulse engines such as arcjets, ion thrusters, or Hall effect thrusters. To control orientation, a few spacecraft, including the ISS, use momentum wheels which spin to control rotational rates on the vehicle. Location of thrusters on spacecraft The Mercury space capsule and Gemini reentry module both used groupings of nozzles to provide attitude control. The thrusters were located off their center of mass, thus providing a torque to rotate the capsule. The Gemini capsule was also capable of adjusting its reentry course by rolling, which directed its off-center lifting force. The Mercury thrusters used a hydrogen peroxide monopropellant which turned to steam when forced through a tungsten screen, and the Gemini thrusters used hypergolic mono-methyl hydrazine fuel oxidized with nitrogen tetroxide. The Gemini spacecraft was also equipped with a hypergolic Orbit Attitude and Maneuvering System, which made it the first crewed spacecraft with translation as well as rotation capability. In-orbit attitude control was achieved by firing pairs of eight thrusters located around the circumference of its adapter module at the extreme aft end. Lateral translation control was provided by four thrusters around the circumference at the forward end of the adaptor module (close to the spacecraft's center of mass). Two forward-pointing thrusters at the same location, provided aft translation, and two thrusters located in the aft end of the adapter module provided forward thrust, which could be used to change the craft's orbit. The Gemini reentry module also had a separate Reentry Control System of sixteen thrusters located at the base of its nose, to provide rotational control during reentry. The Apollo Command Module had a set of twelve hypergolic thrusters for attitude control, and directional reentry control similar to Gemini. The Apollo Service Module and Lunar Module each had a set of sixteen R-4D hypergolic thrusters, grouped into external clusters of four, to provide both translation and attitude control. 
The clusters were located near the craft's average centers of mass, and were fired in pairs in opposite directions for attitude control. A pair of translation thrusters are located at the rear of the Soyuz spacecraft; the counter-acting thrusters are similarly paired in the middle of the spacecraft (near the center of mass) pointing outwards and forward. These act in pairs to prevent the spacecraft from rotating. The thrusters for the lateral directions are mounted close to the center of mass of the spacecraft, in pairs as well. Location of thrusters on spaceplanes The suborbital X-15 and a companion training aero-spacecraft, the NF-104 AST, both intended to travel to an altitude that rendered their aerodynamic control surfaces unusable, established a convention for locations for thrusters on winged vehicles not intended to dock in space; that is, those that only have attitude control thrusters. Those for pitch and yaw are located in the nose, forward of the cockpit, and replace a standard radar system. Those for roll are located at the wingtips. The X-20, which would have gone into orbit, continued this pattern. Unlike these, the Space Shuttle Orbiter had many more thrusters, which were required to control vehicle attitude in both orbital flight and during the early part of atmospheric entry, as well as carry out rendezvous and docking maneuvers in orbit. Shuttle thrusters were grouped in the nose of the vehicle and on each of the two aft Orbital Maneuvering System pods. No nozzles interrupted the heat shield on the underside of the craft; instead, the nose RCS nozzles which control positive pitch were mounted on the side of the vehicle, and were canted downward. The downward-facing negative pitch thrusters were located in the OMS pods mounted in the tail/afterbody. International Space Station systems The International Space Station uses electrically powered control moment gyroscopes (CMG) for primary attitude control, with RCS thruster systems as backup and augmentation systems. References External links NASA.gov Space Shuttle RCS Spacecraft attitude control Spacecraft design Spacecraft propulsion
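The geometry described above (thrusters mounted off the center of mass, fired singly or in opposing pairs) can be summarized with a little vector arithmetic: each thruster contributes a force F and a torque r × F about the center of mass, so an opposed pair mounted fore and aft produces a pure torque with no net force, while a pair pushing the same way near the center of mass produces translation with little rotation. The sketch below only illustrates that bookkeeping; the positions and thrust levels are invented and belong to no particular spacecraft.

```python
import numpy as np

def net_force_and_torque(thrusters):
    """Sum forces and torques (r x F about the center of mass) for a set of thrusters.

    Each thruster is (r, F): position r relative to the center of mass [m]
    and thrust vector F [N]. Returns (net force [N], net torque [N*m]).
    """
    force = np.zeros(3)
    torque = np.zeros(3)
    for r, F in thrusters:
        force += F
        torque += np.cross(r, F)
    return force, torque

# Hypothetical attitude-control pair: equal and opposite thrust, mounted 2 m
# fore and aft of the center of mass -> pure torque about the y-axis, no net force.
couple = [
    (np.array([ 2.0, 0.0, 0.0]), np.array([0.0, 0.0,  10.0])),
    (np.array([-2.0, 0.0, 0.0]), np.array([0.0, 0.0, -10.0])),
]

# Hypothetical translation pair: both thrusters push the same way, mounted
# symmetrically about the center of mass -> net force, no net torque.
translation = [
    (np.array([0.0,  1.0, 0.0]), np.array([5.0, 0.0, 0.0])),
    (np.array([0.0, -1.0, 0.0]), np.array([5.0, 0.0, 0.0])),
]

print(net_force_and_torque(couple))       # (~[0, 0, 0] N, ~[0, -40, 0] N*m)
print(net_force_and_torque(translation))  # (~[10, 0, 0] N, ~[0, 0, 0] N*m)
```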
Reaction control system
[ "Engineering" ]
1,118
[ "Spacecraft design", "Design", "Aerospace engineering" ]
1,027,480
https://en.wikipedia.org/wiki/List%20of%207400-series%20integrated%20circuits
The following is a list of 7400-series digital logic integrated circuits. In the mid-1960s, the original 7400-series integrated circuits were introduced by Texas Instruments with the prefix "SN" to create the name SN74xx. Due to the popularity of these parts, other manufacturers released pin-to-pin compatible logic devices and kept the 7400 sequence number as an aid to identification of compatible parts. However, other manufacturers use different prefixes and suffixes on their part numbers. Overview Some TTL logic parts were made with an extended military-specification temperature range. These parts are prefixed with 54 instead of 74 in the part number. A short-lived 64 prefix on Texas Instruments parts indicated an industrial temperature range; this prefix had been dropped from the TI literature by 1973. Most recent 7400-series parts are fabricated in CMOS or BiCMOS technology rather than TTL. Surface-mount parts with a single gate (often in a 5-pin or 6-pin package) are prefixed with 741G instead of 74. Some manufacturers released some 4000-series equivalent CMOS circuits with a 74 prefix, for example, the 74HC4066 was a replacement for the 4066 with slightly different electrical characteristics (different power-supply voltage ratings, higher frequency capabilities, lower "on" resistances in analog switches, etc.). See List of 4000-series integrated circuits. Conversely, the 4000-series has "borrowed" from the 7400 series such as the CD40193 and CD40161 being pin-for-pin functional replacements for 74C193 and 74C161. Older TTL parts made by manufacturers such as Signetics, Motorola, Mullard and Siemens may have different numeric prefix and numbering series entirely, such as in the European FJ family FJH101 is an 8-input NAND gate like a 7430. A few alphabetic characters to designate a specific logic subfamily may immediately follow the 74 or 54 in the part number, e.g., 74LS74 for low-power Schottky. Some CMOS parts such as 74HCT74 for high-speed CMOS with TTL-compatible input thresholds are functionally similar to the TTL part. Not all functions are available in all families. The generic descriptive feature of these alphabetic characters was diluted by various companies participating in the market at its peak and are not always consistent especially with more recent offerings. The National Semiconductor trademarks of the words FAST and FACT are usually cited in the descriptions from other companies when describing their own unique designations. In a few instances, such as the 7478 and 74107, the same suffix in different families do not have completely equivalent logic functions. Another extension to the series is the 7416xxx variant, representing mostly the 16-bit-wide counterpart of otherwise 8-bit-wide "base" chips with the same three ending digits. Thus e.g. a "7416373" would be the 16-bit-wide equivalent of a "74373". Some 7416xxx parts, however, do not have a direct counterpart from the standard 74xxx range but deliver new functionality instead, which needs making use of the 7416xxx series' higher pin count. For more details, refer primarily to the Texas Instruments documentation mentioned in the References section. For CMOS (AC, HC, etc.) subfamilies, read "open drain" for "open collector" in the table below. There are a few numeric suffixes that have multiple conflicting assignments, such as the 74453. Logic gates Since there are numerous 7400-series parts, the following groups related parts to make it easier to pick a useful part number. 
This section only includes combinational logic gates. For part numbers in this section, "x" is the 7400-series logic family, such as LS, ALS, HCT, AHCT, HC, AHC, LVC, ...

Normal inputs / push–pull outputs
Hex 1-input: buffer 74x34, inverter 74x04
Quad 2-input: AND 74x08, NAND 74x00, OR 74x32, NOR 74x02, XOR 74x86, XNOR 74x7266
Triple 3-input: AND 74x11, NAND 74x10, OR 74x4075, NOR 74x27
Dual 4-input: AND 74x21, NAND 74x20, OR 74x4072, NOR 74x29
Single 8-input: NAND 74x30, OR 74x4078, NOR 74x4078

Schmitt-trigger inputs / push–pull outputs
Hex 1-input: buffer 74x7014, inverter 74x14
Quad 2-input: AND 74x7001, NAND 74x132, OR 74x7032, NOR 74x7002
Dual 4-input: NAND 74x13

Normal inputs / open-collector outputs
Hex 1-input: buffer 74x07, inverter 74x05
Quad 2-input: AND 74x09, NAND 74x03, NOR 74x33, XOR 74x136, XNOR 74x266
Triple 3-input: AND 74x15, NAND 74x12
Dual 4-input: NAND 74x22

Schmitt-trigger inputs / three-state outputs
Octal 1-input: buffers 74x241 and 74x244, inverter 74x240

AND-OR-invert (AOI) logic gates
NOTE: in past decades, a number of AND-OR-invert (AOI) parts were available in 7400 TTL families, but currently most are obsolete.
SN5450 = dual 2-2 AOI gate, one is expandable (SN54 is military version of SN74)
SN74LS51 = 2-2 AOI gate and 3-3 AOI gate
SN54LS54 = single 2-3-3-2 AOI gate

Larger footprints
Parts in this section have a pin count of 14 pins or more. The lower part numbers were established in the 1960s and 1970s, then higher part numbers were added incrementally over decades. IC manufacturers continue to make a core subset of this group, but many of these part numbers are considered obsolete and no longer manufactured. Older discontinued parts may be available from a limited number of sellers as new old stock (NOS), though some are much harder to find. For the following table: Part number column the "x" is a place holder for the logic subfamily name. For example, 74x00 in "LS" logic family would be "74LS00".
Description column simplified to make it easier to sort, thus isn't identical to datasheet title. The terms Schmitt trigger, open-collector/open-drain, three-state were moved to the input and output columns to make it easier to sort by those features. Input column a blank cell means a normal input for the logic family type. Output column a blank cell means a "totem pole" output, also known as a push–pull output, with the ability to drive ten standard inputs of the same logic subfamily (fan-out NO = 10). Outputs with higher output currents are often called drivers or buffers. Pins column number of pins for the dual in-line package (DIP) version; a number in parentheses (round brackets) indicates that there is no known dual in-line package version of this IC. Widebus Devices The widebus range in the 74xxx series includes higher-numbered parts like the 7416xxx and others, designed for extended functionality beyond standard chips. These components often feature 16-bit or wider data handling, serving as direct expansions of existing 8-bit designs (e.g., 74373 to 7416373) or introducing entirely new capabilities. They utilize higher pin counts to support larger data buses, advanced operations, and scalable digital logic solutions for more complex circuit requirements. Smaller footprints As board designs have migrated away from large amounts of logic chips, so has the need for many of the same gate in one package. Since about 1996, there has been an ongoing trend towards one / two / three logic gates per chip. Now logic can be placed where it is physically needed on a board, instead of running long signal traces to a full-size logic chip that has many of the same gate. All chips in the following sections are available 5- to 10-pin surface-mount packages. The right digits, after the 1G/2G/3G, typically has the same functional features as older legacy chips, except for the multifunctional chips and 4-digit chip numbers, which are unique to these newer families. The "x" in the part number is a place holder for the logic family name. For example, 74x1G14 in "LVC" logic family would be "74LVC1G14". The previously stated prefixes of "SN-" and "MC-" are used to denote manufacturers, Texas Instruments and ON Semiconductor respectively. Some of the manufacturers that make these smaller IC chips are: Diodes Incorporated, Nexperia (NXP Semiconductors), ON Semiconductor (Fairchild Semiconductor), Texas Instruments (National Semiconductor), Toshiba. The logic families available in small footprints are: AHC, AHCT, AUC, AUP, AXP, HC, HCT, LVC, VHC, NC7S, NC7ST, NC7SU, NC7SV. The LVC family is very popular in small footprints because it supports the most common logic voltages of 1.8 V, 3.3 V, 5 V, its inputs are 5 V tolerant when the device is powered at a lower voltage, and an output drive of 24 mA. Gates that are commonly available across most small footprint families are 00, 02, 04, 08, 14, 32, 86, 125, 126. Chips in this section typically contain the number of units noted by the number immediately before the 'G' in their prefix (e.g. 2G = 2 gates). Voltage translation All chips in this section have two power-supply pins to translate unidirectional logic signals between two different logic voltages. The logic families that support dual-supply voltage translation are AVC, AVCH, AXC, AXCH, AXP, LVC, where the "H" in AVCH and AXCH means "bus hold" feature. Chips in the above table support the following voltage ranges on either power supply pin: AXC = 0.65 to 3.6 V. Only available from Texas Instruments. 
AXP = 0.9 to 5.5 V. Only available from Nexperia. LVC = 1.65 to 5.5 V. Available from Diodes Inc, Nexperia, Texas Instruments. See also 4000-series integrated circuits List of 4000-series integrated circuits Push–pull output, Open-collector output, Three-state output Schmitt trigger input Logic gate, Logic family Programmable logic device Pin compatibility References Further reading Digital Integrated Circuits, National Semiconductor Corporation, January 1974 Logic/Memories/Interface/Analog/Microprocessor/Military Data Manual, Signetics Corporation, 1976 The TTL Data Book for Design Engineers, Second Edition, Texas Instruments, 1976 Bipolar LSI 1982 Databook, Monolithic Memories Incorporated, September 1981 Schottky TTL Data, DL121R1 Series D Third Printing, Motorola, 1983 High-Speed CMOS Logic Data Book, Texas Instruments, 1984 Logic: Overview, Texas Instruments Incorporated ALVC Advanced Low-Voltage CMOS Including SSTL, HSTL, And ALB (Rev. B), Texas Instruments, 2002 IC Master, 1976 7400 Electronic design Electronics lists 7400
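As a rough illustration of the part-numbering conventions described above (a 54/74 temperature-range prefix, optional logic-subfamily letters such as LS or HCT, an optional 1G/2G/3G single/dual/triple-gate marker, and a function number, with 16xxx function numbers marking the 16-bit "widebus" variants), here is a small parsing sketch. It is only a heuristic for the common cases discussed in this article; real part numbers also carry manufacturer prefixes (e.g. SN, MC) and package or temperature suffixes that this sketch ignores.

```python
import re

# Heuristic pattern for the generic core of a 7400-series part number:
# 54/74 prefix, optional subfamily letters, optional 1G/2G/3G gate count,
# then the function number (e.g. "00" for the quad 2-input NAND).
PART = re.compile(r"^(?P<temp>54|74)(?P<family>[A-Z]*?)(?P<gates>[123]G)?(?P<function>\d+)$")

def parse_7400(part: str) -> dict:
    m = PART.match(part.upper())
    if not m:
        raise ValueError(f"not a recognizable 7400-series core number: {part}")
    info = m.groupdict()
    info["temperature"] = "military range" if info["temp"] == "54" else "commercial range"
    # 7416xxx parts are mostly 16-bit-wide counterparts of the 3-digit base parts.
    info["widebus"] = info["function"].startswith("16") and len(info["function"]) > 3
    return info

for p in ["7400", "74LS00", "74HCT74", "74LVC1G14", "7416373"]:
    print(p, parse_7400(p))
```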
List of 7400-series integrated circuits
[ "Technology", "Engineering" ]
3,089
[ "Computer engineering", "Digital electronics", "Electronic design", "Electronic engineering", "Design", "Integrated circuits" ]
1,027,608
https://en.wikipedia.org/wiki/Ejabberd
ejabberd is an Extensible Messaging and Presence Protocol (XMPP) application server and an MQ Telemetry Transport (MQTT) broker, written mainly in the Erlang programming language. It can run under several Unix-like operating systems such as macOS, Linux, FreeBSD, NetBSD, OpenBSD and OpenSolaris. Additionally, ejabberd can run under Microsoft Windows. The name ejabberd stands for Erlang Jabber Daemon (Jabber being a former name for XMPP) and is written in lowercase only, as is common for daemon software. ejabberd is free software, distributed under the terms of the GNU GPL-2.0-or-later. , it is one of the most popular open source applications written in Erlang. XMPP: The Definitive Guide (O'Reilly Media, 2009) praised ejabberd for its scalability and clustering feature, at the same time pointing out that being written in Erlang is a potential acceptance issue for users and contributors. The software's creator, Alexey Shchepin was awarded the Erlang User of the Year award at the 2006 Erlang user conference. ejabberd has a number of notable deployments, IETF Groupchat Service, BBC Radio LiveText, Nokia's Ovi, KDE Talk and one in development at Facebook. ejabberd is the most popular server among smaller XMPP-powered sites that register on xmpp.org. With the next major release after version 2 (previously called ejabberd 3), the versioning scheme was changed to reflect release dates as "Year.Month-Revision" (starting with 13.04-beta1). It was also announced that further development will be split into an "ejabberd Community Server" and an "ejabberd Commercial Edition [which] targets carriers, websites, service providers, large corporations, universities, game companies, that need high level of commitment from ProcessOne, stability and performance and a unique set of features to run their business successfully." Project history Alexey Shchepin started ejabberd in November 2002 for three main reasons: success with Tkabber (his previous project, an XMPP client), a rather unstable first alpha release of jabberd2, and his wish to play with Erlang features. Shchepin has stated that he would have not started ejabberd without Erlang. Ejabberd hit version 1.0 in December 2005. Features ejabberd has a high level of compliance with XMPP. It provides a web interface which can be translated into other languages. ejabberd supports distributed computing by clustering, supports live upgrades, shared roster groups and provides support for virtual hosts. Database management systems supported include PostgreSQL and MySQL, and ODBC is supported for connectivity to other systems. LDAP authentication is supported, as is login via SSL/TLS, SASL and STARTTLS. ejabberd is extensible via modules, which can provide support for additional capabilities such as saving offline messages, connecting with IRC channels, or a user database which makes use of user's vCards (saving vCards in LDAP or an ODBC compatible database is possible with other modules). In addition, modules can provide support for extensions of the XMPP protocol, such as MUC, HTTP polling, Publish-Subscribe, and gathering statistics via XMPP. Starting with version 2.0.0 ejabberd also includes support for the Proxy65 file transfer proxy which enabled Jabber/XMPP users behind firewalls to share files through a SOCKS 5 proxy. ejabberd can communicate with other XMPP servers and with non-XMPP instant messaging networks as well, using a special type of XMPP component called transport or gateway. 
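To give a feel for how the features above (virtual hosts, pluggable authentication, and optional modules such as multi-user chat and Publish-Subscribe) are exposed to administrators, here is a minimal sketch in the YAML format used by recent ejabberd releases. It is illustrative only: the host name is invented, and exact option names and defaults vary between versions, so the ejabberd documentation for the installed release should be treated as authoritative.

```yaml
# Minimal illustrative ejabberd.yml sketch (not a complete or verified configuration).
hosts:
  - "chat.example.org"        # virtual host served by this node (hypothetical name)

listen:
  -
    port: 5222                # standard XMPP client-to-server port
    module: ejabberd_c2s      # client connection handler
    starttls: true            # allow TLS negotiation via STARTTLS

auth_method: internal         # could instead be ldap, sql, ...

modules:
  mod_roster: {}              # contact lists
  mod_offline: {}             # store messages for offline users
  mod_muc: {}                 # multi-user chat rooms
  mod_pubsub: {}              # Publish-Subscribe service
```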
Distribution methods In addition to the source code package and binary installers for Linux, macOS, and Windows, ejabberd is also available in several operating system distributions as is typical in FOSS, including Debian, Fedora, Gentoo, Ubuntu, Arch Linux, OpenSolaris, FreeBSD Ports, OpenBSD ports, NetBSD port and Mac OS X's Fink. Other methods in which ejabberd is available are the TurnKey Linux Virtual Appliance Library and Comprehensive Erlang Archive Network (CEAN). An old version of ejabberd is included in the Unison unified communications software. Notable deployments ejabberd is known to be used by XMPP-related sites and a number of companies, either for providing an XMPP instant messaging service, as a meeting chat room service, or as middleware for other software (usually by means of the Publish-Subscribe service). XMPP servers deployed for XMPP-related sites are usually run using ejabberd, both in case of large and small servers. One large public XMPP servers runs ejabberd: the Russian jabber.ru, that handles between 10,000 and 20,000 concurrent users at any time. Among smaller XMPP-related sites, ejabberd is also the most popular server. When not taking into account the size or nature of the server, ejabberd is also the most widely deployed: according to IMtrends report from July 2008 based in automated server detection, 37% of 7292 servers were running ejabberd; the second position being jabberd14 with 22.4% and the third Openfire with 18.4%. Among generic instant messaging deployments are ISPs like the Portuguese SAPO, and the German United Internet for services like GMX and Web.de. The Russian search engine Yandex uses a highly modified version of ejabberd, named Yabberd. Nokia's Ovi uses ejabberd with some customizations. Major League Baseball offers instant messaging and chatrooms using a customized ejabberd. Mxit was a large server for mobile instant messaging client that started using ejabberd in 2005, but was replaced with a custom IM engine. Universities known to use ejabberd include: Saint Petersburg State University, Taganrog State University and the Division of Information Technology of the University of Wisconsin–Madison. In the FOSS world, there is a pair of notable generic deployments of ejabberd, namely the KDE Talk and the Fellowship of the Free Software Foundation Europe. ejabberd chatroom feature provides the IETF Groupchat Service, used by the various working groups, areas, and BOF sessions during meetings and at other times. Other deployments use ejabberd in more novel ways. For instance, BBC Radio LiveText uses ejabberd's Publish-Subscribe service to synchronously broadcast text content with the radio stream. sameplace.cc is a Mozilla Firefox extension that integrates Jabber/XMPP in the web browser, and uses ejabberd for the XMPP server. Other deployments include Chesspark (online chess playing site), Collecta (real-time search), and Notifixious (notifications of website subscriptions). One Laptop per Child's School server uses ejabberd with OLPC-specific patches as the instant messaging server. In 2008 Facebook announced that they will support XMPP for their chat service. Facebook developers made a presentation on the topic at Commercial Users of Functional Programming (CUFP) 2009 conference, and in November 2009 chat.facebook.com was detected as running a modified version of ejabberd. Om Malik commented on the development as "disruptive" competition for "older IM networks such as AOL's AIM and Microsoft's MSN". 
On Feb 10th 2010, the Facebook blog announced the opening of the XMPP interface to Facebook chat, based on ejabberd. Another social media and blogging service that uses ejabberd is LiveJournal Talk. The Spanish-focused Tuenti social network uses a modified ejabberd to provide a live chat service. The worldwide jabber.org XMPP server, with a userbase of 330,000 users and 15,000 users online at any one time in December 2009, have used ejabberd since February 2006 until January 2010. (In 2010 Jabber.org migrated to M-Link XMPP server from Isode Limited.) Nintendo Switch uses ejabberd in its "Nintendo Switch Push Notification infrastructure" (NPNS) handling 10 million simultaneous connections. Publications and reception Two articles are published about ejabberd in magazines: "Démarrer avec ejabberd" in the French magazine PROgrammez! and "Passing notes in class", a post in Free Software Magazine. Computerworld Australia interviewed Erlang creator Joe Armstrong in June 2009, and he referred to ejabberd in this way: Q: "What's the most interesting program(s) you've seen written with Erlang for business?" A: "That's difficult to answer, there are many good applications. Possibly Ejabberd which is an open-source Jabber/XMPP instant messaging server. Ejabberd appears to be the market leading XMPP server and things like Google Wave which runs on top of XMPP will probably attract a lot of people into building applications on XMPP servers." Builder Australia interviewed Andre Pang in September 2007, and referred to ejabberd in those terms: "the apps that Erlang is suited for really aren't CPU bound that often, if you look at ejabberd, it serves some absolutely crazy amount of concurrent connections, well over 100,000, and they're running it on, I'm not sure, but it's something like a Quad core XEON machine." ejabberd is mentioned in several books related to the XMPP protocol and the Erlang language. XMPP: The Definitive Guide (O'Reilly Media, 2009) refers to ejabberd in those terms: The server is well-known for its scalability, and it can be clustered across multiple instances. A 2006 internal review paper in the IT department of Cambridge University found it the best choice amongst Jabber servers. In the same year Alexey Shchepin was awarded the "User of the Year" award at the 12th International Erlang/OTP User Conference. 
Other published books that mention ejabberd are: "Programming Erlang: Software for a Concurrent World" (Pragmatic Bookshelf, 2008) "Erlang Programming: A Concurrent Approach to Software Development" (O'Reilly Media, 2009) "Openfire Administration: A practical step-by-step guide to rolling out a secure Instant Messaging service over your network" (Packt Publishing, 2008) "Fedora 11 and Red Hat Enterprise Linux Bible" (Wiley, 2009) ejabberd was used in research works of papers published in international conferences proceedings and journals: XMPP for cloud computing in bioinformatics supporting discovery and invocation of asynchronous web services Kestrel: an XMPP-based framework for many task computing applications IM'ing overload: Libraryh3lp to the rescue Towards an Enhanced Adaptability and Usability of Web-Based Collaborative Systems Leveraging Visual Tailoring and Synchronous Awareness in Web-Based Collaborative Systems Adding New Communication Services to the FIPA Message Transport System There are four patent applications published in the United States Patent and Trademark Office that involve ejabberd: US 2007/0271367 A1: Systems and Methods for Location-Based Social Web Interaction and Instant Messaging System US 2008/0062969 A1: Instant Message Call Connect System Apparatus and Database US 2008/0062970 A1: Instant Message Call Connect System Method and Interface US 2008/0235189 A1: System for Searching for Information Based on Personal Interactions and Presences and Methods Thereof See also References External links Instant messaging server software Erlang (programming language) Free software programmed in Erlang XMPP
Ejabberd
[ "Technology" ]
2,540
[ "Instant messaging", "XMPP", "Instant messaging server software" ]
1,027,729
https://en.wikipedia.org/wiki/Fluoride%20toxicity
Fluoride toxicity is a condition in which there are elevated levels of the fluoride ion in the body. Although fluoride is safe for dental health at low concentrations, sustained consumption of large amounts of soluble fluoride salts is dangerous. Referring to a common salt of fluoride, sodium fluoride (NaF), the lethal dose for most adult humans is estimated at 5 to 10 g (which is equivalent to 32 to 64 mg elemental fluoride/kg body weight). Ingestion of fluoride can produce gastrointestinal discomfort at doses at least 15 to 20 times lower (0.2–0.3 mg/kg or 10 to 15 mg for a 50 kg person) than lethal doses. Although it is helpful topically for dental health in low dosage, chronic ingestion of fluoride in large amounts interferes with bone formation. In this way, the most widespread examples of fluoride poisoning arise from consumption of ground water that is abnormally fluoride-rich. Recommended levels For optimal dental health, the World Health Organization recommends a level of fluoride from 0.5 to 1.0 mg/L (milligrams per liter), depending on climate. Fluorosis becomes possible above this recommended dosage. As of 2015, the United States Health and Human Services Department recommends a maximum of 0.7 milligrams of fluoride per liter of water – updating and replacing the previous recommended range of 0.7 to 1.2 milligrams issued in 1962. The new recommended level is intended to reduce the occurrence of dental fluorosis while maintaining water fluoridation. Toxicity Chronic In India an estimated 60 million people have been poisoned by well water contaminated by excessive fluoride, which is dissolved from the granite rocks. The effects are particularly evident in the bone deformities of children. Similar or larger problems are anticipated in other countries including China, Uzbekistan, and Ethiopia. Acute Historically, most cases of acute fluoride toxicity have followed accidental ingestion of sodium fluoride based insecticides or rodenticides. Currently, in advanced countries, most cases of fluoride exposure are due to the ingestion of dental fluoride products. Other sources include glass-etching or chrome-cleaning agents like ammonium bifluoride or hydrofluoric acid, industrial exposure to fluxes used to promote the flow of a molten metal on a solid surface, volcanic ejecta (for example, in cattle grazing after an 1845–1846 eruption of Hekla and the 1783–1784 flood basalt eruption of Laki), and metal cleaners. Malfunction of water fluoridation equipment has happened several times, including a notable incident in Alaska. Occurrence Organofluorine compounds Twenty percent of modern pharmaceuticals contain fluorine. These organofluorine compounds are not sources of fluoride poisoning, as the carbon–fluorine bond is too strong to release fluoride. Fluoride in toothpaste Children may experience gastrointestinal distress upon ingesting excessive amounts of flavored toothpaste. Between 1990 and 1994, over 628 people, mostly children were treated after ingesting too much fluoride-containing toothpaste. "While the outcomes were generally not serious," gastrointestinal symptoms appear to be the most common problem reported. However given the low concentration of fluoride in dental products, this is potentially due to the consumption of other major components. Fluoride in drinking water Around one-third of the world's population drinks water from groundwater resources. 
Of this, about 10 percent, approximately 300 million people, obtains water from groundwater resources that are heavily contaminated with arsenic or fluoride. These trace elements derive mainly from leaching of minerals. Maps are available for locations of potential problematic wells via the Groundwater Assessment Platform (GAP). Effects Excess fluoride consumption has been studied as a factor in the following: Brain Some research has suggested that high levels of fluoride exposure may adversely affect neurodevelopment in children, but the evidence is of insufficient quality to allow any firm conclusions to be drawn. In 2024, a U.S. government study released by HHS found higher levels of fluoride exposure, such as drinking water containing more than 1.5 milligrams of fluoride per liter (which is the recommended safe limit set by the WHO), are associated with lower IQ in children. Bones Whilst fluoridated water is associated with decreased levels of fractures in a population, toxic levels of fluoride have been associated with a weakening of bones and an increase in hip and wrist fractures. The U.S. National Research Council concludes that fractures with fluoride levels 1–4 mg/L, suggesting a dose-response relationship, but states that there is "suggestive but inadequate for drawing firm conclusions about the risk or safety of exposures at [2 mg/L]". Consumption of fluoride at levels beyond those used in fluoridated water for a long period of time causes skeletal fluorosis. In some areas, particularly the Asian subcontinent, skeletal fluorosis is endemic. It is known to cause irritable-bowel symptoms and joint pain. Early stages are not clinically obvious, and may be misdiagnosed as (seronegative) rheumatoid arthritis or ankylosing spondylitis. Kidney Fluoride induced nephrotoxicity is kidney injury due to toxic levels of serum fluoride, commonly due to release of fluoride from fluorine-containing drugs, such as methoxyflurane. Within the recommended dose, no effects are expected, but chronic ingestion in excess of 12 mg/day are expected to cause adverse effects, and an intake that high is possible when fluoride levels are around 4 mg/L. Those with impaired kidney function are more susceptible to adverse effects. The kidney injury is characterised by failure to concentrate urine, leading to polyuria, and subsequent dehydration with hypernatremia and hyperosmolarity. Inorganic fluoride inhibits adenylate cyclase activity required for antidiuretic hormone effect on the distal convoluted tubule of the kidney. Fluoride also stimulates intrarenal vasodilation, leading to increased medullary blood flow, which interferes with the counter current mechanism in the kidney required for concentration of urine. Fluoride induced nephrotoxicity is dose dependent, typically requiring serum fluoride levels exceeding 50 micromoles per liter (about 1 ppm) to cause clinically significant renal dysfunction, which is likely when the dose of methoxyflurane exceeds 2.5 MAC hours. (Note: "MAC hour" is the multiple of the minimum alveolar concentration (MAC) of the anesthetic used times the number of hours the drug is administered, a measure of the dosage of inhaled anesthetics.) Elimination of fluoride depends on glomerular filtration rate. Thus, patients with chronic kidney disease will maintain serum fluoride for longer period of time, leading to increased risk of fluoride induced nephrotoxicity. 
Teeth The only generally accepted adverse effect of fluoride at levels used for water fluoridation is dental fluorosis, which can alter the appearance of children's teeth during tooth development; this is mostly mild and usually only an aesthetic concern. Compared to unfluoridated water, fluoridation to 1 mg/L is estimated to cause fluorosis in one of every 6 people (range 4–21), and to cause fluorosis of aesthetic concern in one of every 22 people (range 13.6–∞). Thyroid Fluoride's suppressive effect on the thyroid is more severe when iodine is deficient, and fluoride is associated with lower levels of iodine. Thyroid effects in humans were associated with fluoride levels 0.05–0.13 mg/kg/day when iodine intake was adequate and 0.01–0.03 mg/kg/day when iodine intake was inadequate. Its mechanisms and effects on the endocrine system remain unclear. Testing on mice shows that the medication gamma-Aminobutyric acid (GABA) can be used to treat fluoride toxicity of the thyroid and return normal function. Effects on aquatic organisms Fluoride accumulates in the bone tissues of fish and in the exoskeleton of aquatic invertebrates. The mechanism of fluoride toxicity in aquatic organisms is believed to involve the action of fluoride ions as enzymatic poisons. In soft waters with low ionic content, invertebrates and fishes may develop adverse effects from fluoride concentration as low as 0.5 mg/L. Negative effects are less in hard waters and seawaters, as the bioavailability of fluoride ions is reduced with increasing water hardness Seawater contains fluoride at a concentration of 1.3 mg/L. Mechanism Like most soluble materials, fluoride compounds are readily absorbed by the stomach and intestines, and excreted through the urine. Urine tests have been used to ascertain rates of excretion in order to set upper limits in exposure to fluoride compounds and associated detrimental health effects. Ingested fluoride initially acts locally on the intestinal mucosa, where it forms hydrofluoric acid in the stomach. References External links Element toxicology Toxic effects of substances chiefly nonmedicinal as to source Fluorides
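The dose figures quoted in the introduction can be reproduced with simple unit arithmetic: sodium fluoride is roughly 45% fluoride by mass (about 19.0 g/mol of fluorine out of about 42.0 g/mol of NaF), so a 5 to 10 g ingestion works out to roughly 32 to 64 mg of elemental fluoride per kilogram for an adult of about 70 kg, and the 0.2–0.3 mg/kg gastrointestinal threshold corresponds to 10 to 15 mg for a 50 kg person. The sketch below only restates that conversion; the 70 kg reference body weight is an assumption used here to make the published per-kilogram figures come out.

```python
# Back-of-the-envelope conversion between a sodium fluoride dose and
# elemental fluoride per kilogram of body weight.
M_F = 19.00    # molar mass of fluorine, g/mol
M_NAF = 41.99  # molar mass of sodium fluoride, g/mol
FLUORIDE_FRACTION = M_F / M_NAF  # ~0.45 of NaF mass is fluoride

def fluoride_mg_per_kg(naf_grams: float, body_kg: float) -> float:
    """Elemental fluoride dose (mg per kg body weight) from ingesting NaF."""
    fluoride_mg = naf_grams * 1000 * FLUORIDE_FRACTION
    return fluoride_mg / body_kg

# Lethal-range estimate quoted in the article, assuming a ~70 kg adult.
print(fluoride_mg_per_kg(5, 70))   # ~32 mg/kg
print(fluoride_mg_per_kg(10, 70))  # ~65 mg/kg

# Gastrointestinal-discomfort threshold for a 50 kg person at 0.2-0.3 mg/kg.
print(0.2 * 50, 0.3 * 50)          # 10.0 15.0 mg
```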
Fluoride toxicity
[ "Chemistry", "Environmental_science" ]
1,990
[ "Element toxicology", "Toxicology", "Biology and pharmacology of chemical elements", "Salts", "Toxic effects of substances chiefly nonmedicinal as to source", "Fluorides" ]
1,027,770
https://en.wikipedia.org/wiki/Society%20for%20the%20Protection%20of%20Ancient%20Buildings
The Society for the Protection of Ancient Buildings (SPAB) (also known as Anti-Scrape) is an amenity society founded by William Morris, Philip Webb, and others in 1877 to oppose the destructive 'restoration' of ancient buildings occurring in Victorian England. "Ancient" is used here in the wider sense rather than the more usual modern sense of "pre-medieval." History Morris' call for the society to be founded was provoked by Sir Gilbert Scott's proposed restoration of Tewkesbury Abbey. In an 1877 letter printed in The Athenæum, he wrote Alongside Morris, Philip Webb was instrumental in establishing the society in the month following Morris' letter. Initial supporters announced at the group's initial meeting included Thomas Carlyle, John Ruskin, James Bryce, Sir John Lubbock, Leslie Stephen, Coventry Patmore, Edward Burne-Jones, Holman Hunt, Lord Houghton and A. J. Mundella. Morris drafted a manifesto, and served as Honorary Secretary for the society's first year, continuing as an active member for the remainder of his life. Morris was particularly concerned about the practice, which he described as "forgery", of attempting to return functioning buildings to an idealized state from the distant past, often involving the removal of elements added in their later development, which he thought had contributed to their interest as documents of the past. Instead, he proposed that ancient buildings should be repaired, not restored, to protect as cultural heritage their entire history. Today, these principles are widely accepted. Morris referred to the society as "Anti-scrape", a reference to the practice of scraping plaster and other later additions from buildings in order to reveal bare stonework. Early causes taken on by the society included Scott's plans for Tewkesbury Abbey; a planned restoration of the choir of Canterbury Cathedral; destruction of Christopher Wren's churches in the City of London; and the rebuilding of the nave roof of St Alban's Abbey. In 1879, they organised representations against a proposal to rebuild the West front of St Mark's Basilica, Venice. The approach to conservation advocated by the SPAB was influential upon the National Trust after it acquired its first building, Alfriston Clergy House, in 1895. The SPAB had earlier been consulted on the building and had put the owners in contact with the nascent National Trust. The Trust opted to take a preservation approach to the building, in line with SPAB ideas, which has remained its principle for all its buildings acquired since. The architect A.R. Powys served as the Secretary of the SPAB for 25 years in the early 20th century. Organization and activities Today, the SPAB still operates according to Morris's original manifesto. It campaigns, advises, runs training programmes and courses, conducts research, and publishes information. As one of the National Amenity Societies, the Society is a statutory consultee on alterations to listed buildings, and by law must be notified of any application in England and Wales to demolish any listed building in whole or in part. The society, which is a registered charity, is based at 37 Spital Square, London. The society has branches in Scotland, Ireland, and Wales. In 2022, the society reported 6579 members. For its dedicated service to heritage, the society was awarded the European Union Prize for Cultural Heritage / Europa Nostra Award in 2012. The society's Mills Section is concerned with the protection, repair, and continued use of traditional windmills and watermills. 
Ken Major carried out much work on its behalf. An annual award honours the memory of church enthusiast and SPAB member Sir John Betjeman. The award is presented for outstanding repairs to the fabric of places of worship in England and Wales completed in the last 18 months. See also Ancient Monuments Society The Georgian Group Building Preservation Trust Building preservation and conservation trusts in the UK Architectural Heritage Society of Scotland Scottish Civic Trust References Further reading Miele, Chris, Ed. (2005) From William Morris. Building Conservation and the Arts and Crafts Cult of Authenticity 1877–1939. New Haven and London: Yale University Press. Donovan, Andrea (2007) William Morris and the Society for the Protection of Ancient Buildings. London: Routledge. Vallance, Aymer (1897/1995) The Life and Work of William Morris. George Bell and Sons 1897. Reprint Studio Editions. London. 1995. Beatty, Claudius J.P. (1995) Thomas Hardy: Conservation Architect – His Work for the Society for the Protection of Ancient Buildings. Dorset Natural History and Archaeological Society. Lethaby, W.R.(1935/1979) Philip Webb and His Work. Oxford University Press 1935. Reprint Raven Oak Press. London. 1979. MacCarthy, Fiona (1994) William Morris. A Life for Our Time. London:Faber and Faber. Snell, Reginald (1986) William Weir and Dartington Hall. Dartington Hall Trust. External links 1877 establishments in England Architecture in England Clubs and societies in London Charities based in London Heritage organisations in England Heritage organisations in Scotland Architectural history Conservation and restoration organizations
Society for the Protection of Ancient Buildings
[ "Engineering" ]
1,035
[ "Architectural history", "Architecture" ]
1,027,784
https://en.wikipedia.org/wiki/Simplicial%20set
In mathematics, a simplicial set is a sequence of sets with internal order structure (abstract simplices) and maps between them. Simplicial sets are higher-dimensional generalizations of directed graphs. Every simplicial set gives rise to a "nice" topological space, known as its geometric realization. This realization consists of geometric simplices, glued together according to the rules of the simplicial set. Indeed, one may view a simplicial set as a purely combinatorial construction designed to capture the essence of a topological space for the purposes of homotopy theory. Specifically, the category of simplicial sets carries a natural model structure, and the corresponding homotopy category is equivalent to the familiar homotopy category of topological spaces. Formally, a simplicial set may be defined as a contravariant functor from the simplex category to the category of sets. Simplicial sets were introduced in 1950 by Samuel Eilenberg and Joseph A. Zilber. Simplicial sets are used to define quasi-categories, a basic notion of higher category theory. A construction analogous to that of simplicial sets can be carried out in any category, not just in the category of sets, yielding the notion of simplicial objects. Motivation A simplicial set is a categorical (that is, purely algebraic) model capturing those topological spaces that can be built up (or faithfully represented up to homotopy) from simplices and their incidence relations. This is similar to the approach of CW complexes to modeling topological spaces, with the crucial difference that simplicial sets are purely algebraic and do not carry any actual topology. To get back to actual topological spaces, there is a geometric realization functor which turns simplicial sets into compactly generated Hausdorff spaces. Most classical results on CW complexes in homotopy theory are generalized by analogous results for simplicial sets. While algebraic topologists largely continue to prefer CW complexes, there is a growing contingent of researchers interested in using simplicial sets for applications in algebraic geometry where CW complexes do not naturally exist. Intuition Simplicial sets can be viewed as a higher-dimensional generalization of directed multigraphs. A simplicial set contains vertices (known as "0-simplices" in this context) and arrows ("1-simplices") between some of these vertices. Two vertices may be connected by several arrows, and directed loops that connect a vertex to itself are also allowed. Unlike directed multigraphs, simplicial sets may also contain higher simplices. A 2-simplex, for instance, can be thought of as a two-dimensional "triangular" shape bounded by a list of three vertices A, B, C and three arrows B → C, A → C and A → B. In general, an n-simplex is an object made up from a list of n + 1 vertices (which are 0-simplices) and n + 1 faces (which are (n − 1)-simplices). The vertices of the i-th face are the vertices of the n-simplex minus the i-th vertex. The vertices of a simplex need not be distinct and a simplex is not determined by its vertices and faces: two different simplices may share the same list of faces (and therefore the same list of vertices), just like two different arrows in a multigraph may connect the same two vertices. Simplicial sets should not be confused with abstract simplicial complexes, which generalize simple undirected graphs rather than directed multigraphs. 
Formally, a simplicial set X is a collection of sets Xn, n = 0, 1, 2, ..., together with certain maps between these sets: the face maps dn,i : Xn → Xn−1 (n = 1, 2, 3, ... and 0 ≤ i ≤ n) and degeneracy maps sn,i : Xn→Xn+1 (n = 0, 1, 2, ... and 0 ≤ i ≤ n). We think of the elements of Xn as the n-simplices of X. The map dn,i assigns to each such n-simplex its i-th face, the face "opposite to" (i.e. not containing) the i-th vertex. The map sn,i assigns to each n-simplex the degenerate (n+1)-simplex which arises from the given one by duplicating the i-th vertex. This description implicitly requires certain consistency relations among the maps dn,i and sn,i. Rather than requiring these simplicial identities explicitly as part of the definition, the short modern definition uses the language of category theory. Formal definition Let Δ denote the simplex category. The objects of Δ are nonempty totally ordered sets. Each object is uniquely order isomorphic to an object of the form [n] = {0, 1, ..., n} with n ≥ 0. The morphisms in Δ are (non-strictly) order-preserving functions between these sets. A simplicial set X is a contravariant functor X : Δ → Set where Set is the category of sets. (Alternatively and equivalently, one may define simplicial sets as covariant functors from the opposite category Δop →f Set.) Given a simplicial set X, we often write Xn instead of X([n]). Simplicial sets form a category, usually denoted sSet, whose objects are simplicial sets and whose morphisms are natural transformations between them. This is the category of presheaves on Δ. As such, it is a topos. Face and degeneracy maps and simplicial identities The morphisms (maps) of the simplex category Δ are generated by two particularly important families of morphisms, whose images under a given simplicial set functor are called the face maps and degeneracy maps of that simplicial set. The face maps of a simplicial set X are the images in that simplicial set of the morphisms , where is the only (order-preserving) injection that "misses" . Let us denote these face maps by respectively, so that is a map . If the first index is clear, we write instead of . The degeneracy maps of the simplicial set X are the images in that simplicial set of the morphisms , where is the only (order-preserving) surjection that "hits" twice. Let us denote these degeneracy maps by respectively, so that is a map . If the first index is clear, we write instead of . The defined maps satisfy the following simplicial identities: if i < j. (This is short for if 0 ≤ i < j ≤ n.) if i < j. if i = j or i = j + 1. if i > j + 1. if i ≤ j. Conversely, given a sequence of sets Xn together with maps and that satisfy the simplicial identities, there is a unique simplicial set X that has these face and degeneracy maps. So the identities provide an alternative way to define simplicial sets. Examples Given a partially ordered set (S, ≤), we can define a simplicial set NS, called the nerve of S, as follows: for every object [n] of Δ we set NS([n]) = homposet( [n] , S), the set of order-preserving maps from [n] to S. Every morphism φ: [n] → [m] in Δ is an order preserving map, and via composition induces a map NS(φ) : NS([m]) → NS([n]). It is straightforward to check that NS is a contravariant functor from Δ to Set: a simplicial set. Concretely, the n-simplices of the nerve NS, i.e. the elements of NSn = NS([n]), can be thought of as ordered length-(n+1) sequences of elements from S: (a0 ≤ a1 ≤ ... ≤ an). 
The face map di drops the i-th element from such a list, and the degeneracy maps si duplicates the i-th element. A similar construction can be performed for every category C, to obtain the nerve NC of C. Here, NC([n]) is the set of all functors from [n] to C, where we consider [n] as a category with objects 0,1,...,n and a single morphism from i to j whenever i ≤ j. Concretely, the n-simplices of the nerve NC can be thought of as sequences of n composable morphisms in C: a0 → a1 → ... → an. (In particular, the 0-simplices are the objects of C and the 1-simplices are the morphisms of C.) The face map d0 drops the first morphism from such a list, the face map dn drops the last, and the face map di for 0 < i < n drops ai and composes the i-th and (i + 1)-th morphisms. The degeneracy maps si lengthen the sequence by inserting an identity morphism at position i. We can recover the poset S from the nerve NS and the category C from the nerve NC; in this sense simplicial sets generalize posets and categories. Another important class of examples of simplicial sets is given by the singular set SY of a topological space Y. Here SYn consists of all the continuous maps from the standard topological n-simplex to Y. The singular set is further explained below. The standard n-simplex and the category of simplices The standard n-simplex, denoted Δn, is a simplicial set defined as the functor homΔ(-, [n]) where [n] denotes the ordered set {0, 1, ... ,n} of the first (n + 1) nonnegative integers. (In many texts, it is written instead as hom([n],-) where the homset is understood to be in the opposite category Δop.) By the Yoneda lemma, the n-simplices of a simplicial set X stand in 1–1 correspondence with the natural transformations from Δn to X, i.e. . Furthermore, X gives rise to a category of simplices, denoted by , whose objects are maps (i.e. natural transformations) Δn → X and whose morphisms are natural transformations Δn → Δm over X arising from maps [n] → [m] in Δ. That is, is a slice category of Δ over X. The following isomorphism shows that a simplicial set X is a colimit of its simplices: where the colimit is taken over the category of simplices of X. Geometric realization There is a functor |•|: sSet → CGHaus called the geometric realization taking a simplicial set X to its corresponding realization in the category |•|: sSet → CGHaus of compactly-generated Hausdorff topological spaces. Intuitively, the realization of X is the topological space (in fact a CW complex) obtained if every n-simplex of X is replaced by a topological n-simplex (a certain n-dimensional subset of (n + 1)-dimensional Euclidean space defined below) and these topological simplices are glued together in the fashion the simplices of X hang together. In this process the orientation of the simplices of X is lost. To define the realization functor, we first define it on standard n-simplices Δn as follows: the geometric realization |Δn| is the standard topological n-simplex in general position given by The definition then naturally extends to any simplicial set X by setting |X| = limΔn → X | Δn| where the colimit is taken over the n-simplex category of X. The geometric realization is functorial on sSet. 
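For reference, the simplicial identities mentioned in the earlier section take the following form in the customary indexing (first index suppressed); this is the standard statement:

```latex
\begin{aligned}
d_i\, d_j &= d_{j-1}\, d_i && (i < j)\\
d_i\, s_j &= s_{j-1}\, d_i && (i < j)\\
d_i\, s_j &= \mathrm{id} && (i = j \ \text{or}\ i = j+1)\\
d_i\, s_j &= s_j\, d_{i-1} && (i > j+1)\\
s_i\, s_j &= s_{j+1}\, s_i && (i \le j)
\end{aligned}
```

Returning to the nerve of a poset described above, a small illustrative sketch (plain Python; the poset, function names, and printed example are made up for illustration) enumerates the n-simplices as weakly increasing chains and implements the face and degeneracy maps by dropping or repeating an entry:

```python
from itertools import product

# Hypothetical finite poset: here simply the chain 0 < 1 < 2.
S = [0, 1, 2]
def leq(a, b):
    return a <= b

def nerve_simplices(S, leq, n):
    """n-simplices of the nerve NS: order-preserving maps [n] -> S,
    i.e. weakly increasing chains (a_0 <= a_1 <= ... <= a_n)."""
    return [c for c in product(S, repeat=n + 1)
            if all(leq(c[i], c[i + 1]) for i in range(n))]

def face(chain, i):
    """d_i: drop the i-th element of the chain."""
    return chain[:i] + chain[i + 1:]

def degeneracy(chain, i):
    """s_i: repeat the i-th element of the chain."""
    return chain[:i + 1] + chain[i:]

print(nerve_simplices(S, leq, 1))  # [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
print(face((0, 1, 2), 1))          # (0, 2)
print(degeneracy((0, 2), 0))       # (0, 0, 2)
```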
It is significant that we use the category CGHaus of compactly-generated Hausdorff spaces, rather than the category Top of topological spaces, as the target category of geometric realization: like sSet and unlike Top, the category CGHaus is cartesian closed; the categorical product is defined differently in the categories Top and CGHaus, and the one in CGHaus corresponds to the one in sSet via geometric realization. Singular set for a space The singular set of a topological space Y is the simplicial set SY defined by (SY)([n]) = homTop(|Δn|, Y) for each object [n] ∈ Δ. Every order-preserving map φ:[n]→[m] induces a continuous map |Δn|→|Δm| in a natural way, which by composition yields SY(φ) : SY([m]) → SY([n]). This definition is analogous to a standard idea in singular homology of "probing" a target topological space with standard topological n-simplices. Furthermore, the singular functor S is right adjoint to the geometric realization functor described above, i.e.: homTop(|X|, Y) ≅ homsSet(X, SY) for any simplicial set X and any topological space Y. Intuitively, this adjunction can be understood as follows: a continuous map from the geometric realization of X to a space Y is uniquely specified if we associate to every simplex of X a continuous map from the corresponding standard topological simplex to Y, in such a fashion that these maps are compatible with the way the simplices in X hang together. Homotopy theory of simplicial sets In order to define a model structure on the category of simplicial sets, one has to define fibrations, cofibrations and weak equivalences. One can define fibrations to be Kan fibrations. A map of simplicial sets is defined to be a weak equivalence if its geometric realization is a weak homotopy equivalence of spaces. A map of simplicial sets is defined to be a cofibration if it is a monomorphism of simplicial sets. It is a difficult theorem of Daniel Quillen that the category of simplicial sets with these classes of morphisms becomes a model category, and indeed satisfies the axioms for a proper closed simplicial model category. A key turning point of the theory is that the geometric realization of a Kan fibration is a Serre fibration of spaces. With the model structure in place, a homotopy theory of simplicial sets can be developed using standard homotopical algebra methods. Furthermore, the geometric realization and singular functors give a Quillen equivalence of closed model categories inducing an equivalence |•|: Ho(sSet) ↔ Ho(Top) between the homotopy category for simplicial sets and the usual homotopy category of CW complexes with homotopy classes of continuous maps between them. It is part of the general definition of a Quillen adjunction that the right adjoint functor (in this case, the singular set functor) carries fibrations (resp. trivial fibrations) to fibrations (resp. trivial fibrations). Simplicial objects A simplicial object X in a category C is a contravariant functor X : Δ → C or equivalently a covariant functor X: Δop → C, where Δ still denotes the simplex category and op the opposite category. When C is the category of sets, we are just talking about the simplicial sets that were defined above. Letting C be the category of groups or category of abelian groups, we obtain the categories sGrp of simplicial groups and sAb of simplicial abelian groups, respectively. Simplicial groups and simplicial abelian groups also carry closed model structures induced by that of the underlying simplicial sets. 
The homotopy groups of simplicial abelian groups can be computed by making use of the Dold–Kan correspondence which yields an equivalence of categories between simplicial abelian groups and bounded chain complexes and is given by functors N: sAb → Ch+ and Γ: Ch+ →  sAb. History and uses of simplicial sets Simplicial sets were originally used to give precise and convenient descriptions of classifying spaces of groups. This idea was vastly extended by Grothendieck's idea of considering classifying spaces of categories, and in particular by Quillen's work of algebraic K-theory. In this work, which earned him a Fields Medal, Quillen developed surprisingly efficient methods for manipulating infinite simplicial sets. These methods were used in other areas on the border between algebraic geometry and topology. For instance, the André–Quillen homology of a ring is a "non-abelian homology", defined and studied in this way. Both the algebraic K-theory and the André–Quillen homology are defined using algebraic data to write down a simplicial set, and then taking the homotopy groups of this simplicial set. Simplicial methods are often useful when one wants to prove that a space is a loop space. The basic idea is that if is a group with classifying space , then is homotopy equivalent to the loop space . If itself is a group, we can iterate the procedure, and is homotopy equivalent to the double loop space . In case is an abelian group, we can actually iterate this infinitely many times, and obtain that is an infinite loop space. Even if is not an abelian group, it can happen that it has a composition which is sufficiently commutative so that one can use the above idea to prove that is an infinite loop space. In this way, one can prove that the algebraic -theory of a ring, considered as a topological space, is an infinite loop space. In recent years, simplicial sets have been used in higher category theory and derived algebraic geometry. Quasi-categories can be thought of as categories in which the composition of morphisms is defined only up to homotopy, and information about the composition of higher homotopies is also retained. Quasi-categories are defined as simplicial sets satisfying one additional condition, the weak Kan condition. See also Delta set Dendroidal set, a generalization of simplicial set Simplicial presheaf Quasi-category Kan complex Dold–Kan correspondence Simplicial homotopy Simplicial sphere Abstract simplicial complex Notes References (An elementary introduction to simplicial sets). Further reading May, J. Peter. Simplicial Objects in Algebraic Topology, University of Chicago Press 1967 Algebraic topology Homotopy theory Functors
Simplicial set
[ "Mathematics" ]
4,093
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Algebraic topology", "Basic concepts in set theory", "Families of sets", "Mathematical relations", "Simplicial sets", "Functors", "Topology", "Fields of abstract algebra", "Category theory" ]
1,028,158
https://en.wikipedia.org/wiki/Canonical%20basis
In mathematics, a canonical basis is a basis of an algebraic structure that is canonical in a sense that depends on the precise context: In a coordinate space, and more generally in a free module, it refers to the standard basis defined by the Kronecker delta. In a polynomial ring, it refers to its standard basis given by the monomials, . For finite extension fields, it means the polynomial basis. In linear algebra, it refers to a set of n linearly independent generalized eigenvectors of an n×n matrix , if the set is composed entirely of Jordan chains. In representation theory, it refers to the basis of the quantum groups introduced by Lusztig. Representation theory The canonical basis for the irreducible representations of a quantized enveloping algebra of type and also for the plus part of that algebra was introduced by Lusztig by two methods: an algebraic one (using a braid group action and PBW bases) and a topological one (using intersection cohomology). Specializing the parameter to yields a canonical basis for the irreducible representations of the corresponding simple Lie algebra, which was not known earlier. Specializing the parameter to yields something like a shadow of a basis. This shadow (but not the basis itself) for the case of irreducible representations was considered independently by Kashiwara; it is sometimes called the crystal basis. The definition of the canonical basis was extended to the Kac-Moody setting by Kashiwara (by an algebraic method) and by Lusztig (by a topological method). There is a general concept underlying these bases: Consider the ring of integral Laurent polynomials with its two subrings and the automorphism defined by . A precanonical structure on a free -module consists of A standard basis of , An interval finite partial order on , that is, is finite for all , A dualization operation, that is, a bijection of order two that is -semilinear and will be denoted by as well. If a precanonical structure is given, then one can define the submodule of . A canonical basis of the precanonical structure is then a -basis of that satisfies: and for all . One can show that there exists at most one canonical basis for each precanonical structure. A sufficient condition for existence is that the polynomials defined by satisfy and . A canonical basis induces an isomorphism from to . Hecke algebras Let be a Coxeter group. The corresponding Iwahori-Hecke algebra has the standard basis , the group is partially ordered by the Bruhat order which is interval finite and has a dualization operation defined by . This is a precanonical structure on that satisfies the sufficient condition above and the corresponding canonical basis of is the Kazhdan–Lusztig basis with being the Kazhdan–Lusztig polynomials. Linear algebra If we are given an n × n matrix and wish to find a matrix in Jordan normal form, similar to , we are interested only in sets of linearly independent generalized eigenvectors. A matrix in Jordan normal form is an "almost diagonal matrix," that is, as close to diagonal as possible. A diagonal matrix is a special case of a matrix in Jordan normal form. An ordinary eigenvector is a special case of a generalized eigenvector. Every n × n matrix possesses n linearly independent generalized eigenvectors. Generalized eigenvectors corresponding to distinct eigenvalues are linearly independent. If is an eigenvalue of of algebraic multiplicity , then will have linearly independent generalized eigenvectors corresponding to . 
For any given n × n matrix , there are infinitely many ways to pick the n linearly independent generalized eigenvectors. If they are chosen in a particularly judicious manner, we can use these vectors to show that is similar to a matrix in Jordan normal form. In particular, Definition: A set of n linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains. Thus, once we have determined that a generalized eigenvector of rank m is in a canonical basis, it follows that the m − 1 vectors that are in the Jordan chain generated by are also in the canonical basis. Computation Let be an eigenvalue of of algebraic multiplicity . First, find the ranks (matrix ranks) of the matrices . The integer is determined to be the first integer for which has rank (n being the number of rows or columns of , that is, is n × n). Now define The variable designates the number of linearly independent generalized eigenvectors of rank k'' (generalized eigenvector rank; see generalized eigenvector) corresponding to the eigenvalue that will appear in a canonical basis for . Note that Once we have determined the number of generalized eigenvectors of each rank that a canonical basis has, we can obtain the vectors explicitly (see generalized eigenvector). Example This example illustrates a canonical basis with two Jordan chains. Unfortunately, it is a little difficult to construct an interesting example of low order. The matrix has eigenvalues and with algebraic multiplicities and , but geometric multiplicities and . For we have has rank 5, has rank 4, has rank 3, has rank 2. Therefore Thus, a canonical basis for will have, corresponding to one generalized eigenvector each of ranks 4, 3, 2 and 1. For we have has rank 5, has rank 4. Therefore Thus, a canonical basis for will have, corresponding to one generalized eigenvector each of ranks 2 and 1. A canonical basis for is is the ordinary eigenvector associated with . and are generalized eigenvectors associated with . is the ordinary eigenvector associated with . is a generalized eigenvector associated with . A matrix in Jordan normal form, similar to is obtained as follows: where the matrix is a generalized modal matrix for and . See also Canonical form Change of basis Normal basis Normal form (disambiguation) Polynomial basis Notes References Linear algebra Abstract algebra Lie algebras Representation theory Quantum groups
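As a companion to the Computation section above, here is a small numerical sketch (assuming NumPy; the matrix is a made-up example, not the one from the text) of the rank bookkeeping ρ_k = rank((A − λI)^(k−1)) − rank((A − λI)^k), which counts how many generalized eigenvectors of each rank a canonical basis contains:

```python
import numpy as np

def gen_eigvec_rank_counts(A, lam, tol=1e-9):
    """For eigenvalue lam of A, return {k: number of generalized eigenvectors
    of rank k in a canonical basis}, using
    rho_k = rank((A - lam*I)^(k-1)) - rank((A - lam*I)^k)."""
    n = A.shape[0]
    M = A - lam * np.eye(n)
    counts = {}
    prev_rank = n                      # rank of M^0 = I
    k = 1
    while True:
        r = np.linalg.matrix_rank(np.linalg.matrix_power(M, k), tol=tol)
        rho = prev_rank - r
        if rho == 0:
            break
        counts[k] = rho
        prev_rank = r
        k += 1
    return counts

# Made-up example: a single 3x3 Jordan block for lam = 2 yields one
# generalized eigenvector of each rank 1, 2 and 3.
A = np.array([[2., 1., 0.],
              [0., 2., 1.],
              [0., 0., 2.]])
print(gen_eigvec_rank_counts(A, 2.0))   # {1: 1, 2: 1, 3: 1}
```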
Canonical basis
[ "Mathematics" ]
1,273
[ "Algebra", "Fields of abstract algebra", "Linear algebra", "Representation theory", "Abstract algebra" ]
1,028,202
https://en.wikipedia.org/wiki/House%20law
House laws () are rules that govern a royal family or dynasty in matters of eligibility for succession to a throne, membership in a dynasty, exercise of a regency, or entitlement to dynastic rank, titles and styles. Prevalent in European monarchies during the nineteenth century, few countries have house laws any longer, so that they are, as a category of law, of more historical than current significance. If applied today, house laws are mostly upheld by members of royal and princely families as a matter of tradition. Some dynasties have codified house laws, which then form a distinct section of the laws of the realm, e.g., Monaco, Japan, Liechtenstein and, formerly, most of Germany's principalities, as well as Austria and Russia. Other monarchies had few laws regulating royal life. In still others, whatever laws existed were not gathered in any particular section of the nation's laws. In Germany where many dynasties reigned as more or less independent sovereigns, laws governing dynastic rights constituted a distinct branch of jurisprudence called private princely law (). The house laws of the German ruling families had a direct influence on Scandinavian kingdoms including Denmark and Sweden. Dynastic traditions In some cases, house laws are rules or traditions that are treated as if they have the force of law. In the United Kingdom an example of this might be considered the custom whereby a wife shares in her husband's hereditary titles and rank. While this is settled common law with respect to the wives of peers and commoners, it is less clear when it comes to consorts of the king and princes. When, in 1923, Prince Albert, Duke of York became the first male member of the British royal family to marry a non-princess in more than 300 years (with the sovereign's approval), so an announcement was apparently issued by Buckingham Palace and carried in the London Gazette and The Times, "It is officially announced that, in accordance with the settled general rule that a wife takes the status of her husband, Lady Elizabeth Bowes-Lyon on her marriage has become Her Royal Highness the Duchess of York, with the status of a Princess". This issue was re-visited by the British government in 1937 and 2005, when the marriages of a former and a future king to divorcées cast into doubt what titulature was appropriate for women who were to become, essentially, the private wives of royal princes. As can be gleaned from discussions at the time, popular certainty that "a woman is entitled to share her husband's status", has by no means been seen as absolutely clear by government experts and lawyers upon examining the matter. In the case of the marriage of Prince Charles to Camilla Parker Bowles, in 2005, the matter was settled by the decision that Camilla, whilst legally the Princess of Wales, would only use her secondary title of Duchess of Cornwall, out of respect to public sensibilities and to her predecessor, Diana, Princess of Wales. Extraordinary law Where they have existed, dynastic house laws have often been extraordinary compared to other national laws. The house laws of the families of the Austrian and German emperors were not made public until after the fall of the monarchy in 1918. Luxembourg's grand duke has made modifications to his country's dynastic law that remain unknown to the public at present. Russia's house laws were applied—or not—at the tsar's discretion. 
Even today, the house laws of the dynasty that has exclusive right to succeed to the throne of Liechtenstein may not be amended by either the parliament or populace of the principality, and until the late 1990s the reigning Prince could not be dethroned except according to the house law—which stipulated that ouster was only possible by a vote of his own family members. Royal marriages Nearly all house laws have regulated dynasts' right to marry. Paul I of Russia established the house law of the Romanovs (the Pauline Laws), one of the strictest in Europe. The consorts of Russian dynasts had to be "equally born" (i.e., belong to a royal or ruling house) and be approved by the tsar. While some German dynasties included in their laws language requiring or urging the monarch to consent to any "equal" marriage, some heads of dynastic houses rejected royal matches on behalf of their family members. The French pretender denied his daughter, Princess Hélène d'Orléans, the opportunity to become Queen Consort of Britain by refusing her permission to convert to Anglicanism to marry Prince Albert Victor, Duke of Clarence. In the late 19th or early 20th centuries the monarchs of Belgium, Russia, and Spain all withheld consent from members of their families to marry for love into foreign dynasties: Grand Duke Cyril Vladimirovich of Russia and Infante Alfonso de Borbon-Orléans of Spain sought to marry a pair of sisters who were also British princesses, Princess Victoria Melita of Edinburgh and Princess Beatrice of Edinburgh, choosing to elope and endure (temporary) banishment rather than obey their sovereigns' commands. Evolution of dynastic law European dynasties dethroned at the end of World War I continue to enforce their house laws even though they had no legal authority to do so. Some continued doing so through the 20th century (Bourbon-Sicily, Prussia, Württemberg). Governments in extant monarchies, without calling the legal mechanisms house laws, have generally strengthened their control over the marriages of members of their royal families since the second half of the 20th century. Previously a prince could often morganatically marry a woman not deemed acceptable as a royal consort, relegating her and their children to a sub-royal status. That is rarely an option anymore. In most Western European monarchies of today, a prince must renounce or forfeit membership in the royal house if his chosen spouse is not deemed suitable, e.g., Prince Friso of Orange-Nassau. See also Salic law Imperial Household Law Hereditary monarchy References Kinship and descent Monarchy
House law
[ "Biology" ]
1,215
[ "Behavior", "Human behavior", "Kinship and descent" ]
1,028,264
https://en.wikipedia.org/wiki/Asterism%20%28gemology%29
An asterism () is a star-shaped concentration of light reflected or refracted from a gemstone. It can appear when a suitable stone is cut en cabochon (i.e. shaped and polished, not faceted). A gemstone that exhibits this effect is called a star stone or asteria. The best known is star sapphire, but many other minerals can also be asteria, usually due to impurities in the crystal structure. Archetype The archetypal asteria is the star sapphire, generally corundum with near uniform impurities which is bluish-grey and milky or opalescent, which when lit has a star of six rays. In the red instance stellate reflection is rarer; the star-ruby occasionally found with the star-sapphire in Sri Lanka is among the most valued of "fancy stones". Other examples are star-topaz (8 rays) and star diopside (4 rays); star garnets may display four-rayed or six-rayed asterisms. Description Asterism is generated by reflections of light from twin-lamellae or from extremely fine needle-shaped acicular inclusions within the stone's crystal structure. A common cause is oriented sub-microscopic crystals of rutile within the gem mineral. It occurs in rubies, sapphires, garnet, diopside, and spinel when a cabochon is cut from a suitable stone. Star sapphires and rubies display the property from titanium dioxide impurities (rutile) present in them. The star-effect or "asterism" is caused by the difference in refractive index between the host material and that of the dense inclusions of tiny fibers of rutile (also known as "silk"). Rutile causes the relative bright relief of a star in a host material such as corundum, which has a refractive index between 1.760 and 1.778, much lower than that of rutile. The stars are caused by the light reflecting from needle-like inclusions of rutile aligned perpendicularly to the rays of the star. The star-effect may be also caused by the inclusions of hematite. In black star sapphire hematite needles formed parallel to the faces of the second order prism produce asterism. Some star sapphires from Thailand contain both hematite and rutile needles forming a 12-ray star. Star-stones were formerly regarded with much superstition. Pliny the Elder's example is consistent with a moonstone; he described it as a colourless stone from India within which was the appearance of a star shining with the light of the moon. However, since rutile is present in most common star gemstones, these are almost never completely transparent. A distinction can be made between two types of asterism: Epiasterism, such as that seen in sapphire and most other gems, is the result of a reflection of light on parallel arranged inclusions inside the gemstone. Diasterism, such as that seen in rose quartz, is the result of light transmitted through the stone. In order to see this effect, the stone must be illuminated from behind. Rose quartz also exhibits epiasterism. See also Isomorphism (crystallography) Chatoyancy References D. S. Phillips, T. E. Mitchell and A. H. Heuer,"Precipitation in Star Sapphire I: Identification of the Precipitates, Phil. Mag. A, 1980, v. 42, N0. 3, pp 385–404 Gemology Optical phenomena
Asterism (gemology)
[ "Physics" ]
744
[ "Optical phenomena", "Physical phenomena" ]
1,028,265
https://en.wikipedia.org/wiki/Asterism%20%28astronomy%29
An asterism is an observed pattern or group of stars in the sky. Asterisms can be any identified pattern or group of stars, and therefore are a more general concept than the 88 formally defined constellations. Constellations are based on asterisms, but unlike asterisms, constellations outline and today completely divide the sky and all its celestial objects into regions around their central asterisms. For example, the asterism known as the Big Dipper or the Plough comprises the seven brightest stars in the constellation Ursa Major. Another asterism is the triangle, within the constellation of Capricornus. Asterisms range from simple shapes of just a few stars to more complex collections of many stars covering large portions of the sky. The stars themselves may be bright naked-eye objects or fainter, even telescopic, but they are generally all of a similar brightness to each other. The larger brighter asterisms are useful for people who are familiarizing themselves with the night sky. The patterns of stars seen in asterisms are not necessarily a product of any physical association between the stars, but are rather the result of the particular perspectives of their observations. For example the Summer Triangle is a purely observational physically unrelated group of stars, but the stars of Orion's Belt are all members of the Orion OB1 association and five of the seven stars of the Big Dipper are members of the Ursa Major Moving Group. Physical associations, such as the Hyades or Pleiades, can be asterisms in their own right and part of other asterisms at the same time. Background of asterisms and constellations In many early civilizations, it was common to associate groups of stars in connect-the-dots stick-figure patterns. Some of the earliest records are those of ancient India in the Vedanga Jyotisha and the Babylonians. Different cultures identified different constellations, although a few of the more obvious patterns tend to appear in the constellations of multiple cultures, such as those of Orion and Scorpius. As anyone could arrange and name a grouping of stars there was no distinct difference between a constellation and an asterism. For example, Pliny the Elder mentions 72 asterisms in his book Naturalis Historia. A general list containing 48 constellations likely began to develop with the astronomer Hipparchus (c. 190 – c. 120 BCE). As constellations were considered to be composed only of the stars that constituted the figure, it was always possible to use any leftover stars to create and squeeze in a new grouping among the established constellations. Exploration by Europeans to other parts of the globe exposed them to stars previously unknown to them. Two astronomers particularly known for greatly expanding the number of southern constellations were Johann Bayer (1572–1625) and Nicolas Louis de Lacaille (1713–1762). Bayer had listed twelve figures made out of stars that were too far south for Ptolemy to have seen. Lacaille created 14 new groups, mostly for the area surrounding South Celestial Pole. Many of these proposed constellations have been formally accepted, but the rest have remained as asterisms. In 1928, the International Astronomical Union (IAU) precisely divided the sky into 88 official constellations following geometric boundaries encompassing all of the stars within them. Any additional new selected groupings of stars or former constellations are often considered as asterisms. 
However, technical distinctions between the terms 'constellation' and 'asterism' often remain somewhat ambiguous. Asterisms consisting of first-magnitude stars Some asterisms consist completely of bright first-magnitude stars, which mark out simple geometric shapes. The Summer Triangle of Deneb, Altair, and Vega – α Cygni, α Aquilae, and α Lyrae – is prominent in the northern hemisphere summer skies, as its three stars are all of the 1st magnitude. The stars of the Triangle are in the band of the Milky Way which marks the galactic equator, and are in the direction of the Galactic Center. The Winter Triangle is visible in the northern sky's winter and comprises the first magnitude stars Betelgeuse, Sirius and Procyon (the second and fourth closest star or star system visible without aid). The larger northern Winter Hexagon includes seven of the twenty-two first-magnitude stars visible in the sky, with Pollux, Capella, Aldebaran, Rigel, Sirius and Procyon, and with the 2nd-magnitude Castor on the periphery, and Betelgeuse off-center. Adding Betelgeuse then it is known as the Heavenly 'G'''. It encircles the galactic anticenter, as well as incorporates constellations such as Gemini and Orion. It also includes in the background of Aldebaran the Hyades, the nearest star cluster and one of five first-magnitude deep-sky objects, two of which can be seen just north-east of the Hyades, the Pleiades also in the Taurus constellation and the Alpha Persei Cluster (with Alcyone and Mirfak as the brightest stars). The northern Spring Triangle consists of Arcturus, Regulus and Spica. The Great Diamond consisting of Arcturus, Spica, Denebola and Cor Caroli, the latter two not being first-magnitude stars. An east-west line from Arcturus to Denebola forms an equilateral triangle with Cor Caroli to the North, and another with Spica to the South. Together these two triangles form the Diamond. Formally, the stars of the Diamond are in the constellations Boötes, Virgo, Leo, and Canes Venatici. Other asterisms consist partially of multiple first-magnitude stars. The Southern Cross including the first-magnitude stars Acrux and Mimosa, west of the Carina Nebula (one of five first-magnitude deep-sky objects), and with the first-magnitude stars Alpha Centauri (the closest star to the Sun) and Beta Centauri pointing at the cross, distinguishing the cross from less bright and similar asterisms like the Diamond Cross or False Cross. All other first-magnitude stars are the only such stars in their asterisms or constellations, with Canopus in the Argo Navis asterism south of Sirius, visually east of the Carina Nebula and near the Large Magellanic Cloud (both being first-magnitude deep-sky objects), Achernar in the Eridanus constellation east of Canopus, Fomalhaut in the Southern Fish constellation east of Achernar and Antares in the Scorpius constellation visually near the Galactic Center. Constellation-based asterisms The Big Dipper, also known as The Plough or Charles's Wain, is composed of the seven brightest stars in Ursa Major. These stars delineate the Bear's hindquarters and exaggerated tail, or alternatively, the "handle" forming the upper outline of the bear's head and neck. With its longer tail, Ursa Minor hardly appears bearlike at all, and is widely known by its pseudonym, the Little Dipper. The Northern Cross in Cygnus. The upright runs from Deneb (α Cyg) in the Swan's tail to Albireo (β Cyg) in the beak. The transverse runs from ε Cygni in one wing to δ Cygni in the other. 
The Southern Cross is an asterism by name, but the whole area is now recognised as the constellation Crux. The main stars are Alpha, Beta, Gamma, Delta, and arguably also Epsilon Crucis. Earlier, Crux was deemed an asterism when Bayer created it in Uranometria (1603) from the stars in the hind legs of Centaurus, decreasing the size of Centaur. These same stars were probably identified by Pliny the Elder in his Naturalis Historia as the asterism 'Thronos Caesaris.' The Fish Hook is the traditional Hawaiian name for Scorpius. The image will be even more obvious if the chart's lines from Antares (α Sco) to Beta Scorpii (β Sco) and Pi Scorpii (π Sco) are replaced with a line from Beta through Delta Scorpii (δ Sco) to Pi forming a large capped "J." Adding vertical lines to connect the limbs at the left and right in the main diagram of Hercules will complete the figure of the Butterfly. Boötes is sometimes known as the Ice Cream Cone. It is also known as the Kite. The stars of Cassiopeia form a W which is often used as a nickname. The Great Square of Pegasus is the quadrilateral formed by the stars Markab, Scheat, Algenib, and Alpheratz, representing the body of the winged horse. The asterism was recognized as the constellation ASH.IKU "The Field" on the MUL.APIN cuneiform tablets from about 1100 to 700 BC. Alpheratz is now only considered a part of the constellation Andromeda whereas formerly the star was a part of both constellations. The Bowl of Virgo is formed by the stars Beta, Gamma, Delta, Epsilon and Eta Virginis. Together with Spica, they form a Y shape. The Three Leaps of the Gazelle consists of three pairs of stars in Ursa Major aligned in a row spanning about 30 degrees. In Arabic lore, the star pairs are pictured as the hoof prints of a gazelle startled from a pond by Leo the lion. (The "pond" is pictured as the Coma Star Cluster.) The first pair of stars are Xi and Nu, second pair Upsilon and Lambda, third pair Kappa and Iota Ursa Majoris. The pairs also mark three of the bear's paws. Some asterisms refer to portions of traditional constellation figures. These include: The Water Jar or Urn of Aquarius is a Y-shaped figure centered upon Zeta Aquarii and includes Gamma, Eta and Pi. It pours water in a stream of more than 20 stars terminating with the star Fomalhaut. The Crab Breast of Cancer is a quadrilateral formed by the four stars Gamma, Delta, Eta and Theta Cancri which make up the carapace (inner shell) of the Crab. Contained within is the Beehive Cluster (Messier 44) which includes Epsilon Cancri. The Snake Head is the westernmost portion of Hydra consisting of the stars Delta, Epsilon, Zeta, Eta, Rho and Sigma Hydrae. Orion's Belt consists of the three bright stars Zeta (Alnitak), Epsilon (Alnilam) and Delta Orionis (Mintaka) which form the belt of Orion. The Bull's Face of Taurus is a V-shaped figure formed by prominent members of the Hyades cluster, including stars Gamma, Delta¹, Delta², Delta³, Epsilon, Theta Tauri, as well as the bright star Alpha Tauri (Aldebaran) which forms the red eye of the Bull. Other particular asterisms Other asterisms are also composed of stars from one constellation, but do not refer to the traditional figures. Four stars (Beta, Upsilon, Theta, and Omega Carinae) form a well-shaped diamond – the Diamond Cross. The Saucepan or Pot, being the same stars as the Belt and Sword of Orion. The end of the handle is at ι Orionis, with the far rim at η Orionis. 
The four central stars in Hercules, Epsilon (ε Her), Zeta (ζ Her), Eta (η Her), and Pi (π Her), form the Keystone. The bright globular cluster Messier 13 lies along the western segment, between Zeta and Eta. The curve of stars at the front end of the Lion from Epsilon (ε Leo) to Regulus (α Leo), looking much like a mirror-image question mark, has long been known as the Sickle. The brighter stars of Sagittarius form the Teapot. (The Large Sagittarius Star Cloud appears to be steam emerging from the "spout".) Northeast of the Teapot asterism lies the fainter Teaspoon, consisting of the stars ξ¹, ξ², ο, π, ρ¹ and ρ² Sagitarii. Four bright stars in Delphinus (Sualocin or α Delphini, Rotanev or β Delphini, γ Delphini and δ Delphini) form Job's Coffin. The Terebellum is a small quadrilateral of four faint stars (Omega, 59, 60, 62) in Sagittarius' hindquarters. Just south of Pegasus, the western fish of Pisces is home to the Circlet formed from Gamma (γ Piscium), Kappa (κ Piscium), Lambda (λ Piscium), TX Piscium, Iota (ι Piscium), and Theta (θ Piscium). Dubhe and Merak (Alpha and Beta Ursae Majoris), the two stars at the end of the bowl of the Big Dipper are often called the Pointers: a line from β to α and continued for about five times the distance between them arrives at the North Celestial Pole and the star Polaris (α UMi/Alpha Ursae Minoris), the North Star. Rigil Kentaurus (α Centauri) and Hadar (β Centauri) are the Southern Pointers leading to the Southern Cross and thus helping to distinguish Crux from the False Cross. Asterisms across multiple constellations Other asterisms that are formed from stars in more than one constellation. The Egyptian X is a large asterism which, like the Diamond of Virgo, is composed of a pair of equilateral triangles. Sirius (α CMa), Procyon (α CMi), and Betelgeuse (α Ori) form one to the North (Winter Triangle) while Sirius, Naos (ζ Pup), and Phakt (α Col) form another to the South. Unlike the Diamond, however, these triangles meet, not base-to-base, but vertex-to-vertex. The name derives from both the shape and, because the stars straddle the Celestial Equator, it is more easily seen from south of the Mediterranean than in Europe. The Lozenge is a small diamond formed from three stars – Eltanin, Grumium, and Rastaban (Gamma, Xi, and Beta Draconis) – in the head of Draco and one – Iota Herculis – in the foot of Hercules. The diamond-shaped False Cross is composed of the four stars Alsephina (δ Velorum), Markeb (κ Velorum), Avior (ε Carinae), and Aspidiske (ι Carinae). Although its component stars are not quite as bright as those of the Southern Cross, it is somewhat larger and better shaped than the Southern Cross, for which it is sometimes mistaken, causing errors in astronavigation. Like the Southern Cross, three of its main four stars are whitish and one orange. The Northern Y is formed by four prominent stars, Arcturus (α Boötis), Seginus (γ Boötis), Alphecca (α Coronae Borealis), and centered on Izar (ε Boötis). From the United Kingdom in particular, where there is serious light pollution in many areas and also twilight much of the night when these constellations appear, this "Y" is often visible while other stars of Boötes and Corona Borealis are not. The Lightning Bolt, aligned north to south, consists of the stars Epsilon Pegasi, Alpha Aquarii, Beta Aquarii and Delta Capricorni. Easily visible to naked eyes even in light polluted skies, the asterism is useful for orienting among three constellations. 
The Serpent Bowl is a large curved asterism spanning 3.5 hours of right ascension, from mid-northern latitudes best seen in July and August evenings. From west to east, it includes the stars Delta, Alpha and Epsilon Serpentis, Delta, Epsilon, Upsilon, Zeta and Eta Ophiuchi, Xi Serpentis, Nu and Tau Ophiuchi, Eta and Theta Serpentis. The Eagle Tail Corona is a flattened curved figure in the tail of Aquila and extending into Scutum. It consists of the stars 14, 15, Lambda and 12 Aquilae, Eta Scuti, HD 174208, R and Beta Scuti. The compact open cluster Messier 11 is also aligned with the curve. Telescopic asterisms Asterisms range from the large and obvious to the small, and even telescopic. The 37 or LE of NGC 2169, in Orion. The Engagement Ring in Ursa Minor has the north star Polaris as the diamond, at one end of a ring of much fainter stars about one degree across. The Broken Engagement Ring in Ursa Major at 10:51 / +56°10' (preceding β Ursae Majoris, Merak). The Christmas Tree shape of the Christmas Tree Cluster, in Monoceros. It is made up of about approximately 40 stars. The Coathanger, in Vulpecula, also known as Brocchi's Cluster (see image at top). Kemble's Cascade, a chain of stars that ends in open cluster NGC 1502, in Camelopardalis. Napoleon's Hat (Picot 1), in Bootes (south of α Bootis, Arcturus). The Ring of the Nibelungen (Ferrero 27) in Draco, named after the 1857 German epic drama, at 15:57 / +62°32' (near galaxy NGC 6015). The V-shaped Messier 73 in Aquarius, determined to be an asterism in 2002. See also Australian Aboriginal astronomy Chinese constellation Nakshatra References Bibliography Allen, Richard Hinckley (1969). Star Names: Their Lore and Meaning. Dover Publications Inc. (Reprint of 1899 original). . Burnham, Robert (1978). Burnham's Celestial Handbook (3 vols). Dover Publications Inc. . Pasachoff, Jay M. (2000). A Field Guide to the Stars and Planets (4th ed.).'' Houghton Mifflin Co. External links List of Asterisms from deep-sky.co.uk Naked-Eye Asterisms from Milwaukee Astronomical Society List of Asterisms from deepsky.waarnemen.com List of Asterisms from nightskyatlas.com List of Asterisms from saguaroastro.org List of Asterisms from waynesthisandthat.com Stellar groupings + Former constellations
Asterism (astronomy)
[ "Astronomy" ]
3,929
[ "Constellations", "Former constellations", "Sky regions", "Asterisms (astronomy)" ]
1,028,268
https://en.wikipedia.org/wiki/Asterism%20%28typography%29
In typography, an asterism, ⁂, is a typographic symbol consisting of three asterisks placed in a triangle, used for a variety of purposes. The name originates from the astronomical term for a group of stars. The asterism was originally used in typography as a type of dinkus, though this use has become increasingly rare. It can also be used to mean "untitled", or to indicate that the author or title has been withheld, as seen, for example, in some editions of Album for the Young by composer Robert Schumann (№ 21, 26, and 30). In meteorology, an asterism in a station model indicates moderate snowfall. Dinkus A dinkus is a typographical device used to divide text, such as at section breaks. Its purpose is to "indicate minor breaks in text", to call attention to a passage, or to separate sub-chapters in a book. An asterism used this way is thus a type of dinkus, although this usage of the symbol is nowadays nearly obsolete. More commonly used dinkuses are three dots or three asterisks in a horizontal row. A small black-and-white drawing or a fleuron (❧) may be used for the same purpose. Otherwise, an extra space between paragraphs is used. A dinkus may be used in conjunction with the extra space to mark a smaller subdivision than a sub-chapter. See also Dingbat Ellipsis (three dots in mid-sentence) Signature mark References Typographical symbols Punctuation
Asterism (typography)
[ "Mathematics" ]
314
[ "Symbols", "Typographical symbols" ]
1,028,291
https://en.wikipedia.org/wiki/Krystyna%20Kuperberg
Krystyna M. Kuperberg (born Krystyna M. Trybulec; 17 July 1944) is a Polish-American mathematician who currently works as a professor of mathematics at Auburn University, where she was formerly an Alumni Professor of Mathematics. Early life and family Her parents, Jan W. and Barbara H. Trybulec, were pharmacists and owned a pharmacy in Tarnów. Her older brother is Andrzej Trybulec. Her husband Włodzimierz Kuperberg and her son Greg Kuperberg are also mathematicians, while her daughter Anna Kuperberg is a photographer. Education and career After attending high school in Gdańsk, she entered the University of Warsaw in 1962, where she studied mathematics. Her first mathematics course was taught by Andrzej Mostowski; later she attended topology lectures of Karol Borsuk and became fascinated by topology. After obtaining her undergraduate degree, Kuperberg began graduate studies at Warsaw under Borsuk, but stopped after earning a master's degree. She left Poland in 1969 with her young family to live in Sweden, then moved to the United States in 1972. She finished her Ph.D. in 1974, from Rice University, under the supervision of William Jaco. In the same year, both she and her husband were appointed to the faculty of Auburn University. From 1996 to 1998, Kuperberg served as an American Mathematical Society Council member at large. Contributions In 1987 she solved a problem of Bronisław Knaster concerning bi-homogeneity of continua. In the 1980s she became interested in fixed points and topological aspects of dynamical systems. In 1989 Kuperberg and Coke Reed solved a problem posed by Stanislaw Ulam in the Scottish Book. The solution to that problem led to her 1993 work in which she constructed a smooth counterexample to the Seifert conjecture. She has since continued to work in dynamical systems. Recognition In 1995 Kuperberg received the Alfred Jurzykowski Prize from the Kościuszko Foundation. Her major lectures include an American Mathematical Society Plenary Lecture in March 1995, a Mathematical Association of America Plenary Lecture in January 1996, and an International Congress of Mathematicians invited talk in 1998. In 2012 she became a fellow of the American Mathematical Society. Selected publications References Polish emigrants to the United States Polish women mathematicians 20th-century American mathematicians 21st-century American mathematicians 20th-century Polish mathematicians 21st-century Polish mathematicians Topologists 1944 births Living people People from Tarnów University of Warsaw alumni Rice University alumni Auburn University faculty Fellows of the American Mathematical Society Dynamical systems theorists American systems scientists 20th-century American women mathematicians 21st-century American women mathematicians 20th-century Polish women
Krystyna Kuperberg
[ "Mathematics" ]
556
[ "Topologists", "Dynamical systems theorists", "Topology", "Dynamical systems" ]
1,028,296
https://en.wikipedia.org/wiki/Tractor%20configuration
In aviation, a tractor configuration is a propeller-driven fixed-wing aircraft with its engine mounted with the propeller in front, so that the aircraft is "pulled" through the air. This is the usual configuration; the pusher configuration places the airscrew behind, and "pushes" the aircraft forward. Through common usage, the word "propeller" has come to mean any airscrew, whether it pulls or pushes the aircraft. In the early years of powered aviation both tractor and pusher designs were common. However, by the midpoint of the First World War, interest in pushers declined and the tractor configuration dominated. Today, propeller-driven aircraft are assumed to be tractors unless stated otherwise. Origins The first successful airplanes to have a "tractor" configuration were the 1907 Santos-Dumont Demoiselle and Blériot VII. The first biplane airplane to have a "tractor" configuration was the Goupy No.2 (first flight on 11 March 1909) designed by Mario Calderara and financed by Ambroise Goupy at the French firm Blériot Aéronautique. It was the fastest airplane when it was made. At that time a distinction was made between a propeller ("pushes the machine", akin to a ship's propeller) and a tractor-[air]screw ("pulls the machine through the air"). The Royal Flying Corps called the tractors "Bleriot type" after Louis Bleriot, and pushers "Farman type". Firing guns through the propeller A disadvantage of a single-engine tractor military aircraft was that it was initially impossible to fire a gun through the propeller arc without striking the blades. Early solutions included mounting guns (rifles or machine guns) to fire around the propeller arc, either at an angle to the side – which made aiming difficult – or on the top wing of a biplane so that the bullets passed over the propeller arc. The first system to fire through the propeller was developed by French engineer Eugene Gilbert for Morane-Saulnier, and involved fitting strong metal "deflector wedges" to the propeller blades of a Morane-Saulnier L monoplane, so that bullets fired when a propeller blade obstructed the line of fire were deflected rather than damaging the propeller. It was employed with immediate success by French aviator Roland Garros and was also used on at least one Sopwith Tabloid of the Royal Naval Air Service. A better solution was a gun synchronizer, which utilized a synchronization gear to shoot only at instants when the line of fire was unobstructed, developed by aircraft pioneer Anthony Fokker and fitted to the Fokker E.I monoplane in 1915. The first British "tractor" designed to be fitted with synchronization gear was the Sopwith 1½ Strutter. which entered service in early 1916. The problem of firing through the propeller's arc was avoided by passing the gun barrel through the propeller's hub or spinner  – first used in production military aircraft with the 1917 French SPAD S.XII – or mounting guns in the wings, as was used from the early 1930s until propeller engines were superseded in the jet age. References See also Pusher configuration Push-pull configuration Aircraft configurations
Tractor configuration
[ "Engineering" ]
667
[ "Aircraft configurations", "Aerospace engineering" ]
1,028,314
https://en.wikipedia.org/wiki/Penman%20equation
The Penman equation describes evaporation (E) from an open water surface, and was developed by Howard Penman in 1948. Penman's equation requires daily mean temperature, wind speed, air pressure, and solar radiation to predict E. Simpler hydrometeorological equations continue to be used where obtaining such data is impractical, to give comparable results within specific contexts, e.g. humid vs arid climates. Details Numerous variations of the Penman equation are used to estimate evaporation from water and land. Specifically, the Penman–Monteith equation refines weather-based potential evapotranspiration (PET) estimates of vegetated land areas. It is widely regarded as one of the most accurate models in terms of estimates. The original equation was developed by Howard Penman at the Rothamsted Experimental Station, Harpenden, UK. The equation for evaporation given by Penman is
E_{\text{mass}} = \frac{m R_n + \rho_a c_p\, \delta e\, g_a}{\lambda_v \,(m + \gamma)}
where: m = Slope of the saturation vapor pressure curve (Pa K−1) Rn = Net irradiance (W m−2) ρa = density of air (kg m−3) cp = heat capacity of air (J kg−1 K−1) δe = vapor pressure deficit (Pa) ga = momentum surface aerodynamic conductance (m s−1) λv = latent heat of vaporization (J kg−1) γ = psychrometric constant (Pa K−1) which (if the SI units in parentheses are used) will give the evaporation Emass in units of kg/(m2·s), kilograms of water evaporated every second for each square meter of area. Omitting the division by λv gives the latent heat flux directly (W m−2), which makes clear that the expression is fundamentally an energy balance. Replacing λv with Lv = λvρwater gives the familiar precipitation units ETvol, with units of m/s, or more commonly mm/day, because a volume flux of m3/s per m2 is m/s. This equation assumes a daily time step so that net heat exchange with the ground is insignificant, and a unit area surrounded by similar open water or vegetation so that net heat and vapor exchange with the surrounding area cancels out. Rn is sometimes replaced by a total net available energy term A when a situation warrants accounting for additional heat fluxes. Temperature, wind speed, and relative humidity impact the values of m, g, cp, ρ, and δe. Shuttleworth (1993) In 1993, W. Jim Shuttleworth modified and adapted the Penman equation to use SI units, which made calculating evaporation simpler. The resultant equation is
E_{\text{mass}} = \frac{m R_n + \gamma \cdot 6.43\,(1 + 0.536\, U_2)\, \delta e}{\lambda_v\, (m + \gamma)}
where: Emass = Evaporation rate (mm day−1) m = Slope of the saturation vapor pressure curve (kPa K−1) Rn = Net irradiance (MJ m−2 day−1) γ = psychrometric constant, approximately 0.066 (kPa K−1) U2 = wind speed (m s−1) δe = vapor pressure deficit (kPa) λv = latent heat of vaporization (MJ kg−1) Some useful relationships δe = (es − ea) = (1 – relative humidity) es es = saturated vapor pressure of air, as is found inside plant stomata. ea = vapor pressure of free-flowing air. es, mmHg = exp(21.07 − 5336/Ta), approximation by Merva, 1975. Therefore m = d(es)/dTa = (5336/Ta2) exp(21.07 − 5336/Ta), mmHg/K. Ta = air temperature in kelvins See also Pan evaporation Evapotranspiration Thornthwaite model Blaney–Criddle equation Penman–Monteith equation Notes References Jarvis, P.G. (1976) The interpretation of the variations in leaf water potential and stomatal conductance found in canopies in the field. Phil. Trans. R. Soc. Lond. B. 273, 593–610. Neitsch, S.L.; J.G. Arnold; J.R. Kliniry; J.R. Wolliams. 2005. Soil and Water Assessment Tool Theoretical Document; Version 2005. Grassland, Soil and Water Research Laboratory; Agricultural Research Service. and Blackland Research Center; Texas Agricultural Experiment Station. Temple, Texas. 
https://web.archive.org/web/20090116193356/http://www.brc.tamus.edu/swat/downloads/doc/swat2005/SWAT%202005%20theory%20final.pdf Penman, H.L. (1948): Natural evaporation from open water, bare soil and grass. Proc. Roy. Soc. London A(194), S. 120–145. Agronomy Equations Hydrology
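A minimal numerical sketch of the Shuttleworth (1993) form given above (plain Python; the function name and the sample inputs are illustrative only, not measured data):

```python
def evaporation_shuttleworth(m, Rn, gamma, u2, delta_e, lambda_v=2.45):
    """Open-water evaporation (mm/day) via the Shuttleworth (1993) form of
    the Penman equation.
      m        slope of the saturation vapour pressure curve (kPa/K)
      Rn       net irradiance (MJ m^-2 day^-1)
      gamma    psychrometric constant (kPa/K)
      u2       wind speed at 2 m height (m/s)
      delta_e  vapour pressure deficit (kPa)
      lambda_v latent heat of vaporization (MJ/kg), about 2.45 near 20 degC
    """
    num = m * Rn + gamma * 6.43 * (1.0 + 0.536 * u2) * delta_e
    return num / (lambda_v * (m + gamma))

# Illustrative inputs: a warm, breezy day -> about 5.9 mm/day.
print(round(evaporation_shuttleworth(m=0.145, Rn=15.0, gamma=0.066,
                                     u2=2.0, delta_e=1.0), 2), "mm/day")
```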
Penman equation
[ "Chemistry", "Mathematics", "Engineering", "Environmental_science" ]
973
[ "Hydrology", "Mathematical objects", "Equations", "Environmental engineering" ]
1,028,345
https://en.wikipedia.org/wiki/Stanis%C5%82aw%20Zaremba%20%28mathematician%29
Stanisław Zaremba (3 October 1863 – 23 November 1942) was a Polish mathematician and engineer. His research in partial differential equations, applied mathematics and classical analysis, particularly on harmonic functions, gained him wide recognition. He was one of the mathematicians who contributed to the success of the Polish School of Mathematics through his teaching and organizational skills as well as through his research. Apart from his research work, Zaremba wrote many university textbooks and monographs. He was a professor at the Jagiellonian University (from 1900), a member of the Academy of Learning (from 1903), co-founder and president of the Polish Mathematical Society (1919), and the first editor of the Annales de la Société Polonaise de Mathématique. He should not be confused with his son Stanisław Krystyn Zaremba, also a mathematician.

Biography

Zaremba was born on 3 October 1863 in Romanówka, in present-day Ukraine. The son of an engineer, he was educated at a grammar school in Saint Petersburg and studied at the Institute of Technology in the same city, obtaining his diploma in engineering in 1886. That same year he left Saint Petersburg for Paris to study mathematics; he received his degree from the Sorbonne in 1889. He stayed in France until 1900, when he joined the faculty of the Jagiellonian University in Kraków. His years in France enabled him to establish a strong bridge between Polish mathematicians and those in France. He died on 23 November 1942 in Kraków, during the German occupation of Poland.

Work

Research activity

Selected publications

See also
Kraków School of Mathematics
Mixed boundary condition

Notes

References

External links

19th-century Polish mathematicians 20th-century Polish mathematicians Corresponding Members of the Russian Academy of Sciences (1917–1925) Corresponding Members of the USSR Academy of Sciences Members of the Lwów Scientific Society Polish engineers Mathematical analysts University of Paris alumni Academic staff of Jagiellonian University 1863 births 1942 deaths Mathematicians from Austria-Hungary
Stanisław Zaremba (mathematician)
[ "Mathematics" ]
402
[ "Mathematical analysis", "Mathematical analysts" ]
1,028,348
https://en.wikipedia.org/wiki/Eschar
An eschar is a slough or piece of dead tissue that is cast off from the surface of the skin, particularly after a burn injury, but also seen in gangrene, ulcers, fungal infections, necrotizing spider bite wounds, tick bites associated with spotted fevers, and exposure to cutaneous anthrax. The term ‘eschar’ is not interchangeable with ‘scab’: an eschar contains necrotic tissue, whereas a scab is composed of dried blood and exudate.

Black eschars are most frequently attributed in medicine to cutaneous anthrax (infection by Bacillus anthracis), which may be contracted through herd animal exposure and also from Pasteurella multocida exposure in cats and rabbits. A newly identified human rickettsial infection, R. parkeri rickettsiosis, can be differentiated from Rocky Mountain spotted fever by the presence of an eschar at the site of inoculation. Eschar is sometimes called a black wound because the wound is covered with thick, dry, black necrotic tissue.

Eschar may be allowed to slough off naturally, or it may require surgical removal (debridement) to prevent infection, especially in immunocompromised patients (e.g. if a skin graft is to be performed). If eschar is on a limb, it is important to assess peripheral pulses of the affected limb to make sure blood and lymphatic circulation is not compromised. If circulation is compromised, an escharotomy, or surgical incision through the eschar, may be indicated.

Escharotic

An escharotic is a substance that kills unwanted or diseased tissue, usually skin or superficial growths such as warts, leaving it to slough off. Examples include:
inorganic reagents, such as strong acids and alkalis, or cytotoxic salts of heavy metals, for example zinc or silver
organic compounds such as sanguinarine, salicylic acid, and certain medicines such as imiquimod
irritant or corrosive fluids from plants, such as latex or resins from various species of Ficus, Euphorbia, Carica, or Taraxacum
refrigerants, which kill the tissue by freezing; examples include liquid nitrogen, solid carbon dioxide, and its solution in ether

Escharotics have long been used in medicine. In conventional modern practice some are still useful for topical treatment of growths such as warts. For lack of anything better, escharotics were once more widely used; popular products included the so-called black salves, with ingredients such as zinc chloride plus sanguinarine in the form of bloodroot extract. These and others were traditional topical treatments for localised skin cancers in herbal medicine. They combined unreliability in eradicating the cancer with harmful effects such as scarring, serious injury, and disfigurement. Consequently, escharotic salves are now strictly regulated in most Western countries and are available on prescription only. Some prosecutions have been pursued over unlicensed sales of escharotic products such as Cansema.

See also
Wound healing

References

External links
Medical Separation of the Eschar

Cutaneous conditions Necrosis
Eschar
[ "Biology" ]
696
[ "Cellular processes", "Necrosis" ]
1,028,388
https://en.wikipedia.org/wiki/Nitrosamine
Nitrosamines (or more formally N-nitrosamines) are organic compounds produced by industrial processes. The chemical structure is R2N–N=O, where R is usually an alkyl group. They feature a nitroso group (–N=O) bonded to a deprotonated amine, and many are regarded as "probable human carcinogens". Most nitrosamines are carcinogenic in animals. A 2006 systematic review supports a "positive association between nitrite and nitrosamine intake and gastric cancer, between meat and processed meat intake and gastric cancer and oesophageal cancer, and between preserved fish, vegetable and smoked food intake and gastric cancer, but is not conclusive".

Chemistry

The organic chemistry of nitrosamines is well developed with regard to their syntheses, their structures, and their reactions. They are usually produced by the reaction of nitrous acid (HNO2) and secondary amines, although other nitrosyl sources (e.g. RONO) have the same effect:

R2NH + HNO2 → R2N–NO + H2O

The nitrous acid usually arises from protonation of a nitrite. This synthesis method is relevant to the generation of nitrosamines under some biological conditions. The nitrosation is also reversible, particularly in acidic solutions of nucleophiles. Aryl nitrosamines rearrange to give a para-nitroso aryl amine in the Fischer–Hepp rearrangement.

With regard to structure, the core of nitrosamines is planar, as established by X-ray crystallography. The N–N and N–O distances are 132 and 126 pm, respectively, in dimethylnitrosamine, one of the simplest members of this large class of N-nitrosamines.

Nitrosamines are not directly carcinogenic. Metabolic activation is required to convert them to the alkylating agents that modify bases in DNA, inducing mutations. The specific alkylating agents vary with the nitrosamine, but all are proposed to feature alkyldiazonium centers.

History and occurrence

In 1956, two British scientists, John Barnes and Peter Magee, reported that a simple member of the large class of N-nitrosamines, dimethylnitrosamine, produced liver tumours in rats. Subsequent studies showed that approximately 90% of the 300 nitrosamines tested were carcinogenic in a wide variety of animals.

Tobacco exposure

A common way ordinary consumers are exposed to nitrosamines is through tobacco use and cigarette smoke. Tobacco-specific nitrosamines can also be found in American dip snuff, chewing tobacco and, to a much lesser degree, snus (127.9 ppm in American dip snuff compared with 2.8 ppm in Swedish snuff, or snus).

Dietary exposure

Medication impurities

There have been recalls of various medications due to the presence of nitrosamine impurities, including angiotensin II receptor blockers, ranitidine, valsartan, duloxetine, and others. The US Food and Drug Administration published guidance on the control of nitrosamine impurities in medicines. Health Canada published guidance on nitrosamine impurities in medications and a list of established acceptable intake limits for nitrosamine impurities in medications.

Examples

See also
Hydrazines derived from these nitrosamines, e.g. UDMH, are also carcinogenic.
Possible health hazards of pickled vegetables
Tobacco-specific nitrosamines

Additional reading

References

External links
Oregon State University, Linus Pauling Institute article on Nitrosamines and cancer, including info on history of meat laws
Risk factors in Pancreatic Cancer

Nitrogen cycle Functional groups Garde manger Carcinogens IARC Group 1 carcinogens
Nitrosamine
[ "Chemistry", "Biology", "Environmental_science" ]
785
[ "Digestive system", "Toxicology", "Functional groups", "Organ systems", "Nitrogen cycle", "Carcinogens", "Metabolism" ]