https://en.wikipedia.org/wiki/Strongly%20monotone%20operator
In functional analysis, a set-valued mapping T on a real Hilbert space X is said to be strongly monotone if there exists a constant c > 0 such that ⟨u − v, x − y⟩ ≥ c‖x − y‖² for all x, y ∈ X and all u ∈ T(x), v ∈ T(y). This is analogous to the notion of strictly increasing for scalar-valued functions of one scalar argument. See also Monotonic function
https://en.wikipedia.org/wiki/Guy%20Coburn%20Robson
Guy Coburn Robson (1888–1945) was a British zoologist, specializing in Mollusca, who first named and described Mesonychoteuthis hamiltoni, the colossal squid. Robson studied at the marine biological station in Naples, and joined the staff of the Natural History Museum in 1911, becoming Deputy Keeper of the Zoology Department from 1931 to 1936. Evolution Robson is best known for his major book The Variations of Animals in Nature (co-authored with O. W. Richards, 1936) which argued that although the fact of evolution is well established, the mechanisms are largely hypothetical and undemonstrated. The book claims that most differences among animal populations and related species are non-adaptive. It was published before major developments in the modern synthesis and contains critical evaluation of natural selection. It was positively reviewed in science journals in the 1930s. Zoologist Mark Ridley has noted that "Robson and Richards suggested that the differences between species are non-adaptive and have nothing to do with natural selection." Historian Will Provine has commented that the book "has been in disrepute since the late 1940s because of its antagonism to natural selection" but notes that it was the "best known general work on animal taxonomy" before the work of Julian Huxley and Ernst Mayr. Huxley in Evolution: The Modern Synthesis (1942), described the book as "an undue belittling of the role of selection in evolution." Eponymy The following marine species have been named after Guy Robson to honour his contribution to science: Abralia robsoni Grimpe, 1931 genus Robsonella W. Adam, 1938 Onykia robsoni (Adam, 1962) Amphioctopus robsoni (Adam, 1941) Opisthoteuthis robsoni O'Shea, 1999 Uroteuthis (Photololigo) robsoni (Alexeyev, 1992) Digitosepia robsoni (Massy, 1927) Publications Guide to the Mollusca exhibited in the Zoological Department, British Museum (1923) The Species Problem (1926) A Monograph of the Recent Cephalopoda. Based on the colle
https://en.wikipedia.org/wiki/Link%20Layer%20Topology%20Discovery
Link Layer Topology Discovery (LLTD) is a proprietary link layer protocol for network topology discovery and quality of service diagnostics. Microsoft developed it as part of the Windows Rally set of technologies. The LLTD protocol operates over both wired (such as Ethernet (IEEE 802.3) or power line communication) and wireless networks (such as IEEE 802.11). LLTD is included in Windows 7, Windows Vista and Windows 10. It is used by their Network Map feature to display a graphical representation of the local area network (LAN) or wireless LAN (WLAN) to which the computer is connected. Windows XP does not contain the LLTD protocol as a standard component, and as a result Windows XP computers do not appear on the Network Map unless the LLTD responder is installed on them. LLTD is available for download for 32-bit editions of Windows XP with Service Pack 2 (as a publicly released update) and for Windows XP with Service Pack 3 (as a hotfix by request). The LLTD Responder was not released for Windows XP Professional x64 Edition. A fall 2006 update for the Xbox 360 enabled support for the LLTD protocol. Being a link layer (OSI Layer 2) implementation, LLTD operates strictly on a given local network segment. It cannot discover devices across routers, an operation which would require Internet Protocol level routing. Link Layer Topology Discovery in Windows Vista consists of two components. The LLTD Mapper I/O component is the master module which controls the discovery process and generates the Network Map. Appropriate permissions for this may be configured with Group Policy settings; it can be allowed or disallowed for domain, private, and public networks. The Mapper sends discovery command packets onto the local network segment via a raw network interface socket. The second component of LLTD is the LLTD Responder, which answers Mapper requests about its host and possibly other discovered network information. In addition to illustrating the
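Because the Mapper/Responder exchange rides directly on Ethernet frames, it can be observed with a raw layer-2 socket. Below is a minimal, hypothetical Python sketch (Linux-only, requires root) that listens for frames carrying the EtherType commonly documented for LLTD, 0x88D9; the EtherType constant and the interface name are assumptions, not taken from the article.

```python
# Minimal sketch: observe LLTD frames on a Linux host (run as root).
# Assumes the LLTD EtherType 0x88D9 and an interface named "eth0".
import socket

ETH_P_LLTD = 0x88D9          # EtherType commonly associated with LLTD (assumption)
IFACE = "eth0"               # hypothetical interface name

# AF_PACKET raw sockets deliver whole layer-2 frames (Linux-specific).
sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_LLTD))
sock.bind((IFACE, 0))

while True:
    frame, _ = sock.recvfrom(65535)
    dst, src = frame[0:6], frame[6:12]
    # Everything after the 14-byte Ethernet header is the LLTD demultiplex header.
    print(f"LLTD frame from {src.hex(':')} to {dst.hex(':')}, {len(frame)} bytes")
```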
https://en.wikipedia.org/wiki/Cue%20note
In musical notation, a cue note is, or cue notes are, indications informing players "of important passages being played by other instruments, [such as an] entrance after a long period of rest." A cue may also function as a guideline for another instrument for musical improvisation or, if there are many bars' rest, to help the performer find where to come in. "Cue notes may be given as guidance only, to assist a performer's entrance after numerous measures of rest....[Their size, and all elements associated with them] is somewhat smaller than normal note size, but still large enough to be legible (65–75% of normal note size)." The cued instrument is indicated with text, and the cue notes are smaller than the surrounding notes. The stems of cue notes all go in the same direction, and cue notes are transposed into the key of the part entering.
https://en.wikipedia.org/wiki/PTK%20Toolkit
PTK is a 2D rendering engine and SDK developed by Phelios, Inc., that allows computer programmers to create downloadable games in C++ that are portable to Microsoft Windows and Mac OS X. It is currently used by about 60 downloadable games. It is mainly known for powering breakaway casual hits from funpause and Big Fish Games, such as Azada, Atlantis, Atlantis Sky Patrol, Mystic Inn and Fairies. Design philosophy PTK was designed to give programmers BASIC-like ease of programming in C++. It abstracts rendering, input and I/O and removes the need for directly setting up complex renderers such as DirectX or OpenGL. PTK uses a "2D in 3D" paradigm: while it is a 2D engine, it uses 3D acceleration for rendering, enabling bicubic-filtered rendering of scaled, rotated sprites and per-pixel alpha blending at no extra cost in computing time. Games such as Mystic Inn make extensive use of PTK's rendering capabilities.
https://en.wikipedia.org/wiki/E%20%28verification%20language%29
e is a hardware verification language (HVL) which is tailored to implementing highly flexible and reusable verification testbenches. History e was first developed in 1992 in Israel by Yoav Hollander for his Specman software. In 1995 he founded a company, InSpec (later renamed Verisity), to commercialize the software. The product was introduced at the 1996 Design Automation Conference. Verisity has since been acquired by Cadence Design Systems. Features Main features of e are: Random and constrained random stimulus generation Functional coverage metric definition and collection Temporal language that can be used for writing assertions Aspect-oriented programming language with reflection capability Language is DUT-neutral in that you can use a single e testbench to verify a SystemC/C++ model, an RTL model, a gate level model, or even a DUT residing in a hardware acceleration box (using the UVM Acceleration for e Methodology) Can create highly reusable code, especially when the testbench is written following the Universal Verification Methodology (UVM) Formerly known as e Re-use Methodology (eRM) UVM e library and documentation can be downloaded here: UVM World Language Features The e language uses an aspect-oriented programming (AOP) approach, which is an extension of the object-oriented programming approach to specifically address the needs required in functional verification. AOP is a key feature in allowing for users to easily bolt on additional functionality to existing code in a non-invasive manner. This permits easy reuse and code maintenance which is a huge benefit in the hardware world, where designs are continually being tweaked to meet market demands throughout the project lifecycle. AOP also addresses cross cutting concerns (features that cut across various sections of the code) easily by allowing users to extend either specific or all instances of a particular struct to add functionality. Users can extend several structs to add function
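The article's point about non-invasively extending existing structs can be illustrated with a loose Python analogy (this is not e code; the class and the added check are hypothetical examples of bolting verification-style functionality onto code you do not modify):

```python
# Loose analogy (in Python, not e): extend an existing "struct" after the fact,
# so every instance gains new behavior without editing the original source.
class PacketDriver:                      # pretend this lives in unmodifiable legacy code
    def send(self, payload: bytes) -> None:
        print(f"sending {len(payload)} bytes")

# "Aspect"-style extension: wrap the existing method to add an extra check.
_original_send = PacketDriver.send

def send_with_check(self, payload: bytes) -> None:
    assert len(payload) <= 1500, "oversized packet"   # added verification concern
    _original_send(self, payload)

PacketDriver.send = send_with_check     # all instances, old and new, are affected

PacketDriver().send(b"\x00" * 64)
```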
https://en.wikipedia.org/wiki/All-or-none%20law
In physiology, the all-or-none law (sometimes the all-or-none principle or all-or-nothing law) is the principle that if a single nerve fibre is stimulated, it will always give a maximal response and produce an electrical impulse of a single amplitude. If the intensity or duration of the stimulus is increased, the height of the impulse will remain the same. The nerve fibre either gives a maximal response or none at all. It was first established by the American physiologist Henry Pickering Bowditch in 1871 for the contraction of heart muscle. An induction shock produces a contraction or fails to do so according to its strength; if it does so at all, it produces the greatest contraction that can be produced by any strength of stimulus in the condition of the muscle at the time. This principle was later found to be present in skeletal muscle by Keith Lucas in 1909. The individual fibres of nerves also respond to stimulation according to the all-or-none principle. Isolation of the action potential The first recorded isolation of a single action potential was carried out by Edgar Adrian in 1925 on a preparation of crosscut muscle fibres. Using a thermionic triode valve amplifier with an amplification of 1850, Adrian noticed that when the muscle preparation was left to hang it produced oscillations, yet when supported, no such activity occurred. Later, with the help of Yngve Zotterman, Adrian isolated and stimulated a single sensory fibre. The impulses recorded externally from the fibre were uniform: "as simple as the dots in Morse code". Stimulus strength was manipulated and the resulting impulse frequency measured, yielding a relationship of the form f ∝ sⁿ. Relationship between stimulus and response The magnitude of the action potential set up in any single nerve fibre is independent of the strength of the exciting stimulus, provided the latter is adequate. An electrical stimulus below threshold strength fails to elicit a propagated spike potential. If it is of threshold strength or over, a spike (
https://en.wikipedia.org/wiki/Phlegmasia%20cerulea%20dolens
Phlegmasia cerulea dolens (PCD) (literally: 'painful blue inflammation'), not to be confused with the preceding phlegmasia alba dolens, is an uncommon severe form of lower extremity deep venous thrombosis (DVT) that obstructs blood outflow from a vein. Upper extremity PCD is less common, occurring in under 10% of all cases. PCD results from extensive thrombotic occlusion (blockage by a thrombus) of extremity veins, most commonly an iliofemoral DVT involving the iliac vein and/or common femoral vein. It is a medical emergency requiring immediate evaluation and treatment. Symptoms and signs Primary symptoms It is characterized by progressive lower extremity edema distal to the thigh, tight shiny skin, cyanosis (inadequate blood oxygenation), petechiae or purpura, and sudden severe pain of the affected limb in proportion to the level of venous blockage. Patients often have difficulty walking. Blisters, bullae, paresthesias, and motor weakness may develop in severe cases, along with gangrene in ~50% of cases. Distal pulses are palpable early on but may diminish over time, and a Doppler signal can usually be heard throughout disease progression. The left limb is more commonly affected due to its vascular anatomy (the right common iliac artery directly overlies the left common iliac vein). Associated diseases PCD is associated with an underlying malignancy in 20-40% of cases. There is a high risk of massive pulmonary embolism, even under anticoagulation. Etiology Risk factors, present in around 50% of documented cases, include malignancy, hyper-coagulable states, cardiac disease, venous stasis, venous insufficiency, May-Thurner syndrome (the right common iliac artery compressing the left common iliac vein that runs beneath it), surgery, trauma, pregnancy, inferior vena cava (IVC) filter, hormone therapy, oral contraceptives, prolonged immobilization, inflammatory bowel disease, heart failure, and central venous catheters. Etiology is unknown in ~10% of PCD cases. Pathophysiology When a thrombus o
https://en.wikipedia.org/wiki/Harvard%20University%20Herbaria
The Harvard University Herbaria and Botanical Museum are institutions located on the grounds of Harvard University at 22 Divinity Avenue, Cambridge, Massachusetts. The Botanical Museum is one of three which comprise the Harvard Museum of Natural History. The Herbaria, founded in 1842 by Asa Gray, are one of the 10 largest in the world with over 5 million specimens, and including the Botany Libraries, form the world's largest university owned herbarium. The Gray Herbarium is named after him. HUH hosts the Gray Herbarium Index (GCI) as well as an extensive specimen, botanist, and publications database. HUH was the center for botanical research in the United States of America by the time of its founder's retirement in the 1870s. The materials deposited there are one of the three major sources for the International Plant Names Index. The Botanical museum was founded in 1858. It was originally called the Museum of Vegetable Products and was predominantly focused on an interdisciplinary study of useful plants (i.e. economic botany and horticulture). The nucleus of materials for this museum was donated by Sir William Hooker, the Director of the Royal Botanic Garden. Professor George Lincoln Goodale became the museum's first director in 1888; under his direction the building was completed in 1890 and provided both research facilities and public exhibit space, which were the botanical complement to the "Agassiz" Museum of Comparative Zoology. Three successive directors substantially enlarged the collections of economic products, medicinal plants, artifacts, archeological materials, pollen, and photographs. Faculty and students continue to add significantly to the extensive paleobotanical collections, particularly Precambrian material containing early life forms. The Oakes Ames Collection of Economic Botany, the Paleobotanical Collection (including the Pollen Collection), and the Margaret Towle Collection of Archaeological Plant Remains are housed in the Botanical Mus
https://en.wikipedia.org/wiki/Fitness%20model%20%28network%20theory%29
In complex network theory, the fitness model is a model of the evolution of a network: how the links between nodes change over time depends on the fitness of nodes. Fitter nodes attract more links at the expense of less fit nodes. It has been used to model the network structure of the World Wide Web. Description of the model The model is based on the idea of fitness, an inherent competitive factor that nodes may have, capable of affecting the network's evolution. According to this idea, the nodes' intrinsic ability to attract links in the network varies from node to node, the most efficient (or "fit") being able to gather more edges at the expense of others. In that sense, not all nodes are identical to each other, and their degree grows according to the fitness they possess. The fitness factors of all the nodes composing the network may form a distribution ρ(η) characteristic of the system being studied. Ginestra Bianconi and Albert-László Barabási proposed a new model, called the Bianconi–Barabási model, a variant of the Barabási–Albert model (BA model), where the probability for a new node to connect to an existing node i is supplied with a term expressing the fitness of the node involved: Π_i = η_i k_i / Σ_j η_j k_j. The fitness parameter η_i is time independent and enters the attachment probability multiplicatively. A fitness model in which fitnesses are not coupled to preferential attachment was introduced by Caldarelli et al. Here a link is created between two vertices i and j with a probability given by a linking function f(η_i, η_j) of the fitnesses of the vertices involved. The degree of a vertex i is given by k(η_i) = N ∫ f(η_i, η_j) ρ(η_j) dη_j. If k(η) is an invertible and increasing function of η, then the degree distribution is given by P(k) = ρ(η(k)) dη(k)/dk. As a result, if the fitnesses are distributed as a power law, then the node degrees are as well. Less intuitively, with a fast-decaying fitness distribution such as ρ(η) = e^(−η), together with a linking function of the kind f(η_i, η_j) = θ(η_i + η_j − Z), with Z a constant and θ the Heaviside step function, we also obtain scale-free networks. Such a model has
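A minimal simulation sketch of the Bianconi–Barabási growth rule described above, assuming uniformly distributed fitnesses and m = 2 links per new node (these parameter choices are illustrative, not from the article):

```python
# Sketch: Bianconi-Barabási fitness model, attachment probability Pi_i ∝ eta_i * k_i.
# Fitness distribution, m, and network size are illustrative assumptions.
import random

def bianconi_barabasi(n_nodes=1000, m=2, seed=0):
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n_nodes)]   # eta_i ~ Uniform(0, 1)
    degree = [0] * n_nodes
    edges = []

    # Start from a small clique of m + 1 nodes.
    for i in range(m + 1):
        for j in range(i):
            edges.append((i, j))
            degree[i] += 1
            degree[j] += 1

    for new in range(m + 1, n_nodes):
        weights = [fitness[i] * degree[i] for i in range(new)]
        targets = set()
        while len(targets) < m:                         # m distinct targets
            targets.add(rng.choices(range(new), weights=weights, k=1)[0])
        for t in targets:
            edges.append((new, t))
            degree[new] += 1
            degree[t] += 1
    return degree, fitness

degrees, fitnesses = bianconi_barabasi()
print("max degree:", max(degrees))
```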
https://en.wikipedia.org/wiki/Girvan%E2%80%93Newman%20algorithm
The Girvan–Newman algorithm (named after Michelle Girvan and Mark Newman) is a hierarchical method used to detect communities in complex systems. Edge betweenness and community structure The Girvan–Newman algorithm detects communities by progressively removing edges from the original network. The connected components of the remaining network are the communities. Instead of trying to construct a measure that tells us which edges are the most central to communities, the Girvan–Newman algorithm focuses on edges that are most likely "between" communities. Vertex betweenness is an indicator of highly central nodes in networks. For any node i, vertex betweenness is defined as the fraction of shortest paths between pairs of nodes that run through it. It is relevant to models where the network modulates transfer of goods between known start and end points, under the assumption that such transfer seeks the shortest available route. The Girvan–Newman algorithm extends this definition to the case of edges, defining the "edge betweenness" of an edge as the number of shortest paths between pairs of nodes that run along it. If there is more than one shortest path between a pair of nodes, each path is assigned equal weight such that the total weight of all of the paths is equal to unity. If a network contains communities or groups that are only loosely connected by a few inter-group edges, then all shortest paths between different communities must go along one of these few edges. Thus, at least one of the edges connecting communities will have high edge betweenness. By removing these edges, the groups are separated from one another and so the underlying community structure of the network is revealed. The algorithm's steps for community detection are summarized below. The betweenness of all existing edges in the network is calculated first. The edge(s) with the highest betweenness are removed. The betweenness of all edges affected by the removal is recalculated. Step
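A short sketch of the edge-removal loop described above, using the networkx library; the example graph and the stopping criterion of two communities are illustrative assumptions, not part of the algorithm description:

```python
# Sketch: Girvan-Newman by repeatedly removing the highest-betweenness edge.
# The demo graph and the "stop at 2 communities" rule are illustrative choices.
import networkx as nx

def girvan_newman_split(graph, target_communities=2):
    g = graph.copy()
    while nx.number_connected_components(g) < target_communities:
        betweenness = nx.edge_betweenness_centrality(g)   # (re)compute betweenness
        edge = max(betweenness, key=betweenness.get)       # edge with highest betweenness
        g.remove_edge(*edge)                               # remove it
    return [sorted(c) for c in nx.connected_components(g)]

if __name__ == "__main__":
    demo = nx.barbell_graph(5, 0)      # two cliques joined by a single bridge edge
    print(girvan_newman_split(demo))   # the bridge is removed first, revealing the cliques
```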
https://en.wikipedia.org/wiki/G%C3%B6del%20numbering%20for%20sequences
In mathematics, a Gödel numbering for sequences provides an effective way to represent each finite sequence of natural numbers as a single natural number. While a set-theoretical embedding is surely possible, the emphasis is on the effectiveness of the functions manipulating such representations of sequences: the operations on sequences (accessing individual members, concatenation) can be "implemented" using total recursive functions, and in fact by primitive recursive functions. It is usually used to build sequential “data types” in arithmetic-based formalizations of some fundamental notions of mathematics. It is a specific case of the more general idea of Gödel numbering. For example, recursive function theory can be regarded as a formalization of the notion of an algorithm; if we regard it as a programming language, we can mimic lists by encoding a sequence of natural numbers in a single natural number. Gödel numbering Besides using Gödel numbering to encode unique sequences of symbols into unique natural numbers (i.e. placing numbers in one-to-one correspondence with the sequences), we can use it to encode whole “architectures” of sophisticated “machines”. For example, we can encode Markov algorithms, or Turing machines, into natural numbers and thereby prove that the expressive power of recursive function theory is no less than that of the former machine-like formalizations of algorithms. Accessing members Any such representation of sequences should contain all the information of the original sequence—most importantly, each individual member must be retrievable. However, the length does not have to be represented directly; even if we want to handle sequences of different length, we can store length data as a surplus member, or as the other member of an ordered pair by using a pairing function. We expect that there is an effective way for this information retrieval process in the form of an appropriate total recursive function. We want to find
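As an illustration of one classic encoding (not necessarily the particular scheme developed later in the article), a finite sequence (a₁, …, aₙ) can be stored as the product p₁^(a₁+1) · … · pₙ^(aₙ+1) of prime powers, from which each member, and the length, is recoverable. A minimal Python sketch:

```python
# Sketch: encode/decode a finite sequence of naturals as one natural number
# using prime factorization, n = p1^(a1+1) * p2^(a2+1) * ...
# The "+1" in the exponent lets us store zeros and recover the length.
from sympy import prime   # prime(k) returns the k-th prime

def encode(seq):
    n = 1
    for i, a in enumerate(seq, start=1):
        n *= prime(i) ** (a + 1)
    return n

def decode(n):
    seq, i = [], 1
    while n > 1:
        p, e = prime(i), 0
        while n % p == 0:
            n //= p
            e += 1
        seq.append(e - 1)
        i += 1
    return seq

code = encode([3, 0, 2])
print(code, decode(code))   # 2^4 * 3^1 * 5^3 = 6000 -> [3, 0, 2]
```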
https://en.wikipedia.org/wiki/Clostridium%20perfringens%20alpha%20toxin
Clostridium perfringens alpha toxin is a toxin produced by the bacterium Clostridium perfringens (C. perfringens) and is responsible for gas gangrene and myonecrosis in infected tissues. The toxin also possesses hemolytic activity. Clinical significance This toxin has been shown to be the key virulence factor in infection with C. perfringens; the bacterium is unable to cause disease without this toxin. Further, vaccination against the alpha toxin toxoid protects mice against C. perfringens gas gangrene. As a result, knowledge about the function of this particular protein greatly aids understanding of myonecrosis. Structure and homology The alpha toxin has remarkable similarity to toxins produced by other bacteria as well as natural enzymes. There is significant homology with phospholipase C enzymes from Bacillus cereus, C. bifermentans, and Listeria monocytogenes. The C terminal domain shows similarity with non-bacterial enzymes such as pancreatic lipase, soybean lipoxygenase, and synaptotagmin I. The alpha toxin is a zinc metallophospholipase, requiring zinc for activation. First, the toxin binds to a binding site on the cell surface. The C-terminal C2-like PLAT domain binds calcium and allows the toxin to bind to the phospholipid head-groups on the cell surface. The C-terminal domain enters the phospholipid bilayer. The N-terminal domain has phospholipase activity. This property allows hydrolysis of phospholipids such as phosphatidyl choline, mimicking endogenous phospholipase C. The hydrolysis of phosphatidyl choline produces diacylglycerol, which activates a variety of second messenger pathways. The end-result includes activation of arachidonic acid pathway and production of thromboxane A2, production of IL-8, platelet-activating factor, and several intercellular adhesion molecules. These actions combine to cause edema due to increased vascular permeability. See also Clostridium perfringens beta toxin
https://en.wikipedia.org/wiki/3-j%20symbol
In quantum mechanics, the Wigner 3-j symbols, also called 3-jm symbols, are an alternative to Clebsch–Gordan coefficients for the purpose of adding angular momenta. While the two approaches address exactly the same physical problem, the 3-j symbols do so more symmetrically. Mathematical relation to Clebsch–Gordan coefficients The 3-j symbols are given in terms of the Clebsch–Gordan coefficients by (j₁ j₂ j₃; m₁ m₂ m₃) = (−1)^(j₁−j₂−m₃) ⟨j₁ m₁ j₂ m₂ | j₃ (−m₃)⟩ / √(2j₃+1). The j and m components are angular-momentum quantum numbers, i.e., every jᵢ (and every corresponding mᵢ) is either a nonnegative integer or half-odd-integer. The exponent of the sign factor is always an integer, so it remains the same when transposed to the left, and the inverse relation follows upon making the substitution m₃ → −m₃: ⟨j₁ m₁ j₂ m₂ | j₃ m₃⟩ = (−1)^(j₁−j₂+m₃) √(2j₃+1) (j₁ j₂ j₃; m₁ m₂ (−m₃)). Explicit expression Racah's single-sum formula for the 3-j symbol carries an overall factor δ(m₁+m₂+m₃, 0), where δ is the Kronecker delta. The summation is performed over those integer values k for which the argument of each factorial in the denominator is non-negative, i.e. the summation limits K and N are taken equal to: the lower one K = max(0, j₂−j₃−m₁, j₁−j₃+m₂), the upper one N = min(j₁+j₂−j₃, j₁−m₁, j₂+m₂). Factorials of negative numbers are conventionally taken equal to zero, so that the values of the 3-j symbol at, for example, j₃ > j₁+j₂ or |m₁| > j₁ are automatically set to zero. Definitional relation to Clebsch–Gordan coefficients The CG coefficients are defined so as to express the addition of two angular momenta in terms of a third: |j₃ m₃⟩ = Σ_{m₁,m₂} ⟨j₁ m₁ j₂ m₂ | j₃ m₃⟩ |j₁ m₁⟩|j₂ m₂⟩. The 3-j symbols, on the other hand, are the coefficients with which three angular momenta must be added so that the resultant is zero: Σ_{m₁,m₂,m₃} (j₁ j₂ j₃; m₁ m₂ m₃) |j₁ m₁⟩|j₂ m₂⟩|j₃ m₃⟩ ∝ |0 0⟩. Here |0 0⟩ is the zero-angular-momentum state (j = m = 0). It is apparent that the 3-j symbol treats all three angular momenta involved in the addition problem on an equal footing and is therefore more symmetrical than the CG coefficient. Since the state |0 0⟩ is unchanged by rotation, one also says that the contraction of the product of three rotational states with a 3-j symbol is invariant under rotations. Selection rules The Wigner 3-j symbol is zero unless all these conditions are satisfied: each mᵢ lies in {−jᵢ, −jᵢ+1, …, jᵢ}; m₁ + m₂ + m₃ = 0; the triangle conditions |j₁−j₂| ≤ j₃ ≤ j₁+j₂ hold; and j₁ + j₂ + j₃ is an integer (an even integer if m₁ = m₂ = m₃ = 0). Symmetry properties A 3-j symbol is invariant under an even pe
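A quick numerical check of the 3-j/Clebsch–Gordan relation above using SymPy; the particular quantum numbers j₁ = j₂ = 1, j₃ = 2, m₁ = m₂ = 1 are an arbitrary choice for illustration:

```python
# Sketch: verify (j1 j2 j3; m1 m2 m3) = (-1)**(j1-j2-m3)/sqrt(2*j3+1) * <j1 m1 j2 m2 | j3 (-m3)>
# for one arbitrarily chosen set of quantum numbers.
from sympy import sqrt
from sympy.physics.wigner import wigner_3j, clebsch_gordan

j1, j2, j3 = 1, 1, 2
m1, m2 = 1, 1
m3 = -(m1 + m2)          # selection rule: m1 + m2 + m3 = 0

three_j = wigner_3j(j1, j2, j3, m1, m2, m3)
cg = clebsch_gordan(j1, j2, j3, m1, m2, -m3)   # <j1 m1 j2 m2 | j3 (-m3)>

print(three_j)                                      # the two printed values
print((-1)**(j1 - j2 - m3) / sqrt(2*j3 + 1) * cg)   # should agree (sqrt(5)/5 here)
```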
https://en.wikipedia.org/wiki/Agaricus%20arvensis
Agaricus arvensis, commonly known as the horse mushroom, is a mushroom of the genus Agaricus. Taxonomy It was described as Agaricus arvensis by Jacob Christian Schaeffer in 1762, and has been given numerous binomial descriptions since. Its specific name arvensis means 'of the field'. Description The cap is whitish, smooth, and dry; it stains yellow, particularly when young. The gills are pale pink to white at first, later passing through grey and brown to become dull chocolate. There is a large spreading ring, white above but sometimes with yellowish scales underneath. Viewed from below, on a closed-cap specimen, the twin-layered ring has a well-developed 'cogwheel' pattern around the stipe. This is the lower part of the double ring. The stalk is 1–3 cm wide. The spores are brown and smooth. The odor is similar to that of almond extract or marzipan, due to the presence of benzaldehyde. It belongs to a group of Agaricus species which tend to stain yellow on bruising. Similar species When young, this fungus is often confused with species of the deadly genus Amanita. Agaricus osecanus is rare, and is without the almond smell. Agaricus xanthodermus, the yellow stainer, can cause stomach upsets. Agaricus silvicola, the wood mushroom, is a touch more arboreal, with a frail and delicate ring, but also edible. Agaricus campestris, the field mushroom, is generally (but not always) smaller, has pink gills when young, and is also edible. Agaricus bitorquis, the spring agaricus, looks similar to arvensis and campestris, which are more common in the summer and autumn. Agaricus bisporus is the most commonly cultivated mushroom of the genus Agaricus. Distribution and habitat It is one of the largest white Agaricus species in Britain (where it appears during the months of July–November), West Asia (Iran) and North America. Frequently found near stables, as well as in meadows, it may form fairy rings. The mushroom is often found growing with nettles (a plant that also likes nutrient
https://en.wikipedia.org/wiki/SMTC%20Corporation
SMTC Corporation (Surface Mount Technology Centre), founded in 1985, is a mid-size provider of end-to-end electronics manufacturing services (EMS) including PCBA production, systems integration and comprehensive testing services, enclosure fabrication, as well as product design, sustaining engineering and supply chain management services. SMTC facilities span a broad footprint in the United States, Canada, Mexico, and China, with more than 2,300 employees. SMTC services extend over the entire electronic product life cycle from the development and introduction of new products through to the growth, maturity and end-of-life phases. SMTC offers fully integrated contract manufacturing services with a distinctive approach to global original equipment manufacturers (OEMs) and emerging technology companies primarily within industrial, computing and communication market segments. SMTC was recognized in 2012 by Frost & Sullivan with the Global EMS Award for Product Quality Leadership and 2013 with the North American Growth Leadership Award in the EMS industry, as one of the fastest growth companies in 2012. History 1985 - Surface Mount founded in Toronto, Ontario 1990 - HTM established in Denver, Colorado 1997 - acquires Ogden Atlantic Design in Charlotte, North Carolina July 1999 - merger of Surface Mount and HTM 1999 - purchased Zenith Electronics' facility in Chihuahua, Mexico, SMTC's only site with unionized employees September 1999 - acquired W. F. Wood of Boston, Massachusetts July 2000 - acquired EMS company, Pensar Electronic Solutions of Appleton, Wisconsin July 21, 2000 - IPO November 2000 - acquired Qualtron Teoranta of Donegal, Ireland, and subsidiary in Haverhill, Massachusetts March 2002 - Closed down its Cork City, Ireland manufacturing plant with the loss of 200 jobs due to its main customer going into administration. August 2003 - Sold EMS company, Pensar Electronics Solutions of Appleton, Wisconsin back to the original owners. The original owne
https://en.wikipedia.org/wiki/Type%20II%20cytokine%20receptor
Type II cytokine receptors, also commonly known as class II cytokine receptors, are transmembrane proteins that are expressed on the surface of certain cells. They bind and respond to a select group of cytokines including interferon type I, interferon type II, interferon type III, and members of the interleukin-10 family. These receptors are characterized by the lack of a WSXWS motif, which differentiates them from type I cytokine receptors. Structure Typically type II cytokine receptors are heterodimers or multimers with a high and a low affinity component. These receptors are related predominantly by sequence similarities in their extracellular portions, which are composed of tandem Ig-like domains. The structures for the extracellular domains of the receptors for interferon types I, II, and III are all known. Type II cytokine receptors are tyrosine-kinase-linked receptors. The intracellular domain of type II cytokine receptors is typically associated with a tyrosine kinase belonging to the Janus kinase (JAK) family. Ligand binding to the receptor typically leads to activation of the canonical JAK/STAT signaling pathway. Types Type II cytokine receptors include those that bind interferons and those that bind members of the interleukin-10 family (interleukin-10, interleukin-20, interleukin-22, and interleukin-28). Expression of specific receptor varieties is highly variable across tissue types, with some receptors being ubiquitously expressed and some receptors only expressed in specific tissues. Interferon receptors The interferon receptor is a molecule displayed on the surface of cells which interacts with extracellular interferons. Class II cytokine receptors bind type I, type II, and type III interferons. Type I interferons play important roles in both the adaptive and innate immune responses, prevent proliferation of pathogens, and have antiviral activities. Type II interferons help to modulate the immune system’s response to pathogens, and these interferons also r
https://en.wikipedia.org/wiki/Fuzzy%20routing
Fuzzy routing is the application of fuzzy logic to routing protocols, particularly in the context of ad-hoc wireless networks and in networks supporting multiple quality of service classes. It is currently the subject of research. See also Dynamic routing List of ad hoc routing protocols External links Hui Liu et al., An Adaptive Genetic Fuzzy Multi-path Routing Protocol for Wireless Ad Hoc Networks Runtong Zhang, A Fuzzy Routing Mechanism In Next-Generation Networks Routing protocols Fuzzy logic Wireless networking
https://en.wikipedia.org/wiki/African%20textiles
African textiles are textiles from various locations across the African continent. Across Africa, there are many distinctive styles, techniques, dyeing methods, and decorative and functional purposes. These textiles hold cultural significance and also have significance as historical documents of African design. History Some of the oldest surviving African textiles were discovered at the archaeological site of Kissi in northern Burkina Faso. They are made of wool or fine "short" animal hair, including dried skin for integrity. Some fragments have also survived from the thirteenth century Benin City in Nigeria. Historically, textiles were used as a form of currency since the fourteenth century in West Africa and Central Africa. Below is an overview of some of the common techniques and textile materials used in various African regions and countries. Textile weaving Stripweaving, a centuries-old textile manufacturing technique of creating cloth by weaving strips together, is characteristic of weaving in West Africa, where Mande weavers, and in particular the Tellem people, are credited as the first to master the art of weaving complex weft patterns into strips. Findings from caves at the Bandiagara Escarpment in Mali suggest its use from as far back as the 11th century. Stripwoven cloths are made up of narrow strips that are cut into desired lengths and sewn together. From Mali, the technique spread across West Africa to Ivory Coast, Ghana, and Nigeria. Raphia fiber from the dried, stripped leaves of the raphia palm was commonly used in West Africa and Central Africa since it is widely available in countries with grasslands like Cameroon, Ghana, and Nigeria. Cotton fibers from the kapok tree have been extensively used by the Dagomba to produce long strips of fibre to make the Ghanaian smock. Other fiber materials included undyed wild silk used in Nigeria for embroidery and weaving, as well as barkcloth from fig trees used to make clothes for ceremonial occasions in Uganda, Cameroon, and the
https://en.wikipedia.org/wiki/Cephalopod%20size
Cephalopods, which include squids and octopuses, vary enormously in size. The smallest are minute at maturity, while the giant squid can reach exceptional lengths and the colossal squid weighs close to half a tonne, making them the largest living invertebrates. Living species range in mass more than three-billion-fold, or across nine orders of magnitude, from the lightest hatchlings to the heaviest adults. Certain cephalopod species are also noted for having individual body parts of exceptional size. Cephalopods were at one time the largest of all organisms on Earth, and numerous species of comparable size to the largest present day squids are known from the fossil record, including enormous examples of ammonoids, belemnoids, nautiloids, orthoceratoids, teuthids, and vampyromorphids. In terms of mass, the largest of all known cephalopods were likely the giant shelled ammonoids and endocerid nautiloids, though perhaps still second to the largest living cephalopods when considering tissue mass alone. Cephalopods vastly larger than either giant or colossal squids have been postulated at various times. One of these was the St. Augustine Monster, a large carcass weighing several tonnes that washed ashore on the United States coast near St. Augustine, Florida, in 1896. Reanalyses in 1995 and 2004 of the original tissue samples—together with those of other similar carcasses—showed conclusively that they were all masses of the collagenous matrix of whale blubber. Giant cephalopods have fascinated humankind for ages. The earliest surviving records are perhaps those of Aristotle and Pliny the Elder, both of whom described squids of very large size. Tales of giant squid have been common among mariners since ancient times, and may have inspired the monstrous kraken of Nordic legend, said to be as large as an island and capable of engulfing and sinking any ship. Similar tentacled sea monsters are known from other parts of the globe, including the Akk
https://en.wikipedia.org/wiki/LOMAC
Low Water-Mark Mandatory Access Control (LOMAC) is a Mandatory Access Control model which protects the integrity of system objects and subjects by means of an information flow policy coupled with subject demotion via floating labels. In LOMAC, all system subjects and objects are assigned integrity labels, made up of one or more hierarchical grades, depending on their types. Together, these label elements permit all labels to be placed in a partial order, with information flow protections and demotion decisions based on a dominance operator describing the order. Implementations In FreeBSD, LOMAC is implemented by the mac_lomac MAC policy. In Linux, there is a project that attempts to implement the LOMAC policy. See also Multi-Level Security — MLS Mandatory Access Control — MAC Discretionary Access Control — DAC Take-Grant Model The Clark-Wilson Integrity Model Graham-Denning Model Security Modes of Operation
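A toy sketch of the low water-mark idea described above: reads from lower-integrity objects demote the subject's floating label, and writes are only allowed to objects whose label the subject's current label dominates. The two-level label scheme and the rule details are illustrative assumptions, not the LOMAC specification:

```python
# Toy low water-mark integrity model: a subject's floating label drops to the
# minimum of its own level and the level of any object it reads ("demotion"),
# and it may only write to objects whose level it still dominates.
# Levels and rules here are illustrative, not the LOMAC specification.

HIGH, LOW = 2, 1   # hypothetical hierarchical integrity grades

class Subject:
    def __init__(self, name, level):
        self.name, self.level = name, level

    def read(self, obj_name, obj_level):
        # Reading never fails, but touching lower-integrity data demotes us.
        if obj_level < self.level:
            print(f"{self.name}: demoted {self.level} -> {obj_level} after reading {obj_name}")
            self.level = obj_level

    def write(self, obj_name, obj_level):
        # Writes are permitted only when the subject's label dominates the object's.
        allowed = self.level >= obj_level
        print(f"{self.name}: write to {obj_name} {'allowed' if allowed else 'DENIED'}")
        return allowed

daemon = Subject("daemon", HIGH)
daemon.write("/etc/config", HIGH)     # allowed: HIGH >= HIGH
daemon.read("/tmp/untrusted", LOW)    # demotes daemon to LOW
daemon.write("/etc/config", HIGH)     # now DENIED: LOW < HIGH
```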
https://en.wikipedia.org/wiki/List%20of%20BSD%20operating%20systems
There are a number of Unix-like operating systems under active development, descended from the Berkeley Software Distribution (BSD) series of UNIX variants developed (originally by Bill Joy) at the University of California, Berkeley, Department of Electrical Engineering and Computer Science. there were four major BSD operating systems, and an increasing number of other OSs derived from these, that add or remove certain features but generally remain compatible with their originating OS—and so are not really forks of them. This is a list of those that have been active since 2014, and their websites. FreeBSD-based FreeBSD is a free Unix-like operating system descended from AT&T UNIX via the Berkeley Software Distribution (BSD). FreeBSD currently has more than 200 active developers and thousands of contributors. Other notable derivatives include DragonFly BSD, which was forked from FreeBSD 4.8, and Apple Inc.'s macOS, with its Darwin base including a large amount of code derived from FreeBSD. Active Discontinued DragonFly BSD-based NetBSD-based NetBSD is a freely redistributable, open source version of the Unix-derivative Berkeley Software Distribution (BSD) computer operating system. It was the second open source BSD descendant to be formally released, after 386BSD, and continues to be actively developed. Noted for its portability and quality of design and implementation, it is often used in embedded systems and as a starting point for the porting of other operating systems to new computer architectures. OpenBSD-based OpenBSD is a Unix-like computer operating system descended from Berkeley Software Distribution (BSD), a Unix derivative developed at the University of California, Berkeley. It was forked from NetBSD in 1995. OpenBSD includes a number of security features absent or optional in other operating systems and has a tradition of developers auditing the source code for software bugs and security problems. Historic BSD BSD was originally derived from Unix
https://en.wikipedia.org/wiki/Muller%27s%20morphs
Hermann J. Muller (1890–1967), who was a 1946 Nobel Prize winner, coined the terms amorph, hypomorph, hypermorph, antimorph and neomorph to classify mutations based on their behaviour in various genetic situations and their interactions with other alleles. These classifications are still widely used in Drosophila genetics to describe mutations. For a more general description of mutations, see mutation, and for a discussion of allele interactions, see dominance relationship. Key: In the following sections, alleles are referred to as + = wildtype, m = mutant, Df = gene deletion, Dp = gene duplication. Phenotypes are compared with '>', meaning 'phenotype is more severe than'. Loss of function Amorph Amorphic describes a mutation that causes complete loss of gene function. Amorph is sometimes used interchangeably with "genetic null". An amorphic mutation might cause complete loss of protein function by disrupting translation ("protein null") and/or preventing transcription ("RNA null"). An amorphic allele elicits the same phenotype when homozygous and when heterozygous to a chromosomal deletion or deficiency that disrupts the same gene. This relationship can be represented as follows: m/m = m/Df An amorphic allele is commonly recessive to its wildtype counterpart. It is possible for an amorph to be dominant if the gene in question is required in two copies to elicit a normal phenotype (i.e. haploinsufficient). Hypomorph Hypomorphic describes a mutation that causes a partial loss of gene function. A hypomorph is a reduction in gene function through reduced (protein, RNA) expression or reduced functional performance, but not a complete loss. The phenotype of a hypomorph is more severe in trans to a deletion allele than when homozygous: m/Df > m/m Hypomorphs are usually recessive, but occasional alleles are dominant due to haploinsufficiency. Gain of function Hypermorph A hypermorphic mutation causes an increase in normal gene function. Hypermorphic alleles are gain o
https://en.wikipedia.org/wiki/Simon%20model
In applied probability theory, the Simon model is a class of stochastic models that results in a power-law distribution function. It was proposed by Herbert A. Simon to account for the wide range of empirical distributions following a power-law. It models the dynamics of a system of elements with associated counters (e.g., words and their frequencies in texts, or nodes in a network and their connectivity k). In this model the dynamics of the system is based on constant growth via addition of new elements (new instances of words) as well as incrementing the counters (new occurrences of a word) at a rate proportional to their current values. Description To model this type of network growth as described above, Bornholdt and Ebel considered a network with n nodes, each node having connectivity k_i, i = 1, …, n. These nodes form classes, where f(k) denotes the number of nodes with identical connectivity k. Repeat the following steps: (i) With probability α add a new node and attach a link to it from an arbitrarily chosen node. (ii) With probability 1 − α add one link from an arbitrary node to a node of class k chosen with a probability proportional to k f(k). For this stochastic process, Simon found a stationary solution exhibiting power-law scaling, P(k) ∝ k^(−γ), with exponent γ = 1 + 1/(1 − α). Properties (i) The Barabási–Albert (BA) model can be mapped to a subclass of Simon's model, when using the simpler probability for a node being connected to another node of connectivity k proportional to k (the same as the preferential attachment of the BA model). In other words, the Simon model describes a general class of stochastic processes that can result in a scale-free network, appropriate to capture Pareto and Zipf's laws. (ii) The only free parameter of the model, α, reflects the relative growth of the number of nodes versus the number of links. In general α has small values; therefore, the scaling exponent can be predicted to be γ ≈ 2. For instance, Bornholdt and Ebel studied the linking dynamics of the World Wide Web, and predicted the scaling exponent as γ ≈ 2.1, which was consistent wit
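A small simulation sketch of the two growth steps above (the value α = 0.1 and the run length are illustrative assumptions); choosing a target in proportion to k f(k) is equivalent to picking a uniformly random endpoint of an existing link:

```python
# Sketch: Simon model growth.  With probability alpha a new node appears with one
# incoming link; otherwise an existing node is chosen proportionally to its
# degree (equivalent to k*f(k) over degree classes) and receives a new link.
import random
from collections import Counter

def simon_model(steps=100_000, alpha=0.1, seed=1):
    rng = random.Random(seed)
    degree = [1, 1]          # start with two nodes joined by one link
    endpoints = [0, 1]       # each link contributes its two endpoints to this list

    for _ in range(steps):
        if rng.random() < alpha:
            new = len(degree)
            degree.append(0)
            source = rng.randrange(new)        # arbitrary existing node
            target = new                       # the freshly added node
        else:
            source = rng.randrange(len(degree))
            target = rng.choice(endpoints)     # degree-proportional choice
        degree[source] += 1
        degree[target] += 1
        endpoints.extend([source, target])
    return degree

hist = Counter(simon_model())
print(sorted(hist.items())[:10])   # low-degree counts; the tail follows ~k^-(1 + 1/(1-alpha))
```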
https://en.wikipedia.org/wiki/Superior%20rectal%20plexus
The superior rectal plexus (or superior hemorrhoidal plexus) supplies the rectum and joins in the pelvis with branches from the pelvic plexuses. The superior rectal plexus is a division of the inferior mesenteric plexus.
https://en.wikipedia.org/wiki/Copying%20mechanism
In the study of scale-free networks, a copying mechanism is a process by which such a network can form and grow, by means of repeated steps in which nodes are duplicated with mutations from existing nodes. Several variations have been studied. In the general copying model, a growing network starts as a small initial graph and, at each time step, a new vertex is added with a given number k of new outgoing edges. As a result of a stochastic selection, the neighbors of the new vertex are either chosen randomly among the existing vertices, or one existing vertex is randomly selected and k of its neighbors are "copied" as heads of the new edges. Motivation Copying mechanisms for modeling growth of the World Wide Web are motivated by the following intuition: Some web page authors will note an interesting but novel commonality between certain pages, and will link to pages exhibiting this commonality; pages created with this motivation are modeled by a random choice among existing pages. Most authors, on the other hand, will be interested in certain already-represented topics, and will collect together links to pages about these topics. Pages created in this way can be modeled by node copying. These correspond to the growth and preferential attachment properties of such networks. Description For the simple case, nodes are never deleted. At each step we create a new node with a single edge emanating from it. Let u be a page chosen uniformly at random from the pages in existence before this step. (I) With probability p, the only parameter of the model, the new edge points to u. (II) With probability 1 − p, the new edge points to the destination of u's (sole) out-link; the new node attains its edge by copying. The second process increases the probability of high-degree nodes' receiving new incoming edges. In fact, since u is selected randomly, the probability that a webpage with degree k will receive a new hyperlink grows linearly with k, indicating that the copying mechanism effective
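A compact simulation sketch of the single-out-link copying rule (I)/(II) above; the value p = 0.3 and the number of steps are illustrative choices, not from the article:

```python
# Sketch: the simple copying model.  Each new node gets one out-edge that points
# to a uniformly chosen page u (prob. p) or to the destination of u's out-link
# (prob. 1 - p), which implicitly favors pages that are already popular.
import random
from collections import Counter

def copying_model(steps=50_000, p=0.3, seed=7):
    rng = random.Random(seed)
    out_link = [0]             # node 0 points to itself as a seed
    in_degree = Counter({0: 1})

    for _ in range(steps):
        u = rng.randrange(len(out_link))                  # uniformly chosen existing page
        target = u if rng.random() < p else out_link[u]   # rule (I) or rule (II)
        out_link.append(target)                           # the new node's single out-link
        in_degree[target] += 1
    return in_degree

deg = copying_model()
print("most-linked pages:", deg.most_common(5))
```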
https://en.wikipedia.org/wiki/Database%20of%20Molecular%20Motions
The Database of Macromolecular Motions is a bioinformatics database and software-as-a-service tool that attempts to categorize macromolecular motions, sometimes also known as conformational change. It was originally developed by Mark B. Gerstein, Werner Krebs, and Nat Echols in the Molecular Biophysics & Biochemistry Department at Yale University. Discussion Since its introduction in the late 1990s, peer-reviewed papers on the database have received thousands of citations. The database has been mentioned in news articles in major scientific journals, book chapters, and elsewhere. Users can search the database for a particular motion by either protein name or Protein Data Bank ID number. Typically, however, users will enter the database via the Protein Data Bank, which often provides a hyperlink to the molmovdb entry for proteins found in both databases. The database includes a web-based tool (the Morph Server) which allows non-experts to animate and visualize certain types of protein conformational change through the generation of short movies. This system uses molecular modelling techniques to interpolate the structural changes between two different protein conformers and to generate a set of intermediate structures. A hyperlink pointing to the morph results is then emailed to the user. The Morph Server was originally primarily a research tool rather than a general molecular animation tool, and thus offered only limited user control over rendering, animation parameters, color, and point of view, and the original methods sometimes required a fair amount of CPU time to complete. Since their initial introduction in 1996, the database and associated morph server have undergone development to try to address some of these shortcomings as well as add new features, such as Normal Mode Analysis. Other research groups have subsequently developed alternative systems, such as MovieMaker from the University of Alberta. Commercialization Bioinformatics vendor DNASTAR ha
https://en.wikipedia.org/wiki/Kratos%20MS%2050
The Kratos MS 50, or EI 50, is a mass spectrometer that uses electron ionization (EI). The EI 50, used for relatively small molecules (as opposed to methods like MALDI), ionizes molecules via electron ionization (normally at 70 electronvolts) and then accelerates them through an electric potential. Mass analysis is done by measuring how far the ions are deflected by a magnet. For a given charge, the deflection depends on the ion's momentum; since all ions are accelerated to the same kinetic energy, the deflection is uniquely determined by the ion's mass. Mass spectrometry
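A short derivation of why, at a fixed accelerating potential, the magnetic deflection radius depends only on the mass-to-charge ratio (standard sector-instrument relations, not specific to the MS 50):

$$qV = \tfrac{1}{2} m v^{2}, \qquad r = \frac{m v}{q B} \;\;\Longrightarrow\;\; r = \frac{1}{B}\sqrt{\frac{2 m V}{q}},$$

so for singly charged ions accelerated through the same potential V, heavier ions follow wider arcs in the magnetic field B.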
https://en.wikipedia.org/wiki/Reagent%20testing
Reagent testing is one of the processes used to identify substances contained within a pill, usually illicit substances. With the increased prevalence of drugs being available in their pure forms, the terms "drug checking" or "pill testing" may also be used, although these terms usually refer to testing with a wider variety of techniques covered by drug checking. Reagent testing notes A test is done by taking a small scraping from a pill and placing it in the reagent testing liquid or dropping the reagent onto the scraping. The liquid will change colour when reacting with different chemicals to indicate the presence of certain substances. Testing with a reagent kit does not indicate the pill is safe. While the testing process does show some particular substances are present, it may not show a harmful substance that is also present and unaccounted for by the testing process. Some substances that cause strong colour changes can also mask the presence of other substances that cause weaker colour changes. Thin layer chromatography is used with reagent testing to separate substances before testing and prevent this "masking" effect. Ehrlich reagent can only detect drugs with an indole moiety, but this is useful because drugs from the NBOMe class do not have an indole and are often sold as LSD, which does. The Ehrlich reagent has an additional benefit over other reagents in that it does not react with the paper on which LSD is often distributed. Reagent tests are often limited to detecting specific chemicals, and when these substances are mis-sold it is usually by substitution of a different substance in the same chemical family, rendering the test of little use to consumers. However, reagent tests for chemical families also exist. Lacing agents are often used to cut the weight of substances. Some of the most available and non-suspicious cutting agents are reducing sugars: The common dietary monosaccharides galactose, glucose and fructose are all reducing sugars. Sugar is t
https://en.wikipedia.org/wiki/Acoustic%20radiation%20pressure
Acoustic radiation pressure is the apparent pressure difference between the average pressure at a surface moving with the displacement of the wave propagation (the Lagrangian pressure) and the pressure that would have existed in the fluid of the same mean density when at rest. Numerous authors make a distinction between the phenomena of Rayleigh radiation pressure and Langevin radiation pressure. See also Radiation pressure Acoustic levitation Acoustic radiation force
https://en.wikipedia.org/wiki/National%20Center%20for%20High-Performance%20Computing
The National Center for High-Performance Computing (NCHC) is one of ten national-level research laboratories under the National Applied Research Laboratories (NARL), headquartered at Hsinchu Science and Industrial Park, Hsinchu City, Taiwan. The NCHC is Taiwan's primary facility for high performance computing (HPC) resources including large-scale computational science and engineering, cluster and grid computing, middleware development, visualization and virtual reality, data storage, networking, and HPC-related training. The NCHC is also responsible for the operation of the 20 Gbit/s Taiwan Advanced Research and Education Network (TWAREN), the national education and research network of Taiwan. The NCHC supports academia and industry with hardware and software, advanced research and application development, and professional training. Its Free Software Lab developed and maintains the free disk cloning utility Clonezilla. History The research center was opened in 1993. In November 2018, the NCHC-owned supercomputer Taiwania 2 debuted at number 20 on the TOP500 list of fastest supercomputers. List of supercomputers Formosa 4 Formosa 5 ALPS Taiwania (supercomputer) Taiwania 2 Taiwania 3 Branches Hsinchu HQ Taichung Tainan See also Ministry of Science and Technology (Republic of China) Taiwan Semiconductor Research Institute Industrial Technology Research Institute Taiwania (supercomputer) Taiwania 3 (supercomputer)
https://en.wikipedia.org/wiki/Timeline%20of%20biotechnology
The historical application of biotechnology throughout time is provided below in chronological order. These discoveries, inventions and modifications are evidence of the application of biotechnology since before the common era and describe notable events in the research, development and regulation of biotechnology. Before Common Era 5000 BCE – Chinese discover fermentation through beer making. 6000 BCE – Yogurt and cheese made with lactic acid-producing bacteria by various people. 4500 BCE – Egyptians bake leavened bread using yeast. 500 BCE – Moldy soybean curds used as an antibiotic. 300 BCE – The Greeks practice crop rotation for maximum soil fertility. 100 AD – Chinese use chrysanthemum as a natural insecticide. Pre-20th century 1663 – First recorded description of living cells by Robert Hooke. 1677 – Antonie van Leeuwenhoek discovers and describes bacteria and protozoa. 1798 – Edward Jenner uses the first viral vaccine to inoculate a child against smallpox. 1802 – The first recorded use of the word biology. 1824 – Henri Dutrochet discovers that tissues are composed of living cells. 1838 – Protein discovered, named and recorded by Gerardus Johannes Mulder and Jöns Jacob Berzelius. 1862 – Louis Pasteur discovers the bacterial origin of fermentation. 1863 – Gregor Mendel discovers the laws of inheritance. 1864 – The first centrifuge to separate cream from milk is invented. 1869 – Friedrich Miescher identifies DNA in the sperm of a trout. 1871 – Felix Hoppe-Seyler discovers invertase, which is still used for making artificial sweeteners. 1877 – Robert Koch develops a technique for staining bacteria for identification. 1878 – Walther Flemming discovers chromatin, leading to the discovery of chromosomes. 1881 – Louis Pasteur develops vaccines against bacteria that cause cholera and anthrax in chickens. 1885 – Louis Pasteur and Emile Roux develop the first rabies vaccine and use it on Joseph Meister. 20th century 1919 – Károly Ereky, a Hungarian
https://en.wikipedia.org/wiki/McASP
McASP is an acronym for Multichannel Audio Serial Port, a communication peripheral found in the Texas Instruments family of digital signal processors (DSPs) and microcontroller units (MCUs). The McASP functions as a general-purpose audio serial port optimized for the needs of multichannel audio applications. Depending on the implementation, the McASP may be useful for time-division multiplexed (TDM) streams, Inter-IC Sound (I2S) protocols, and intercomponent digital audio interface transmission (DIT). However, some implementations are limited to supporting just the Inter-IC Sound (I2S) protocol. The McASP consists of transmit and receive sections that may operate synchronized, or completely independently with separate master clocks, bit clocks, and frame syncs, and using different transmit modes with different bit-stream formats. The McASP module also includes up to 16 serializers that can be individually enabled to either transmit or receive. In addition, all of the McASP pins can be configured as general-purpose input/output (GPIO) pins. Features Features of the McASP include: Two independent clock generator modules for transmit and receive Clocking flexibility allows the McASP to receive and transmit at different rates. For example, the McASP can receive data at 48 kHz but output up-sampled data at 96 kHz or 192 kHz. Independent transmit and receive modules, each includes: Programmable clock and frame sync generator TDM streams from 2 to 32, and 384 time slots Support for time slot sizes of 8, 12, 16, 20, 24, 28, and 32 bits Data formatter for bit manipulation Individually assignable serial data pins (up to 16 pins) Glueless connection to audio analog-to-digital converters (ADC), digital-to-analog converters (DAC), codecs, digital audio interface receivers (DIR), and S/PDIF transmit physical layer components. Wide variety of I2S and similar bit-stream formats Integrated digital audio interface transmitter (DIT) supports: S/PDIF, IEC60958-1, AES-3 forma
https://en.wikipedia.org/wiki/Tree%20of%20life%20%28biology%29
The tree of life or universal tree of life is a metaphor, model and research tool used to explore the evolution of life and describe the relationships between organisms, both living and extinct, as described in a famous passage in Charles Darwin's On the Origin of Species (1859). Tree diagrams originated in the medieval era to represent genealogical relationships. Phylogenetic tree diagrams in the evolutionary sense date back to the mid-nineteenth century. The term phylogeny for the evolutionary relationships of species through time was coined by Ernst Haeckel, who went further than Darwin in proposing phylogenic histories of life. In contemporary usage, tree of life refers to the compilation of comprehensive phylogenetic databases rooted at the last universal common ancestor of life on Earth. Two public databases for the tree of life are TimeTree, for phylogeny and divergence times, and the Open Tree of Life, for phylogeny. History Early natural classification Although tree-like diagrams have long been used to organise knowledge, and although branching diagrams known as claves ("keys") were omnipresent in eighteenth-century natural history, it appears that the earliest tree diagram of natural order was the 1801 "Arbre botanique" (Botanical Tree) of the French schoolteacher and Catholic priest Augustin Augier. Yet, although Augier discussed his tree in distinctly genealogical terms, and although his design clearly mimicked the visual conventions of a contemporary family tree, his tree did not include any evolutionary or temporal aspect. Consistent with Augier's priestly vocation, the Botanical Tree showed rather the perfect order of nature as instituted by God at the moment of Creation. In 1809, Augier's more famous compatriot Jean-Baptiste Lamarck (1744–1829), who was acquainted with Augier's "Botanical Tree", included a branching diagram of animal species in his Philosophie zoologique. Unlike Augier, however, Lamarck did not discuss his diagram in terms of
https://en.wikipedia.org/wiki/Cyn.in
Cyn.in is open-source enterprise collaboration software built on top of Plone, a content management system written in the Python programming language that runs as a layer above Zope. Cyn.in is developed by Cynapse, an India-based company founded by Apurva Roy Choudhury and Dhiraj Gupta. Cyn.in enables its users to store, retrieve and organize files and rich content in a collaborative, multiuser environment. Cyn.in comes in three flavors. The Cyn.in Community Edition is released under the GNU General Public License version 3, is based on open standards and is completely "free" to use. The Cyn.in Enterprise Editions are commercially supported, certified and tested by Cynapse. The on-premises appliance is designed for businesses that want to install the software on their own infrastructure behind their firewall. With the On-Demand Service, Cynapse hosts the software for businesses to use on secure cloud servers. History Cyn.in was developed and released in late 2006 as closed-source Enterprise Bliki software, based on the .NET Framework and offered as SaaS by Cynapse. In June 2008, Cynapse, the company behind Cyn.in, released a new version of Cyn.in and open-sourced the project. This release was built on the popular open-source Plone - Zope - Python framework. With this release, Cynapse's intention was to expand its focus into the enterprise collaboration domain. While the new release still supported blogs and wikis, Cyn.in had evolved to include enterprise collaboration tools such as file repositories, event calendars, image galleries and more. The company decided to discontinue the Bliki terminology, and Cyn.in is now described as collaboration software. Concepts Application convergence The cyn.in collaborative information management system attempts to bring together the core concepts of: Personal information management Organization-wide knowledge and document management Information and file collaboration Knowledge transfer Content publishing Spaces Information can
https://en.wikipedia.org/wiki/Gate%20dielectric
A gate dielectric is a dielectric used between the gate and substrate of a field-effect transistor (such as a MOSFET). In state-of-the-art processes, the gate dielectric is subject to many constraints, including: Electrically clean interface to the substrate (low density of quantum states for electrons) High capacitance, to increase the FET transconductance High thickness, to avoid dielectric breakdown and leakage by quantum tunneling. The capacitance and thickness constraints are almost directly opposed to each other. For silicon-substrate FETs, the gate dielectric is almost always silicon dioxide (called "gate oxide"), since thermal oxide has a very clean interface. However, the semiconductor industry is interested in finding alternative materials with higher dielectric constants, which would allow higher capacitance with the same thickness. History The earliest gate dielectric used in a field-effect transistor was silicon dioxide (SiO2). The silicon and silicon dioxide surface passivation process was developed by Egyptian engineer Mohamed M. Atalla at Bell Labs during the late 1950s, and then used in the first MOSFETs (metal–oxide–semiconductor field-effect transistors). Silicon dioxide remains the standard gate dielectric in MOSFET technology. See also QBD (electronics)
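The tension between the capacitance and thickness constraints can be seen from the parallel-plate approximation commonly applied to gate stacks (a first-order sketch that ignores quantum and interface corrections):

$$C_{ox} = \frac{\kappa\,\varepsilon_0\, A}{t_{ox}},$$

where $\kappa$ is the relative dielectric constant, $\varepsilon_0$ the vacuum permittivity, $A$ the gate area and $t_{ox}$ the dielectric thickness. For a given target capacitance and area, a material with a larger $\kappa$ permits a proportionally thicker dielectric, which is exactly why higher-permittivity materials are attractive for suppressing tunneling leakage.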
https://en.wikipedia.org/wiki/Network%20DVR
Network DVR (NDVR), or network personal video recorder (NPVR), or remote storage digital video recorder (RS-DVR) is a network-based digital video recorder (DVR) stored at the provider's central location rather than at the consumer's private home. Traditionally, media content was stored in a subscriber's set-top box hard drive, but with NDVR the service provider owns a large number of servers, on which the subscribers' media content is stored. The term RS-DVR is used by Cablevision for their version of this technology. Overview NDVR is a consumer service where real-time broadcast television is captured in the network on a server allowing the end user to access the recorded programs at will, rather than being tied to the broadcast schedule. The NDVR system provides time-shifted viewing of broadcast programs, allowing subscribers to record and watch programs at their convenience, without the requirement of a local PVR device. It can be considered as a "PVR that is built into the network". NDVR subscribers can choose from the programmes available in the network-based library, when they want, without needing yet another device or remote control. However, many people would still prefer to have their own PVR device, as it would allow them to choose exactly what they want to record. Local PVR bypasses the strict rights and licensing regulations, as well as other limitations, that often prevent the network itself from providing "on demand" access to certain programmes. In contrast, RS-DVR (Remote Storage Digital Video Recorder) refers to a service where a subscriber can record a program and store it on the network. A stored program is only available to the person who recorded it. Should any two persons record the same program, it must for legal reasons be recorded and stored as separate copies. Essentially implementing a traditional DVR with network based storage. In Greece, On Telecoms offers an NPVR service to all subscribers in their basic package with all the pr
https://en.wikipedia.org/wiki/Melnikov%20distance
In mathematics, the Melnikov method is a tool to identify the existence of chaos in a class of dynamical systems under periodic perturbation. Introduction The Melnikov method is used in many cases to predict the occurrence of chaotic orbits in non-autonomous smooth nonlinear systems under periodic perturbation. According to the method, it is possible to construct a function called the "Melnikov function" which can be used to predict either regular or chaotic behavior of a dynamical system. Thus, the Melnikov function will be used to determine a measure of distance between stable and unstable manifolds in the Poincaré map. Moreover, when this measure is equal to zero, by the method, those manifolds crossed each other transversally and from that crossing the system will become chaotic. This method appeared in 1890 by H. Poincaré and by V. Melnikov in 1963 and could be called the "Poincaré-Melnikov Method". Moreover, it was described by several textbooks as Guckenheimer & Holmes,Kuznetsov, S. Wiggins, Awrejcewicz & Holicke and others. There are many applications for Melnikov distance as it can be used to predict chaotic vibrations. In this method, critical amplitude is found by setting the distance between homoclinic orbits and stable manifolds equal to zero. Just like in Guckenheimer & Holmes where they were the first who based on the KAM theorem, determined a set of parameters of relatively weak perturbed Hamiltonian systems of two-degrees-of-freedom, at which homoclinic bifurcation occurred. The Melnikov distance Consider the following class of systems given by or in vector form where , , and Assume that system (1) is smooth on the region of interest, is a small perturbation parameter and is a periodic vector function in with the period . If , then there is an unperturbed system From this system (3), looking at the phase space in Figure 1, consider the following assumptions A1 - The system has a hyperbolic fixed point , connected to itself by a hom
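The inline formulas in the passage above were lost in extraction. In the standard textbook presentation (as in Guckenheimer & Holmes or Wiggins; the notation here is a hedged reconstruction rather than a quotation of the source), the perturbed planar system and the Melnikov function read

$$\dot{x} = f(x) + \varepsilon\, g(x,t), \qquad x \in \mathbb{R}^2, \quad 0 < \varepsilon \ll 1, \quad g(x, t+T) = g(x,t),$$

and, with $q_0(t)$ the homoclinic orbit of the unperturbed system ($\varepsilon = 0$),

$$M(t_0) = \int_{-\infty}^{\infty} f\big(q_0(t)\big) \wedge g\big(q_0(t),\, t + t_0\big)\, dt, \qquad u \wedge v = u_1 v_2 - u_2 v_1 .$$

To leading order in $\varepsilon$, $M(t_0)$ measures the signed distance between the stable and unstable manifolds in the Poincaré map; a simple zero of $M$ indicates a transversal intersection and hence the homoclinic tangle associated with chaotic dynamics.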
https://en.wikipedia.org/wiki/Heat%20capacity%20rate
The heat capacity rate is heat transfer terminology used in thermodynamics and different forms of engineering denoting the quantity of heat a flowing fluid of a certain mass flow rate is able to absorb or release per unit temperature change per unit time. It is typically denoted as C, listed from empirical data experimentally determined in various reference works, and is typically stated as a comparison between a hot and a cold fluid, Ch and Cc either graphically, or as a linearized equation. It is an important quantity in heat exchanger technology common to either heating or cooling systems and needs, and the solution of many real world problems such as the design of disparate items as different as a microprocessor and an internal combustion engine. Basis A hot fluid's heat capacity rate can be much greater than, equal to, or much less than the heat capacity rate of the same fluid when cold. In practice, it is most important in specifying heat-exchanger systems, wherein one fluid usually of dissimilar nature is used to cool another fluid such as the hot gases or steam cooled in a power plant by a heat sink from a water source—a case of dissimilar fluids, or for specifying the minimal cooling needs of heat transfer across boundaries, such as in air cooling. As the ability of a fluid to resist change in temperature itself changes as heat transfer occurs changing its net average instantaneous temperature, it is a quantity of interest in designs which have to compensate for the fact that it varies continuously in a dynamic system. While itself varying, such change must be taken into account when designing a system for overall behavior to stimuli or likely environmental conditions, and in particular the worst-case conditions encountered under the high stresses imposed near the limits of operability— for example, an air-cooled engine in a desert climate on a very hot day. If the hot fluid had a much larger heat capacity rate, then when hot and cold fluids went through
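In symbols, the heat capacity rate of a stream is the product of its mass flow rate and specific heat (standard heat-exchanger notation, with units of W/K):

$$C = \dot{m}\, c_p, \qquad C_{\min} = \min(C_h, C_c), \qquad q_{\max} = C_{\min}\,\big(T_{h,\mathrm{in}} - T_{c,\mathrm{in}}\big).$$

The stream with the smaller heat capacity rate experiences the larger temperature change, which is why the ratio $C_{\min}/C_{\max}$ appears throughout effectiveness-NTU analysis of heat exchangers.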
https://en.wikipedia.org/wiki/Nodding%20disease
Nodding disease is a disease which emerged in Sudan in the 1960s. It is a mentally and physically disabling disease that only affects children, typically between the ages of 5 and 15. It is currently restricted to small regions in South Sudan, Tanzania, and northern Uganda. Prior to the South Sudan outbreaks and subsequent limited spread, the disease was first described in 1962 existing in secluded mountainous regions of Tanzania, although the connection between that disease and nodding syndrome was only made recently. Signs and symptoms Children affected by nodding disease experience a complete and permanent stunting of growth. The growth of the brain is also stunted, leading to intellectual disability. The disease is named for the characteristic, pathological nodding seizure, which often begins when the children begin to eat, or sometimes when they feel cold. These seizures are brief and halt after the children stop eating or when they feel warm again. Seizures in nodding disease span a wide range of severity. Neurotoxicologist Peter Spencer, who has investigated the disease, has stated that upon presentation with food, "one or two [children] will start nodding very rapidly in a continuous, pendulous nod. A nearby child may suddenly go into a tonic–clonic seizure, while others will freeze." Severe seizures can cause the child to collapse, leading to further injury. Sub-clinical seizures have been identified in electroencephalograms, and MRI scans have shown brain atrophy and damage to the hippocampus and glia cells. It has been found that no seizures occur when affected individuals are given an unfamiliar or non-traditional food, such as chocolate. Causes It is currently not known what causes the disease, but it is believed to be connected to infestations of the parasitic worm Onchocerca volvulus, which is prevalent in all outbreak areas, and a possible explanation involves the formation of antibodies against parasite antigen that are cross-reactive to leiomo
https://en.wikipedia.org/wiki/Magnapinna%20talismani
Magnapinna talismani is a species of bigfin squid known only from a single damaged specimen. It is characterised by small white nodules present on the ventral surface of its fins. It is the first described species of Magnapinna, although it was not recognized as a member of the genus until over a century later. Description The holotype of M. talismani is a specimen of mantle length (ML) collected in the northern Atlantic Ocean, south of the Azores, at . It was caught by an open bottom trawl at a depth of up to . The capture location of this specimen is very near to that of the as-yet undescribed Magnapinna sp. B. Taxonomy M. talismani was originally placed in the genus Chiroteuthopsis, which is now considered a junior synonym of Mastigoteuthis. Mastigoteuthis talismani was subsequently placed in the genus Magnapinna by Michael Vecchione and Richard E. Young in 2006. Gallery
https://en.wikipedia.org/wiki/Superbase%20%28database%29
Superbase is an end-user desktop database program that started on the Commodore 64 and was ported from that to various operating systems over the course of more than 20 years. It also has generally included a programming language to automate database-oriented tasks, and with later versions included WYSIWYG form and report designers as well as more sophisticated programming capabilities. History It was originally created in 1983 by Precision Software for the Commodore 64 and 128 and later the Amiga and Atari ST. In 1989, it was the first database management system to run on a Windows computer. Precision Software, a UK-based company, was the original creator of the product Superbase. Superbase was and still is used by a large number of people on various platforms. It was often used only as an end-user database but a very large number of applications were built throughout industry, government, and academia and these were often of significant complexity. Some of these applications continue in use to the current day, mostly in small businesses. The initial versions were text mode only, but with the release of the Amiga version, Superbase became the first product to use the now common VCR control panel for browsing through records. It also supported a number of different media formats, including images, sounds, and video. Superbase was often referred to as the multimedia database in early years, when such features were uncommon. The Amiga version also featured an internal language and the capability to generate front end "masks" for queries and reports, years before Microsoft Access. This version was a huge success and that resulted in a version being created for a number of platforms using the same approach. Eventually a Microsoft Windows version was released and a couple of years later the company was sold by its founders to Software Publishing Corporation. SPC sold off the non-Windows versions of the product and after releasing version 2 and in the late alpha stag
https://en.wikipedia.org/wiki/Averch%E2%80%93Johnson%20effect
The Averch–Johnson effect is the tendency of regulated companies to engage in excessive amounts of capital accumulation in order to expand the volume of their profits. If companies' profits to capital ratio is regulated at a certain percentage then there is a strong incentive for companies to over-invest in order to increase profits overall. This investment goes beyond any optimal efficiency point for capital that the company may have calculated as higher profit is almost always desired over and above efficiency. Excessive capital accumulation under rate-of-return regulation is informally known as gold plating. But the so-called Averch-Johnson effect of overcapitalization does not as a general case involve "gold-plating". Mathematical derivation Suppose that a regulated firm wishes to maximize its profit:where is the revenue function, is the firm's capital stock, is the firm's labor stock, is the wage rate, and is the cost of capital. The firm's profit is constrained such that:where is the allowable rate of return. Assume that . We may then form a functional to find the firm's optimal action:where is the Lagrange multiplier (also known as the shadow price). The derivatives of this functional are:Taken together, this implies that:The ratio of the marginal product of capital and the marginal product of labor is:Since this new cost of capital is perceived to be less than the market cost of capital, the firm will tend to overinvest in capital. See also Law and economics Public utilities commission Rate-of-return regulation
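The symbols in the derivation above were stripped during extraction. A reconstruction in conventional notation (the variable names are assumptions consistent with the surrounding prose: R the revenue function, K capital, L labor, w the wage rate, r the cost of capital and s the allowable rate of return) is

$$\max_{K,L}\ \pi = R(K,L) - wL - rK \qquad \text{subject to} \qquad \frac{R(K,L) - wL}{K} \le s, \quad s > r.$$

Forming the Lagrangian $\mathcal{L} = R(K,L) - wL - rK - \lambda\big(R(K,L) - wL - sK\big)$ and setting its derivatives to zero gives

$$R_L = w, \qquad R_K = \frac{r - \lambda s}{1-\lambda} = r - \frac{\lambda}{1-\lambda}\,(s - r) < r \quad \text{for } 0 < \lambda < 1,$$

so the effective (shadow) cost of capital lies below the market cost r, and the ratio of marginal products $R_K/R_L$ falls below the market ratio $r/w$; the regulated firm accordingly substitutes toward capital.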
https://en.wikipedia.org/wiki/Institute%20of%20Food%20and%20Agricultural%20Sciences
The University of Florida Institute of Food and Agricultural Sciences (UF/IFAS) is a teaching, research and Extension scientific organization focused on agriculture and natural resources. It is a partnership of federal, state, and county governments that includes an Extension office in each of Florida's 67 counties, 12 off-campus research and education centers, five demonstration units, the University of Florida College of Agricultural and Life Sciences (including the School of Forest, Fisheries and Geomatics Sciences and the School of Natural Resources and Environment), three 4-H camps, portions of the UF College of Veterinary Medicine, the Florida Sea Grant program, the Emerging Pathogens Institute, the UF Water Institute and the UF Genetics Institute. UF/IFAS research and development covers natural resource industries that have a $101 billion annual impact. The program is ranked #1 in the nation in federally financed higher education R&D expenditures in agricultural sciences and natural resources conservation by the National Science Foundation for FY 2019. Because of this mission and the diversity of Florida's climate and agricultural commodities, IFAS has facilities located throughout Florida. On July 13, 2020, Dr. J. Scott Angle became leader of UF/IFAS and UF's vice president for agriculture and natural resources. History Research The mission of UF/IFAS is to develop knowledge in agricultural, human, and natural resources, and to make that knowledge accessible to sustain and enhance the quality of human life. Faculty members pursue fundamental and applied research that furthers understanding of natural and human systems. Research is supported by state and federally appropriated funds and supplemented by grants and contracts. UF/IFAS received $155.6 million in annual research expenditures in sponsored research for FY 2021. The Florida Agricultural Experiment Station administers and supports research programs in UF/IFAS. The research program was created in
https://en.wikipedia.org/wiki/British%20Origami%20Society
The British Origami Society is a registered charity (no. 293039), devoted to the art of origami (paper folding). The Society has 700 members worldwide and publishes a bi-monthly magazine called "British Origami". They also have a library which is one of the world's largest collections of Origami resources, containing well over 4000 books, and a similar quantity of magazines, journals, convention packs and catalogues. As stated in the constitution of the society, its aims are, "to advance public education in the art of Origami and to promote the study and practice of Origami in education and as a means of therapy for the relief of people who are sick or mentally or physically handicapped". The society was founded at its inaugural meeting held at The Russell Hotel in London 28 October 1967. It was formed from the Origami Portfolio Society which had been founded in 1965. The first president of the new society was Robert Harbin, a noted British magician and author. Later, another notable president was Alfred Bestall, who had been writer and illustrator of Rupert Bear for the London Daily Express, from 1935 to 1965. The Society created the Sydney French medal to honour recipients for outstanding services to origami. The first recipient was David Brill in 1992. Notes and references External links Origami 1967 establishments in the United Kingdom Arts organizations established in 1967 Cultural charities based in the United Kingdom
https://en.wikipedia.org/wiki/Magnapinna%20sp.%20B
Magnapinna sp. B is an undescribed species of bigfin squid known only from a single immature specimen collected in the northern Atlantic Ocean. Description It is characterised by its dark epidermal pigmentation, which is epithelial, as opposed to the chromatophoral pigmentation found in other Magnapinna species. Discovery The only known specimen of Magnapinna sp. B is a juvenile male of mantle length (ML) held in the Bergen Museum. It was caught by the R/V G.O. SARS (MAR-ECO cruise super station 46, local station 374) on July 11, 2004, at .
https://en.wikipedia.org/wiki/Masonry%20trowel
The Masonry trowel is a hand trowel used in brickwork or stonework for levelling, spreading and shaping mortar or concrete. They come in several shapes and sizes depending on the task. The following is a list of the more common masonry trowels: Brick trowel: or mason's trowel is a point-nosed trowel for spreading mortar on bricks or concrete blocks with a technique called "buttering". The shape of the blade allows for very precise control of mortar placement. Bucket trowel: a wide-bladed tool for scooping mortar from a bucket; it is also good for buttering bricks and smoothing mortar. Concrete finishing trowel: is used to smooth a surface after the concrete has begun to set; it is held nearly level to the surface of the concrete, and moved with a sweeping arc across the surface. Corner trowel: used for shaping concrete around internal or external corners; the handle is located at the center of a 90-degree bend in the blade for balance and the ability to apply even pressure to both sides of a corner. Gauging trowel: a round-nosed trowel used for mixing mortar and applying small amounts in confined areas; it is also used to replace crumbled mortar and to patch concrete. Margin trowel: a flat-nosed trowel used to work mortar in tight spaces and corners where a larger pointed trowel will not fit. Pointing trowel: a smaller version of the brick trowel. Useful for filling in small cavities and repairing crumbling mortar joints. Pool trowel or round trowel: a variation of the concrete finishing trowel; rounded blade prevents it from digging into wet concrete. Step trowel: similar to the corner trowel, it is used for shaping inside angles on concrete steps; the center of the 90-degree bend in the blade allows for rounded edges. Tile setter: a brick trowel with an extra-wide blade to hold more mortar than a standard brick trowel. It is ideal for smoothing mortar on large bricks and blocks. Tuck pointer: used for neatly packing mortar between bricks and blocks when repointi
https://en.wikipedia.org/wiki/Magnapinna%20sp.%20C
Magnapinna sp. C is an undescribed species of bigfin squid known only from a single specimen of mantle length (ML) collected in the southern Atlantic Ocean and held in the Natural History Museum. Description It is characterised by several morphological features: the proximal tentacles are more slender than arm pair IV, pigmentation is contained in the chromatophores, and "white nodules" are absent from the fins and glandular regions of the proximal tentacles. Taxonomy Magnapinna sp. C was originally illustrated in The Open Sea in 1956 and identified as Octopodoteuthopsis.
https://en.wikipedia.org/wiki/Specific%20surface%20area
Specific surface area (SSA) is a property of solids defined as the total surface area (SA) of a material per unit mass, (with units of m2/kg or m2/g). Alternatively, it may be defined as SA per solid or bulk volume (units of m2/m3 or m−1). It is a physical value that can be used to determine the type and properties of a material (e.g. soil or snow). It has a particular importance for adsorption, heterogeneous catalysis, and reactions on surfaces. Measurement Values obtained for specific surface area depend on the method of measurement. In adsorption based methods, the size of the adsorbate molecule (the probe molecule), the exposed crystallographic planes at the surface and measurement temperature all affect the obtained specific surface area. For this reason, in addition to the most commonly used Brunauer–Emmett–Teller (N2-BET) adsorption method, several techniques have been developed to measure the specific surface area of particulate materials at ambient temperatures and at controllable scales, including methylene blue (MB) staining, ethylene glycol monoethyl ether (EGME) adsorption, electrokinetic analysis of complex-ion adsorption and a Protein Retention (PR) method. A number of international standards exist for the measurement of specific surface area, including ISO standard 9277. Calculation The SSA can be simply calculated from a particle size distribution, making some assumption about the particle shape. This method, however, fails to account for surface associated with the surface texture of the particles. Adsorption The SSA can be measured by adsorption using the BET isotherm. This has the advantage of measuring the surface of fine structures and deep texture on the particles. However, the results can differ markedly depending on the substance adsorbed. The BET theory has inherent limitations but has the advantage to be simple and to yield adequate relative answers when the solids are chemically similar. In relatively rare cases, more complicated mo
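For an idealized material of smooth, monodisperse spheres of diameter $d$ and density $\rho$ (exactly the shape assumption the calculation-based estimate relies on, and which ignores surface texture), the geometric result is

$$\mathrm{SSA} = \frac{A}{m} = \frac{\pi d^2}{\rho\,\pi d^3/6} = \frac{6}{\rho\, d}.$$

For example, silica spheres with $\rho \approx 2650\ \mathrm{kg/m^3}$ and $d = 1\ \mu\mathrm{m}$ give $\mathrm{SSA} \approx 2.3\ \mathrm{m^2/g}$, whereas an adsorption measurement on a porous or rough powder of the same nominal particle size would typically report a larger value.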
https://en.wikipedia.org/wiki/Cooperative%20Human%20Linkage%20Center
CHLC (or Cooperative Human Linkage Center) was a National Institutes of Health project to map a large number of human genome markers, prior to the completion of the Human Genome Project. The project was stopped in 1999. National Institutes of Health Genetic mapping
https://en.wikipedia.org/wiki/Laser%20voltage%20prober
The laser voltage probe (LVP) is a laser-based voltage and timing waveform acquisition system which is used to perform failure analysis on flip-chip integrated circuits. The device to be analyzed is de-encapsulated in order to expose the silicon surface. The silicon substrate is thinned mechanically using a back side mechanical thinning tool. The thinned device is then mounted on a movable stage and connected to an electrical stimulus source. Signal measurements are performed through the back side of the device after substrate thinning has been performed. The device being probed must be electrically stimulated using a repeating test pattern, with a trigger pulse provided to the LVP as reference. The operation of the LVP is similar to that of a sampling oscilloscope. Theory of operation The LVP instrument measures voltage waveform signals in the device diffusion regions. Device imaging is accomplished through the use of a laser scanning microscope (LSM). The LVP uses dual infrared (IR) lasers to perform both device imaging and waveform acquisition. One laser is used to acquire images or waveforms from the device, while the second laser provides a reference which may be used to subtract unwanted noise from the signal data being acquired. On an electrically active device, the instrument monitors the changes in the phase of the electromagnetic field surrounding a signal being applied to a junction. The instrument obtains voltage waveform and timing information by monitoring the interaction of laser light with the changes in the electric field across a p-n junction. As the laser reaches the silicon surface, a certain amount of that light is reflected back. The amount of reflected laser light from the junction is sampled at various points in time. The changing electromagnetic field at the junction affects the amount of laser light that is reflected back. By plotting the variations in reflected laser light versus time, it is possible to construct a timing waveform of the
https://en.wikipedia.org/wiki/Large-conductance%20mechanosensitive%20channel
Large conductance mechanosensitive ion channels (MscLs) (TC# 1.A.22) are a family of pore-forming membrane proteins that are responsible for translating stresses at the cell membrane into an electrophysiological response. MscL has a relatively large conductance, 3 nS, making it permeable to ions, water, and small proteins when opened. MscL acts as stretch-activated osmotic release valve in response to osmotic shock. History MscL was first discovered on the surface of giant Escherichia coli spheroplasts using patch-clamp technique. Subsequently, the Escherichia coli MscL (Ec-MscL) gene was cloned in 1994. Following the cloning of MscL, the crystal structure of Mycobacterium tuberculosis MscL (Tb-MscL), was obtained in its closed conformation. In addition, the crystal structure of Staphylococcus aureus MscL (Sa-MscL) and Ec-MscL have been determined using X-ray crystallography and molecular model respectively. However, some evidence suggests that the Sa-MscL structure is not physiological, and is due to the detergent used in crystallization. Structure Similar to other ion channels, MscLs are organized as symmetric oligomers with the permeation pathway formed by the packing of subunits around the axis of rotational symmetry. Unlike MscS, which is heptameric, MscL is likely pentameric; although the Sa-MscL appears to be a tetramer in a crystal structure, this may be an artifact. MscL contains two transmembrane helices that are packed in an up-down/nearest neighbor topology. The permeation pathway of the MscL is approximately funnel shaped, with larger opening facing the periplasmic surface of the membrane and the narrowest point near the cytoplasm. At the narrowest point, the pore is constricted by the side chains of symmetry-related residues in Ec-MscL: Leu19 and Val23. The pore diameter of MscL in the open state has been estimated to ~3 nm, which accommodates the passage of small protein up to 9 kD. Ec-MscL consists of five identical subunits, each 136 amino acids
https://en.wikipedia.org/wiki/Solving%20quadratic%20equations%20with%20continued%20fractions
In mathematics, a quadratic equation is a polynomial equation of the second degree. The general form is where a ≠ 0. The quadratic equation on a number can be solved using the well-known quadratic formula, which can be derived by completing the square. That formula always gives the roots of the quadratic equation, but the solutions are expressed in a form that often involves a quadratic irrational number, which is an algebraic fraction that can be evaluated as a decimal fraction only by applying an additional root extraction algorithm. If the roots are real, there is an alternative technique that obtains a rational approximation to one of the roots by manipulating the equation directly. The method works in many cases, and long ago it stimulated further development of the analytical theory of continued fractions. Simple example Here is a simple example to illustrate the solution of a quadratic equation using continued fractions. We begin with the equation and manipulate it directly. Subtracting one from both sides we obtain This is easily factored into from which we obtain and finally Now comes the crucial step. We substitute this expression for x back into itself, recursively, to obtain But now we can make the same recursive substitution again, and again, and again, pushing the unknown quantity x as far down and to the right as we please, and obtaining in the limit the infinite continued fraction By applying the fundamental recurrence formulas we may easily compute the successive convergents of this continued fraction to be 1, 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, ..., where each successive convergent is formed by taking the numerator plus the denominator of the preceding term as the denominator in the next term, then adding in the preceding denominator to form the new numerator. This sequence of denominators is a particular Lucas sequence known as the Pell numbers. Algebraic explanation We can gain further insight into this simple example by consider
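The equations in the worked example above were stripped during extraction, but the listed convergents 1, 3/2, 7/5, 17/12, 41/29, ... (with Pell-number denominators) are the convergents of $\sqrt{2}$, so the example is evidently $x^2 = 2$. A hedged reconstruction of the missing steps:

$$x^2 = 2 \;\Longrightarrow\; x^2 - 1 = 1 \;\Longrightarrow\; (x+1)(x-1) = 1 \;\Longrightarrow\; x - 1 = \frac{1}{1+x} \;\Longrightarrow\; x = 1 + \frac{1}{1+x},$$

and substituting the right-hand side for $x$ inside itself repeatedly yields

$$x = 1 + \cfrac{1}{2 + \cfrac{1}{2 + \cfrac{1}{2 + \ddots}}} = \sqrt{2}.$$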
https://en.wikipedia.org/wiki/Negative%20selection%20%28natural%20selection%29
In natural selection, negative selection or purifying selection is the selective removal of alleles that are deleterious. This can result in stabilising selection through the purging of deleterious genetic polymorphisms that arise through random mutations. Purging of deleterious alleles can be achieved on the population genetics level, with as little as a single point mutation being the unit of selection. In such a case, carriers of the harmful point mutation have fewer offspring each generation, reducing the frequency of the mutation in the gene pool. In the case of strong negative selection on a locus, the purging of deleterious variants will result in the occasional removal of linked variation, producing a decrease in the level of variation surrounding the locus under selection. The incidental purging of non-deleterious alleles due to such spatial proximity to deleterious alleles is called background selection. This effect increases with lower mutation rate but decreases with higher recombination rate. Purifying selection can be split into purging by non-random mating (assortative mating) and purging by genetic drift. Purging by genetic drift can remove primarily deeply recessive alleles, whereas natural selection can remove any type of deleterious alleles. See also Assortative mating Balancing selection Directional selection Disruptive selection Dysgenics Fluctuating selection Genetic purging Koinophilia Mutation–selection balance Stabilizing selection
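As a concrete illustration of "carriers of the harmful point mutation have fewer offspring each generation", the following minimal sketch iterates the standard one-locus haploid recursion with a constant selection coefficient s (the model choice and parameter values are assumptions for illustration, not taken from the article):

```python
def purge(q0: float, s: float, generations: int) -> list[float]:
    """Deterministic decline of a deleterious allele under purifying selection.

    Haploid one-locus model: carriers have relative fitness 1 - s, so the
    allele frequency follows q' = q(1 - s) / (1 - s*q) each generation.
    """
    freqs = [q0]
    q = q0
    for _ in range(generations):
        q = q * (1 - s) / (1 - s * q)
        freqs.append(q)
    return freqs

# A mildly deleterious allele (s = 0.05) starting at 10% frequency
# falls to roughly 1% after about 50 generations.
print(purge(0.10, 0.05, 50)[-1])
```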
https://en.wikipedia.org/wiki/C2%20domain
A C2 domain is a protein structural domain involved in targeting proteins to cell membranes. The typical version (PKC-C2) has a beta-sandwich composed of 8 β-strands that co-ordinates two or three calcium ions, which bind in a cavity formed by the first and final loops of the domain, on the membrane binding face. Many other C2 domain families don't have calcium binding activity. Coupling with other domains C2 domains are frequently found coupled to enzymatic domains; for example, the C2 domain in PTEN, brings the phosphatase domain into contact with the plasma membrane, where it can dephosphorylate its substrate, phosphatidylinositol (3,4,5)-trisphosphate (PIP3), without removing it from the membrane - which would be energetically very costly. PTEN consists of two domains, a protein tyrosine phosphatase domain and a C2 domain. This domain pair constitutes a superdomain, a heritable unit that is found in various proteins in fungi, plants and animals. In addition, phosphatidylinositol 3-kinase (PI3-kinase), an enzyme that phosphorylates phosphoinositides on the 3-hydroxyl group of the inositol ring, also uses a C2 domain to bind to the membrane (e.g. 1e8w PDB entry). Evolution The C2 domain is currently only known from eukaryotes and the prokaryote Clostridium perfringens where it is part of the alpha-toxin. Over 17 distinct clades of C2 domains have been identified. Most C2 families can be traced back to basal eukaryotic species indicating an early diversification before the last eukaryotic common ancestor (LECA). Only the PKC-C2 domain family contains conserved calcium-binding residues, suggesting the typical calcium-dependent membrane interaction is a derived feature limited in PKC-C2 domains. Calcium and Lipid selectivity C2 domains are unique among membrane targeting domains in that they show wide range of lipid selectivity for the major components of cell membranes, including phosphatidylserine and phosphatidylcholine. This C2 domain is about 116 amino-aci
https://en.wikipedia.org/wiki/Stardent%20Inc.
Stardent Computer, Inc. was a manufacturer of graphics supercomputer workstations in the late 1980s. The company was formed in 1989 when Ardent Computer Corporation (formerly Dana Computer, Inc.) and Stellar Computer Inc. merged. History Stellar Computer Stellar Computer was founded in 1985 in Newton, Massachusetts, and headed by William Poduska, who had previously founded Prime Computer and Apollo Computer. This company aimed to produce a workstation system with enough performance to be a serious threat to the Titan, and at a lower price. Ardent responded by starting work on a new desktop system called Stiletto, which featured two MIPS R3000s (paired with two R3010 FPUs) and four i860s for graphics processing (the i860s replaced the vector units). Their first product was demonstrated in March 1988. An investment from Japanese company Mitsui and others was announced in June 1988, bringing the total capital raised to $48 million. Ardent Computer Corporation At almost the same time, in November 1985, Allen H. Michels and Matthew Sanders III co-founded Dana Computer, Inc. in Sunnyvale, California. The company sought to produce a desktop multiprocessing supercomputer dedicated to graphics that could support up to four processor units. Each processor unit consisted initially of a MIPS R2000 CPU, and later a R3000, connected to a custom vector processor. The vector unit held 8,192 64-bit registers that could be used in any way from 8,192 one-word to thirty-two 256-word registers. This compares to modern SIMD systems which allow for perhaps eight to sixteen 128-bit registers with a small variety of addressing schemes. Their goal was to release their Titan supercomputer in July 1987 at a $50,000 price point. By late 1986, however, it became clear that this was unrealistic. A second round of funding came from Kubota Corporation, a Japanese heavy industries company, which had cash to spare and was looking for new opportunities. Kubota agreed not only to fund the comple
https://en.wikipedia.org/wiki/C1%20domain
C1 domain (also known as phorbol esters/diacylglycerol binding domain) binds an important secondary messenger diacylglycerol (DAG), as well as the analogous phorbol esters. Phorbol esters can directly stimulate protein kinase C, PKC. Phorbol esters (such as PMA) are analogues of DAG and potent tumor promoters that cause a variety of physiological changes when administered to both cells and tissues. DAG activates a family of serine/threonine protein kinases, collectively known as protein kinase C (PKC). Phorbol esters can directly stimulate PKC. The N-terminal region of PKC, known as C1, binds PMA and DAG in a phospholipid and zinc-dependent fashion. The C1 region contains one or two copies of a cysteine-rich domain, which is about 50 amino-acid residues long, and which is essential for DAG/PMA-binding. The DAG/PMA-binding domain binds two zinc ions; the ligands of these metal ions are probably the six cysteines and two histidines that are conserved in this domain. Human proteins containing this domain AKAP13; ARAF; ARHGAP29; ARHGEF2; BRAF; CDC42BPA; CDC42BPB; CDC42BPG; CHN1; CHN2; CIT; CIC; DGKA; DGKB; DGKD; DGKE; DGKG; DGKH; DGKI; DGKK; DGKQ; DGKZ; GMIP; HMHA1; KSR1; KSR2; MYO9A; MYO9B; PDZD8; PRKCA; PRKCB1; PRKCD; PRKCE; PRKCG; PRKCH; PRKCI; PRKCN; PRKCQ; PRKCZ; PRKD1; PRKD2; PRKD3; RACGAP1; RAF1; RASGRP; RASGRP1; RASGRP2; RASGRP3; RASGRP4; RASSF1; RASSF5; ROCK1; ROCK2; STAC; STAC2; STAC3; TENC1; UNC13A; UNC13B; UNC13C; VAV1; VAV2; VAV3;
https://en.wikipedia.org/wiki/FYVE%20domain
In molecular biology the FYVE zinc finger domain is named after the four cysteine-rich proteins: Fab 1 (yeast orthologue of PIKfyve), YOTB, Vac 1 (vesicle transport protein), and EEA1, in which it has been found. FYVE domains bind phosphatidylinositol 3-phosphate, in a way dependent on its metal ion coordination and basic amino acids. The FYVE domain inserts into cell membranes in a pH-dependent manner. The FYVE domain has been connected to vacuolar protein sorting and endosome function. Structure The FYVE domain is composed of two small beta hairpins (or zinc knuckles) followed by an alpha helix. The FYVE finger binds two zinc ions. The FYVE finger has eight potential zinc coordinating cysteine positions and is characterized by having basic amino acids around the cysteines. Many members of this family also include two histidines in a sequence motif: The FYVE finger is structurally similar to the RING domain and the PHD finger. Examples The following is a list of human proteins containing this domain: ANKFY1, EEA1, FGD1, FGD2, FGD3, FGD4, FGD5, FGD6, FYCO1, HGS, MTMR3, MTMR4, PIKFYVE, PLEKHF1, PLEKHF2 RUFY1, RUFY2, RUFY3, RUFY4, WDFY1, WDFY2, WDFY3, ZFYVE1, ZFYVE9, ZFYVE16, ZFYVE19, ZFYVE20, ZFYVE21, ZFYVE26, ZFYVE27, ZFYVE28
https://en.wikipedia.org/wiki/Z-channel%20%28information%20theory%29
In coding theory and information theory, a Z-channel or binary asymmetric channel is a communications channel used to model the behaviour of some data storage systems. Definition A Z-channel is a channel with binary input and binary output, where each 0 bit is transmitted correctly, but each 1 bit has probability p of being transmitted incorrectly as a 0, and probability 1–p of being transmitted correctly as a 1. In other words, if X and Y are the random variables describing the probability distributions of the input and the output of the channel, respectively, then the crossovers of the channel are characterized by the conditional probabilities: Capacity The channel capacity of the Z-channel with the crossover 1 → 0 probability p, when the input random variable X is distributed according to the Bernoulli distribution with probability for the occurrence of 0, is given by the following equation: where for the binary entropy function . This capacity is obtained when the input variable X has Bernoulli distribution with probability of having value 0 and of value 1, where: For small p, the capacity is approximated by as compared to the capacity of the binary symmetric channel with crossover probability p. {| class="toccolours collapsible collapsed" width="80%" style="text-align:left" !Calculation |- | To find the maximum we differentiate And we see the maximum is attained for yielding the following value of as a function of p |} For any p, (i.e. more 0s should be transmitted than 1s) because transmitting a 1 introduces noise. As , the limiting value of is . Bounds on the size of an asymmetric-error-correcting code Define the following distance function on the words of length n transmitted via a Z-channel Define the sphere of radius t around a word of length n as the set of all the words at distance t or less from , in other words, A code of length n is said to be t-asymmetric-error-correcting if for any two codewords , one has . Denote by
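The capacity expressions referred to in the passage above were lost in extraction. In standard notation (a reconstruction; here $\alpha$ denotes the probability of transmitting a 1 and $\operatorname{H}(\cdot)$ the binary entropy function), the mutual information of the Z-channel is

$$I(X;Y) = \operatorname{H}\!\big(\alpha(1-p)\big) - \alpha\,\operatorname{H}(p),$$

which is maximized at

$$\alpha^{\ast} = \frac{1}{(1-p)\left(1 + 2^{\operatorname{H}(p)/(1-p)}\right)},$$

giving the closed-form capacity

$$\mathsf{cap}(Z) = \log_2\!\left(1 + (1-p)\,p^{\,p/(1-p)}\right).$$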
https://en.wikipedia.org/wiki/Sussman%20anomaly
The Sussman anomaly is a problem in artificial intelligence, first described by Gerald Sussman, that illustrates a weakness of noninterleaved planning algorithms, which were prominent in the early 1970s. Most modern planning systems are not restricted to noninterleaved planning and thus can handle this anomaly. While the significance/value of the problem is now a historical one, it is still useful for explaining why planning is non-trivial. In the problem, three blocks (labeled A, B, and C) rest on a table. The agent must stack the blocks such that A is atop B, which in turn is atop C. However, it may only move one block at a time. The problem starts with B on the table, C atop A, and A on the table: However, noninterleaved planners typically separate the goal (stack A atop B atop C) into subgoals, such as: get A atop B get B atop C Suppose the planner starts by pursuing Goal 1. The straightforward solution is to move C out of the way, then move A atop B. But while this sequence accomplishes Goal 1, the agent cannot now pursue Goal 2 without undoing Goal 1, since both A and B must be moved atop C: If instead the planner starts with Goal 2, the most efficient solution is to move B. But again, the planner cannot pursue Goal 1 without undoing Goal 2: The problem was first identified by Sussman as a part of his PhD research. Sussman (and his supervisor, Marvin Minsky) believed that intelligence requires a list of exceptions or tricks, and developed a modular planning system for "debugging" plans. See also STRIPS Automated planning Greedy algorithm Sources G.J. Sussman (1975) A Computer Model of Skill Acquisition Elsevier Science Inc. New York, NY, USA. Book version of his PhD thesis. Automated planning and scheduling 1975 introductions
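The interference between the two subgoals can be made concrete with a small blocks-world simulation (illustrative only; the state encoding and move rules below are assumptions, not part of the original formulation):

```python
# State maps each block to what it rests on ("table" or another block).
start = {"A": "table", "B": "table", "C": "A"}   # goal: A on B and B on C

def clear(state, block):
    """A block is clear if nothing rests on it."""
    return block not in state.values()

def move(state, block, dest):
    """Move a single clear block onto a clear destination (or the table)."""
    assert clear(state, block) and (dest == "table" or clear(state, dest))
    new = dict(state)
    new[block] = dest
    return new

# Pursue subgoal 1 first (A atop B): park C on the table, then stack A on B.
s = move(start, "C", "table")
s = move(s, "A", "B")
assert s["A"] == "B"          # subgoal 1 achieved...
assert not clear(s, "B")      # ...but B is now buried under A, so subgoal 2
                              # (B atop C) cannot proceed without undoing subgoal 1.
```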
https://en.wikipedia.org/wiki/Ivan%20Ivanov%20%28mathematician%29
Ivan Ivanovich Ivanov (; 11 August 1862 – 17 December 1939) was a Russian-Soviet mathematician who worked in the field of number theory. Together with Georgy Voronoy he continued Pafnuty Chebyshev's work on the subject. Life and work Ivanov was born in Saint Petersburg, Russia. He finished his studies in mathematics at Saint Petersburg University with his candidate thesis, "About prime numbers". In 1891 there followed his master thesis "integral complex numbers", and in 1901 his doctoral thesis, "About some questions in connection with the number of prime numbers". Starting in 1891, Ivanov lectured at St. Petersburg University; from 1896, he lectured at the women's university, and after 1902 at Saint Petersburg Polytechnical University. In 1924 Ivanov was elected corresponding member of the Russian Academy of Sciences.
https://en.wikipedia.org/wiki/%28E%29-4-Hydroxy-3-methyl-but-2-enyl%20pyrophosphate
(E)-4-Hydroxy-3-methyl-but-2-enyl pyrophosphate (HMBPP or HMB-PP) is an intermediate of the MEP pathway (non-mevalonate pathway) of isoprenoid biosynthesis. The enzyme HMB-PP synthase (GcpE, IspG) catalyzes the conversion of 2-C-methyl-D-erythritol 2,4-cyclodiphosphate (MEcPP) into HMB-PP. HMB-PP is then converted further to isopentenyl pyrophosphate (IPP) and dimethylallyl pyrophosphate (DMAPP) by HMB-PP reductase (LytB, IspH). HMB-PP is an essential metabolite in most pathogenic bacteria including Mycobacterium tuberculosis as well as in malaria parasites, but is absent from the human host. HMB-PP is the physiological activator ("phosphoantigen") for human Vγ9/Vδ2 T cells, the major γδ T cell population in peripheral blood. With a bioactivity of 0.1 nM it is 10,000-10,000,000 times more potent than any other natural compound, such as IPP or alkyl amines. HMB-PP functions in this capacity by binding the B30.2 domain of BTN3A1.
https://en.wikipedia.org/wiki/Non-mevalonate%20pathway
The non-mevalonate pathway—also appearing as the mevalonate-independent pathway and the 2-C-methyl-D-erythritol 4-phosphate/1-deoxy-D-xylulose 5-phosphate (MEP/DOXP) pathway—is an alternative metabolic pathway for the biosynthesis of the isoprenoid precursors isopentenyl pyrophosphate (IPP) and dimethylallyl pyrophosphate (DMAPP). The currently preferred name for this pathway is the MEP pathway, since MEP is the first committed metabolite on the route to IPP. Isoprenoid precursor biosynthesis The mevalonate pathway (MVA pathway or HMG-CoA reductase pathway) and the MEP pathway are metabolic pathways for the biosynthesis of isoprenoid precursors: IPP and DMAPP. Whereas plants use both MVA and MEP pathway, most organisms only use one of the pathways for the biosynthesis of isoprenoid precursors. In plant cells IPP/DMAPP biosynthesis via the MEP pathway takes place in plastid organelles, while the biosynthesis via the MVA pathway takes place in the cytoplasm. Most gram-negative bacteria, the photosynthetic cyanobacteria and green algae use only the MEP pathway. Bacteria that use the MEP pathway include important pathogens such Mycobacterium tuberculosis. IPP and DMAPP serve as precursors for the biosynthesis of isoprenoid (terpenoid) molecules used in processes as diverse as protein prenylation, cell membrane maintenance, the synthesis of hormones, protein anchoring and N-glycosylation in all three domains of life. In photosynthetic organisms MEP-derived precursors are used for the biosynthesis of photosynthetic pigments, such as the carotenoids and the phytol chain of chlorophyll and light harvesting pigments. Bacteria such as Escherichia coli have been engineered for co-expressing biosynthesis genes of both the MEP and the MVA pathway. Distribution of the metabolic fluxes between the MEP and the MVA pathway can be studied using 13C-glucose isotopomers. Reactions The reactions of the MEP pathway are as follows, taken primarily from Eisenreich and co-workers, excep
https://en.wikipedia.org/wiki/Software%20portability
A computer program is said to be portable if there is very low effort required to make it run on different platforms. The pre-requirement for portability is the generalized abstraction between the application logic and system interfaces. When software with the same functionality is produced for several computing platforms, portability is the key issue for development cost reduction. Strategies for portability Software portability may involve: Transferring installed program files to another computer of basically the same architecture. Reinstalling a program from distribution files on another computer of basically the same architecture. Building executable programs for different platforms from source code; this is what is usually understood by "porting". Similar systems When operating systems of the same family are installed on two computers with processors with similar instruction sets it is often possible to transfer the files implementing program files between them. In the simplest case, the file or files may simply be copied from one machine to the other. However, in many cases, the software is installed on a computer in a way which depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using installed operating system and supporting software components, and using different drives or directories. In some cases, software, usually described as "portable software", is specifically designed to run on different computers with compatible operating systems and processors, without any machine-dependent installation. Porting is no more than transferring specified directories and their contents. Software installed on portable mass storage devices such as USB sticks can be used on any compatible computer on simply plugging the storage device in, and stores all configuration information on the removable device. Hardware- and software-specific information is often stored in configuration files in specified locations (e.g.
https://en.wikipedia.org/wiki/Alexander%20Monro%20Secundus
Alexander Monro of Craiglockhart and Cockburn (22 May 1733 – 2 October 1817) was a Scottish anatomist, physician and medical educator. He is typically known as secundus or Junior to distinguish him as the second of three generations of physicians of the same name. His students included the naval physician and abolitionist Thomas Trotter. Monro was from the distinguished Monro of Auchenbowie family. His major achievements included describing the lymphatic system, providing the most detailed elucidation of the musculo-skeletal system to date, and introducing clinical medicine into the curriculum. He is known for the Monro–Kellie doctrine on intracranial pressure, a hypothesis developed by Monro and his former pupil George Kellie, who worked as a surgeon in the port of Leith. Life Alexander Monro, the third and youngest son of Isabella Macdonald of Sleat and Alexander Monro Primus, was born at Edinburgh on 20 May 1733. He was sent with his brothers to Mr Mundell's school, where he learned the rudiments of Latin and Greek, and showed early evidence of great ability. Among his school-fellows were Ilay Campbell, who was afterwards Lord President of the Court of Session, and William Ramsay of Barnton, the banker. Monro's father decided to make him his successor and sent him to the University of Edinburgh when he was 12 years old, to attend the ordinary course of philosophy before beginning his professional training. He studied mathematics under Colin Maclaurin and ethics under Sir John Pringle. He was also a favourite of Matthew Stewart, Professor of Experimental Philosophy. He showed an interest in anatomy and, after entering on the medical course aged 18, he became a useful assistant to his father in the dissecting room. He attended the lectures of Drs Rutherford, Andrew Plummer, Alston and Sinclair. He possessed an insatiable thirst for medical knowledge, an uncommon share of perseverance, and a good memory. In the session of 1753–54, his father Alexander Monro Primus foun
https://en.wikipedia.org/wiki/Tropical%20Village
Tropical Village is a miniature amusement theme park in Ayer Hitam, Batu Pahat District, Johor, Malaysia. The replicas of the structures are built within the theme park. The park is divided into four sections: Landmarks, the Leisure Corner, the Playground and the Agricultural Enclosure. The Landmarks section is a garden with famous landmarks from around the world. It contains a section dedicated to Malay Culture like Kompang sculptures and Kuda Kepang. It also has a Mini Malaysia section, which contains replicas of the Petronas Towers, A Famosa Fort and Mini World. The Leisure Corner is targeted to younger visitors with its Haunted House, House of Mirrors and Dinosaur Train amongst other attractions. The Playground is also a children-oriented section of the park. It includes the Oriental Island, Pet Corner and Garden of the Shy Monkey. The park also has dorms for visitors who want to stay overnight. List of major attractions in the Mini World, Tropical Village Europe & USA region Leaning Tower of Pisa - Italy Statue of Liberty - USA Colosseum - Italy Eiffel Tower - France Hollywood Sign - USA Windmills - Netherlands The Little Mermaid (statue) - Denmark Atomium - Belgium Asia region Great Wall of China - China Sigiriya Lion Rock - Sri Lanka Wat Pho Reclining Buddha - Thailand Borobudur - Indonesia Merlion - Singapore Taj Mahal - India Kuwait Towers - Kuwait Giza & Spinx - Egypt Other famous replicas Moai sculpture - Easter Island Japan kokeshi - Japan Olmec head - Mexico Budai - Chinese God Bruce Lee Sculpture - Hong Kong Jeju Island Sculpture - South Korea Transportation The theme park is accessible by bus from Larkin Sentral (2, 888) in Johor Bahru. See also List of tourist attractions in Malaysia
https://en.wikipedia.org/wiki/The%20W.%20Alton%20Jones%20Cell%20Science%20Center
The W. Alton Jones Cell Science Center (1971–1995) was a non-profit research and education center on 10 Old Barn Road in Lake Placid, New York. The Center was established by a gift of of land and $3 million to the Tissue Culture Association from the W. Alton Jones Foundation through efforts of Nettie Marie Jones, widow of W. Alton Jones, who was former chairman of the Board of Cities Service Company (see Citgo). The original tax-free gift was accompanied by the institutional charter that use of the facility would be restricted forever to non-profit activities related to research and education on the biology of cells. Cell Culture Research and Education Center 1971-1982 The Cell Center was largely the vision of cell culture pioneer Dr. George Otto Gey, director of the Finney-Howell Cancer Research Laboratory at the Johns Hopkins Hospital, a founder and first President of The Tissue Culture Association (now the Society for In Vitro Biology). Dr. Gey was introduced to Nettie Marie Jones, widow of W. Alton Jones, through her daughter Patricia Jones, an employee or acquaintance at Johns Hopkins. A highlight of the W. Alton Jones Cell Science Center building was the George and Margaret Gey Library. The objective was to provide a center in the peaceful setting of the Adirondack Mountains where experts in the fields of genetics, immunology, virology, insect physiology and other invertebrates unified by common interest in the art and science of culturing cells outside the body could come together, pool their ideas and techniques, and convey them to others. In the period 1971 to 1980, the Cell Center consisted of research groups oriented around the theme of cell and tissue culture, provided specialty 1- to 3-week courses and hosted international meetings on the theme. The first Director was Dr. Donald Merchant, followed by Dr. Paul Chapple. For the period 1971 through 1979 the W. Alton Jones Foundation contributed annually to the operating expenses and mission of the Cell
https://en.wikipedia.org/wiki/Cyclical%20asymmetry
Cyclical asymmetry is an economic term that describes any large imbalance in economic factors arising from purely cyclical reactions by a market or nation. These may include employment rates, debt retention, interest rates, bond strengths, or stock market imbalances. Types There are two main types of cyclical asymmetry: fiscal and economic. Fiscal cyclical asymmetry Fiscal cyclical asymmetry is based on national or international changes to fiscal policy as a result of cyclical intervention in money markets or currency exchanges. A simple example is the reaction of the US Federal Reserve in raising interest rates when the dollar performs too well against other currencies such as the euro and the yen. Since currency exchanges are often predicated on the results of economic changes such as quarterly profit results, the Federal Reserve must be cautious not to overreact, or such fiscal changes will actually exacerbate the situation by making investment in America more attractive rather than equalizing exchange rates. When this does occur, it is a cyclical asymmetry. Economic cyclical asymmetry Economic cyclical asymmetry is usually based on cyclical trends in national markets, such as the labor market. A simple example is found in the yearly changes in demand for labor. Job markets are, by nature, cyclical, with upswings in certain sectors such as retail near year's end, and in construction during the spring and summer. While job creation and destruction usually balance out for the nation as a whole, when disturbances in the markets occur, the disruptions can cause higher than usual unemployment, which has a negative effect on the economy and causes further economic stress. Causes The primary cause of cyclical asymmetry is rapid change in an otherwise regularly cyclical model, and overreactions to counteract such changes. Similar to a man walking across a swaying tightrope, any economic model subject to cyclical stressors must find a balance. When it does not, cyclical asymmetrie
https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg%20method
In mathematics, the Runge–Kutta–Fehlberg method (or Fehlberg method) is an algorithm in numerical analysis for the numerical solution of ordinary differential equations. It was developed by the German mathematician Erwin Fehlberg and is based on the large class of Runge–Kutta methods. The novelty of Fehlberg's method is that it is an embedded method from the Runge–Kutta family, meaning that identical function evaluations are used in conjunction with each other to create methods of varying order and similar error constants. The method presented in Fehlberg's 1969 paper has been dubbed the RKF45 method, and is a method of order O(h^4) with an error estimator of order O(h^5). By performing one extra calculation, the error in the solution can be estimated and controlled by using the higher-order embedded method, which allows an adaptive stepsize to be determined automatically. Butcher tableau for Fehlberg's 4(5) method Any Runge–Kutta method is uniquely identified by its Butcher tableau. In the embedded pair proposed by Fehlberg, the first row of coefficients at the bottom of the tableau gives the fifth-order accurate method, and the second row gives the fourth-order accurate method. Implementing an RK4(5) Algorithm The coefficients found by Fehlberg for Formula 1 (derivation with his parameter α2 = 1/3) are given in the table below, using array indexing of base 1 instead of base 0 to be compatible with most computer languages. Fehlberg outlines a solution to solving a system of n differential equations of the form dy_i/dt = f_i(t, y_1, ..., y_n), i = 1, ..., n, to iteratively solve for y_i(t + h), where h is an adaptive stepsize to be determined algorithmically. The solution is the weighted average of six increments k_1, ..., k_6, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f on the right-hand side of the differential equation. The weighted average is y(t + h) = y(t) + CH_1 k_1 + CH_2 k_2 + CH_3 k_3 + CH_4 k_4 + CH_5 k_5 + CH_6 k_6, and the estimate of the truncation error is TE = |CT_1 k_1 + CT_2 k_2 + CT_3 k_3 + CT_4 k_4 + CT_5 k_5 + CT_6 k_6|, where the coefficients CH_i and CT_i are taken from Fehlberg's table. At the completion of
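To make the embedded 4(5) idea concrete, here is a minimal sketch of one adaptive RKF45 step in Python for a scalar equation, using the commonly tabulated Fehlberg coefficients. The step-size update rule (safety factor 0.9 and a 1/5 exponent) is one conventional choice and is shown for illustration rather than as Fehlberg's exact prescription; the driver function and its parameter names are invented for this example.

```python
def rkf45_step(f, t, y, h):
    """One Runge-Kutta-Fehlberg step: six slope evaluations shared by an
    embedded pair of 4th- and 5th-order estimates of y(t + h)."""
    k1 = h * f(t, y)
    k2 = h * f(t + h/4,     y + k1/4)
    k3 = h * f(t + 3*h/8,   y + 3*k1/32 + 9*k2/32)
    k4 = h * f(t + 12*h/13, y + 1932*k1/2197 - 7200*k2/2197 + 7296*k3/2197)
    k5 = h * f(t + h,       y + 439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104)
    k6 = h * f(t + h/2,     y - 8*k1/27 + 2*k2 - 3544*k3/2565 + 1859*k4/4104 - 11*k5/40)
    y4 = y + 25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5                   # 4th order
    y5 = y + 16*k1/135 + 6656*k3/12825 + 28561*k4/56430 - 9*k5/50 + 2*k6/55   # 5th order
    return y4, abs(y5 - y4)                                                   # solution, error estimate

def integrate(f, t, y, t_end, h=0.1, tol=1e-7):
    """Drive rkf45_step with a simple adaptive step-size policy (one common choice)."""
    while t < t_end:
        h = min(h, t_end - t)
        y_new, err = rkf45_step(f, t, y, h)
        if err <= tol:                 # accept the step
            t, y = t + h, y_new
        h *= 0.9 * (tol / err) ** 0.2 if err > 0 else 2.0   # grow or shrink the stepsize
    return y

# Example: y' = y, y(0) = 1, so y(1) should be close to e = 2.71828...
print(integrate(lambda t, y: y, 0.0, 1.0, 1.0))
```

The same six k values feed both estimates, so the error control costs only the extra weighted sums, not extra evaluations of f.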
https://en.wikipedia.org/wiki/Computational%20neurogenetic%20modeling
Computational neurogenetic modeling (CNGM) is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biology, as well as engineering. Levels of processing Molecular kinetics Models of the kinetics of proteins and ion channels associated with neuron activity represent the lowest level of modeling in a computational neurogenetic model. The altered activity of proteins in some diseases, such as the amyloid beta protein in Alzheimer's disease, must be modeled at the molecular level to accurately predict the effect on cognition. Ion channels, which are vital to the propagation of action potentials, are another molecule that may be modeled to more accurately reflect biological processes. For instance, to accurately model synaptic plasticity (the strengthening or weakening of synapses) and memory, it is necessary to model the activity of the NMDA receptor (NMDAR). The speed at which the NMDA receptor lets Calcium ions into the cell in response to Glutamate is an important determinant of Long-term potentiation via the insertion of AMPA receptors (AMPAR) into the plasma membrane at the synapse of the postsynaptic cell (the cell that receives the neurotransmitters from the presynaptic cell). Genetic regulatory network In most models of neural systems neurons are the most basic unit modeled. In computational neurogenetic modeling, to better simulate processes that are responsible for synaptic activity and connectivity, the genes responsible are modeled for each neuron. A gene regulatory network, protein regulatory network, or gene/protein regulatory network, is the level of processing in a computational neurogeneti
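As a purely illustrative sketch of the idea of coupling a gene regulatory network to neuron-level parameters, the toy model below lets a few abstract "genes" update each other through an interaction matrix and lets one expression level scale a neuron's firing threshold. The gene count, interaction weights, and coupling rule are all invented here and do not come from any published CNGM.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, size=(4, 4))   # hypothetical gene-gene interaction weights
genes = np.full(4, 0.5)                  # expression levels, kept in (0, 1) by the sigmoid

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

base_threshold = 1.0
for step in range(50):
    genes = sigmoid(W @ genes)                        # gene regulatory network update
    threshold = base_threshold * (0.5 + genes[0])     # gene 0 modulates neuron excitability
    drive = rng.uniform(0.0, 2.0)                     # stand-in for synaptic input
    fired = drive > threshold
    if step % 10 == 0:
        print(f"step {step:2d}  gene0={genes[0]:.2f}  threshold={threshold:.2f}  fired={fired}")
```

The point of the sketch is only the structure: each neuron carries a small dynamical gene model whose state feeds back into the neuron's electrical behaviour.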
https://en.wikipedia.org/wiki/Schaumann%20body
In pathology, Schaumann bodies are calcium and protein inclusions inside Langhans giant cells as part of a granuloma. Many conditions can cause Schaumann bodies, including sarcoidosis, hypersensitivity pneumonitis, and berylliosis, and, less commonly, Crohn's disease and tuberculosis. Etymology These inclusions were named after Swedish dermatologist Jörgen Nilsen Schaumann. See also Asteroid body
https://en.wikipedia.org/wiki/Handbook%20of%20Automated%20Reasoning
The Handbook of Automated Reasoning (2128 pages) is a collection of survey articles on the field of automated reasoning. Published in June 2001 by MIT Press, it is edited by John Alan Robinson and Andrei Voronkov. Volume 1 describes methods for classical logic, first-order logic with equality and other theories, and induction. Volume 2 covers higher-order, non-classical and other kinds of logic. Index Volume 1 History Classical Logic Equality and Other Theories Induction Volume 2 Higher-Order Logic and Logical Frameworks Nonclassical Logics Decidable Classes and Model Building Implementation External links MIT press page 2001 non-fiction books Handbooks and manuals Logic books Computer science books Automated reasoning
https://en.wikipedia.org/wiki/Aperture%20card
An aperture card is a type of punched card with a cut-out window into which a chip of microfilm is mounted. Such a card is used for archiving or for making multiple inexpensive copies of a document for ease of distribution. The card is typically punched with machine-readable metadata associated with the microfilm image, and printed across the top of the card for visual identification; it may also be punched by hand in the form of an edge-notched card. The microfilm chip is most commonly 35mm in height, and contains an optically reduced image, usually of some type of reference document, such as an engineering drawing, that is the focus of the archiving process. Machinery exists to automatically store, retrieve, sort, duplicate, create, and digitize cards with a high level of automation. Aperture cards have several advantages and disadvantages when compared to digital systems. While many aperture cards still play an important role in archiving, their role is gradually being replaced by digital systems. Usage Aperture cards are used for engineering drawings from all engineering disciplines. The U.S. Department of Defense once made extensive use of aperture cards, and some are still in use, but most data is now digital. Information about the drawing, for example the drawing number, could be both punched and printed on the remainder of the card. With the proper machinery, this allows for automated handling. In the absence of such machinery, the cards can still be read by a human with a lens and a light source. Advantages Aperture cards have, for archival purposes, some advantages over digital systems. They have a 500-year lifetime, they are human readable, and there is no expense or risk in converting from one digital format to the next when computer systems become obsolete. Disadvantages Most of the disadvantages are related to the well established differences in analog and digital technology. In particular, searching for given strings within content i
https://en.wikipedia.org/wiki/Sporangiole
A sporangiole is a specialised spherical sporangium produced by some species of fungi, smaller than or secondary to the typical sporangium.
https://en.wikipedia.org/wiki/Pressure%E2%80%93volume%20diagram
A pressure–volume diagram (or PV diagram, or volume–pressure loop) is used to describe corresponding changes in volume and pressure in a system. They are commonly used in thermodynamics, cardiovascular physiology, and respiratory physiology. PV diagrams, originally called indicator diagrams, were developed in the 18th century as tools for understanding the efficiency of steam engines. Description A PV diagram plots the change in pressure P with respect to volume V for some process or processes. Typically in thermodynamics, the set of processes forms a cycle, so that upon completion of the cycle there has been no net change in state of the system; i.e. the device returns to the starting pressure and volume. The figure shows the features of an idealized PV diagram. It shows a series of numbered states (1 through 4). The path between each state consists of some process (A through D) which alters the pressure or volume of the system (or both). A key feature of the diagram is that the amount of energy expended or received by the system as work can be measured because the net work is represented by the area enclosed by the four lines. In the figure, the processes 1-2-3 produce a work output, but processes from 3-4-1 require a smaller energy input to return to the starting position / state; so the net work is the difference between the two. This figure is highly idealized, in so far as all the lines are straight and the corners are right angles. A diagram showing the changes in pressure and volume in a real device will show a more complex shape enclosing the work cycle. History The PV diagram, then called an indicator diagram, was developed in 1796 by James Watt and his employee John Southern. Volume was traced by a plate moving with the piston, while pressure was traced by a pressure gauge whose indicator moved at right angles to the piston. A pencil was used to draw the diagram. Watt used the diagram to make radical improvements to steam engine performance.
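Since the net work of a cycle equals the area it encloses on the PV plane, that area can be computed numerically by summing P dV around the loop. The sketch below does this for an invented rectangular four-state cycle; the state values are arbitrary illustration, not data from any real engine.

```python
# Vertices of an idealized cycle, in the order the cycle is traversed:
# (volume, pressure) for states 1 -> 2 -> 3 -> 4 -> back to 1.
states = [(1.0, 3.0), (2.0, 3.0), (2.0, 1.0), (1.0, 1.0)]

def net_work(cycle):
    """Net work = closed integral of P dV, summed edge by edge with the
    trapezoid rule (exact for straight-line segments)."""
    w = 0.0
    for (v1, p1), (v2, p2) in zip(cycle, cycle[1:] + cycle[:1]):
        w += 0.5 * (p1 + p2) * (v2 - v1)
    return w

print(net_work(states))   # 2.0: expansion at high pressure, compression at low pressure
```

Reversing the traversal direction flips the sign, which is why the sense in which the loop is traced determines whether the device does net work or absorbs it.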
https://en.wikipedia.org/wiki/Finite-rank%20operator
In functional analysis, a branch of mathematics, a finite-rank operator is a bounded linear operator between Banach spaces whose range is finite-dimensional. Finite-rank operators on a Hilbert space A canonical form Finite-rank operators are matrices (of finite size) transplanted to the infinite dimensional setting. As such, these operators may be described via linear algebra techniques. From linear algebra, we know that a rectangular matrix M, with complex entries, has rank 1 if and only if M is of the form M = α · u v*, with ‖u‖ = ‖v‖ = 1 and α > 0. Exactly the same argument shows that an operator T on a Hilbert space H is of rank 1 if and only if T x = α ⟨x, v⟩ u for all x in H, where the conditions on α, u and v are the same as in the finite-dimensional case. Therefore, by induction, an operator T of finite rank n takes the form T x = Σ_{i=1}^{n} α_i ⟨x, v_i⟩ u_i, where {u_i} and {v_i} are orthonormal sets. Notice this is essentially a restatement of singular value decomposition. This can be said to be a canonical form of finite-rank operators. Generalizing slightly, if n is now countably infinite and the sequence of positive numbers (α_i) accumulates only at 0, T is then a compact operator, and one has the canonical form for compact operators. If the series Σ_i α_i is convergent, T is a trace class operator. Algebraic property The family F(H) of finite-rank operators on a Hilbert space H forms a two-sided *-ideal in L(H), the algebra of bounded operators on H. In fact it is the minimal element among such ideals, that is, any two-sided *-ideal I in L(H) must contain the finite-rank operators. This is not hard to prove. Take a non-zero operator T in I; then T f ≠ 0 for some f. It suffices to show that for any g, h in H, the rank-1 operator S_{g,h} that maps g to h lies in I. Define S_{g,f} to be the rank-1 operator that maps g to f, and S_{Tf,h} analogously. Then S_{Tf,h} T S_{g,f} is a non-zero scalar multiple of S_{g,h}, which means S_{g,h} is in I and this verifies the claim. Some examples of two-sided *-ideals in L(H) are the trace-class, Hilbert–Schmidt operators, and compact operators. F(H) is dense in all three of these ideals, in their respective norms. Since any two-sided ideal in L(H) must contain F(H), the algebra L(H) is simple if and only i
https://en.wikipedia.org/wiki/Desulfatibacillum%20alkenivorans%20AK-01
Desulfatibacillum alkenivorans AK-01 is a specific strain of Desulfatibacillum alkenivorans. Strain AK-01 was isolated from the Arthur Kill, NJ/NY waterway. This site has a history of contamination from petrochemical industry. AK-01 is a delta-proteobacterium capable of using C13-C18 alkanes as growth substrates (So et al., 1999). Analysis of labeled and fully deuterated metabolites shows that AK-01 activates n-alkanes via fumarate addition to the subterminal carbon using alkylsuccinate synthase. Recent studies have also shown that AK-01 uses sulfate, sulfite and thiosulfate as terminal electron acceptors. It has also been shown that AK-01 uses not only alkanes but also 1-alkenes, 1-alkanols, fatty acids and other organic acids as carbon substrates. Background The ubiquitous distribution of petroleum hydrocarbons in the environment is the consequence of diagenetic processes that occur in sedimentary rock formations containing large amounts of organic matter. Heat and pressure lead to the formation of a wide variety of hydrocarbons, including alkanes, alkenes, and cyclic/polycyclic aromatic hydrocarbons (PAHs), which can seep into aquatic environments. The environmental recalcitrance of many of these compounds is governed by their high bond dissociation energies. Alkanes are the least reactive class of hydrocarbons due to their apolar sigma bonds. In the absence of high temperatures, high pressures, metal catalysts or UV light, biotransformation plays the dominant role in environmental alkane degradation. The mechanisms and genetics of aerobic hydrocarbon degradation have been described extensively. The key feature of aerobic degradation is the role of dioxygen. Oxygen is not only a physiological requirement, but serves as a reactant in the hydroxylation of both aliphatic and aromatic hydrocarbons via monooxygenase and dioxygenase enzymes. Oxygen's key role as a reactant during aerobic hydrocarbon degradation led to the belief for many years that n-alkanes and ot
https://en.wikipedia.org/wiki/Mixing%20patterns
Mixing patterns refer to systematic tendencies of one type of nodes in a network to connect to another type. For instance, nodes might tend to link to others that are very similar or very different. This feature is common in many social networks, although it also appears sometimes in non-social networks. Mixing patterns are closely related to assortativity; however, for the purposes of this article, the term is used to refer to assortative or disassortative mixing based on real-world factors, either topological or sociological. Types of Mixing Patterns Mixing patterns are a characteristic of an entire network, referring to the extent for nodes to connect to other similar or different nodes. Mixing, therefore, can be classified broadly as assortative or disassortative. Assortative mixing is the tendency for nodes to connect to like nodes, while disassortative mixing captures the opposite case in which very different nodes are connected. Obviously, the particular node characteristics involved in the process of creating a link between a pair will shape a network's mixing patterns. For instance, in a sexual relationship network, one is likely to find a preponderance of male-female links, while in a friendship network male-male and female-female networks might prevail. Examining different sets of node characteristics thus may reveal interesting communities or other structural properties of the network. In principle there are two kinds of methods used to exploit these properties. One is based on analytical calculations by using generating function techniques. The other is numerical, and is based on Monte Carlo simulations for the graph generation. In a study on mixing patterns in networks, M.E.J. Newman starts by classifying the node characteristics into two categories. While the number of real-world node characteristics is virtually unlimited, they tend to fall under two headings: discrete and scalar/topological. The following sections define the differences between
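As a small illustration of measuring assortative versus disassortative mixing on a discrete characteristic, the sketch below uses the networkx library's attribute assortativity coefficient on an invented toy graph; the node attribute and edge list are made up for this example. Positive values indicate assortative mixing, negative values disassortative mixing.

```python
import networkx as nx

G = nx.Graph()
# Invented toy network: nodes carry a discrete "gender" attribute.
G.add_nodes_from([1, 2, 3], gender="F")
G.add_nodes_from([4, 5, 6], gender="M")

# Mostly same-gender ties, plus one cross-gender tie.
G.add_edges_from([(1, 2), (2, 3), (4, 5), (5, 6), (3, 4)])

r = nx.attribute_assortativity_coefficient(G, "gender")
print(f"attribute assortativity: {r:.3f}")   # about 0.6 here: assortative mixing
```

Swapping in a graph dominated by cross-gender ties would drive the coefficient negative, the disassortative case described above.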
https://en.wikipedia.org/wiki/Introduction%20to%20genetics
Genetics is the study of genes and tries to explain what they are and how they work. Genes are how living organisms inherit features or traits from their ancestors; for example, children usually look like their parents because they have inherited their parents' genes. Genetics tries to identify which traits are inherited and to explain how these traits are passed from generation to generation. Some traits are part of an organism's physical appearance, such as eye color, height or weight. Other sorts of traits are not easily seen and include blood types or resistance to diseases. Some traits are inherited through genes, which is the reason why tall and thin people tend to have tall and thin children. Other traits come from interactions between genes and the environment, so a child who inherited the tendency of being tall will still be short if poorly nourished. The way our genes and environment interact to produce a trait can be complicated. For example, the chances of somebody dying of cancer or heart disease seems to depend on both their genes and their lifestyle. Genes are made from a long molecule called DNA, which is copied and inherited across generations. DNA is made of simple units that line up in a particular order within it, carrying genetic information. The language used by DNA is called genetic code, which lets organisms read the information in the genes. This information is the instructions for the construction and operation of a living organism. The information within a particular gene is not always exactly the same between one organism and another, so different copies of a gene do not always give exactly the same instructions. Each unique form of a single gene is called an allele. As an example, one allele for the gene for hair color could instruct the body to produce much pigment, producing black hair, while a different allele of the same gene might give garbled instructions that fail to produce any pigment, giving white hair. Mutations are random
https://en.wikipedia.org/wiki/Mutant
In biology, and especially in genetics, a mutant is an organism or a new genetic character arising or resulting from an instance of mutation, which is generally an alteration of the DNA sequence of the genome or chromosome of an organism. It is a characteristic that would not be observed naturally in a specimen. The term mutant is also applied to a virus with an alteration in its nucleotide sequence whose genome is in the nuclear genome. The natural occurrence of genetic mutations is integral to the process of evolution. The study of mutants is an integral part of biology; by understanding the effect that a mutation in a gene has, it is possible to establish the normal function of that gene. Mutants arise by mutation Mutants arise by mutations occurring in pre-existing genomes as a result of errors of DNA replication or errors of DNA repair. Errors of replication often involve translesion synthesis by a DNA polymerase when it encounters and bypasses a damaged base in the template strand. A DNA damage is an abnormal chemical structure in DNA, such as a strand break or an oxidized base, whereas a mutation, by contrast, is a change in the sequence of standard base pairs. Errors of repair occur when repair processes inaccurately replace a damaged DNA sequence. The DNA repair process microhomology-mediated end joining is particularly error-prone. Etymology Although not all mutations have a noticeable phenotypic effect, the common usage of the word "mutant" is generally a pejorative term, only used for genetically or phenotypically noticeable mutations. Previously, people used the word "sport" (related to spurt) to refer to abnormal specimens. The scientific usage is broader, referring to any organism differing from the wild type. The word finds its origin in the Latin term mūtant- (stem of mūtāns), which means "to change". Mutants should not be confused with organisms born with developmental abnormalities, which are caused by errors during morphogenesis. In a devel
https://en.wikipedia.org/wiki/Equating
Test equating traditionally refers to the statistical process of determining comparable scores on different forms of an exam. It can be accomplished using either classical test theory or item response theory. In item response theory, equating is the process of placing scores from two or more parallel test forms onto a common score scale. The result is that scores from two different test forms can be compared directly, or treated as though they came from the same test form. When the tests are not parallel, the general process is called linking. It is the process of equating the units and origins of two scales on which the abilities of students have been estimated from results on different tests. The process is analogous to equating degrees Fahrenheit with degrees Celsius by converting measurements from one scale to the other. The determination of comparable scores is a by-product of equating that results from equating the scales obtained from test results. Purpose Suppose that Dick and Jane both take a test to become licensed in a certain profession. Because the high stakes (you get to practice the profession if you pass the test) may create a temptation to cheat, the organization that oversees the test creates two forms. If we know that Dick scored 60% on form A and Jane scored 70% on form B, do we know for sure which one has a better grasp of the material? What if form A is composed of very difficult items, while form B is relatively easy? Equating analyses are performed to address this very issue, so that scores are as fair as possible. Equating in item response theory In item response theory, person "locations" (measures of some quality being assessed by a test) are estimated on an interval scale; i.e., locations are estimated in relation to a unit and origin. It is common in educational assessment to employ tests in order to assess different groups of students with the intention of establishing a common scale by equating the origins, and when appropri
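The article does not single out one equating method; as a simple illustration, the sketch below applies linear (mean–sigma) equating, which places form B scores on the form A scale by matching the two forms' means and standard deviations. The score distributions and the converted score are invented for this example.

```python
from statistics import mean, pstdev

# Invented score distributions from two groups assumed comparable in ability.
form_a = [55, 60, 62, 66, 70, 74, 78, 81, 85, 90]   # harder form
form_b = [63, 68, 70, 73, 77, 80, 84, 87, 90, 95]   # easier form

def linear_equate(score_b, a_scores, b_scores):
    """Map a form-B score onto the form-A scale (mean-sigma linear equating)."""
    mu_a, sd_a = mean(a_scores), pstdev(a_scores)
    mu_b, sd_b = mean(b_scores), pstdev(b_scores)
    return mu_a + (sd_a / sd_b) * (score_b - mu_b)

print(round(linear_equate(70, form_a, form_b), 1))   # Jane's 70 on form B, on the form A scale
```

Because form B is the easier form in this toy data, a 70 on form B maps to a lower score on the form A scale, which is exactly the fairness adjustment the equating analysis is meant to provide.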
https://en.wikipedia.org/wiki/Imbert%E2%80%93Fedorov%20effect
The Imbert–Fiodaraŭ effect (named after Fiodar Ivanavič Fiodaraŭ (1911–1994) and Christian Imbert (1937–1998)) is an optical phenomenon in which a beam of circularly or elliptically polarized light undergoes a small sideways shift when refracted or totally internally reflected. The sideways shift is perpendicular to the plane containing the incident and reflected beams. This effect is the circular polarization analog of the Goos–Hänchen effect.
https://en.wikipedia.org/wiki/Goos%E2%80%93H%C3%A4nchen%20effect
The Goos–Hänchen effect (named after Hermann Fritz Gustav Goos (1883–1968) and Hilda Hänchen (1919–2013)) is an optical phenomenon in which linearly polarized light undergoes a small lateral shift when totally internally reflected. The shift is perpendicular to the direction of propagation in the plane containing the incident and reflected beams. This effect is the linear polarization analog of the Imbert–Fedorov effect. Description This effect occurs because the reflections of a finite sized beam will interfere along a line transverse to the average propagation direction. As shown in the figure, the effect arises from the superposition of two plane waves with slightly different angles of incidence but with the same frequency or wavelength. It can be shown that the two waves generate an interference pattern transverse to the average propagation direction, and on the interface along the plane. Both waves are reflected from the surface and undergo different phase shifts, which leads to a lateral shift of the finite beam. Therefore, the Goos–Hänchen effect is a coherence phenomenon. Research This effect continues to be a topic of scientific research, for example in the context of nanophotonics applications. A negative Goos–Hänchen shift was shown by Walter J. Wild and Lee Giles. Sensitive detection of biological molecules is achieved based on measuring the Goos–Hänchen shift, where the signal of lateral change is in a linear relation with the concentration of target molecules. The work by M. Merano et al. studied the Goos–Hänchen effect experimentally for the case of an optical beam reflecting from a metal surface (gold) at 826 nm. They report a substantial, negative lateral shift of the reflected beam in the plane of incidence for p-polarization and a smaller, positive shift for the s-polarization case. Generation of giant Goos–Hänchen shift It is known that the value of the lateral Goos–Hänchen shift is only 5–10 μm at a total internal r
https://en.wikipedia.org/wiki/Electron%20beam%20prober
The electron beam prober (e-beam prober) is a specialized adaption of a standard scanning electron microscope (SEM) that is used for semiconductor failure analysis. While a conventional SEM may be operated in a voltage range of 10–30 keV, the e-beam Prober typically operates at 1 keV. The e-beam prober is capable of measuring voltage and timing waveforms on internal semiconductor signal structures. Waveforms may be measured on metal line, polysilicon and diffusion structures that have an electrically active, changing signal. The operation of the prober is similar to that of a sampling oscilloscope. A continuously looping, repeating test pattern must be applied to the device-under-test (DUT). E-beam probers are used primarily for front side semiconductor analysis. With the advent of flip-chip technology, many e-beam probers have been replaced with back side analysis instruments. Theory of operation The e-beam prober generates an SEM image by raster-scanning a focused electron beam over a selected region of the semiconductor surface. The high energy electrons in the primary beam strike the surface of the silicon, producing a number of low energy secondary electrons. The secondary electrons are guided back up through the SEM column to a detector. The varying numbers of secondary electrons reaching the detector are interpreted to produce the SEM image. During waveform acquisition mode, the primary electron beam is focused on a single point on the device surface. As the DUT cycles through its test pattern, the signal at the point being probed changes. The signal changes produce a corresponding change in the local electric field surrounding the point being probed. This affects the number of secondary electrons that escape the device surface and reach the detector. Since electrons are negatively charged, a conductor at a +5 volt potential inhibits the escape of electrons, while a 0 volt potential allows a greater number of electrons to reach the detector. By monitoring t
https://en.wikipedia.org/wiki/Michel%20Van%20den%20Bergh
Michel Van den Bergh (born 25 July 1960) is a Belgian mathematician and professor at the Vrije Universiteit Brussel and does research at Hasselt University. His research interest is on the fundamental relationship between algebra and geometry. In 2003, he was awarded the Francqui Prize on Exact Sciences. Van den Bergh obtained his Ph.D. in mathematics from the University of Antwerp in 1985, with thesis Algebraic Elements in Finite Dimensional Division Algebras written under the direction of Fred Van Oystaeyen and Jan Maria Hendrik Van Geel.
https://en.wikipedia.org/wiki/Lexis%20ratio
The Lexis ratio is used in statistics as a measure which seeks to evaluate differences between the statistical properties of random mechanisms where the outcome is two-valued — for example "success" or "failure", "win" or "lose". The idea is that the probability of success might vary between different sets of trials in different situations. This ratio is not much used currently, having been largely replaced by the use of the chi-squared test in testing for the homogeneity of samples. This measure compares the between-set variance of the sample proportions (evaluated for each set) with what the variance should be if there were no difference in the true proportions of success across the different sets. Thus the measure is used to evaluate how data compares to a fixed-probability-of-success Bernoulli distribution. The Lexis ratio is sometimes referred to as L or Q, and may be written L = σ_p² / σ_B², where σ_p² is the (weighted) sample variance derived from the observed proportions of success in sets in "Lexis trials" and σ_B² is the variance computed from the expected Bernoulli distribution on the basis of the overall average proportion of success. Trials where L falls significantly above or below 1 are known as supernormal and subnormal, respectively. This ratio (Q) is a measure that can be used to distinguish between three types of variation in sampling for attributes: Bernoullian, Lexian and Poissonian. Definition Let there be k samples of sizes n1, n2, n3, ..., nk, and let these samples have the proportion of the attribute being examined of p1, p2, p3, ..., pk respectively. Then the Lexis ratio is Q² = Σ_{i=1}^{k} n_i (p_i − p)² / ((k − 1) p (1 − p)), where p is the overall (weighted) proportion of success across all samples. If the Lexis ratio is significantly below 1, the sampling is referred to as Poissonian (or subnormal); if it is equal to 1 the sampling is referred to as Bernoullian (or normal); and if it is above 1 it is referred to as Lexian (or supranormal). Chuprov showed in 1922 that in the case of statistical homogeneity and where E() is
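To make the definition concrete, the sketch below computes Q² for several groups of trials using the formula given above (between-group variance of the proportions relative to the binomial variance expected under a single common success probability). The trial counts are invented for this example.

```python
# Invented data: number of trials and number of successes in each of k sets.
n = [200, 150, 250, 180]
successes = [90, 60, 140, 75]

k = len(n)
p_i = [s / m for s, m in zip(successes, n)]   # proportion of success in each set
p_bar = sum(successes) / sum(n)               # overall (weighted) proportion of success

q_squared = sum(m * (p - p_bar) ** 2 for m, p in zip(n, p_i)) / ((k - 1) * p_bar * (1 - p_bar))
print(round(q_squared, 3))   # ~1: Bernoullian; well above 1: Lexian; well below 1: Poissonian
```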
https://en.wikipedia.org/wiki/Mian%E2%80%93Chowla%20sequence
In mathematics, the Mian–Chowla sequence is an integer sequence defined recursively in the following way. The sequence starts with a1 = 1. Then for n > 1, an is the smallest integer such that every pairwise sum ai + aj is distinct, for all i and j less than or equal to n. Properties Initially, with a1, there is only one pairwise sum, 1 + 1 = 2. The next term in the sequence, a2, is 2 since the pairwise sums then are 2, 3 and 4, i.e., they are distinct. Then, a3 can't be 3 because there would be the non-distinct pairwise sums 1 + 3 = 2 + 2 = 4. We find then that a3 = 4, with the pairwise sums being 2, 3, 4, 5, 6 and 8. The sequence thus begins 1, 2, 4, 8, 13, 21, 31, 45, 66, 81, 97, 123, 148, 182, 204, 252, 290, 361, 401, 475, ... . Similar sequences If we define a1 = 0 instead, the resulting sequence is the same except each term is one less (that is, 0, 1, 3, 7, 12, 20, 30, 44, 65, 80, 96, ... ). History The sequence was invented by Abdul Majid Mian and Sarvadaman Chowla.
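A direct way to generate the sequence is a greedy search that tracks the set of pairwise sums seen so far; the short sketch below reproduces the opening terms listed above.

```python
def mian_chowla(count):
    """Greedily build the Mian-Chowla sequence: each new term is the smallest
    integer that keeps all pairwise sums a_i + a_j (i <= j) distinct."""
    seq = [1]
    sums = {2}            # 1 + 1
    candidate = 2
    while len(seq) < count:
        new_sums = {candidate + x for x in seq} | {candidate + candidate}
        if sums.isdisjoint(new_sums):
            seq.append(candidate)
            sums |= new_sums
        candidate += 1
    return seq

print(mian_chowla(10))    # [1, 2, 4, 8, 13, 21, 31, 45, 66, 81]
```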
https://en.wikipedia.org/wiki/OmniPeek
Omnipeek is a packet analyzer software tool from Savvius, a LiveAction company, for network troubleshooting and protocol analysis. It supports an application programming interface (API) for plugins. History Savvius (formerly WildPackets) was founded in 1990 as The AG Group by Mahboud Zabetian and Tim McCreery. In 2000 the company changed its name to WildPackets to address the popular market it had developed for its products. The first product by the company was written for the Macintosh and was called EtherPeek. It was the first affordable software-only protocol analyzer for Ethernet networks. It was later ported to Microsoft Windows, and the Windows version was released in 1997. Earlier, LocalPeek and TokenPeek were developed for LocalTalk and Token Ring networks respectively. In 2001, AiroPeek was released, which added support for wireless IEEE 802.11 (marketed with the Wi-Fi brand) networks. In 2003, the OmniEngine Distributed Capture Engine was released as software, and as a hardware network recorder appliance. In the early morning of July 15, 2002, WildPackets' building in Walnut Creek, California burnt to the ground, including everything in it. However, no one was hurt and the employees regrouped at a new location and the company survived the fire. In mid-April 2015, the company changed its name from WildPackets to Savvius and broadened its focus to include network security. In June 2018, Savvius was acquired by LiveAction, a company that provides network performance management, visualization and analytics software. Acquisitions Savvius acquired Net3 Group in November 2000. Their product, NetSense, an expert system for network troubleshooting, was initially converted into a plug-in and then later fully integrated into a new version of the product called EtherPeekNX. Savvius acquired Optimized Engineering Corporation in 2001. Optimized Engineering's network analysis instructors, training courses and certifications were added to Savvius' services. Extensibility Omnipeek has
https://en.wikipedia.org/wiki/Quantum%20dot%20cellular%20automaton
Quantum dot cellular automata (QDCA, sometimes referred to simply as quantum cellular automata, or QCA) are a proposed improvement on conventional computer design (CMOS), which have been devised in analogy to conventional models of cellular automata introduced by John von Neumann. Background Any device designed to represent data and perform computation, regardless of the physics principles it exploits and materials used to build it, must have two fundamental properties: distinguishability and conditional change of state, the latter implying the former. This means that such a device must have barriers that make it possible to distinguish between states, and that it must have the ability to control these barriers to perform conditional change of state. For example, in a digital electronic system, transistors play the role of such controllable energy barriers, making it extremely practical to perform computing with them. Cellular automata A cellular automaton (CA) is a discrete dynamical system consisting of a uniform (finite or infinite) grid of cells. Each cell can be in only one of a finite number of states at a discrete time. As time moves forward, the state of each cell in the grid is determined by a transformation rule that factors in its previous state and the states of the immediately adjacent cells (the cell's "neighborhood"). The most well-known example of a cellular automaton is John Horton Conway's "Game of Life", which he described in 1970. Quantum-dot cells Origin Cellular automata are commonly implemented as software programs. However, in 1993, Lent et al. proposed a physical implementation of an automaton using quantum-dot cells. The automaton quickly gained popularity and it was first fabricated in 1997. Lent combined the discrete nature of both cellular automata and quantum mechanics, to create nano-scale devices capable of performing computation at very high switching speeds (order of Terahertz) and consuming extremely small amounts of electrical
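As a concrete illustration of the update rule described above (the state of each cell determined by its previous state and its neighborhood), here is a minimal sketch of a one-dimensional, two-state cellular automaton. The rule number, grid size, and seed are arbitrary choices for illustration; this models the abstract automaton concept, not a QCA device.

```python
def step(cells, rule=110):
    """One synchronous update of a 1-D binary cellular automaton.
    Each new state depends only on the cell and its two neighbours."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (centre << 1) | right   # neighbourhood as a 3-bit number
        out.append((rule >> index) & 1)               # look up the transformation rule
    return out

cells = [0] * 31
cells[15] = 1                                          # single seed cell
for _ in range(8):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```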
https://en.wikipedia.org/wiki/Acid%E2%80%93base%20homeostasis
Acid–base homeostasis is the homeostatic regulation of the pH of the body's extracellular fluid (ECF). The proper balance between the acids and bases (i.e. the pH) in the ECF is crucial for the normal physiology of the body—and for cellular metabolism. The pH of the intracellular fluid and the extracellular fluid need to be maintained at a constant level. The three-dimensional structures of many extracellular proteins, such as the plasma proteins and membrane proteins of the body's cells, are very sensitive to the extracellular pH. Stringent mechanisms therefore exist to maintain the pH within very narrow limits. Outside the acceptable range of pH, proteins are denatured (i.e. their 3D structure is disrupted), causing enzymes and ion channels (among others) to malfunction. An acid–base imbalance is known as acidemia when the pH is acidic, or alkalemia when the pH is alkaline. Lines of defense In humans and many other animals, acid–base homeostasis is maintained by multiple mechanisms involved in three lines of defense: Chemical: The first line of defense is immediate, consisting of the various chemical buffers which minimize pH changes that would otherwise occur in their absence. These buffers include the bicarbonate buffer system, the phosphate buffer system, and the protein buffer system. Respiratory component: The second line of defense is rapid, consisting of control of the carbonic acid (H2CO3) concentration in the ECF by changing the rate and depth of breathing through hyperventilation or hypoventilation. This blows off or retains carbon dioxide (and thus carbonic acid) in the blood plasma as required. Metabolic component: The third line of defense is slow, best measured by the base excess, and mostly depends on the renal system, which can add or remove bicarbonate ions (HCO3−) to or from the ECF. Bicarbonate ions are derived from metabolic carbon dioxide which is enzymatically converted to carbonic acid in the renal tubular cells. There, carbonic acid spontane
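The bicarbonate buffer described above is often summarized quantitatively by the Henderson–Hasselbalch relation for the CO2/bicarbonate pair, pH = 6.1 + log10([HCO3−] / (0.03 × pCO2)), with pCO2 in mmHg. The article itself does not state this equation, and the sample values below are typical textbook numbers used purely for illustration.

```python
from math import log10

def blood_ph(bicarbonate_mmol_per_l, pco2_mmhg):
    """Henderson-Hasselbalch relation for the CO2/bicarbonate buffer pair.
    pKa = 6.1; the factor 0.03 converts pCO2 (mmHg) to dissolved CO2 (mmol/L)."""
    return 6.1 + log10(bicarbonate_mmol_per_l / (0.03 * pco2_mmhg))

print(round(blood_ph(24, 40), 2))   # typical values -> about 7.40
print(round(blood_ph(24, 60), 2))   # CO2 retained (hypoventilation) -> more acidic, about 7.22
```

The two calls illustrate the respiratory line of defense numerically: retaining CO2 lowers the pH, blowing it off raises it.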
https://en.wikipedia.org/wiki/Kernel-based%20Virtual%20Machine
Kernel-based Virtual Machine (KVM) is a free and open-source virtualization module in the Linux kernel that allows the kernel to function as a hypervisor. It was merged into the mainline Linux kernel in version 2.6.20, which was released on February 5, 2007. KVM requires a processor with hardware virtualization extensions, such as Intel VT or AMD-V. KVM has also been ported to other operating systems such as FreeBSD and illumos in the form of loadable kernel modules. KVM was originally designed for x86 processors but has since been ported to ESA/390, PowerPC, IA-64, and ARM. The IA-64 port was removed in 2014. KVM supports hardware-assisted virtualization for a wide variety of guest operating systems including BSD, Solaris, Windows, Haiku, ReactOS, Plan 9, AROS, macOS, and even other Linux systems. In addition, Android 2.2, GNU/Hurd (Debian K16), Minix 3.1.2a, Solaris 10 U3 and Darwin 8.0.1, together with other operating systems and some newer versions of these listed, are known to work with certain limitations. Additionally, KVM provides paravirtualization support for Linux, OpenBSD, FreeBSD, NetBSD, Plan 9 and Windows guests using the VirtIO API. This includes a paravirtual Ethernet card, disk I/O controller, balloon driver, and a VGA graphics interface using SPICE or VMware drivers. History Avi Kivity began the development of KVM in mid-2006 at Qumranet, a technology startup company that was acquired by Red Hat in 2008. KVM surfaced in October 2006 and was merged into the Linux kernel mainline in kernel version 2.6.20, which was released on 5 February 2007. KVM is maintained by Paolo Bonzini. Internals KVM provides device abstraction but no processor emulation. It exposes the /dev/kvm interface, which a user mode host can then use to: Set up the guest VM's address space. The host must also supply a firmware image (usually a custom BIOS when emulating PCs) that the guest can use to bootstrap into its main OS. Feed the guest simulated I/O. Map the guest's video
https://en.wikipedia.org/wiki/Exclusive%20First%20Editions
The Exclusive First Editions (EFE) is a UK-based die-cast model manufacturer. It began trading in 1989, when the company released its first models of an AEC bus and truck. Models are mostly produced in 1/76th scale, which matches the standard scale for UK OO gauge model railways. The initial aim of EFE was to provide a range of die-cast models representing diverse history of UK road vehicles. The models are designed in the UK and manufactured in China. By the end of 2010 the total number of EFE model items produced had passed the 2000 mark, with around 90 new releases each year. On the 5 October 2016 Gilbow (Holdings) Ltd, the company behind the Exclusive First Edition range, came to administration. On 17 October 2016 Bachmann Europe plc announced their acquisition of the Exclusive First Editions range of 1/76th scale die-cast models including buses, coaches, lorries and London Underground tube trains. History The first releases came in summer 1989. They proved successful with the AEC RT bus in particular receiving much acclaim. Expansion of the range occurred in 1990 with the launch of the first single-deck coaches, the Harrington Grenadier & Cavalier. Around the same time another truck, the Atkinson Knight, and a set of four sports cars were also added to the range; however, it was once again the passenger service models that proved the most popular and from this point onwards the bus and coach models became the main focus of the range. In the following years the range of bus models was vastly expanded to include among others the famous London Transport AEC Routemaster in some different versions, Leyland National single decker, Bristol MW coach, MCW Atlantean & Fleetline, Bristol VRT, Leyland Titan, AEC RF and the London Transport GS class Guy single-decker. The number of truck types was also doubled with the addition of the Bedford TK and Ergomatic cabs in 1996. Between 2005 and 2007 four further cabs – an AEC Mark V, ERF KV, Foden S24 and Ford Thames Trader
https://en.wikipedia.org/wiki/EServer.org
The EServer was an open access electronic publishing cooperative, founded in 1990, which published writings in the arts and humanities free of charge to Internet readers. In 2006, it was rated by Alexa as the most popular arts and humanities website in the world. Martha L. Brogan and Daphnée Rentfrow wrote in 2005 that it had "more than 200 active members, including editors of an eclectic mix of 45 discrete 'collections' (Web sites), which 'publish' more than 32,000 works." Duke University Library rated the EServer among the "best overall directories for literary information on the Web." Scope of collection The EServer published written works in the arts and humanities, largely (but not exclusively) those from the Western cultural tradition. In addition to literature such as poetry, novels, drama and short stories, the EServer published seven scholarly journals. Most releases were in English, but there were also significant numbers in many other languages. Whenever possible, EServer publications were released in open standards, such as XHTML. History The EServer was founded in 1990, when a group of graduate students set up their office computer in "Trailer H" on the Carnegie Mellon University campus network to permit them to collaborate with one another. In 1991, with the addition of more disk space, it became an Internet network server designed to provide public access (via FTP, telnet and Gopher) to literary research, criticism, novels, and writings from various humanities disciplines. The site, originally called the English Server, was dedicated to publishing works in the arts and humanities free of charge to Internet readers. It was developed to assist leisure reading in particular, following a study by Geoffrey Sauer (the site's director) into the rapid and significant increase of books in the United States post-1979 and a consequent decrease in leisure reading among young Americans. By 1992 it was an extremely popular Gopher and FTP site, and by 1993 had
https://en.wikipedia.org/wiki/Client%20Puzzle%20Protocol
Client Puzzle Protocol (CPP) is a computer algorithm for use in Internet communication, whose goal is to make abuse of server resources infeasible. It is an implementation of a proof-of-work system (PoW). The idea of the CPP is to require all clients connecting to a server to correctly solve a mathematical puzzle before establishing a connection, if the server is under attack. After solving the puzzle, the client would return the solution to the server, which the server would quickly verify, or reject and drop the connection. The puzzle is made simple and easily solvable but requires at least a minimal amount of computation on the client side. Legitimate users would experience just a negligible computational cost, but abuse would be deterred: those clients that try to simultaneously establish a large number of connections would be unable to do so because of the computational cost (time delay). This method holds promise in fighting some types of spam as well as other attacks like denial-of-service. See also Computer security Intrusion-prevention system Proof-of-work system Guided tour puzzle protocol
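The protocol itself does not mandate a particular puzzle; a common choice in proof-of-work schemes is a partial hash-preimage puzzle, sketched below. The server issues a random nonce and a difficulty, the client searches for a counter whose hash over (nonce + counter) has enough leading zero bits, and the server verifies the answer with a single hash. The difficulty value and encoding choices here are illustrative, not part of any CPP specification.

```python
import hashlib, os

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        return bits + (8 - byte.bit_length())
    return bits

def solve(nonce: bytes, difficulty: int) -> int:
    """Client side: brute-force a counter until the hash has enough leading zero bits."""
    counter = 0
    while True:
        digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return counter
        counter += 1

def verify(nonce: bytes, difficulty: int, counter: int) -> bool:
    """Server side: a single hash is enough to check the claimed solution."""
    digest = hashlib.sha256(nonce + counter.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

nonce = os.urandom(16)        # issued by the server when it is under attack
answer = solve(nonce, 16)     # roughly 2**16 hashes of client work on average
print(verify(nonce, 16, answer))   # True, at the cost of one hash for the server
```

The asymmetry is the point: the client pays a small but unavoidable computational cost per connection, while verification stays cheap for the server.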
https://en.wikipedia.org/wiki/Inorganic%20pyrophosphatase
Inorganic pyrophosphatase (or inorganic diphosphatase, PPase) is an enzyme () that catalyzes the conversion of one ion of pyrophosphate to two phosphate ions. This is a highly exergonic reaction, and therefore can be coupled to unfavorable biochemical transformations in order to drive these transformations to completion. The functionality of this enzyme plays a critical role in lipid metabolism (including lipid synthesis and degradation), calcium absorption and bone formation, and DNA synthesis, as well as other biochemical transformations. Two types of inorganic diphosphatase, very different in terms of both amino acid sequence and structure, have been characterised to date: soluble and transmembrane proton-pumping pyrophosphatases (sPPases and H(+)-PPases, respectively). sPPases are ubiquitous proteins that hydrolyse pyrophosphate to release heat, whereas H+-PPases, so far unidentified in animal and fungal cells, couple the energy of PPi hydrolysis to proton movement across biological membranes. Structure Thermostable soluble pyrophosphatase had been isolated from the extremophile Thermococcus litoralis. The 3-dimensional structure was determined using x-ray crystallography, and was found to consist of two alpha-helices, as well as an antiparallel closed beta-sheet. The form of inorganic pyrophosphatase isolated from Thermococcus litoralis was found to contain a total of 174 amino acid residues and have a hexameric oligomeric organization (Image 1). Humans possess two genes encoding pyrophosphatase, PPA1 and PPA2. PPA1 has been assigned to a gene locus on human chromosome 10, and PPA2 to chromosome 4. Mechanism Though the precise mechanism of catalysis via inorganic pyrophosphatase in most organisms remains uncertain, site-directed mutagenesis studies in Escherichia coli have allowed for analysis of the enzyme active site and identification of key amino acids. In particular, this analysis has revealed 17 residues of that may be of functional importance in c
https://en.wikipedia.org/wiki/Sherman%20trap
The Sherman trap is a box-style animal trap designed for the live capture of small mammals. It was invented by Dr. H. B. Sherman in the 1920s and became commercially available in 1955. Since that time, the Sherman trap has been used extensively by researchers in the biological sciences for capturing animals such as mice, voles, shrews, and chipmunks. The Sherman trap consists of eight hinged pieces of sheet metal (either galvanized steel or aluminum) that allow the trap to be collapsed for storage or transport. Sherman traps are often set in grids and may be baited with grains and seed. Description The hinged design allows the trap to fold up flat into something only the width of one side panel. This makes it compact for storage and easy to transport to field locations (e.g. in a back pack). Both ends are hinged, but in normal operation the rear end is closed and the front folds inwards and latches the treadle, trigger plate, in place. When an animal enters far enough to be clear of the front door, their weight releases the latch and the door closes behind them. The lure or bait is placed at the far end and can be dropped in place through the rear hinged door. Variants Later, other variants that built upon the basic design, appeared - such as the Elliott trap used in Europe and Australasia. The Elliott trap has simplified the design slightly and is made from just 7 hinged panels.
https://en.wikipedia.org/wiki/Robel%20pole
A Robel pole is a device consisting of a vertical pole possessing alternating horizontal bands and a line of rope or cord. It is used by range ecologists, field biologists and other scientists to measure the density of vegetation and to quantify the volume of ground cover in a particular habitat using the visual obstruction (VO) measurement method. The Robel pole is named for Robert J. Robel, the scientist who developed the device and technique. Modifications of Robel's original design have been developed and published; all use the VO method.
https://en.wikipedia.org/wiki/Evolvability%20%28computer%20science%29
The term evolvability is used for a recent framework of computational learning introduced by Leslie Valiant in his paper of the same name and described below. The aim of this theory is to model biological evolution and categorize which types of mechanisms are evolvable. Evolution is an extension of PAC learning and learning from statistical queries. General framework Let F_n and R_n be collections of functions on n variables. Given an ideal function f in F_n, the goal is to find by local search a representation r in R_n that closely approximates f. This closeness is measured by the performance Perf_f(r) of r with respect to f. As is the case in the biological world, there is a difference between genotype and phenotype. In general, there can be multiple representations (genotypes) that correspond to the same function (phenotype). That is, for some r, r' in R_n with r ≠ r', still r(x) = r'(x) for all x. However, this need not be the case. The goal then, is to find a representation that closely matches the phenotype of the ideal function, and the spirit of the local search is to allow only small changes in the genotype. Let the neighborhood N(r) of a representation r be the set of possible mutations of r. For simplicity, consider Boolean functions on X_n = {−1, 1}^n, and let D_n be a probability distribution on X_n. Define the performance in terms of this. Specifically, Perf_f(r, D_n) = Σ_{x in X_n} f(x) r(x) D_n(x). Note that −1 ≤ Perf_f(r, D_n) ≤ 1. In general, for non-Boolean functions, the performance will not correspond directly to the probability that the functions agree, although it will have some relationship. Throughout an organism's life, it will only experience a limited number of environments, so its performance cannot be determined exactly. The empirical performance is defined by Perf_f(r, S) = (1/|S|) Σ_{x in S} f(x) r(x), where S is a multiset of s independent selections from X_n according to D_n. If s is large enough, evidently Perf_f(r, S) will be close to the actual performance Perf_f(r, D_n). Given an ideal function f, initial representation r, sample size s, and tolerance t, the mutator Mu is a random variable defined as follows. Each r' in N(r) is classified as beneficial, neutral, or deleteriou
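To give a feel for the beneficial/neutral/deleterious classification step described above, here is a toy sketch using empirical performance of ±1-valued Boolean functions. The representation class (monotone conjunctions encoded as index sets), the mutation neighborhood, and the sample size and tolerance values are all invented for illustration and are not Valiant's exact constructions.

```python
import random

n = 5                                    # number of Boolean variables
def draw_x():                            # uniform distribution over {-1, +1}^n
    return tuple(random.choice((-1, 1)) for _ in range(n))

def conj(indices):
    """Monotone conjunction over the given variable indices, as a +/-1-valued function."""
    return lambda x: 1 if all(x[i] == 1 for i in indices) else -1

def emp_perf(f, r, sample):
    """Empirical performance: mean of f(x) * r(x) over the sample."""
    return sum(f(x) * r(x) for x in sample) / len(sample)

ideal   = conj({0, 2})                   # the (unknown) ideal function
current = frozenset({0})                 # current representation (genotype)
neighbourhood = [current | {i} for i in range(n)] + [current - {i} for i in current]

s, t = 2000, 0.05                        # sample size and tolerance
sample = [draw_x() for _ in range(s)]
base = emp_perf(ideal, conj(current), sample)

for r2 in neighbourhood:
    delta = emp_perf(ideal, conj(r2), sample) - base
    label = "beneficial" if delta > t else "deleterious" if delta < -t else "neutral"
    print(sorted(r2), round(delta, 3), label)
```

Mutations that add the missing variable show up as beneficial, irrelevant additions as neutral, and dropping the correct variable as deleterious, which is the classification the mutator uses to drive the local search.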
https://en.wikipedia.org/wiki/ClearSpeed
ClearSpeed Technology Ltd was a semiconductor company, formed in 2002 to develop enhanced SIMD processors for use in high-performance computing and embedded systems. Based in Bristol, UK, the company has been selling its processors since 2005. Its current 192-core CSX700 processor was released in 2008, but a lack of sales has forced the company to downsize and it has since delisted from the London stock exchange. Products The CSX700 processor consists of two processing arrays, each with 96 processing elements. The processing elements each contain a 32/64-bit floating point multiplier, a 32/64-bit floating point adder, 6 KB of SRAM, an integer arithmetic logic unit, and a 16-bit integer multiply–accumulate unit. It currently sells its CSX700 processor on a PCI Express expansion card with 2 GB of memory, called the Advance e710. The card is supplied with the ClearSpeed Software Development Kit and application libraries. Related multi-core architectures include Ambric, PicoChip, Cell BE, Texas Memory Systems, and GPGPU stream processors such as AMD FireStream and Nvidia Tesla. ClearSpeed competes with AMD and Nvidia in the hardware acceleration market, where computationally intensive applications offload tasks to the accelerator. As of 2009, only the ClearSpeed e710 performs 64-bit arithmetic at its peak computational rate. History In November 2003 ClearSpeed demonstrated the CS301, with 64 processing elements running at 200 MHz, and peak 25.6 FP32 GFLOPS. In June 2005 ClearSpeed demonstrated the CSX600, with 96 processing elements running at 210 MHz, capable of 40 GFLOPS. In September 2005 John Gustafson joined ClearSpeed as CTO of high performance computing. In November 2005 ClearSpeed made its first significant sale of CSX600 processors to the Tokyo Institute of Technology using X620 Advance cards. In November 2006 ClearSpeed X620 Advance cards helped place the Tsubame cluster 7th in the TOP500 list of supercomputers. The cards continue to be used in 2009.
https://en.wikipedia.org/wiki/Visceroptosis
Visceroptosis is a prolapse or a sinking of the abdominal viscera (internal organs) below their natural position. "Ptosis" being the defining term, any or all of the organs may be displaced downward. When only the intestines are involved, the condition is known as enteroptosis. When the stomach is found below its normal position, the term gastroptosis is used. The condition exists in all degrees of severity and may not give rise to any adverse symptoms. Generally, however, there may be loss of appetite, heartburn, nervous indigestion, constipation, diarrhea, abdominal distention, headache, vertigo, emaciation, and loss of sleep. Any or all of these symptoms may be present. The condition may be brought about by loss of muscular tone, particularly of the abdominal muscles, with relaxation of the ligaments that typically hold the viscera in place. Tightlacing has been held to be a cause as well. Corsets to reduce the circumference of women's waists have been used to enable fashionable styles occurring during several historical periods, such as the late 1800s and early 1900s, when these symptoms were described for treatment by physicians. Adverse symptoms may be alleviated by supporting the organs with a properly applied bandage, or other similar device. Rest in bed, attention to diet, hygiene, exercise, and general muscular strengthening will cure the majority of cases. In some cases, surgical intervention may become necessary. Visceroptosis is a known risk factor for the development of Superior mesenteric artery syndrome. Visceroptosis also is known as Glénard's disease (after French physician Frantz Glénard [1848–1920]). Glénard's theory – the theory that abdominal ptosis is a nutritional disease with atrophy and prolapse of the intestine Glénard's test (also called girdle test) – while standing behind the patient, the examiner places his arms around the patient, so that his hands meet in front of the patient's abdomen; he squeezes, raising the viscera, and