https://en.wikipedia.org/wiki/Trichrome%20staining
Trichrome staining is a histological staining method that uses two or more acid dyes in conjunction with a polyacid. Staining differentiates tissues by tinting them in contrasting colours. It increases the contrast of microscopic features in cells and tissues, which makes them easier to see when viewed through a microscope. The word trichrome means "three colours". The first staining protocol that was described as "trichrome" was Mallory's trichrome stain, which differentially stained erythrocytes to an orange colour, muscle tissue to a red colour, and collagen to a blue colour. Some other trichrome staining protocols are the Masson's trichrome stain, Lillie's trichrome, and the Gömöri trichrome stain. Purpose Without trichrome staining, discerning one feature from another can be extremely difficult. Smooth muscle tissue, for example, is hard to differentiate from collagen. A trichrome stain can colour the muscle tissue red, and the collagen fibres green or blue. Liver biopsies may have fine collagen fibres between the liver cells, and the amount of collagen may be estimated based on the staining method. Trichrome methods are now used for differentiating muscle from collagen, pituitary alpha cells from beta cells, fibrin from collagen, and mitochondria in fresh frozen muscle sections, among other applications. It helps in identifying increases in collagenous tissue (i.e., fibrotic changes) such as in liver cirrhosis, and in distinguishing tumours arising from muscle cells from those arising from fibroblasts. Procedure Trichrome staining techniques employ two or more acid dyes. Normally the acid dyes would all stain the same basic proteins, but by applying them sequentially the staining pattern can be manipulated. A polyacid (such as phosphomolybdic acid or tungstophosphoric acid) is used to remove dye selectively. Polyacids are thought to behave as dyes with a high molecular weight: they displace easily removed dye from collagen. Usually a red dye in dilute acetic acid is applied first to oversta
https://en.wikipedia.org/wiki/Neisseria%20meningitidis
Neisseria meningitidis, often referred to as the meningococcus, is a Gram-negative bacterium that can cause meningitis and other forms of meningococcal disease such as meningococcemia, a life-threatening sepsis. The bacterium is referred to as a coccus because it is round, and more specifically a diplococcus because of its tendency to form pairs. About 10% of adults are carriers of the bacteria in their nasopharynx. As an exclusively human pathogen, it is the main cause of bacterial meningitis in children and young adults, causing developmental impairment and death in about 10% of cases. It causes the only form of bacterial meningitis known to occur epidemically, mainly in Africa and Asia. It occurs worldwide in both epidemic and endemic form. N. meningitidis is spread through saliva and respiratory secretions during coughing, sneezing, kissing, chewing on toys and through sharing a source of fresh water. It has also been reported to be transmitted through oral sex and cause urethritis in men. It infects its host cells by sticking to them with long thin extensions called pili and the surface-exposed proteins Opa and Opc and has several virulence factors. Signs and symptoms Meningococcus can cause meningitis and other forms of meningococcal disease. It initially produces general symptoms like fatigue, fever, and headache and can rapidly progress to neck stiffness, coma and death in 10% of cases. Petechiae occur in about 50% of cases. Chance of survival is highly correlated with blood cortisol levels, with lower levels prior to steroid administration corresponding with increased patient mortality. Symptoms of meningococcal meningitis are easily confused with those caused by other bacteria, such as Haemophilus influenzae and Streptococcus pneumoniae. Suspicion of meningitis is a medical emergency and immediate medical assessment is recommended. Current guidance in the United Kingdom is that if a case of meningococcal meningitis or septicaemia (infection of the bloo
https://en.wikipedia.org/wiki/Fumonisin%20B1
Fumonisin B1 is the most prevalent member of a family of toxins, known as fumonisins, produced by several species of Fusarium molds, such as Fusarium verticillioides, which occur mainly in maize (corn), wheat and other cereals. Fumonisin B1 contamination of maize has been reported worldwide at mg/kg levels. Human exposure occurs at levels of micrograms to milligrams per day and is greatest in regions where maize products are the dietary staple. Fumonisin B1 is hepatotoxic and nephrotoxic in all animal species tested. The earliest histological change to appear in either the liver or kidney of fumonisin-treated animals is increased apoptosis followed by regenerative cell proliferation. While the acute toxicity of fumonisin is low, it is the known cause of two diseases which occur in domestic animals with rapid onset: equine leukoencephalomalacia and porcine pulmonary oedema syndrome. Both of these diseases involve disturbed sphingolipid metabolism and cardiovascular dysfunction. History In 1970, an outbreak of leukoencephalomalacia in horses in South Africa was associated with the contamination of corn with the fungus Fusarium verticillioides. It is one of the most prevalent seed-borne fungi associated with corn. Another study was done on the possible role of fungal toxins in the etiology of human esophageal cancer in a region in South Africa. The diet of the people living in this area was homegrown corn, and F. verticillioides was the most prevalent fungus in the corn consumed by the people with a high incidence of esophageal cancer. Further outbreaks of leukoencephalomalacia, together with the high incidence of esophageal cancer among people in certain regions, led to more research on F. verticillioides. Researchers soon found experimentally that F. verticillioides caused leukoencephalomalacia in horses and pulmonary edema in pigs. The fungus was also found to be highly hepatotoxic and cardiotoxic in rats. In 1984 it was shown that the fungus was hepatocarcinogenic in rats. The chemical natur
https://en.wikipedia.org/wiki/5-Nitro-2-propoxyaniline
5-Nitro-2-propoxyaniline, also known as P-4000 and Ultrasüss, is a compound about 4,000 times as sweet as sucrose (hence its alternate name, P-4000). It is an orange solid that is only slightly soluble in water. It is stable in boiling water and dilute acids. 5-Nitro-2-propoxyaniline was once used as an artificial sweetener but has been banned in the United States because of its possible toxicity. In the US, food containing any added or detectable level of 5-nitro-2-propoxyaniline is deemed to be adulterated in violation of the act, based upon an order published in the Federal Register of January 19, 1950 (15 FR 321).
https://en.wikipedia.org/wiki/QCD%20vacuum
The QCD vacuum is the quantum vacuum state of quantum chromodynamics (QCD). It is an example of a non-perturbative vacuum state, characterized by non-vanishing condensates such as the gluon condensate and the quark condensate in the complete theory which includes quarks. The presence of these condensates characterizes the confined phase of quark matter. Symmetries and symmetry breaking Symmetries of the QCD Lagrangian Like any relativistic quantum field theory, QCD enjoys Poincaré symmetry including the discrete symmetries CPT (each of which is realized). Apart from these space-time symmetries, it also has internal symmetries. Since QCD is an SU(3) gauge theory, it has local SU(3) gauge symmetry. Since it has many flavours of quarks, it has approximate flavour and chiral symmetry. This approximation is said to involve the chiral limit of QCD. Of these chiral symmetries, the baryon number symmetry is exact. Some of the broken symmetries include the axial U(1) symmetry of the flavour group. This is broken by the chiral anomaly. The presence of instantons implied by this anomaly also breaks CP symmetry. In summary, the QCD Lagrangian has the following symmetries: Poincaré symmetry and CPT invariance; SU(3) local gauge symmetry; approximate global SU(Nf) × SU(Nf) flavour chiral symmetry (where Nf is the number of quark flavours); and the U(1) baryon number symmetry. The following classical symmetries are broken in the QCD Lagrangian: scale, i.e., conformal symmetry (through the scale anomaly), giving rise to asymptotic freedom; and the axial part of the U(1) flavour chiral symmetry (through the chiral anomaly), giving rise to the strong CP problem. Spontaneous symmetry breaking When the Hamiltonian of a system (or the Lagrangian) has a certain symmetry, but the vacuum does not, then one says that spontaneous symmetry breaking (SSB) has taken place. A familiar example of SSB is in ferromagnetic materials. Microscopically, the material consists of atoms with a non-vanishing spin, each of which acts like a tiny bar mag
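As a compact restatement of the symmetry pattern above, the following is a standard textbook summary (not quoted from the article), with N_f denoting the number of light quark flavours:

```latex
% Global flavour symmetry of classical QCD with N_f massless quarks:
\[
  G_{\mathrm{classical}} = SU(N_f)_L \times SU(N_f)_R \times U(1)_B \times U(1)_A ,
\]
% the axial U(1)_A is broken by the chiral anomaly, and the quark condensate
% spontaneously breaks the remaining chiral symmetry to its vector subgroup:
\[
  \langle \bar{q} q \rangle \neq 0 :
  \qquad SU(N_f)_L \times SU(N_f)_R \;\longrightarrow\; SU(N_f)_V .
\]
```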
https://en.wikipedia.org/wiki/Siegel%20zero
In mathematics, more specifically in the field of analytic number theory, a Landau–Siegel zero or simply Siegel zero (also known as exceptional zero), named after Edmund Landau and Carl Ludwig Siegel, is a type of potential counterexample to the generalized Riemann hypothesis, on the zeros of Dirichlet L-functions associated to quadratic number fields. Roughly speaking, these are possible zeros very near (in a quantifiable sense) to s = 1. Motivation and definition The way in which Siegel zeros appear in the theory of Dirichlet L-functions is as potential exceptions to the classical zero-free regions, which can only occur when the L-function is associated to a real Dirichlet character. Real primitive Dirichlet characters For an integer q ≥ 1, a Dirichlet character modulo q is an arithmetic function χ: Z → C satisfying the following properties: (Completely multiplicative) χ(mn) = χ(m)χ(n) for every m, n; (Periodic) χ(n + q) = χ(n) for every n; (Support) χ(n) = 0 if gcd(n, q) > 1. That is, χ is the lifting of a homomorphism (Z/qZ)^× → C^×. The trivial character is the character modulo 1, and the principal character modulo q, denoted χ0, is the lifting of the trivial homomorphism (Z/qZ)^× → {1}. A character χ modulo q is called imprimitive if there exists some integer d with d | q and d < q such that the induced homomorphism (Z/qZ)^× → C^× factors as (Z/qZ)^× → (Z/dZ)^× → C^× for some character χ′ modulo d; otherwise, χ is called primitive. A character is real (or quadratic) if it equals its complex conjugate χ̄ (defined as χ̄(n) = the complex conjugate of χ(n)), or equivalently if χ takes values only in {−1, 0, 1}. The real primitive Dirichlet characters are in one-to-one correspondence with the Kronecker symbols (D|·) for D a fundamental discriminant (i.e., the discriminant of a quadratic number field). One way to define (D|·) is as the completely multiplicative arithmetic function determined by its values (D|p) at primes p: for odd p, (D|p) is the Legendre symbol of D modulo p, while (D|2) equals 0 if D is even, 1 if D ≡ ±1 (mod 8), and −1 if D ≡ ±3 (mod 8). It is thus common to write χD(n) = (D|n); these are real primitive characters modulo |D|. Classical zero-free regions The Dirichlet L-function associated to a character χ is defined as the analytic continuation of the Dirichlet series L(s, χ) = ∑ χ(n)/n^s, summed over n ≥ 1 and defined for Re(s) > 1, where s is a complex variable. For χ non-principal, this continuation is entire.
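As a concrete illustration of the three defining properties above, here is a minimal Python sketch (illustrative, not from the article) that brute-force checks them for the real primitive character modulo 4, i.e. the Kronecker symbol (−4|n):

```python
# Minimal sketch: the real primitive Dirichlet character modulo 4,
# chi(n) = (-4|n): 1 if n % 4 == 1, -1 if n % 4 == 3, 0 if n is even.
from math import gcd

def chi(n: int) -> int:
    """Kronecker symbol (-4|n), the real primitive character mod 4."""
    n %= 4
    if n == 1:
        return 1
    if n == 3:
        return -1
    return 0  # support property: chi(n) = 0 when gcd(n, 4) > 1

# Brute-force check of the three defining properties on a small range.
N = 200
for m in range(1, N):
    for n in range(1, N):
        assert chi(m * n) == chi(m) * chi(n)      # completely multiplicative
for n in range(1, N):
    assert chi(n + 4) == chi(n)                   # periodic modulo 4
    assert gcd(n, 4) == 1 or chi(n) == 0          # supported on units mod 4
print("chi is a real primitive Dirichlet character modulo 4")
```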
https://en.wikipedia.org/wiki/Antarctic%20Technology%20Offshore%20Lagoon%20Laboratory
The Antarctic Technology Offshore Lagoon Laboratory (ATOLL) was a floating oceanographic laboratory for in situ observation experiments. This facility also tested instruments and equipment for polar expeditions. The ATOLL hull was the largest fiberglass structure ever built at that time. It was in operation from 1982 to 1995. Structure and infrastructure The ATOLL was composed of three curved fiberglass elements, each long and having a draught of only . For towing, the elements could be assembled in a long S-shape; in operation, the elements would form a horseshoe shape surrounding water surface. The facility provided ample space for twelve researchers and contained a laboratory, storage and supply facilities, a dormitory, a computer room, and a fireplace. The laboratory was installed and operated in the Baltic Sea (and the Bay of Kiel in particular) at the initiative and under the direction of Uwe Kils, at the Institute of Oceanography (Institut für Meereskunde) of the University of Kiel. The fiberglass hulls themselves were bought from Waki Zöllner's "Atoll" company. The onboard computer was a NeXT, and the first versions of a virtual microscope of Antarctic krill for interactive dives into their morphology and behavior were developed here, later finding mention in Science magazine. The lab was connected to the Internet via a radio link, and the first images of ocean critters on the internet came from this NeXT. The first ever in situ videos of Atlantic herring feeding on copepods were recorded from this lab. An underwater observation and experimentation room allowed direct observation and manipulation through large portholes. The technical equipment included an ultra-high-resolution scanning sonar that was used for locating schools of juvenile herring, for guiding an ROV, which was controlled via a cyberhelmet and glove, and for determining positions, distances, and speeds. Probes measured the water salinity, temperature, and oxygen levels. Special instrument
https://en.wikipedia.org/wiki/Facial%20symmetry
Facial symmetry is one specific measure of bodily symmetry. Along with traits such as averageness and youthfulness, it influences judgments of aesthetic traits of physical attractiveness and beauty. For instance, in mate selection, people have been shown to have a preference for symmetry. Facial bilateral symmetry is typically assessed as fluctuating asymmetry of the face, that is, by comparing random differences in facial features between the two sides of the face. The human face also has systematic, directional asymmetry: on average, the face (mouth, nose and eyes) sits systematically to the left with respect to the axis through the ears, the so-called aurofacial asymmetry. Directional asymmetry Directional asymmetry is systematic. The average across the population is not "symmetric", but statistically significantly biased in one direction. That means that individuals of a species can be symmetric, or even asymmetric to the opposite side (see, e.g., handedness), but most individuals are asymmetric to the same side. The relation between directional and fluctuating asymmetry is comparable to the concepts of accuracy and precision in empirical measurements. There are examples from the brain (Yakovlevian torque), the spine, and inner organs (see axial twist theory), but also from various animals (see Symmetry in biology). Aurofacial asymmetry Aurofacial asymmetry (from Latin auris 'ear' and faciēs 'face') is an example of directional asymmetry of the face. It refers to the left-sided offset of the face (i.e. eyes, nose, and mouth) with respect to the ears. On average, the face's offset is slightly to the left, meaning that the right side of the face appears larger than the left side. The offset is larger in newborns and reduces gradually during growth. Anatomy and definition In contrast to fluctuating asymmetry, directional asymmetry is systematic, i.e. across the population it is systematically more often in one direction than in the other. It means that across the population a devi
https://en.wikipedia.org/wiki/Social%20behavior
Social behavior is behavior among two or more organisms within the same species, and encompasses any behavior in which one member affects the other. This is due to an interaction among those members. Social behavior can be seen as similar to an exchange of goods, with the expectation that when you give, you will receive the same. This behavior can be affected by both the qualities of the individual and the environmental (situational) factors. Therefore, social behavior arises as a result of an interaction between the two—the organism and its environment. This means that, in regard to humans, social behavior can be determined by both the individual characteristics of the person and the situation they are in. A major aspect of social behavior is communication, which is the basis for survival and reproduction. Social behavior is said to be determined by two different processes, that can either work together or oppose one another. The dual-systems model of reflective and impulsive determinants of social behavior came out of the realization that behavior cannot just be determined by one single factor. Instead, behavior can arise from conscious processing (where there is awareness and intent) or from pure impulse. These factors that determine behavior can work in different situations and moments, and can even oppose one another. While at times one can behave with a specific goal in mind, at other times one can behave without rational control, driven by impulse instead. There are also distinctions between different types of social behavior, such as mundane versus defensive social behavior. Mundane social behavior is a result of interactions in day-to-day life, and consists of behaviors learned as one is exposed to those different situations. On the other hand, defensive behavior arises out of impulse, when one is faced with conflicting desires. The development of social behavior Social behavior constantly changes as one continues to grow and develop, reaching
https://en.wikipedia.org/wiki/Radiosensitivity
Radiosensitivity is the relative susceptibility of cells, tissues, organs or organisms to the harmful effect of ionizing radiation. Cell types affected Cells are least sensitive when in the S phase, then the G1 phase, then the G2 phase, and most sensitive in the M phase of the cell cycle. This is described by the 'law of Bergonié and Tribondeau', formulated in 1906: X-rays are more effective on cells which have a greater reproductive activity. From their observations, they concluded that quickly dividing tumor cells are generally more sensitive than the majority of body cells. This is not always true. Tumor cells can be hypoxic and therefore less sensitive to X-rays, because most of their effects are mediated by the free radicals produced by ionizing oxygen. It has meanwhile been shown that the most sensitive cells are those that are undifferentiated, well nourished, dividing quickly and highly active metabolically. Amongst the body cells, the most sensitive are spermatogonia and erythroblasts, epidermal stem cells, and gastrointestinal stem cells. The least sensitive are nerve cells and muscle fibers. Oocytes and lymphocytes are also very sensitive, although they are resting cells and do not meet the criteria described above. The reasons for their sensitivity are not clear. There also appears to be a genetic basis for the varied vulnerability of cells to ionizing radiation. This has been demonstrated across several cancer types and in normal tissues. Cell damage classification The damage to the cell can be lethal (the cell dies) or sublethal (the cell can repair itself). Cell damage can ultimately lead to health effects which can be classified as either Tissue Reactions or Stochastic Effects according to the International Commission on Radiological Protection. Tissue reactions Tissue reactions have a threshold of irradiation under which they do not appear and above which they typically appear. Fractionation of dose, dose rate, the application of antioxidan
https://en.wikipedia.org/wiki/Large%20sieve
The large sieve is a method (or family of methods and related ideas) in analytic number theory. It is a type of sieve where up to half of all residue classes of numbers are removed, as opposed to small sieves such as the Selberg sieve wherein only a few residue classes are removed. The method has been further extended by the larger sieve, which removes arbitrarily many residue classes. Name Its name comes from its original application: given a set S such that the elements of S are forbidden to lie in a set Ap ⊂ Z/pZ modulo every prime p, how large can S be? Here Ap is thought of as being large, i.e., at least as large as a constant times p; if this is not the case, we speak of a small sieve. History The early history of the large sieve traces back to work of Yu. B. Linnik, in 1941, working on the problem of the least quadratic non-residue. Subsequently Alfréd Rényi worked on it, using probability methods. It was only two decades later, after quite a number of contributions by others, that the large sieve was formulated in a way that was more definitive. This happened in the early 1960s, in independent work of Klaus Roth and Enrico Bombieri. It is also around that time that the connection with the duality principle became better understood. In the mid-1960s, the Bombieri–Vinogradov theorem was proved as a major application of large sieves using estimations of mean values of Dirichlet characters. In the late 1960s and early 1970s, many of the key ingredients and estimates were simplified by Patrick X. Gallagher. Development Large-sieve methods have been developed enough that they are applicable to small-sieve situations as well. Something is commonly seen as related to the large sieve not necessarily in terms of whether it is related to the kind of situation outlined above, but, rather, if it involves one of the two methods of proof traditionally used to yield a large-sieve result: Approximate Plancherel inequality If a set S is ill-distributed modulo p (by vir
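The excerpt characterizes the large sieve only qualitatively. For reference, one standard analytic formulation, not quoted in the excerpt and stated here in a common normalization, is the classical large sieve inequality:

```latex
% Classical large sieve inequality: for any complex numbers a_n and Q, N >= 1,
\[
  \sum_{q \le Q} \sum_{\substack{a = 1 \\ \gcd(a, q) = 1}}^{q}
  \Biggl| \sum_{M < n \le M + N} a_n \, e\!\left(\frac{a n}{q}\right) \Biggr|^2
  \;\le\; \left(N + Q^2\right) \sum_{M < n \le M + N} \lvert a_n \rvert^2 ,
  \qquad e(x) := e^{2\pi i x} .
\]
```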
https://en.wikipedia.org/wiki/Data%20General/One
The Data General/One (DG-1) was a laptop introduced in 1984 by Data General. Description The nine-pound battery-powered 1984 Data General/One ran MS-DOS and had dual 3.5" diskettes, a 79-key full-stroke keyboard, 128 KB to 512 KB of RAM, and a monochrome LCD screen capable of either the standard 80×25 characters or full CGA graphics (640×200). It was a laptop comparable in capabilities to desktops of the era. History The Data General/One offered several advantages over contemporary portable computers. For instance, the popular 1983 Radio Shack TRS-80 Model 100, a non-PC-compatible machine, was comparably sized. It was a small battery-operated computer resting in one's lap, but had a 40×8 character (240×64 pixel) screen, a rudimentary ROM-based menu in lieu of a full OS, and no built-in floppy. IBM's 1984 Portable PC was comparable in capability with desktops, but was not battery operable and, being much larger and heavier, was by no means a laptop. Drawbacks The DG-1 was only a modest success. One problem was its use of 3.5" diskettes. Popular software titles were thus not widely available (5.25" being still the standard), a serious issue since then-common diskette copy-protection schemes made it difficult for users to copy software into that format. The device achieved moderate success in a large OEM deal with Allen-Bradley, where it was private labelled as a T-45 "programming terminal" and was resold from 1987 to 1991, with thousands of units sold. The CPU was a CMOS version of the 8086, compatible with the IBM PC's 8088 except that it ran slightly slower, at 4.0 MHz instead of the standard 4.77 MHz. Unlike the Portable PC, the DG-1 laptop could not take regular PC/XT expansion cards. RS-232 serial ports were built-in, but the CMOS (low battery consumption) serial I/O chip available at design time, a CMOS version of the Intel 8251, was register-incompatible with the 8250 serial IC standard for the IBM PC. As a result, software written for the PC ser
https://en.wikipedia.org/wiki/Afforestation
Afforestation is the establishment of a forest or stand of trees (forestation) in an area where there was no recent tree cover. Many government and non-governmental organizations directly engage in afforestation programs to create forests and increase carbon capture. Afforestation is an increasingly sought-after method to fight climate concerns, as it is known to increase soil quality and the organic carbon levels in the soil, avoiding desertification. Afforestation is mainly done for conservational and commercial purposes. Procedure The process of afforestation begins with site selection. Several environmental factors of the site must be analyzed, including climate, soil, vegetation, and human activity. These factors will determine the quality of the site, what species of trees should be planted, and what planting method should be used. After the forest site has been assessed, the area must be prepared for planting. Preparation can involve a variety of mechanical or chemical methods, such as chopping, mounding, bedding, herbicides, and prescribed burning. Once the site is prepared, planting can take place. One method for planting is direct seeding, which involves sowing seeds directly into the forest floor. Another is seedling planting, which is similar to direct seeding except that seedlings already have an established root system. Afforestation by cutting is an option for tree species that can reproduce asexually, where a piece of a tree stem, branch, root, or leaves can be planted onto the forest floor and sprout successfully. Sometimes special tools, such as a tree planting bar, are used to make planting of trees easier and faster. Benefits Climate change mitigation Afforestation boasts many climate-related benefits. Afforestation helps to slow down global warming by reducing CO2 in the atmosphere and introducing more O2. Trees are carbon sinks that remove CO2 from the atmosphere via photosynthesis and convert it into biomass. Scale The rate of net forest
https://en.wikipedia.org/wiki/Mapinguari
Mapinguari, or Mapinguary, (also called the Juma) are mythical monstrous jungle-dwelling spirits from Brazilian folklore said to protect the forest and its animals. Description There are various depictions of the mapinguari. Prior to 1933, traditional folklore described it as a former human shaman turned into a hairy humanoid cyclops. This version is often said to have a gaping mouth on its abdomen, with its feet turned backwards. Creatures with such feet, which confuse those trying to track them, are found in folklore around the world. In the latter half of the 20th century, some cryptozoologists speculated that the mapinguari might be an unknown primate, akin to Bigfoot. Others claim that it is a modern-day sighting of the giant ground sloth, an animal estimated to have gone extinct during the early Holocene epoch. These later descriptions may be attributed to David C. Oren, an ornithologist, who heard stories of the creature and hypothesized that they might be the extinct sloths. This was met with criticism by scientists at the time, but an article Oren published in 1993 was picked up by major newspapers despite the lack of evidence. Skeptics point out that there have not been any fossil records of ground sloths for thousands of years. A 2023 academic study of the 1995 discovery of giant sloth bones "modified into primordial pendants" suggested that humans lived in the Americas contemporaneously with the giant sloth, and specifically that "it may have served as inspiration for the Mapinguari, a mythical beast that, in Amazonian legend, had the nasty habit of twisting off the heads of humans and devouring them." Terminology According to Felipe Ferreira Vander Velden, its name is a combination of the Tupi-Guarani words "mbappé", "pi", and "guari", meaning "a thing that has a bent [or] crooked foot [or] paw". Other names by which they are referred to include the Karitiana kida harara, and the Machiguenga segamai. See also List of legendary creatures Mylodon
https://en.wikipedia.org/wiki/Distortion%20%28optics%29
In geometric optics, distortion is a deviation from rectilinear projection; a projection in which straight lines in a scene remain straight in an image. It is a form of optical aberration. Radial distortion Although distortion can be irregular or follow many patterns, the most commonly encountered distortions are radially symmetric, or approximately so, arising from the symmetry of a photographic lens. These radial distortions can usually be classified as either barrel distortions or pincushion distortions. Mathematically, barrel and pincushion distortion are quadratic, meaning they increase as the square of distance from the center. In mustache distortion the quartic (degree 4) term is significant: in the center, the degree 2 barrel distortion is dominant, while at the edge the degree 4 distortion in the pincushion direction dominates. Other distortions are in principle possible – pincushion in center and barrel at the edge, or higher order distortions (degree 6, degree 8) – but do not generally occur in practical lenses, and higher order distortions are small relative to the main barrel and pincushion effects. Occurrence In photography, distortion is particularly associated with zoom lenses, especially large-range zooms, but may also be found in prime lenses, and depends on focal distance – for example, the Canon EF 50mm f/1.4 exhibits barrel distortion at extremely short focal distances. Barrel distortion may be found in wide-angle lenses, and is often seen at the wide-angle end of zoom lenses, while pincushion distortion is often seen in older or low-end telephoto lenses. Mustache distortion is observed particularly on the wide end of zooms, with certain retrofocus lenses, and more recently on large-range zooms such as the Nikon 18–200 mm. A certain amount of pincushion distortion is often found with visual optical instruments, e.g., binoculars, where it serves to counteract the globe effect. In order to understand these distortions, it should be remembe
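The degree-2 and degree-4 behaviour described above corresponds to the usual even-order polynomial model of radial distortion. Below is a minimal Python sketch of that model; the coefficient names k1 and k2 and the sample values are illustrative assumptions, and sign conventions vary between tools:

```python
def radial_distort(x: float, y: float, k1: float, k2: float = 0.0):
    """Map an undistorted (normalized) image point to its distorted position
    using r_d = r * (1 + k1*r**2 + k2*r**4).  k1 alone gives barrel or
    pincushion distortion depending on its sign; a k2 term of the opposite
    sign yields the mixed 'mustache' profile described in the passage."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# The relative displacement grows with the square (k1 term) and fourth
# power (k2 term) of the distance from the image center:
for r in (0.2, 0.5, 1.0):
    xd, _ = radial_distort(r, 0.0, k1=-0.15, k2=0.10)
    print(f"r = {r:.1f} -> distorted r = {xd:.3f}")
```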
https://en.wikipedia.org/wiki/Cartan%E2%80%93K%C3%A4hler%20theorem
In mathematics, the Cartan–Kähler theorem is a major result on the integrability conditions for differential systems, in the case of analytic functions, for differential ideals I. It is named for Élie Cartan and Erich Kähler. Meaning It is not true that merely having dI contained in I is sufficient for integrability. There is a problem caused by singular solutions. The theorem computes certain constants that must satisfy an inequality in order that there be a solution. Statement Let I be a real analytic exterior differential system (EDS) on a manifold M. Assume that P is a connected, k-dimensional, real analytic, regular integral manifold of I with r(P) ≥ 0 (i.e., the tangent spaces T_pP are "extendable" to higher-dimensional integral elements). Moreover, assume there is a real analytic submanifold R of M of codimension r(P) containing P and such that T_pR ∩ H(T_pP) has dimension k + 1 for all p in P, where H(T_pP) denotes the polar space of T_pP. Then there exists a (locally) unique connected, (k + 1)-dimensional, real analytic integral manifold X of I that satisfies P ⊂ X ⊂ R. Proof and assumptions The Cauchy–Kovalevskaya theorem is used in the proof, so the analyticity is necessary.
https://en.wikipedia.org/wiki/Extended%20affix%20grammar
In computer science, extended affix grammars (EAGs) are a formal grammar formalism for describing the context-free and context-sensitive syntax of language, both natural language and programming languages. EAGs are a member of the family of two-level grammars; more specifically, a restriction of Van Wijngaarden grammars with the specific purpose of making parsing feasible. Like Van Wijngaarden grammars, EAGs have hyperrules that form a context-free grammar except that their nonterminals may have arguments, known as affixes, the possible values of which are supplied by another context-free grammar, the metarules. EAGs were introduced and studied by D.A. Watt in 1974; recognizers were developed at the University of Nijmegen between 1985 and 1995. The EAG compiler developed there will generate either a recogniser, a transducer, a translator, or a syntax-directed editor for a language described in the EAG formalism. The formalism is quite similar to Prolog, to the extent that it borrowed its cut operator. EAGs have been used to write grammars of natural languages such as English, Spanish, and Hungarian. The aim was to verify the grammars by making them parse corpora of text (corpus linguistics); hence, parsing had to be sufficiently practical. However, the parse-tree explosion problem that ambiguities in natural language tend to produce in this type of approach is worsened for EAGs, because each choice of affix value may produce a separate parse, even when several different values are equivalent. The remedy proposed was to switch to the much simpler Affix Grammar over a Finite Lattice (AGFL) instead, in which metagrammars can only produce simple finite languages. See also Affix grammar Corpus linguistics External links Informal introduction to the Extended Affix Grammar formalism and its compiler, by Marc Seutter, University of Nijmegen EAG project website, University of Nijmegen public announcement of the EAG software release, in comp.compilers, by
https://en.wikipedia.org/wiki/Hypervelocity
Hypervelocity is very high velocity, above approximately 3,000 meters per second (6,700 mph, 11,000 km/h, 10,000 ft/s, or Mach 8.8). In particular, hypervelocity is velocity so high that the strength of materials upon impact is very small compared to inertial stresses. Thus, metals and fluids behave alike under hypervelocity impact. Extreme hypervelocity results in vaporization of the impactor and target. For structural metals, hypervelocity is generally considered to be over 2,500 m/s (5,600 mph, 9,000 km/h, 8,200 ft/s, or Mach 7.3). Meteorite craters are also examples of hypervelocity impacts. Overview The term "hypervelocity" refers to velocities in the range from a few kilometers per second to some tens of kilometers per second. This is especially relevant in the field of space exploration and military use of space, where hypervelocity impacts (e.g. by space debris or an attacking projectile) can result in anything from minor component degradation to the complete destruction of a spacecraft or missile. The impactor, as well as the surface it hits, can undergo temporary liquefaction. The impact process can generate plasma discharges, which can interfere with spacecraft electronics. Hypervelocity usually occurs during meteor showers and deep space reentries, as carried out during the Zond, Apollo and Luna programs. Given the intrinsic unpredictability of the timing and trajectories of meteors, space capsules are prime data gathering opportunities for the study of thermal protection materials at hypervelocity (in this context, hypervelocity is defined as greater than escape velocity). Given the rarity of such observation opportunities since the 1970s, the Genesis and Stardust Sample Return Capsule (SRC) reentries as well as the recent Hayabusa SRC reentry have spawned observation campaigns, most notably at NASA's Ames Research Center. Hypervelocity collisions can be studied by examining the results of naturally occurring collisions (between micrometeorites and
https://en.wikipedia.org/wiki/Abbreviated%20dialing
Abbreviated dialing is the use of a very short digit sequence to reach specific telephone numbers, such as those of public services. The purpose of such numbers is to be universal, short, and easy to remember. Typically they are two or three digits. Carriers refer to the shortened number sequences as abbreviated dialing codes (ADCs). Unlike SMS short codes, they are generally not automatically synchronized across carriers. ADCs are provisioned separately for mobile networks versus landline networks. Examples The most commonly known examples are emergency telephone numbers such as 9-9-9, 1-1-2 and 9-1-1. Other services may also be available through abbreviated dialing numbers, such as the seven other N11 codes of the North American Numbering Plan (NANP) besides 9-1-1. State highway departments in recent years have used abbreviated dialing codes to allow drivers to obtain information about road conditions or to reach the state highway patrol. Examples are *55 in Missouri and Oklahoma, or *FHP which connects to the Florida Highway Patrol. In December 2019, the Federal Communications Commission proposed making 9-8-8 a national number in the United States for the National Suicide Prevention Lifeline. Security Privileged group-number, system-number, and enhanced-number lists provide access to numbers that typically would be restricted. Similar concepts For text messaging, the technical equivalent is a short code; however, these are rented by their private users rather than being universal and for public services. Vertical service codes may also be considered as abbreviated dialing, though these are often prefixed by the special touch-tone characters * and # (or often 11 for pulse dialing) instead of using only numerals. Most are used to access calling features rather than a called party, and some are specific to each telephone company. Some are used only locally or regionally (such as *FHP (*347) to reach the Florida Highway Patrol); other codes as shor
https://en.wikipedia.org/wiki/Video%20game%20clone
A video game clone is either a video game or a video game console very similar to, or heavily inspired by, a previous popular game or console. Clones are typically made to take financial advantage of the popularity of the cloned game or system, but clones may also result from earnest attempts to create homages or expand on game mechanics from the original game. An additional motivation, unique to the medium of games as software with limited compatibility, is the desire to port a simulacrum of a game to platforms that the original is unavailable for or unsatisfactorily implemented on. The legality of video game clones is governed by copyright and patent law. In the 1970s, Magnavox controlled several patents to the hardware for Pong, and pursued action against unlicensed Pong clones that led to court rulings in their favor, as well as legal settlements for compensation. As game production shifted to software on discs and cartridges, Atari sued Philips under copyright law, allowing them to shut down several clones of Pac-Man. By the end of the 1980s, courts had ruled in favor of a few alleged clones, and the high costs of a lawsuit meant that most disputes with alleged clones were ignored or settled through to the mid-2000s. In 2012, courts ruled against alleged clones in both Tetris Holding, LLC v. Xio Interactive, Inc. and Spry Fox, LLC v. Lolapps, Inc., due to explicit similarities between the games' expressive elements. Legal scholars agree that these cases establish that general game ideas, game mechanics, and stock scenes cannot be protected by copyright – only the unique expression of those ideas. However, the high cost of a lawsuit combined with the fact-specific nature of each dispute has made it difficult to predict which game developers can protect their games' look and feel from clones. Other methods like patents, trademarks, and industry regulation have played a role in shaping the prevalence of clones. Overview Cloning a game in digital marketplaces is
https://en.wikipedia.org/wiki/Percentage%20point
A percentage point or percent point is the unit for the arithmetic difference between two percentages. For example, moving up from 40 percent to 44 percent is an increase of 4 percentage points (although it is a 10-percent increase in the quantity being measured, if the total amount remains the same). In written text, the unit (the percentage point) is usually either written out, or abbreviated as pp, p.p., or %pt. to avoid confusion with percentage increase or decrease in the actual quantity. After the first occurrence, some writers abbreviate by using just "point" or "points". Differences between percentages and percentage points Consider the following hypothetical example: In 1980, 50 percent of the population smoked, and in 1990 only 40 percent of the population smoked. One can thus say that from 1980 to 1990, the prevalence of smoking decreased by 10 percentage points (or by 10 percent of the population) or by 20 percent when talking about smokers only – percentages indicate proportionate part of a total. Percentage-point differences are one way to express a risk or probability. Consider a drug that cures a given disease in 70 percent of all cases, while without the drug, the disease heals spontaneously in only 50 percent of cases. The drug reduces absolute risk by 20 percentage points. Alternatives may be more meaningful to consumers of statistics, such as the reciprocal, also known as the number needed to treat (NNT). In this case, the reciprocal transform of the percentage-point difference would be 1/(20pp) = 1/0.20 = 5. Thus if 5 patients are treated with the drug, one could expect to cure one more patient than would have occurred in the absence of the drug. For measurements involving percentages as a unit, such as growth, yield, or ejection fraction, statistical deviations and related descriptive statistics, including the standard deviation and root-mean-square error, should be expressed in units of percentage points instead of percentage
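The arithmetic in the passage can be made concrete with a short calculation; this Python sketch simply re-derives the numbers quoted above:

```python
# Percentage points vs. percent, and the number needed to treat (NNT)
# as the reciprocal of the percentage-point (absolute risk) difference.
smokers_1980, smokers_1990 = 0.50, 0.40
pp_change = (smokers_1990 - smokers_1980) * 100                        # -10 percentage points
relative_change = (smokers_1990 - smokers_1980) / smokers_1980 * 100   # -20 percent of smokers

cure_with_drug, cure_without_drug = 0.70, 0.50
arr = cure_with_drug - cure_without_drug    # absolute risk reduction: 0.20 = 20 pp
nnt = 1 / arr                               # 1 / 0.20 = 5 patients

print(f"{pp_change:+.0f} pp, {relative_change:+.0f}% relative, NNT = {nnt:.0f}")
```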
https://en.wikipedia.org/wiki/IP%20traceback
IP traceback is any method for reliably determining the origin of a packet on the Internet. The IP protocol does not provide for the authentication of the source IP address of an IP packet, enabling the source address to be falsified in a strategy called IP address spoofing, and creating potential internet security and stability problems. Use of false source IP addresses allows denial-of-service attacks (DoS) or one-way attacks (where the response from the victim host is so well known that return packets need not be received to continue the attack). IP traceback is critical for identifying sources of attacks and instituting protection measures for the Internet. Most existing approaches to this problem have been tailored toward DoS attack detection. Such solutions require high numbers of packets to converge on the attack path(s). Probabilistic packet marking Savage et al. suggested probabilistically marking packets as they traverse routers through the Internet. They propose that the router mark the packet with either the router’s IP address or the edges of the path that the packet traversed to reach the router. For the first alternative, marking packets with the router's IP address, analysis shows that in order to gain the correct attack path with 95% accuracy as many as 294,000 packets are required. The second approach, edge marking, requires that the two nodes that make up an edge mark the path with their IP addresses along with the distance between them. This approach would require more state information in each packet than simple node marking but would converge much faster. They suggest three ways to reduce the state information of these approaches into something more manageable. The first approach is to XOR each node forming an edge in the path with each other. Node a inserts its IP address into the packet and sends it to b. Upon being detected at b (by detecting a 0 in the distance), b XORs its address with the address of a. This new data entity is ca
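The XOR-based edge marking just described can be illustrated with a toy simulation. The sketch below is a deterministic simplification (one mark per distance, and small integers standing in for 32-bit router addresses), not the full probabilistic scheme from Savage et al.:

```python
# Toy sketch of XOR edge marking: a marking router writes its address and
# distance 0; the next router XORs its own address in, and every later
# router increments the distance.  The victim undoes the XORs hop by hop.
PATH = [0xA1, 0xB2, 0xC3, 0xD4]   # attacker-side router ... victim-side router

def mark_packet(path, mark_index):
    """Packet marked at path[mark_index]; return (edge_field, distance)."""
    edge, distance = path[mark_index], 0
    for router in path[mark_index + 1:]:
        if distance == 0:
            edge ^= router        # edge field now holds a XOR b for edge (a, b)
        distance += 1
    return edge, distance

# The victim collects one mark per distance (many packets are needed in practice).
marks = {d: e for e, d in (mark_packet(PATH, i) for i in range(len(PATH)))}

# Reconstruct outward from the victim: XORing each mark at distance d with
# the previously recovered address yields the next router on the path.
recovered = [marks[0]]
for d in range(1, len(PATH)):
    recovered.append(marks[d] ^ recovered[-1])
print([hex(r) for r in recovered])   # victim-side ... attacker-side routers
```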
https://en.wikipedia.org/wiki/Osteolysis
Osteolysis is an active resorption of bone matrix by osteoclasts and can be interpreted as the reverse of ossification. Although osteoclasts are active during the natural formation of healthy bone, the term "osteolysis" specifically refers to a pathological process. Osteolysis often occurs in the proximity of a prosthesis that causes either an immunological response or changes in the bone's structural load. Osteolysis may also be caused by pathologies like bone tumors, cysts, or chronic inflammation. Joint replacement While bone resorption is commonly associated with many diseases or joint problems, the term osteolysis generally refers to a problem common to artificial joint replacements such as total hip replacements, total knee replacements and total shoulder replacements. Osteolysis can also be associated with the radiographic changes seen in those with bisphosphonate-related osteonecrosis of the jaw. There are several biological mechanisms which may lead to osteolysis. In total hip replacement, the generally accepted explanation for osteolysis involves wear particles (worn off the contact surface of the artificial ball and socket joint). As the body attempts to clean up these wear particles (typically consisting of plastic or metal), it triggers an autoimmune reaction which causes resorption of living bone tissue. Osteolysis has been reported to occur as early as 12 months after implantation and is usually progressive. This may require a revision surgery (replacement of the prosthesis). Although osteolysis itself is clinically asymptomatic, it can lead to implant loosening or bone breakage, which in turn causes serious medical problems. Distal clavicular osteolysis Distal clavicular osteolysis (DCO) is often associated with problems weightlifters have with their acromioclavicular joints due to high mechanical stresses put on the clavicle as it meets with the acromion. This condition is often referred to as "weight lifter's shoulder". Medical ultrasonography r
https://en.wikipedia.org/wiki/Fluctuating%20asymmetry
Fluctuating asymmetry (FA) is a form of biological asymmetry, along with anti-symmetry and directional asymmetry. Fluctuating asymmetry refers to small, random deviations away from perfect bilateral symmetry. This deviation from perfection is thought to reflect the genetic and environmental pressures experienced throughout development, with greater pressures resulting in higher levels of asymmetry. Examples of FA in the human body include unequal sizes (asymmetry) of bilateral features in the face and body, such as left and right eyes, ears, wrists, breasts, testicles, and thighs. Research has identified multiple factors that are associated with FA. As measuring FA can indicate developmental stability, it can also suggest the genetic fitness of an individual. This can further have an effect on mate attraction and sexual selection, as less asymmetry reflects greater developmental stability and subsequent fitness. Human physical health is also associated with FA. For example, young men with greater FA report more medical conditions than those with lower levels of FA. Multiple other factors can be linked to FA, such as intelligence and personality traits. Measurement Fluctuating asymmetry (FA) can be measured by the equation: Mean FA = the mean of the absolute differences between the left and right sides, mean(|L − R|), taken across traits or individuals. The closer the mean value is to zero, the lower the level of FA, indicating more symmetrical features. Taking many measurements of multiple traits per individual increases the accuracy of determining that individual's developmental stability. However, these traits must be chosen carefully, as different traits are affected by different selection pressures. This equation can further be used to study the distribution of asymmetries at population levels, to distinguish between traits that show FA, directional asymmetry, and anti-symmetry. The distribution of FA around a mean point of zero suggests that FA is not an adaptive trait, where symmetry is ideal. Dire
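A minimal sketch of the mean-FA calculation above, with illustrative measurements (the trait names and values are invented for the example):

```python
# Mean FA = average absolute left-right difference across bilateral traits.
# Measurements are illustrative, in millimetres.
left  = {"ear_length": 61.2, "wrist_width": 55.0, "foot_length": 251.0}
right = {"ear_length": 60.4, "wrist_width": 55.6, "foot_length": 249.8}

mean_fa = sum(abs(left[t] - right[t]) for t in left) / len(left)
print(f"Mean FA = {mean_fa:.2f} mm")   # closer to zero => more symmetrical
```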
https://en.wikipedia.org/wiki/Inferior%20oblique%20muscle
The inferior oblique muscle or obliquus oculi inferior is a thin, narrow muscle placed near the anterior margin of the floor of the orbit. The inferior oblique is one of the extraocular muscles, and is attached to the maxillary bone (origin) and the posterior, inferior, lateral surface of the eye (insertion). The inferior oblique is innervated by the inferior branch of the oculomotor nerve. Structure The inferior oblique arises from the orbital surface of the maxilla, lateral to the lacrimal groove. Unlike the other extraocular muscles (recti and superior oblique), the inferior oblique muscle does not originate from the common tendinous ring (annulus of Zinn). Passing lateralward, backward, and upward, between the inferior rectus and the floor of the orbit, and just underneath the lateral rectus muscle, the inferior oblique inserts onto the scleral surface between the inferior rectus and lateral rectus. In humans, the muscle is about 35 mm long. Innervation The inferior oblique is innervated by the inferior division of the oculomotor nerve (cranial nerve III). Function Its actions are extorsion, elevation and abduction of the eye. Primary action is extorsion (external rotation); secondary action is elevation; tertiary action is abduction (i.e. it extorts the eye and moves it upward and outwards). The field of maximal inferior oblique elevation is in the adducted position. The inferior oblique muscle is the only muscle that is capable of elevating the eye when it is in a fully adducted position. Clinical significance While commonly affected by palsies of the inferior division of the oculomotor nerve, isolated palsies of the inferior oblique (without affecting other functions of the oculomotor nerve) are quite rare. "Overaction" of the inferior oblique muscle is a commonly observed component of childhood strabismus, particularly infantile esotropia and exotropia. Because true hyperinnervation is not usually present, this phenomenon is better termed "elev
https://en.wikipedia.org/wiki/Professional%20Graphics%20Controller
Professional Graphics Controller (PGC, often called Professional Graphics Adapter and sometimes Professional Graphics Array) is a graphics card manufactured by IBM for PCs. It consists of three interconnected PCBs, and contains its own processor and memory. The PGC was, at the time of its release, the most advanced graphics card for the IBM XT and aimed for tasks such as CAD. Introduced in 1984, the Professional Graphics Controller offered a maximum resolution of 640 × 480 with 256 colors on an analog RGB monitor, at a refresh rate of 60 hertz—a higher resolution and color depth than CGA and EGA supported. This mode is not BIOS-supported. It was intended for the computer-aided design market and included 320 KB of display RAM and an on-board Intel 8088 microprocessor. The 8088 ran software routines such as "draw polygon" and "fill area" from an on-board 64 KB ROM so that the host CPU didn't need to load and run these routines itself. While never widespread in consumer-class personal computers, its list price, plus $1,295 display, compared favorably to US$50,000 dedicated CAD workstations of the time (even when the $4,995 price of a PC XT Model 87 was included). It was discontinued in 1987 with the arrival of VGA and 8514. Software support The board was targeted at the CAD market; therefore, limited software support is to be expected. The only software known to support the PGC are IBM's Graphical Kernel System, P-CAD 4.5, Canyon State Systems CompuShow and AutoCAD 2.5. Output capabilities PGC supports: 640 × 480 with 256 colors from a palette of 4,096 (12-bit RGB palette, or 4 bits per color component). Color Graphics Adapter text and graphics modes. Text modes use a font with 8×16-pixel character cells and have 400 rows of pixels. There are six possible color arrangements: Default 256-colour palette - Low 4 bits intensity, high 4 bits colour; 16-colour palette - Makes the PGC behave as two 16-colour planes. If high 4 bits are 0, low 4 bits are colour; otherw
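The palette arithmetic above (4 bits per colour component, hence 16**3 = 4,096 colours, of which 256 can be displayed at once) can be illustrated with a short sketch; the bit layout chosen here (red in the high nibble) is an assumption for illustration, not documented PGC behaviour:

```python
def rgb12_to_rgb24(entry: int) -> tuple[int, int, int]:
    """Expand a 12-bit RGB palette entry (0x000-0xFFF) to 8 bits per channel."""
    r, g, b = (entry >> 8) & 0xF, (entry >> 4) & 0xF, entry & 0xF
    # Replicate each 4-bit value into the low nibble so 0xF maps to 0xFF.
    return (r << 4 | r, g << 4 | g, b << 4 | b)

print(rgb12_to_rgb24(0xF80))   # (255, 136, 0): full red, mid green, no blue
```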
https://en.wikipedia.org/wiki/Upper%20memory%20area
In DOS memory management, the upper memory area (UMA) is the memory between the addresses of 640 KB and 1024 KB (0xA0000–0xFFFFF) in an IBM PC or compatible. IBM reserved the uppermost 384 KB of the 8088 CPU's 1024 KB address space for BIOS ROM, Video BIOS, Option ROMs, video RAM, RAM on peripherals, memory-mapped I/O, and obsoleted ROM BASIC. However, even with video RAM, the ROM BIOS, the Video BIOS, the Option ROMs, and I/O ports for peripherals, much of this 384 KB of address space was unused. As the 640 KB memory restriction became ever more of an obstacle, techniques were found to fill the empty areas with RAM. These areas were referred to as upper memory blocks (UMBs). Usage The next stage in the evolution of DOS was for the operating system to use upper memory blocks (UMBs) and the high memory area (HMA). This occurred with the release of DR DOS 5.0 in 1990. DR DOS' built-in memory manager, EMM386.EXE, could perform most of the basic functionality of QEMM and comparable programs. The advantage of DR DOS 5.0 over the combination of an older DOS plus QEMM was that the DR DOS kernel itself and almost all of its data structures could be loaded into high memory. This left virtually all the base memory free, allowing configurations with up to 620 KB out of 640 KB free. Configuration was not automatic - free UMBs had to be identified by hand, manually included in the line that loaded EMM386 from CONFIG.SYS, and then drivers and so on had to be manually loaded into UMBs from CONFIG.SYS and AUTOEXEC.BAT. This configuration was not a trivial process. As it was largely automated by the installation program of QEMM, this program survived on the market; indeed, it worked well with DR DOS' own HMA and UMB support and went on to be one of the best-selling utilities for the PC. This functionality was copied by Microsoft with the release of MS-DOS 5.0 in June 1991. Eventually, even more DOS data structures were moved out of conventional memory, allowing up to 631 KB ou
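A quick sanity check of the address arithmetic in the passage (the hex bounds are from the text; the rest is simple arithmetic):

```python
# The upper memory area spans 0xA0000-0xFFFFF, i.e. the region between
# the 640 KB and 1024 KB boundaries of the 8088's address space.
UMA_START, UMA_END = 0xA0000, 0xFFFFF
assert UMA_START == 640 * 1024
assert UMA_END == 1024 * 1024 - 1
print((UMA_END - UMA_START + 1) // 1024, "KB reserved above conventional memory")  # 384
```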
https://en.wikipedia.org/wiki/Polygene
A polygene is a member of a group of non-epistatic genes that interact additively to influence a phenotypic trait, thus contributing to multiple-gene inheritance (polygenic inheritance, multigenic inheritance, quantitative inheritance), a type of non-Mendelian inheritance, as opposed to single-gene inheritance, which is the core notion of Mendelian inheritance. The term "polygene" is usually used to refer to a hypothetical gene, as it is often difficult to distinguish the effect of an individual gene from the effects of other genes and the environment on a particular phenotype. Advances in statistical methodology and high-throughput sequencing are, however, allowing researchers to locate candidate genes for the trait. In the case that such a gene is identified, it is referred to as a quantitative trait locus (QTL). These genes are generally pleiotropic as well. The genes that contribute to type 2 diabetes are thought to be mostly polygenes. In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth. Traits with polygenic determinism correspond to the classical quantitative characters, as opposed to the qualitative characters with monogenic or oligogenic determinism. In essence, instead of two options, such as freckles or no freckles, there are many variations, like the color of skin, hair, or even eyes. Overview A polygenic locus is any individual locus which is included in the system of genes responsible for the genetic component of variation in a quantitative (polygenic) character. Allelic substitutions contribute to the variance in a specified quantitative character. A polygenic locus may be either a single or complex genetic locus in the conventional sense, i.e., either a single gene or a closely linked block of functionally related genes. In the modern sense, the inheritance mode of polygenic patterns is called polygenic inheritance, whose main properties may be summarized as foll
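The additive model described above can be illustrated with a toy simulation; the locus count, allele frequency, and effect sizes below are illustrative assumptions, not empirical values:

```python
import random

# Toy sketch of additive polygenic inheritance: many loci, each contributing
# a small additive effect, plus environmental noise, yield the continuous,
# approximately normal trait distributions typical of quantitative characters.
N_LOCI, N_INDIVIDUALS = 100, 10_000

def phenotype() -> float:
    # Each diploid locus carries 0, 1, or 2 copies of a "+" allele (freq 0.5),
    # each copy adding one unit to the trait; environment adds Gaussian noise.
    genetic = sum(random.randint(0, 1) + random.randint(0, 1) for _ in range(N_LOCI))
    return genetic + random.gauss(0, 3)

values = [phenotype() for _ in range(N_INDIVIDUALS)]
mean = sum(values) / len(values)
print(f"mean ~ {mean:.1f} (expected ~ {N_LOCI}); the distribution is roughly normal")
```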
https://en.wikipedia.org/wiki/NEC%20SX
NEC SX describes a series of vector supercomputers designed, manufactured, and marketed by NEC. This computer series is notable for providing the first computer to exceed 1 gigaflop, as well as the fastest supercomputer in the world in 1992–1993 and again in 2002–2004. The current model, as of 2018, is the SX-Aurora TSUBASA. History The first models, the SX-1 and SX-2, were announced in April 1983, and released in 1985. The SX-2 was the first computer to exceed 1 gigaflop. The SX-1 and SX-1E were less powerful models offered by NEC. The SX-3 was announced in 1989, and shipped in 1990. The SX-3 allows parallel computing using both SIMD and MIMD. It also switched from the ACOS-4 based SX-OS to the AT&T System V UNIX-based SUPER-UX operating system. In 1992 an improved variant, the SX-3R, was announced. The SX-3/44 variant was the fastest computer in the world in 1992–1993 on the TOP500 list. It had LSI integrated circuits with 20,000 gates per IC and a per-gate delay time of 70 picoseconds; it could house 4 arithmetic processors, with up to 4 sharing the same main memory, and several processors together could achieve up to 22 GFLOPS of performance, with 1.37 GFLOPS from a single processor. 100 LSI ICs were housed in a single multi-chip module to achieve 2 million gates per module. The modules were water-cooled. The SX-4 series was announced in 1994, and first shipped in 1995. Since the SX-4, SX series supercomputers have been constructed in a doubly parallel manner. A number of central processing units (CPUs) are arranged into a parallel vector processing node. These nodes are then installed in a regular SMP arrangement. The SX-5 was announced and shipped in 1998, with the SX-6 following in 2001, and the SX-7 in 2002. Starting in 2001, Cray marketed the SX-5 and SX-6 exclusively in the US, and non-exclusively elsewhere for a short time. The Earth Simulator, built from SX-6 nodes, was the fastest supercomputer from June 2002 to June 2004 on the LINPACK benchmark
https://en.wikipedia.org/wiki/List%20of%20backup%20software
This is a list of notable backup software that performs data backups. Archivers, transfer protocols, and version control systems are often used for backups, but only software focused on backup is listed here. See Comparison of backup software for features.

Free and open-source software

Commercial and closed-source software

Defunct software

See also

Comparison of file synchronization software
Comparison of online backup services
Data recovery
File synchronization
List of data recovery software
Remote backup service
Tape management system

Notes
https://en.wikipedia.org/wiki/Bevel
A bevelled edge (UK) or beveled edge (US) is an edge of a structure that is not perpendicular to the faces of the piece. The words bevel and chamfer overlap in usage; in general usage they are often interchanged, while in technical usage they may sometimes be differentiated as shown in the image at right. A bevel is typically used to soften the edge of a piece for the sake of safety, wear resistance, or aesthetics; or to facilitate mating with another piece.

Applications

Cutting tools
Most cutting tools have a bevelled edge, which is apparent when one examines the grind. Bevel angles can be duplicated using a sliding T-bevel.

Graphic design
Typographic bevels are shading and artificial shadows that emulate the appearance of a 3-dimensional letter. The bevel is a relatively common effect in graphic editors such as Photoshop. As such, it is in widespread use in mainstream logos and other design elements.

Glass and mirrors
Bevelled edges are a common aesthetic nicety added to window panes and mirrors.

Geology
Geologists refer to any slope of land into a stratum of different elevation as a bevel.

Sports
In waterskiing, a bevel is the transition area between the side of the ski and the bottom of the ski. Beginners tend to prefer sharp bevels, which allow the ski to glide on the water surface. In disc golf, the 'beveled edge' was patented in 1983 by Dave Dunipace, who founded Innova Champion Discs. This element transformed the Frisbee into the farther-flying golf discs the sport uses today.

Cards
With a deck of cards, the top portion can be slid back so that the back of the deck is at an angle, a technique used in card tricks.

Semiconductor wafers
In the semiconductor industry, wafers have two typical edge types: a slanted beveled shape or a rounded bullet shape. The edges on the beveled types are called the bevel region, and they are typically ground at a 22-degree angle. While it is not possible to create a complete and functional die with the bevel material
https://en.wikipedia.org/wiki/Shade%20avoidance
Shade avoidance is a set of responses that plants display when they are subjected to the shade of another plant. It often includes elongation, altered flowering time, increased apical dominance and altered partitioning of resources. This set of responses is collectively called the shade-avoidance syndrome (SAS). Shade responses display varying strength along a continuum: most plants are neither extreme shade avoiders nor extreme shade tolerators, but possess a combination of the two strategies, which helps adapt them to their environment. The ability to perceive and respond to shade plays a very important role in all plants: they are sessile by nature, and access to photosynthetically active radiation is essential for plant nutrition and growth. In addition, the time at which a plant starts to flower is affected by the amount of light that is available. Over the past few decades, major increases in grain yield have come largely through increasing planting densities. As planting densities increase, so does the proportion of far-red light in the canopy. Thus, it is likely that plant breeders have selected for lines with reduced SAS in their efforts to produce high yields at high density.

Sensing shade

Plants can tell the difference between the shade of an inanimate object (e.g. a rock) and the shade of another plant, as well as the presence of nearby plants that may compete with and shade it in the future. In the shade of a plant, far-red light is present at a higher irradiance than red light, as a result of the absorption of the red light by the pigments involved in photosynthesis, while a nearby plant produces an intermediate ratio. This is known as far-red enrichment. Phytochrome can be used to measure the ratio of far-red to red light, and thus to detect whether the plant is in the shade of another plant, so it can alter its growth strategy accordingly (photomorphogenesis). In Arabidopsis, phytochrome B is the predominant photoreceptor that regulates SAS. Phytochromes
https://en.wikipedia.org/wiki/Aptamer
Aptamers are short sequences of artificial DNA, RNA, XNA, or peptide that bind a specific target molecule, or family of target molecules. They exhibit a range of affinities (KD in the pM to μM range), with variable levels of off-target binding and are sometimes classified as chemical antibodies. Aptamers and antibodies can be used in many of the same applications, but the nucleic acid-based structure of aptamers, which are mostly oligonucleotides, is very different from the amino acid-based structure of antibodies, which are proteins. This difference can make aptamers a better choice than antibodies for some purposes (see antibody replacement). Aptamers are used in biological lab research and medical tests. If multiple aptamers are combined into a single assay, they can measure large numbers of different proteins in a sample. They can be used to identify molecular markers of disease, or can function as drugs, drug delivery systems and controlled drug release systems. They also find use in other molecular engineering tasks. Most aptamers originate from SELEX, a family of test-tube experiments for finding useful aptamers in a massive pool of different DNA sequences. This process is much like natural selection, directed evolution or artificial selection. In SELEX, the researcher repeatedly selects for the best aptamers from a starting DNA library made of about a quadrillion different randomly generated pieces of DNA or RNA. After SELEX, the researcher might mutate or change the chemistry of the aptamers and do another selection, or might use rational design processes to engineer improvements. Non-SELEX methods for discovering aptamers also exist. Researchers optimize aptamers to achieve a variety of beneficial features. The most important feature is specific and sensitive binding to the chosen target. When aptamers are exposed to bodily fluids, as in serum tests or aptamer therapeutics, it is often important for them to resist digestion by DNA- and RNA-destroying pr
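The select-and-amplify loop of SELEX can be caricatured in a few lines of code. The sketch below is a toy model only: the affinity function is an invented stand-in (real SELEX measures binding at the bench, not from the sequence), and the pool size, retention fraction, and mutation rate are arbitrary.

import random

BASES = "ACGU"

def affinity(seq, motif="GCAU"):
    # Invented surrogate for measured binding: count a target motif.
    return seq.count(motif)

def selex_round(pool, keep=0.1, copies=10, mut_rate=0.01):
    pool = sorted(pool, key=affinity, reverse=True)   # partition by "binding"
    survivors = pool[: max(1, int(len(pool) * keep))]
    amplified = []                                    # "PCR" with copying errors
    for s in survivors:
        for _ in range(copies):
            amplified.append("".join(
                random.choice(BASES) if random.random() < mut_rate else b
                for b in s))
    return amplified

pool = ["".join(random.choices(BASES, k=40)) for _ in range(1000)]
for _ in range(8):
    pool = selex_round(pool)
print(max(affinity(s) for s in pool))  # best motif count rises over rounds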
https://en.wikipedia.org/wiki/Weil%E2%80%93Ch%C3%A2telet%20group
In arithmetic geometry, the Weil–Châtelet group or WC-group of an algebraic group such as an abelian variety A defined over a field K is the abelian group of principal homogeneous spaces for A, defined over K. Lang and Tate named it for François Châtelet, who introduced it for elliptic curves, and André Weil, who introduced it for more general groups. It plays a basic role in the arithmetic of abelian varieties, in particular for elliptic curves, because of its connection with infinite descent. It can be defined directly from Galois cohomology as $H^1(G_K, A)$, where $G_K$ is the absolute Galois group of K. It is of particular interest for local fields and global fields, such as algebraic number fields. For K a finite field, it was proved that the Weil–Châtelet group is trivial for elliptic curves, and Lang proved that it is trivial for any connected algebraic group.

See also

The Tate–Shafarevich group of an abelian variety A defined over a number field K consists of the elements of the Weil–Châtelet group that become trivial in all of the completions of K.

The Selmer group, named after Ernst S. Selmer, of A with respect to an isogeny $f$ of abelian varieties is a related group which can be defined in terms of Galois cohomology as

$$\mathrm{Sel}^{(f)}(A/K) = \bigcap_v \ker\left(H^1(G_K, A[f]) \to H^1(G_{K_v}, A_v[f]) / \operatorname{im}(\kappa_v)\right),$$

where $A_v[f]$ denotes the f-torsion of $A_v$ and $\kappa_v$ is the local Kummer map $B(K_v)/f(A(K_v)) \to H^1(G_{K_v}, A_v[f])$.
https://en.wikipedia.org/wiki/Selmer%20group
In arithmetic geometry, the Selmer group, named in honor of the work of Ernst Sejersted Selmer by John William Scott Cassels, is a group constructed from an isogeny of abelian varieties.

The Selmer group of an isogeny

The Selmer group of an abelian variety A with respect to an isogeny f : A → B of abelian varieties can be defined in terms of Galois cohomology as

$$\mathrm{Sel}^{(f)}(A/K) = \bigcap_v \ker\left(H^1(G_K, A[f]) \to H^1(G_{K_v}, A_v[f]) / \operatorname{im}(\kappa_v)\right),$$

where $A_v[f]$ denotes the f-torsion of $A_v$ and $\kappa_v$ is the local Kummer map $B(K_v)/f(A(K_v)) \to H^1(G_{K_v}, A_v[f])$. Geometrically, the principal homogeneous spaces coming from elements of the Selmer group have $K_v$-rational points for all places v of K. The Selmer group is finite. This implies that the part of the Tate–Shafarevich group killed by f is finite, due to the following exact sequence:

0 → B(K)/f(A(K)) → Sel(f)(A/K) → Ш(A/K)[f] → 0.

The Selmer group in the middle of this exact sequence is finite and effectively computable. This implies the weak Mordell–Weil theorem, that its subgroup B(K)/f(A(K)) is finite. There is a notorious problem about whether this subgroup can be effectively computed: there is a procedure for computing it that will terminate with the correct answer if there is some prime p such that the p-component of the Tate–Shafarevich group is finite. It is conjectured that the Tate–Shafarevich group is in fact finite, in which case any prime p would work. However, if (as seems unlikely) the Tate–Shafarevich group has an infinite p-component for every prime p, then the procedure may never terminate. Ralph Greenberg has generalized the notion of Selmer group to more general p-adic Galois representations and to p-adic variations of motives in the context of Iwasawa theory.

The Selmer group of a finite Galois module

More generally, one can define the Selmer group of a finite Galois module M (such as the kernel of an isogeny) as the elements of $H^1(G_K, M)$ that have images inside certain given subgroups of $H^1(G_{K_v}, M)$.
https://en.wikipedia.org/wiki/Electronic%20organizer
An electronic organizer (or electric organizer) is a small calculator-sized computer, often with a built-in diary application and other functions such as an address book and calendar, replacing paper-based personal organizers. Typically, it has a small alphanumeric keypad and an LCD screen of one, two, or three lines. The electronic diary or organizer was invented in 1975 by Indian businessman Satyan Pitroda, who is regarded as one of the earliest pioneers of hand-held computing because of this invention. Electronic organizers were very popular, especially with businessmen, during the 1990s, but because of the advent of palmtop PCs in the 1990s, personal digital assistants in the 2000s, and smartphones in the 2010s, all of which have a larger set of features, electronic organizers are today mostly of interest for research purposes. One of the leading research topics is the study of how such electronics can help people with mental disabilities in their daily life. Electronic organizers have more recently been used to support people with Alzheimer's disease with a visual representation of a schedule.

Casio digital diary

Casio digital diaries were produced by Casio in the early and mid 1990s, but have since been entirely superseded by mobile phones and PDAs.

Other electronic organizers

While Casio was a major role player in the field of electronic organizers, there were many different ideas, patent requests, and manufacturers of electronic organizers. Rolodex, widely known for their index card holders in the 1980s; Sharp Electronics, mostly known for their printers and audio-visual equipment; and Royal Electronics were all large contributors to the electronic organizer in its heyday.

Features

Telephone directory
Schedule keeper: keep track of appointments.
Memo function: store text data such as price lists, airplane schedules, and more.
To-do list: keep track of daily tasks, checking off items as you complete them.
https://en.wikipedia.org/wiki/Bidirectional%20Forwarding%20Detection
Bidirectional Forwarding Detection (BFD) is a network protocol that is used to detect faults between two routers or switches connected by a link. It provides low-overhead detection of faults even on physical media that don't support failure detection of any kind, such as Ethernet, virtual circuits, tunnels and MPLS label-switched paths. BFD establishes a session between two endpoints over a particular link. If more than one link exists between two systems, multiple BFD sessions may be established to monitor each one of them. The session is established with a three-way handshake, and is torn down the same way. Authentication may be enabled on the session; a choice of simple password, MD5 or SHA1 authentication is available. BFD does not have a discovery mechanism; sessions must be explicitly configured between endpoints. BFD may be used on many different underlying transport mechanisms and layers, and operates independently of all of these. Therefore, it needs to be encapsulated by whatever transport it uses. For example, monitoring MPLS LSPs involves piggybacking session establishment on LSP-ping packets. Protocols that support some form of adjacency setup, such as OSPF, IS-IS, BGP or RIP, may also be used to bootstrap a BFD session. These protocols may then use BFD to receive faster notification of failing links than would normally be possible using the protocol's own keepalive mechanism. A session may operate in one of two modes: asynchronous mode and demand mode. In asynchronous mode, both endpoints periodically send Hello packets to each other. If a number of those packets are not received, the session is considered down. In demand mode, no Hello packets are exchanged after the session is established; it is assumed that the endpoints have another way to verify connectivity to each other, perhaps on the underlying physical layer. However, either host may still send Hello packets if needed. Regardless of which mode is in use, either endpoint may also initiat
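The detection arithmetic in asynchronous mode is simple: a session is declared down once no Hello has arrived within the negotiated transmit interval multiplied by a detection multiplier. The sketch below (Python; the class and field names are invented, and this is a caricature of the receive-side timer logic, not the full BFD state machine of RFC 5880) illustrates the idea.

import time

class BfdPeer:
    def __init__(self, rx_interval=0.3, detect_mult=3):
        self.rx_interval = rx_interval     # negotiated interval, seconds
        self.detect_mult = detect_mult     # Hellos that may be missed
        self.last_hello = time.monotonic()
        self.up = True

    def on_hello(self):
        # Called whenever a Hello packet arrives from the peer.
        self.last_hello = time.monotonic()
        self.up = True

    def poll(self):
        # Call periodically: the session goes down once
        # interval * multiplier elapses without a Hello.
        if time.monotonic() - self.last_hello > self.rx_interval * self.detect_mult:
            self.up = False
        return self.up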
https://en.wikipedia.org/wiki/Secretary%20problem
The secretary problem demonstrates a scenario involving optimal stopping theory that is studied extensively in the fields of applied probability, statistics, and decision theory. It is also known as the marriage problem, the sultan's dowry problem, the fussy suitor problem, the googol game, and the best choice problem. The basic form of the problem is the following: imagine an administrator who wants to hire the best secretary out of n rankable applicants for a position. The applicants are interviewed one by one in random order. A decision about each particular applicant is to be made immediately after the interview. Once rejected, an applicant cannot be recalled. During the interview, the administrator gains information sufficient to rank the applicant among all applicants interviewed so far, but is unaware of the quality of yet unseen applicants. The question is about the optimal strategy (stopping rule) to maximize the probability of selecting the best applicant. If the decision can be deferred to the end, this can be solved by the simple maximum selection algorithm of tracking the running maximum (and who achieved it), and selecting the overall maximum at the end. The difficulty is that the decision must be made immediately. The shortest rigorous proof known so far is provided by the odds algorithm. It implies that the optimal win probability is always at least 1/e (where e is the base of the natural logarithm), and that the latter holds even in a much greater generality. The optimal stopping rule prescribes always rejecting the first ~n/e applicants that are interviewed and then stopping at the first applicant who is better than every applicant interviewed so far (or continuing to the last applicant if this never occurs). Sometimes this strategy is called the 1/e stopping rule, because the probability of stopping at the best applicant with this strategy is already about 1/e for moderate values of n. One reason why the secretary problem has received so much attention is tha
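The 1/e rule is easy to check empirically. The following simulation (a sketch, not a proof; n and the trial count are arbitrary) rejects the first n/e applicants and then takes the first record-breaker; its success rate comes out near 1/e ≈ 0.368.

import math
import random

def trial(n):
    ranks = random.sample(range(n), n)    # random interview order; n-1 is best
    cutoff = int(n / math.e)              # observe-only phase
    best_seen = max(ranks[:cutoff], default=-1)
    for r in ranks[cutoff:]:
        if r > best_seen:                 # first applicant beating all so far
            return r == n - 1
    return False                          # forced to take the last applicant

n, trials = 100, 100_000
print(sum(trial(n) for _ in range(trials)) / trials)  # about 0.37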
https://en.wikipedia.org/wiki/Hierarchical%20storage%20management
Hierarchical storage management (HSM), also known as tiered storage, is a data storage and data management technique that automatically moves data between high-cost and low-cost storage media. HSM systems exist because high-speed storage devices, such as solid-state drive arrays, are more expensive (per byte stored) than slower devices, such as hard disk drives, optical discs and magnetic tape drives. While it would be ideal to have all data available on high-speed devices all the time, this is prohibitively expensive for many organizations. Instead, HSM systems store the bulk of the enterprise's data on slower devices, and then copy data to faster disk drives when needed. The HSM system monitors the way data is used and makes best guesses as to which data can safely be moved to slower devices and which data should stay on the fast devices. HSM may also be used where more robust storage is available for long-term archiving, but this is slow to access. This may be as simple as an off-site backup, for protection against a building fire. HSM is a long-established concept, dating back to the beginnings of commercial data processing. The techniques used, though, have changed significantly as new technology becomes available, for both storage and for long-distance communication of large data sets. The scale of measures such as 'size' and 'access time' has changed dramatically. Despite this, many of the underlying concepts keep returning to favour years later, although at much larger or faster scales.

Implementation

In a typical HSM scenario, data which is frequently used is stored on a warm storage device, such as a solid-state disk (SSD). Data that is infrequently accessed is, after some time, migrated to a slower, high-capacity cold storage tier. If a user does access data which is on the cold storage tier, it is automatically moved back to warm storage. The advantage is that the total amount of stored data can be much larger than the capacity of the warm storage device,
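The core policy decision reduces to a rule over access metadata. Below is a minimal sketch (Python) under the simplifying assumption that last-access time alone drives demotion; real HSM products weigh size, cost, and access patterns, and note that atime is not reliably updated on filesystems mounted with noatime.

import os
import shutil
import time

COLD_AFTER = 90 * 24 * 3600       # demote files untouched for 90 days

def migrate(warm_dir, cold_dir):
    # One policy pass: warm -> cold. The reverse move (recall on access)
    # would be triggered elsewhere, e.g. by a filesystem hook.
    now = time.time()
    for name in os.listdir(warm_dir):
        path = os.path.join(warm_dir, name)
        if os.path.isfile(path) and now - os.path.getatime(path) > COLD_AFTER:
            shutil.move(path, os.path.join(cold_dir, name))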
https://en.wikipedia.org/wiki/Etymological%20fallacy
An etymological fallacy is an argument that a word is defined by its etymology, and that its customary usage is therefore incorrect.

History

Ancient Greeks taught that a word's meaning could be tracked across time, creating a distinction between formal and informal language, with a similar practice existing among ancient Vedic scholars. In modern linguistic anthropology, semiotics and semantics are intertwined, and reflective of a society's culture across time. The discipline operates on the principle that current meaning derives from previous meaning, which has embedded itself in the predicate of subsequent derivatives. Removing the etymological history thus removes necessary context on which present meaning is ontologically dependent.

Occurrence and examples

An etymological fallacy becomes possible when a word's meaning shifts over time from its original meaning. Such changes can include a narrowing or widening of scope or a change of connotation (amelioration or pejoration). In some cases, modern usage can shift to the point where the new meaning has no evident connection to its etymon. An example of a word with a potentially misleading etymology is antisemitism. The structure of the word suggests that it is about opposition to and hatred of Semitic peoples, but the term was coined in the 19th century to refer specifically to anti-Jewish beliefs and practices, and explicitly defined Jewish people as a racial class. Modern anthropology and evolutionary biology overwhelmingly reject the concept of race, and the term Semite has now become largely obsolete, with the notable exception of the classification of Semitic languages. An etymological fallacy emerges when a speaker asserts that antisemitism is not restricted to hatred of Jews, but rather must include opposition to all other Semitic peoples.

See also

False friends
https://en.wikipedia.org/wiki/Allium%20drummondii
Allium drummondii, also known as Drummond's onion, wild garlic and prairie onion, is a North American species of onion native to the southern Great Plains of North America. It is found in South Dakota, Kansas, Nebraska, Colorado, Oklahoma, Arkansas, Texas, New Mexico, and northeastern Mexico. Allium drummondii is a bulb-forming perennial. The flowers appear in April and May, in a variety of colors ranging from white to pink. It is common, and is considered invasive in some regions.

Uses

This species of Allium is gathered by Native Americans for its small edible bulbs. These contain a considerable amount of inulin, a non-reducing sugar that humans cannot digest. Because of this, these onions must be heated for a long period of time in order to convert the inulin into digestible sugars.
https://en.wikipedia.org/wiki/Allium%20canadense
Allium canadense, the Canada onion, Canadian garlic, wild garlic, meadow garlic and wild onion, is a perennial plant native to eastern North America from Texas to Florida to New Brunswick to Montana. The species is also cultivated in other regions as an ornamental and as a garden culinary herb. The plant is also reportedly naturalized in Cuba.

Description

Allium canadense has an edible bulb covered with a dense skin of brown fibers. The plant also has a strong onion odor and taste. Crow garlic (Allium vineale) is similar, but it has a strong garlic taste. The narrow, grass-like leaves originate near the base of the stem, which is topped by a dome-like cluster of star-shaped, pink or white flowers. These flowers may be partially or entirely replaced by bulblets. When present, the flowers are hermaphroditic (both male and female organs) and are pollinated by American bees (not honeybees) and other insects. It typically flowers in the spring and early summer, from May to June.

Varieties

The bulblet-producing form is classified as A. canadense var. canadense. It was once thought that the tree onion could be related to this plant, but it is now known that the cultivated tree onion is a hybrid between the common onion (A. cepa) and Welsh onion (A. fistulosum), classified as A. × proliferum. Five varieties of the species are widely recognized:

Allium canadense var. canadense - most pedicels replaced by bulbils, rarely producing fruits or seeds; most of the range of the species.
Allium canadense var. ecristatum Ownbey - tepals deep pink and rather thick; coastal plain of Texas.
Allium canadense var. fraseri Ownbey - flowers white; Great Plains from Texas to Kansas.
Allium canadense var. hyacinthoides (Bush) Ownbey - tepals pink, thin, flowers fragrant; northern Texas and southern Oklahoma.
Allium canadense var. lavandulare (Bates) Ownbey & Aase - flowers lavender, not fragrant; northern Arkansas to South Dakota.
Allium canadense var. mobilense (Regel) Ownbey - flowers lilac,
https://en.wikipedia.org/wiki/Scattering%20parameters
Scattering parameters or S-parameters (the elements of a scattering matrix or S-matrix) describe the electrical behavior of linear electrical networks when undergoing various steady state stimuli by electrical signals. The parameters are useful for several branches of electrical engineering, including electronics, communication systems design, and especially for microwave engineering. The S-parameters are members of a family of similar parameters, other examples being: Y-parameters, Z-parameters, H-parameters, T-parameters or ABCD-parameters. They differ from these, in the sense that S-parameters do not use open or short circuit conditions to characterize a linear electrical network; instead, matched loads are used. These terminations are much easier to use at high signal frequencies than open-circuit and short-circuit terminations. Contrary to popular belief, the quantities are not measured in terms of power (except in now-obsolete six-port network analyzers). Modern vector network analyzers measure amplitude and phase of voltage traveling wave phasors using essentially the same circuit as that used for the demodulation of digitally modulated wireless signals. Many electrical properties of networks of components (inductors, capacitors, resistors) may be expressed using S-parameters, such as gain, return loss, voltage standing wave ratio (VSWR), reflection coefficient and amplifier stability. The term 'scattering' is more common to optical engineering than RF engineering, referring to the effect observed when a plane electromagnetic wave is incident on an obstruction or passes across dissimilar dielectric media. In the context of S-parameters, scattering refers to the way in which the traveling currents and voltages in a transmission line are affected when they meet a discontinuity caused by the insertion of a network into the transmission line. This is equivalent to the wave meeting an impedance differing from the line's characteristic impedance. Although appli
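Several of the quantities listed above follow directly from the complex reflection coefficient S11. The conversions below are the usual textbook ones (a sketch in Python; the example S11 value is made up).

import cmath
import math

def return_loss_db(s11):
    return -20 * math.log10(abs(s11))     # RL = -20 log10 |S11|

def vswr(s11):
    rho = abs(s11)
    return (1 + rho) / (1 - rho)          # defined for |S11| < 1

s11 = cmath.rect(0.2, math.radians(45))   # |S11| = 0.2 at a 45 degree phase
print(return_loss_db(s11))                # ~13.98 dB
print(vswr(s11))                          # 1.5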
https://en.wikipedia.org/wiki/Neritic%20zone
The neritic zone (or sublittoral zone) is the relatively shallow part of the ocean above the drop-off of the continental shelf, approximately 200 metres (660 feet) in depth. From the point of view of marine biology it forms a relatively stable and well-illuminated environment for marine life, from plankton up to large fish and corals, while physical oceanography sees it as where the oceanic system interacts with the coast.

Definition (marine biology), context, extra terminology

In marine biology, the neritic zone, also called coastal waters, the coastal ocean or the sublittoral zone, refers to that zone of the ocean where sunlight reaches the ocean floor, that is, where the water is never so deep as to take it out of the photic zone. It extends from the low tide mark to the edge of the continental shelf, with a relatively shallow depth extending to about 200 meters (660 feet). Above the neritic zone lie the intertidal (or eulittoral) and supralittoral zones; below it the continental slope begins, descending from the continental shelf to the abyssal plain and the pelagic zone. Within the neritic zone, marine biologists also identify the following:

The infralittoral zone is the algal-dominated zone down to around five metres below the low water mark.
The circalittoral zone is the region beyond the infralittoral, which is dominated by sessile animals such as oysters.
The subtidal zone is the region of the neritic zone which is below the intertidal zone, and therefore never exposed to the atmosphere.

Physical characteristics

The neritic zone is covered with generally well-oxygenated water, receives plenty of sunlight, has a relatively stable temperature, low water pressure and stable salinity levels, making it highly suitable for photosynthetic life. There are several different areas or zones in the ocean. The area along the bottom of any body of water from the shore to the deepest abyss is called the benthic zone. It is where decomposed organic debris (also known as ocean 'snow') has sett
https://en.wikipedia.org/wiki/Micrococcus%20luteus
Micrococcus luteus is a Gram-positive to Gram-variable, nonmotile, tetrad-arranging, pigmented, saprotrophic coccus bacterium in the family Micrococcaceae. It is urease and catalase positive. An obligate aerobe, M. luteus is found in soil, dust, water and air, and as part of the normal microbiota of the mammalian skin. The bacterium also colonizes the human mouth, mucosae, oropharynx and upper respiratory tract. Micrococcus luteus is generally harmless but can become an opportunistic pathogen in immunocompromised people or those with indwelling catheters. It resists antibiotic treatment by slowing of major metabolic processes and induction of unique genes. Its genome has a high G + C content. Micrococcus luteus is coagulase negative, bacitracin susceptible, and forms bright yellow colonies on nutrient agar. Micrococcus luteus has been shown to survive in oligotrophic environments for extended periods of time. It has survived for at least 34,000 to 170,000 years, as assessed by 16S rRNA analysis, and possibly much longer. Its genome was sequenced in 2010 and is one of the smallest genomes of free-living Actinomycetota sequenced to date, comprising a single circular chromosome of 2,501,097 bp.

Novel codon usage

Micrococcus luteus was one of the early examples of novel codon usage, which led to the conclusion that the genetic code is not static, but evolves.

Classification

Micrococcus luteus was formerly known as Micrococcus lysodeikticus. In 2003, it was proposed that one strain of Micrococcus luteus, ATCC 9341, be reclassified as Kocuria rhizophila.

Ultraviolet absorption

Norwegian researchers in 2013 found a M. luteus strain that synthesizes a pigment that absorbs wavelengths of light from 350 to 475 nanometers. Exposure to these wavelengths of ultraviolet light has been correlated with an increased incidence of skin cancer, and scientists believe this pigment can be used to make a sunscreen that can protect against ultraviolet light.

Tests for identifica
https://en.wikipedia.org/wiki/Three-state%20logic
In digital electronics, a tri-state or three-state buffer is a type of digital buffer that has three stable states: a high output state, a low output state, and a high-impedance state. In the high-impedance state, the output of the buffer is disconnected from the output bus, allowing other devices to drive the bus without interference from the tri-state buffer. This can be useful in situations where multiple devices are connected to the same bus and need to take turns accessing it. Systems implementing three-state logic on their bus are known as a three-state bus or tri-state bus. Tri-state buffers are commonly used in bus-based systems, where multiple devices are connected to the same bus and need to share it. For example, in a computer system, multiple devices such as the CPU, memory, and peripherals may be connected to the same data bus. To ensure that only one device can transmit data on the bus at a time, each device is equipped with a tri-state buffer. When a device wants to transmit data, it activates its tri-state buffer, which connects its output to the bus and allows it to transmit data. When the transmission is complete, the device deactivates its tri-state buffer, which disconnects its output from the bus and allows another device to access the bus. Tri-state buffers can be implemented using gates, flip-flops, or other digital logic circuits. They are useful for reducing crosstalk and noise on a bus, and for allowing multiple devices to share the same bus without interference.

Uses

The basic concept of the third state, high impedance (Hi-Z), is to effectively remove the device's influence from the rest of the circuit. If more than one device is electrically connected to another device, putting an output into the Hi-Z state is often used to prevent short circuits, or one device driving high (logical 1) against another device driving low (logical 0). Three-state buffers can also be used to implement efficient multiplexers, especially those with large
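The bus-sharing rule lends itself to a toy software model (Python; names invented, and a real bus resolves levels electrically, with simultaneous opposite drivers being a genuine short circuit rather than an exception): each driver outputs 0, 1, or Hi-Z, and the bus level is defined only while exactly one driver is enabled.

Z = "Z"   # high impedance: the driver is electrically disconnected

def tristate(data, enable):
    return data if enable else Z

def resolve(outputs):
    # Combine driver outputs on one shared line; opposite non-Z
    # levels model bus contention.
    driving = [o for o in outputs if o is not Z]
    if len(set(driving)) > 1:
        raise ValueError("bus contention: opposite levels driven")
    return driving[0] if driving else Z   # an undriven bus floats

print(resolve([tristate(1, True), tristate(0, False)]))   # -> 1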
https://en.wikipedia.org/wiki/Cryptochrome
Cryptochromes (from the Greek κρυπτός χρώμα, "hidden colour") are a class of flavoproteins found in plants and animals that are sensitive to blue light. They are involved in the circadian rhythms and the sensing of magnetic fields in a number of species. The name cryptochrome was proposed as a portmanteau combining the chromatic nature of the photoreceptor, and the cryptogamic organisms on which many blue-light studies were carried out. The genes Cry1 and Cry2 encode the two cryptochrome proteins CRY1 and CRY2, respectively. Cryptochromes are classified into plant Cry and animal Cry. Animal Cry can be further categorized into insect type (Type I) and mammal-like (Type II). CRY1 is a circadian photoreceptor whereas CRY2 is a clock repressor which represses the Clock/Cycle (Bmal1) complex in insects and vertebrates. In plants, blue-light photoreception can be used to cue developmental signals. Besides chlorophylls, cryptochromes are the only proteins known to form photoinduced radical pairs in vivo. These appear to enable some animals to detect magnetic fields. Cryptochromes have been the focus of several current efforts in optogenetics. Employing transfection, initial studies on yeast have capitalized on the potential of CRY2 heterodimerization to control cellular processes, including gene expression, by light.

Discovery

Although Charles Darwin first documented plant responses to blue light in the 1880s, it was not until the 1980s that research began to identify the pigment responsible. In 1980, researchers discovered that the HY4 gene of the plant Arabidopsis thaliana was necessary for the plant's blue light sensitivity and, when the gene was sequenced in 1993, it showed high sequence homology with photolyase, a DNA repair protein activated by blue light. Reference sequence analysis of cryptochrome-1 isoform d shows two conserved domains with photolyase proteins. Isoform d nucleotide positions 6 through 491 show a conserved domain with deoxyribodipyrimidine photol
https://en.wikipedia.org/wiki/Phototropin
Phototropins are photoreceptor proteins (more specifically, flavoproteins) that mediate phototropism responses in various species of algae, fungi and higher plants. Phototropins can be found throughout the leaves of a plant. Along with cryptochromes and phytochromes, they allow plants to respond and alter their growth in response to the light environment. Phototropins may also be important for the opening of stomata and the movement of chloroplasts. These blue-light receptors are seen across the entire green plant lineage. When phototropins are hit with blue light, they induce a signal transduction pathway that alters the plant cells' functions in different ways. Phototropins are part of the phototropic sensory system in plants that causes various environmental responses. Phototropins specifically cause stems to bend towards light and stomata to open. Phototropins have also been shown to affect the movement of chloroplasts inside the cell. In addition, phototropins mediate the first changes in stem elongation in blue light prior to cryptochrome activation. Phototropin is also required for blue-light-mediated transcript destabilization of specific mRNAs in the cell. Phototropins are present in the guard cells.
https://en.wikipedia.org/wiki/N%2CN%27-Dicyclohexylcarbodiimide
N,N′-Dicyclohexylcarbodiimide (DCC) is an organic compound with the chemical formula (C6H11N)2C. It is a waxy white solid with a sweet odor. Its primary use is to couple amino acids during artificial peptide synthesis. The low melting point of this material allows it to be melted for easy handling. It is highly soluble in dichloromethane, tetrahydrofuran, acetonitrile and dimethylformamide, but insoluble in water.

Structure and spectroscopy

The C−N=C=N−C core of carbodiimides (N=C=N) is linear, being related to the structure of allene. The molecule has idealized C2 symmetry. The N=C=N moiety gives a characteristic IR spectroscopic signature at 2117 cm−1. The 15N NMR spectrum shows a characteristic shift of 275 ppm upfield of nitric acid, and the 13C NMR spectrum features a peak at about 139 ppm downfield from TMS.

Preparation

DCC is produced by the decarboxylation of cyclohexyl isocyanate using phosphine oxides as a catalyst:

2 C6H11NCO → (C6H11N)2C + CO2

Alternative catalysts for this conversion include the highly nucleophilic OP(MeNCH2CH2)3N.

Other methods

Of academic interest, palladium acetate, iodine, and oxygen can be used to couple cyclohexylamine and cyclohexyl isocyanide. Yields of up to 67% have been achieved using this route:

C6H11NC + C6H11NH2 + O2 → (C6H11N)2C + H2O

DCC has also been prepared from dicyclohexylurea using a phase-transfer catalyst. The disubstituted urea, an arenesulfonyl chloride, and potassium carbonate react in toluene in the presence of benzyl triethylammonium chloride to give DCC in 50% yield.

Reactions

Amide, peptide, and ester formation

DCC is a dehydrating agent for the preparation of amides, ketones, and nitriles. In these reactions, DCC hydrates to form dicyclohexylurea (DCU), a compound that is nearly insoluble in most organic solvents and insoluble in water. The majority of the DCU is thus readily removed by filtration, although the last traces can be difficult to eliminate from non-polar products. DCC
https://en.wikipedia.org/wiki/Arithmetic%20geometry
In mathematics, arithmetic geometry is roughly the application of techniques from algebraic geometry to problems in number theory. Arithmetic geometry is centered around Diophantine geometry, the study of rational points of algebraic varieties. In more abstract terms, arithmetic geometry can be defined as the study of schemes of finite type over the spectrum of the ring of integers.

Overview

The classical objects of interest in arithmetic geometry are rational points: sets of solutions of a system of polynomial equations over number fields, finite fields, p-adic fields, or function fields, i.e. fields that are not algebraically closed excluding the real numbers. Rational points can be directly characterized by height functions, which measure their arithmetic complexity. The structure of algebraic varieties defined over non-algebraically closed fields has become a central area of interest that arose with the modern abstract development of algebraic geometry. Over finite fields, étale cohomology provides topological invariants associated to algebraic varieties. p-adic Hodge theory gives tools to examine when cohomological properties of varieties over the complex numbers extend to those over p-adic fields.

History

19th century: early arithmetic geometry

In the early 19th century, Carl Friedrich Gauss observed that non-zero integer solutions to homogeneous polynomial equations with rational coefficients exist if non-zero rational solutions exist. In the 1850s, Leopold Kronecker formulated the Kronecker–Weber theorem, introduced the theory of divisors, and made numerous other connections between number theory and algebra. He then conjectured his "liebster Jugendtraum" ("dearest dream of youth"), a generalization that was later put forward by Hilbert in a modified form as his twelfth problem, which outlines a goal to have number theory operate only with rings that are quotients of polynomial rings over the integers.

Early-to-mid 20th century: algebraic developments
https://en.wikipedia.org/wiki/Functional%20design
Functional design is a paradigm used to simplify the design of hardware and software devices such as computer software and, increasingly, 3D models. A functional design assures that each modular part of a device has only one responsibility and performs that responsibility with the minimum of side effects on other parts. Functionally designed modules tend to have low coupling.

Advantages

The advantage for implementation is that if a software module has a single purpose, it will be simpler, and therefore easier and less expensive, to design and implement. Systems with functionally designed parts are easier to modify because each part does only what it claims to do. Since maintenance is more than 3/4 of a successful system's life, this feature is a crucial advantage. It also makes the system easier to understand and document, which simplifies training. The result is that the practical lifetime of a functional system is longer. In a system of programs, a functional module will be easier to reuse because it is less likely to have side effects that appear in other parts of the system.

Technique

The standard way to assure functional design is to review the description of a module. If the description includes conjunctions such as "and" or "or", then the design has more than one responsibility, and is therefore likely to have side effects. The responsibilities need to be divided into several modules in order to achieve a functional design.

Critiques and limits

Every computer system has parts that cannot be functionally pure because they exist to distribute CPU cycles or other resources to different modules. For example, most systems have an "initialization" section that starts up the modules. Other well-known examples are the interrupt vector table and the main loop. Some functions inherently have mixed semantics. For example, a function "move the car from the garage" inherently has a side effect of changing the "car position". In some cases, the mixed semantics c
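The review heuristic in the Technique section is mechanical enough to sketch in code (a toy illustration of the rule as stated, not an established lint tool): flag any module whose one-line description contains a coordinating conjunction.

import re

def has_multiple_responsibilities(description):
    # Per the heuristic above: "and"/"or" in a module description
    # suggests more than one responsibility.
    return re.search(r"\b(and|or)\b", description.lower()) is not None

print(has_multiple_responsibilities("Parses the configuration file"))       # False
print(has_multiple_responsibilities("Parses the config and opens the DB"))  # True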
https://en.wikipedia.org/wiki/Flag%20of%20Orkney
The Flag of Orkney was the winner of a public flag consultation in February and March 2007. In the consultation the people of Orkney were asked for their preferred design from a short list of five, all of which had been approved by the Court of the Lord Lyon. The chosen design was that of Duncan Tullock of Birsay, which polled 53% of the 200 votes cast by the public. The colours red and yellow are from the Scottish and Norwegian royal coats of arms, which both use yellow and red with a lion rampant. The flag symbolises the islands' Scottish and Norwegian heritage. The blue is taken from the flag of Scotland and also represents the sea and the maritime heritage of the islands.

Former flag of Orkney

The former flag of Orkney was adopted in 1995. In 2001, this flag, the traditional flag of St Magnus, was declined official recognition by the Lord Lyon, the heraldic authority of Scotland, due to its similarity with other national flags, as well as with the flag of the Kalmar Union.

Chronology

See also

Flag of Shetland
Nordic Cross Flag
Royal Standard of Scotland
Royal Standard of Norway
Flag of Norway
List of Scottish flags
Emblems of the Kalmar Union
https://en.wikipedia.org/wiki/Hydroxyl%20tagging%20velocimetry
Hydroxyl tagging velocimetry (HTV) is a velocimetry method used in humid air flows. The method is often used in high-speed combusting flows because the high velocity and temperature accentuate its advantages over similar methods. HTV uses a laser (often an argon fluoride excimer laser operating at ~193 nm) to dissociate the water in the flow into H + OH. Before entering the flow, optics are used to create a grid of laser beams. The water in the flow is dissociated only where beams of sufficient energy pass through it, thus creating a grid in the flow where the concentrations of hydroxyl (OH) are higher than in the surrounding flow. Another laser beam (at either ~248 nm or ~308 nm), in the form of a sheet, is also passed through the flow in the same plane as the grid. This laser beam is tuned to a wavelength that causes the hydroxyl molecules to fluoresce in the UV spectrum. The fluorescence is then captured by a charge-coupled device (CCD) camera. Using electronic timing methods, a picture of the grid can be captured at nearly the same instant that the grid is created. By delaying the pulse of the fluorescence laser and the camera shot, an image of the grid that has been displaced downstream can be captured. Computer programs are then used to compare the two images and determine the displacement of the grid. By dividing the displacement by the known time delay, the two-dimensional velocity field (in the plane of the grid) can be determined. Flow ratios, however, have been shown to affect impingement locations: increased air flow ratios can reduce the required combustor size by isolating reaction products solely within the secondary cavity. Other molecular tagging velocimetry (MTV) methods have used ozone (O3), excited oxygen and nitric oxide as the tag instead of hydroxyl. In the case of ozone the method is known as ozone tagging velocimetry, or OTV. OTV has been developed and tested in many room-air-temperature applications with very accurate tes
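The last step, velocity from grid displacement, is plain arithmetic once image processing has matched grid intersections between the two exposures. A sketch (Python; the pixel scale, time delay, and coordinates are invented for illustration):

def velocity_field(pts_t0, pts_t1, dt, m_per_px):
    # 2-D velocity at each matched intersection: displacement / time delay.
    return [((x1 - x0) * m_per_px / dt, (y1 - y0) * m_per_px / dt)
            for (x0, y0), (x1, y1) in zip(pts_t0, pts_t1)]

# One intersection moving 12 px downstream in 10 microseconds at 50 um/px:
print(velocity_field([(100.0, 80.0)], [(112.0, 80.0)], 10e-6, 50e-6))
# -> [(60.0, 0.0)], i.e. 60 m/s in x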
https://en.wikipedia.org/wiki/Anti-replay
Anti-replay is a sub-protocol of IPsec, the Internet Engineering Task Force (IETF) security architecture for IP. The main goal of anti-replay is to prevent attackers from injecting packets, or replaying captured packets, as they travel from a source to a destination. The anti-replay protocol uses a unidirectional security association in order to establish a secure connection between two nodes in the network. Once a secure connection is established, the anti-replay protocol uses packet sequence numbers to defeat replay attacks as follows: when the source sends a message, it adds a sequence number to its packet; the sequence number starts at 0 and is incremented by 1 for each subsequent packet. The destination maintains a 'sliding window' record of the sequence numbers of validated received packets; it rejects all packets which have a sequence number lower than the lowest in the sliding window (i.e. too old) or which already appear in the sliding window (i.e. duplicates/replays). Accepted packets, once validated, update the sliding window (displacing the lowest sequence number out of the window if it was already full).

See also

Cryptanalysis
Man-in-the-middle attack
Replay attack
Session ID
Transport Layer Security
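The receiver's window check described above fits in a few lines. A minimal sketch (Python; the window size and the set-based bookkeeping are illustrative, whereas RFC 4303 specifies the real IPsec algorithm with a fixed bitmap, and integrity must be verified before the window is updated):

class AntiReplayWindow:
    def __init__(self, size=64):
        self.size = size
        self.highest = -1      # highest validated sequence number so far
        self.seen = set()      # validated numbers inside the window

    def accept(self, seq):
        if seq <= self.highest - self.size:
            return False       # below the window: too old
        if seq in self.seen:
            return False       # duplicate / replay
        # A real implementation verifies the packet's integrity check
        # value here, before committing the number to the window.
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq
            self.seen = {s for s in self.seen if s > seq - self.size}
        return True

w = AntiReplayWindow()
print(w.accept(1), w.accept(2), w.accept(1))   # True True False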
https://en.wikipedia.org/wiki/IPSANET
IPSANET was a packet switching network written by I. P. Sharp Associates (IPSA). Operation began in May 1976. It initially used the IBM 3705 Communications Controller and Computer Automation LSI-2 computers as nodes. An Intel 80286-based node, called the Beta node, was added in 1987. The original purpose was to connect low-speed dumb terminals to a central time-sharing host in Toronto. It was soon modified to allow a terminal to connect to an alternate host running the SHARP APL software under license. Terminals were initially either 2741-type machines based on the 14.8 characters/s IBM Selectric typewriter or 30 character/s ASCII machines. Link speed was limited to 9600 bit/s until about 1984. Other services, including 2780/3780 Bisync support, remote printing, an X.25 gateway and SDLC pipelines, were added in the 1978 to 1984 era. There was no general-purpose data transport facility until the introduction of the Network Shared Variable Processor (NSVP) in 1984. This allowed APL programs running on different hosts to communicate via Shared Variables. The Beta node improved performance and provided new services not tied to APL. An X.25 interface was the most important of these; it allowed connection to a host which was not running SHARP APL. IPSANET allowed for the development of an early yet advanced e-mail service, 666 BOX, which also became a major product for some time, originally hosted on IPSA's system, and later sold to end users to run on their own machines. NSVP allowed these remote e-mail systems to exchange traffic. The network reached its maximum size of about 300 nodes before it was shut down in 1993.

External links

IPSANET Archives
https://en.wikipedia.org/wiki/Independence-friendly%20logic
Independence-friendly logic (IF logic; proposed by Jaakko Hintikka and Gabriel Sandu in 1989) is an extension of classical first-order logic (FOL) by means of slashed quantifiers of the form $(\exists v/V)$ and $(\forall v/V)$, where $V$ is a finite set of variables. The intended reading of $(\exists v/V)$ is "there is a $v$ which is functionally independent from the variables in $V$". IF logic allows one to express more general patterns of dependence between variables than those which are implicit in first-order logic. This greater level of generality leads to an actual increase in expressive power; the set of IF sentences can characterize the same classes of structures as existential second-order logic ($\Sigma^1_1$). For example, it can express branching quantifier sentences, such as a sentence expressing infinity in the empty signature; this cannot be done in FOL. First-order logic cannot, in general, express a branching pattern of dependency, in which one existentially quantified variable depends only on part of the universally quantified variables and another depends only on the remaining ones. IF logic is more general than branching quantifiers, for example in that it can express dependencies that are not transitive, such as in the quantifier prefix $\forall x \exists y (\exists z/\{x\})$, which expresses that $z$ depends on $y$, and $y$ depends on $x$, but $z$ does not depend on $x$. The introduction of IF logic was partly motivated by the attempt of extending the game semantics of first-order logic to games of imperfect information. Indeed, a semantics for IF sentences can be given in terms of these kinds of games (or, alternatively, by means of a translation procedure to existential second-order logic). A semantics for open formulas cannot be given in the form of a Tarskian semantics; an adequate semantics must specify what it means for a formula to be satisfied by a set of assignments of common variable domain (a team) rather than satisfaction by a single assignment. Such a team semantics was developed by Hodges. Independence-friendly logic is translation equivalent, at the level of sentences, with a number of other logical systems based on team semantics, such as
https://en.wikipedia.org/wiki/Wilson%20quotient
The Wilson quotient W(p) is defined as:

W(p) = ((p − 1)! + 1) / p

If p is a prime number, the quotient is an integer by Wilson's theorem; moreover, if p is composite, the quotient is not an integer. If p divides W(p), it is called a Wilson prime. The integer values of W(p) are:

W(2) = 1
W(3) = 1
W(5) = 5
W(7) = 103
W(11) = 329891
W(13) = 36846277
W(17) = 1230752346353
W(19) = 336967037143579
...

Congruences are known expressing W(p) modulo p in terms of Bernoulli numbers $B_{k(p-1)}$, where $B_k$ is the k-th Bernoulli number; one such relation follows from another by subtraction after substituting suitable values of k.

See also

Fermat quotient
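The definition is direct to compute with exact integer arithmetic, and the Wilson-prime test is just W(p) ≡ 0 (mod p); a short sketch in Python:

from math import factorial

def wilson_quotient(p):
    q, r = divmod(factorial(p - 1) + 1, p)
    assert r == 0, "integral only for prime p (Wilson's theorem)"
    return q

for p in (2, 3, 5, 7, 11, 13, 17, 19):
    w = wilson_quotient(p)
    print(p, w, "<- Wilson prime" if w % p == 0 else "")
# Flags 5 and 13; the only other known Wilson prime is 563.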
https://en.wikipedia.org/wiki/ATA%20over%20Ethernet
ATA over Ethernet (AoE) is a network protocol developed by the Brantley Coile Company, designed for simple, high-performance access of block storage devices over Ethernet networks. It is used to build storage area networks (SANs) with low-cost, standard technologies.

Protocol description

AoE runs on layer 2 Ethernet. AoE does not use Internet Protocol (IP); it cannot be accessed over the Internet or other IP networks. In this regard it is more comparable to Fibre Channel over Ethernet than iSCSI. With fewer protocol layers, this approach makes AoE fast and lightweight. It also makes the protocol relatively easy to implement and offers linear scalability with high performance. The AoE specification is 12 pages compared with iSCSI's 257 pages.

AoE Header Format:

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  0 |               Ethernet Destination MAC Address                |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  4 | Ethernet Destination (cont)   | Ethernet Source MAC Address   |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
  8 |               Ethernet Source MAC Address (cont)              |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 12 |    Ethernet Type (0x88A2)     |  Ver  | Flags |     Error     |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 16 |            Major              |     Minor     |    Command    |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 20 |                              Tag                              |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 24 |                              Arg                              |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Ao
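The fixed 24-byte header shown above maps directly onto a struct unpack. The parser below is an illustrative sketch based on that layout (Python; network byte order assumed throughout):

import struct

def parse_aoe(frame: bytes):
    dst, src = frame[0:6], frame[6:12]
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != 0x88A2:
        raise ValueError("not an AoE frame")
    ver_flags, error = frame[14], frame[15]
    major, minor, command = struct.unpack("!HBB", frame[16:20])
    (tag,) = struct.unpack("!I", frame[20:24])
    return {
        "ver": ver_flags >> 4, "flags": ver_flags & 0x0F, "error": error,
        "major": major, "minor": minor,   # shelf / slot address
        "command": command, "tag": tag, "arg": frame[24:],
    }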
https://en.wikipedia.org/wiki/Depletion%20region
In semiconductor physics, the depletion region, also called the depletion layer, depletion zone, junction region, space charge region or space charge layer, is an insulating region within a conductive, doped semiconductor material where the mobile charge carriers have diffused away, or have been forced away by an electric field. The only elements left in the depletion region are ionized donor or acceptor impurities. This region of uncovered positive and negative ions is called the depletion region due to the depletion of carriers in this region, leaving none to carry a current. Understanding the depletion region is key to explaining modern semiconductor electronics: diodes, bipolar junction transistors, field-effect transistors, and variable-capacitance diodes all rely on depletion region phenomena.

Formation in a p–n junction

A depletion region forms instantaneously across a p–n junction. It is most easily described when the junction is in thermal equilibrium or in a steady state: in both of these cases the properties of the system do not vary in time; such states are called dynamic equilibrium. Electrons and holes diffuse into regions with lower concentrations of them, much as ink diffuses into water until it is uniformly distributed. By definition, the N-type semiconductor has an excess of free electrons (in the conduction band) compared to the P-type semiconductor, and the P-type has an excess of holes (in the valence band) compared to the N-type. Therefore, when N-doped and P-doped semiconductors are placed together to form a junction, free electrons in the N-side conduction band migrate (diffuse) into the P-side conduction band, and holes in the P-side valence band migrate into the N-side valence band. Following transfer, the diffused electrons come into contact with holes and are eliminated by recombination in the P-side. Likewise, the diffused holes recombine with free electrons and are eliminated in the N-side. The net result is that the diffused elect
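For the idealized abrupt junction, solving Poisson's equation across the region of uncovered ions gives the standard first-approximation results quoted in device-physics textbooks (stated here without derivation; the article above describes the physics only qualitatively):

$$W=\sqrt{\frac{2\varepsilon_s V_{bi}}{q}\left(\frac{1}{N_A}+\frac{1}{N_D}\right)},\qquad V_{bi}=\frac{kT}{q}\ln\frac{N_A N_D}{n_i^2},$$

where $\varepsilon_s$ is the semiconductor permittivity, $q$ the elementary charge, $N_A$ and $N_D$ the acceptor and donor concentrations, and $n_i$ the intrinsic carrier concentration. The width W shrinks as doping increases, and a reverse bias $V_R$ widens it, with $V_{bi}$ replaced by $V_{bi}+V_R$.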
https://en.wikipedia.org/wiki/Belief%E2%80%93desire%E2%80%93intention%20software%20model
The belief–desire–intention software model (BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.

Overview

In order to achieve this separation, the BDI software model implements the principal aspects of Michael Bratman's theory of human practical reasoning (also referred to as Belief-Desire-Intention, or BDI). That is to say, it implements the notions of belief, desire and (in particular) intention, in a manner inspired by Bratman. For Bratman, desire and intention are both pro-attitudes (mental attitudes concerned with action). He identifies commitment as the distinguishing factor between desire and intention, noting that it leads to (1) temporal persistence in plans and (2) further plans being made on the basis of those to which it is already committed. The BDI software model partially addresses these issues. Temporal persistence, in the sense of explicit reference to time, is not explored. The hierarchical nature of plans is more easily implemented: a plan consists of a number of steps, some of which may invoke other plans. The hierarchical definition of plans itself implies a kind of temporal persistence, since the overarching plan remains in effect while subsidiary plans are being executed. An important aspect of the BDI software model (in terms of its research relevance) is the ex
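The plan-selection versus plan-execution separation is usually realized as an interpreter loop: revise beliefs from percepts, generate options, commit to intentions, then execute one step of a current plan before deliberating again. A schematic sketch follows (Python; the method names and the trivial option-generation rule are invented, and real BDI platforms such as Jason or JACK have their own APIs):

class BdiAgent:
    def __init__(self, plan_library):
        self.beliefs = {}
        self.intentions = []               # stack of partially executed plans
        self.plan_library = plan_library   # maps a desire to a plan (list of steps)

    def options(self):
        # Invented rule: desire anything currently believed "wanted".
        # Deliberation strategies are where real BDI systems differ.
        return [k for k, v in self.beliefs.items() if v == "wanted"]

    def step(self, percepts):
        self.beliefs.update(percepts)              # belief revision
        for desire in self.options():              # commit: desire -> intention
            plan = self.plan_library.get(desire)
            if plan:
                self.intentions.append(list(plan))
        if self.intentions:                        # execute one plan step only,
            intention = self.intentions[-1]        # then return to deliberation
            action = intention.pop(0)
            if not intention:
                self.intentions.pop()
            return action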
https://en.wikipedia.org/wiki/Gruit
Gruit (alternately grut or gruyt) is a herb mixture used for bittering and flavouring beer, popular before the extensive use of hops. The terms gruit and grut ale may also refer to the beverage produced using gruit. Historically, gruit is the term used in the historic Low Countries and the Holy Roman Empire (westernmost Germany). Today, however, gruit is a colloquial term for any beer seasoned with gruit-like herbs.

Historical context

The word "gruit" stems from an area now in the Netherlands, Belgium, and northwestern Germany. The word refers to the herb mixture originally used to enhance the flavour of beers before the general use of hops. The earliest reference to gruit dates from the late 10th century. During the 11th century, the Holy Roman Emperor Henry IV awarded monopoly privileges of the production and sale of gruit (Grutgerechtigkeit, or "grut licence") to different local authorities, and as such was a de facto tax on beer. The control of gruit restricted entry to local beer markets—brewers in a diocese were not allowed to sell beer brewed without the local gruit, and imports were similarly restricted. The gruit licensing system also exerted control over brewers within a city, as the holder of a Grutgerechtigkeit could calculate how much beer each brewer could make based on how much gruit was sold to them. Outside the area where the gruit monopoly applied, other countries and regions produced ales containing spices, but they were not called gruit. For instance, some traditional types of unhopped beer such as sahti in Finland, which is spiced with juniper berries and twigs, have survived the advent of hops. Specific gruit recipes were often guarded secrets. In 1420, the town council of Cologne "...directed a knowledgeable woman to teach a certain brewer, and no one else, how to make [gruit]..." Although largely replaced by hops in the 14th and 15th centuries, gruit beer was locally produced in Westphalia until at least the 17th century. In both the area
https://en.wikipedia.org/wiki/Sagebrush%20steppe
Sagebrush steppe is a type of shrub-steppe, a plant community characterized by the presence of shrubs, and usually dominated by sagebrush, any of several species in the genus Artemisia. This ecosystem is found in the Intermountain West in the United States. The most common sagebrush species in the sagebrush steppe in most areas is big sagebrush (Artemisia tridentata). Others include three-tip sagebrush (Artemisia tripartita) and low sagebrush (Artemisia arbuscula). Sagebrush is found alongside many species of grasses. Sagebrush steppe is a diverse habitat, with more than 350 recorded vertebrate species. It is also open rangeland for livestock, a recreation area, and a source of water in otherwise arid regions. It is key habitat for declining flora and fauna species, such as greater sage-grouse (Centrocercus urophasianus) and pygmy rabbit (Brachylagus idahoensis). Sagebrush steppe is a threatened ecosystem in many regions. It was once very widespread in the regions that form the Intermountain West, such as the Great Basin and Colorado Plateau. It has become fragmented and degraded by a number of forces. The steppe has been overgrown with introduced species and, in places, has changed to an ecosystem resembling pine and juniper woodland. This has changed the fire regime of the landscape, increasing fuel loads and the chance of unnaturally severe wildfires. Cheatgrass (Bromus tectorum) is also an important introduced plant species that increases fire risk in this ecosystem. Other forces leading to these habitat changes include fire suppression and overgrazing of livestock. Besides severe fire, consequences of the breakdown of sagebrush steppe include increased erosion of the land and sedimentation in local waterways, decreased water quality, decreased quality of forage available for livestock, and degradation of habitat for wildlife and game animals.
https://en.wikipedia.org/wiki/Behavioral%20sink
"Behavioral sink" is a term invented by ethologist John B. Calhoun to describe a collapse in behavior which can result from overcrowding. The term and concept derive from a series of over-population experiments Calhoun conducted on Norway rats between 1958 and 1962. In the experiments, Calhoun and his researchers created a series of "rat utopias" – enclosed spaces in which rats were given unlimited access to food and water, enabling unfettered population growth. Calhoun coined the term "behavioral sink" in his February 1, 1962 report in an article titled "Population Density and Social Pathology" in Scientific American on the rat experiment. He would later perform similar experiments on mice, from 1968 to 1972. Calhoun's work became used as an animal model of societal collapse, and his study has become a touchstone of urban sociology and psychology in general. Experiments In the 1962 study, Calhoun described the behavior as follows: Calhoun's early experiments with rats were carried out on farmland at Rockville, Maryland, starting in 1947. While Calhoun was working at the National Institute of Mental Health (NIMH) in 1954, he began numerous experiments with rats and mice. During his first tests, he placed around 32 to 56 rats in a cage in a barn in Montgomery County. He separated the space into four rooms. Every room was specifically created to support a dozen matured brown Norwegian rats. Rats could maneuver between the rooms by using the ramps. Since Calhoun provided unlimited resources, such as water, food, and also protection from predators as well as from disease and weather, the rats were said to be in "rat utopia" or "mouse paradise", another psychologist explained. Following his earlier experiments with rats, Calhoun later created his "Mortality-Inhibiting Environment for Mice" in 1968: a cage for mice with food and water replenished to support any increase in population, which took his experimental approach to its limits. In his most famous experimen
https://en.wikipedia.org/wiki/Nordic%20cross%20flag
A Nordic cross flag is a flag bearing the design of the Nordic or Scandinavian cross, a cross symbol in a rectangular field, with the centre of the cross shifted towards the hoist. All independent Nordic countries have adopted such flags in the modern period, and while the Nordic cross is named for its use in the national flags of the Nordic nations, the term is used universally by vexillologists, in reference not only to the flags of the Nordic countries but to other flags with similar designs. The sideways cross is also known as the Cross of Saint Philip the Apostle, who preached not in Scandinavia but in Greece, Phrygia and Syria. The cross design represents Christianity, and was first seen in the Dannebrog, the national flag of Denmark in the first half of the 13th century. The same design, but with a red Nordic cross on a yellow background, was used as the union flag during the Kalmar Union (1397 to 1523), and when that union fell apart in 1523 the same design, but with a yellow cross on a blue background (derived from the Swedish coat of arms adopted in 1442), was adopted as the national flag of Sweden, while Norway adopted its flag in 1821. From its adoption in the early 16th century the background of the flag of Sweden was dark blue; a new flag law adopted in 1906, after the dissolution of the union between Sweden and Norway, changed it to the lighter shade of blue used today. After gaining independence, the other Nordic countries adopted national flags of the same design, Iceland in 1915 and Finland in 1917. The Norwegian flag was the first Nordic cross flag with three colours. All Nordic flags may be flown as gonfalons as well. Flag formats Flags of the Nordic countries Some of these flags are historical. Also, flag proportions may vary between the different flags and sometimes even between different versions of the same flag. The Flag of Greenland is the only national flag of a Nordic country or territory without a No
https://en.wikipedia.org/wiki/Aquatic%20ecosystem
An aquatic ecosystem is an ecosystem found in and around a body of water, in contrast to land-based terrestrial ecosystems. Aquatic ecosystems contain communities of organisms—aquatic life—that are dependent on each other and on their environment. The two main types of aquatic ecosystems are marine ecosystems and freshwater ecosystems. Freshwater ecosystems may be lentic (slow moving water, including pools, ponds, and lakes); lotic (faster moving water, for example streams and rivers); and wetlands (areas where the soil is saturated or inundated for at least part of the time). Types Marine ecosystems Marine coastal ecosystem Marine surface ecosystem Freshwater ecosystems Lentic ecosystem (lakes) Lotic ecosystem (rivers) Wetlands Functions Aquatic ecosystems perform many important environmental functions. For example, they recycle nutrients, purify water, attenuate floods, recharge ground water and provide habitats for wildlife. The biota of an aquatic ecosystem contribute to its self-purification, most notably microorganisms, phytoplankton, higher plants, invertebrates, fish, bacteria, protists, aquatic fungi, and more. These organisms are actively involved in multiple self-purification processes, including organic matter destruction and water filtration. It is crucial that aquatic ecosystems are reliably self-maintained, as they also provide habitats for species that reside in them. In addition to environmental functions, aquatic ecosystems are also used for human recreation, and are very important to the tourism industry, especially in coastal regions. They are also used for religious purposes, such as the worshipping of the Jordan River by Christians, and educational purposes, such as the usage of lakes for ecological study. Biotic characteristics (living components) The biotic characteristics are mainly determined by the organisms that occur there. For example, wetland plants may produce dense canopies that cover large areas of sediment—or snails or geese
https://en.wikipedia.org/wiki/Generalized%20method%20of%20moments
In econometrics and statistics, the generalized method of moments (GMM) is a generic method for estimating parameters in statistical models. Usually it is applied in the context of semiparametric models, where the parameter of interest is finite-dimensional, whereas the full shape of the data's distribution function may not be known, and therefore maximum likelihood estimation is not applicable. The method requires that a certain number of moment conditions be specified for the model. These moment conditions are functions of the model parameters and the data, such that their expectation is zero at the parameters' true values. The GMM method then minimizes a certain norm of the sample averages of the moment conditions, and can therefore be thought of as a special case of minimum-distance estimation. The GMM estimators are known to be consistent, asymptotically normal, and most efficient in the class of all estimators that do not use any extra information aside from that contained in the moment conditions. GMM was advocated by Lars Peter Hansen in 1982 as a generalization of the method of moments, introduced by Karl Pearson in 1894. However, these estimators are mathematically equivalent to those based on "orthogonality conditions" (Sargan, 1958, 1959) or "unbiased estimating equations" (Huber, 1967; Wang et al., 1997). Description Suppose the available data consists of T observations {Y1, …, YT}, where each observation Yt is an n-dimensional multivariate random variable. We assume that the data come from a certain statistical model, defined up to an unknown parameter θ ∈ Θ. The goal of the estimation problem is to find the "true" value of this parameter, θ0, or at least a reasonably close estimate. A general assumption of GMM is that the data Yt be generated by a weakly stationary ergodic stochastic process. (The case of independent and identically distributed (iid) variables Yt is a special case of this condition.) In order to apply GMM, we need to have "moment conditions",
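As a concrete (and deliberately simple) illustration, the sketch below estimates the mean and variance of normal data from the two moment conditions E[Y − μ] = 0 and E[(Y − μ)² − σ²] = 0, minimizing a quadratic form in their sample averages. An identity weighting matrix stands in for the efficient two-step weighting, so this is a toy example under stated assumptions, not production econometrics code.

```python
# Toy GMM: estimate (mu, sigma^2) from the moment conditions
#   E[Y - mu] = 0  and  E[(Y - mu)^2 - sigma^2] = 0.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=3.0, size=5_000)

def moment_conditions(theta, y):
    mu, sigma2 = theta
    g1 = y - mu
    g2 = (y - mu) ** 2 - sigma2
    return np.column_stack([g1, g2])      # T x 2 matrix of moment values

def gmm_objective(theta, y, W=np.eye(2)):
    gbar = moment_conditions(theta, y).mean(axis=0)  # sample averages of the moments
    return gbar @ W @ gbar                           # quadratic-form "norm" being minimized

result = minimize(gmm_objective, x0=[0.0, 1.0], args=(y,), method="Nelder-Mead")
print(result.x)  # approximately [2.0, 9.0]
```

With as many moment conditions as parameters the objective can be driven to (nearly) zero, reproducing the classical method of moments; the weighting matrix only matters in the overidentified case.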
https://en.wikipedia.org/wiki/Skew%20lines
In three-dimensional geometry, skew lines are two lines that do not intersect and are not parallel. A simple example of a pair of skew lines is the pair of lines through opposite edges of a regular tetrahedron. Two lines that both lie in the same plane must either cross each other or be parallel, so skew lines can exist only in three or more dimensions. Two lines are skew if and only if they are not coplanar. General position If four points are chosen at random uniformly within a unit cube, they will almost surely define a pair of skew lines. After the first three points have been chosen, the fourth point will define a non-skew line if, and only if, it is coplanar with the first three points. However, the plane through the first three points forms a subset of measure zero of the cube, and the probability that the fourth point lies on this plane is zero. If it does not, the lines defined by the points will be skew. Similarly, in three-dimensional space a very small perturbation of any two parallel or intersecting lines will almost certainly turn them into skew lines. Therefore, any four points in general position always form skew lines. In this sense, skew lines are the "usual" case, and parallel or intersecting lines are special cases. Formulas Testing for skewness If each line in a pair of skew lines is defined by two points that it passes through, then these four points must not be coplanar, so they must be the vertices of a tetrahedron of nonzero volume. Conversely, any two pairs of points defining a tetrahedron of nonzero volume also define a pair of skew lines. Therefore, a test of whether two pairs of points define skew lines is to apply the formula for the volume of a tetrahedron in terms of its four vertices. Denoting one point as the 1×3 vector a whose three elements are the point's three coordinate values, and likewise denoting b, c, and d for the other points, we can check if the line through a and b is skew to the line through c and d by seeing if the tet
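The tetrahedron-volume test can be coded directly: with points a, b, c, d as 3-vectors, the scalar triple product (b − a) · ((c − a) × (d − a)) is six times the signed tetrahedron volume, so a nonzero value means the two lines are skew. This is a minimal NumPy illustration; real code would pick the tolerance to suit its coordinate scale.

```python
import numpy as np

def lines_are_skew(a, b, c, d, tol=1e-12):
    """Line through a,b vs line through c,d: skew iff the four points span
    a tetrahedron of nonzero volume (nonzero scalar triple product)."""
    a, b, c, d = (np.asarray(p, dtype=float) for p in (a, b, c, d))
    volume6 = np.dot(b - a, np.cross(c - a, d - a))  # 6 * signed tetrahedron volume
    return abs(volume6) > tol

# Opposite edges of a tetrahedron are skew:
print(lines_are_skew((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # True
# Two lines lying in the same plane (here z = 0) are not:
print(lines_are_skew((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)))  # False
```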
https://en.wikipedia.org/wiki/Electric%20potential%20energy
Electric potential energy is a potential energy (measured in joules) that results from conservative Coulomb forces and is associated with the configuration of a particular set of point charges within a defined system. An object may be said to have electric potential energy by virtue of either its own electric charge or its relative position to other electrically charged objects. The term "electric potential energy" is used to describe the potential energy in systems with time-variant electric fields, while the term "electrostatic potential energy" is used to describe the potential energy in systems with time-invariant electric fields. Definition The electric potential energy of a system of point charges is defined as the work required to assemble this system of charges by bringing them together from an infinite distance into the given configuration. Alternatively, the electric potential energy of any given charge or system of charges is termed as the total work done by an external agent in bringing the charge or the system of charges from infinity to the present configuration without undergoing any acceleration. The electrostatic potential energy can also be defined from the electric potential as follows: Units The SI unit of electric potential energy is the joule (named after the English physicist James Prescott Joule). In the CGS system the erg is the unit of energy, being equal to 10⁻⁷ joules. Also electronvolts may be used, 1 eV = 1.602×10⁻¹⁹ joules. Electrostatic potential energy of one point charge One point charge q in the presence of another point charge Q The electrostatic potential energy, UE, of one point charge q at position r in the presence of a point charge Q, taking an infinite separation between the charges as the reference position, is: UE = ke qQ/r, where ke is the Coulomb constant, r is the distance between the point charges q and Q, and q and Q are the charges (not the absolute values of the charges—i.e., an electron would have a negative value of charge when
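The pairwise formula extends by summation: the energy of a system of point charges is the sum of ke·qi·qj/rij over all unordered pairs. A small sketch using the standard value ke ≈ 8.988×10⁹ N·m²/C²:

```python
from itertools import combinations
import math

K_E = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def electrostatic_energy(charges, positions):
    """Total electrostatic potential energy (joules) of a set of point charges,
    summing k_e * q_i * q_j / r_ij over all unordered pairs."""
    total = 0.0
    for i, j in combinations(range(len(charges)), 2):
        r = math.dist(positions[i], positions[j])
        total += K_E * charges[i] * charges[j] / r
    return total

# Two +1 C charges 1 m apart: U = k_e * 1 * 1 / 1, about 8.99e9 J
print(electrostatic_energy([1.0, 1.0], [(0, 0, 0), (1, 0, 0)]))
```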
https://en.wikipedia.org/wiki/Evil%20bit
The evil bit is a fictional IPv4 packet header field proposed in a humorous April Fools' Day RFC from 2003, authored by Steve Bellovin. The Request for Comments recommended that the last remaining unused bit, the "Reserved Bit" in the IPv4 packet header, be used to indicate whether a packet had been sent with malicious intent, thus making computer security engineering an easy problem: simply ignore any messages with the evil bit set and trust the rest. Influence The evil bit has become a synonym for all attempts to seek simple technical solutions for difficult human social problems which require the willing participation of malicious actors, in particular efforts to implement Internet censorship using simple technical solutions. As a joke, FreeBSD implemented support for the evil bit that day but removed the changes the next day. A Linux patch implementing the iptables module "ipt_evil" was posted the next year. Furthermore, a patch for FreeBSD 7 is available and is kept up-to-date. There is an extension to the XMPP protocol, "XEP-0076: Malicious Stanzas", inspired by the evil bit. This RFC has also been quoted in the otherwise completely serious RFC 3675, ".sex Considered Dangerous", which may have caused the proponents of .xxx to wonder whether the Internet Engineering Task Force (IETF) was commenting on their application for a top-level domain (TLD); the document was not related to their application. For April Fool's 2010, Google added an &evil=true parameter to requests through the Ajax APIs. See also Technological fix Do Not Track HTTP 451
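For the curious, the bit in question is the most significant bit of the 16-bit flags/fragment-offset word (bytes 6 and 7) of the IPv4 header. The sketch below shows how one might set it when packing a header by hand; the addresses and field values are placeholders, and the checksum is left at zero for brevity.

```python
import struct

EVIL_BIT = 0x8000  # reserved (most significant) bit of the flags/fragment-offset word

def pack_ipv4_header(src, dst, evil=False):
    """Pack a minimal 20-byte IPv4 header; field values are illustrative."""
    version_ihl = (4 << 4) | 5            # IPv4, header length 5 x 32-bit words
    flags_frag = EVIL_BIT if evil else 0
    return struct.pack("!BBHHHBBH4s4s",
                       version_ihl, 0, 20,   # TOS, total length
                       0, flags_frag,        # identification, flags + fragment offset
                       64, 6, 0,             # TTL, protocol (TCP), checksum (unset)
                       src, dst)

header = pack_ipv4_header(b"\x7f\x00\x00\x01", b"\x7f\x00\x00\x01", evil=True)
print(header.hex())
```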
https://en.wikipedia.org/wiki/Divergent%20evolution
Divergent evolution or divergent selection is the accumulation of differences between closely related populations within a species, sometimes leading to speciation. Divergent evolution is typically exhibited when two populations become separated by a geographic barrier (such as in allopatric or peripatric speciation) and experience different selective pressures that drive adaptations to their new environment. After many generations and continual evolution, the populations become less able to interbreed with one another. The American naturalist J. T. Gulick (1832–1923) was the first to use the term "divergent evolution", with its use becoming widespread in modern evolutionary literature. Classic examples of divergence in nature are the adaptive radiation of the finches of the Galapagos or the coloration differences in populations of a species that live in different habitats such as with pocket mice and fence lizards. The term can also be applied in molecular evolution, such as to proteins that derive from homologous genes. Both orthologous genes (resulting from a speciation event) and paralogous genes (resulting from gene duplication) can illustrate divergent evolution. Through gene duplication, it is possible for divergent evolution to occur between two genes within a species. Similarities between species that have diverged are due to their common origin, so such similarities are homologies. In contrast, convergent evolution arises when an adaptation has arisen independently, creating analogous structures such as the wings of birds and of insects. Creation, definition, and usage The term divergent evolution is believed to have been first used by J. T. Gulick. Divergent evolution is commonly defined as what occurs when two groups of the same species evolve different traits within those groups in order to accommodate differing environmental and social pressures. Various examples of such pressures can include predation, food supplies, and competition for mates
https://en.wikipedia.org/wiki/Littlewood%E2%80%93Offord%20problem
In the mathematical field of combinatorial geometry, the Littlewood–Offord problem is the problem of determining the number of subsums of a set of vectors that fall in a given convex set. More formally, if V is a vector space of dimension d, the problem is to determine, given a finite subset of vectors S and a convex subset A, the number of subsets of S whose summation is in A. The first upper bound for this problem was proven (for d = 1 and d = 2) in 1938 by John Edensor Littlewood and A. Cyril Offord. This Littlewood–Offord lemma states that if S is a set of n real or complex numbers of absolute value at least one and A is any disc of radius one, then not more than c·2^n·log n/√n (for some constant c) of the 2^n possible subsums of S fall into the disc. In 1945 Paul Erdős improved the upper bound for d = 1 to the middle binomial coefficient C(n, ⌊n/2⌋) using Sperner's theorem. This bound is sharp; equality is attained when all vectors in S are equal. In 1966, Kleitman showed that the same bound held for complex numbers. In 1970, he extended this to the setting when V is a normed space. Suppose S = {v1, …, vn}. By subtracting (v1 + ⋯ + vn)/2 from each possible subsum (that is, by changing the origin and then scaling by a factor of 2), the Littlewood–Offord problem is equivalent to the problem of determining the number of sums of the form ε1v1 + ⋯ + εnvn that fall in the target set A, where each εi takes the value 1 or −1. This makes the problem into a probabilistic one, in which the question is of the distribution of these random vectors, and what can be said knowing nothing more about the vi.
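The sharpness of Erdős's bound is easy to verify computationally for small n: in the ±1 reformulation with all vi = 1, the possible sums are spaced two apart, so a disc of radius 1 captures at most one value, and the most popular value is hit exactly C(n, ⌊n/2⌋) times. A brute-force check, purely illustrative:

```python
from itertools import product
from math import comb

def max_signed_sums_agreeing(v):
    """Brute force: the largest number of +/-1 signed sums of v that coincide.
    When all v_i are equal, a disc of radius 1 catches exactly one sum value."""
    counts = {}
    for signs in product((1, -1), repeat=len(v)):
        s = sum(e * x for e, x in zip(signs, v))
        counts[s] = counts.get(s, 0) + 1
    return max(counts.values())

n = 10
print(max_signed_sums_agreeing([1] * n))  # 252
print(comb(n, n // 2))                    # 252: Erdős's bound, attained
```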
https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Information%20Theory
IEEE Transactions on Information Theory is a monthly peer-reviewed scientific journal published by the IEEE Information Theory Society. It covers information theory and the mathematics of communications. It was established in 1953 as IRE Transactions on Information Theory. The editor-in-chief is Muriel Médard (Massachusetts Institute of Technology). As of 2007, the journal allows the posting of preprints on arXiv. According to Jack van Lint, it is the leading research journal in the whole field of coding theory. A 2006 study using the PageRank network analysis algorithm found that, among hundreds of computer science-related journals, IEEE Transactions on Information Theory had the highest ranking and was thus deemed the most prestigious. ACM Computing Surveys, with the highest impact factor, was deemed the most popular.
https://en.wikipedia.org/wiki/Simple%20Network%20Paging%20Protocol
Simple Network Paging Protocol (SNPP) is a protocol that defines a method by which a pager can receive a message over the Internet. It is supported by most major paging providers, and serves as an alternative to the paging modems used by many telecommunications services. The protocol was most recently described in . It is a fairly simple protocol that may run over TCP port 444 and sends out a page using only a handful of well-documented commands. Connecting and using SNPP servers It is relatively easy to connect to an SNPP server, requiring only a telnet client and the address of the SNPP server. Port 444 is standard for SNPP servers, and it is free to use from the sender's point of view. Maximum message length can be carrier-dependent. Once connected, a user can simply enter the commands to send a message to a pager connected to that network. For example, you could issue the PAGE command with the number of the device to which you wish to send the message, then issue the MESS command with the text of the message following it. You can then issue the SEND command to send out the message to the pager and then QUIT, or send another message to a different device. The protocol also allows you to issue multiple PAGE commands, stacking them one after the other, per message, effectively allowing you to send the same message to several devices on that network with one MESS and SEND command pair.
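Because the exchange is plain ASCII over TCP, a minimal client needs nothing beyond a socket. The sketch below is illustrative: the server hostname and pager ID are placeholders, and a robust client would parse the numeric response codes rather than just printing them.

```python
import socket

def send_page(host, pager_id, message, port=444):
    """Minimal SNPP client: PAGE, MESS, SEND, QUIT (hostname and ID are placeholders)."""
    with socket.create_connection((host, port), timeout=10) as sock:
        print(sock.recv(1024).decode().strip())          # server greeting
        for cmd in (f"PAGE {pager_id}", f"MESS {message}", "SEND", "QUIT"):
            sock.sendall((cmd + "\r\n").encode())
            print(sock.recv(1024).decode().strip())      # numeric response per command

send_page("snpp.example.com", "5551234", "Server room temperature alarm")
```

Sending the same message to several pagers would, per the stacking behaviour described above, mean issuing several PAGE commands before the single MESS and SEND pair.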
https://en.wikipedia.org/wiki/Rapoport%27s%20rule
Rapoport's rule is an ecogeographical rule that states that latitudinal ranges of plants and animals are generally smaller at lower latitudes than at higher latitudes. Background Stevens (1989) named the rule after Eduardo H. Rapoport, who had earlier provided evidence for the phenomenon for subspecies of mammals (Rapoport 1975, 1982). Stevens used the rule to "explain" greater species diversity in the tropics, in the sense that latitudinal gradients in species diversity and the rule share the same exceptions and so must have the same underlying cause. Narrower ranges in the tropics would allow more species to coexist. He later extended the rule to altitudinal gradients, claiming that altitudinal ranges are greatest at greater altitudes (Stevens 1992), and to depth gradients in the oceans (Stevens 1996). The rule has been the focus of intense discussion and given much impetus to exploring distributional patterns of plants and animals. Stevens' original paper has been cited about 330 times in the scientific literature. Generality Support for the generality of the rule is at best equivocal. For example, marine teleost fishes have the greatest latitudinal ranges at low latitudes. In contrast, freshwater fishes do show the trend, although only above a latitude of about 40 degrees North. Some subsequent papers have found support for the rule; others, probably even more numerous, have found exceptions to it. For most groups that have been shown to follow the rule, it is restricted to or at least most distinct above latitudes of about 40–50 degrees. Rohde therefore concluded that the rule describes a local phenomenon. Computer simulations using the Chowdhury Ecosystem Model did not find support for the rule. Explanations Rohde (1996) explained the fact that the rule is restricted to very high latitudes by effects of glaciations which have wiped out species with narrow ranges, a view also expressed by Brown (1995). Another explanation of Rapoport's rule is
https://en.wikipedia.org/wiki/Pumapard
A pumapard is a hybrid of a cougar and a leopard. Both male cougar with female leopard and male leopard with female cougar pairings have produced offspring. In general, these hybrids have exhibited a tendency to dwarfism. Characteristics Whether born to a male cougar mated to a leopardess, or to a male leopard mated to a female cougar, pumapards inherit a form of dwarfism. Those reported grew to only half the size of the parents. They have a long, cougar-like body (proportional to the limbs, but nevertheless shorter than either parent) with short legs. The coat is variously described as sandy, tawny or grayish with brown, chestnut or "faded" rosettes. One is preserved in the Walter Rothschild Zoological Museum at Tring, England, and clearly shows the tendency to dwarfism. This hybrid was exhibited at the Tierpark in Stellingen (Hamburg), Germany. A black and white photograph of the Tring hybrid appeared in Animals of the World (1917) with the caption "This is a photograph from life of a very rare hybrid. That animal's father was a puma, its mother a leopard. It is now dead and it may be seen stuffed in Mr. Rothschild's Museum at Tring." History In the late 1890s/early 1900s, two hybrids were born in Chicago, United States, followed two years later by three sets of twin cubs born at a zoo in Hamburg, Germany, from a cougar father and leopard mother. Carl Hagenbeck apparently bred several litters of hybrids in 1898 at the suggestion of a menagerie owner in Great Britain; this was possibly Lord Rothschild (as one of the hybrids is preserved in his museum) who may have heard of the two hybrid cubs bred in Chicago in 1896 and suggested that Hagenbeck reproduce the pairing. Hagenbeck's cougar/leopard hybrids may have been inspired by a pair of hybrid cubs born in Chicago on 24 April 1896 at Tattersall's indoor arena, where Ringling Brothers Circus opened its season. A similar hybrid was reported by Helmut Hemmer. These hybrids had cougar-like long tails and sandy
https://en.wikipedia.org/wiki/FastICA
FastICA is an efficient and popular algorithm for independent component analysis invented by Aapo Hyvärinen at Helsinki University of Technology. Like most ICA algorithms, FastICA seeks an orthogonal rotation of prewhitened data, through a fixed-point iteration scheme, that maximizes a measure of non-Gaussianity of the rotated components. Non-gaussianity serves as a proxy for statistical independence, which is a very strong condition and requires infinite data to verify. FastICA can also be alternatively derived as an approximative Newton iteration. Algorithm Prewhitening the data Let X denote the N×M input data matrix, the number of columns M corresponding with the number of samples of mixed signals and the number of rows N corresponding with the number of independent source signals. The input data matrix X must be prewhitened, or centered and whitened, before applying the FastICA algorithm to it. Centering the data entails demeaning each component of the input data X, that is, replacing each entry xij by xij − (1/M) Σj′ xij′, for each i and j. After centering, each row of X has an expected value of 0. Whitening the data requires a linear transformation of the centered data so that the components of the transformed data are uncorrelated and have variance one. More precisely, if X is a centered data matrix, the covariance of its whitened version is the N-dimensional identity matrix, that is, E{xxᵀ} = I. A common method for whitening is by performing an eigenvalue decomposition on the covariance matrix of the centered data X, E{xxᵀ} = E D Eᵀ, where E is the matrix of eigenvectors and D is the diagonal matrix of eigenvalues. The whitened data matrix is thus defined by X ← E D^(−1/2) Eᵀ X. Single component extraction The iterative algorithm finds the direction for the weight vector w that maximizes a measure of non-Gaussianity of the projection wᵀX, with X denoting a prewhitened data matrix as described above. Note that w is a column vector. To measure non-Gaussianity, FastICA relies on a nonquadratic nonlinear function f(u), its first derivative g(u), and its second derivative g′(u). Hyvärinen states that the functions
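The prewhitening and one-unit iteration just described translate almost line for line into NumPy. The sketch below uses the common choice f(u) = log cosh(u), so g(u) = tanh(u) and g′(u) = 1 − tanh²(u), and applies the fixed-point update w ← E{X g(wᵀX)} − E{g′(wᵀX)} w followed by renormalization. It extracts a single component and is an illustration under these assumptions, not a replacement for a library implementation.

```python
import numpy as np

def prewhiten(X):
    """Center each row, then whiten via eigendecomposition of the covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    cov = np.cov(X)
    d, E = np.linalg.eigh(cov)                 # cov = E diag(d) E^T
    return E @ np.diag(d ** -0.5) @ E.T @ X    # whitened data: covariance ~ identity

def fastica_one_unit(X, iters=200, seed=0):
    """Extract one independent component from prewhitened data X (N x M),
    using g(u) = tanh(u) and g'(u) = 1 - tanh(u)^2 (from f(u) = log cosh u)."""
    N, M = X.shape
    w = np.random.default_rng(seed).normal(size=N)
    w /= np.linalg.norm(w)
    for _ in range(iters):
        u = w @ X                              # projections w^T X, one per sample
        w_new = (X * np.tanh(u)).mean(axis=1) - (1 - np.tanh(u) ** 2).mean() * w
        w = w_new / np.linalg.norm(w_new)      # renormalize after each update
    return w

# Demo: two mixed sources, one strongly non-Gaussian.
rng = np.random.default_rng(1)
S = np.vstack([np.sign(rng.normal(size=10_000)),   # binary (non-Gaussian) source
               rng.normal(size=10_000)])           # Gaussian source
X = np.array([[2.0, 1.0], [1.0, 1.0]]) @ S         # mixed observations
w = fastica_one_unit(prewhiten(X))
print(w)  # direction recovering one source, up to sign
```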
https://en.wikipedia.org/wiki/Codd%27s%20cellular%20automaton
Codd's cellular automaton is a cellular automaton (CA) devised by the British computer scientist Edgar F. Codd in 1968. It was designed to recreate the computation- and construction-universality of von Neumann's CA but with fewer states: 8 instead of 29. Codd showed that it was possible to make a self-reproducing machine in his CA, in a similar way to von Neumann's universal constructor, but never gave a complete implementation. History In the 1940s and '50s, John von Neumann posed the following problem: What kind of logical organization is sufficient for an automaton to be able to reproduce itself? He was able to construct a cellular automaton with 29 states, and with it a universal constructor. Codd, building on von Neumann's work, found a simpler machine with eight states. This modified von Neumann's question: What kind of logical organization is necessary for an automaton to be able to reproduce itself? Three years after Codd's work, Edwin Roger Banks showed a 4-state CA in his PhD thesis that was also capable of universal computation and construction, but again did not implement a self-reproducing machine. John Devore, in his 1973 master's thesis, tweaked Codd's rules to greatly reduce the size of Codd's design, to the extent that it could be implemented in the computers of that time. However, the data tape for self-replication was too long; Devore's original design was later able to complete replication using Golly. Christopher Langton made another tweak to Codd's cellular automaton in 1984 to create Langton's loops, exhibiting self-replication with far fewer cells than that needed for self-reproduction in previous rules, at the cost of removing the ability for universal computation and construction. Comparison of CA rulesets Specification Codd's CA has eight states determined by a von Neumann neighborhood with rotational symmetry. The table below shows the signal-trains needed to accomplish different tasks. Some of the signal trains need to be separated
https://en.wikipedia.org/wiki/Optical%20resolution
Optical resolution describes the ability of an imaging system to resolve detail, in the object that is being imaged. An imaging system may have many individual components, including one or more lenses, and/or recording and display components. Each of these contributes (given suitable design, and adequate alignment) to the optical resolution of the system; the environment in which the imaging is done often is a further important factor. Lateral resolution Resolution depends on the distance between two distinguishable radiating points. The sections below describe the theoretical estimates of resolution, but the real values may differ. The results below are based on mathematical models of Airy discs, which assume an adequate level of contrast. In low-contrast systems, the resolution may be much lower than predicted by the theory outlined below. Real optical systems are complex, and practical difficulties often increase the distance between distinguishable point sources. The resolution of a system is based on the minimum distance at which the points can be distinguished as individuals. Several standards are used to determine, quantitatively, whether or not the points can be distinguished. One of the methods specifies that, on the line between the center of one point and the next, the contrast between the maximum and minimum intensity be at least 26% lower than the maximum. This corresponds to the overlap of one Airy disk on the first dark ring in the other. This standard for separation is also known as the Rayleigh criterion. In symbols, the distance is defined as follows: d = 1.22λ/(2n sin θ) = 0.61λ/NA, where d is the minimum distance between resolvable points, in the same units as λ is specified, λ is the wavelength of light (emission wavelength, in the case of fluorescence), n is the index of refraction of the media surrounding the radiating points, θ is the half angle of the pencil of light that enters the objective, and NA = n sin θ is the numerical aperture. This formula is suitable for confocal mi
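Plugging numbers into the Rayleigh criterion is straightforward; for example, green light at 550 nm through a 1.4-NA oil-immersion objective resolves separations of roughly 240 nm. A small sketch (the values are illustrative):

```python
def rayleigh_resolution(wavelength_nm, numerical_aperture):
    """Minimum resolvable separation d = 0.61 * lambda / NA (Rayleigh criterion)."""
    return 0.61 * wavelength_nm / numerical_aperture

# Green light (550 nm), oil-immersion objective with NA = 1.4:
print(f"{rayleigh_resolution(550, 1.4):.0f} nm")  # ~240 nm
```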
https://en.wikipedia.org/wiki/Crossed%20module
In mathematics, and especially in homotopy theory, a crossed module consists of groups G and H, where G acts on H by automorphisms (which we will write on the left, (g, h) ↦ g·h), and a homomorphism of groups d: H → G that is equivariant with respect to the conjugation action of G on itself: d(g·h) = g d(h) g⁻¹, and also satisfies the so-called Peiffer identity: d(h₁)·h₂ = h₁h₂h₁⁻¹. Origin The first mention of the second identity for a crossed module seems to be in footnote 25 on p. 422 of J. H. C. Whitehead's 1941 paper cited below, while the term 'crossed module' is introduced in his 1946 paper cited below. These ideas were well worked up in his 1949 paper 'Combinatorial homotopy II', which also introduced the important idea of a free crossed module. Whitehead's ideas on crossed modules and their applications are developed and explained in the book by Brown, Higgins, Sivera listed below. Some generalisations of the idea of crossed module are explained in the paper of Janelidze. Examples Let N be a normal subgroup of a group G. Then, the inclusion d: N → G is a crossed module with the conjugation action of G on N. For any group G, modules over the group ring Z[G] are crossed G-modules with d = 0. For any group H, the homomorphism from H to Aut(H) sending any element of H to the corresponding inner automorphism is a crossed module. Given any central extension of groups 1 → A → H → G → 1, the surjective homomorphism d: H → G together with the action of G on H defines a crossed module. Thus, central extensions can be seen as special crossed modules. Conversely, a crossed module with surjective boundary defines a central extension. If (X, A, x) is a pointed pair of topological spaces (i.e. A is a subspace of X, and x is a point in A), then the homotopy boundary d: π₂(X, A, x) → π₁(A, x), from the second relative homotopy group to the fundamental group, may be given the structure of crossed module. This functor satisfies a form of the van Kampen theorem, in that it preserves certain colimits. The result on the crossed module of a pair can also be phrased as: if F → E → B is a pointed fibration
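Written out in display form, and using the symbols assumed above (d: H → G, with the action written g·h), the two axioms are:

```latex
% The two crossed-module axioms for d : H -> G with action (g, h) |-> g . h
d(g \cdot h) = g \, d(h) \, g^{-1} \qquad \text{(equivariance)}
\\[4pt]
d(h_1) \cdot h_2 = h_1 h_2 h_1^{-1} \qquad \text{(Peiffer identity)}
```

The normal-subgroup example satisfies both trivially, since there d is an inclusion and the action is literal conjugation.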
https://en.wikipedia.org/wiki/Conic%20constant
In geometry, the conic constant (or Schwarzschild constant, after Karl Schwarzschild) is a quantity describing conic sections, and is represented by the letter K. The constant is given by K = −e², where e is the eccentricity of the conic section. The equation for a conic section with apex at the origin and tangent to the y axis is y² − 2Rx + (K + 1)x² = 0, alternately x = y² / (R + √(R² − (K + 1)y²)), where R is the radius of curvature at x = 0. This formulation is used in geometric optics to specify oblate elliptical (K > 0), spherical (K = 0), prolate elliptical (−1 < K < 0), parabolic (K = −1), and hyperbolic (K < −1) lens and mirror surfaces. When the paraxial approximation is valid, the optical surface can be treated as a spherical surface with the same radius. Some non-optical design references use the letter p as the conic constant. In these cases, p = K + 1.
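In optical design the alternate form above is used directly as the sag equation of a surface. A small sketch evaluating it (the R and K values are illustrative placeholders):

```python
import math

def sag(y, R, K):
    """Depth x of a conic surface at height y from the axis, apex at the origin:
    x = y^2 / (R + sqrt(R^2 - (K + 1) * y^2))."""
    return y**2 / (R + math.sqrt(R**2 - (K + 1) * y**2))

# Parabolic mirror (K = -1) with 2000 mm radius of curvature, 100 mm off axis:
print(sag(100.0, 2000.0, -1.0))  # 2.5 mm; for K = -1 the formula reduces to y^2 / (2R)
```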
https://en.wikipedia.org/wiki/Teleoperation
Teleoperation (or remote operation) indicates operation of a system or machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academia and technology. It is most commonly associated with robotics and mobile robots but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance. Teleoperation can be considered a human-machine system. For example, ArduPilot provides a spectrum of autonomy ranging from manual control to full autopilot for autonomous vehicles. The term teleoperation is in use in research and technical communities as a standard term for referring to operation at a distance. This is as opposed to telepresence, which is a less standard term and might refer to a whole range of existence or interaction that includes a remote connotation. History The 19th century saw many inventors working on remotely operated weapons (torpedoes), including prototypes built by John Louis Lay (1872), John Ericsson (1873), Victor von Scheliha (1873), and the first practical wire guided torpedo, the Brennan torpedo, patented by Louis Brennan in 1877. In 1898, Nikola Tesla demonstrated a remotely controlled boat with a patented wireless radio guidance system that he tried to market to the United States military, but was turned down. Teleoperation is now moving into the hobby industry with first-person view (FPV) equipment. FPV equipment mounted on hobby cars, planes and helicopters gives a TV-style transmission back to the operator, extending the range of the vehicle to greater than line-of-sight range. The ALOHA (A Low-cost Open-source Hardware System for Bimanual Teleoperation) remote manipulator robot is used for data collection to train AI to perform learned skills. Examples There are several particular types of systems that are often controlled remotely: Entertainment systems (e.g. televisions, VCRs, DVD players, etc.) are often controlled remotely vi
https://en.wikipedia.org/wiki/Gravity%20darkening
Gravity darkening, also referred to as gravity brightening, is an astronomical phenomenon where the poles of a star are brighter than the equator, due to rapid rotation and oblate shape. When a star is oblate, it has a larger radius at its equator than it does at its poles. As a result, the poles have a higher surface gravity, and thus higher temperature and pressure are needed to maintain hydrostatic equilibrium. Thus, the poles are "gravity brightened", and the equator "gravity darkened". The star becomes oblate (and hence gravity darkening occurs) because the centrifugal force resulting from rotation creates additional outward pressure on the star. The centrifugal force is expressed mathematically as F = mω²ρ, where m is mass (in this case of a small volume element of the star), ω is the angular velocity, and ρ is the radial distance from the axis of rotation. In the case of a star, the value of ρ is largest at the equator and smallest at the poles. This means that equatorial regions of a star have a greater centrifugal force than the poles. The centrifugal force pushes mass away from the axis of rotation, resulting in less overall pressure on the gas in the equatorial regions of the star. This causes the gas in this region to become less dense, and cooler. Von Zeipel's theorem states that the radiation from a star is proportional to the local effective gravity, that is to the gravity reduced by any centrifugal force at that location on the star's surface. Then the effective temperature is proportional to the fourth root of the effective gravity.
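Von Zeipel's theorem makes the pole-to-equator contrast easy to estimate: with effective gravity g_eff = g − ω²ρ at distance ρ from the rotation axis, the effective temperature scales as g_eff^(1/4). The sketch below is a rough illustration that treats the star as a sphere of fixed radius, so only the centrifugal term varies (real gravity-darkened stars are oblate, so this only indicates the size of the effect); the stellar parameters are placeholders.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def equator_to_pole_temperature_ratio(M, R, omega):
    """Rough von Zeipel estimate: T_eff ~ g_eff^(1/4), with the centrifugal
    term omega^2 * R subtracted at the equator (oblateness ignored)."""
    g = G * M / R**2             # surface gravity (same at pole in this approximation)
    g_eq = g - omega**2 * R      # effective gravity at the equator
    return (g_eq / g) ** 0.25

# Illustrative fast rotator: 2 solar masses, 2 solar radii, 12-hour rotation period
M, R = 2 * 1.989e30, 2 * 6.957e8
omega = 2 * math.pi / (12 * 3600)
print(equator_to_pole_temperature_ratio(M, R, omega))  # < 1: the equator is cooler
```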
https://en.wikipedia.org/wiki/Tyan
Tyan Computer Corporation (泰安電腦科技股份有限公司; also known as Tyan Business Unit, or TBU) is a subsidiary of MiTAC International, and a manufacturer of computer motherboards, including models for both AMD and Intel processors. They develop and produce high-end server, SMP, and desktop barebones systems as well as provide design and production services to tier 1 global OEMs, and a number of other regional OEMs. Founding The company was founded in 1989 by Dr. T. Symon Chang, a veteran of IBM and Intel. At that time, Dr. Chang saw an empty space in the market in which there were no strong players for the SMP server space, and as such he founded Tyan in order to develop, produce and deliver such products, starting with a dual Intel Pentium-series motherboard as well as a number of other single processor motherboards all geared towards server applications. Since then, Tyan has produced a number of single and multi-processor (as well as multi-core) products using technology from many well-known companies (e.g. Intel, AMD, NVIDIA, Broadcom and many more). Notable design wins include that of Dawning corporation for the fastest supercomputer (twice); first to market with a dual AMD Athlon MP server platform; winner of the Maximum PC Kick-Ass Award (twice) for their contributions to the Dream Machine (most recently, the 2005 edition); and first to market with an eight (8) GPU server platform (the FT72-B7015). Later company history Tyan is headquartered in Taipei, Taiwan, separated between three buildings in the Nei-Hu industrial district. All three buildings belong to the parent company, MiTAC. The North American headquarters is in Newark, California, which is also MiTAC's North American headquarters. The merger with MiTAC, a Taiwanese OEM which develops and produces a range of products (including servers, notebooks, consumer electronics products, networking and educational products, as well as providing contract manufacturing services), was announced in Ma
https://en.wikipedia.org/wiki/SIGTRAN
SIGTRAN is the name, derived from signaling transport, of the former Internet Engineering Task Force (IETF) working group that produced specifications for a family of protocols that provide reliable datagram service and user layer adaptations for Signaling System 7 (SS7) and ISDN communications protocols. The SIGTRAN protocols are an extension of the SS7 protocol family, and they support the same application and call management paradigms as SS7. However, the SIGTRAN protocols use an Internet Protocol (IP) transport called Stream Control Transmission Protocol (SCTP), instead of TCP or UDP. Indeed, the most significant protocol defined by the SIGTRAN group is SCTP, which is used to carry PSTN signaling over IP. The SIGTRAN group was significantly influenced by telecommunications engineers intent on using the new protocols for adapting IP networks to the PSTN with special regard to signaling applications. More recently, SCTP has found applications beyond its original purpose, wherever reliable datagram service is desired. SIGTRAN has been published in RFC 2719, under the title Framework Architecture for Signaling Transport. RFC 2719 also defines the concept of a signaling gateway (SG), which converts Common Channel Signaling (CCS) messages from SS7 to SIGTRAN. Implemented in a variety of network elements including softswitches, the SG function can provide significant value to existing common channel signaling networks, leveraging investments associated with SS7 and delivering the cost/performance values associated with IP transport. SIGTRAN protocols The SIGTRAN family of protocols includes: Stream Control Transmission Protocol (SCTP), RFC 2960, RFC 3873, RFC 4166, RFC 4960. ISDN User Adaptation (IUA), RFC 4233, RFC 5133. Message Transfer Part 2 (MTP2) User Peer-to-Peer Adaptation Layer (M2PA), RFC 4165. Message Transfer Part 2 User Adaptation Layer (M2UA), RFC 3331. Message Transfer Part 3 User Adaptation Layer (M3UA), RFC 4666. Signalling Connection Control Part (SCCP) User Adaptation (SUA
https://en.wikipedia.org/wiki/William%20Alvin%20Howard
William Alvin Howard (born 1926) is a proof theorist best known for his work demonstrating formal similarity between intuitionistic logic and the simply typed lambda calculus that has come to be known as the Curry–Howard correspondence. He has also been active in the theory of proof-theoretic ordinals. He earned his Ph.D. at the University of Chicago in 1956 for his dissertation "k-fold recursion and well-ordering". He was a student of Saunders Mac Lane. The Howard ordinal (also known as the Bachmann–Howard ordinal) was named after him. He was elected to the 2018 class of fellows of the American Mathematical Society.
https://en.wikipedia.org/wiki/CableACE%20Award
The CableACE Award (earlier known as the ACE Awards; ACE was an acronym for "Award for Cable Excellence") is a defunct award that was given by what was then the National Cable Television Association from 1978 to 1997 to honor excellence in American cable television programming. The trophy itself was shaped as a glass spade, alluding to the Ace of spades. History The CableACE was created to serve as the cable industry's counterpart to broadcast television's Primetime Emmy Awards. Until the 40th ceremony in 1988, the Emmys refused to honor cable programming. For much of its existence, the ceremony aired as a simulcast on as many as twelve cable networks in some years. The last few years found the ceremony awarded solely to one network, usually Lifetime or TBS. In 1992, the award's official name was changed from ACE to CableACE to reduce confusion with the American Cinema Editors (ACE) society. By 1997, the Emmys had reached a tipping point: cable programming had grown in critical acclaim to stand at parity with broadcast programming, a position that would hold only for a short time before cable programming began to dominate the categories of the Primetime Emmys. Few attended the national CableACE Awards ceremony in November 1997, and the CableACE show had a low 0.6 rating on TNT, compared with a 1.2 rating the year before, while the Emmys had a 13.5 rating that year. Smaller cable networks called for the CableACEs to be saved as their only real forum for recognition. In April 1998, members of the NCTA chose to end the CableACEs. Judging Professionals in the television industry were randomly selected to be judges. A Universal City hotel would be selected, where several rooms would be rented for the day. Individual rooms would be designated for each award category. Judges were discouraged from leaving the rooms at any time during the day-long judging. There were usually eight to 12 judges for each category. Dependi
https://en.wikipedia.org/wiki/Epitaxial%20wafer
An epitaxial wafer (also called epi wafer, epi-wafer, or epiwafer) is a wafer of semiconducting material made by epitaxial growth (epitaxy) for use in photonics, microelectronics, spintronics, or photovoltaics. The epi layer may be the same material as the substrate, typically monocrystalline silicon, or it may be silicon-on-insulator (SOI) or a more exotic material with specific desirable qualities. The purpose of epitaxy is to perfect the crystal structure over the bare substrate below and improve the wafer surface's electrical characteristics, making it suitable for highly complex microprocessors and memory devices. History Silicon epi wafers were first developed around 1966 and achieved commercial acceptance by the early 1980s. Methods for growing the epitaxial layer on monocrystalline silicon or other wafers include: various types of chemical vapor deposition (CVD) classified as atmospheric pressure CVD (APCVD) or metal organic chemical vapor deposition (MOCVD), as well as molecular beam epitaxy (MBE). Two "kerfless" methods (without abrasive sawing) for separating the epitaxial layer from the substrate are called "implant-cleave" and "stress liftoff". A method applicable when the epi-layer and substrate are the same material employs ion implantation to deposit a thin layer of crystal impurity atoms and resulting mechanical stress at the precise depth of the intended epi layer thickness. The induced localized stress provides a controlled path for crack propagation in the following cleavage step. In the dry stress lift-off process applicable when the epi-layer and substrate are suitably different materials, a controlled crack is driven by a temperature change at the epi/wafer interface purely by the thermal stresses due to the mismatch in thermal expansion between the epi layer and substrate, without the necessity for any external mechanical force or tool to aid crack propagation. It was reported that this process yields single atomic plane cleavage, reducing the
https://en.wikipedia.org/wiki/Latin%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering
Many letters of the Latin alphabet, both capital and small, are used in mathematics, science, and engineering to denote by convention specific or abstracted constants, variables of a certain type, units, multipliers, or physical entities. Certain letters, when combined with special formatting, take on special meaning. Below is an alphabetical list of the letters of the alphabet with some of their uses. The field in which the convention applies is mathematics unless otherwise noted. Aa A represents: the first point of a triangle the digit "10" in hexadecimal and other positional numeral systems with a radix of 11 or greater the unit ampere for electric current in physics the area of a figure the mass number or nucleon number of an element in chemistry the Helmholtz free energy of a closed thermodynamic system of constant volume and temperature a vector potential, in electromagnetics it can refer to the magnetic vector potential an Abelian group in abstract algebra the Glaisher–Kinkelin constant atomic weight, denoted by Ar work in classical mechanics the pre-exponential factor in the Arrhenius equation electron affinity 𝔸 represents the algebraic numbers or affine space in algebraic geometry. A blood type A spectral type a represents: the first side of a triangle (opposite point A) the scale factor of the expanding universe in cosmology the acceleration in mechanics equations the first constant in a linear equation a constant in a polynomial the unit are for area (100 m²) the unit prefix atto (10⁻¹⁸) the first term in a sequence or series Reflectance Bb B represents: the digit "11" in hexadecimal and other positional numeral systems with a radix of 12 or greater the second point of a triangle a ball (also denoted by ℬ) a basis of a vector space or of a filter (both also denoted by ℬ) in econometrics and time-series statistics it is often used for the backshift or lag operator, the formal parameter of the lag polynomial the magnetic field, denoted
https://en.wikipedia.org/wiki/Two-fluid%20model
The two-fluid model is a macroscopic traffic flow model representing traffic in a town/city or metropolitan area, put forward in the 1970s by Ilya Prigogine and Robert Herman. There is also a two-fluid model which helps explain the behavior of superfluid helium. This model states that there will be two components in liquid helium below its lambda point (the temperature below which the superfluid forms). These components are a normal fluid and a superfluid component. Each component has a different density, and together their sum makes the total density, which remains constant. The ratio of superfluid density to the total density increases as the temperature approaches absolute zero. External links Two Fluid Model of Superfluid Helium
https://en.wikipedia.org/wiki/Color%20model
A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components. When this model is associated with a precise description of how the components are to be interpreted (viewing conditions, etc.), taking account of visual perception, the resulting set of colors is called a "color space". This article describes ways in which human color vision can be modeled, and discusses some of the models in common use. Tristimulus color space One can picture this space as a region in three-dimensional Euclidean space if one identifies the x, y, and z axes with the stimuli for the long-wavelength (L), medium-wavelength (M), and short-wavelength (S) light receptors. The origin, (S,M,L) = (0,0,0), corresponds to black. White has no definite position in this diagram; rather it is defined according to the color temperature or white balance as desired or as available from ambient lighting. The human color space is a horse-shoe-shaped cone (see the CIE chromaticity diagram), extending from the origin to, in principle, infinity. In practice, the human color receptors will be saturated or even be damaged at extremely high light intensities, but such behavior is not part of the CIE color space and neither is the changing color perception at low light levels (see: Kruithof curve). The most saturated colors are located at the outer rim of the region, with brighter colors farther removed from the origin. As far as the responses of the receptors in the eye are concerned, there is no such thing as "brown" or "gray" light. The latter color names refer to orange and white light respectively, with an intensity that is lower than the light from surrounding areas. One can observe this by watching the screen of an overhead projector during a meeting: one sees black lettering on a white background, even though the "black" has in fact not become darker than the white scre
https://en.wikipedia.org/wiki/Mixed%20waste%20%28radioactive/hazardous%29
According to the United States Environmental Protection Agency, mixed waste (MW) is a waste type defined as follows: "MW contains both hazardous waste (as defined by RCRA and its amendments) and radioactive waste (as defined by AEA and its amendments). It is jointly regulated by NRC or NRC's Agreement States and EPA or EPA's RCRA Authorized States. The fundamental and most comprehensive statutory definition is found in the Federal Facilities Compliance Act (FFCA) where Section 1004(41) was added to RCRA: "The term 'mixed waste' means waste that contains both hazardous waste and source, special nuclear, or byproduct material subject to the Atomic Energy Act of 1954." Mixed waste is much more expensive to manage and dispose of than waste that is solely radioactive. Waste generators can avoid higher charge back costs by eliminating or minimizing the volume of mixed waste generated. US EPA hazardous waste definition The EPA defines hazardous waste as the following: A subset of solid wastes that pose substantial or potential threats to public health or the environment and meet any of the following criteria identified in 40 CFR 260 and 261: It is specifically listed as a hazardous waste by EPA; It exhibits one or more of the characteristics of hazardous waste (ignitability, corrosivity, reactivity, and/or toxicity); It is generated by the treatment of hazardous waste; or is contained in a hazardous waste.
https://en.wikipedia.org/wiki/Socket%205
Socket 5 was created for the second generation of Intel P5 Pentium processors operating at speeds from 75 to 133 MHz as well as certain Pentium OverDrive and Pentium MMX processors with core voltage 3.3 V. It superseded the earlier Socket 4. It was released in March 1994. Consisting of 320 pins, this was the first socket to use a staggered pin grid array, or SPGA, which allowed the chip's pins to be spaced closer together than earlier sockets. Socket 5 was replaced by Socket 7 in 1995. External links Differences between Socket 5 and Socket 7 (archived) See also List of Intel microprocessors List of AMD microprocessors
https://en.wikipedia.org/wiki/SMIF%20%28interface%29
SMIF (Standard Mechanical Interface) is an isolation technology developed in the 1980s by a group known as the "micronauts" at Hewlett-Packard in Palo Alto. The system is used in semiconductor wafer fabrication and cleanroom environments. It is a SEMI standard. Development The core development team was led by Ulrich Kaempf as engineering manager, under the direction of Mihir Parikh. The core team that developed the technology was driven by Barclay Tullis, who held most of the patents, with Dave Thrasher, who later joined the Silicon Valley Group, and Thomas Atchison, a member of the technical staff under direction of Barclay Tullis. Parikh later provided the technology to SEMI, then licensed a copy for himself and spun out Asyst Technologies to provide the technology commercially. Asyst Technologies was subsequently acquired by Brooks Automation, which carried the technology forward in its VersaPort products; the interface remains the same after the acquisition. Use The purpose of SMIF pods is to isolate wafers from contamination by providing a miniature environment with controlled airflow, pressure and particle count. SMIF pods can be accessed by automated mechanical interfaces on production equipment. The wafers therefore remain in a carefully controlled environment whether in the SMIF pod or in a tool, without being exposed to the surrounding airflow. Each SMIF pod contains a wafer cassette in which the wafers are stored horizontally. The bottom surface of the pod is the opening door, and when a SMIF pod is placed on a load port, the bottom door and cassette are lowered into the tool so that the wafers can be removed. Both wafers and reticles can be handled by SMIF pods in a semiconductor fabrication environment. Used in lithographic tools, reticles or photomasks contain the image that is exposed on a coated wafer in one processing step of a complete integrated semiconductor manufacturing cycle. Because reticles are linked so directly with wafer processing, they also require steps to protect them from contamination
https://en.wikipedia.org/wiki/Ring%20strain
In organic chemistry, ring strain is a type of instability that exists when bonds in a molecule form angles that are abnormal. Strain is most commonly discussed for small rings such as cyclopropanes and cyclobutanes, whose internal angles are substantially smaller than the idealized value of approximately 109°. Because of their high strain, the heat of combustion for these small rings is elevated. Ring strain results from a combination of angle strain, conformational strain or Pitzer strain (torsional eclipsing interactions), and transannular strain, also known as van der Waals strain or Prelog strain. The simplest examples of angle strain are small cycloalkanes such as cyclopropane and cyclobutane. Ring strain energy can be attributed to the energy required for the distortion of bond lengths and bond angles in order to close a ring. Ring strain energy is believed to be the cause of accelerated rates in ring-altering reactions. Its interactions with traditional bond energies change the enthalpies of compounds, affecting the kinetics and thermodynamics of ring-strain reactions. History Ring strain theory was first developed by German chemist Adolf von Baeyer in 1890. Previously, the only strains believed to exist were torsional and steric; however, Baeyer's theory was built on the interactions between the two strains. Baeyer's theory rested on the assumption that ringed compounds were flat. Later, around the same time, Hermann Sachse put forward his postulate that compound rings were not flat and potentially existed in a "chair" formation. Ernst Mohr later combined the two theories to explain the stability of six-membered rings and their frequency in nature, as well as the energy levels of other ring structures. Angle strain (Baeyer strain) Alkanes In alkanes, optimum overlap of atomic orbitals is achieved at 109.5°. The most common cyclic compounds have five or six carbons in their ring. Adolf von Baeyer received a Nobel Prize in 1905 for the discovery of the B
https://en.wikipedia.org/wiki/Welsh%20Dragon
The Welsh Dragon (y Ddraig Goch, meaning 'the red dragon') is a heraldic symbol that represents Wales and appears on the national flag of Wales. As an emblem, the red dragon of Wales has been used since the reign of Cadwaladr, King of Gwynedd from around AD 655, and is historically known as the "Red Dragon of Cadwaladr". Ancient leaders of the Celtic Britons who are personified as dragons include Maelgwn Gwynedd, Mynyddog Mwynfawr and Urien Rheged. Later Welsh "dragons" include Owain Gwynedd, Llywelyn ap Gruffydd and Owain Glyndŵr. The red dragon appears in the ancient Mabinogion story of Lludd and Llefelys, where it is confined, battling with an invading white dragon, at Dinas Emrys. The story continues in the Historia Brittonum, written around AD 829, where Gwrtheyrn, King of the Britons, is frustrated in his attempts to build a fort at Dinas Emrys. He is told by a boy, Emrys, to dig up two dragons fighting beneath the castle. He discovers the white dragon, representing the Anglo-Saxons, which is soon to be defeated by the red dragon of Wales. The red dragon is now seen as symbolising Wales, and is present on the current national flag of Wales, which became an official flag in 1959.

History
Draco as military standard and title
The military use of the term "dragon" (in Latin, "draco") dates back to the Roman period, and this in turn was likely inspired by the symbols of the Scythians, Indians, Persians, Dacians or Parthians. The term draco can refer to a dragon, serpent or snake, and the term draconarius (also Latin) denotes "the bearer of the serpent standard". Franz Altheim suggests that the first appearance of the draco used by Romans coincides with Roman recruitment of nomad troops from south and central Asia during the time of Marcus Aurelius. One notable draco symbol which may have influenced the Welsh dragon is that of the Sarmatians, who contributed to the cavalry units stationed in Ribchester from the 2nd to 4th centuries. Cohorts were represented by the draco military standard from the third century.
https://en.wikipedia.org/wiki/ShadowCrew
ShadowCrew was a cybercrime forum that operated under the domain name ShadowCrew.com between August 2002 and November 2004.

Origins
The concept of ShadowCrew was developed in early 2002 during a series of chat sessions between Brett Johnson (GOllumfun), Seth Sanders (Kidd), and Kim Marvin Taylor (MacGayver). The ShadowCrew website contained a number of sub-forums with the latest information on hacking tricks, social engineering, credit card fraud, virus development, scams, and phishing.

Organizational structure
ShadowCrew emerged early in 2002 from another underground site, counterfeitlibrary.com, which was run by Brett Johnson; it was later followed by carderplanet.com, a website primarily in the Russian language owned by Dmitry Golubov, a.k.a. Script. The site also facilitated the sale of drugs wholesale. During its early years, the site was hosted in Hong Kong, but shortly before the arrest of CumbaJohnny (Albert Gonzalez), the server was in his possession somewhere in New Jersey.

Aftermath and legacy
ShadowCrew was the forerunner of today's cybercrime forums and marketplaces. The structure, marketplace, review system, and other innovations pioneered at ShadowCrew laid the basis of today's underground forums and marketplaces. Likewise, many of today's scams and computer crimes began with Counterfeitlibrary and ShadowCrew. The site flourished from the time it opened in 2002 until its demise in late October 2004. Even though the site was booming with criminal activity and all seemed well, the members did not know what was going on behind the scenes. Federal agents got their "big break" when they found CumbaJohnny, a.k.a. Albert Gonzalez. Upon his arrest, Gonzalez immediately turned and began working with federal agents. From April 2003 to October 2004, he assisted in gathering information and monitoring the site and those who used it. He started by taking out many of the Russians who were hacking databases and selling counterfeit credit cards.
https://en.wikipedia.org/wiki/Torricelli%27s%20equation
In physics, Torricelli's equation, or Torricelli's formula, is an equation created by Evangelista Torricelli to find the final velocity of an object moving with constant acceleration along an axis (for example, the x axis) without having a known time interval. The equation itself is:

\[ v_x^2 = v_{0x}^2 + 2 a_x\,\Delta x \]

where
\(v_x\) is the object's final velocity along the x axis, on which the acceleration is constant;
\(v_{0x}\) is the object's initial velocity along the x axis;
\(a_x\) is the object's acceleration along the x axis, which is given as a constant;
\(\Delta x\) is the object's change in position along the x axis, also called displacement.

In this and all subsequent equations in this article, the subscript \(x\) (as in \(v_x\)) is implied but not expressed explicitly, for clarity in presenting the equations. This equation is valid along any axis on which the acceleration is constant.

Derivation

Without differentials and integration
Begin with the definition of acceleration:

\[ a = \frac{v - v_0}{\Delta t}, \]

where \(\Delta t\) is the time interval. This is true because the acceleration is constant: the left-hand side is this constant value of the acceleration and the right-hand side is the average acceleration, and since the average of a constant must be equal to the constant value, the two are equal. If the acceleration were not constant, this would not be true.

Now solve for the final velocity:

\[ v = v_0 + a\,\Delta t. \]

Square both sides to get:

\[ v^2 = v_0^2 + 2 a v_0 \Delta t + a^2 \Delta t^2 = v_0^2 + 2a\left(v_0 \Delta t + \tfrac{1}{2} a \Delta t^2\right). \qquad (1) \]

The term \(v_0 \Delta t + \tfrac{1}{2} a \Delta t^2\) also appears in another equation that is valid for motion with constant acceleration, the equation for the final position of such an object,

\[ x = x_0 + v_0 \Delta t + \tfrac{1}{2} a \Delta t^2, \]

and can be isolated:

\[ v_0 \Delta t + \tfrac{1}{2} a \Delta t^2 = x - x_0 = \Delta x. \qquad (2) \]

Substituting (2) into the original equation (1) yields:

\[ v^2 = v_0^2 + 2 a\,\Delta x. \]

Using differentials and integration
Begin with the definition of acceleration as the derivative of the velocity:

\[ a = \frac{dv}{dt}. \]

Now, we multiply both sides by the velocity \(v\):

\[ v\,a = v\,\frac{dv}{dt}. \]

In the left-hand side we can rewrite the velocity as the derivative of the position:

\[ \frac{dx}{dt}\,a = v\,\frac{dv}{dt}. \]

Multiplying both sides by \(dt\) gives:

\[ a\,dx = v\,dv. \]

Rearranging the terms in a more traditional manner:

\[ v\,dv = a\,dx. \]

Integrating both sides from the initial instant, with position \(x_0\) and velocity \(v_0\), to the final instant, with position \(x\) and velocity \(v\), and using the fact that \(a\) is constant:

\[ \int_{v_0}^{v} v'\,dv' = a \int_{x_0}^{x} dx' \quad\Rightarrow\quad \frac{v^2 - v_0^2}{2} = a\,(x - x_0) = a\,\Delta x, \]

which again gives \(v^2 = v_0^2 + 2a\,\Delta x\).
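A quick worked example shows why the time-free form is convenient (the numbers here are made up for illustration): an object starting from rest (\(v_0 = 0\)) and accelerating uniformly at \(a = 2\ \text{m/s}^2\) over a displacement \(\Delta x = 100\ \text{m}\) reaches

\[ v = \sqrt{v_0^2 + 2a\,\Delta x} = \sqrt{2 \times 2 \times 100}\ \text{m/s} = 20\ \text{m/s}, \]

with no need to know how long the acceleration lasted.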
https://en.wikipedia.org/wiki/Hatching
Hatching (French: hachure) is an artistic technique used to create tonal or shading effects by drawing (or painting or scribing) closely spaced parallel lines. When lines are placed at an angle to one another, it is called cross-hatching. Hatching is also sometimes used to encode colours in monochromatic representations of colour images, particularly in heraldry. Hatching is especially important in essentially linear media, such as drawing, and many forms of printmaking, such as engraving, etching and woodcut. In Western art, hatching originated in the Middle Ages, and developed further into cross-hatching, especially in the old master prints of the fifteenth century. Master ES and Martin Schongauer in engraving, and Erhard Reuwich and Michael Wolgemut in woodcut, were pioneers of both techniques, and Albrecht Dürer in particular perfected the technique of crosshatching in both media. Artists use the technique, varying the length, angle, closeness and other qualities of the lines, most commonly in drawing, linear painting and engraving.

Technique
The main concept is that the quantity, thickness and spacing of the lines affect the brightness of the overall image and emphasize forms, creating the illusion of volume. Hatching lines should always follow (i.e. wrap around) the form. Increasing the quantity, thickness and closeness of the lines produces a darker area. An area of shading next to another area whose lines run in a different direction is often used to create contrast. Line work can be used to represent colours, typically by using the same type of hatch to represent particular tones. For example, red might be made up of widely spaced lines, whereas green could be made of two layers of perpendicular dense lines, resulting in a realistic image. Crosshatching is the technique of using lines to shade and create value.

Variations
Representation of materials
In technical drawing, the section lining may indicate the material of a component part of an assembly. Many hatching pat
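The tone-versus-spacing relationship can be made concrete with a small sketch. The Python snippet below is illustrative only; the function name and the simple linear coverage model (darkness as the fraction of the area covered by ink) are assumptions, not a standard drawing formula:

# Choose hatch-line spacing for a target tone, assuming darkness is
# approximately the fraction of the area covered by ink.
def hatch_spacing(darkness: float, line_width: float = 0.5) -> float:
    """Spacing (same units as line_width) between parallel hatch lines
    that approximates the requested darkness, with 0 < darkness <= 1."""
    if not 0 < darkness <= 1:
        raise ValueError("darkness must be in (0, 1]")
    return line_width / darkness

print(hatch_spacing(0.5))  # mid gray with 0.5 mm lines -> 1.0 mm apart
print(hatch_spacing(0.9))  # dark tone -> lines about 0.56 mm apart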
https://en.wikipedia.org/wiki/Deep-level%20trap
Deep-level traps or deep-level defects are a generally undesirable type of electronic defect in semiconductors. They are "deep" in the sense that the energy required to remove an electron or hole from the trap to the valence or conduction band is much larger than the characteristic thermal energy kT, where k is the Boltzmann constant and T is the temperature. Deep traps interfere with more useful types of doping by compensating the dominant charge-carrier type, annihilating either free electrons or electron holes depending on which is more prevalent. They also directly interfere with the operation of transistors, light-emitting diodes and other electronic and optoelectronic devices by offering an intermediate state inside the band gap. Deep-level traps shorten the non-radiative lifetime of charge carriers and, through the Shockley–Read–Hall (SRH) process, facilitate recombination of minority carriers, with adverse effects on semiconductor device performance. Deep-level traps are therefore unwelcome in many optoelectronic devices, since they can lead to poor efficiency and appreciable delay in response. Common chemical elements that produce deep-level defects in silicon include iron, nickel, copper, gold, and silver. In general, transition metals produce this effect, while light metals such as aluminium do not. Surface states and crystallographic defects in the crystal lattice can also act as deep-level traps.
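Why mid-gap states are the most damaging can be read off the simplified SRH net recombination rate; the form below assumes, for illustration, a single trap level and equal electron and hole capture cross sections \(\sigma\):

\[
U = \sigma v_{\text{th}} N_t \,
\frac{pn - n_i^2}{\,n + p + 2 n_i \cosh\!\left(\frac{E_t - E_i}{kT}\right)\,},
\]

where \(v_{\text{th}}\) is the carrier thermal velocity, \(N_t\) the trap density, \(n\) and \(p\) the carrier concentrations, \(n_i\) the intrinsic concentration, \(E_t\) the trap level and \(E_i\) the intrinsic (near-midgap) level. The \(\cosh\) term in the denominator is smallest when \(E_t = E_i\), so the recombination rate peaks for traps near midgap, exactly the "deep" levels discussed above.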
https://en.wikipedia.org/wiki/Spinnbarkeit
Spinnbarkeit (English: spinnability), also known as fibrosity, is a biomedical rheology term which refers to the stringy or stretchy property found to varying degrees in mucus, saliva, albumen and similar viscoelastic fluids. The term is used especially with reference to cervical mucus at the time just prior to or during ovulation. Under the influence of estrogens, cervical mucus becomes abundant, clear, and stretchable, somewhat like egg white. The stretchability of the mucus is described by its spinnbarkeit, from the German word for the ability to be spun. Only such mucus appears to be penetrable by sperm. After ovulation, the character of cervical mucus changes: under the influence of progesterone it becomes thick, scant, and tacky, and sperm typically cannot penetrate it. Saliva does not always exhibit spinnbarkeit, but it can under certain circumstances. The thickness and spinnbarkeit of nasal mucus are factors in whether or not the nose seems blocked. Mucociliary transport depends on the interaction of fibrous mucus with beating cilia.