https://en.wikipedia.org/wiki/Flicker-free
Flicker-free is a term given to video displays, primarily cathode ray tubes, operating at a high refresh rate to reduce or eliminate the perception of screen flicker. For televisions, this involves operating at a 100 Hz or 120 Hz field rate to eliminate flicker, compared to standard televisions that operate at 50 Hz (PAL, SECAM systems) or 60 Hz (NTSC); this is most simply done by displaying each field twice, rather than once. For computer displays, this is usually a refresh rate of 70–90 Hz, sometimes 100 Hz or higher. This should not be confused with motion interpolation, though they may be combined – see implementation, below. Televisions operating at these frequencies are often labelled as being 100 or 120 Hz without using the words flicker-free in the description.

Prevalence

The term is primarily used for CRTs, especially televisions in 50 Hz countries (PAL or SECAM) and computer monitors from the 1990s and early 2000s – the 50 Hz rate of PAL/SECAM video (compared with 60 Hz NTSC video) and the relatively large computer monitors close to the viewer's peripheral vision make flicker most noticeable on these devices. Contrary to popular belief, modern LCD monitors are not flicker-free, since most of them use pulse-width modulation (PWM) for brightness control. As the brightness setting is lowered, the flicker becomes more noticeable, since the period when the backlight is active in each PWM duty cycle shortens. The problem is much more pronounced on modern LED-backlit monitors, because LED backlights react faster to changes in current.

Implementation

The goal is to display images sufficiently frequently to exceed the human flicker fusion threshold, and hence create the impression of a constant (non-flickering) source. In computer displays this consists of changing the frame rate of the produced signal in the video card (and, in sync with this, the displayed image on the display). This is limited by the clock speed of the video adapter and frame rate required
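To put numbers on the PWM point above, here is a small arithmetic sketch; the 240 Hz PWM frequency and the brightness settings are assumed round values for illustration, not figures from the article.

    # Illustrative arithmetic (assumed values): how PWM brightness control
    # shortens the backlight's on-time within each cycle.
    pwm_frequency_hz = 240.0              # assumed PWM rate for a hypothetical monitor
    period_ms = 1000.0 / pwm_frequency_hz

    for brightness in (1.0, 0.5, 0.2):    # duty cycle = brightness fraction
        on_time_ms = brightness * period_ms
        off_time_ms = period_ms - on_time_ms
        print(f"duty {brightness:>4.0%}: light on {on_time_ms:.2f} ms, "
              f"off {off_time_ms:.2f} ms per {period_ms:.2f} ms cycle")

At 20% brightness the backlight is dark for most of each cycle, which is why lowering the brightness setting makes the flicker easier to perceive.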
https://en.wikipedia.org/wiki/Current%E2%80%93voltage%20characteristic
A current–voltage characteristic or I–V curve (current–voltage curve) is a relationship, typically represented as a chart or graph, between the electric current through a circuit, device, or material, and the corresponding voltage, or potential difference, across it.

In electronics

In electronics, the relationship between the direct current (DC) through an electronic device and the DC voltage across its terminals is called a current–voltage characteristic of the device. Electronic engineers use these charts to determine basic parameters of a device and to model its behavior in an electrical circuit. These characteristics are also known as I–V curves, referring to the standard symbols for current and voltage.

In electronic components with more than two terminals, such as vacuum tubes and transistors, the current–voltage relationship at one pair of terminals may depend on the current or voltage on a third terminal. This is usually displayed on a more complex current–voltage graph with multiple curves, each one representing the current–voltage relationship at a different value of current or voltage on the third terminal. For example, the diagram at right shows a family of I–V curves for a MOSFET as a function of drain voltage with overvoltage (VGS − Vth) as a parameter.

The simplest I–V curve is that of a resistor, which according to Ohm's law exhibits a linear relationship between the applied voltage and the resulting electric current; the current is proportional to the voltage, so the I–V curve is a straight line through the origin with positive slope. The reciprocal of the slope is equal to the resistance. The I–V curve of an electrical component can be measured with an instrument called a curve tracer. The transconductance and Early voltage of a transistor are examples of parameters traditionally measured from the device's I–V curve.

Types of I–V curves

The shape of an electrical component's characteristic curve reveals much about its operating properti
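As a worked illustration of the resistor case, the following sketch samples the linear characteristic and recovers the resistance as the reciprocal of the fitted slope; the 220-ohm value and the voltage sweep are arbitrary assumptions.

    import numpy as np

    # Sketch: the linear I-V characteristic of an ideal resistor (Ohm's law).
    # The slope of the I-V line is 1/R, so the reciprocal of the slope is R.
    R = 220.0                                # assumed resistance in ohms
    voltages = np.linspace(-5.0, 5.0, 11)    # applied voltage sweep in volts
    currents = voltages / R                  # I = V / R

    slope = np.polyfit(voltages, currents, 1)[0]
    print(f"fitted slope = {slope:.6f} S, reciprocal = {1/slope:.1f} ohms")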
https://en.wikipedia.org/wiki/Inductive%20set
Bourbaki also defines an inductive set to be a partially ordered set that satisfies the hypothesis of Zorn's lemma when nonempty. In descriptive set theory, an inductive set of real numbers (or more generally, an inductive subset of a Polish space) is one that can be defined as the least fixed point of a monotone operation definable by a positive Σ^1_n formula, for some natural number n, together with a real parameter. The inductive sets form a boldface pointclass; that is, they are closed under continuous preimages. In the Wadge hierarchy, they lie above the projective sets and below the sets in L(R). Assuming sufficient determinacy, the class of inductive sets has the scale property and thus the prewellordering property.

The term has a number of different meanings. According to Russell's definition, an inductive set is a nonempty partially ordered set in which every element has a successor. An example is the set of natural numbers N, where 0 is the first element and the others are produced by adding 1 successively. Roitman considers the same construction in a more abstract form: the elements are sets, 0 is replaced by the empty set, and the successor of every element y is the set y ∪ {y}. In particular, every inductive set contains a sequence of the form ∅, {∅}, {∅, {∅}}, …. For many other authors (e.g., Bourbaki), an inductive set is a partially ordered set in which every totally ordered subset has an upper bound, i.e., it is a set fulfilling the assumption of Zorn's lemma.
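A minimal sketch of the successor construction Roitman describes, using Python frozensets as a stand-in for pure sets (the frozenset encoding is an implementation convenience, not part of the definition):

    # Successor in the abstract construction: successor(y) = y ∪ {y}.
    def successor(y: frozenset) -> frozenset:
        return y | frozenset({y})

    x = frozenset()        # 0 is the empty set
    for n in range(4):
        print(n, "=", x)   # 0 = ∅, 1 = {∅}, 2 = {∅, {∅}}, ...
        x = successor(x)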
https://en.wikipedia.org/wiki/Shizuo%20Kakutani
Shizuo Kakutani was a Japanese-American mathematician, best known for his eponymous fixed-point theorem.

Biography

Kakutani attended Tohoku University in Sendai, where his advisor was Tatsujirō Shimizu. At one point he spent two years at the Institute for Advanced Study in Princeton at the invitation of the mathematician Hermann Weyl. While there, he also met John von Neumann. Kakutani received his Ph.D. in 1941 from Osaka University and taught there through World War II. He returned to the Institute for Advanced Study in 1948, and was given a professorship by Yale in 1949, where he won a students' choice award for excellence in teaching. Kakutani received two awards of the Japan Academy, the Imperial Prize and the Academy Prize in 1982, for his scholarly achievements in general and his work on functional analysis in particular. He was a Plenary Speaker of the ICM in 1950 in Cambridge, Massachusetts. Kakutani was married to Keiko ("Kay") Uchida, a sister of author Yoshiko Uchida. His daughter, Michiko Kakutani, is a Pulitzer Prize-winning former literary critic for The New York Times.

Work

The Kakutani fixed-point theorem is a generalization of Brouwer's fixed-point theorem, holding for generalized correspondences instead of functions. Its most important uses are in proving the existence of Nash equilibria in game theory, and the Arrow–Debreu–McKenzie model of general equilibrium theory in microeconomics. Kakutani's other mathematical contributions include the Markov–Kakutani fixed-point theorem, another fixed-point theorem; the Kakutani skyscraper, a concept in ergodic theory (a branch of mathematics that studies dynamical systems with an invariant measure and related problems); and his solution of the Poisson equation using the methods of stochastic analysis. The Collatz conjecture is also known as the Kakutani conjecture.

Selected articles

"A generalization of Brouwer's fixed point theorem." Duke Mathematical Journal (1941): 457–459.
"Concrete representation of abs
https://en.wikipedia.org/wiki/Slovak%20koruna
The Slovak koruna or Slovak crown (Slovak: slovenská koruna, literally meaning Slovak crown) was the currency of Slovakia between 8 February 1993 and 31 December 2008, and could be used for cash payment until 16 January 2009. The ISO 4217 code was SKK and the local abbreviation was Sk. The koruna was subdivided into 100 haliers (abbreviated as "hal." or simply "h"; singular: halier). The abbreviation is placed after the numeric value. Slovakia switched its currency from the koruna to the euro on 1 January 2009, at a rate of 30.1260 korunas per euro.

In Slovak, the nouns koruna and halier both have two plural forms: "koruny" and "haliere" appear after the numbers 2, 3 and 4 and in generic (uncountable) contexts, with "korún" and "halierov" being used after other numbers. The latter forms are genitive.

Modern koruna

In 1993, the newly independent Slovakia introduced its own koruna, replacing the Czechoslovak koruna at par.

Coins

In 1993, coins were introduced in denominations of 10, 20 and 50 haliers and 1, 2, 5 and 10 korunas. The 10 and 20 halier coins were taken out of circulation on 31 December 2003. In 1996 the 50 halier coin was made smaller, and instead of aluminium it was made with copper-plated steel. The obverse of the coins features the coat of arms of Slovakia, with motifs from Slovak history on the reverses.

- 10 halierov (silver-coloured) – Octagonal wooden belfry from Zemplín (early 19th century) = €0.0033
- 20 halierov (silver-coloured) – the Kriváň peak in the High Tatras = €0.0066
- 50 halierov (copper-coloured) – Renaissance polygonal tower of Devín Castle = €0.0166
- 1 koruna (copper-coloured) – Gothic wooden sculpture of the Madonna with child (c. 1500) = €0.0332
- 2 koruny (silver-coloured) – Earthen sculpture of the sitting Venus of Hradok (4th millennium BC) = €0.0664
- 5 korún (silver-coloured) – Reverse of a Celtic coin of Biatec (1st century BC) = €0.166
- 10 korún (copper-coloured) – Bronze cross (11th century AD) = €0.332

Coins were exchangeable for euros at the National Bank
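As a quick arithmetic check of the euro values quoted for the coins, a small conversion sketch using the official rate of 30.1260 korunas per euro given above (the helper name and the list of denominations are just for illustration):

    # Conversion at the official changeover rate quoted above.
    SKK_PER_EUR = 30.1260

    def skk_to_eur(korunas: float) -> float:
        return korunas / SKK_PER_EUR

    for coin_skk in (0.10, 0.20, 0.50, 1, 2, 5, 10):  # coin values in Sk
        print(f"{coin_skk:>5} Sk = {skk_to_eur(coin_skk):.4f} EUR")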
https://en.wikipedia.org/wiki/Black%20level
Video black level is defined as the level of brightness at the darkest (black) part of a visual image, or the level of brightness at which no light is emitted from a screen, resulting in a pure black screen. Video displays generally need to be calibrated so that the displayed black is true to the black information in the video signal. If the black level is not correctly adjusted, visual information in a video signal could be displayed as black, or black information could be displayed as above-black information (gray).

The voltage of the black level varies across different television standards. PAL sets the black level the same as the blanking level, while NTSC sets the black level approximately 54 mV above the blanking level.

User misadjustment of black level on monitors is common. It results in darker colors having their hue changed, it affects contrast, and in many cases causes some of the image detail to be lost. Black level is set by displaying a test card image and adjusting display controls. With CRT displays:
- "brightness" adjusts black level
- "contrast" adjusts white level
CRTs tend to have some interdependence of controls, so a control sometimes needs adjustment more than once.

In digital video, black level usually means the range of RGB values in the video signal, which can be either [0..255] (or "normal"; typical of a computer output) or [16..235] (or "low"; standard for video).

See also
- Picture line-up generation equipment (PLUGE)
- Display technology
- Television technology
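A small sketch of the mapping between the two RGB ranges just mentioned; the function names are ad hoc, and the factor 219/255 is simply what compresses the full 0–255 swing into 16–235:

    # Mapping between full-range ("normal", 0-255) and limited-range
    # ("low", 16-235) RGB code values.
    def full_to_limited(v: int) -> int:
        return round(16 + v * 219 / 255)

    def limited_to_full(v: int) -> int:
        return round((v - 16) * 255 / 219)

    print(full_to_limited(0), full_to_limited(255))    # 16 235
    print(limited_to_full(16), limited_to_full(235))   # 0 255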
https://en.wikipedia.org/wiki/Chaos%20game
In mathematics, the term chaos game originally referred to a method of creating a fractal, using a polygon and an initial point selected at random inside it. The fractal is created by iteratively creating a sequence of points, starting with the initial random point, in which each point in the sequence is a given fraction of the distance between the previous point and one of the vertices of the polygon; the vertex is chosen at random in each iteration. Repeating this iterative process a large number of times, selecting the vertex at random on each iteration, and throwing out the first few points in the sequence, will often (but not always) produce a fractal shape. Using a regular triangle and the factor 1/2 will result in the Sierpinski triangle, while creating the proper arrangement with four points and a factor 1/2 will create a display of a "Sierpinski Tetrahedron", the three-dimensional analogue of the Sierpinski triangle. As the number of points is increased to a number N, the arrangement forms a corresponding (N−1)-dimensional Sierpinski simplex.

The term has been generalized to refer to a method of generating the attractor, or the fixed point, of any iterated function system (IFS). Starting with any point x_0, successive iterations are formed as x_{k+1} = f_r(x_k), where f_r is a member of the given IFS randomly selected for each iteration. The iterations converge to the fixed point of the IFS. Whenever x_0 belongs to the attractor of the IFS, all iterations x_k stay inside the attractor and, with probability 1, form a dense set in the latter.

The "chaos game" method plots points in random order all over the attractor. This is in contrast to other methods of drawing fractals, which test each pixel on the screen to see whether it belongs to the fractal. The general shape of a fractal can be plotted quickly with the "chaos game" method, but it may be difficult to plot some areas of the fractal in detail. With the aid of the "chaos game" a new fractal can be made and w
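The iterative procedure described above takes only a few lines of code. A minimal sketch (the vertex coordinates, grid size and iteration count are arbitrary choices, and a coarse ASCII grid stands in for a real plot):

    import random

    # Chaos game for the Sierpinski triangle: repeatedly move half way
    # toward a randomly chosen vertex, discarding the first few points.
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    x, y = random.random(), random.random()   # random initial point

    W, H = 64, 32
    grid = [[" "] * W for _ in range(H)]
    for i in range(20000):
        vx, vy = random.choice(vertices)      # pick a vertex at random
        x, y = (x + vx) / 2, (y + vy) / 2     # fraction 1/2 of the distance
        if i > 20:                            # throw out the first few points
            grid[int((1 - y) * (H - 1))][int(x * (W - 1))] = "*"

    print("\n".join("".join(row) for row in grid))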
https://en.wikipedia.org/wiki/48-bit%20computing
In computer architecture, 48-bit integers can represent 281,474,976,710,656 (2^48, or 2.814749767×10^14) discrete values. This allows an unsigned binary integer range of 0 through 281,474,976,710,655 (2^48 − 1) or a signed two's complement range of −140,737,488,355,328 (−2^47) through 140,737,488,355,327 (2^47 − 1). A 48-bit memory address can directly address every byte of 256 terabytes of storage. 48-bit can refer to any other data unit that consumes 48 bits (6 octets) in width. Examples include 48-bit CPU and ALU architectures that are based on registers, address buses, or data buses of that size.

Word size

Computers with 48-bit words include the AN/FSQ-32, CDC 1604/upper-3000 series, BESM-6, Ferranti Atlas, Philco TRANSAC S-2000 and Burroughs large systems. The Honeywell DATAmatic 1000, H-800, the MANIAC II, the MANIAC III, the Brookhaven National Laboratory Merlin, the Philco CXPQ, the Ferranti Orion, the Telefunken Rechner TR 440, the ICT 1301, and many other early transistor-based and vacuum tube computers used 48-bit words.

Addressing

The IBM System/38, and the IBM AS/400 in its CISC variants, use 48-bit addresses. The address size used in logical block addressing was increased to 48 bits with the introduction of ATA-6. The Ext4 file system physically limits the file block count to 48 bits. The minimal implementation of the x86-64 architecture provides 48-bit addressing encoded into 64 bits; future versions of the architecture can expand this without breaking properly written applications. The media access control address (MAC address) of a network interface controller uses a 48-bit address space.

Images

In digital images, 48 bits per pixel, or 16 bits per each color channel (red, green and blue), is used for accurate processing. For the human eye, it is almost impossible to see any difference between such an image and a 24-bit image, but the existence of more shades of each of the three primary colors (65,536 as opposed to 256) means that more operations
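The quoted ranges are easy to verify directly; a quick sketch:

    # Verify the 48-bit ranges quoted above.
    bits = 48
    unsigned_max = 2**bits - 1
    signed_min, signed_max = -(2**(bits - 1)), 2**(bits - 1) - 1

    print(f"{unsigned_max:,}")                  # 281,474,976,710,655
    print(f"{signed_min:,} .. {signed_max:,}")  # -140,737,488,355,328 .. 140,737,488,355,327
    print(f"{2**bits:,} bytes = {2**bits / 2**40:.0f} TiB of addressable storage")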
https://en.wikipedia.org/wiki/Czechoslovak%20koruna
The Czechoslovak koruna (in Czech and Slovak: koruna československá, at times koruna česko-slovenská; koruna means crown) was the currency of Czechoslovakia from 10 April 1919 to 14 March 1939, and from 1 November 1945 to 7 February 1993. For a brief time in 1939 and again in 1993, it was also the currency of both the separate Czech Republic and Slovakia. On 8 February 1993, it was replaced by the Czech koruna and the Slovak koruna, both at par. The (last) ISO 4217 code and the local abbreviations for the koruna were CSK and Kčs. One koruna equalled 100 haléřů (Czech, singular: haléř) or halierov (Slovak, singular: halier). In both languages, the abbreviation h was used. The abbreviation was placed behind the numeric value.

First koruna

A currency called the krone in German and koruna in Czech was introduced in Austria-Hungary on 11 September 1892, as the first modern gold-based currency in the area. After the creation of an independent Czechoslovakia in 1918, an urgent need emerged for the establishment of a new currency system that would distinguish itself from the currencies of the other newly born countries suffering from inflation. The next year, on 10 April 1919, a currency reform took place, defining the new koruna as equal in value to the Austro-Hungarian krone. The first banknotes came into circulation the same year, the coins three years later, in 1922. This first koruna circulated until 1939, when separate currencies for Bohemia and Moravia and Slovakia were introduced, at par with the Czechoslovak koruna. These were the Bohemian and Moravian koruna and the Slovak koruna.

Second koruna

The Czechoslovak koruna was re-established in 1945, replacing the two previous currencies at par. As a consequence of the war, the currency had lost much of its value.

Third koruna

The koruna went through a number of further reforms. A particularly drastic one was undertaken in 1953. At that time, the Communist Party of Czechoslo
https://en.wikipedia.org/wiki/Integrable%20system
In mathematics, integrability is a property of certain dynamical systems. While there are several distinct formal definitions, informally speaking, an integrable system is a dynamical system with sufficiently many conserved quantities, or first integrals, that its motion is confined to a submanifold of much smaller dimensionality than that of its phase space. Three features are often referred to as characterizing integrable systems:
- the existence of a maximal set of conserved quantities (the usual defining property of complete integrability)
- the existence of algebraic invariants, having a basis in algebraic geometry (a property known sometimes as algebraic integrability)
- the explicit determination of solutions in an explicit functional form (not an intrinsic property, but something often referred to as solvability)

Integrable systems may be seen as very different in qualitative character from more generic dynamical systems, which are more typically chaotic systems. The latter generally have no conserved quantities, and are asymptotically intractable, since an arbitrarily small perturbation in initial conditions may lead to arbitrarily large deviations in their trajectories over a sufficiently large time.

Many systems studied in physics are completely integrable, in particular, in the Hamiltonian sense, the key example being multi-dimensional harmonic oscillators. Another standard example is planetary motion about either one fixed center (e.g., the sun) or two. Other elementary examples include the motion of a rigid body about its center of mass (the Euler top) and the motion of an axially symmetric rigid body about a point in its axis of symmetry (the Lagrange top).

In the late 1960s, it was realized that there are completely integrable systems in physics having an infinite number of degrees of freedom, such as some models of shallow water waves (Korteweg–de Vries equation), the Kerr effect in optical fibres, described by the nonlinear Schrödinger equati
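To make the harmonic-oscillator example concrete, here is the standard worked statement (added for illustration, not taken from the excerpt):

    % The n-dimensional harmonic oscillator is completely integrable:
    % each mode's energy is separately conserved.
    \[
      H = \sum_{i=1}^{n} \tfrac{1}{2}\left(p_i^2 + \omega_i^2 q_i^2\right),
      \qquad
      F_i = \tfrac{1}{2}\left(p_i^2 + \omega_i^2 q_i^2\right),
      \qquad
      \{F_i, F_j\} = \{H, F_i\} = 0 .
    \]

The n mode energies F_i are independent, Poisson-commuting first integrals, so the motion is confined to n-dimensional tori; this is exactly the "sufficiently many conserved quantities" picture described above.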
https://en.wikipedia.org/wiki/Polar%20mount
A polar mount is a movable mount for satellite dishes that allows the dish to be pointed at many geostationary satellites by slewing around one axis. It works by having its slewing axis parallel, or almost parallel, to the Earth's polar axis so that the attached dish can follow, approximately, the geostationary orbit, which lies in the plane of the Earth's equator.

Description

Polar mounts are popular with home television receive-only (TVRO) satellite systems, where they can be used to access the TV signals from many different geostationary satellites. They are also used in other types of installations such as TV, cable, and telecommunication Earth stations, although those applications usually use more sophisticated altazimuth or fixed-angle dedicated mounts. Polar mounts can use a simplified one-axis design because geostationary satellites are fixed in the sky relative to the observing dish, and their equatorial orbits put them all in a common line that can be accessed by swinging the satellite dish along a single arc approximately 90 degrees from the mount's polar axis. This also allows them to use a single positioner to move the antenna, in the form of a "jackscrew" or horizon-to-horizon gear drive.

Polar mounts work in a similar way to astronomical equatorial mounts in that they point at objects at fixed hour angles that follow the astronomical right ascension axis. Like equatorial mounts, polar mounts require polar alignment. They differ from equatorial mounts in that the objects (satellites) they point at are fixed in position and usually require no tracking, just accurate fixed aiming.

Adjustments

When observed from the equator, geostationary satellites follow exactly the imaginary line of the Earth's equatorial plane on the celestial sphere (i.e. they follow the celestial equator). But when observed from other latitudes, the fact that geostationary satellites are at a fixed altitude of 35,786 km (22,236 mi) above the Earth's equator, and vary in distance from
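The parallax being described can be quantified with elementary two-body geometry. A sketch under round-number assumptions for the Earth and orbit radii (the formula is the standard declination-offset derivation, not something given in the article):

    import math

    # Seen from latitude phi, a geostationary satellite on the observer's
    # meridian sits below the celestial equator by a small "declination
    # offset" due to parallax. Radii below are assumed round numbers.
    R_EARTH_KM = 6378.0   # equatorial radius
    R_GEO_KM = 42164.0    # geostationary orbit radius (35,786 km altitude + Earth radius)

    def declination_offset_deg(latitude_deg: float) -> float:
        phi = math.radians(latitude_deg)
        return math.degrees(math.atan2(R_EARTH_KM * math.sin(phi),
                                       R_GEO_KM - R_EARTH_KM * math.cos(phi)))

    for lat in (0, 30, 50):
        print(f"latitude {lat:>2} deg: offset {declination_offset_deg(lat):.2f} deg")

At the equator the offset is zero, matching the statement above that equatorial observers see the satellites exactly on the celestial equator.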
https://en.wikipedia.org/wiki/Spondyloperipheral%20dysplasia
Spondyloperipheral dysplasia is an autosomal dominant disorder of bone growth. The condition is characterized by flattened bones of the spine (platyspondyly) and unusually short fingers and toes (brachydactyly). Some affected individuals also have other skeletal abnormalities, short stature, nearsightedness (myopia), hearing loss, and mental retardation. Spondyloperipheral dysplasia is a subtype of collagenopathy, types II and XI.

Genetics

Spondyloperipheral dysplasia is one of a spectrum of skeletal disorders caused by mutations in the COL2A1 gene, located on chromosome 12q13.11-q13.2. The protein made by this gene forms type II collagen, a molecule found mostly in cartilage and in the clear gel that fills the eyeball (the vitreous humour). Type II collagen is essential for the normal development of bones and other connective tissues (the tissues that form the body's supportive framework). Mutations in the COL2A1 gene interfere with the assembly of type II collagen molecules. The protein made by the altered COL2A1 gene cannot be used to make type II collagen, resulting in a reduced amount of this type of collagen in the body. Instead of forming collagen molecules, the abnormal protein builds up in cartilage cells (chondrocytes). These changes disrupt the normal development of bones, leading to the signs and symptoms of spondyloperipheral dysplasia.

The disorder is believed to be inherited in an autosomal dominant manner. This indicates that the defective gene responsible for the disorder is located on an autosome (chromosome 12 is an autosome), and only one copy of the defective gene is sufficient to cause the disorder when inherited from a parent who has the disorder.
https://en.wikipedia.org/wiki/Achondrogenesis%20type%202
Achondrogenesis, type 2 results in short arms and legs, a small chest with short ribs, and underdeveloped lungs at birth. Achondrogenesis, type 2 is a subtype of collagenopathy, types II and XI. This condition is also associated with a lack of bone formation (ossification) in the spine and pelvis. Typical facial features include a prominent forehead, a small chin, and, in some cases, an opening in the roof of the mouth (a cleft palate). The abdomen is enlarged, and affected infants often have a condition called hydrops fetalis in which excess fluid builds up in the body before birth. The skull bones may be soft, but they often appear normal on X-ray images. In contrast, bones in the spine (vertebrae) and pelvis do not harden. Achondrogenesis, type 2 and hypochondrogenesis (a similar skeletal disorder) together affect 1 in 40,000 to 60,000 births.

Achondrogenesis, type 2 is one of several skeletal disorders caused by mutations in the COL2A1 gene. This gene provides instructions for making a protein that forms type II collagen. This type of collagen is found mostly in cartilage and in the clear gel that fills the eyeball (the vitreous). It is essential for the normal development of bones and other tissues that form the body's supportive framework (connective tissues). Mutations in the COL2A1 gene interfere with the assembly of type II collagen molecules, which prevents bones and other connective tissues from developing properly.

Achondrogenesis, type 2 is considered an autosomal dominant disorder because one copy of the altered gene in each cell is sufficient to cause the condition. The disorder is not passed on to the next generation, however, because affected individuals rarely survive past puberty.
https://en.wikipedia.org/wiki/Kniest%20dysplasia
Kniest dysplasia is a rare form of dwarfism caused by a mutation in the COL2A1 gene on chromosome 12. The COL2A1 gene is responsible for producing type II collagen. The mutation of the COL2A1 gene leads to abnormal skeletal growth and problems with hearing and vision. What distinguishes Kniest dysplasia from other type II osteochondrodysplasias is the level of severity and the dumb-bell shape of the shortened long tubular bones.

This condition was first described by Dr. Wilhelm Kniest in 1952, who published the case history of a 3½-year-old girl. Dr. Kniest noticed that his patient had bone deformities and restricted joint mobility. The patient also had short stature and later developed blindness resulting from retinal detachment and glaucoma. Upon analysis of the patient's DNA in 1992, sequencing revealed deletion of a 28 base pair sequence encompassing a splice site in exon 12 and a G to A transition in exon 50 of the COL2A1 gene. This condition is very rare, occurring in fewer than 1 in 1,000,000 people; males and females have equal chances of having it. Currently, there is no cure for Kniest dysplasia. Alternative names for Kniest dysplasia include Kniest syndrome, Swiss cheese cartilage syndrome, Kniest chondrodystrophy, or metatrophic dwarfism type II.

Signs and symptoms

Because collagen plays an important role in the development of the body, people with Kniest dysplasia will typically have their first symptoms at birth. These symptoms can include:

Musculoskeletal problems
- Short limbs
- Shortened body trunk
- Flattened bones in the spine (platyspondyly)
- Kyphoscoliosis
- Scoliosis (lateral curvature of the spine)
- Early development of arthritis

Respiratory problems
- Respiratory tract infection
- Difficulty breathing

Eye problems
- Severe myopia (near-sightedness)
- Cataract (cloudiness in the lens of the eye)
- Cranial structure may elongate the eyeball, causing a thinning of the retina, thereby predisposing retinal detachment

Hearing problems progre
https://en.wikipedia.org/wiki/Spondyloepimetaphyseal%20dysplasia%2C%20Strudwick%20type
Spondyloepimetaphyseal dysplasia, Strudwick type is an inherited disorder of bone growth that results in dwarfism, characteristic skeletal abnormalities, and problems with vision. The name of the condition indicates that it affects the bones of the spine (spondylo-) and two regions near the ends of bones (epiphyses and metaphyses). This type was named after the first reported patient with the disorder. Spondyloepimetaphyseal dysplasia, Strudwick type is a subtype of collagenopathy, types II and XI. The signs and symptoms of this condition at birth are very similar to those of spondyloepiphyseal dysplasia congenita, a related skeletal disorder. Beginning in childhood, the two conditions can be distinguished in X-ray images by changes in areas near the ends of bones (metaphyses). These changes are characteristic of spondyloepimetaphyseal dysplasia, Strudwick type. Presentation People with this condition are short-statured from birth, with a very short trunk and shortened limbs. Their hands and feet, however, are usually average-sized. Curvature of the spine (scoliosis and lumbar lordosis) may be severe and can cause problems with breathing. Changes in the spinal bones (vertebrae) in the neck may also increase the risk of spinal cord damage. Other skeletal signs include flattened vertebrae (platyspondyly), severe protrusion of the breastbone (pectus carinatum), a hip joint deformity in which the upper leg bones turn inward (coxa vara), and a foot deformity known as clubfoot. Affected individuals have mild and variable changes in their facial features. The cheekbones close to the nose may appear flattened. Some infants are born with an opening in the roof of the mouth, which is called a cleft palate. Severe nearsightedness (high myopia) and detachment of the retina (the part of the eye that detects light and color) are also common. Cause This condition is one of a spectrum of skeletal disorders caused by mutations in the COL2A1 gene. The protein made by this gene fo
https://en.wikipedia.org/wiki/Radiative%20process
In particle physics, a radiative process refers to one elementary particle emitting another and continuing to exist. This typically happens when a fermion emits a boson such as a gluon or photon.

See also
- Bremsstrahlung
- Radiation
- Particle physics
https://en.wikipedia.org/wiki/Stiff-person%20syndrome
Stiff-person syndrome (SPS), also known as stiff-man syndrome, is a rare neurologic disorder of unclear cause characterized by progressive muscular rigidity and stiffness. The stiffness primarily affects the truncal muscles and is superimposed by spasms, resulting in postural deformities. Chronic pain, impaired mobility, and lumbar hyperlordosis are common symptoms. SPS occurs in about one in a million people and is most commonly found in middle-aged people. A small minority of patients have the paraneoplastic variety of the condition. Variants of the condition, such as stiff-limb syndrome, which primarily affects a specific limb, are often seen.

SPS was first described in 1956. Diagnostic criteria were proposed in the 1960s and refined two decades later. In the 1990s and 2000s, the roles of antibodies in the condition became clearer. SPS patients generally have glutamic acid decarboxylase (GAD) antibodies, which seldom occur in the general population. In addition to blood tests for GAD, electromyography tests can help confirm the condition's presence. Benzodiazepine-class drugs are the most common treatment; they are used for symptom relief from stiffness. Other common treatments include baclofen, intravenous immunoglobulin, and rituximab. There is limited but encouraging therapeutic experience with haematopoietic stem cell transplantation for SPS.

Signs and symptoms

Stiff-person syndrome (SPS) is often separated into several subtypes, based on the cause and progression of the disease. There are three clinical classifications of SPS:
- Classic SPS, associated with other autoimmune conditions and usually GAD-positive
- Partial SPS variants
- Progressive encephalomyelitis with rigidity and myoclonus (PERM)

Around 70% of those with SPS have the "classic" form of the disease. People with classic SPS typically first experience intermittent tightness or aching in the muscles of the trunk. These muscles repeatedly and involuntarily contract, causing them to grow and rigidify.
https://en.wikipedia.org/wiki/Acoustic%20emission
Acoustic emission (AE) is the phenomenon of radiation of acoustic (elastic) waves in solids that occurs when a material undergoes irreversible changes in its internal structure, for example as a result of crack formation or plastic deformation due to aging, temperature gradients, or external mechanical forces. In particular, AE occurs during the processes of mechanical loading of materials and structures accompanied by structural changes that generate local sources of elastic waves. This results in small surface displacements of a material produced by elastic or stress waves generated when the accumulated elastic energy in a material or on its surface is released rapidly.

The mechanism of emission of the primary elastic pulse of AE (an AE act or event) may have various physical origins. The figure shows the mechanism of the AE act (event) during the nucleation of a microcrack due to the breakthrough of a dislocation pile-up (a dislocation is a linear defect in the crystal lattice of a material) across the boundary in metals with a body-centered cubic (bcc) lattice under mechanical loading, as well as time diagrams of the stream of AE acts (events) (1) and the stream of recorded AE signals (2).

The AE method makes it possible to study the kinetics of processes at the earliest stages of microdeformation, dislocation nucleation and accumulation of microcracks. Roughly speaking, each crack seems to "scream" about its growth. This makes it possible to diagnose the moment of crack origin itself by the accompanying AE. In addition, for each crack that has already arisen, there is a certain critical size, depending on the properties of the material. Up to this size, the crack grows very slowly (sometimes for decades) through a huge number of small discrete jumps accompanied by AE radiation. After the crack reaches a critical size, catastrophic destruction occurs, because the crack then grows at a speed close to half the speed of sound in the material of the struct
https://en.wikipedia.org/wiki/Kahn%20process%20networks
A Kahn process network (KPN, or process network) is a distributed model of computation in which a group of deterministic sequential processes communicate through unbounded first in, first out channels. The model requires that reading from a channel is blocking while writing is non-blocking. Due to these key restrictions, the resulting process network exhibits deterministic behavior that does not depend on the timing of computation nor on communication delays. Kahn process networks were originally developed for modeling parallel programs, but have proven convenient for modeling embedded systems, high-performance computing systems, signal processing systems, stream processing systems, dataflow programming languages, and other computational tasks. KPNs were introduced by Gilles Kahn in 1974.

Execution model

KPN is a common model for describing signal processing systems where infinite streams of data are incrementally transformed by processes executing in sequence or in parallel. Although processes may execute in parallel, multitasking or parallelism is not required for executing this model. In a KPN, processes communicate via unbounded FIFO channels. Processes read and write atomic data elements, alternatively called tokens, from and to channels. Writing to a channel is non-blocking, i.e. it always succeeds and does not stall the process, while reading from a channel is blocking, i.e. a process that reads from an empty channel will stall and can only continue when the channel contains sufficient data items (tokens). Processes are not allowed to test an input channel for existence of tokens without consuming them. A FIFO cannot be consumed by multiple processes, nor can multiple processes write to a single FIFO. Given a specific input (token) history for a process, the process must be deterministic, so that it always produces the same outputs (tokens). Timing or execution order of processes must not affect the result, and therefore testing input channels for tokens is forbidden.
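A minimal sketch of these rules using Python threads and queues: reads block, writes to the unbounded FIFOs never block, and each process is a deterministic loop over its input tokens. The None end-of-stream marker is a convention of this sketch, not part of the KPN model.

    import threading, queue

    def producer(out_ch):
        for token in range(5):
            out_ch.put(token)        # non-blocking write (unbounded FIFO)
        out_ch.put(None)             # end-of-stream marker (sketch convention)

    def doubler(in_ch, out_ch):
        while (token := in_ch.get()) is not None:   # blocking read
            out_ch.put(token * 2)
        out_ch.put(None)

    def consumer(in_ch):
        while (token := in_ch.get()) is not None:
            print("received", token)

    a, b = queue.Queue(), queue.Queue()  # unbounded FIFO channels
    threads = [threading.Thread(target=f, args=ch) for f, ch in
               [(producer, (a,)), (doubler, (a, b)), (consumer, (b,))]]
    for t in threads: t.start()
    for t in threads: t.join()

Because each process only performs blocking reads and never tests a channel for emptiness, the printed output is the same regardless of how the threads are scheduled, which is the determinism property described above.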
https://en.wikipedia.org/wiki/Ramtil%20oil
Ramtil oil, also known as Niger seed oil, is used mainly in cooking but also for lighting. In India it is pressed from the seed of Guizotia oleifera of the family Asteraceae. A very similar oil is made in Africa from G. abyssinica. The oil is used as an extender for sesame oil, which it resembles, as well as for making soap, in addition to its role as an illuminant.

Countries

The plant was originally cultivated in the Ethiopian highlands but is also cultivated in Mexico, Germany, Brazil, Nepal and India, as well as other parts of Southeast Asia. The seeds are sold and grown in the United States as a niche crop.

Composition

The oil is rich in linoleic acid (75–80%) and other essential nutrients, with a fatty acid composition comparable to that of safflower and sunflower oils. The oil also contains palmitic and stearic acids (7–8%) and oleic acid (5–8%). Indian Niger oil is reportedly higher in oleic acid (25%) and lower in linoleic acid (55%).

Ethiopian revival

Niger seed oil production went through an extended period of decline due to the import of cheap palm oils, but with an apparent waning appetite for these and a government ban on oil imports, there has been a marked revival, and several manufacturers now produce the oil in one-, two-, three- and five-litre quantities.
https://en.wikipedia.org/wiki/SPARQL
SPARQL (pronounced "sparkle", a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language—that is, a semantic query language for databases—able to retrieve and manipulate data stored in Resource Description Framework (RDF) format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 was acknowledged by W3C as an official recommendation, and SPARQL 1.1 followed in March 2013.

SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns. Implementations for multiple programming languages exist. There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer. In addition, tools exist to translate SPARQL queries to other query languages, for example to SQL and to XQuery.

Advantages

SPARQL allows users to write queries against what can loosely be called "key-value" data or, more specifically, data that follow the RDF specification of the W3C. Thus, the entire database is a set of "subject-predicate-object" triples. This is analogous to some NoSQL databases' usage of the term "document-key-value", such as MongoDB.

In SQL relational database terms, RDF data can also be considered a table with three columns – the subject column, the predicate column, and the object column. The subject in RDF is analogous to an entity in a SQL database, where the data elements (or fields) for a given business object are placed in multiple columns, sometimes spread across more than one table, and identified by a unique key. In RDF, those fields are instead represented as separate predicate/object rows sharing the same subject, often the same unique key, with the predicate being analogous to the column name and the object the actual data. Unlike relational databases, the object column is heterogeneous: the
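An illustrative query with triple patterns and an OPTIONAL clause, wrapped in Python for consistency with the other sketches here; the FOAF vocabulary and the DBpedia endpoint are example choices, and the format=json parameter is an endpoint-specific convenience rather than part of the SPARQL standard.

    from urllib.parse import urlencode

    # Example query: triple patterns plus an OPTIONAL pattern.
    query = """
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?person ?name ?homepage
    WHERE {
      ?person a foaf:Person ;
              foaf:name ?name .
      OPTIONAL { ?person foaf:homepage ?homepage }
    }
    LIMIT 10
    """

    # Per the SPARQL Protocol, a query can be sent as an HTTP GET parameter
    # to any conforming endpoint (DBpedia used here as an example).
    url = "https://dbpedia.org/sparql?" + urlencode({"query": query, "format": "json"})
    print(url)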
https://en.wikipedia.org/wiki/Pneumonitis
Pneumonitis describes general inflammation of lung tissue. Possible causative agents include radiation therapy of the chest, exposure to medications used during chemotherapy, the inhalation of debris (e.g., animal dander), aspiration, herbicides or fluorocarbons, and some systemic diseases. If unresolved, continued inflammation can result in irreparable damage such as pulmonary fibrosis.

Pneumonitis is distinguished from pneumonia on the basis of causation as well as its manifestation. Pneumonia can be described as pneumonitis combined with consolidation and exudation of lung tissue due to infection with microorganisms. The distinction between pneumonia and pneumonitis can be further understood with pneumonitis being the encapsulation of all respiratory infections (incorporating pneumonia and pulmonary fibrosis as major diseases), and pneumonia as a localized infection. For most infections, the immune response of the body is enough to control and apprehend the infection within a couple of days, but if the tissue and the cells cannot fight off the infection, pus will begin to form in the lungs, which then hardens into a lung abscess or suppurative pneumonitis. Immunodeficient patients who do not get immediate treatment for a respiratory infection may develop more severe infections, which can be fatal.

Pneumonitis can be classified into several different specific subcategories, including hypersensitivity pneumonitis, radiation pneumonitis, acute interstitial pneumonitis, and chemical pneumonitis. These all share similar symptoms, but differ in causative agents. Diagnosis of pneumonitis remains challenging, but several different treatment paths (corticosteroids, oxygen therapy, avoidance) have seen success.

Causes

Alveoli are the primary structure affected by pneumonitis. Any particles that are smaller than 5 microns can enter the alveoli of the lungs. These tiny air sacs facilitate the passage of oxygen from inhaled air to the bloodstream. In th
https://en.wikipedia.org/wiki/FAPSI
FAPSI, or the Federal Agency of Government Communications and Information (FAGCI), was a Russian government agency responsible for signals intelligence and the security of governmental communications. The present-day FAPSI successor agencies are the relevant departments of the Federal Security Service (FSB) and Foreign Intelligence Service (SVR), as well as the Special Communications Service of Russia (Spetssvyaz), part of the Federal Protective Service of the Russian Federation (FSO RF).

History

Creation

FAPSI was created from the 8th Main Directorate (Government Communications) and the 16th Directorate (Electronic Intelligence) of the KGB. It was the equivalent of the US National Security Agency. On September 25, 1991, Soviet president Mikhail Gorbachev dismantled the KGB into several independent departments. One of them became the Committee on Government Communications under the President of the Soviet Union. On December 24, 1991, after the disbanding of the Soviet Union, the organization became the Federal Agency of Government Communications and Information under the President of the Russian Federation.

Dissolution

On March 11, 2003, the agency was reorganized into the Service of Special Communications and Information (Spetssvyaz, Spetssviaz) of the Federal Security Service of the Russian Federation (FSB RF). On August 7, 2004, Spetssviaz was incorporated as a structural subunit of the Federal Protective Service of the Russian Federation (FSO RF).

Structure

According to the press, the structure of FAPSI copied the structure of the US National Security Agency; it included:
- Chief R&D Directorate (Главное научно-техническое управление)
- Chief Directorate of Government Communications
- Chief Directorate of Security of Communications
- Chief Directorate of Information Technology (Главное управление информационных систем)
- Special troops of FAPSI
- Academy of Cryptography
- Military School of FAPSI in Voronezh, sometimes referred to as the world's largest school for hackers
https://en.wikipedia.org/wiki/Fubini%E2%80%93Study%20metric
In mathematics, the Fubini–Study metric (IPA: /fubini-ʃtuːdi/) is a Kähler metric on projective Hilbert space, that is, on a complex projective space CPn endowed with a Hermitian form. This metric was originally described in 1904 and 1905 by Guido Fubini and Eduard Study.

A Hermitian form in (the vector space) Cn+1 defines a unitary subgroup U(n+1) in GL(n+1,C). A Fubini–Study metric is determined up to homothety (overall scaling) by invariance under such a U(n+1) action; thus it is homogeneous. Equipped with a Fubini–Study metric, CPn is a symmetric space. The particular normalization on the metric depends on the application. In Riemannian geometry, one uses a normalization so that the Fubini–Study metric simply relates to the standard metric on the (2n+1)-sphere. In algebraic geometry, one uses a normalization making CPn a Hodge manifold.

Construction

The Fubini–Study metric arises naturally in the quotient space construction of complex projective space. Specifically, one may define CPn to be the space consisting of all complex lines in Cn+1, i.e., the quotient of Cn+1\{0} by the equivalence relation relating all complex multiples of each point together. This agrees with the quotient by the diagonal group action of the multiplicative group C* = C \ {0}. This quotient realizes Cn+1\{0} as a complex line bundle over the base space CPn. (In fact this is the so-called tautological bundle over CPn.) A point of CPn is thus identified with an equivalence class of (n+1)-tuples [Z0,...,Zn] modulo nonzero complex rescaling; the Zi are called homogeneous coordinates of the point.

Furthermore, one may realize this quotient mapping in two steps: since multiplication by a nonzero complex scalar z = R e^iθ can be uniquely thought of as the composition of a dilation by the modulus R followed by a counterclockwise rotation about the origin by an angle θ, the quotient mapping Cn+1 → CPn splits into two pieces,

Cn+1\{0} → S2n+1 → CPn,

where step (a) is a quotient by the dilation Z ~ RZ for R ∈ R+,
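For reference, the standard expression of the metric in homogeneous coordinates, a well-known formula added here for illustration (Riemannian normalization, up to overall scale):

    % Fubini-Study line element in homogeneous coordinates Z = [Z_0 : ... : Z_n];
    % it is manifestly invariant under Z -> cZ and under U(n+1).
    \[
      ds^2 \;=\; \frac{(Z \cdot \bar Z)\,(dZ \cdot d\bar Z)
                       \;-\; (\bar Z \cdot dZ)\,(Z \cdot d\bar Z)}
                      {(Z \cdot \bar Z)^2}
    \]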
https://en.wikipedia.org/wiki/List%20of%20software%20development%20philosophies
This is a list of approaches, styles, methodologies, and philosophies in software development and engineering. It also contains programming paradigms, software development methodologies, software development processes, and single practices, principles and laws. Some of the mentioned methods are more relevant to one specific field than another, such as automotive or aerospace. The trend towards agile methods in software engineering is noticeable, but the need for improved studies on the subject is also paramount. Note also that some of the methods listed might be newer or older, still in use or outdated; research on software design methods is not new and is ongoing.

Software development methodologies, guidelines, strategies

Large-scale programming styles
- Behavior-driven development
- Design-driven development
- Domain-driven design
- Secure by design
- Test-driven development
- Acceptance test-driven development
- Continuous test-driven development
- Specification by example
- Data-driven development
- Data-oriented design

Specification-related paradigms
- Iterative and incremental development
- Waterfall model
- Formal methods

Comprehensive systems
- Agile software development
- Lean software development
- Lightweight methodology
- Adaptive software development
- Extreme programming
- Feature-driven development
- ICONIX
- Kanban (development)
- Unified Process
- Rational Unified Process
- OpenUP
- Agile Unified Process

Rules of thumb, laws, guidelines and principles
- 300 Rules of Thumb and Nuggets of Wisdom (excerpt from Managing the Unmanageable – Rules, Tools, and Insights for Managing Software People and Teams by Mickey W. Mantle, Ron Lichty)
- ACID
- Big ball of mud
- Brooks's law
- C++ Core Guidelines (Stroustrup/Sutter), P1–P13 philosophy rules
- CAP theorem
- Code reuse
- Command–query separation (CQS)
- Conway's law
- Cowboy coding
- Do what I mean (DWIM)
- Don't repeat yourself (DRY)
- Egoless programming
- Fail-fast
- Gall's law
- General Responsibility Assignment Software Patterns (GRASP)
- If
https://en.wikipedia.org/wiki/Egg%20tooth
An egg tooth is a temporary, sharp projection present on the bill or snout of an oviparous animal at hatching. It allows the hatchling to penetrate the eggshell from inside and break free. Birds, reptiles, and monotremes possess egg teeth as hatchlings. Similar structures exist in Eleutherodactylus frogs and spiders.

Birds

When it is close to hatching, a chick uses its egg tooth to pierce the air sac between the membrane and the eggshell. This sac provides a few hours' worth of air, during which time the chick hatches. When a chick is ready to hatch from its egg, it begins the process of "pipping": forcing the egg tooth through the shell repeatedly as the embryo rotates, eventually cutting away a section at the blunt end of the egg and leaving a hole through which the bird may emerge. Some species, including woodpeckers, have two egg teeth, one each on the upper and lower bill. In time the egg tooth falls off or is absorbed into the growing chick's bill. Some precocial species such as the kiwi, and superprecocial species including megapodes, do not require an egg tooth to assist them in hatching. They are strong enough at the time of hatching to use their legs and feet to crack open the egg. Megapode embryos develop and shed their egg tooth before hatching.

Snakes and lizards

Most squamates (lizards and snakes) also lay eggs, and similarly need an egg tooth. Unlike in other amniotes, the egg tooth of squamates is an actual tooth which develops from the premaxilla.

Crocodilians

A baby crocodile has an egg tooth on the end of its snout. It is a tough piece of skin which is resorbed less than two months after hatching. Crocodile eggs are similar to those of birds in that they have an inner membrane and an outer one. The egg tooth is used to tear open the inner membrane; the baby crocodile can then push its way through the outer shell. If conditions are particularly dry that year, the inner membrane may be too tough for the crocodile to break through, and without
https://en.wikipedia.org/wiki/Cone%20beam%20reconstruction
In microtomography X-ray scanners, cone beam reconstruction is one of two common scanning methods, the other being fan beam reconstruction. Cone beam reconstruction uses a two-dimensional approach for obtaining projection data. Instead of utilizing a single row of detectors, as fan beam methods do, a cone beam system uses a standard charge-coupled device camera focused on a scintillator material. The scintillator converts X-ray radiation to visible light, which is picked up by the camera and recorded. The method has enjoyed widespread implementation in microtomography, and is also used in several larger-scale systems.

An X-ray source is positioned across from the detector, with the object being scanned in between. (This is essentially the same setup used for an ordinary X-ray fluoroscope.) Projections from different angles are obtained in one of two ways. In one method, the object being scanned is rotated; this has the advantage of simplicity in implementation, since a rotating stage adds little complexity. The second method involves rotating the X-ray source and camera around the object, as is done in ordinary CT scanning and SPECT imaging. This adds complexity, size and cost to the system, but removes the need to rotate the object.

The method is referred to as cone-beam reconstruction because the X-rays are emitted from the source as a cone-shaped beam: the beam starts tight at the source and expands as it moves away.

See also
- Computed tomography
- Industrial CT scanning
- Tomographic reconstruction
https://en.wikipedia.org/wiki/Triglyph
Triglyph is an architectural term for the vertically channeled tablets of the Doric frieze in classical architecture, so called because of the angular channels in them. The rectangular recessed spaces between the triglyphs on a Doric frieze are called metopes. The raised spaces between the channels themselves (within a triglyph) are called femur in Latin or meros in Greek. In the strict tradition of classical architecture, a set of guttae, the six triangular "pegs" below, always go with a triglyph above (and vice versa), and the pair of features are only found in entablatures of buildings using the Doric order. The absence of the pair effectively converts a building from being in the Doric order to being in the Tuscan order. The triglyph is largely thought to be a tectonic and skeuomorphic representation in stone of the wooden beam ends of the typical primitive hut, as described by Vitruvius and Renaissance writers. The wooden beams were notched in three separate places in order to cast their rough-cut ends mostly in shadow. Greek architecture (and later Roman architecture) preserved this feature, as well as many other features common in original wooden buildings, as a tribute to the origins of architecture and its role in the history and development of man. The channels could also have a function in channeling rainwater. Structure and placing In terms of structure, a triglyph may be carved from a single block with a metope, or the triglyph block may have slots cut into it to allow a separately cut metope (in stone or wood) to be slid into place, as at the Temple of Aphaea. Of the two groups of 6th-century metopes from Foce del Sele, now in the museum at Paestum, the earlier uses the first method, the later the second. There may be some variation in design within a single structure to allow for corner contraction, an adjustment of the column spacing and arrangement of the Doric frieze in a temple to make the design appear more harmonious. In the evolution of
https://en.wikipedia.org/wiki/List%20of%20MUD%20clients
A MUD client is a game client, a computer application used to connect to a MUD, a type of multiplayer online game. Generally, a MUD client is a very basic telnet client that lacks VT100 terminal emulation and the capability to perform telnet negotiations. On the other hand, MUD clients are enhanced with various features designed to make the MUD telnet interface more accessible to users and to enhance the gameplay of MUDs, with features such as syntax highlighting, keyboard macros, and connection assistance. Standard features seen in most MUD clients include ANSI color support, aliases, triggers and scripting. The client can often be extended almost indefinitely with its built-in scripting language. Most MUDs restrict the usage of scripts because they give an unfair advantage, as well as out of fear that the game will end up being played by fully automated clients instead of human beings. Prominent clients include TinyTalk, TinyFugue, TinTin++, and zMUD.

History

The first MUD client with a notable number of features was TinyTalk by Anton Rang in January 1990, for Unix-like systems. In May 1990 TinyWar 1.1.4 was released by Leo Plotkin, which was based on TinyTalk 1.0 and added support for event-driven programming. In September 1990 TinyFugue, which was based on TinyWar 1.2.3 and TT 1.1, was released by Greg Hudson and featured more advanced trigger support. Development of TinyFugue was taken over by Ken Keys in 1991. TinyFugue has continued to evolve and remains a popular client today for Unix-like systems. TinyFugue, or tf, was primarily written for Unix-like operating systems. It is one of the earliest MUD clients in existence. It is primarily geared toward TinyMUD variants. TinyFugue is extensible through its own macro language, which also ties into its extensive trigger system. The trigger system allows implementation of automatically run commands. Another early client was TINTIN by Peter Unold in April 1992. In October 1992 Peter Unold made his final release, TIN
https://en.wikipedia.org/wiki/Firestarter%20%28firewall%29
Firestarter is a personal firewall tool that uses the Netfilter (iptables/ipchains) system built into the Linux kernel. It has the ability to control both inbound and outbound connections. Firestarter provides a graphical interface for configuring firewall rules and settings. It provides real-time monitoring of all network traffic for the system. Firestarter also provides facilities for port forwarding, internet connection sharing and DHCP service. Firestarter is free and open-source software and uses GUI widgets from GTK+. Note that it uses GTK 2 and has not been upgraded to GTK 3, so the last Linux distributions it will run on are Ubuntu 18, Debian 9, etc.

See also
- Uncomplicated Firewall
- iptables
- netfilter
https://en.wikipedia.org/wiki/Abstract%20state%20machine
In computer science, an abstract state machine (ASM) is a state machine operating on states that are arbitrary data structures (structure in the sense of mathematical logic, that is, a nonempty set together with a number of functions (operations) and relations over the set).

Overview

The ASM Method is a practical and scientifically well-founded systems engineering method that bridges the gap between the two ends of system development:
- the human understanding and formulation of real-world problems (requirements capture by accurate high-level modeling at the level of abstraction determined by the given application domain)
- the deployment of their algorithmic solutions by code-executing machines on changing platforms (definition of design decisions, system and implementation details).

The method builds upon three basic concepts:
- ASM: a precise form of pseudo-code, generalizing finite state machines to operate over arbitrary data structures
- ground model: a rigorous form of blueprints, serving as an authoritative reference model for the design
- refinement: a most general scheme for stepwise instantiations of model abstractions to concrete system elements, providing controllable links between the more and more detailed descriptions at the successive stages of system development.

In the original conception of ASMs, a single agent executes a program in a sequence of steps, possibly interacting with its environment. This notion was extended to capture distributed computations, in which multiple agents execute their programs concurrently. Since ASMs model algorithms at arbitrary levels of abstraction, they can provide high-level, low-level and mid-level views of a hardware or software design. ASM specifications often consist of a series of ASM models, starting with an abstract ground model and proceeding to greater levels of detail in successive refinements or coarsenings. Due to the algorithmic and mathematical nature of these three concepts, ASM models and their pr
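A toy sketch of the single-agent execution model (the dict-based state and rule encoding are ad hoc illustrations, not the ASM literature's notation): one step evaluates every applicable update rule against the old state, then applies all updates simultaneously.

    # A state is a mapping of locations to values; one ASM step computes
    # every right-hand side in the OLD state before writing any location.
    def asm_step(state: dict, rules) -> dict:
        updates = {}
        for rule in rules:
            updates.update(rule(state))    # each rule reads the old state
        return {**state, **updates}        # all writes land at once

    # Example rules for a trivial counter machine (hypothetical example).
    rules = [
        lambda s: {"count": s["count"] + 1} if s["mode"] == "run" else {},
        lambda s: {"mode": "halt"} if s["count"] >= 3 else {},
    ]

    state = {"count": 0, "mode": "run"}
    while state["mode"] == "run":
        state = asm_step(state, rules)
    print(state)                           # {'count': 4, 'mode': 'halt'}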
https://en.wikipedia.org/wiki/Duration%20calculus
Duration calculus (DC) is an interval logic for real-time systems. It was originally developed by Zhou Chaochen with the help of Anders P. Ravn and C. A. R. Hoare on the European ESPRIT Basic Research Action (BRA) ProCoS project on Provably Correct Systems. Duration calculus is mainly useful at the requirements level of the software development process for real-time systems. Some tools are available (e.g., DCVALID and IDLVALID). Subsets of duration calculus have been studied (e.g., using discrete time rather than continuous time). Duration calculus is especially espoused by UNU-IIST in Macau and the Tata Institute of Fundamental Research in Mumbai, which are major centres of excellence for the approach. See also Interval temporal logic (ITL) Temporal logic Temporal logic of actions (TLA) Modal logic
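The flavour of DC requirements is conveyed by the well-known gas burner example from the duration calculus literature: in every observation interval at least 60 time units long, the accumulated duration of the Leak state must not exceed one twentieth of the interval length ℓ:

\[ \Box \left( \ell \geq 60 \;\Rightarrow\; 20 \int \mathit{Leak} \leq \ell \right) \]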
https://en.wikipedia.org/wiki/Blood%20shift
Blood shift has at least two separate meanings: In medicine, it is synonymous with left shift In diving physiology, it is part of the diving reflex
https://en.wikipedia.org/wiki/Peginterferon%20alfa-2a
Pegylated interferon alfa-2a, sold under the brand name Pegasys among others, is a medication used to treat hepatitis C and hepatitis B. For hepatitis C it is typically used together with ribavirin and cure rates are between 24 and 92%. For hepatitis B it may be used alone. It is given by injection under the skin. Side effects are common. They may include headache, feeling tired, depression, trouble sleeping, hair loss, nausea, pain at the site of injection, and fever. Severe side effects may include psychosis, autoimmune disorders, blood clots, or infections. Use with ribavirin is not recommended during pregnancy. Pegylated interferon alfa-2a is in the alpha interferon family of medications. It is pegylated to protect the molecule from breakdown. Pegylated interferon alfa-2a was approved for medical use in the United States in 2002. It is on the World Health Organization's List of Essential Medicines. Medical uses This drug is approved around the world for the treatment of chronic hepatitis C (including people with HIV co-infection, cirrhosis, 'normal' levels of ALT) and has recently been approved (in the EU, U.S., China and many other countries) for the treatment of chronic hepatitis B. It is also used in the treatment of certain T-cell lymphomas, particularly mycosis fungoides. Peginterferon alfa-2a is a long-acting interferon. Interferons are proteins released in the body in response to viral infections. Interferons are important for fighting viruses in the body, for regulating reproduction of cells, and for regulating the immune system. Host genetic factors For genotype 1 hepatitis C treated with pegylated interferon alfa-2a or pegylated interferon alfa-2b combined with ribavirin, it has been shown that genetic polymorphisms near the human IL28B gene, encoding interferon lambda 3, are associated with significant differences in response to the treatment. This finding, originally reported in Nature, showed genotype 1 hepatitis C patients carrying certain gen
https://en.wikipedia.org/wiki/Fluoride%20volatility
Fluoride volatility is the tendency of highly fluorinated molecules to vaporize at comparatively low temperatures. Heptafluorides, hexafluorides and pentafluorides have much lower boiling points than the lower-valence fluorides. Most difluorides and trifluorides have high boiling points, while most tetrafluorides and monofluorides fall in between. The term "fluoride volatility" is jargon used particularly in the context of separation of radionuclides. Volatility and valence Valences for the majority of elements are based on the highest known fluoride. Roughly, fluoride volatility can be used to remove elements with a valence of 5 or greater: uranium, neptunium, plutonium, metalloids (tellurium, antimony), nonmetals (selenium), halogens (iodine, bromine), and the middle transition metals (niobium, molybdenum, technetium, ruthenium, and possibly rhodium). This fraction includes the actinides most easily reusable as nuclear fuel in a thermal reactor, and the two long-lived fission products best suited to disposal by transmutation, Tc-99 and I-129, as well as Se-79. Noble gases (xenon, krypton) are volatile even without fluoridation, and will not condense except at much lower temperatures. Left behind are alkali metals (caesium, rubidium), alkaline earth metals (strontium, barium), lanthanides, the remaining actinides (americium, curium), remaining transition metals (yttrium, zirconium, palladium, silver) and post-transition metals (tin, indium, cadmium). This fraction contains the fission products that are radiation hazards on a scale of decades (Cs-137, Sr-90, Sm-151), the four remaining long-lived fission products Cs-135, Zr-93, Pd-107, Sn-126 of which only the last emits strong radiation, most of the neutron poisons, and the higher actinides (americium, curium, californium) that are radiation hazards on a scale of hundreds or thousands of years and are difficult to work with because of gamma radiation but are fissionable in a fast reactor. Reprocessing metho
https://en.wikipedia.org/wiki/Happy%20ending%20problem
In mathematics, the "happy ending problem" (so named by Paul Erdős because it led to the marriage of George Szekeres and Esther Klein) is the following statement: This was one of the original results that led to the development of Ramsey theory. The happy ending theorem can be proven by a simple case analysis: if four or more points are vertices of the convex hull, any four such points can be chosen. If on the other hand, the convex hull has the form of a triangle with two points inside it, the two inner points and one of the triangle sides can be chosen. See for an illustrated explanation of this proof, and for a more detailed survey of the problem. The Erdős–Szekeres conjecture states precisely a more general relationship between the number of points in a general-position point set and its largest subset forming a convex polygon, namely that the smallest number of points for which any general position arrangement contains a convex subset of points is . It remains unproven, but less precise bounds are known. Larger polygons proved the following generalisation: The proof appeared in the same paper that proves the Erdős–Szekeres theorem on monotonic subsequences in sequences of numbers. Let denote the minimum for which any set of points in general position must contain a convex N-gon. It is known that , trivially. . . A set of eight points with no convex pentagon is shown in the illustration, demonstrating that ; the more difficult part of the proof is to show that every set of nine points in general position contains the vertices of a convex pentagon. . The value of is unknown for all . By the result of , is known to be finite for all finite . On the basis of the known values of for N = 3, 4 and 5, Erdős and Szekeres conjectured in their original paper that They proved later, by constructing explicit examples, that . In 2016 Andrew Suk showed that for Suk actually proves, for N sufficiently large, A 2020 preprint by Andreas F. Holmsen,
https://en.wikipedia.org/wiki/Aflatoxin%20total%20synthesis
Aflatoxin total synthesis concerns the total synthesis of a group of organic compounds called aflatoxins. These compounds occur naturally in several fungi. As with other chemical compound targets in organic chemistry, the organic synthesis of aflatoxins serves various purposes. Traditionally it served to prove the structure of a complex biocompound in addition to evidence obtained from spectroscopy. It also demonstrates new concepts in organic chemistry (reagents, reaction types) and opens the way to molecular derivatives not found in nature. And for practical purposes, a synthetic biocompound is a commercial alternative to isolating the compound from natural resources. Aflatoxins in particular add another dimension because it is suspected that they have been mass-produced in the past from biological sources as part of a biological weapons program. The synthesis of racemic aflatoxin B1 was reported by Buechi et al. in 1967 and that of racemic aflatoxin B2 by Roberts et al. in 1968. The group of Barry Trost of Stanford University is responsible for the enantioselective total synthesis of (+)-aflatoxin B1 and B2a in 2003. In 2005 the group of E. J. Corey of Harvard University presented the enantioselective synthesis of aflatoxin B2. Aflatoxin B2 synthesis The total synthesis of aflatoxin B2 is a multistep sequence that begins with a [2+3] cycloaddition between the quinone 1 and 2,3-dihydrofuran. This reaction is catalyzed by a CBS catalyst and is enantioselective. The next step is the orthoformylation of reaction product 2 in a Duff reaction. The hydroxyl group in 3 is esterified with triflic anhydride, which adds a triflate protecting group. This step enables a Grignard reaction of the aldehyde group in 4 with methylmagnesium bromide to give the alcohol 5, which is then oxidized with the Dess–Martin periodinane to the ketone 6. A Baeyer–Villiger oxidation converts the ketone to an ester (7) and a reduction with Raney nickel converts the ester into an alcohol and
https://en.wikipedia.org/wiki/K-202
K-202 was a 16-bit minicomputer, created by a team led by Polish scientist Jacek Karpiński between 1970 and 1973 in cooperation with British companies Data-Loop and M.B. Metals. The machine could perform about 1 million instructions per second, making it highly competitive with the US Data General SuperNOVA and UK CTL Modular One. Most other minicomputers of the era were significantly slower. Approximately 30 units were claimed to be produced. All units shipped to M.B. Metals were returned for service. Due to friction resulting from competition with Elwro, a government-backed competitor, the production of the K-202 was blocked and Karpiński was thrown out of his company under allegations of sabotage and embezzlement. Some time later the K-202 had a successor, the MERA-400, hundreds of which were built. Description The K-202 was packaged in a metal box similar to other minicomputers in overall size, and capable of being fit into a 19-inch rack, which was common for other systems. Like most computers of the era, the front panel included a number of switches and lamps that could be used to directly set or read the values stored in main memory. A unique feature was a large dial on the right that selected what to display or set, allowing rapid access to the processor registers simply by rotating the dial. A key that turned on the power and unlocked the case was positioned on the right side of the case. The system was designed to be highly expandable. A minimal setup consisted of the central processing unit (CPU), a minimum of 4 k 16-bit words of core memory (4 kW, or 8 kB), and a single input/output channel for use with a computer terminal. The basic system supported vectored interrupts for 32 input/output devices. At the other end of the scale, a maximally expanded system could include up to 4 MW of memory, a floating point unit (FPU), multiple multi-line programmable input/output systems, and even more than one CPU. At the maximum, it could support 272 I/O interrupt levels. The expans
https://en.wikipedia.org/wiki/United%20Nations%20University%20Institute%20in%20Macau
The United Nations University Institute in Macau, formerly the United Nations University International Institute for Software Technology (UNU-IIST; Portuguese: Instituto Internacional para Tecnologia de Programação da Universidade das Nações Unidas) and then United Nations University Institute on Computing and Society (UNU-CS), is a United Nations University global think tank conducting research and training on digital technologies for sustainable development, encouraging data-driven and evidence-based actions and policies to achieve the Sustainable Development Goals. UNU-Macau is located in Macau, China. History During 1987–1989, United Nations University conducted two studies regarding the need for a research and training centre for computing in the developing nations. The studies led to the decision of the Council of the UNU to establish in Macau the United Nations University International Institute for Software Technology (UNU-IIST), which was founded on 12 March 1991 and opened its doors in July 1992. UNU-IIST was created following the agreement between the UN, the governments of Macau, Portugal, and China. The Macao authorities also supplied the institute with its office premises, located in the heritage building Casa Silva Mendes, and subsidized fellows' accommodation. As part of the United Nations, the institute was to address the pressing global problems of human survival, development and welfare by international co-operation, research and advanced training in software technology. After the transfer of sovereignty over Macau, UNU-IIST lost some of its visibility. United Nations University Institute on Computing and Society Eventually, in 2015, UNU decided to evolve the former IIST into a new Institute on Computing and Society (UNU-CS). UNU-CS intended to assist the United Nations to achieve its agenda by working as a bridge between the UN and the international academic and public policy communities, training the next generation of interdisciplinary sc
https://en.wikipedia.org/wiki/Rigorous%20Approach%20to%20Industrial%20Software%20Engineering
Rigorous Approach to Industrial Software Engineering (RAISE) was developed as part of the European ESPRIT II LaCoS project in the 1990s, led by Dines Bjørner. It consists of a set of tools based around a specification language (RSL) for software development. It is especially espoused by UNU-IIST in Macau, who run training courses on site and around the world, especially in developing countries. See also Formal methods Formal specification External links RAISE Virtual Library entry RAISE – Rigorous Approach to Industrial Software Engineering RAISE information from Dines Bjørner
https://en.wikipedia.org/wiki/Field-replaceable%20unit
A field-replaceable unit (FRU) is a printed circuit board, part, or assembly that can be quickly and easily removed from a computer or other piece of electronic equipment, and replaced by the user or a technician without having to send the entire product or system to a repair facility. FRUs allow a technician lacking in-depth product knowledge to isolate faults and replace faulty components. The granularity of FRUs in a system impacts total cost of ownership and support, including the costs of stocking spare parts, where spares are deployed to meet repair time goals, how diagnostic tools are designed and implemented, levels of training for field personnel, whether end-users can do their own FRU replacement, etc. Other equipment FRUs are not strictly confined to computers but are also part of many high-end, lower-volume consumer and commercial products. For example, in military aviation, electronic components of line-replaceable units, typically known as shop-replaceable units (SRUs), are repaired at field-service backshops, usually by a "remove and replace" repair procedure, with specialized repair performed at a centralized depot or by the OEM. History Many vacuum tube computers had FRUs: Pluggable units containing one or more vacuum tubes and various passive components Most transistorized and integrated circuit-based computers had FRUs: Computer modules, circuit boards containing discrete transistors and various passive components. Examples: IBM SMS cards DEC System Building Blocks cards DEC Flip-Chip cards Circuit boards containing monolithic ICs and/or hybrid ICs, such as IBM SLT cards. Vacuum tubes themselves are usually FRUs. For a short period starting in the late 1960s, some television set manufacturers made solid-state televisions with FRUs instead of a single board attached to the chassis. However, modern televisions put all the electronics on one large board to reduce manufacturing costs. Trends As the sophistication and complexity of multi-replaceable
https://en.wikipedia.org/wiki/List%20of%20Jamaican%20flags
This is a list of flags used in Jamaica. National flag Governor-General Prime Minister Military flags Historical See also Flag of the West Indies Federation
https://en.wikipedia.org/wiki/California%20Coastal%20Conservancy
The California State Coastal Conservancy (CSCC, SCC) is a non-regulatory state agency in California established in 1976 to enhance coastal resources and public access to the coast. The CSCC is a department of the California Natural Resources Agency. The agency's work is conducted along the entirety of the California coast, including the interior San Francisco Bay. It is responsible for planning and coordinating federal land sales for acquisition into state ownership, as well as awarding grant funding for improvement projects. The Board of Directors for the agency is made up of seven members who are appointed by the Governor of California and approved by the California Legislature; members of the California State Assembly and California State Senate engage and provide oversight within their legislative capacity. Goals The agency's official goals are to: Protect and improve coastal wetlands, streams, and watersheds Improve access to coasts and shores by building trails and stairways and improving the availability of low-cost accommodation including campgrounds and hostels Work with local communities to revitalize urban watersheds Assist in resolving complex land-use problems Purchase and hold environmentally valuable coastal and bay lands Protect agricultural lands and support coastal agriculture Accept donations and dedication of land and easements for public access, wildlife habitat, agriculture and open spaces Since its establishment, the Conservancy has completed over 4,000 projects along the California coastline and San Francisco Bay, restored over 400,000 acres of coastal habitat, built hundreds of miles of new trail including the Bay Area Ridge Trail, Santa Ana Parkway Trail, and partnered on over 100 urban waterfront projects. The Conservancy has spent over $1.8 billion on projects. It works in partnership with other public agencies, nonprofit organizations and private landowners, employing 75 people and overseeing a current annual budget of $53 million. Th
https://en.wikipedia.org/wiki/List%20of%20Polish%20flags
A variety of Polish flags are defined in current Polish national law, either through an act of parliament or a ministerial ordinance. Apart from the national flag, these are mostly military flags, used by one or all branches of the Polish Armed Forces, especially the Polish Navy. Other flags are flown by vessels of non-military uniformed services. Most Polish flags feature white and red, the national colors of Poland. The national colors, officially adopted in 1831, are of heraldic origin and derive from the tinctures of the coats of arms of Poland and Lithuania. Additionally, some flags incorporate the white eagle of the Polish coat of arms, while other flags used by the Armed Forces incorporate military eagles, which are variants of the white eagle. Both variants of the national flag of Poland were officially adopted in 1919, shortly after Poland re-emerged as an independent state in the aftermath of World War I in 1918. Many Polish flags were adopted within the following three years. The designs of most of these flags have been modified only to adjust to the changes in the official rendering of the national coat of arms. Major modifications included a change in the stylization of the eagle from Neoclassicist to Baroque in 1927 and the removal of the crown from the eagle's head during the Communist rule from 1944 to 1990. Legal specification for the shades of the national colors has also changed with time. The shade of red was first legally specified as vermilion by a presidential decree of 13 December 1928. This verbal prescription was replaced with coordinates in the CIE 1976 color space by the Coat of Arms Act of 31 January 1980. National flags The basic variant of the national flag is a plain white-and-red horizontal bicolor. A variant defaced with the coat of arms is restricted to official use abroad and at sea. Legal restrictions notwithstanding, the two variants are often treated as interchangeable in practice. Military flags Rank flags used in all branches of the Arme
https://en.wikipedia.org/wiki/B-staging
B-staging is a process that utilizes heat or UV light to remove the majority of solvent from an adhesive, thereby allowing a construction to be “staged”. Between adhesive application, assembly and curing, the product can be held for a period of time without sacrificing performance. Attempts to use traditional epoxies in IC packaging often created expensive production bottlenecks because, as soon as the epoxy adhesive was applied, the components had to be assembled and cured immediately. B-staging eliminates these bottlenecks by allowing IC manufacturing to proceed efficiently, with each step performed on larger batches of product. B-stage laminates are also used in the electronic circuit board industry, where the laminates, reinforced with woven glass fibers, are known as prepregs. This allows manufacturers to have a clean and accurate setup for multilayer pressing of cores and prepregs for production of PCBs, without the need to handle liquid uncured epoxies.
https://en.wikipedia.org/wiki/Way%20of%20the%20Tiger
The Way of the Tiger is a series of adventure gamebooks by Mark Smith and Jamie Thomson, originally published by Knight Books (an imprint of Hodder & Stoughton) from 1985. They are set on the fantasy world of Orb. The reader takes the part of a young monk/ninja, named Avenger, initially on a quest to avenge his foster father's murder and recover stolen scrolls. Later books presented other challenges for Avenger to overcome, most notably taking over and ruling a city. The world of Orb was originally created by Mark Smith for a Dungeons & Dragons game he ran while a pupil at Brighton College in the mid-1970s. Orb was also used as the setting for the 1984 Fighting Fantasy gamebook Talisman of Death, and one of the settings in the 1985 Falcon gamebook Lost in Time, both by Smith and Thomson. Each book has a disclaimer at the front against performing any of the ninja-related feats in the book as "They could lead to serious injury or death to an untrained user". The sixth book, Inferno!, ends on a cliffhanger with Avenger trapped in the web of the Black Widow, Orb's darkest blight. As no new books were released, the fate of Avenger and Orb was unknown. Mark Smith has confirmed that the cliffhanger ending was deliberate. In August 2013, the original creators of the series were working with Megara Entertainment to develop re-edited hardcover collector editions of the gamebooks (including a new prequel (Book 0) and sequel (Book 7)), and potentially a role-playing game based on the series. The two new books plus the six re-edited original books were reprinted in paperback format by Megara Entertainment in 2014, and made available as PDFs in 2019. Books and publication history The original series comprises six books: Avenger! (1985) Assassin! (1985) Usurper! (1985) Overlord! (1986) Warbringer! (1986) Inferno! (1987) The sixth book ended on a cliffhanger, which was not resolved until 27 years later. Interviewed in 2012, Mark Smith explained: "Our publishers Hodder
https://en.wikipedia.org/wiki/Use%20case%20survey
A use case survey is a list of names and perhaps brief descriptions of use cases associated with a system, component, or other logical or physical entity. This artifact is short and inexpensive to produce, and possibly advantageous over similar software development tools, depending on the needs of the project and analyst.
https://en.wikipedia.org/wiki/Atropisomer
Atropisomers are stereoisomers arising because of hindered rotation about a single bond, where energy differences due to steric strain or other contributors create a barrier to rotation that is high enough to allow for isolation of individual conformers. They occur naturally and are important in pharmaceutical design. When the substituents are achiral, these conformers are enantiomers (atropoenantiomers), showing axial chirality; otherwise they are diastereomers (atropodiastereomers). Etymology and history The word atropisomer (from the Greek ἄτροπος, átropos, meaning "without turn") was coined in application to a theoretical concept by German biochemist Richard Kuhn for Karl Freudenberg's seminal Stereochemie volume in 1933. Atropisomerism was first experimentally detected in a tetrasubstituted biphenyl, a diacid, by George Christie and James Kenner in 1922. Michinori Ōki further refined the definition of atropisomers, taking into account the temperature-dependence associated with the interconversion of conformers, specifying that atropisomers interconvert with a half-life of at least 1000 seconds at a given temperature, corresponding to an energy barrier of 93 kJ mol−1 (22 kcal mol−1) at 300 K (27 °C). Energetics The stability of individual atropisomers is conferred by the repulsive interactions that inhibit rotation. Both the steric bulk and, in principle, the length and rigidity of the bond connecting the two subunits contribute. Commonly, atropisomerism is studied by dynamic nuclear magnetic resonance spectroscopy, since atropisomerism is a form of fluxionality. Inferences from theory and results of reaction outcomes and yields also contribute. Atropisomers exhibit axial chirality (planar chirality). When the barrier to racemization is high, as illustrated by the BINAP ligands, the phenomenon becomes of practical value in asymmetric synthesis. Methaqualone, the anxiolytic and hypnotic-sedative, is a classical example of a drug molecule that exhibits the phenomenon of atropisomerism. S
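Ōki's 1000-second criterion and the quoted 93 kJ mol−1 barrier are consistent, as a rough Eyring-equation estimate shows (added here for illustration, assuming a transmission coefficient of 1):

\[ k = \frac{k_B T}{h}\, e^{-\Delta G^{\ddagger}/RT}, \qquad t_{1/2} = \frac{\ln 2}{k} \]

At T = 300 K and ΔG‡ = 93 kJ mol−1, k_B T/h ≈ 6.2 × 10¹² s⁻¹ and the exponential factor is about 6 × 10⁻¹⁷, giving k ≈ 4 × 10⁻⁴ s⁻¹ and t₁/₂ ≈ 2 × 10³ s, the same order of magnitude as the 1000-second threshold.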
https://en.wikipedia.org/wiki/Trinucleotide%20repeat%20disorder
Trinucleotide repeat disorders, also known as microsatellite expansion diseases, are a set of over 50 genetic disorders caused by trinucleotide repeat expansion, a kind of mutation in which repeats of three nucleotides (trinucleotide repeats) increase in copy number until they cross a threshold above which they cause developmental, neurological or neuromuscular disorders. Depending on its location, the unstable trinucleotide repeat may cause defects in a protein encoded by a gene; change the regulation of gene expression; produce a toxic RNA; or lead to production of a toxic protein. In general, the larger the expansion, the faster the onset of disease, and the more severe the disease becomes. Trinucleotide repeats are a subset of a larger class of unstable microsatellite repeats that occur throughout all genomes. The first trinucleotide repeat disease to be identified was fragile X syndrome, which has since been mapped to the long arm of the X chromosome. Patients carry from 230 to 4000 CGG repeats in the gene that causes fragile X syndrome, while unaffected individuals have up to 50 repeats and carriers of the disease have 60 to 230 repeats. The chromosomal instability resulting from this trinucleotide expansion presents clinically as intellectual disability, distinctive facial features, and macroorchidism in males. The second DNA-triplet repeat disease, fragile X-E syndrome, was also identified on the X chromosome, but was found to be the result of an expanded CCG repeat. The discovery that trinucleotide repeats could expand during intergenerational transmission and could cause disease was the first evidence that not all disease-causing mutations are stably transmitted from parent to offspring. Trinucleotide repeat disorders and the related microsatellite repeat disorders affect about 1 in 3,000 people worldwide. However, the frequency of occurrence of any one particular repeat sequence disorder varies greatly by ethnic group and geographic location. Many re
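Determining where an allele falls relative to these thresholds amounts to counting the longest tandem run of the repeat unit. The following Python sketch is illustrative only; the function name and the example sequence are invented, and the 60 to 230 carrier range is the one quoted above for fragile X syndrome:

def longest_repeat_run(sequence, unit="CGG"):
    """Length (in copies) of the longest tandem run of `unit`."""
    best = 0
    for start in range(len(sequence)):
        count, pos = 0, start
        while sequence.startswith(unit, pos):
            count += 1
            pos += len(unit)
        best = max(best, count)
    return best

seq = "AT" + "CGG" * 62 + "TA"   # hypothetical allele
print(longest_repeat_run(seq))   # 62, within the 60-230 carrier range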
https://en.wikipedia.org/wiki/Conserved%20sequence
In evolutionary biology, conserved sequences are identical or similar sequences in nucleic acids (DNA and RNA) or proteins across species (orthologous sequences), or within a genome (paralogous sequences), or between donor and receptor taxa (xenologous sequences). Conservation indicates that a sequence has been maintained by natural selection. A highly conserved sequence is one that has remained relatively unchanged far back up the phylogenetic tree, and hence far back in geological time. Examples of highly conserved sequences include the RNA components of ribosomes present in all domains of life, the homeobox sequences widespread amongst eukaryotes, and the tmRNA in bacteria. The study of sequence conservation overlaps with the fields of genomics, proteomics, evolutionary biology, phylogenetics, bioinformatics and mathematics. History The discovery of the role of DNA in heredity, and observations by Frederick Sanger of variation between animal insulins in 1949, prompted early molecular biologists to study taxonomy from a molecular perspective. Studies in the 1960s used DNA hybridization and protein cross-reactivity techniques to measure similarity between known orthologous proteins, such as hemoglobin and cytochrome c. In 1965, Émile Zuckerkandl and Linus Pauling introduced the concept of the molecular clock, proposing that steady rates of amino acid replacement could be used to estimate the time since two organisms diverged. While initial phylogenies closely matched the fossil record, observations that some genes appeared to evolve at different rates led to the development of theories of molecular evolution. Margaret Dayhoff's 1966 comparison of ferredoxin sequences showed that natural selection would act to conserve and optimise protein sequences essential to life. Mechanisms Over many generations, nucleic acid sequences in the genome of an evolutionary lineage can gradually change over time due to random mutations and deletions. Sequences may also recombine
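A common first measure of conservation is column-wise identity across a multiple alignment. The sketch below is a toy illustration in Python, not a standard bioinformatics API; the three aligned fragments are hypothetical:

from collections import Counter

def conservation_scores(aligned_seqs):
    """Per-column frequency of the most common residue across
    pre-aligned, equal-length sequences."""
    length = len(aligned_seqs[0])
    scores = []
    for i in range(length):
        column = [seq[i] for seq in aligned_seqs]
        most_common_count = Counter(column).most_common(1)[0][1]
        scores.append(most_common_count / len(column))
    return scores

# Columns 0 and 2 are perfectly conserved; column 1 varies.
seqs = ["MKV", "MAV", "MTV"]
print(conservation_scores(seqs))  # [1.0, 0.333..., 1.0]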
https://en.wikipedia.org/wiki/Pseudocereal
A pseudocereal or pseudograin is any of the non-grasses that are used in much the same way as cereals (true cereals are grasses). Pseudocereals can be further distinguished from other non-cereal staple crops (such as potatoes) by their being processed like a cereal: their seed can be ground into flour and otherwise used as a cereal. Prominent examples of pseudocereals include amaranth (love-lies-bleeding, red amaranth, Prince-of-Wales-feather), quinoa, and buckwheat. The pseudocereals have a good nutritional profile, with high levels of essential amino acids, essential fatty acids, minerals, and some vitamins. The starch in pseudocereals has small granules and low amylose content (except for buckwheat), which gives it similar properties to waxy-type cereal starches. The functional properties of pseudocereals, such as high viscosity, water-binding capacity, swelling capability, and freeze-thaw stability, are determined by their starch properties and seed morphology. Pseudocereals are gluten-free, and they are used to make 100% gluten-free products, which has increased their popularity. Common pseudocereals Acorn Amaranth (love-lies-bleeding, red amaranth, Prince-of-Wales-feather) Breadnut Buckwheat Cañahua Chia Cockscomb (also called quail grass or soko) Fat hen Hanza Pitseed goosefoot Quinoa Wattleseed (also called acacia seed) Production The following table shows the annual production of some pseudocereals in 1961, 2010, 2011, 2012, and 2013, ranked by 2013 production. Other grains that are locally important, but are not included in FAO statistics, include: Amaranth, an ancient pseudocereal, formerly a staple crop of the Aztec Empire and now widely grown in Africa. Kañiwa or Cañahua, close relative of quinoa.
https://en.wikipedia.org/wiki/Zhou%20Chaochen
Zhou Chaochen (born 1 November 1937) is a Chinese computer scientist. Zhou was born in Nanhui, Shanghai, China. He studied as an undergraduate at the Department of Mathematics and Mechanics, Peking University (1954–1958) and as a postgraduate at the Institute of Computing Technology, Chinese Academy of Sciences (CAS) (1963–1967). He worked at Peking University and CAS until his visit to the Oxford University Computing Laboratory (now the Oxford University Department of Computer Science) (1989–1992). During this time, he was the principal investigator of the duration calculus, an interval logic for real-time systems, as part of the European ESPRIT ProCoS project on Provably Correct Systems. During the periods 1990–1992 and 1995–1996, Zhou Chaochen was visiting professor at the Department of Computer Science, Technical University of Denmark, Lyngby, on the invitation of Professor Dines Bjørner. He was Principal Research Fellow (1992–1997) and later Director of UNU-IIST in Macau (1997–2002), until his retirement, when he returned to Beijing. In 2007, Zhou and Dines Bjørner, the first Director of UNU-IIST, were honoured on the occasion of their 70th birthdays. Zhou is a member of the Chinese Academy of Sciences. Books Zhou, Chaochen and Hansen, Michael R., Duration Calculus: A Formal Approach to Real-Time Systems. Springer-Verlag, Monographs in Theoretical Computer Science, An EATCS Series, 2003.
https://en.wikipedia.org/wiki/Homogeneity%20%28physics%29
In physics, a homogeneous material or system has the same properties at every point; it is uniform without irregularities. A uniform electric field (which has the same strength and the same direction at each point) would be compatible with homogeneity (all points experience the same physics). A material constructed with different constituents can be described as effectively homogeneous in the electromagnetic materials domain, when interacting with a directed radiation field (light, microwave frequencies, etc.). Mathematically, homogeneity has the connotation of invariance: all components of a homogeneous equation have the same degree, so the equation retains its form when its components are scaled to different values, for example by multiplication. In statistics, homogeneity similarly describes the state of having an identical cumulative distribution function or values. Context The definition of homogeneous strongly depends on the context used. For example, a composite material is made up of different individual materials, known as "constituents" of the material, but may be defined as a homogeneous material when assigned a function. For example, asphalt paves our roads, but is a composite material consisting of asphalt binder and mineral aggregate, and then laid down in layers and compacted. However, homogeneity of materials does not necessarily mean isotropy. In the previous example, a composite material may not be isotropic. In another context, a material is not homogeneous in so far as it is composed of atoms and molecules. However, at the normal level of our everyday world, a pane of glass, or a sheet of metal is described as glass, or stainless steel. In other words, these are each described as a homogeneous material. A few other instances of context are: dimensional homogeneity (see below) is the quality of an equation having quantities of the same units on both sides; homogeneity (in space) implies conservation of momentum; and homogeneity in time implies co
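Dimensional homogeneity, mentioned in the list above, can be checked term by term on a standard kinematics equation; every term must reduce to the same unit:

\[ x = x_0 + v_0 t + \tfrac{1}{2} a t^2, \qquad [x_0] = \mathrm{m}, \quad [v_0 t] = \frac{\mathrm{m}}{\mathrm{s}} \cdot \mathrm{s} = \mathrm{m}, \quad [a t^2] = \frac{\mathrm{m}}{\mathrm{s}^2} \cdot \mathrm{s}^2 = \mathrm{m} \]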
https://en.wikipedia.org/wiki/Z%20User%20Group
The Z User Group (ZUG) was established in 1992 to promote use and development of the Z notation, a formal specification language for the description of and reasoning about computer-based systems. It was formally constituted on 14 December 1992 during the ZUM'92 Z User Meeting in London, England. Meetings and conferences ZUG organised a series of Z User Meetings, initially approximately every 18 months. From 2000, these became the ZB Conference (jointly with the B-Method, co-organized with APCB), and from 2008 the ABZ Conference (with Abstract State Machines as well). In 2010, the ABZ Conference also included Alloy, a Z-like specification language with associated tool support. The Z User Group participated at the FM'99 World Congress on Formal Methods in Toulouse, France, in 1999. The group and the associated Z notation have been studied as a Community of Practice. List of proceedings The following proceedings were produced by the Z User Group: Bowen, J.P.; Nicholls, J.E., eds. (1993). Z User Workshop, London 1992, Proceedings of the Seventh Annual Z User Meeting, 14–15 December 1992. Springer, Workshops in Computing. Bowen, J.P.; Hall, J.A., eds. (1994). Z User Workshop, Cambridge 1994, Proceedings of the Eighth Annual Z User Meeting, 29–30 June 1994. Springer, Workshops in Computing. Bowen, J.P.; Hinchey, M.G., eds. (1995). ZUM '95: The Z Formal Specification Notation, 9th International Conference of Z Users, Limerick, Ireland, September 7–9, 1995. Springer, Lecture Notes in Computer Science, Volume 967. Bowen, J.P.; Hinchey, M.G.; Till, D., eds. (1997). ZUM '97: The Z Formal Specification Notation, 10th International Conference of Z Users, Reading, UK, April 3–4, 1997. Springer, Lecture Notes in Computer Science, Volume 1212. Bowen, J.P.; Fett, A.; Hinchey, M.G., eds. (1998). ZUM '98: The Z Formal Specification Notation, 11th International Conference of Z Users, Berlin, Germany, September 24–26, 1998. Springer, Lecture Notes in Computer Science, Vol
https://en.wikipedia.org/wiki/BCS-FACS
BCS-FACS is the BCS Formal Aspects of Computing Science Specialist Group. Overview The FACS group, inaugurated on 16 March 1978, organizes meetings for its members and others on formal methods and related computer science topics. There is an associated journal, Formal Aspects of Computing, published by Springer, and a more informal FACS FACTS newsletter. The group celebrated its 20th anniversary with a meeting at the Royal Society in London in 1998, with presentations by four eminent computer scientists, Mike Gordon, Tony Hoare, Robin Milner and Gordon Plotkin, all Fellows of the Royal Society. From 2002 to 2008, and again since 2013, the Chair of BCS-FACS has been Jonathan Bowen. Jawed Siddiqi was Chair during 2008–2013. In December 2002, BCS-FACS organized a conference on the Formal Aspects of Security (FASec'02) at Royal Holloway, University of London. In 2004, FACS organized a major event at London South Bank University to celebrate its own 25th anniversary and also 25 Years of CSP (CSP25), attended by the originator of CSP, Sir Tony Hoare, and others in the field. The group liaises with other related groups such as the Centre for Software Reliability, Formal Methods Europe, the London Mathematical Society Computer Committee, the Safety-Critical Systems Club, and the Z User Group. It has held joint meetings with other BCS specialist groups such as the Advanced Programming Group and BCSWomen. FACS sponsors and supports meetings, such as the Refinement Workshop. It has often held a Christmas event, with a theme related to formal aspects of computing, such as teaching formal methods and formal methods in industry. BCS-FACS supported the ABZ 2008 conference at the BCS London premises. In 2015, FACS hosted a two-day ProCoS Workshop on "Provably Correct Systems", with many former members of the ESPRIT ProCoS I and II projects and Working Group of the 1990s. Evening seminars In recent years, a series of evening seminars have been held, mainly at the
https://en.wikipedia.org/wiki/Ring%20singularity
A ring singularity or ringularity is the gravitational singularity of a rotating black hole, or a Kerr black hole, that is shaped like a ring. Description of a ring singularity When a spherical non-rotating body of a critical radius collapses under its own gravitation under general relativity, theory suggests it will collapse to a 0-dimensional single point. This is not the case with a rotating black hole (a Kerr black hole). With a rotating fluid body, its distribution of mass is not spherical (it shows an equatorial bulge), and it has angular momentum. Since a point cannot support rotation or angular momentum in classical physics (general relativity being a classical theory), the minimal shape of the singularity that can support these properties is instead a 2D ring with zero thickness but non-zero radius, and this is referred to as a ringularity or Kerr singularity. A rotating hole's rotational frame-dragging effects, described by the Kerr metric, cause spacetime in the vicinity of the ring to undergo curvature in the direction of the ring's motion. Effectively this means that different observers placed around a Kerr black hole who are asked to point to the hole's apparent center of gravity may point to different points on the ring. Falling objects will begin to acquire angular momentum from the ring before they actually strike it, and the path taken by a perpendicular light ray (initially traveling toward the ring's center) will curve in the direction of ring motion before intersecting with the ring. Traversability and nakedness An observer crossing the event horizon of a non-rotating and uncharged black hole (a Schwarzschild black hole) cannot avoid the central singularity, which lies in the future world line of everything within the horizon. Thus one cannot avoid spaghettification by the tidal forces of the central singularity. This is not necessarily true with a Kerr black hole. An observer falling into a Kerr black hole may be able to avoid the centr
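The ring shape can be stated compactly in the standard conventions (included here for illustration). In Boyer–Lindquist coordinates the Kerr curvature singularity lies where

\[ \Sigma = r^2 + a^2 \cos^2\theta = 0, \]

which requires r = 0 and θ = π/2 simultaneously. In Cartesian Kerr–Schild coordinates this locus is the ring x² + y² = a², z = 0, where a = J/(Mc) is the spin parameter of a hole of mass M and angular momentum J.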
https://en.wikipedia.org/wiki/Rabbit-proof%20fence
The State Barrier Fence of Western Australia, formerly known as the Rabbit-Proof Fence, the State Vermin Fence, and the Emu Fence, is a pest-exclusion fence constructed between 1901 and 1907 to keep rabbits and other agricultural pests from the east out of Western Australian pastoral areas. There are three fences in Western Australia: the original No. 1 Fence crosses the state from north to south, No. 2 Fence is smaller and further west, and No. 3 Fence is smaller still and runs east–west. The fences took six years to build. When completed, the rabbit-proof fence (including all three fences) stretched 2,023 miles (3,256 km). The cost to build each kilometre of fence at the time was about $250. When it was completed in 1950, the No. 1 Fence was the longest unbroken fence in the world. History Rabbits were introduced to Australia by the First Fleet in 1788, but they became a problem after October 1859, when Thomas Austin released 24 wild rabbits from England for hunting purposes, believing "The introduction of a few rabbits could do little harm and might provide a touch of home, in addition to a spot of hunting." The rabbits proved to be extremely prolific and spread rapidly across the southern parts of the country. Australia had ideal conditions for an explosion in the rabbit population, including the fact that they had virtually no local predators. By 1887, losses from rabbit damage compelled the New South Wales Government to offer a £25,000 reward for "any method of success not previously known in the Colony for the effectual extermination of rabbits". A Royal Commission was held in 1901 to investigate the situation. Construction The fence posts are placed apart and have a minimum diameter of . There were initially three wires of gauge, strung , , and above ground, with a barbed wire added later at and a plain wire at , to make the fence a barrier for dingoes and foxes as well. Wire netting, extending below ground, was attached to the wire. The fence was constru
https://en.wikipedia.org/wiki/KAUT-TV
KAUT-TV (channel 43) is a television station in Oklahoma City, Oklahoma, United States, serving as the local outlet for The CW Television Network. It is owned and operated by the network's majority owner, Nexstar Media Group, alongside NBC affiliate KFOR-TV (channel 4). Both stations share studios in Oklahoma City's McCourry Heights section, while KAUT-TV's transmitter is located on the city's northeast side. History Early history The UHF channel 43 allocation in Oklahoma City was originally assigned to Christian Broadcasting of Oklahoma Inc. – a religious nonprofit corporation headed by George G. Teague, a local evangelist and co-founder of the Capitol Hill Assembly of God – which filed an application with the Federal Communications Commission (FCC) for a license and construction permit on April 4, 1977, proposing to sign on a non-commercial religious television station on the frequency. The FCC Broadcast Bureau granted the license to Christian Broadcasting of Oklahoma on November 17, 1978; two months later in January 1979, the group applied to use KFHC-TV as the planned station's callsign. On July 13, 1979, the Teague group announced it would sell the license to Golden West Broadcasters (a joint venture between actor/singer and Ravia, Oklahoma, native Gene Autry and The Signal Companies that, at the time, also owned independent station KTLA [now a fellow CW owned-and-operated station] in Los Angeles) for $60,000; the FCC granted approval of the transaction on January 24, 1980. VEU The station first signed on the air on October 15, 1980, as KAUT, initially operating as a pilot station for Golden West's subscription service Video Entertainment Unlimited (VEU). (The callsign, which references controlling group stakeholder Autry, was chosen by Golden West two months prior to sign-on; a "-TV" suffix would be added to the callsign on January 27, 1983.) It was the first broadcast outlet for the service, which Golden West's pay television unit, Golden West Subscriptio
https://en.wikipedia.org/wiki/Formal%20Methods%20Europe
Formal Methods Europe (FME) is an organization whose aim is to encourage the research and application of formal methods for the improvement of software and hardware in computer-based systems. The association's members are drawn from academia and industry. It is based in Europe, but is international in scope. FME operates under Dutch law. Activities include or have included: Dissemination of research findings and industrial experience through conferences (every 18 months) and sponsored events; Development of information resources for educators; Networking for commercial practitioners through ForTIA (Formal Techniques Industry Association). The Chair of FME is John Fitzgerald of the University of Newcastle upon Tyne, UK. ForTIA The Formal Techniques Industry Association (ForTIA) aimed to support the industrial use of formal methods under the umbrella organization of Formal Methods Europe. It was founded in 2003 through the initial efforts of Dines Bjørner and was chaired by Anthony Hall and Volkmar Lotz among others. Its scope was international and membership was by company. It organized meetings, especially in conjunction with conferences, for instance, industry days at the FM conferences organized by FME. See also BCS-FACS Formal Aspects of Computing Science Specialist Group Formal methods Anthony Hall, founding chair of ForTIA
https://en.wikipedia.org/wiki/MK484
The MK484 AM radio IC is a fully functional AM radio detector on a chip. It is constructed in a TO-92 case, resembling a small transistor. It replaces the similar ZN414 AM radio IC from the 1970s. The MK484 is favored by many hobbyists. It is advantageous in that it performs well with minimal discrete components, and can run from a single 1.5-volt cell. The MK484 has now in turn been replaced by the TA7642. Standard operation The simplest circuit employing the MK484 can be constructed using only a battery, an earphone (or high-impedance speaker), a coil and a variable capacitor. Extended operation The output of the MK484 can be fed into the base of a transistor to provide greater amplification as a class-A amplifier; however, this is often an inefficient design. Alternatively, the LM386 audio amplifier may be used to drive a small speaker. A higher supply voltage is required if the LM386 is used; therefore, to avoid exceeding the MK484's maximum supply voltage, small-signal diodes (such as the 1N4148) are recommended to create a voltage drop, or a Zener diode, or a red LED (forward-biased, so that it can double as a power indicator) with a series resistor (several hundred ohms for 9 V operation). Advantages Compact size Low power consumption Low cost: Rs 40 to 60 in Indian electronic markets; retails from 65 cents each on various websites
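The coil and variable capacitor form the tuned circuit that selects the station; its resonant frequency follows the standard LC formula. The component values here are illustrative assumptions, not taken from the text:

\[ f_0 = \frac{1}{2\pi\sqrt{LC}} \]

For example, with L = 200 µH, sweeping C from about 300 pF down to 50 pF moves f₀ from roughly 650 kHz up to 1.6 MHz, covering most of the medium-wave AM broadcast band.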
https://en.wikipedia.org/wiki/Epicranial%20aponeurosis
The epicranial aponeurosis (aponeurosis epicranialis, galea aponeurotica) is an aponeurosis (a tough layer of dense fibrous tissue). It covers the upper part of the skull in humans and many other animals. Structure In humans, the epicranial aponeurosis originates from the external occipital protuberance and highest nuchal lines of the occipital bone. It merges with the occipitofrontalis muscle. In front, it forms a short and narrow prolongation between its union with the frontalis muscle (the frontal part of the occipitofrontalis muscle). On either side, the epicranial aponeurosis attaches to the anterior auricular muscles and the superior auricular muscles. Here it is less aponeurotic, and is continued over the temporal fascia to the zygomatic arch as a layer of laminated areolar tissue. It is closely connected to the integument by the firm, dense, fibro-fatty layer which forms the superficial fascia of the scalp. It is attached to the pericranium by loose cellular tissue, which allows the aponeurosis, carrying with it the integument, to move through a considerable distance. Clinical significance Subgaleal haemorrhage is defined as bleeding between the epicranial aponeurosis and the skull. Conservative management is usually appropriate for these, as there is little risk of further damage to surrounding structures. History The epicranial aponeurosis is also known as the aponeurosis epicranialis (from Latin), and the galea aponeurotica. Additional images See also Epicranium Aponeurosis
https://en.wikipedia.org/wiki/Epicranium
The epicranium is the medical term for the collection of structures covering the cranium. It consists of the muscles, aponeurosis, and skin. Parts The epicranial aponeurosis is a tough layer of dense fibrous tissue that covers the upper part of the skull. The epicranial muscle (also called the epicranius) has two sections: the occipital belly, near the occipital bone, and the frontal belly, near the frontal bone. It is supplied by the supraorbital artery, the supratrochlear artery, and the occipital artery. It is innervated by the facial nerve. The epicranium also includes the skin of the scalp and the layer of subcutaneous tissue below it.
https://en.wikipedia.org/wiki/Tucson%20Bird%20Count
The Tucson Bird Count (TBC) is a community-based program that monitors bird populations in and around the Tucson, Arizona, United States metropolitan area. With nearly 1000 sites monitored annually, the Tucson Bird Count is among the largest urban biological monitoring programs in the world. Methods Each spring, TBC participants collect data on bird abundance and distribution at hundreds of point count locations arrayed across the Tucson basin. The TBC is an example of citizen science, drawing on the combined efforts of hundreds of volunteers. So that data are of suitable quality for scientific analysis and decision-making, all TBC volunteers are skilled birdwatchers; many are also professional field guides or biologists. TBC methods are similar to those employed by the North American Breeding Bird Survey, although the TBC uses more closely spaced sites (one site per 1-km2 square) over a smaller total area (approximately 1000 km2). The TBC's spatially systematic monitoring is complemented by a TBC park monitoring program that surveys parks, watercourses, or other areas of particular interest multiple times throughout the year. Uses of Data Uses of Tucson Bird Count data include monitoring the status of the Tucson-area bird community over time, finding the areas and land-use practices that are succeeding at sustaining native birds, and investigating the ecology of birds in human-dominated landscapes. Tucson Bird Count results have led to scientific publications, informed Tucson-area planning, and contributed to a variety of projects, from locating populations of imperiled species to estimating risk to humans from West Nile Virus. The TBC and several associated research projects are examples of reconciliation ecology, in that they investigate how native species can be sustained in and around the places people live, work, and play. Researchers have also used TBC data to explore the extent to which urban humans are separated from nature (Turner et al. 2004). Recently, the
https://en.wikipedia.org/wiki/Facial%20artery
The facial artery (external maxillary artery in older texts) is a branch of the external carotid artery that supplies structures of the superficial face. Structure The facial artery arises in the carotid triangle from the external carotid artery, a little above the lingual artery, and is sheltered by the ramus of the mandible. It passes obliquely up beneath the digastric and stylohyoid muscles, over which it arches to enter a groove on the posterior surface of the submandibular gland. It then curves upward over the body of the mandible at the antero-inferior angle of the masseter; passes forward and upward across the cheek to the angle of the mouth, then ascends along the side of the nose, and ends at the medial commissure of the eye, under the name of the angular artery. The facial artery is remarkably tortuous. This is to accommodate itself to neck movements such as those of the pharynx in swallowing; and facial movements such as those of the mandible, lips, and cheeks. Relations In the neck, its origin is superficial, being covered by the integument, platysma, and fascia; it then passes beneath the digastric and stylohyoid muscles and part of the submandibular gland, but superficial to the hypoglossal nerve. It lies upon the middle pharyngeal constrictor and the superior pharyngeal constrictor, the latter of which separates it, at the summit of its arch, from the lower and back part of the tonsil. On the face, where it passes over the body of the mandible, it is comparatively superficial, lying immediately beneath the dilators of the mouth. In its course over the face, it is covered by the integument, the fat of the cheek, and, near the angle of the mouth, by the platysma, risorius, and zygomaticus major. It rests on the buccinator and levator anguli oris, and passes either over or under the infraorbital head of the levator labii superioris. The anterior facial vein lies lateral/posterior to the artery, and takes a more direct course across the face, wher
https://en.wikipedia.org/wiki/Bacterial%20adhesin
Adhesins are cell-surface components or appendages of bacteria that facilitate adhesion or adherence to other cells or to surfaces, usually in the host they are infecting or living in. Adhesins are a type of virulence factor. Adherence is an essential step in bacterial pathogenesis or infection, required for colonizing a new host. Adhesion and bacterial adhesins are also a potential target either for prophylaxis or for the treatment of bacterial infections. Background Bacteria are typically found attached to and living in close association with surfaces. During the bacterial lifespan, a bacterium is subjected to frequent shear forces. In the crudest sense, bacterial adhesins serve as anchors allowing bacteria to overcome these environmental shear forces, thus remaining in their desired environment. However, bacterial adhesins do not serve as a sort of universal bacterial Velcro. Rather, they act as specific surface recognition molecules, allowing the targeting of a particular bacterium to a particular surface such as root tissue in plants, lacrimal duct tissues in mammals, or even tooth enamel. Most fimbriae of gram-negative bacteria function as adhesins, but in many cases it is a minor subunit protein at the tip of the fimbriae that is the actual adhesin. In gram-positive bacteria, a protein or polysaccharide surface layer serves as the specific adhesin. To effectively achieve adherence to host surfaces, many bacteria produce multiple adherence factors called adhesins. Bacterial adhesins provide species and tissue tropism. Adhesins are expressed by both pathogenic bacteria and saprophytic bacteria. This prevalence marks them as key microbial virulence factors in addition to a bacterium's ability to produce toxins and resist the immune defenses of the host. Structures Through the mechanisms of evolution, different species of bacteria have developed different solutions to the problem of attaching receptor-specific proteins to the bacteria surface. Today many diff
https://en.wikipedia.org/wiki/WLAE-TV
WLAE-TV (channel 32) is an educational independent television station in New Orleans, Louisiana, United States. The station is owned by the Educational Broadcasting Foundation, a partnership between the Willwoods Community (a Catholic-related organization) and the Louisiana Educational Television Authority (operator of Louisiana Public Broadcasting, which owns the PBS member stations in Louisiana that are located outside of New Orleans). WLAE's studios are located on Howard Avenue in New Orleans, and its transmitter is located on Paris Road/Highway 47 (northeast of Chalmette). History As a PBS member station In 1978, a group of married couples, supported by the Catholic Church, formed the Willwoods Community. The organization joined forces with the Louisiana Educational Television Authority, which had been looking for a way to get its locally-based programming into the state's largest market. At the time, WYES-TV (channel 12) was the city's sole public TV station, and Willwoods sought to obtain the other non-commercial license allocated to the New Orleans market. On December 14, 1981, under the banner of the "Educational Broadcasting Foundation," the partnership was granted an educational station license from the Federal Communications Commission (FCC). WLAE-TV first signed on the air on July 8, 1984; it originally served as a member station of the Public Broadcasting Service (PBS). WLAE-TV operated as a secondary member of the network through PBS' Program Differentiation Plan, and thus only carried 25% of the programming broadcast by PBS, while the remainder aired on WYES. Notably, Sesame Street was one of the few programs shown on both stations. In addition to offering PBS programming, WLAE also aired, and still airs, locally produced educational programs, as well as select programming from Louisiana Public Broadcasting (mostly consisting of news and public affairs programming). WLAE is also one of very few public television stations to televi
https://en.wikipedia.org/wiki/Motor%20variable
In mathematics, a function of a motor variable is a function with arguments and values in the split-complex number plane, much as functions of a complex variable involve ordinary complex numbers. William Kingdon Clifford coined the term motor for a kinematic operator in his "Preliminary Sketch of Biquaternions" (1873). He used split-complex numbers for scalars in his split-biquaternions. Motor variable is used here in place of split-complex variable for euphony and tradition. Functions of a motor variable provide a context to extend real analysis and provide compact representation of mappings of the plane. However, the theory falls well short of function theory on the ordinary complex plane. Nevertheless, some of the aspects of conventional complex analysis have an interpretation given with motor variables, and more generally in hypercomplex analysis. Elementary functions Let D = {z = x + jy : x, y ∈ R}, where j² = +1, the split-complex plane. The following exemplar functions f have domain and range in D: the action of a hyperbolic versor u = exp(aj) = cosh a + j sinh a is combined with translation to produce the affine transformation f(z) = uz + c. When c = 0, the function is equivalent to a squeeze mapping. The squaring function has no analogy in ordinary complex arithmetic. Let f(z) = z² and note that, for z = x + jy, z² = (x² + y²) + 2xyj. The result is that the four quadrants are mapped into one, the identity component U₁ = {z = x + jy : x > |y|}. Note that the hyperbolic versors form the unit hyperbola {z : zz* = x² − y² = 1}. Thus, the reciprocation f(z) = 1/z = z*/(zz*) involves the hyperbola as curve of reference as opposed to the circle in C. Linear fractional transformations Using the concept of a projective line over a ring, the projective line P(D) is formed. The construction uses homogeneous coordinates with split-complex number components. The projective line P(D) is transformed by linear fractional transformations: z ↦ (az + b)(cz + d)⁻¹, sometimes written with the coefficients as a 2 × 2 matrix acting on homogeneous coordinates, provided cz + d is a unit in D. Elementary linear fractional transformations include hyperbolic rotations z ↦ uz, translations z ↦ z + c, and the inversion z ↦ 1/z. Each of these has an inverse, and compositions fill out a group of line
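The split-complex arithmetic above is easy to experiment with. The following is a minimal sketch — the SplitComplex class and its tiny API are illustrative, not from any library — showing that multiplication follows j² = +1 and that squaring lands in the identity component x > |y|.

```python
# A minimal sketch of split-complex ("motor variable") arithmetic.
from dataclasses import dataclass

@dataclass(frozen=True)
class SplitComplex:
    x: float  # real part
    y: float  # coefficient of j, with j*j = +1

    def __mul__(self, other):
        # (x1 + j y1)(x2 + j y2) = (x1 x2 + y1 y2) + j (x1 y2 + y1 x2)
        return SplitComplex(self.x * other.x + self.y * other.y,
                            self.x * other.y + self.y * other.x)

    def modulus(self):
        # z z* = x^2 - y^2: constant on hyperbolas rather than circles
        return self.x ** 2 - self.y ** 2

z = SplitComplex(-3.0, 1.0)   # a point in the left quadrant
w = z * z                     # squaring: (9 + 1) + j(-6) = 10 - 6j
print(w)                      # SplitComplex(x=10.0, y=-6.0)
print(w.x > abs(w.y))         # True: the square lies in the identity component
```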
https://en.wikipedia.org/wiki/Liquefaction%20of%20gases
Liquefaction of gases is the physical conversion of a gas into a liquid state (condensation). The liquefaction of gases is a complicated process that uses various compressions and expansions to achieve high pressures and very low temperatures, using, for example, turboexpanders. Uses Liquefaction processes are used for scientific, industrial and commercial purposes. Many gases can be put into a liquid state at normal atmospheric pressure by simple cooling; a few, such as carbon dioxide, require pressurization as well. Liquefaction is used for analyzing the fundamental properties of gas molecules (intermolecular forces), or for the storage of gases, for example LPG, and in refrigeration and air conditioning. There the gas is liquefied in the condenser, where the heat of vaporization is released, and evaporated in the evaporator, where the heat of vaporization is absorbed. Ammonia was the first such refrigerant, and is still in widespread use in industrial refrigeration, but it has largely been replaced by compounds derived from petroleum and halogens in residential and commercial applications. Liquid oxygen is provided to hospitals for conversion to gas for patients with breathing problems, and liquid nitrogen is used in the medical field for cryosurgery, by inseminators to freeze semen, and by field and lab scientists to preserve samples. Liquefied chlorine is transported for eventual solution in water, after which it is used for water purification, sanitation of industrial waste, sewage and swimming pools, bleaching of pulp and textiles and manufacture of carbon tetrachloride, glycol and numerous other organic compounds as well as phosgene gas. Liquefaction of helium (4He) with the precooled Hampson–Linde cycle led to a Nobel Prize for Heike Kamerlingh Onnes in 1913. At ambient pressure the boiling point of liquefied helium is about 4.2 K (−269 °C). Below 2.17 K liquid 4He becomes a superfluid (Nobel Prize 1978, Pyotr Kapitsa) and shows characteristic properties such as heat conduc
https://en.wikipedia.org/wiki/Ancient%20UNIX
Ancient UNIX is any early release of the Unix code base prior to Unix System III, particularly the Research Unix releases prior to and including Version 7 (the base for UNIX/32V as well as later developments of AT&T Unix). After the publication of the Lions' book, work was undertaken to release earlier versions of the codebase. SCO first released the code under a limited educational license. Later, in January 2002, Caldera International (now SCO Group) relicensed (but has not made available) several versions under the four-clause BSD license, namely: Research Unix: (early versions only) Version 1 Unix Version 2 Unix Version 3 Unix Version 4 Unix Version 5 Unix Version 6 Unix Version 7 Unix UNIX/32V To date, there has been no widespread use of the code, but it can be used on emulator systems, and Version 5 Unix runs on the Nintendo Game Boy Advance using the SIMH PDP-11 emulator. Version 6 Unix provides the basis for the MIT xv6 teaching system, which is an update of that version to ANSI C and the x86 or RISC-V platform. The BSD vi text editor is based on code from the ed line editor in those early Unixes. Therefore, "traditional" vi could not be distributed freely, and various work-alikes (such as nvi) were created. Now that the original code is no longer encumbered, the "traditional" vi has been adapted for modern Unix-like operating systems. SCO Group, Inc. was previously called Caldera International. As a result of the SCO Group, Inc. v. Novell, Inc. case, Novell, Inc. was found to not have transferred the copyrights of UNIX to SCO Group, Inc. Concerns have been raised regarding the validity of the Caldera license. The Unix Heritage Society The Unix Heritage Society was founded by Warren Toomey. First edition Unix was restored to a usable state by a restoration team from the Unix Heritage Society in 2008. The restoration process started with paper listings of the source code which were in Unix PDP-11 assembly language.
https://en.wikipedia.org/wiki/Sensory%20analysis
Sensory analysis (or sensory evaluation) is a scientific discipline that applies principles of experimental design and statistical analysis to the use of human senses (sight, smell, taste, touch and hearing) for the purposes of evaluating consumer products. The discipline requires panels of human assessors, on whom the products are tested, and the recording of their responses. By applying statistical techniques to the results it is possible to make inferences and gain insights about the products under test. Most large consumer goods companies have departments dedicated to sensory analysis. Sensory analysis can mainly be broken down into three sub-sections: Analytical testing (dealing with objective facts about products) Affective testing (dealing with subjective facts such as preferences) Perception (the biochemical and psychological aspects of sensation) Analytical testing This type of testing is concerned with obtaining objective facts about products. This could range from basic discrimination testing (e.g. Do two or more products differ from each other?) to descriptive analysis (e.g. What are the characteristics of two or more products?). The type of panel required for this type of testing would normally be a trained panel. There are several types of sensory tests. The most classic is the sensory profile. In this test, each taster describes each product by means of a questionnaire. The questionnaire includes a list of descriptors (e.g., bitterness, acidity, etc.). The taster rates each descriptor for each product depending on the intensity of the descriptor they perceive in the product (e.g., 0 = very weak to 10 = very strong). In the method of free choice profiling, each taster builds their own questionnaire. Another family of methods, known as holistic, is focused on the overall appearance of the product. This is the case for categorization and napping. Affective testing Also known as consumer testing, this type of testing is concerned wi
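As a concrete illustration of the statistical side of discrimination testing, the sketch below applies a one-sided binomial test to a hypothetical triangle test, in which an assessor who merely guesses picks the odd sample out of three with probability 1/3. The counts are invented for illustration; the test itself is standard and uses SciPy.

```python
# Hypothetical discrimination (triangle) test: chance success rate is 1/3.
from scipy.stats import binomtest

n_assessors = 30   # illustrative numbers, not from the text
n_correct = 17

result = binomtest(n_correct, n_assessors, p=1/3, alternative="greater")
print(f"p-value = {result.pvalue:.4f}")
# A small p-value suggests the assessors can distinguish the two products.
```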
https://en.wikipedia.org/wiki/Freeform%20surface%20modelling
Freeform surface modelling is a technique for engineering freeform surfaces with a CAD or CAID system. The technology encompasses two main fields: creating aesthetic surfaces (class A surfaces) that also perform a function, for example car bodies and consumer product outer forms; and creating technical surfaces for components such as gas turbine blades and other fluid dynamic engineering components. CAD software packages use two basic methods for the creation of surfaces. The first begins with construction curves (splines) from which the 3D surface is then swept (section along guide rail) or meshed (lofted) through. The second method is direct creation of the surface with manipulation of the surface poles/control points. From these initially created surfaces, other surfaces are constructed using either derived methods such as offset or angled extensions from surfaces, or via bridging and blending between groups of surfaces. Surfaces Freeform surface, or freeform surfacing, is used in CAD and other computer graphics software to describe the skin of a 3D geometric element. Freeform surfaces do not have rigid radial dimensions, unlike regular surfaces such as planes, cylinders and conic surfaces. They are used to describe forms such as turbine blades, car bodies and boat hulls. Initially developed for the automotive and aerospace industries, freeform surfacing is now widely used in all engineering design disciplines from consumer goods products to ships. Most systems today use nonuniform rational B-spline (NURBS) mathematics to describe the surface forms; however, there are other methods such as Gordon surfaces or Coons surfaces. The forms of freeform surfaces (and curves) are not stored or defined in CAD software in terms of polynomial equations, but by their poles, degree, and number of patches (segments with spline curves). The degree of a surface determines its mathematical properties, and can be seen as representing the shape by a polynomial with variabl
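Since construction curves are fully defined by their poles, degree and knots, a small sketch can make the idea concrete. The following uses SciPy's BSpline to evaluate a cubic construction curve of the kind a surface might be swept or lofted through; the control points are arbitrary illustrative values.

```python
# Sketch: a cubic B-spline construction curve defined by poles, degree, knots.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
poles = np.array([[0, 0], [1, 2], [3, 3], [5, 1], [6, 0]], dtype=float)
# Clamped knot vector: len(knots) = n_poles + degree + 1
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1])

curve = BSpline(knots, poles, degree)
print(curve(0.25))   # a point on the curve; editing a pole reshapes it locally
```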
https://en.wikipedia.org/wiki/NetIQ%20eDirectory
eDirectory is an X.500-compatible directory service software product from NetIQ. Previously owned by Novell, the product has also been known as Novell Directory Services (NDS) and sometimes referred to as NetWare Directory Services. NDS was initially released by Novell in 1993 for Netware 4, replacing the Netware bindery mechanism used in previous versions, for centrally managing access to resources on multiple servers and computers within a given network. eDirectory is a hierarchical, object oriented database used to represent certain assets in an organization in a logical tree, including organizations, organizational units, people, positions, servers, volumes, workstations, applications, printers, services, and groups to name just a few. Features eDirectory uses dynamic rights inheritance, which allows both global and specific access controls. Access rights to objects in the tree are determined at the time of the request and are determined by the rights assigned to the objects by virtue of their location in the tree, any security equivalences, and individual assignments. The software supports partitioning at any point in the tree, as well as replication of any partition to any number of servers. Replication between servers occurs periodically using deltas of the objects. Each server can act as a master of the information it holds (provided the replica is not read only). Additionally, replicas may be filtered to only include defined attributes to increase speed (for example, a replica may be configured to only include a name and phone number for use in a corporate address book, as opposed to the entire directory user profile). The software supports referential integrity, multi-master replication, and has a modular authentication architecture. It can be accessed via LDAP, DSML, SOAP, ODBC, JDBC, JNDI, and ADSI. Supported platforms Windows 2000 Windows Server 2003 Windows Server 2008 Windows Server 2012 SUSE Linux Enterprise Server Red Hat Enterprise Li
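As an illustration of the LDAP access mentioned above, the sketch below queries a directory with the Python ldap3 library, in the spirit of the filtered "corporate address book" example. The host name, bind DN, password and search base are placeholders, not real eDirectory defaults.

```python
# Hedged sketch of reading a directory over LDAP with the ldap3 library.
from ldap3 import Server, Connection, ALL

server = Server("ldaps://edir.example.com", get_info=ALL)
conn = Connection(server, user="cn=admin,o=acme", password="secret",
                  auto_bind=True)

# Address-book style query: only a couple of attributes per person
conn.search("o=acme", "(objectClass=inetOrgPerson)",
            attributes=["cn", "telephoneNumber"])
for entry in conn.entries:
    print(entry.cn, entry.telephoneNumber)
```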
https://en.wikipedia.org/wiki/Concurrent%20computing
Concurrent computing is a form of computing in which several computations are executed concurrently—during overlapping time periods—instead of sequentially—with one completing before the next starts. This is a property of a system—whether a program, computer, or a network—where there is a separate execution point or "thread of control" for each process. A concurrent system is one where a computation can advance without waiting for all other computations to complete. Concurrent computing is a form of modular programming. In its paradigm an overall computation is factored into subcomputations that may be executed concurrently. Pioneers in the field of concurrent computing include Edsger Dijkstra, Per Brinch Hansen, and C.A.R. Hoare. Introduction The concept of concurrent computing is frequently confused with the related but distinct concept of parallel computing, although both can be described as "multiple processes executing during the same period of time". In parallel computing, execution occurs at the same physical instant: for example, on separate processors of a multi-processor machine, with the goal of speeding up computations—parallel computing is impossible on a (one-core) single processor, as only one computation can occur at any instant (during any single clock cycle). By contrast, concurrent computing consists of process lifetimes overlapping, but execution need not happen at the same instant. The goal here is to model processes in the outside world that happen concurrently, such as multiple clients accessing a server at the same time. Structuring software systems as composed of multiple concurrent, communicating parts can be useful for tackling complexity, regardless of whether the parts can be executed in parallel. For example, concurrent processes can be executed on one core by interleaving the execution steps of each process via time-sharing slices: only one process runs at a time, and if it does not complete during its time slice, it is paused, an
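A minimal sketch of this interleaving idea in Python: two threads' lifetimes overlap, and their steps interleave on a single core even though execution is never simultaneous (CPython's global interpreter lock runs only one thread's bytecode at any instant). The step counts and sleeps are illustrative.

```python
# Two concurrent tasks interleaved on one core: overlapping lifetimes,
# no simultaneous execution required.
import threading
import time

def worker(name: str) -> None:
    for step in range(3):
        print(f"{name}: step {step}")
        time.sleep(0.01)   # yield the core so the other thread can run

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
# The output interleaves A and B, modelling e.g. two clients served at once.
```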
https://en.wikipedia.org/wiki/Cosmic%20pluralism
Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the belief in numerous "worlds" (planets, dwarf planets or natural satellites) in addition to Earth (possibly an infinite number), which may harbour extraterrestrial life. The debate over pluralism began as early as the time of Anaximander (c. 610 – c. 546 BC) as a metaphysical argument, long predating the scientific Copernican conception that the Earth is one of numerous planets. It has continued, in a variety of forms, until the modern era. Ancient Greek debates In Greek times, the debate was largely philosophical and did not conform to present notions of cosmology. Cosmic pluralism was a corollary to notions of infinity, and the purported multitude of life-bearing worlds were more akin to parallel universes (either contemporaneously in space or infinitely recurring in time) than to different solar systems. After Anaximander opened the door to both an infinite universe and an infinite number of universes, a strong pluralist stance was adopted by the atomists, notably Leucippus, Democritus, Epicurus—whose Epistle to Herodotus clearly lays out the Doctrine of Innumerable Worlds—and Lucretius, who elaborates this doctrine in his work De rerum natura. Anaxarchus told Alexander the Great that there were an infinite number of worlds that each harbored an infinite variety of extraterrestrial life, leading Alexander to weep, for he had not yet conquered even one. While these were prominent thinkers, their opponents—Plato and Aristotle—had greater effect. They argued that the Earth is unique and that there can be no other systems of worlds. This stance neatly dovetailed with later Christian ideas, and pluralism was effectively suppressed for approximately a millennium. Medieval Islamic thought Many medieval Muslim scholars endorsed the idea of cosmic pluralism. Imam Muhammad al-Baqir (676–733) wrote "Maybe you see that God created only this single world and that God did not create humans besides yo
https://en.wikipedia.org/wiki/Nikolay%20Bogolyubov
Nikolay Nikolayevich Bogolyubov (21 August 1909 – 13 February 1992), also transliterated as Bogoliubov and Bogolubov, was a Soviet and Russian mathematician and theoretical physicist known for significant contributions to quantum field theory, classical and quantum statistical mechanics, and the theory of dynamical systems; he was the recipient of the 1992 Dirac Medal. Biography Early life (1909–1921) Nikolay Bogolyubov was born on 21 August 1909 in Nizhny Novgorod, Russian Empire, to Nikolay Mikhaylovich Bogolyubov, a Russian Orthodox Church priest and seminary teacher of theology, psychology and philosophy, and Olga Nikolayevna Bogolyubova, a teacher of music. The Bogolyubovs relocated to the village of Velikaya Krucha in the Poltava Governorate (now in Poltava Oblast, Ukraine) in 1919, where the young Nikolay Bogolyubov began to study physics and mathematics. The family moved to Kyiv in 1921, where they continued to live in poverty, as the elder Nikolay Bogolyubov only found a position as a priest in 1923. He attended research seminars at Kyiv University and soon started to work under the supervision of the well-known contemporary mathematician Nikolay Krylov. In 1924, at the age of 15, Nikolay Bogolyubov wrote his first published scientific paper, On the behavior of solutions of linear differential equations at infinity. In 1925 he entered the Ph.D. program at the Academy of Sciences of the Ukrainian SSR and obtained the degree of Kandidat Nauk (Candidate of Sciences, equivalent to a Ph.D.) in 1928, at the age of 19, with the doctoral thesis titled On direct methods of variational calculus. In 1930, at the age of 21, he obtained the degree of Doktor nauk (Doctor of Sciences, equivalent to Habilitation), the highest degree in the Soviet Union, which requires the recipient to have made a significant independent contribution to his or her scientific field. This early period of Bogolyubov's work in science was concerned with such mathematical problems as direct me
https://en.wikipedia.org/wiki/Fish%20flake
A fish flake is a platform built on poles and spread with boughs for drying cod on the foreshores of fishing villages and small coastal towns in Newfoundland and Nordic countries. Several spelling variations for fish flake are recorded in Newfoundland. The term's first recorded use in connection with fishing appeared in Richard Whitbourne's book Newfoundland (1623, p. 57). In Norway, a flake is known as a hjell. Construction The flake consists of a horizontal framework of small poles (called lungers), sometimes covered with spruce boughs, and supported by upright poles, the air having free access beneath. Here the cod are spread out to bleach in the sun and air after the fish has been curing all summer in stages under a heavy spreading of salt. Two types of flakes were commonly used during the height of the fishing season: one a permanent structure as described above, and the other, called a hand flake, which can be erected on short notice and provides more area in the event that a fishing season is rather bountiful. Flake and fish store The fish flake was a permanent structure that was part of the landscape of a fishing village. These were located not far from the fishing stages that were built on the shoreline. The flake was built high off the ground and required stable construction, with a sloping ramp to gain access to it. The ramp was required so that fishermen carrying a full load of fish on barrows were able to negotiate its ascent. At one end of the flake was built a building that served dual purposes. The building, called a fish store, was of two-storey construction and had to be large enough to accommodate a season's voyage of fish on the top level. In the lower section of the fish store was kept the gear that was used for fishing: buoys, ropes, killicks, grapples, and other miscellany. The top level of the store was of an open structure with areas that could be sectioned off to contain neat stacks of dried fish. As the drying proce
https://en.wikipedia.org/wiki/Applix
Applix Inc. was a computer software company founded in 1983 based in Westborough, Massachusetts that published Applix TM1, a multi-dimensional online analytical processing (MOLAP) database server, and related presentation tools, including Applix Web and Applix Executive Viewer. Together, Applix TM1, Applix Web and Applix Executive Viewer were the three core components of the Applix Business Analytics Platform. (Executive Viewer was subsequently discontinued by IBM.) On October 25, 2007, Applix was acquired by Cognos. Cognos rebranded all Applix products under its name following the acquisition. On January 31, 2008, Cognos was itself acquired by IBM. Prior to OLAP industry consolidation in 2007, Applix was the purest OLAP vendor among publicly traded independent business intelligence vendors, and had the greatest growth rate. TM1 is now marketed as IBM Cognos TM1; version 10.2 became publicly available on September 13, 2013. Products and technology Applix TM1 is enterprise planning software used for collaborative planning, budgeting and forecasting, as well as analytical and reporting applications. Data in TM1 is stored and represented as multidimensional cubes, with data being stored at the "leaf" level. Computations on the leaf data are performed in real-time (for example, to aggregate numbers up a dimensional hierarchy). IBM Cognos TM1 also includes a data orchestration environment for accessing external data and systems, as well as capabilities designed for common business planning and budgeting requirements (e.g. workflow, top-down adjustments). See also Applixware
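To make the "leaf storage plus real-time consolidation" idea concrete, here is a toy sketch — not TM1's actual API, just an illustration: leaf values are stored, while an aggregate member of a dimensional hierarchy is computed on request.

```python
# Toy illustration of consolidating leaf-level values up a hierarchy
# at request time rather than storing the aggregates.
leaves = {"Jan": 100, "Feb": 120, "Mar": 90}
hierarchy = {"Q1": ["Jan", "Feb", "Mar"]}

def consolidate(member: str) -> float:
    if member in leaves:
        return leaves[member]
    # aggregates are computed when asked for, never materialised
    return sum(consolidate(child) for child in hierarchy[member])

print(consolidate("Q1"))   # 310
```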
https://en.wikipedia.org/wiki/Hubble%20Deep%20Field%20South
The Hubble Deep Field South is a composite of several hundred individual images taken using the Hubble Space Telescope's Wide Field and Planetary Camera 2 over 10 days in September and October 1998. It followed the great success of the original Hubble Deep Field in facilitating the study of extremely distant galaxies in early stages of their evolution. While the WFPC2 took very deep optical images, nearby fields were simultaneously imaged by the Space Telescope Imaging Spectrograph (STIS) and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). Planning The rationale behind making another Deep Field image was to provide observatories in the southern hemisphere with a similarly deep optical image of the distant universe as had been provided to those in the northern hemisphere. The field chosen was in the constellation of Tucana, at a right ascension of 22h 32m 56s and a declination of −60° 33′ 02″. As with the original Hubble Deep Field (referred to hereafter as the 'HDF-N'), the target area was selected to be far from the plane of the Milky Way's galactic disk, which contains a large amount of obscuring matter, and to contain as few galactic stars as possible. However, the field is closer to the galactic plane than the HDF-N, meaning that it contains more galactic stars. It also has a nearby bright star, as well as a moderately strong radio source close by, but in both cases it was decided that these would not compromise follow-up observations. As with the HDF-N, the field lies in Hubble's Continuous Viewing Zone (CVZ), this time in the south, allowing twice the normal observing time per orbit. At specific times of year, the HST can observe this zone continuously, without it being eclipsed by the Earth. Viewing this field, however, has some issues due to passages through the South Atlantic Anomaly and also with scattered earthshine during daylight hours; the latter can be avoided by using instruments with larger sources of noise, for example from the CCD reading process, at th
https://en.wikipedia.org/wiki/Focal%20point%20%28game%20theory%29
In game theory, a focal point (or Schelling point) is a solution that people tend to choose by default in the absence of communication. The concept was introduced by the American economist Thomas Schelling in his book The Strategy of Conflict (1960). Schelling states that "(p)eople can often concert their intentions or expectations with others if each knows that the other is trying to do the same" in a cooperative situation (at page 57), so their action would converge on a focal point which has some kind of prominence compared with the environment. However, the conspicuousness of the focal point depends on time, place and the people themselves. It may not be a definite solution. Existence The existence of the focal point was first demonstrated by Schelling with a series of questions. The most famous one is the New York City question: if you are to meet a stranger in New York City, but you cannot communicate with the person, then when and where will you choose to meet? This is a coordination game, where any place and time in the city could be an equilibrium solution. Schelling asked a group of students this question, and found the most common answer was "noon at (the information booth at) Grand Central Terminal". There is nothing that makes Grand Central Terminal a location with a higher payoff (you could just as easily meet someone at a bar or the public library reading room), but its tradition as a meeting place raises its salience and therefore makes it a natural "focal point". Schelling's informal experiments were later replicated under controlled conditions, with monetary incentives, by Judith Mehta. Theories Although the concept of a focal point has been widely accepted in game theory, it is still unclear how a focal point forms. Researchers have proposed theories from two angles. The level-n theory Stahl and Wilson argue that a focal point is formed because players try to predict how other players act. They model the level of "rational expectat
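A bare-bones model of the New York City question as a coordination game: both players receive a payoff of 1 if their choices match and 0 otherwise, so every matching pair is an equilibrium and nothing in the payoffs distinguishes Grand Central. The place names and payoffs are illustrative.

```python
# A symmetric coordination game: payoff 1 for matching, 0 otherwise.
places = ["Grand Central", "public library", "a bar"]

def payoff(choice_a: str, choice_b: str) -> int:
    return 1 if choice_a == choice_b else 0

for a in places:
    for b in places:
        print(f"{a:>14} / {b:<14} -> {payoff(a, b)}")
# Every diagonal cell is an equilibrium; salience, not payoff, is what
# lets both players expect the same one.
```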
https://en.wikipedia.org/wiki/Virtual%20Library%20museums%20pages
The Virtual Library museums pages (VLmp) formed an early leading directory of online museums around the world. History The VLmp online directory resource was founded by Jonathan Bowen in 1994, originally at the Oxford University Computing Laboratory in the United Kingdom. It has been supported by the International Council of Museums (ICOM) and Museophile Limited. It formed part of the World Wide Web Virtual Library, initiated by Tim Berners-Lee and later managed by Arthur Secret. The main VLmp site moved to London South Bank University in the early 2000s and is now hosted on the MuseumsWiki wiki, established in 2006 and hosted by Fandom (previously Wikia) as a historical record. The directory was developed and organised in a distributed manner by country, with around twenty people in different countries maintaining various sections. Canada, through the Canadian Heritage Information Network (CHIN), was the first country to become involved. The MDA, later the Collections Trust, maintained the United Kingdom section of museums. The Historisches Centrum Hagen has maintained and hosted pages for Germany. Other countries actively participating included Romania. In total, around 20 countries were involved. The directory was influential in the museum field during the 1990s and 2000s. It was used as a standard starting point to find museums online. It was useful for monitoring the growth of museums internationally online. It was also used for online museum surveys. It was recommended as an educational resource and included a search facility. Virtual Museum of Computing The Virtual Museum of Computing (VMoC), part of the Virtual Library museums pages, was created as a virtual museum providing information on the history of computers and computer science. It included virtual "galleries" (e.g., on Alan Turing, curated by Andrew Hodges) and links to other computer museums. VMoC was founded in 1995, initially at the University of Oxford. As part of VLmp, it was hosted by the Internati
https://en.wikipedia.org/wiki/Seminaphtharhodafluor
Seminaphtharhodafluor or SNARF is a fluorescent dye that changes color with pH. It can be used to construct optical biosensors that use enzymes that change pH. The absorption peak of the derivative carboxy-SNARF at pH 6.0 is at wavelength (515 and) 550 nm, while that at pH 9.0 is at 575 nm. The emission peak of carboxy-SNARF at pH 6.0 is at wavelength 585 nm, while that at pH 9.0 is at 640 nm. SNARF-1 can serve as a substrate for the MRP1 (multidrug resistance-associated protein-1) drug transporter, to measure the activity of the MRP1 transporter. For this purpose, an acetomethoxyester group is added to SNARF-1. Cellular esterases cleave off SNARF-1, and its transport out of the cells can be measured by following the loss of fluorescence from the cells.
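Because the emission peak shifts between the acidic and basic forms, SNARF is commonly read out ratiometrically: the intensity ratio at the two emission peaks tracks pH independently of dye concentration. Below is a simplified sketch of such a calculation; the pKa and limiting ratios are hypothetical calibration values, and the usual correction factor for the denominator channel is omitted for brevity.

```python
# Simplified ratiometric pH readout from two emission channels (~585/~640 nm).
import math

pKa = 7.5                  # assumed apparent pKa, for the sketch only
R_acid, R_base = 3.0, 0.2  # assumed limiting 585/640 intensity ratios

def ph_from_ratio(r: float) -> float:
    # Two-state Henderson-Hasselbalch form; valid for R_base < r < R_acid
    return pKa + math.log10((R_acid - r) / (r - R_base))

print(ph_from_ratio(1.0))  # ~7.9 with these illustrative constants
```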
https://en.wikipedia.org/wiki/DATAR
DATAR, short for Digital Automated Tracking and Resolving, was a pioneering computerized battlefield information system. DATAR combined the data from all of the sensors in a naval task force into a single "overall view" that was then transmitted back to all of the ships and displayed on plan-position indicators similar to radar displays. Commanders could then see information from everywhere, not just their own ship's sensors. Development of the DATAR system was spurred by the Royal Navy's work on the Comprehensive Display System (CDS), which Canadian engineers were familiar with. The project was started by the Royal Canadian Navy in partnership with Ferranti Canada (later known as Ferranti-Packard) in 1949. They were aware of CDS and a US Navy project along similar lines but believed their solution was so superior that they would eventually be able to develop the system on behalf of all three forces. They also believed sales were possible to the Royal Canadian Air Force and US Air Force for continental air control. A demonstration carried out in the fall of 1953 was by most measures an unqualified success, to the point where some observers thought it was being faked. By this time the US Air Force was well into development of their SAGE system and the RCAF decided that commonality with that force was more important than commonality with their own Navy. The Royal Navy computerized their CDS in the new Action Data Automation system, and the US Navy decided on a somewhat simpler system, the Naval Tactical Data System. No orders for DATAR were forthcoming. When one of the two computers was destroyed by fire, the company was unable to raise funds for a replacement, and the project ended. The circuitry design used in the system would be applied to several other Ferranti machines over the next few years. History Canadian Navy during the War At the Atlantic Convoy Conference of 1943, Canada was given shared control of all convoys running between the British Isles and No
https://en.wikipedia.org/wiki/Confit
Confit (from the French word confire, literally "to preserve") is any type of food that is cooked slowly over a long period as a method of preservation. As a cooking term, confit describes food cooked in grease or oil at a lower temperature, as opposed to deep frying. While deep frying typically takes place at temperatures of 160–230 °C (325–450 °F), confit preparations are done at a much lower temperature, such as an oil temperature of around 90 °C (200 °F), or sometimes even cooler. The term is usually used in modern cuisine to mean long, slow cooking in oil or fat at low temperatures, often with no element of preservation, as in dishes like confit potatoes. For meat, this method requires the meat to be salted as part of the preservation process. After salting and cooking in fat, confit can last for several months or years when sealed and stored in a cool, dark place. Confit is a specialty of southwestern France. Etymology The word comes from the French verb confire (to preserve), which in turn comes from the Latin word conficere, meaning "to do, to produce, to make, to prepare." The French verb was first applied in medieval times to fruits cooked and preserved in sugar. Fruit confit Fruit confit are candied fruit (whole fruit, or pieces thereof) preserved in sugar. The fruit must be fully infused with sugar to its core; larger fruit takes considerably longer than smaller ones to candy. Thus, while small fruit such as cherries are candied whole, it is quite rare to see whole large fruit confits, such as melons, since the time and energy involved in producing them makes them quite expensive. Meat confit Confit of goose (confit d'oie) and duck (confit de canard) are usually prepared from the legs of the bird. The meat is salted and seasoned with herbs and slowly cooked submerged in its own rendered fat (never allowed to exceed about 85 °C or 185 °F), in which it is then preserved by allowing it to cool and storing it in the fat. Turkey and pork may be treated in the same manner. Meat confit
https://en.wikipedia.org/wiki/Prefoldin%20subunit%203
Prefoldin subunit 3 (VBP-1), also von Hippel–Lindau binding protein 1, is a prefoldin chaperone protein that binds to the von Hippel–Lindau protein and transports it from perinuclear granules to the nucleus or cytoplasm inside the cell. It is also involved in transporting nascent polypeptides to cytosolic chaperonins for post-translational folding. VBP-1 is a 197-amino-acid protein and a member of the prefoldin-α subunit family; the prefoldin complex in which it assembles is a heterohexamer comprising two prefoldin-α and four prefoldin-β subunits. It is ubiquitously expressed in tissues, and is located in the cell nucleus and cytoplasm. The VBP1 gene is located at Xq28. Homologues are known to exist between human VBP-1 and proteins in mice, Drosophila and C. elegans. See also Von Hippel–Lindau tumor suppressor Von Hippel–Lindau disease Eugen von Hippel Arvid Lindau
https://en.wikipedia.org/wiki/Formal%20Aspects%20of%20Computing
Formal Aspects of Computing (FAOC) is a peer-reviewed scientific journal covering the area of formal methods and associated topics in computer science. The editor-in-chief is Jim Woodcock. According to the Journal Citation Reports, the journal has a 2010 impact factor of 1.170. The journal was published by Springer Science+Business Media until 2021; it is now published by the Association for Computing Machinery (ACM). See also Acta Informatica Innovations in Systems and Software Engineering
https://en.wikipedia.org/wiki/Horse%20mill
A horse mill is a mill, sometimes used in conjunction with a watermill or windmill, that uses a horse engine as the power source. Any milling process can be powered in this way, but the most frequent use of animal power in horse mills was for grinding grain and pumping water. Other animal engines for powering mills are powered by dogs, donkeys, oxen or camels. Treadwheels are engines powered by humans. History The donkey or horse-driven rotary mill was a 4th-century BC Carthaginian invention, with possible origins in Carthaginian Sardinia. Two Carthaginian animal-powered millstones built using red lava from Carthaginian-controlled Mulargia in Sardinia were found in a 375–350 BC shipwreck near Mallorca. The mill spread to Sicily, arriving in Italy in the 3rd century BC. The Carthaginians used hand-powered rotary mills as early as the 6th century BC, and the use of the rotary mill in Spanish lead and silver mines may have contributed to the rise of the larger, animal-powered mill. It freed the miller from most of the heavy burden of his task and greatly increased output through the superior stamina of horses and donkeys over humans. Gallery Horse mill at Beamish Museum This horse mill has not been used since about 1830 when it was superseded by portable engines. It was rescued from Berwick Hills Low Farm in Northumberland by the museum, repaired and set up in their own gin gang at Home Farm as a non-functioning exhibit. The top of the mill's main vertical axle and the end of the main drive shaft are pivoted at the centre of their own separate tie beam which is below and parallel with the main roof tie beam, and set in the gin gang's side walls at either end. The mill's tie beam has to be stabilised with two massive oak beams which run, either side of the drive shaft, from tie beam to threshing barn wall. A large and basic engine like this can create great stresses from the torque engendered. Mill construction This four-horse mill is based on a central, vertical,
https://en.wikipedia.org/wiki/TCP%20tuning
TCP tuning techniques adjust the network congestion avoidance parameters of Transmission Control Protocol (TCP) connections over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 10 times faster in some cases. However, blindly following instructions without understanding their real consequences can hurt performance as well. Network and system characteristics Bandwidth-delay product (BDP) Bandwidth-delay product (BDP) is a term primarily used in conjunction with TCP to refer to the number of bytes necessary to fill a TCP "path", i.e. it is equal to the maximum number of simultaneous bits in transit between the transmitter and the receiver. High performance networks have very large BDPs. To give a practical example, two nodes communicating over a geostationary satellite link with a round-trip delay time (or round-trip time, RTT) of 0.5 seconds and a bandwidth of 10 Gbit/s can have up to 0.5×10¹⁰ bits, i.e., 5 Gbit = 625 MB of unacknowledged data in flight. Despite having much lower latencies than satellite links, even terrestrial fiber links can have very high BDPs because their link capacity is so large. Operating systems and protocols designed as recently as a few years ago when networks were slower were tuned for BDPs of orders of magnitude smaller, with implications for limited achievable performance. Buffers The original TCP configurations supported TCP receive window size buffers of up to 65,535 (64 KiB − 1) bytes, which was adequate for slow links or links with small RTTs. Larger buffers are required by the high performance options described below. Buffering is used throughout high performance network systems to handle delays in the system. In general, buffer size will need to be scaled proportionally to the amount of data "in flight" at any time. For very high performance applications that are not sensitive to network delays, it is possible to interpose large end to end buffering delays by putting in intermediate data storage
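The satellite example can be reproduced in a few lines; the sketch below simply restates the arithmetic.

```python
# Bandwidth-delay product for the geostationary-satellite example above.
bandwidth_bps = 10e9   # 10 Gbit/s
rtt_s = 0.5            # round-trip time in seconds

bdp_bits = bandwidth_bps * rtt_s
print(f"{bdp_bits:.0f} bits = {bdp_bits / 8 / 1e6:.0f} MB in flight")
# 5000000000 bits = 625 MB -- far beyond the classic 64 KiB receive window
```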
https://en.wikipedia.org/wiki/Mammoth%20steppe
During the Last Glacial Maximum, the mammoth steppe, also known as steppe-tundra, was once the Earth's most extensive biome. It stretched east-to-west, from the Iberian Peninsula in the west of Europe, across Eurasia to North America, through Beringia (what is today Alaska) and Canada; from north-to-south, the steppe reached from the arctic islands southward to China. The mammoth steppe was cold and dry, and relatively featureless, though topography and geography varied considerably throughout. Some areas featured rivers which, through erosion, naturally created gorges, gulleys, or small glens. The continual glacial recession and advancement over millennia contributed more to the formation of larger valleys and different geographical features. Overall, however, the steppe is known to be flat and expansive grassland. The vegetation was dominated by palatable, high-productivity grasses, herbs and willow shrubs. The animal biomass was dominated by species such as reindeer, muskox, saiga antelope, steppe bison, horses, woolly rhinoceros and woolly mammoth. These herbivores, in turn, were followed and preyed upon by various carnivores, such as brown bears, Panthera spelaea (the cave or steppe-lion), scimitar cats, wolverines and wolves, among others. This ecosystem covered wide areas of the northern part of the globe, thrived for approximately 100,000 years without major changes, but then diminished to small regions around 12,000 years ago. Modern humans began to inhabit the biome following their expansion out of Africa, reaching the Arctic Circle in Northeast Siberia by about 32,000 years ago. Naming At the end of the 19th century, Alfred Nehring (1890) and Jan Czerski (Iwan Dementjewitsch Chersky, 1891) proposed that during the last glacial period a major part of northern Europe had been populated by large herbivores and that a steppe climate had prevailed there. In 1982, the scientist R. Dale Guthrie coined the term "mammoth steppe" for this paleoregion. Origin
https://en.wikipedia.org/wiki/ISPM%2015
International Standards For Phytosanitary Measures No. 15 (ISPM 15) is an International Phytosanitary Measure developed by the International Plant Protection Convention (IPPC) that directly addresses the need to treat wood materials of a thickness greater than 6mm, used to ship products between countries. Its main purpose is to prevent the international transport and spread of disease and insects that could negatively affect plants or ecosystems. ISPM 15 affects all wood packaging material (pallets, crates, dunnages, etc.) and requires that they be debarked and then heat treated or fumigated with methyl bromide, and stamped or branded with a mark of compliance. This mark of compliance is colloquially known as the "wheat stamp". Products exempt from the ISPM 15 are made from an alternative material, like paper, plastic or wood panel products (i.e. OSB, hardboard, and plywood). ISPM 15 revision The Revision of ISPM No. 15 (2009) under Annex 1, requires that wood used to manufacture ISPM 15 compliant Wood Packaging must be made from debarked wood not to be confused with bark free wood. ISPM 15 was updated to adopt the bark restriction regulations proposed by the European Union in 2009. Australia held out for approximately one year with more stringent bark restrictions before conforming July 1, 2010 Debarked wood packaging Wood packaging materials must be debarked prior to being heat treated or fumigated to meet ISPM 15 regulations. The debarking component of the regulation is to prevent the re-infestation of insects while lumber is sitting to be manufactured, or even after it has been manufactured. The official definition for debarked lumber according to the ISPM 15 Revision (2009) is: "Irrespective of the type of treatment applied, wood packaging material must be made of debarked wood. For this standard, any number of visually separate and clearly distinct small pieces of bark may remain if they are: - less than 3 cm in width (regardless of the length) or - g
https://en.wikipedia.org/wiki/Two-element%20Boolean%20algebra
In mathematics and abstract algebra, the two-element Boolean algebra is the Boolean algebra whose underlying set (or universe or carrier) B is the Boolean domain. The elements of the Boolean domain are 1 and 0 by convention, so that B = {0, 1}. Paul Halmos's name for this algebra, "2", has some following in the literature, and will be employed here. Definition B is a partially ordered set and the elements of B are also its bounds. An operation of arity n is a mapping from Bⁿ to B. Boolean algebra consists of two binary operations and unary complementation. The binary operations have been named and notated in various ways. Here they are called 'sum' and 'product', and notated by infix '+' and '∙', respectively. Sum and product commute and associate, as in the usual algebra of real numbers. As for the order of operations, brackets are decisive if present. Otherwise '∙' precedes '+'. Hence x + y∙z is parsed as x + (y∙z) and not as (x + y)∙z. Complementation is denoted by writing an overbar over its argument. The numerical analog of the complement of x is 1 − x. In the language of universal algebra, a Boolean algebra is a ⟨B, +, ∙, ¯, 1, 0⟩ algebra of type ⟨2, 2, 1, 0, 0⟩. Either one-to-one correspondence between {0,1} and {True,False} yields classical bivalent logic in equational form, with complementation read as NOT. If 1 is read as True, '+' is read as OR, and '∙' as AND, and vice versa if 1 is read as False. These two operations define a commutative semiring, known as the Boolean semiring. Some basic identities 2 can be seen as grounded in the following trivial "Boolean" arithmetic: 1 + 1 = 1 + 0 = 0 + 1 = 1, 0 + 0 = 0, 1∙1 = 1, 1∙0 = 0∙1 = 0∙0 = 0, and the complement of 1 is 0 while the complement of 0 is 1. Note that: '+' and '∙' work exactly as in numerical arithmetic, except that 1+1=1. '+' and '∙' are derived by analogy from numerical arithmetic; simply set any nonzero number to 1. Swapping 0 and 1, and '+' and '∙' preserves truth; this is the essence of the duality pervading all Boolean algebras. This Boolean arithmetic suffices to verify any equation of 2, including the axioms, by examining every possible assignment of 0s and 1s to each var
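The closing remark — that any equation of 2 can be verified by examining every assignment of 0s and 1s — is small enough to demonstrate directly. The sketch below brute-forces a distributive law and both De Morgan laws; the choice of identities is illustrative.

```python
# Verifying identities of "2" by checking every assignment of 0s and 1s.
from itertools import product

s = lambda a, b: a | b   # 'sum'  (+): note 1 + 1 = 1
p = lambda a, b: a & b   # 'product' (.)
c = lambda a: 1 - a      # complementation (overbar)

# Distributivity of + over . , then both De Morgan laws
assert all(s(x, p(y, z)) == p(s(x, y), s(x, z))
           for x, y, z in product((0, 1), repeat=3))
assert all(c(s(x, y)) == p(c(x), c(y)) for x, y in product((0, 1), repeat=2))
assert all(c(p(x, y)) == s(c(x), c(y)) for x, y in product((0, 1), repeat=2))
print("all identities hold on {0, 1}")
```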
https://en.wikipedia.org/wiki/Single-responsibility%20principle
The single-responsibility principle (SRP) is a computer programming principle that states that "A module should be responsible to one, and only one, actor." The term actor refers to a group (consisting of one or more stakeholders or users) that requires a change in the module. Robert C. Martin, the originator of the term, expresses the principle as, "A class should have only one reason to change". Because of confusion around the word "reason", he has also clarified that the "principle is about people." In some of his talks, he also argues that the principle is, in particular, about roles or actors. For example, while they might be the same person, the role of an accountant is different from that of a database administrator. Hence, each module should be responsible to a single role. History The term was introduced by Robert C. Martin in his article "The Principles of OOD" as part of his Principles of Object Oriented Design, made popular by his 2003 book Agile Software Development, Principles, Patterns, and Practices. Martin described it as being based on the principle of cohesion, as described by Tom DeMarco in his book Structured Analysis and System Specification, and by Meilir Page-Jones in The Practical Guide to Structured Systems Design. In 2014 Martin published a blog post titled "The Single Responsibility Principle" with the goal of clarifying what was meant by the phrase "reason for change." Example Martin defines a responsibility as a reason to change, and concludes that a class or module should have one, and only one, reason to be changed (e.g. rewritten). As an example, consider a module that compiles and prints a report. Imagine such a module can be changed for two reasons. First, the content of the report could change. Second, the format of the report could change. These two things change for different causes. The single-responsibility principle says that these two aspects of the problem are really two separate responsibilities, and should, therefore, be in separ
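A minimal sketch of the report example in Python (class and method names are illustrative): the two reasons to change live in two separate classes.

```python
# The report example split along its two reasons to change.
class ReportCompiler:
    """Owns the content -- answers to the actors who define the report."""
    def compile(self, data: dict) -> list[str]:
        return [f"{key}: {value}" for key, value in data.items()]

class ReportPrinter:
    """Owns the format -- answers to the actors who consume the layout."""
    def render(self, lines: list[str]) -> str:
        return "\n".join(lines)

lines = ReportCompiler().compile({"revenue": 1200, "costs": 800})
print(ReportPrinter().render(lines))
# A change in content touches only ReportCompiler; a change in layout
# touches only ReportPrinter.
```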
https://en.wikipedia.org/wiki/Trusted%20path
A trusted path or trusted channel is a mechanism that provides confidence that the user is communicating with what the user intended to communicate with, ensuring that attackers cannot intercept or modify whatever information is being communicated. The term was initially introduced by the Orange Book. As a security architecture concept, it can be implemented with any technical safeguards suitable for a particular environment and risk profile. Examples Electronic signature In Common Criteria and European Union electronic signature standards, trusted path and trusted channel describe techniques that prevent interception of or tampering with sensitive data as it passes through various system components: trusted path — protects data between the user and a security component (e.g. a PIN sent to a smart card to unblock it for digital signature), trusted channel — protects data between a security component and other information resources (e.g. data read from a file and sent to the smart card for signature). User login One of the popular techniques for password stealing in Microsoft Windows was login spoofing, which was based on programs that simulated the operating system's login prompt. When users tried to log in, the fake login program could capture their passwords for later use. As a safeguard, Windows NT introduced the Ctrl-Alt-Del sequence as a secure attention key to escape any third-party program and invoke the system login prompt. A similar problem arises with websites requiring authentication, where the user is expected to enter their credentials without actually knowing whether the website is spoofed. HTTPS mitigates this attack by first authenticating the server to the user (using a trust anchor and the certification path validation algorithm), and only then displaying the login form.
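As a small illustration of the HTTPS mitigation, the client below validates the server's certificate chain and host name during the TLS handshake, before the credentials ever leave the machine. The URL and form fields are placeholders.

```python
# Server is authenticated first; only then are credentials transmitted.
import requests

resp = requests.post(
    "https://login.example.com/session",
    data={"user": "alice", "password": "correct horse"},
    verify=True,   # default: validate the certificate via trust anchors
    timeout=10,
)
resp.raise_for_status()
# If certificate or host-name validation fails, requests raises an SSL
# error and the POST body (with the password) is never sent.
```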
https://en.wikipedia.org/wiki/Zugunruhe
Zugunruhe (German: [ˈtsuːkʊnˌʁuːə]; lit. 'migration anxiety') is the experience of migratory restlessness. Ethology In ethology, Zugunruhe describes anxious behavior in migratory animals, especially in birds during the normal migration period. When these animals are enclosed, such as in an Emlen funnel, Zugunruhe serves to study the seasonal cycles of the migratory syndrome. Zugunruhe involves increased activity towards and after dusk, with changes in the normal sleep pattern. "In accordance with their inherited calendars, birds get an urge to move. When migratory birds are held in captivity, they hop about, flutter their wings and flit from perch to perch just as birds of the same species are migrating in the wild. The caged birds 'know' they should be travelling too. This migratory restlessness, or Zugunruhe, was first described by Johann Andreas Naumann…[who] interpreted Zugunruhe to be an expression of the migratory instinct in birds." --William Fiennes, 'The Snow Geese' Etymology Zugunruhe is borrowed from German; it is a German compound consisting of Zug (move, migration) and Unruhe (anxiety, restlessness). The word was first published in 1707, when it was used to describe the "inborn migratory urge" in captive migrants. Though common nouns are normally not capitalised in English, Zugunruhe is sometimes capitalised following the German convention. Effect Zugunruhe has been artificially induced in experiments by simulating long days. Some studies on White-crowned Sparrows have suggested that prolactin is involved in the pre-migratory hyperphagia (feeding), fattening and Zugunruhe. However, others have found that prolactin may merely be associated with lipogenesis (fat accumulation). Researchers have been able to study the endocrine controls and navigational mechanisms associated with migration by studying Zugunruhe. The phenomenon of Zugunruhe was generally believed to be found only in migratory species; however, a study of a
https://en.wikipedia.org/wiki/AM%20expanded%20band
The extended mediumwave broadcast band, commonly known as the AM expanded band, refers to the broadcast station frequency assignments immediately above the earlier upper limits of 1600 kHz in International Telecommunication Union (ITU) Region 2 (the Americas), and 1602 kHz in ITU Regions 1 (Europe, northern Asia and Africa) and 3 (southern Asia and Oceania). In Region 2, this consists of ten additional frequencies, spaced 10 kHz apart, and running from 1610 kHz to 1700 kHz. In Regions 1 and 3, where frequency assignments are spaced 9 kHz apart, the result is eleven additional frequencies, from 1611 kHz to 1701 kHz. ITU Region 1 Europe The extended band is not officially allocated in Europe, and the trend of national broadcasters in the region has been to reduce the number of their AM band stations in favor of FM and digital transmissions. However, new Low-Power AM (LPAM) stations have recently come on the air from countries like Finland, Sweden, Norway, Denmark, the Netherlands and Italy. These frequencies are also used by a number of "hobby" pirate radio stations, particularly in the Netherlands, Greece, and Serbia. Vatican Radio for many years transmitted on 1611 kHz, before ceasing broadcasts on this frequency in 2012. Since 2014 a licensed Norwegian project has been broadcasting both Radio Northern Star and The Sea on 1611 kHz. ITU Region 2 In 1979, a World Administrative Radio Conference (WARC-79) adopted "Radio Regulation No. 480", which stated that "In Region 2, the use of the band 1605-1705 kHz by stations of the broadcasting service shall be subject to a plan to be established by a regional administrative radio conference..." As a consequence, on June 8, 1988 an ITU-sponsored conference held at Rio de Janeiro, Brazil adopted provisions, effective July 1, 1990, to extend the upper end of the Region 2 AM broadcast band, by adding ten frequencies which spanned from 1610 kHz to 1700 kHz. The agreement provided for a standard transmitter power of 1 kilowa
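The two channel rasters follow directly from the spacing rules and can be generated mechanically, as in this short sketch.

```python
# The expanded-band channel rasters: 10 kHz spacing in Region 2,
# 9 kHz spacing in Regions 1 and 3.
region_2 = list(range(1610, 1701, 10))    # ten frequencies, 1610..1700 kHz
regions_1_3 = list(range(1611, 1702, 9))  # eleven frequencies, 1611..1701 kHz

print(len(region_2), region_2)
print(len(regions_1_3), regions_1_3)
```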
https://en.wikipedia.org/wiki/Class%20A%20surface
In automotive design, a class A surface is any of a set of freeform surfaces of high efficiency and quality. Strictly speaking, the term means only that the surfaces meet with curvature and tangency alignment, to ideal aesthetic reflection quality; however, many people interpret class A surfaces as having G2 (or even G3) curvature continuity to one another (see freeform surface modelling). Class A surfacing is done using computer-aided industrial design applications. Class A surface modellers are also called "digital sculptors" in the industry. Industrial designers develop their design styling through the A-surface: the physical surface the end user can feel, touch and see. Application A common method of working is to start with a prototype model and produce smooth mathematical class A surfaces to describe the product's outer body. From this, the production of tools and inspection of finished parts can be carried out. Class A surfacing complements the prototype modelling stage by reducing time and increasing control over design iterations. A class A surface can be defined as any surface that has styling intent, is either seen, touched, or both, and mathematically meets the definition of a Bézier surface. Automotive design application In automotive design, class A surfaces are created on all visible exterior surfaces (e.g. body panels, bumper, grille, lights) and all visible surfaces of see-touch-and-feel parts in the interior (e.g. dashboard, seats, door pads). This can also include beauty covers in the engine compartment, mud flaps, trunk panels and carpeting. Product design application In the product design realm, class A surfacing can be applied to such things as housings for industrial appliances that are injection moulded, home appliances, highly aesthetic plastic packaging defined by highly organic surfaces, toys or furniture. Among the most famous users of Autodesk Alias software in product design is Apple. Aerospace design application Aerospace
https://en.wikipedia.org/wiki/Investors%20in%20People
Investors in People is a standard for people management, offering accreditation to organisations that adhere to the Investors in People Standard. From 1991 to January 2017, Investors in People was owned by the UK government. As of 1 February 2017, Investors in People transitioned into the Investors in People Community Interest Company. Investors in People assessments are conducted locally through local Delivery Centres across the UK and internationally. History In 1990 the Department of Employment was tasked with developing a national standard of good practice for training and development. Investors in People was born and officially launched at that year's CBI Conference in Glasgow by then Secretary of State for Employment, the Rt Hon Michael Howard QC MP. Investors in People UK was formed in 1991 to protect the integrity of the Investors in People framework. It was a non-departmental public body and received funding from the former UK Department for Business, Innovation and Skills (BIS). The organisation was based in London, United Kingdom and managed the development, policy, promotion and quality assurance of the Investors in People framework. From April 2010 the work of the organisation was transferred to UKCES (UK Commission for Employment and Skills) which was responsible for the strategic ownership of Investors in People until January 2017, when the organisation transitioned into a Community Interest Company on 1 February 2017. The Sixth Generation Standard In September 2015, Investors in People launched the sixth generation standard, evolved from the previous fifth generation to keep pace with modern practices. The current framework reflects the latest workplace trends, leading practices and employee conditions required to create outperforming teams. The latest framework focuses on three key areas: leading, supporting and improving. Within these sit nine performance indicators based on the features of organisations that consistently outperform industry n
https://en.wikipedia.org/wiki/Japanese%20theorem%20for%20cyclic%20polygons
In geometry, the Japanese theorem states that no matter how one triangulates a cyclic polygon, the sum of inradii of triangles is constant. Conversely, if the sum of inradii is independent of the triangulation, then the polygon is cyclic. The Japanese theorem follows from Carnot's theorem; it is a Sangaku problem. Proof This theorem can be proven by first proving a special case: no matter how one triangulates a cyclic quadrilateral, the sum of inradii of triangles is constant. After proving the quadrilateral case, the general case of the cyclic polygon theorem is an immediate corollary. The quadrilateral rule can be applied to quadrilateral components of a general partition of a cyclic polygon, and repeated application of the rule, which "flips" one diagonal, will generate all the possible partitions from any given partition, with each "flip" preserving the sum of the inradii. The quadrilateral case follows from a simple extension of the Japanese theorem for cyclic quadrilaterals, which shows that a rectangle is formed by the two pairs of incenters corresponding to the two possible triangulations of the quadrilateral. The steps of this theorem require nothing beyond basic constructive Euclidean geometry. With the additional construction of a parallelogram having sides parallel to the diagonals, and tangent to the corners of the rectangle of incenters, the quadrilateral case of the cyclic polygon theorem can be proved in a few steps. The equality of the sums of the radii of the two pairs is equivalent to the condition that the constructed parallelogram be a rhombus, and this is easily shown in the construction. Another proof of the quadrilateral case is available due to Wilfred Reyes (2002). In the proof, both the Japanese theorem for cyclic quadrilaterals and the quadrilateral case of the cyclic polygon theorem are proven as a consequence of Thébault's problem III. See also Carnot's theorem, which is used in a proof of the theorem above Equal incircles th
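The quadrilateral case is easy to check numerically: take four points on a circle and compare the inradius sums of the two triangulations. The sketch below does this for one arbitrary cyclic quadrilateral, using the standard inradius formula r = area/s, where s is the semiperimeter.

```python
# Numerical check: both triangulations of a cyclic quadrilateral give the
# same sum of inradii.
from math import cos, sin, dist

def inradius(a, b, c):
    # r = area / semiperimeter
    x, y, z = dist(b, c), dist(a, c), dist(a, b)
    s = (x + y + z) / 2
    area = abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2
    return area / s

angles = [0.3, 1.4, 2.9, 5.1]               # arbitrary points on the unit circle
A, B, C, D = [(cos(t), sin(t)) for t in angles]

sum1 = inradius(A, B, C) + inradius(A, C, D)   # triangulation by diagonal AC
sum2 = inradius(A, B, D) + inradius(B, C, D)   # triangulation by diagonal BD
print(sum1, sum2)   # equal up to floating-point error
```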
https://en.wikipedia.org/wiki/Source-specific%20multicast
Source-specific multicast (SSM) is a method of delivering multicast packets in which the only packets that are delivered to a receiver are those originating from a specific source address requested by the receiver. By so limiting the source, SSM reduces demands on the network and improves security. SSM requires that the receiver specify the source address, and it explicitly excludes the use of the (*,G) "any source" join of RFC 3376; such source-specific joins are possible only with IPv4's IGMPv3 and IPv6's MLDv2. Any-source multicast (as counterexample) Source-specific multicast is best understood in contrast to any-source multicast (ASM). In the ASM service model a receiver expresses interest in traffic to a multicast address. The multicast network must discover all multicast sources sending to that address, and route data from all sources to all interested receivers. This behavior is particularly well suited to groupware applications where all participants in the group want to be aware of all other participants, and the list of participants is not known in advance. The source discovery burden on the network can become significant when the number of sources is large. Operation In the SSM service model, in addition to the receiver expressing interest in traffic to a multicast address, the receiver expresses interest in receiving traffic from only one specific source sending to that multicast address. This relieves the network of discovering many multicast sources and reduces the amount of multicast routing information that the network must maintain. SSM requires support in last-hop routers and in the receiver's operating system. SSM support is not required in other network components, including other routers and even the sending host. Interest in multicast traffic from a specific source is conveyed from hosts to routers using IGMPv3, as specified for SSM in RFC 4607. SSM destination addresses must be in the range 232.0.0.0/8 for IPv4. For IPv6 current allowed SSM destination addre
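As a concrete illustration of the receiver side, the sketch below performs an (S,G) join in Python. It is a minimal example, not from the source: the group and source addresses are placeholders, and it assumes Linux, where IP_ADD_SOURCE_MEMBERSHIP is socket option 39 and struct ip_mreq_source is laid out as group, interface, source (the layout differs on Windows).

```python
# Minimal SSM (S,G) join on Linux: ask the kernel to deliver only
# datagrams sent by SOURCE to GROUP. Addresses are illustrative.
import socket
import struct

GROUP = "232.1.1.1"    # from the IPv4 SSM range 232.0.0.0/8
SOURCE = "192.0.2.10"  # the one permitted sender (example address)
PORT = 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Older Pythons may not export this constant; 39 is the Linux value.
IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

# Linux layout of struct ip_mreq_source: group, interface, source.
mreq = struct.pack(
    "4s4s4s",
    socket.inet_aton(GROUP),
    socket.inet_aton("0.0.0.0"),  # 0.0.0.0: let the kernel pick the interface
    socket.inet_aton(SOURCE),
)
sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(65535)  # only datagrams from SOURCE arrive
print(f"received {len(data)} bytes from {addr[0]}")
```

Once the option is set, the kernel's IGMPv3 report announces the (S,G) interest to the last-hop router, and the socket sees traffic from SOURCE only.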
https://en.wikipedia.org/wiki/Multicast%20Listener%20Discovery
Multicast Listener Discovery (MLD) is a component of the Internet Protocol Version 6 (IPv6) suite. MLD is used by IPv6 routers for discovering multicast listeners on a directly attached link, much as the Internet Group Management Protocol (IGMP) is used in IPv4. The protocol is embedded in ICMPv6 instead of using a separate protocol. MLDv1 is similar to IGMPv2, and MLDv2 is similar to IGMPv3. Protocol MLD messages are carried as ICMPv6 messages; an MLDv2 multicast listener report uses ICMPv6 type 143. Support Several operating systems support MLDv2: Windows Vista and later, FreeBSD since release 8.0, the Linux kernel since 2.5.68, and macOS
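A host does not normally construct MLD messages itself; joining an IPv6 multicast group through the sockets API prompts the kernel to send the listener report (an MLDv2 report on the systems listed above). A minimal Python sketch follows; the group address and port are illustrative.

```python
# Join an IPv6 multicast group; the kernel emits the MLD listener
# report on our behalf when IPV6_JOIN_GROUP is set.
import socket
import struct

GROUP = "ff15::1234"  # transient, site-local-scope group (illustrative)
PORT = 5001

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("::", PORT))

# struct ipv6_mreq: 16-byte group address + interface index (0 = any).
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", 0)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)

data, addr = sock.recvfrom(65535)
print(f"received {len(data)} bytes from {addr[0]}")
```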
https://en.wikipedia.org/wiki/ReserVec
ReserVec was a computerized reservation system developed by Ferranti Canada for Trans-Canada Airlines (TCA, today's Air Canada) in the late 1950s. It appears to be the first such system ever developed, predating the more famous SABRE system in the United States by about two years. Although Ferranti had high hopes that the system would be used by other airlines, no further sales were forthcoming and development of the system ended. Major portions of the transistor-based circuit design were put to good use in the Ferranti-Packard 6000 computer, which would later go on to see major sales in Europe as the ICT 1904. Background In the early 1950s the airline industry was undergoing explosive growth. A serious limiting factor was the time taken to make a single booking, which could take upwards of 90 minutes in total. TCA found that its bookings typically involved between three and seven calls to the centralized booking centre in Toronto, where telephone operators would scan flight status displayed on a huge board showing all scheduled flights one month into the future. Bookings beyond that time could not be made, nor could an agent reliably know anything other than whether the flight was full – booking two seats was much more complex, requiring the operator to find the "flight card" for that flight in a filing cabinet. In 1946 American Airlines decided to tackle this problem through automation, introducing the Reservisor, a simple electromechanical computer based on telephone switching systems. Newer versions of the Reservisor included magnetic drum systems for storing flight information further into the future. The ultimate version of the system, the Magnetronic Reservisor, was installed in 1956 and could store data for 2,000 flights a day up to one month into the future. Reservisors were later sold to a number of airlines, as well as to Sheraton for hotel bookings and to Goodyear for inventory control. TCA experiments TCA was aware of the Reservisor, but was unimpressed by i