https://en.wikipedia.org/wiki/Plus%20construction
In mathematics, the plus construction is a method for simplifying the fundamental group of a space without changing its homology and cohomology groups. Explicitly, if X is a based connected CW complex and P is a perfect normal subgroup of π1(X), then a map f: X → Y is called a +-construction relative to P if f induces an isomorphism on homology, and P is the kernel of π1(X) → π1(Y). The plus construction was introduced by Michel Kervaire, and was used by Daniel Quillen to define algebraic K-theory. Given a perfect normal subgroup of the fundamental group of a connected CW complex X, attach two-cells along loops in X whose images in the fundamental group generate the subgroup. This operation generally changes the homology of the space, but these changes can be reversed by the addition of three-cells. The most common application of the plus construction is in algebraic K-theory. If R is a unital ring, we denote by GL_n(R) the group of invertible n-by-n matrices with elements in R. GL_n(R) embeds in GL_{n+1}(R) by attaching a 1 along the diagonal and 0s elsewhere. The direct limit of these groups via these maps is denoted GL(R) and its classifying space is denoted BGL(R). The plus construction may then be applied to the perfect normal subgroup E(R) of GL(R) = π1(BGL(R)), generated by matrices which only differ from the identity matrix in one off-diagonal entry. For n > 0, the n-th homotopy group of the resulting space, BGL(R)+, is isomorphic to the n-th K-group of R, that is, K_n(R) ≅ π_n(BGL(R)+). See also Semi-s-cobordism
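Written out, the resulting definition of the higher algebraic K-groups is the standard one (a LaTeX reconstruction of the formula elided above):

\[ K_n(R) \;=\; \pi_n\bigl(BGL(R)^{+}\bigr), \qquad n > 0. \]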
https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Rao%20bound
In estimation theory and statistics, the Cramér–Rao bound (CRB) relates to estimation of a deterministic (fixed, though unknown) parameter. The result is named in honor of Harald Cramér and C. R. Rao, but has also been derived independently by Maurice Fréchet, Georges Darmois, and by Alexander Aitken and Harold Silverstone. It states that the precision of any unbiased estimator is at most the Fisher information; or (equivalently) the reciprocal of the Fisher information is a lower bound on its variance. An unbiased estimator that achieves this bound is said to be (fully) efficient. Such a solution achieves the lowest possible mean squared error among all unbiased methods, and is therefore the minimum variance unbiased (MVU) estimator. However, in some cases, no unbiased technique exists which achieves the bound. This may occur either if for any unbiased estimator, there exists another with a strictly smaller variance, or if an MVU estimator exists, but its variance is strictly greater than the inverse of the Fisher information. The Cramér–Rao bound can also be used to bound the variance of estimators of given bias. In some cases, a biased approach can result in both a variance and a mean squared error that are below the unbiased Cramér–Rao lower bound; see estimator bias. Statement The Cramér–Rao bound is stated in this section for several increasingly general cases, beginning with the case in which the parameter is a scalar and its estimator is unbiased. All versions of the bound require certain regularity conditions, which hold for most well-behaved distributions. These conditions are listed later in this section. Scalar unbiased case Suppose θ is an unknown deterministic parameter that is to be estimated from n independent observations (measurements) of x, each from a distribution according to some probability density function f(x; θ). The variance of any unbiased estimator θ̂ of θ is then bounded by the reciprocal of the Fisher information I(θ), as written out below.
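The scalar-case bound, together with the usual definition of the Fisher information for n i.i.d. observations (a reconstruction in standard notation):

\[ \operatorname{var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)}, \qquad I(\theta) \;=\; n\,\mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}\,\log f(x;\theta)\right)^{2}\right]. \]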
https://en.wikipedia.org/wiki/Vertex-transitive%20graph
In the mathematical field of graph theory, a vertex-transitive graph is a graph G in which, given any two vertices v1 and v2 of G, there is some automorphism f : G → G such that f(v1) = v2. In other words, a graph is vertex-transitive if its automorphism group acts transitively on its vertices. A graph is vertex-transitive if and only if its graph complement is, since the group actions are identical. Every symmetric graph without isolated vertices is vertex-transitive, and every vertex-transitive graph is regular. However, not all vertex-transitive graphs are symmetric (for example, the edges of the truncated tetrahedron), and not all regular graphs are vertex-transitive (for example, the Frucht graph and Tietze's graph). Finite examples Finite vertex-transitive graphs include the symmetric graphs (such as the Petersen graph, the Heawood graph and the vertices and edges of the Platonic solids). The finite Cayley graphs (such as cube-connected cycles) are also vertex-transitive, as are the vertices and edges of the Archimedean solids (though only two of these are symmetric). Potočnik, Spiga and Verret have constructed a census of all connected cubic vertex-transitive graphs on at most 1280 vertices. Although every Cayley graph is vertex-transitive, there exist other vertex-transitive graphs that are not Cayley graphs. The most famous example is the Petersen graph, but others can be constructed including the line graphs of edge-transitive non-bipartite graphs with odd vertex degrees. Properties The edge-connectivity of a vertex-transitive graph is equal to the degree d, while the vertex-connectivity will be at least 2(d + 1)/3. If the degree is 4 or less, or the graph is also edge-transitive, or the graph is a minimal Cayley graph, then the vertex-connectivity will also be equal to d. Infinite examples Infinite vertex-transitive graphs include: infinite paths (infinite in both directions) infinite regular trees, e.g. the Cayley graph of the free group graphs of uniform tessellations
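For very small graphs the definition can be checked directly by brute force; the sketch below (my own illustration, not from the article) enumerates all vertex permutations and keeps those that preserve adjacency:

```python
from itertools import permutations

def automorphisms(vertices, edges):
    """Yield all adjacency-preserving permutations of a tiny undirected graph."""
    edge_set = {frozenset(e) for e in edges}
    for perm in permutations(vertices):
        mapping = dict(zip(vertices, perm))
        if {frozenset((mapping[u], mapping[v])) for u, v in edges} == edge_set:
            yield mapping

def is_vertex_transitive(vertices, edges):
    """True iff the orbit of one vertex under the automorphism group is everything."""
    orbit = {m[vertices[0]] for m in automorphisms(vertices, edges)}
    return orbit == set(vertices)

# The 4-cycle is vertex-transitive; the path on 3 vertices is not.
print(is_vertex_transitive([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(is_vertex_transitive([0, 1, 2], [(0, 1), (1, 2)]))                     # False
```

Because automorphisms form a group, it suffices to check the orbit of a single vertex; this brute-force approach is only feasible for a handful of vertices.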
https://en.wikipedia.org/wiki/Littoral%20zone
The littoral zone, also called litoral or nearshore, is the part of a sea, lake, or river that is close to the shore. In coastal ecology, the littoral zone includes the intertidal zone extending from the high water mark (which is rarely inundated), to coastal areas that are permanently submerged — known as the foreshore — and the terms are often used interchangeably. However, the geographical meaning of littoral zone extends well beyond the intertidal zone to include all neritic waters within the bounds of continental shelves. Etymology The word littoral may be used both as a noun and as an adjective. It derives from the Latin noun litus, litoris, meaning "shore". (The doubled t is a late-medieval innovation, and the word is sometimes seen in the more classical-looking spelling litoral.) Description The term has no single definition. What is regarded as the full extent of the littoral zone, and the way the littoral zone is divided into subregions, varies in different contexts. For lakes, the littoral zone is the nearshore habitat where photosynthetically active radiation penetrates to the lake bottom in sufficient quantities to support photosynthesis. The use of the term also varies from one part of the world to another, and between different disciplines. For example, military commanders speak of the littoral in ways that are quite different from the definition used by marine biologists. The adjacency of water gives a number of distinctive characteristics to littoral regions. The erosive power of water results in particular types of landforms, such as sand dunes, and estuaries. The natural movement of the littoral along the coast is called the littoral drift. Biologically, the ready availability of water enables a greater variety of plant and animal life, and particularly the formation of extensive wetlands. In addition, the additional local humidity due to evaporation usually creates a microclimate supporting unique types of organisms. In oceanography and marin
https://en.wikipedia.org/wiki/Table%20of%20standard%20reduction%20potentials%20for%20half-reactions%20important%20in%20biochemistry
The values below are standard apparent reduction potentials for electro-biochemical half-reactions measured at 25 °C, 1 atmosphere and a pH of 7 in aqueous solution. The actual physiological potential depends on the ratio of the reduced (Red) and oxidized (Ox) forms according to the Nernst equation and the thermal voltage. When an oxidizer (Ox) accepts a number z of electrons (e−) to be converted into its reduced form (Red), the half-reaction is expressed as: Ox + z e− → Red. The reaction quotient (Qr) is the ratio of the chemical activity (ai) of the reduced form (the reductant, aRed) to the activity of the oxidized form (the oxidant, aOx). It is equal to the ratio of their concentrations (Ci) only if the system is sufficiently diluted and the activity coefficients (γi) are close to unity (ai = γi Ci): Qr = aRed/aOx. The Nernst equation is a function of Qr and can be written as follows: E = E° − (RT/zF) ln Qr. At chemical equilibrium, the reaction quotient of the product activity (aRed) by the reagent activity (aOx) is equal to the equilibrium constant (K) of the half-reaction, and in the absence of driving force (ΔG = 0) the potential (E) also becomes zero. The numerically simplified form of the Nernst equation is expressed as: E = E° − (0.059 V/z) log10 Qr at 25 °C, where E° is the standard reduction potential of the half-reaction expressed versus the standard reduction potential of hydrogen. For standard conditions in electrochemistry (T = 25 °C, P = 1 atm and all concentrations being fixed at 1 mol/L, or 1 M) the standard reduction potential of hydrogen is fixed at zero by convention, as it serves as reference. The standard hydrogen electrode (SHE), with [H+] = 1 M, thus works at pH = 0. At pH = 7, when [H+] = 10−7 M, the reduction potential of H+ differs from zero because it depends on pH. Solving the Nernst equation for the half-reaction of reduction of two protons into hydrogen gas (2 H+ + 2 e− → H2) gives E = −0.059 V × pH. In biochemistry and in biological fluids, at pH = 7, it is thus important to note that the reduction potential of the protons (H+) into hydrogen gas is no longer zero but about −0.41 V.
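The equations referenced above, reconstructed in standard notation (the factor 0.059 V follows from RT ln 10 / F at 25 °C):

\[ E = E^{\ominus} - \frac{RT}{zF}\ln Q_r \;=\; E^{\ominus} - \frac{0.059\ \text{V}}{z}\log_{10} Q_r \]
\[ \text{for } 2\,\mathrm{H^+} + 2\,e^- \rightarrow \mathrm{H_2}: \quad E = -0.059\ \text{V} \times \mathrm{pH} \;\approx\; -0.41\ \text{V at pH} = 7. \]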
https://en.wikipedia.org/wiki/Distance%20transform
A distance transform, also known as distance map or distance field, is a derived representation of a digital image. The choice of the term depends on the point of view on the object in question: whether the initial image is transformed into another representation, or it is simply endowed with an additional map or field. Distance fields can also be signed, in the case where it is important to distinguish whether the point is inside or outside of the shape. The map labels each pixel of the image with the distance to the nearest obstacle pixel. The most common type of obstacle pixel is a boundary pixel in a binary image. See the image for an example of a Chebyshev distance transform on a binary image. Usually the transform/map is qualified with the chosen metric. For example, one may speak of Manhattan distance transform, if the underlying metric is Manhattan distance. Common metrics are: Euclidean distance Taxicab geometry, also known as City block distance or Manhattan distance. Chebyshev distance There are several algorithms to compute the distance transform for these different distance metrics, however the computation of the exact Euclidean distance transform (EEDT) needs special treatment if it is computed on the image grid. Recently, distance transform computation has also been proposed using a static Schrödinger equation. This particular approach has the benefit of obtaining an analytical closed-form solution to distance transforms, and of computing the average distance transform over a set of distance transforms, owing to the linearity of the Schrödinger equation. Further, this approach has also been leveraged to extend distance transforms to line-segments and curves. Applications are digital image processing (e.g., blurring effects, skeletonizing), motion planning in robotics, medical-image analysis for prenatal genetic testing, and even pathfinding. Uniformly-sampled signed distance fields have been used for GPU-accelerated font smoothing, for example.
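As a concrete sketch (my own illustration, not from the article), the classic two-pass dynamic-programming algorithm computes the Manhattan distance transform of a binary image in linear time:

```python
# Two-pass Manhattan (city block) distance transform of a binary image.
# Obstacle pixels are 0; the result labels each pixel with the distance
# to the nearest 0 pixel. A minimal sketch; all names are mine.
INF = 10**9

def manhattan_distance_transform(image):
    rows, cols = len(image), len(image[0])
    dist = [[0 if image[r][c] == 0 else INF for c in range(cols)]
            for r in range(rows)]
    for r in range(rows):               # forward pass: top-left to bottom-right
        for c in range(cols):
            if r > 0:
                dist[r][c] = min(dist[r][c], dist[r - 1][c] + 1)
            if c > 0:
                dist[r][c] = min(dist[r][c], dist[r][c - 1] + 1)
    for r in reversed(range(rows)):     # backward pass: bottom-right to top-left
        for c in reversed(range(cols)):
            if r < rows - 1:
                dist[r][c] = min(dist[r][c], dist[r + 1][c] + 1)
            if c < cols - 1:
                dist[r][c] = min(dist[r][c], dist[r][c + 1] + 1)
    return dist

print(manhattan_distance_transform([[1, 1, 1], [1, 0, 1], [1, 1, 1]]))
# -> [[2, 1, 2], [1, 0, 1], [2, 1, 2]]
```

The same two-pass scheme with different neighbour weights yields Chebyshev and chamfer approximations; the exact Euclidean transform needs the special treatment mentioned above.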
https://en.wikipedia.org/wiki/Luminous%20flux
In photometry, luminous flux or luminous power is the measure of the perceived power of light. It differs from radiant flux, the measure of the total power of electromagnetic radiation (including infrared, ultraviolet, and visible light), in that luminous flux is adjusted to reflect the varying sensitivity of the human eye to different wavelengths of light. Units The SI unit of luminous flux is the lumen (lm). One lumen is defined as the luminous flux of light produced by a light source that emits one candela of luminous intensity over a solid angle of one steradian. In other systems of units, luminous flux may have units of power. Weighting The luminous flux accounts for the sensitivity of the eye by weighting the power at each wavelength with the luminosity function, which represents the eye's response to different wavelengths. The luminous flux is a weighted sum of the power at all wavelengths in the visible band. Light outside the visible band does not contribute. The ratio of the total luminous flux to the radiant flux is called the luminous efficacy. This model of human visual brightness perception is standardized by the CIE and ISO. Context Luminous flux is often used as an objective measure of the useful light emitted by a light source, and is typically reported on the packaging for light bulbs, although it is not always prominent. Consumers commonly compare the luminous flux of different light bulbs since it provides an estimate of the apparent amount of light the bulb will produce, and a lightbulb with a higher ratio of luminous flux to consumed power is more efficient. Luminous flux is not used to compare brightness, as this is a subjective perception which varies according to the distance from the light source and the angular spread of the light from the source. Measurement Luminous flux of artificial light sources is typically measured using an integrating sphere, or a goniophotometer outfitted with a photometer or a spectroradiometer.
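The weighting described above is conventionally written as an integral over the spectrum (standard CIE formulation; Φe,λ is the spectral radiant flux and ȳ(λ) the luminosity function, peaking at 555 nm):

\[ \Phi_{\mathrm{v}} \;=\; 683\ \tfrac{\text{lm}}{\text{W}} \int_{0}^{\infty} \overline{y}(\lambda)\, \Phi_{\mathrm{e},\lambda}(\lambda)\, \mathrm{d}\lambda \]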
https://en.wikipedia.org/wiki/MOSIX
MOSIX is a proprietary distributed operating system. Although early versions were based on older UNIX systems, since 1999 it has focused on Linux clusters and grids. In a MOSIX cluster/grid there is no need to modify or to link applications with any library, to copy files or log in to remote nodes, or even to assign processes to different nodes – it is all done automatically, as in an SMP. History MOSIX has been researched and developed since 1977 at The Hebrew University of Jerusalem by the research team of Prof. Amnon Barak. So far, ten major versions have been developed. The first version, called MOS (for Multicomputer OS, 1981–83), was based on Bell Labs' Seventh Edition Unix and ran on a cluster of PDP-11 computers. Later versions were based on Unix System V Release 2 (1987–89) and ran on a cluster of VAX and NS32332-based computers, followed by a BSD/OS-derived version (1991–93) for a cluster of 486/Pentium computers. Since 1999, MOSIX has been tuned to Linux for x86 platforms. MOSIX2 The second version of MOSIX, called MOSIX2, is compatible with Linux 2.6 and 3.0 kernels. MOSIX2 is implemented as an OS virtualization layer that provides users and applications with a single system image with the Linux run-time environment. It allows applications to run in remote nodes as if they were running locally. Users run their regular (sequential and parallel) applications while MOSIX transparently and automatically seeks resources and migrates processes among nodes to improve the overall performance. MOSIX2 can manage a cluster and a multicluster (grid) as well as workstations and other shared resources. Flexible management of a grid allows owners of clusters to share their computational resources, while still preserving their autonomy over their own clusters and their ability to disconnect their nodes from the grid at any time, without disrupting already running programs. A MOSIX grid can extend indefinitely as long as there is trust between its cluster owners. This must include guarantees that guest applications will not be modified while running in remote nodes and that no hostile computers can be connected to the local network.
https://en.wikipedia.org/wiki/OpenMosix
openMosix was a free cluster management system that provided single-system image (SSI) capabilities, e.g. automatic work distribution among nodes. It allowed program processes (not threads) to migrate to machines in the node's network that would be able to run that process faster (process migration). It was particularly useful for running parallel applications having low to moderate input/output (I/O). It was released as a Linux kernel patch, but was also available on specialized Live CDs. openMosix development has been halted by its developers, but the LinuxPMI project is continuing development of the former openMosix code. History openMosix was originally forked from MOSIX by Moshe Bar on February 10, 2002 when MOSIX became proprietary software. openMosix was considered stable on Linux kernel 2.4.x for the x86 architecture, but porting to the Linux 2.6 kernel remained in the alpha stage. Support for the 64-bit AMD64 architecture only started with the 2.6 version. On July 15, 2007, Bar announced that the openMosix project would reach its end of life on March 1, 2008, due to the decreasing need for single system image (SSI) clustering as low-cost multi-core processors increase in availability. openMosix used to be distributed as a Gentoo Linux kernel choice, but it was removed from Gentoo Linux's Portage tree in February 2007. As of March 1, 2008, openMosix read-only source code is still hosted at SourceForge. ClusterKnoppix ClusterKnoppix is a specialized Linux distribution based on the Knoppix distribution, but which uses the openMosix kernel. Traditionally, clustered computing could only be achieved by setting up individual rsh keys, creating NFS shares, editing host files, setting static IPs, and applying kernel patches manually. ClusterKnoppix effectively renders most of this work unnecessary. The distribution contains an autoconfiguration system where new ClusterKnoppix-running nodes automatically join the cluster.
https://en.wikipedia.org/wiki/Luhn%20algorithm
The Luhn algorithm or Luhn formula, also known as the "modulus 10" or "mod 10" algorithm, named after its creator, IBM scientist Hans Peter Luhn, is a simple check digit formula used to validate a variety of identification numbers. It is described in U.S. Patent No. 2,950,048, granted on August 23, 1960. The algorithm is in the public domain and is in wide use today. It is specified in ISO/IEC 7812-1. It is not intended to be a cryptographically secure hash function; it was designed to protect against accidental errors, not malicious attacks. Most credit cards and many government identification numbers use the algorithm as a simple method of distinguishing valid numbers from mistyped or otherwise incorrect numbers. Description The check digit is computed as follows: If the number already contains the check digit, drop that digit to form the "payload". The check digit is most often the last digit. With the payload, start from the rightmost digit. Moving left, double the value of every second digit (including the rightmost digit); if doubling results in a value greater than 9, subtract 9 from it (equivalently, sum the digits of the product). Sum the values of the resulting digits. The check digit is calculated by (10 − (s mod 10)) mod 10, where s is the sum from step 3. This is the smallest number (possibly zero) that must be added to s to make a multiple of 10. Other valid formulas giving the same value are 9 − ((s + 9) mod 10), (10 − s) mod 10, and 10⌈s/10⌉ − s. Note that the formula (10 − s) mod 10 will not work in all environments due to differences in how negative numbers are handled by the modulo operation. Example for computing check digit Assume an example of an account number 1789372997 (just the "payload", check digit not yet included): The sum of the resulting digits is 56. The check digit is equal to (10 − (56 mod 10)) mod 10 = 4. This makes the full account number read 17893729974. Example for validating check digit Drop the check digit (last digit) of the number to validate. (e.g. 17893729974 → 1789372997) Calculate the check digit (see above) Compare your result with the original check digit. If both numbers match, the result is valid. Strengths and weaknesses
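A direct implementation of the steps above (a minimal sketch; the function names are mine):

```python
def luhn_check_digit(payload: str) -> int:
    """Compute the Luhn check digit for a numeric payload (digit not yet appended)."""
    s = 0
    for i, ch in enumerate(reversed(payload)):
        d = int(ch)
        if i % 2 == 0:      # double every second digit, starting at the rightmost
            d *= 2
            if d > 9:
                d -= 9      # equivalent to summing the digits of the product
        s += d
    return (10 - s % 10) % 10

def luhn_valid(number: str) -> bool:
    """Validate a number whose last digit is the check digit."""
    return int(number[-1]) == luhn_check_digit(number[:-1])

print(luhn_check_digit("1789372997"))  # 4, matching the worked example
print(luhn_valid("17893729974"))       # True
```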
https://en.wikipedia.org/wiki/Oogenesis
Oogenesis, ovogenesis, or oögenesis is the differentiation of the ovum (egg cell) into a cell competent to further develop when fertilized. It is developed from the primary oocyte by maturation. Oogenesis is initiated in the embryonic stage. Oogenesis in non-human mammals In mammals, the first part of oogenesis starts in the germinal epithelium, which gives rise to the development of ovarian follicles, the functional unit of the ovary. Oogenesis consists of several sub-processes: oocytogenesis, ootidogenesis, and finally maturation to form an ovum (oogenesis proper). Folliculogenesis is a separate sub-process that accompanies and supports all three oogenetic sub-processes. Oogonium —(Oocytogenesis)—> Primary Oocyte —(Meiosis I)—> First Polar body (Discarded afterward) + Secondary oocyte —(Meiosis II)—> Second Polar Body (Discarded afterward) + Ovum Oocyte meiosis, important to all animal life cycles yet unlike all other instances of animal cell division, occurs completely without the aid of spindle-coordinating centrosomes. The creation of oogonia The creation of oogonia traditionally doesn't belong to oogenesis proper, but, instead, to the common process of gametogenesis, which, in the female human, begins with the processes of folliculogenesis, oocytogenesis, and ootidogenesis. Oogonia enter meiosis during embryonic development, becoming oocytes. Meiosis begins with DNA replication and meiotic crossing over. It then stops in early prophase. Maintenance of meiotic arrest Mammalian oocytes are maintained in meiotic prophase arrest for a very long time—months in mice, years in humans. Initially the arrest is due to lack of sufficient cell cycle proteins to allow meiotic progression. However, as the oocyte grows, these proteins are synthesized, and meiotic arrest becomes dependent on cyclic AMP. The cyclic AMP is generated by the oocyte by adenylyl cyclase in the oocyte membrane. The adenylyl cyclase is kept active by a constitutively active G-protein-coupled
https://en.wikipedia.org/wiki/MLAN
mLAN, short for Music Local Area Network, is a protocol for synchronized transmission and management of multi-channel digital audio, video, control signals and multi-port MIDI over a network. Description The mLAN protocol was originally developed by Yamaha Corporation, and publicly introduced in January 2000. It was available under a royalty-free license to anyone interested in utilizing the technology. mLAN uses several features of the IEEE 1394 (FireWire) standard such as isochronous transfer and intelligent connection management. There are two versions of the mLAN protocol. Version 1 requires the S200 rate, while Version 2 requires the S400 rate and supports synchronized streaming of digital audio at up to 24-bit word length and 192 kHz sample rate, plus MIDI and word clock, at a bit rate of up to 400 megabits per second. With the proper driver software, a computer-based digital audio workstation can interact with mLAN-compliant hardware via any OHCI-compliant FireWire port. mLAN consumes the entire bus bandwidth when operating and non-mLAN devices cannot share the same FireWire connection, so it is not possible to use hard drives, optical drives or other sound devices on the same FireWire bus when the mLAN Manager software is running. The transport layers of mLAN have been standardized as IEC 61883. End of life By 2005, over 100 manufacturers were part of the mLAN Alliance; however, very few actual products had surfaced. As of early 2008, mLAN appeared to have reached the end of its product life. Third-party developers discontinued or retracted their mLAN products from the market, and Yamaha itself ceased any new releases of mLAN hardware or updates to the mLAN software and drivers. Even though more recent FireWire based products from Yamaha could interoperate with earlier mLAN devices using a computer, any mention of mLAN is notably absent from new product announcements and driver updates. Products Yamaha 01X digital mixing hub Yamaha i88x audio/MIDI interface Yamaha mLA
https://en.wikipedia.org/wiki/Particle%20number
In thermodynamics, the particle number (symbol N) of a thermodynamic system is the number of constituent particles in that system. The particle number is a fundamental thermodynamic property which is conjugate to the chemical potential. Unlike most physical quantities, the particle number is a dimensionless quantity, specifically a countable quantity. It is an extensive property, as it is directly proportional to the size of the system under consideration and thus meaningful only for closed systems. A constituent particle is one that cannot be broken into smaller pieces at the scale of energy kT involved in the process (where k is the Boltzmann constant and T is the temperature). For example, in a thermodynamic system consisting of a piston containing water vapour, the particle number is the number of water molecules in the system. The meaning of constituent particles, and thereby of particle numbers, is thus temperature-dependent. Determining the particle number The concept of particle number plays a major role in theoretical considerations. In situations where the actual particle number of a given thermodynamical system needs to be determined, mainly in chemistry, it is not practically possible to measure it directly by counting the particles. If the material is homogeneous and has a known amount of substance n expressed in moles, the particle number N can be found by the relation N = nNA, where NA is the Avogadro constant. Particle number density A related intensive system parameter is the particle number density, a quantity of kind volumetric number density obtained by dividing the particle number of a system by its volume. This parameter is often denoted by the lower-case letter n. In quantum mechanics In quantum mechanical processes, the total number of particles may not be preserved. The concept is therefore generalized to the particle number operator, that is, the observable that counts the number of constituent particles. In quantum field theory, the particle number operator is built from creation and annihilation operators.
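Worked out (numbers mine, for illustration), the amount-of-substance relation above gives, for n = 2 mol:

\[ N = nN_{\mathrm{A}} = 2\ \text{mol} \times 6.022\times10^{23}\ \text{mol}^{-1} \approx 1.204\times10^{24}\ \text{particles}. \]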
https://en.wikipedia.org/wiki/VGA%20connector
The Video Graphics Array (VGA) connector is a standard connector used for computer video output. Originating with the 1987 IBM PS/2 and its VGA graphics system, the 15-pin connector went on to become ubiquitous on PCs, as well as many monitors, projectors and high-definition television sets. Other connectors have been used to carry VGA-compatible signals, such as mini-VGA or BNC, but "VGA connector" typically refers to this design. Devices continue to be manufactured with VGA connectors, although newer digital interfaces such as DVI, HDMI and DisplayPort are increasingly displacing VGA, and many modern computers and other devices do not include it. Physical design The VGA connector is a three-row, 15-pin D-subminiature connector referred to variously as DE-15, HD-15 or erroneously DB-15(HD). DE-15 is the accurate nomenclature under the D-sub specifications: an "E" size D-sub connector, with 15 pins in three rows. Electrical design All VGA connectors carry analog RGBHV (red, green, blue, horizontal sync, vertical sync) video signals. Modern connectors also include VESA DDC pins, for identifying attached display devices. In both its modern and original variants, VGA utilizes multiple scan rates, so attached devices such as monitors are multisync by necessity. The VGA interface includes no affordances for hot swapping, the ability to connect or disconnect the output device during operation, although in practice this can be done and usually does not cause damage to the hardware or other problems. The VESA DDC specification does, however, include a standard for hot-swapping. PS/2 signaling In the original IBM VGA implementation, refresh rates were limited to two vertical (60 and 70 Hz) and three horizontal frequencies, all of which were communicated to the monitor using combinations of different polarity H and V sync signals. Some pins on the connector were also different: pin 9 was keyed by plugging the female connector hole, and four pins carried the monitor
https://en.wikipedia.org/wiki/Telstra%20Tower
Telstra Tower (also known as Black Mountain Tower and formerly Telecom Tower) is a telecommunications tower and lookout that is situated above the summit of Black Mountain in Australia's capital city of Canberra. It is named after Australia's largest telecommunications company, Telstra Corporation. The tower sits within the InfraCo division, which is responsible for its operations and maintenance. Rising above the mountain summit, it is a landmark in Canberra and offers panoramic views of the city and its surrounding countryside from an indoor observation deck and two outdoor viewing platforms. It is currently closed, with no reopening date announced. History In April 1971 the Postmaster General (PMG) at the time commissioned the Commonwealth Department of Housing and Construction to carry out a feasibility study in relation to a tower on Black Mountain accommodating both communication services and facilities for visitors. The tower was to replace the microwave relay station on Red Hill and the television broadcast masts already on Black Mountain. Design of the tower was the responsibility of the Department of Housing and Construction, however a conflict arose with the National Capital Development Commission which, at the time, had complete control over planning within the Australian Capital Territory. During the approval process of the tower, protests arose on aesthetic and ecological grounds. Some people felt that the tower would dominate other aesthetic Canberra structures due to its location above Black Mountain and within a nature reserve. There was a brief green ban imposed by the Builders Labourers Federation (under the influence of Jack Mundey) that eventually failed. A case was brought before the High Court of Australia arguing that the Federal Government did not have the constitutional power to construct the tower (Johnson v Kent (1975) 132 CLR 164). The decision was made in favour of the government and construction was able to commence. Will
https://en.wikipedia.org/wiki/Segmentation%20%28biology%29
Segmentation in biology is the division of some animal and plant body plans into a linear series of repetitive segments that may or may not be interconnected to each other. This article focuses on the segmentation of animal body plans, specifically using the examples of the taxa Arthropoda, Chordata, and Annelida. These three groups form segments by using a "growth zone" to direct and define the segments. While all three have a generally segmented body plan and use a growth zone, they use different mechanisms for generating this patterning. Even within these groups, different organisms have different mechanisms for segmenting the body. Segmentation of the body plan is important for allowing free movement and development of certain body parts. It also allows for regeneration in specific individuals. Definition Segmentation is a difficult process to satisfactorily define. Many taxa (for example the molluscs) have some form of serial repetition in their units but are not conventionally thought of as segmented. Segmented animals are those considered to have organs that were repeated, or to have a body composed of self-similar units, but usually it is the parts of an organism that are referred to as being segmented. Embryology Segmentation in animals typically falls into three types, characteristic of different arthropods, vertebrates, and annelids. Arthropods such as the fruit fly form segments from a field of equivalent cells based on transcription factor gradients. Vertebrates like the zebrafish use oscillating gene expression to define segments known as somites. Annelids such as the leech use smaller blast cells budded off from large teloblast cells to define segments. Arthropods Although Drosophila segmentation is not representative of the arthropod phylum in general, it is the most highly studied. Early screens to identify genes involved in cuticle development led to the discovery of a class of genes that was necessary for proper segmentation of the Drosophila
https://en.wikipedia.org/wiki/Function%20type
In computer science and mathematical logic, a function type (or arrow type or exponential) is the type of a variable or parameter to which a function has or can be assigned, or an argument or result type of a higher-order function taking or returning a function. A function type depends on the type of the parameters and the result type of the function (it, or more accurately the unapplied type constructor · → ·, is a higher-kinded type). In theoretical settings and programming languages where functions are defined in curried form, such as the simply typed lambda calculus, a function type depends on exactly two types, the domain A and the range B. Here a function type is often denoted A → B, following mathematical convention, or B^A, based on there existing exactly B^A (exponentially many) set-theoretic functions mapping A to B in the category of sets. The class of such maps or functions is called the exponential object. The act of currying makes the function type adjoint to the product type; this is explored in detail in the article on currying. The function type can be considered to be a special case of the dependent product type, which among other properties, encompasses the idea of a polymorphic function. Programming languages The syntax used for function types in several programming languages can be summarized, including an example type signature for the higher-order function composition function. When looking at the example type signature of, for example, C#, the type of the compose function is actually Func<Func<A,B>,Func<B,C>,Func<A,C>>. Due to type erasure in C++11's std::function, it is more common to use templates for higher order function parameters and type inference (auto) for closures. Denotational semantics The function type in programming languages does not correspond to the space of all set-theoretic functions. Given the countably infinite type of natural numbers as the domain and the booleans as range, then there are an uncountably infinite number (2^ℵ0 = c) of set-theoretic functions between them.
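For illustration (a sketch using Python's typing module, not from the article), the composition function and its function type (a → b) → (b → c) → (a → c) can be written with Callable:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[B], C], g: Callable[[A], B]) -> Callable[[A], C]:
    """Function composition: the result has the function type A -> C."""
    return lambda x: f(g(x))

# Usage: increment then render; the composite has type Callable[[int], str].
show_next = compose(str, lambda n: n + 1)
print(show_next(41))  # "42"
```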
https://en.wikipedia.org/wiki/Oxygen%20cycle
Oxygen cycle refers to the movement of oxygen through the atmosphere (air), biosphere (plants and animals) and the lithosphere (the Earth's crust). The oxygen cycle demonstrates how free oxygen is made available in each of these regions, as well as how it is used. The oxygen cycle is the biogeochemical cycle of oxygen atoms between different oxidation states in ions, oxides, and molecules through redox reactions within and between the spheres/reservoirs of the planet Earth. The word oxygen in the literature typically refers to the most common oxygen allotrope, elemental/diatomic oxygen (O2), as it is a common product or reactant of many biogeochemical redox reactions within the cycle. Processes within the oxygen cycle are considered to be biological or geological and are evaluated as either a source (O2 production) or sink (O2 consumption). Oxygen is one of the most common elements on Earth and represents a large portion of each main reservoir. By far the largest reservoir of Earth's oxygen is within the silicate and oxide minerals of the crust and mantle (99.5% by weight). The Earth's atmosphere, hydrosphere, and biosphere together hold less than 0.05% of the Earth's total mass of oxygen. Besides O2, additional oxygen atoms are present in various forms spread throughout the surface reservoirs in the molecules of biomass, H2O, CO2, HNO3, NO, NO2, CO, H2O2, O3, SO2, H2SO4, MgO, CaO, Al2O3, SiO2, and PO4. Atmosphere The atmosphere is 21% oxygen by volume, which equates to a total of roughly 34 × 10^18 mol of oxygen. Other oxygen-containing molecules in the atmosphere include ozone (O3), carbon dioxide (CO2), water vapor (H2O), and sulphur and nitrogen oxides (SO2, NO, N2O, etc.). Biosphere The biosphere is 22% oxygen by volume, present mainly as a component of organic molecules (CxHxNxOx) and water. Hydrosphere The hydrosphere is 33% oxygen by volume, present mainly as a component of water molecules, with dissolved molecules including free oxygen and carbonic acid
https://en.wikipedia.org/wiki/Theory%20of%20equations
In algebra, the theory of equations is the study of algebraic equations (also called "polynomial equations"), which are equations defined by a polynomial. The main problem of the theory of equations was to know when an algebraic equation has an algebraic solution. This problem was completely solved in 1830 by Évariste Galois, by introducing what is now called Galois theory. Before Galois, there was no clear distinction between the "theory of equations" and "algebra". Since then algebra has been dramatically enlarged to include many new subareas, and the theory of algebraic equations receives much less attention. Thus, the term "theory of equations" is mainly used in the context of the history of mathematics, to avoid confusion between old and new meanings of "algebra". History Until the end of the 19th century, "theory of equations" was almost synonymous with "algebra". For a long time, the main problem was to find the solutions of a single non-linear polynomial equation in a single unknown. The fact that a complex solution always exists is the fundamental theorem of algebra, which was proved only at the beginning of the 19th century and does not have a purely algebraic proof. Nevertheless, the main concern of the algebraists was to solve in terms of radicals, that is to express the solutions by a formula which is built with the four operations of arithmetic and with nth roots. This was done up to degree four during the 16th century. Scipione del Ferro and Niccolò Fontana Tartaglia discovered solutions for cubic equations. Gerolamo Cardano published them in his 1545 book Ars Magna, together with a solution for the quartic equations, discovered by his student Lodovico Ferrari. In 1572 Rafael Bombelli published his L'Algebra in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations. The case of higher degrees remained open until the 19th century, when Paolo Ruffini gave an incomplete proof in 1799 that equations of degree five cannot generally be solved in radicals.
https://en.wikipedia.org/wiki/Barycentric%20subdivision
In mathematics, the barycentric subdivision is a standard way to subdivide a given simplex into smaller ones. Its extension to simplicial complexes is a canonical method to refine them. Therefore, the barycentric subdivision is an important tool in algebraic topology. Motivation The barycentric subdivision is an operation on simplicial complexes. In algebraic topology it is sometimes useful to replace the original spaces with simplicial complexes via triangulations: the substitution allows one to assign combinatorial invariants such as the Euler characteristic to the spaces. One can ask if there is an analogous way to replace the continuous functions defined on the topological spaces by functions that are linear on the simplices and which are homotopic to the original maps (see also simplicial approximation). In general, such an assignment requires a refinement of the given complex, meaning one replaces bigger simplices by a union of smaller simplices. A standard way to effectuate such a refinement is the barycentric subdivision. Moreover, barycentric subdivision induces maps on homology groups and is helpful for computational concerns, see excision and the Mayer–Vietoris sequence. Definition Subdivision of simplicial complexes Let K be a geometric simplicial complex. A complex K′ is said to be a subdivision of K if each simplex of K′ is contained in a simplex of K, and each simplex of K is a finite union of simplices of K′. These conditions imply that K and K′ are equal as sets and as topological spaces; only the simplicial structure changes. Barycentric subdivision of a simplex For a simplex Δ spanned by points p0, ..., pn, the barycenter is defined to be the point b = (p0 + p1 + ... + pn)/(n + 1). To define the subdivision, we will consider a simplex as a simplicial complex that contains only one simplex of maximal dimension, namely the simplex itself. The barycentric subdivision of a simplex can be defined inductively by its dimension. For points, i.e. simplices of dimension 0, the barycentric subdivision is defined as the point itself.
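The barycenter used in this definition, written out as a formula:

\[ b \;=\; \frac{1}{n+1}\sum_{i=0}^{n} p_i \]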
https://en.wikipedia.org/wiki/Tarski%27s%20undefinability%20theorem
Tarski's undefinability theorem, stated and proved by Alfred Tarski in 1933, is an important limitative result in mathematical logic, the foundations of mathematics, and in formal semantics. Informally, the theorem states that "arithmetical truth cannot be defined in arithmetic". The theorem applies more generally to any sufficiently strong formal system, showing that truth in the standard model of the system cannot be defined within the system. History In 1931, Kurt Gödel published the incompleteness theorems, which he proved in part by showing how to represent the syntax of formal logic within first-order arithmetic. Each expression of the formal language of arithmetic is assigned a distinct number. This procedure is known variously as Gödel numbering, coding and, more generally, as arithmetization. In particular, various sets of expressions are coded as sets of numbers. For various syntactic properties (such as being a formula, being a sentence, etc.), these sets are computable. Moreover, any computable set of numbers can be defined by some arithmetical formula. For example, there are formulas in the language of arithmetic defining the set of codes for arithmetic sentences, and for provable arithmetic sentences. The undefinability theorem shows that this encoding cannot be done for semantic concepts such as truth. It shows that no sufficiently rich interpreted language can represent its own semantics. A corollary is that any metalanguage capable of expressing the semantics of some object language (e.g., a predicate definable in Zermelo–Fraenkel set theory expressing whether formulae in the language of Peano arithmetic are true in the standard model of arithmetic) must have expressive power exceeding that of the object language. The metalanguage includes primitive notions, axioms, and rules absent from the object language, so that there are theorems provable in the metalanguage not provable in the object language. The undefinability theorem is conventionally attributed to Alfred Tarski.
https://en.wikipedia.org/wiki/HP%202100
The HP 2100 is a series of 16-bit minicomputers that were produced by Hewlett-Packard (HP) from the mid-1960s to early 1990s. Tens of thousands of machines in the series were sold over its twenty-five year lifetime, making HP the fourth largest minicomputer vendor during the 1970s. The design started at Data Systems Inc (DSI), and was originally known as the DSI-1000. HP purchased the company in 1964 and merged it into their Dymec division. The original model, the 2116A built using integrated circuits and magnetic-core memory, was released in 1966. Over the next four years, models A through C were released with different types of memory and expansion, as well as the cost-reduced 2115 and 2114 models. All of these models were replaced by the HP 2100 series in 1971, and then again as the 21MX series in 1974 when the magnetic-core memory was replaced with semiconductor memory. All of these models were also packaged as the HP 2000 series, combining a 2100-series machine with optional components in order to run the BASIC programming language in a multi-user time sharing fashion. HP Time-Shared BASIC was popular in the 1970s, and many early BASIC programs were written on or for the platform, most notably the seminal Star Trek that was popular during the early home computer era. The People's Computer Company published their programs in HP 2000 format. The introduction of the HP 3000 in 1974 provided high-end competition to the 2100 series; the entire line was renamed as the HP 1000 in 1977 and positioned as real-time computers. A greatly redesigned version was introduced in 1979 as the 1000 L-Series, using CMOS large scale integration chips and introducing a desk-side tower case model. This was the first version to break backward compatibility with previous 2100-series expansion cards. The final upgrade was the A-series, with new processors capable of more than 1 MIPS performance, with the final A990 released in 1990. History Origins HP formed Dynac in 1956 to act a
https://en.wikipedia.org/wiki/Combined%20cycle%20power%20plant
A combined cycle power plant is an assembly of heat engines that work in tandem from the same source of heat, converting it into mechanical energy. On land, when used to make electricity the most common type is called a combined cycle gas turbine (CCGT) plant. The same principle is also used for marine propulsion, where it is called a combined gas and steam (COGAS) plant. Combining two or more thermodynamic cycles improves overall efficiency, which reduces fuel costs. The principle is that after completing its cycle in the first engine, the working fluid (the exhaust) is still hot enough that a second subsequent heat engine can extract energy from the heat in the exhaust. Usually the heat passes through a heat exchanger so that the two engines can use different working fluids. By generating power from multiple streams of work, the overall efficiency can be increased by 50–60%. That is, from an overall system efficiency of say 34% for a simple cycle, to as much as 64% net for a combined cycle under specified conditions. This is more than 84% of the theoretical efficiency of a Carnot cycle. Heat engines can only use part of the energy from their fuel, so in a non-combined cycle heat engine, the remaining heat (i.e., hot exhaust gas) from combustion is wasted. Historical cycles Historically successful combined cycles have used mercury vapour turbines, magnetohydrodynamic generators and molten carbonate fuel cells, with steam plants for the low temperature "bottoming" cycle. Very low temperature bottoming cycles have been too costly due to the very large sizes of equipment needed to handle the large mass flows and small temperature differences. However, in cold climates it is common to sell hot power plant water for hot water and space heating. Vacuum-insulated piping can let this utility reach as far as 90 km. The approach is called "combined heat and power" (CHP). In stationary and marine power plants, a widely used combined cycle has a
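An idealized way to see the gain (my own illustration, assuming all heat rejected by the topping cycle is available to the bottoming cycle): if the topping cycle has efficiency η1 and the bottoming cycle η2, the combined efficiency is

\[ \eta = \eta_1 + \eta_2\,(1-\eta_1); \quad \text{e.g. } \eta_1 = 0.40,\ \eta_2 = 0.35 \;\Rightarrow\; \eta = 0.40 + 0.35 \times 0.60 = 0.61. \]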
https://en.wikipedia.org/wiki/Topological%20abelian%20group
In mathematics, a topological abelian group, or TAG, is a topological group that is also an abelian group. That is, a TAG is both a group and a topological space, the group operations are continuous, and the group's binary operation is commutative. The theory of topological groups applies also to TAGs, but more can be done with TAGs. Locally compact TAGs, in particular, are used heavily in harmonic analysis. See also Protorus, a topological abelian group that is compact and connected
https://en.wikipedia.org/wiki/Cladogenesis
Cladogenesis is an evolutionary splitting of a parent species into two distinct species, forming a clade. This event usually occurs when a few organisms end up in new, often distant areas or when environmental changes cause several extinctions, opening up ecological niches for the survivors and causing population bottlenecks and founder effects that change the allele frequencies of the diverging populations compared to their ancestral population. The events that cause these species to originally separate from each other over distant areas may still allow both of the species to have equal chances of surviving, reproducing, and even evolving to better suit their environments while still being two distinct species due to subsequent natural selection, mutations and genetic drift. Cladogenesis is in contrast to anagenesis, in which an ancestral species gradually accumulates change, and eventually, when enough is accumulated, the species is sufficiently distinct and different enough from its original starting form that it can be labeled as a new form: a new species. With anagenesis, the lineage in a phylogenetic tree does not split. To determine whether a speciation event is cladogenesis or anagenesis, researchers may use simulation, evidence from fossils, molecular evidence from the DNA of different living species, or modelling. It has however been debated whether the distinction between cladogenesis and anagenesis is necessary at all in evolutionary theory. See also Anagenesis, Evolutionary biology, Speciation
https://en.wikipedia.org/wiki/Edge-transitive%20graph
In the mathematical field of graph theory, an edge-transitive graph is a graph G such that, given any two edges e1 and e2 of G, there is an automorphism of G that maps e1 to e2. In other words, a graph is edge-transitive if its automorphism group acts transitively on its edges. Examples and properties The number of connected simple edge-transitive graphs on n vertices is 1, 1, 2, 3, 4, 6, 5, 8, 9, 13, 7, 19, 10, 16, 25, 26, 12, 28 ... Edge-transitive graphs include all symmetric graphs, such as the vertices and edges of the cube. Symmetric graphs are also vertex-transitive (if they are connected), but in general edge-transitive graphs need not be vertex-transitive. Every connected edge-transitive graph that is not vertex-transitive must be bipartite (and hence can be colored with only two colors), and either semi-symmetric or biregular. Examples of edge- but not vertex-transitive graphs include the complete bipartite graphs Km,n where m ≠ n, which include the star graphs K1,n. For graphs on n vertices, there are (n − 1)/2 such graphs for odd n and (n − 2)/2 for even n. Additional edge-transitive graphs which are not symmetric can be formed as subgraphs of these complete bipartite graphs in certain cases. Subgraphs of complete bipartite graphs Km,n exist when m and n share a factor greater than 2. When the greatest common factor is 2, subgraphs exist when 2n/m is even or if m = 4 and n is an odd multiple of 6. So edge-transitive subgraphs exist for K3,6, K4,6 and K5,10 but not K4,10. An alternative construction for some edge-transitive graphs is to add vertices to the midpoints of edges of a symmetric graph with v vertices and e edges, creating a bipartite graph with e vertices of degree 2, and v of degree 2e/v. An edge-transitive graph that is also regular, but still not vertex-transitive, is called semi-symmetric. The Gray graph, a cubic graph on 54 vertices, is an example of a regular graph which is edge-transitive but not vertex-transitive. The Folkman graph, a quartic graph on 20 vertices, is the smallest such graph.
https://en.wikipedia.org/wiki/ABC%20800
The Luxor ABC 800 series are office versions of the ABC 80 home computer. They featured an enhanced BASIC interpreter, a slightly faster clocked CPU and more memory: 32 kilobytes RAM and 32 KB ROM was now standard, and the Z80 is clocked at 3 MHz (a quarter of the 12 MHz crystal). It featured 40×24 text mode with eight colors (ABC 800 C) or 80×24 text mode monochrome (ABC 800 M). They could also be extended with "high" resolution graphics (240×240 pixels) using RAM as video memory. Models ABC 800 The ABC 800 came in a monochrome version with amber text on a brown background with an 80 character wide screen, and a color version with 40 characters. The main board is integrated with the keyboard, much like the Amiga 500. However, the ABC computer has a very sturdy metal chassis. Storage is usually two 5.25" floppy disk units in 160, 320 or 640 KB capacity. External hard disk systems became available later (primarily the ABC 850 with 10 MB). Model numbers are 'ABC 800 M' for monochrome and 'ABC 800 C' for color. Luxor advertising asked, "Who needs IBM-compatibility?" However, most computer buyers eventually considered it a requirement. A certain degree of compatibility between the ABC and IBM PC platforms could be achieved with the help of a program called 'W ABC'. The ABC 800 computer was also sold by Facit under the name Facit DTC. ABC 802 The ABC 802 is a compact version with 64 KB RAM where 32 KB is used as a RAM disk. The main board is integrated with a 9" CRT screen and has improved graphics, though no high-resolution graphics. The ABC 802 had a small monochrome screen in yellow phosphor and was intended for offices, with two 5.25-inch disk drives along the side of the display. The grey-brown colour was common to all ABC 800 (and ABC 1600) products and differed from the beige ABC 80. ABC 806 The ABC 806 is a version with main board, screen (DA-15) and keyboard (DIN-7) as separate units. It has more RAM, part of which is used as a RAM disk, as well as more
https://en.wikipedia.org/wiki/Winner%27s%20curse
The winner's curse is a phenomenon that may occur in common value auctions, where all bidders have the same (ex post) value for an item but receive different private (ex ante) signals about this value and wherein the winner is the bidder with the most optimistic evaluation of the asset and therefore will tend to overestimate and overpay. Accordingly, the winner will be "cursed" in one of two ways: either the winning bid will exceed the value of the auctioned asset, making the winner worse off in absolute terms, or the value of the asset will be less than the bidder anticipated, so the bidder may garner a net gain but will be worse off than anticipated. However, an actual overpayment will generally occur only if the winner fails to account for the winner's curse when bidding (an outcome that, according to the revenue equivalence theorem, need never occur). The winner's curse phenomenon was first addressed in 1971 by three Atlantic Richfield petroleum engineers who claimed that oil companies suffered unexpectedly low returns "year after year" in early Outer Continental Shelf oil lease auctions. Outer Continental Shelf auctions are common value auctions, where the value of the oil in the ground is essentially the same to all bidders. Explanation In a common value auction, the auctioned item is of roughly equal value to all bidders, but the bidders don't know the item's market value when they bid. Each player independently estimates the value of the item before bidding. The winner of an auction is the bidder who submits the highest bid. Since the auctioned item is worth roughly the same to all bidders, they are distinguished only by their respective estimates of the market value. The winner, then, is the bidder making the highest estimate. If we assume that the average bid is accurate, then the highest bidder overestimates the item's value. Thus, the auction's winner is likely to overpay. More formally, this result is obtained using conditional expectation. We are interested in a bidder's expected gain from the auction conditional on the event that the bidder wins it.
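The effect is easy to see numerically (my own Monte Carlo sketch, not from the article): give every bidder an unbiased noisy estimate of a common value and let the most optimistic estimate win; the winning estimate systematically exceeds the true value.

```python
# Monte Carlo illustration of the winner's curse. All bidders value the
# item at true_value but observe private noisy estimates; the winner is
# the bidder with the largest (most optimistic) estimate. Names are mine.
import random

def average_winner_overestimate(n_bidders=10, true_value=100.0,
                                noise_sd=10.0, trials=100_000):
    total = 0.0
    for _ in range(trials):
        estimates = [random.gauss(true_value, noise_sd)
                     for _ in range(n_bidders)]
        total += max(estimates) - true_value  # winner's error, usually > 0
    return total / trials

# With 10 bidders and sd = 10, the winning estimate exceeds the true value
# by roughly 15 on average, even though each individual estimate is unbiased.
print(average_winner_overestimate())
```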
https://en.wikipedia.org/wiki/Small%20interfering%20RNA
Small interfering RNA (siRNA), sometimes known as short interfering RNA or silencing RNA, is a class of double-stranded, non-coding RNA molecules, typically 20–24 (normally 21) base pairs in length, similar to miRNA, and operating within the RNA interference (RNAi) pathway. It interferes with the expression of specific genes with complementary nucleotide sequences by degrading mRNA after transcription, preventing translation. Structure Naturally occurring siRNAs have a well-defined structure that is a short (usually 20 to 24-bp) double-stranded RNA (dsRNA) with phosphorylated 5' ends and hydroxylated 3' ends with two overhanging nucleotides. The Dicer enzyme catalyzes production of siRNAs from long dsRNAs and small hairpin RNAs. siRNAs can also be introduced into cells by transfection. Since in principle any gene can be knocked down by a synthetic siRNA with a complementary sequence, siRNAs are an important tool for validating gene function and drug targeting in the post-genomic era. History In 1998, Andrew Fire at Carnegie Institution for Science in Washington DC and Craig Mello at University of Massachusetts in Worcester discovered the RNAi mechanism while working on gene expression in the nematode Caenorhabditis elegans. They won the Nobel Prize for their research with RNAi in 2006. siRNAs and their role in post-transcriptional gene silencing (PTGS) were discovered in plants by David Baulcombe's group at the Sainsbury Laboratory in Norwich, England and reported in Science in 1999. Thomas Tuschl and colleagues soon reported in Nature that synthetic siRNAs could induce RNAi in mammalian cells. In 2001, the expression of a specific gene was successfully silenced by introducing chemically synthesized siRNA into mammalian cells (Tuschl et al.). These discoveries led to a surge in interest in harnessing RNAi for biomedical research and drug development. Significant developments in siRNA therapies have been made with both organic (carbon-based) and inorganic (non-carbon-based) nanoparticles.
https://en.wikipedia.org/wiki/Trusted%20third%20party
In cryptography, a trusted third party (TTP) is an entity which facilitates interactions between two parties who both trust the third party; the third party reviews all critical transaction communications between the parties, a safeguard motivated by the ease with which fraudulent digital content can be created. In TTP models, the relying parties use this trust to secure their own interactions. TTPs are common in any number of commercial transactions and in cryptographic digital transactions as well as cryptographic protocols. For example, a certificate authority (CA) would issue a digital certificate to one of the two parties in the example below. The CA then becomes the TTP to that certificate's issuance. Likewise, transactions that need a third party recordation would also need a third-party repository service of some kind. 'Trusted' means that a system needs to be trusted to act in your interests, but it has the option (either at will or involuntarily) to act against your interests. 'Trusted' also means that there is no way to verify if that system is operating in your interests, hence the need to trust it. Corollary: if a system can be verified to operate in your interests, it would not need your trust. And if it can be shown to operate against your interests, one would not use it. An example Suppose Alice and Bob wish to communicate securely – they may choose to use cryptography. Without ever having met Bob, Alice may need to obtain a key to use to encrypt messages to him. In this case, a TTP is a third party who may have previously seen Bob (in person), or is otherwise willing to vouch that this key (typically in a public key certificate) belongs to the person indicated in that certificate, in this case, Bob. Let's call this third person Trent. Trent gives Bob's key to Alice, who then uses it to send secure messages to Bob. Alice can trust this key to be Bob's if she trusts Trent. In such discussions, it is simply assumed that she has valid reasons to do so (of course there is the issu
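The Alice–Bob–Trent arrangement can be sketched concretely: Trent signs a statement binding Bob's identity to Bob's public key, and Alice accepts the key only if the signature verifies under Trent's (already trusted) public key. This is a minimal sketch using the third-party Python 'cryptography' package; Ed25519 and the ad-hoc statement format are illustrative choices, whereas real CAs issue X.509 certificates.

# Minimal TTP sketch: Trent vouches that a public key belongs to Bob.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

trent_key = Ed25519PrivateKey.generate()   # Trent's signing key; Alice trusts its public half
bob_key = Ed25519PrivateKey.generate()     # Bob's own key pair

bob_pub = bob_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
statement = b"subject=Bob;key=" + bob_pub  # a toy stand-in for a certificate body
signature = trent_key.sign(statement)      # Trent vouches for the name-to-key binding

# Alice verifies the binding with Trent's public key before encrypting to Bob:
try:
    trent_key.public_key().verify(signature, statement)
    print("Alice accepts: Trent vouches that this key is Bob's")
except InvalidSignature:
    print("Alice rejects the key")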
https://en.wikipedia.org/wiki/Thermomicrobia
The Thermomicrobia is a group of thermophilic green non-sulfur bacteria. Based on the species Thermomicrobium roseum (the type species) and Sphaerobacter thermophilus, this bacterial class has the following description: The class Thermomicrobia subdivides into two orders with validly published names: Thermomicrobiales Garrity and Holt 2001 and Sphaerobacterales Stackebrandt, Rainey and Ward-Rainey 1997. Gram-negative. Pleomorphic, non-motile, non-spore-forming rods. No diamino acid present. No peptidoglycan in significant amount. Atypical proteinaceous cell walls. Hyperthermophilic, with an optimum growth temperature of 70–75 °C. Obligately aerobic and chemoorganotrophic. As thermophilic bacteria, members of this class are usually found in environments which are distant from human activity. However, they have features such as improved growth in the presence of antibiotics and CO-oxidizing activity, making them interesting topics of research (e.g. for biotechnology applications). History In 1973, a strain of rose-pink thermophilic bacteria was isolated from Toadstool Spring in Yellowstone National Park, which was later named Thermomicrobium roseum and proposed as a novel species of the novel genus Thermomicrobium. At that time the genus was categorized under the family Achromobacteraceae, but it became a distinct phylum by 2001. In 2004, it was proposed, on the basis of an analysis of genetic affiliations, that the Thermomicrobia should more properly be reclassified as a class belonging to the phylum Chloroflexota (formerly Chloroflexi). The bacterium Sphaerobacter thermophilus, originally described as an actinobacterium, is now considered a member of the Thermomicrobia. In the same year, another strain of rose-pink thermophilic bacteria was isolated from Yellowstone National Park, which was named Thermobaculum terrenum. Later genome-based analysis placed this species in the class Thermomicrobia. However, the current standing of Thermobaculum terrenum is disputed. In 2012, a thermo-tolerant nitrite-o
https://en.wikipedia.org/wiki/Thermophobia
Thermophobia (adjective: thermophobic) is intolerance for high temperatures by either inorganic materials or organisms. The term has a number of specific usages. In pharmacy, a thermophobic foam consisting of 0.1% betamethasone valerate was found to be at least as effective as conventional remedies for treating dandruff. In addition, the foam is non-greasy and does not irritate the scalp. Another use of thermophobic material is in treating hyperhidrosis of the axilla and the palm: a thermophobic foam named Bettamousse, developed by the Italian company Mipharm, was found to treat hyperhidrosis effectively. In biology, some bacteria are thermophobic, such as Mycobacterium leprae, which causes leprosy. In living organisms, a thermophobic response is a negative response to higher temperatures. In physics, thermophobia is the motion of particles in mixtures (solutions, suspensions, etc.) towards areas of lower temperature, a particular case of thermophoresis. In medicine, thermophobia refers to a sensory dysfunction, a sensation of abnormal heat, which may be associated with, e.g., hyperthyroidism. See also Heat intolerance References Temperature Physiology
https://en.wikipedia.org/wiki/Source%20port
A source port is a software project based on the source code of a game engine that allows the game to be played on operating systems or computing platforms with which the game was not originally compatible. Description Source ports are often created by fans after the original developer ends maintenance support for a game and releases its source code to the public (see List of commercial video games with later released source code). In some cases, the source code used to create a source port must be obtained through reverse engineering, in situations where the original source was never formally released by the game's developers. The term was coined after the release of the source code to Doom. Due to copyright issues concerning the sound library used by the original DOS version, id Software released only the source code to the Linux version of the game. Since the majority of Doom players were DOS users, the first step for a fan project was to port the Linux source code to DOS. A source port typically only includes the engine portion of the game and requires that the data files of the game in question already be present on users' systems. Source ports are similar to unofficial patches in that neither changes the original gameplay as such; projects that do are by definition mods. However, many source ports add support for gameplay mods, which is usually optional (e.g. DarkPlaces consists of a source port engine and a gameplay mod that are even distributed separately). While the primary goal of any source port is compatibility with newer hardware, many projects support other enhancements. Common examples of additions include support for higher video resolutions and different aspect ratios, hardware-accelerated renderers (OpenGL and/or Direct3D), enhanced input support (including the ability to map controls onto additional input devices), 3D character models (in the case of 2.5D games), higher resolution textures, support to replace MIDI with digital audio (MP3, O
https://en.wikipedia.org/wiki/Closed-form%20expression
In mathematics, an expression is in closed form if it is formed with constants, variables and a finite set of basic functions connected by arithmetic operations (+, −, ×, /, and integer powers) and function composition. Commonly, the allowed functions are nth root, exponential function, logarithm, and trigonometric functions. However, the set of basic functions depends on the context. The closed-form problem arises when new ways are introduced for specifying mathematical objects, such as limits, series and integrals: given an object specified with such tools, a natural problem is to find, if possible, a closed-form expression of this object, that is, an expression of this object in terms of previous ways of specifying it. Example: roots of polynomials The quadratic formula is a closed form of the solutions to the general quadratic equation ax^2 + bx + c = 0 (written out below). More generally, in the context of polynomial equations, a closed form of a solution is a solution in radicals; that is, a closed-form expression for which the allowed functions are only nth roots and the field operations (+, −, ×, /). In fact, field theory allows showing that if a solution of a polynomial equation has a closed form involving exponentials, logarithms or trigonometric functions, then it also has a closed form that does not involve these functions. There are expressions in radicals for all solutions of cubic equations (degree 3) and quartic equations (degree 4). However, they are rarely written explicitly because they are too complicated to be useful. In higher degrees, the Abel–Ruffini theorem states that there are equations whose solutions cannot be expressed in radicals, and, thus, have no closed forms. The simplest example is the equation x^5 − x − 1 = 0. Galois theory provides an algorithmic method for deciding whether a particular polynomial equation can be solved in radicals. Symbolic integration Symbolic integration consists essentially of the search for closed forms for antiderivatives of functions that are specified by closed-form expr
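For reference, the closed form alluded to above can be written out explicitly; this is the standard quadratic formula, stated here in LaTeX.

% The general quadratic equation and the closed form of its solutions.
\[
  ax^2 + bx + c = 0 \quad (a \neq 0)
  \qquad\Longrightarrow\qquad
  x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
\]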
https://en.wikipedia.org/wiki/Chroot
chroot is an operation on Unix and Unix-like operating systems that changes the apparent root directory for the current running process and its children. A program that is run in such a modified environment cannot name (and therefore normally cannot access) files outside the designated directory tree. The term "chroot" may refer to the system call or the wrapper program. The modified environment is called a chroot jail. History The chroot system call was introduced during development of Version 7 Unix in 1979. One source suggests that Bill Joy added it on 18 March 1982 – 17 months before 4.2BSD was released – in order to test its installation and build system. All versions of BSD that had a kernel have chroot(2). An early use of the term "jail" as applied to chroot comes from Bill Cheswick creating a honeypot to monitor a hacker in 1991. An early article about jailbreaking appeared in Carole Fennelly's security column for SunWorld Online; the August 1999 and January 1999 editions cover most of the chroot() topics. To make it useful for virtualization, FreeBSD expanded the concept and in its 4.0 release in 2000 introduced the jail command. By 2002, an article written by Nicolas Boiteux described how to create a jail on Linux. By 2003, the first Internet microservice providers were using Linux jails to provide SaaS/PaaS services (shell containers, proxies, ircd, bots, ...), billed by usage. By 2005, Sun released Solaris Containers (also known as Solaris Zones), described as "chroot on steroids." By 2008, LXC (upon which Docker was later built) adopted the "container" terminology; it gained popularity in 2013 with the inclusion of user namespaces into Linux kernel 3.8. Uses A chroot environment can be used to create and host a separate virtualized copy of the software system. This can be useful for: Testing and development A test environment can be set up in the chroot for software that would otherwise be too risky
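On systems that expose the system call, entering a jail takes only a couple of steps. A minimal sketch (Unix only, requires root privileges; '/srv/jail' is a hypothetical directory prepared in advance with everything the process needs):

# Minimal sketch of entering a chroot jail.
import os

def enter_jail(new_root: str) -> None:
    os.chroot(new_root)   # '/' now refers to new_root for this process and its children
    os.chdir("/")         # avoid keeping a working directory outside the jail

# enter_jail("/srv/jail")   # afterwards, paths outside the jail cannot be named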
https://en.wikipedia.org/wiki/Sanyo
is a Japanese electronics manufacturer founded in 1947 by Toshio Iue, the brother-in-law of Kōnosuke Matsushita, the founder of Panasonic. Iue left Matsushita Electric Industrial (now Panasonic) to start his own business, acquiring some of its equipment to produce bicycle generator lamps. In 1950, the company was established. Sanyo began to diversify in the 1960s, launching Japan's first spray-type washing machine in 1953. In the 2000s, it was known as one of the 3S along with Sony and Sharp. Sanyo also focused on solar cell and lithium battery businesses. In 1992, it developed the world's first hybrid solar cell, and in 2002, it had a 41% share of the global lithium-ion battery market. In its heyday in 2003, Sanyo had sales of about ¥2.5 trillion. However, it fell into a financial crisis as a result of its huge investment in the semiconductor business. In 2009, Sanyo was acquired by Panasonic, and in 2011, it was fully consolidated into Panasonic and its brand disappeared. The company still exists as a legal entity for the purpose of winding up its affairs. History Beginnings Sanyo was founded when Toshio Iue the brother-in-law of Konosuke Matsushita and also a former Matsushita employee, was lent an unused Matsushita plant in 1947 and used it to make bicycle generator lamps. Sanyo was incorporated in 1949; in 1952 it made Japan's first plastic radio and in 1954 Japan's first pulsator-type washing machine. The company's name means three oceans in Japanese, referring to the founder's ambition to sell their products worldwide, across the Atlantic, Pacific, and Indian oceans. Sanyo in America In 1969 Howard Ladd became the Executive Vice President and COO of Sanyo Corporation. Ladd introduced the Sanyo brand to the United States in 1970. The ambition to sell Sanyo products worldwide was realized in the mid-1970s after Sanyo introduced home audio equipment, car stereos and other consumer electronics to the North American market. The company embarked on a heavy tel
https://en.wikipedia.org/wiki/Homology%20sphere
In algebraic topology, a homology sphere is an n-manifold X having the homology groups of an n-sphere, for some integer n ≥ 1. That is, H_0(X; Z) = H_n(X; Z) = Z and H_i(X; Z) = 0 for all other i. Therefore X is a connected space, with one non-zero higher Betti number, namely b_n = 1. It does not follow that X is simply connected, only that its fundamental group is perfect (see Hurewicz theorem). A rational homology sphere is defined similarly but using homology with rational coefficients. Poincaré homology sphere The Poincaré homology sphere (also known as Poincaré dodecahedral space) is a particular example of a homology sphere, first constructed by Henri Poincaré. Being a spherical 3-manifold, it is the only homology 3-sphere (besides the 3-sphere itself) with a finite fundamental group. Its fundamental group is known as the binary icosahedral group and has order 120. Since the fundamental group of the 3-sphere is trivial, this shows that there exist 3-manifolds with the same homology groups as the 3-sphere that are not homeomorphic to it. Construction A simple construction of this space begins with a dodecahedron. Each face of the dodecahedron is identified with its opposite face, using the minimal clockwise twist to line up the faces. Gluing each pair of opposite faces together using this identification yields a closed 3-manifold. (See Seifert–Weber space for a similar construction, using more "twist", that results in a hyperbolic 3-manifold.) Alternatively, the Poincaré homology sphere can be constructed as the quotient space SO(3)/I where I is the icosahedral group (i.e., the rotational symmetry group of the regular icosahedron and dodecahedron, isomorphic to the alternating group A5). More intuitively, this means that the Poincaré homology sphere is the space of all geometrically distinguishable positions of an icosahedron (with fixed center and diameter) in Euclidean 3-space. One can also pass instead to the universal cover of SO(3) which can be realized as the group of unit quaternions and
https://en.wikipedia.org/wiki/Line%20number
In computing, a line number is a method used to specify a particular sequence of characters in a text file. The most common method of assigning numbers to lines is to assign every line a unique number, starting at 1 for the first line, and incrementing by 1 for each successive line. In the C programming language the line number of a source code line is one greater than the number of new-line characters read or introduced up to that point. Programmers could also assign line numbers to statements in older programming languages, such as Fortran, JOSS, and BASIC. In Fortran, not every statement needed a line number, and line numbers did not have to be in sequential order. The purpose of line numbers was for branching and for reference by formatting statements. Both JOSS and BASIC made line numbers a required element of syntax. The primary reason for this is that most operating systems at the time lacked interactive text editors; since the programmer's interface was usually limited to a line editor, line numbers provided a mechanism by which specific lines in the source code could be referenced for editing, and by which the programmer could insert a new line at a specific point. Line numbers also provided a convenient means of distinguishing between code to be entered into the program and direct mode commands to be executed immediately when entered by the user (which do not have line numbers). Largely due to the prevalence of interactive text editing in modern operating systems, line numbers are not a feature of most programming languages, even modern Fortran and Basic. History FORTRAN In Fortran, as first specified in 1956, line numbers were used to define input/output patterns, to specify statements to be repeated, and for conditional branching. For example:

   DIMENSION ALPHA(25), RHO(25)
1) FORMAT(5F12.4)
2) READ 1, ALPHA, RHO, ARG
   SUM = 0.0
   DO 3 I=1, 25
   IF (ARG-ALPHA(I)) 4,3,3
3) SUM = SUM + ALPHA(I)
4) VALUE = 3.14159*RHO(I-1)
   PRINT 1, ARG, SUM
https://en.wikipedia.org/wiki/BESM-6
BESM-6 (БЭСМ-6, short for Большая электронно-счётная машина, i.e. 'Large Electronic Calculating Machine') was a Soviet electronic computer of the BESM series. It was the first Soviet second-generation, transistor-based computer. Overview The BESM-6 was the most well-known and influential model of the series designed at the Institute of Precision Mechanics and Computer Engineering. The design was completed in 1965. Production started in 1968 and continued for the following 19 years. Like its predecessors, the original BESM-6 was transistor-based (however, the version used in the 1980s as a component of the Elbrus supercomputer was built with integrated circuits). The machine's 48-bit processor ran at 10 MHz clock speed and featured two instruction pipelines, separate for the control and arithmetic units, and a data cache of sixteen 48-bit words. The system achieved a performance of 1 MIPS. The CDC 6600, a common Western supercomputer when the BESM-6 was released, achieved about 2 MIPS. The system memory was word-addressable using 15-bit addresses. The maximum addressable memory space was thus 32K words (192K bytes). A virtual memory system allowed this to be expanded up to 128K words (768K bytes). The BESM-6 was widely used in the USSR in the 1970s for various computation and control tasks. During the 1975 Apollo-Soyuz Test Project the processing of the space mission telemetry data was accomplished by a new computer complex which was based on a BESM-6. The processing of the Apollo-Soyuz mission's data by Soviet scientists finished half an hour earlier than that of their American colleagues from NASA. A total of 355 of these machines were built. Production ended in 1987. As the first Soviet computer with an installed base that was large for the time, the BESM-6 gathered a dedicated developer community. Over the years several operating systems and compilers for programming languages such as Fortran, ALGOL and Pascal were developed. A modification of the BESM-6 based on integrated circuits,
https://en.wikipedia.org/wiki/Galaxian
is a 1979 fixed shooter arcade video game developed and published by Namco. The player assumes control of the Galaxip starfighter in its mission to protect Earth from waves of aliens. Gameplay involves destroying each formation of aliens, who dive down towards the player in an attempt to hit them. Designed by company engineer Kazunori Sawano, Galaxian was Namco's answer to Space Invaders, a similar space shooter released the previous year by rival developer Taito. Space Invaders was a sensation in Japan, and Namco wanted a game that could compete against it. Sawano strove to make the game simplistic and easy to understand. He was inspired by the cinematic space combat scenes in Star Wars, with enemies originally being in the shape of the film's TIE Fighters. Galaxian is one of the first video games to feature RGB color graphics and the first ever to use a tile-based hardware system, which was capable of animated multi-color sprites as well as scrolling, though the latter was limited to the starfield background while the game itself remained a fixed shooter. Galaxian was Namco's first major arcade video game hit. It was the second highest-grossing arcade video game of 1979 and 1980 in Japan and the second highest-grossing of 1980 in the United States, where it became one of the best-selling arcade games of all time with 50,000 arcade units sold by 1982. The game was celebrated for its gameplay and usage of true color graphics. In retrospect, it has gained fame for its historical importance and technological accomplishments. Its success led to several sequels and reimaginings; most notable of these is Galaga, which usurped the original in popularity. Galaxian has also been ported to many home systems and is included in numerous Namco compilations. Gameplay Galaxian is a space-themed fixed shooter. The player controls a starship called the "Galaxip", the objective being to clear each round of aliens. The enemies appear in formation towards the top of the screen, wi
https://en.wikipedia.org/wiki/Integer-valued%20polynomial
In mathematics, an integer-valued polynomial (also known as a numerical polynomial) is a polynomial whose value is an integer for every integer n. Every polynomial with integer coefficients is integer-valued, but the converse is not true. For example, the polynomial P(t) = t(t + 1)/2 takes on integer values whenever t is an integer. That is because one of t and t + 1 must be an even number. (The values this polynomial takes are the triangular numbers.) Integer-valued polynomials are objects of study in their own right in algebra, and frequently appear in algebraic topology. Classification The class of integer-valued polynomials was described fully by George Pólya. Inside the polynomial ring of polynomials with rational number coefficients, the subring of integer-valued polynomials is a free abelian group. It has as basis the polynomials C(t, k) = t(t − 1)⋯(t − k + 1)/k! for k = 0, 1, 2, …, i.e., the binomial coefficients. In other words, every integer-valued polynomial can be written as an integer linear combination of binomial coefficients in exactly one way. The proof is by the method of discrete Taylor series: binomial coefficients are integer-valued polynomials, and conversely, the discrete difference of an integer series is an integer series, so the discrete Taylor series of an integer series generated by a polynomial has integer coefficients (and is a finite series). Fixed prime divisors Integer-valued polynomials may be used effectively to solve questions about fixed divisors of polynomials. For example, the polynomials P with integer coefficients that always take on even number values are just those P such that P/2 is integer-valued. Those in turn are the polynomials that may be expressed as a linear combination with even integer coefficients of the binomial coefficients. In questions of prime number theory, such as Schinzel's hypothesis H and the Bateman–Horn conjecture, it is a matter of basic importance to understand the case when P has no fixed prime divisor (this has been called Bunyakovsky's property, after Viktor Bunyakovsky)
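The discrete Taylor series argument sketched above is effectively an algorithm: iterated forward differences at 0 yield the coefficients in the binomial-coefficient basis, and for an integer-valued polynomial they come out integral. A small sketch (the function names are mine, not standard):

# Expand a polynomial in the binomial-coefficient basis via forward differences at 0.
from fractions import Fraction

def binomial_basis_coeffs(p, degree):
    """Coefficients c_k with p(t) = sum over k of c_k * C(t, k)."""
    values = [Fraction(p(n)) for n in range(degree + 1)]
    coeffs = []
    while values:
        coeffs.append(values[0])                      # c_k = (Delta^k p)(0)
        values = [b - a for a, b in zip(values, values[1:])]
    return coeffs

triangular = lambda t: Fraction(t * (t + 1), 2)       # t(t+1)/2, integer-valued
print(binomial_basis_coeffs(triangular, 2))           # [0, 1, 1]: t(t+1)/2 = C(t,1) + C(t,2)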
https://en.wikipedia.org/wiki/Invariant%20subspace
In mathematics, an invariant subspace of a linear mapping T : V → V, i.e. from some vector space V to itself, is a subspace W of V that is preserved by T. More generally, an invariant subspace for a collection of linear mappings is a subspace preserved by each mapping individually. For a single operator Consider a vector space V and a linear map T : V → V. A subspace W ⊆ V is called an invariant subspace for T, or equivalently, T-invariant, if T transforms any vector of W back into W. In formulas, this can be written T(W) ⊆ W, or: w ∈ W implies T(w) ∈ W. In this case, T restricts to an endomorphism T|_W : W → W of W. The existence of an invariant subspace also has a matrix formulation. Pick a basis C for W and complete it to a basis B of V. With respect to B, the operator T has block upper-triangular form T = [T11 T12; 0 T22] for some matrices T11, T12 and T22, where T11 is the matrix of the restriction T|_W with respect to C. Examples Any linear map T : V → V admits the following invariant subspaces: The vector space V, because T maps every vector in V into V. The set {0}, because T(0) = 0. These are the trivial invariant subspaces. Certain linear operators have no non-trivial invariant subspace: for instance, rotation of a two-dimensional real vector space. However, the axis of a rotation in three dimensions is always an invariant subspace. 1-dimensional subspaces If U is a 1-dimensional invariant subspace for the operator T with nonzero vector u ∈ U, then the vectors u and T(u) must be linearly dependent. Thus T(u) = αu for some scalar α. In fact, the scalar α does not depend on u. The equation above formulates an eigenvalue problem. Any eigenvector for T spans a 1-dimensional invariant subspace, and vice-versa. In particular, a nonzero invariant vector (i.e. a fixed point of T) spans an invariant subspace of dimension 1. As a consequence of the fundamental theorem of algebra, every linear operator on a nonzero finite-dimensional complex vector space has an eigenvector. Therefore, every such linear operator has a non-trivial invariant subspace. Diagonalization via projections Determining whether a given subspace W is invariant under T is ostensibly a problem of geometric nature. Matrix representation allows one to phr
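The rotation example above is easy to check numerically: for a rotation about the z-axis, the axis spans an invariant 1-dimensional subspace, while a generic line does not. A small NumPy sketch (the rank test encodes "T v is a scalar multiple of v"):

# Numerical check of 1-dimensional invariance under a 3-D rotation about z.
import numpy as np

theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])

def is_invariant_line(T, v, tol=1e-12):
    """True iff span{v} is T-invariant, i.e. the matrix [v, Tv] has rank 1."""
    return np.linalg.matrix_rank(np.column_stack([v, T @ v]), tol=tol) == 1

print(is_invariant_line(Rz, np.array([0.0, 0.0, 1.0])))   # True: the rotation axis
print(is_invariant_line(Rz, np.array([1.0, 0.0, 0.0])))   # False: rotated off its span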
https://en.wikipedia.org/wiki/Floor%20plan
In architecture and building engineering, a floor plan is a technical drawing to scale, showing a view from above, of the relationships between rooms, spaces, traffic patterns, and other physical features at one level of a structure. Dimensions are usually drawn between the walls to specify room sizes and wall lengths. Floor plans may also include details of fixtures like sinks, water heaters, furnaces, etc. Floor plans may include notes for construction to specify finishes, construction methods, or symbols for electrical items. It is also called a plan: a measured plane typically projected at a height of about four feet (1.2 m) above the floor, as opposed to an elevation, which is a measured plane projected from the side of a building along its height, or a section or cross section, where a building is cut along an axis to reveal the interior structure. Overview Similar to a map, the orientation of the view is downward from above, but unlike a conventional map, a plan is drawn at a particular vertical position (commonly at about four feet above the floor). Objects below this level are seen, objects at this level are shown 'cut' in plan-section, and objects above this vertical position within the structure are omitted or shown dashed. Plan view or planform is defined as a vertical orthographic projection of an object on a horizontal plane, like a map. The term may be used in general to describe any drawing showing the physical layout of objects. For example, it may denote the arrangement of the displayed objects at an exhibition, or the arrangement of exhibitor booths at a convention. Drawings are now reproduced using plotters and large format xerographic copiers. A reflected ceiling plan (RCP) shows a view of the room as if looking from above, through the ceiling, at a mirror installed one foot below the ceiling level, which shows the reflected image of the ceiling above. This convention maintains the same orientation of the floor and ceiling plans – looking down from above. RCPs a
https://en.wikipedia.org/wiki/ES%20EVM
The ES EVM (, "Unified System of Electronic Computing Machines"), or YeS EVM, also known in English literature as the Unified System or Ryad (, "Series"), is a series of mainframe computers generally compatible with IBM's System/360 and System/370 mainframes, built in the Comecon countries under the initiative of the Soviet Union between 1968 and 1998. More than 15,000 of the ES EVM mainframes were produced in total. Development In 1966, the Soviet economists suggested creating a unified series of mutually compatible computers. Due to the success of the IBM System/360 in the United States, the economic planners decided to use the IBM design, although some prominent Soviet computer scientists had criticized the idea and suggested instead choosing one of the Soviet indigenous designs, such as BESM or Minsk. The first works on the cloning began in 1968; production started in 1972. In addition, after 1968, other Comecon countries joined the project. With the exception of only a few hardware pieces, the ES EVM machines were recognized in the Western countries as independently designed, based on legitimate Soviet patents. Unlike the hardware, which was quite original, mostly created by reverse engineering, much of the software was based on slightly modified and localized IBM code. In 1974–1976, IBM had contacted the Soviet authorities and expressed interest in ES EVM development; however, after the Soviet Army entered Afghanistan, in 1979, all contacts between IBM and ES developers were interrupted, due to the U.S. embargo on technological cooperation with the USSR. Due to the CoCom restrictions, much of the software localization was done through disassembling the IBM software, with some minimal modification. The most common operating system was OS ES (), a modified version of OS/360; the later versions of OS ES were very original and different from the IBM OSes, but they also included a lot of original IBM code. There were even anecdotal rumors among the Soviet progr
https://en.wikipedia.org/wiki/List%20of%20Soviet%20computer%20systems
This is the list of Soviet computer systems. The Russian abbreviation EVM (ЭВМ), present in some of the names below, means "electronic computing machine" (электронная вычислительная машина).

List of hardware

Ministry of Radio Technology Computer systems from the Ministry of Radio Technology:
Agat (Агат) — Apple II clone
ES EVM (ЕС ЭВМ) — IBM mainframe clone
ES PEVM (ЕС ПЭВМ) — IBM PC compatible
M series — series of mainframes and mini-computers
Minsk (Минск)
Poisk (Поиск) — IBM PC-XT clone
Setun (Сетунь) — unique balanced ternary computer
Strela (Стрела)
Ural (Урал) — mainframe series
Vector-06C (Вектор-06Ц)

Ministry of Instrument Making Computer systems from the Ministry of Instrument Making:
Aragats (Арагац)
Iskra (Искра) — common name for many computers with different architectures
Iskra-1030 — Intel 8086 XT clone
KVM-1 (КВМ-1)
SM EVM (СМ ЭВМ) — most models were PDP-11 clones, while some others were HP 2100, VAX or Intel compatible

Ministry of the Electronics Industry Computer systems from the Ministry of the Electronics Industry:
Elektronika (Электроника) family
DVK family (ДВК) — PDP-11 clones
Elektronika BK-0010 (БК-0010, БК-0011) — LSI-11 clone home computer
UKNC (УКНЦ) — educational, PDP-11-like
Elektronika 60, Elektronika 100
Elektronika 85 — clone of DEC Professional 350 (F11)
Elektronika 85.1 — clone of DEC Professional 380 (J11)
Elektronika D3-28
Elektronika SS BIS (Электроника СС БИС) — Cray clone

Soviet Academy of Sciences
BESM (БЭСМ) — series of mainframes
Besta (Беста) — Unix box, Motorola 68020-based, Sun-3 clone
Elbrus (Эльбрус) — high-end mainframe series
Kronos (Кронос)
MESM (МЭСМ) — first Soviet Union computer (1950)
M-1 — one of the earliest stored-program computers (1950–1951)

ZX Spectrum clones
ATM Turbo
Dubna 48K — running at half the speed of the original
Hobbit
Pentagon
Radon 'Z'
Scorpion

Other
https://en.wikipedia.org/wiki/Penning%20trap
A Penning trap is a device for the storage of charged particles using a homogeneous magnetic field and a quadrupole electric field. It is mostly found in the physical sciences and related fields of study as a tool for precision measurements of properties of ions and stable subatomic particles, such as mass, fission yields, and isomeric yield ratios. One initial object of study was the so-called geonium atom, which represents a way to measure the electron magnetic moment by storing a single electron. These traps have been used in the physical realization of quantum computation and quantum information processing by trapping qubits. Penning traps are in use in many laboratories worldwide, including CERN, to store and investigate anti-particles such as antiprotons. The main advantages of Penning traps are the potentially long storage times and the existence of a multitude of techniques to manipulate and non-destructively detect the stored particles. This makes Penning traps versatile tools for the investigation of stored particles, but also for their selection, preparation or mere storage. History The Penning trap was named after F. M. Penning (1894–1953) by Hans Georg Dehmelt (1922–2017), who built the first trap. Dehmelt drew inspiration from the vacuum gauge built by F. M. Penning, in which the current through a discharge tube in a magnetic field is proportional to the pressure. Citing from H. Dehmelt's autobiography: "I began to focus on the magnetron/Penning discharge geometry, which, in the Penning ion gauge, had caught my interest already at Göttingen and at Duke. In their 1955 cyclotron resonance work on photoelectrons in vacuum Franken and Liebes had reported undesirable frequency shifts caused by accidental electron trapping. Their analysis made me realize that in a pure electric quadrupole field the shift would not depend on the location of the electron in the trap. This is an important advantage over many other traps that I decided to exploit. A magnetron
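The field configuration described in the lead is conventionally idealized as follows; these are the textbook formulas for an ideal Penning trap, not taken from this article.

% Ideal Penning trap: electrostatic quadrupole potential plus homogeneous axial field.
\[
  \Phi(\rho, z) = \frac{V_0}{4 d^2}\left( 2z^2 - \rho^2 \right), \qquad \mathbf{B} = B\,\hat{z}.
\]
% A particle of charge q and mass m oscillates axially at
\[
  \omega_z = \sqrt{\frac{q V_0}{m d^2}},
\]
% while the radial motion splits the free cyclotron frequency \(\omega_c = qB/m\) into the
% modified cyclotron and magnetron frequencies
\[
  \omega_\pm = \frac{\omega_c}{2} \pm \sqrt{\frac{\omega_c^2}{4} - \frac{\omega_z^2}{2}}.
\]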
https://en.wikipedia.org/wiki/Signed%20number%20representations
In computing, signed number representations are required to encode negative numbers in binary number systems. In mathematics, negative numbers in any base are represented by prefixing them with a minus sign ("−"). However, in RAM or CPU registers, numbers are represented only as sequences of bits, without extra symbols. The four best-known methods of extending the binary numeral system to represent signed numbers are: sign–magnitude, ones' complement, two's complement, and offset binary. Some of the alternative methods use implicit instead of explicit signs, such as negative binary, using the base −2. Corresponding methods can be devised for other bases, whether positive, negative, fractional, or other elaborations on such themes. There is no definitive criterion by which any of the representations is universally superior. For integers, the representation used in most current computing devices is two's complement, although the Unisys ClearPath Dorado series mainframes use ones' complement. History The early days of digital computing were marked by competing ideas about both hardware technology and mathematics technology (numbering systems). One of the great debates was the format of negative numbers, with some of the era's top experts expressing very strong and differing opinions. One camp supported two's complement, the system that is dominant today. Another camp supported ones' complement, where a negative value is formed by inverting all of the bits in its positive equivalent. A third group supported sign–magnitude, where a value is changed from positive to negative simply by toggling the word's highest-order bit. There were arguments for and against each of the systems. Sign–magnitude allowed for easier tracing of memory dumps (a common process in the 1960s) as small numeric values use fewer 1 bits. These systems did ones' complement math internally, so numbers would have to be converted to ones' complement values when they were transmitted from a register
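The four representations named above are easy to compare side by side. A small sketch (the 8-bit width is chosen for readability, and it is assumed that |x| fits in the available bits):

# The four classic encodings of a signed integer at a fixed bit width.
def encodings(x: int, bits: int = 8) -> dict:
    mask = (1 << bits) - 1
    mag = abs(x)
    return {
        "sign-magnitude":   (int(x < 0) << (bits - 1)) | mag,
        "ones' complement": mag if x >= 0 else ~mag & mask,   # negate by inverting all bits
        "two's complement": x & mask,
        "offset binary":    x + (1 << (bits - 1)),            # excess-128 for 8 bits
    }

for name, value in encodings(-5).items():
    print(f"{name:17s} {value:08b}")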
https://en.wikipedia.org/wiki/Darwin%20Medal
The Darwin Medal is one of the medals awarded by the Royal Society for "distinction in evolution, biological diversity and developmental, population and organismal biology". In 1885, the International Darwin Memorial Fund was transferred to the Royal Society. The fund was devoted to the promotion of biological research and was used to establish the Darwin Medal. The medal was first awarded to Alfred Russel Wallace in 1890 for "his independent origination of the theory of the origin of species by natural selection." The medal commemorates the work of English biologist Charles Darwin (1809–1882). Darwin, most famous for his 1859 book On the Origin of Species, was a fellow of the Royal Society, and had received the Royal Medal in 1853 and the Copley Medal in 1864. The Darwin Medal is 5.7 cm in diameter and is made of silver. The obverse has Darwin's portrait, while the reverse has a wreath of plants with Darwin's name in Latin, "Carolus Darwin". It is surrounded by the years of his birth and death in Roman numerals (MDCCCIX and MDCCCLXXXII). The general design of the medal was by John Evans, the president of the Royal Numismatic Society. Since its creation, the Darwin Medal has been awarded over 60 times. Among the recipients are Francis Darwin, Charles Darwin's son, and two married couples: Jack and Yolande Heslop-Harrison in 1982 and Peter and Rosemary Grant in 2002. Initially accompanied by a grant of £100, the medal is currently awarded with a grant of £2,000. All citizens who have been residents of the United Kingdom, Commonwealth of Nations, or the Republic of Ireland for more than three years are eligible for the medal. The medal was awarded biennially from 1890 until 2018; since then it has been awarded annually. The most recent winner of the Darwin Medal is Canadian biologist Dolph Schluter, who received it in 2021. List of recipients See also Awards, lectures and medals of the Royal Society References External links Awards established in 1890 Awar
https://en.wikipedia.org/wiki/Lava%20flow%20%28programming%29
In computer programming jargon, lava flow is a problem in which computer code written under sub-optimal conditions is put into production and added to while still in a developmental state. Often, putting the system into production results in a need to maintain backward compatibility (as many additional components now depend on it) with the original, incomplete design. Changes in the development team working on a project often exacerbate lava flows. As workers cycle in and out of the project, knowledge of the purpose of aspects of the system can be lost. Rather than clean up these pieces, subsequent workers work around them, increasing the complexity and mess of the system. Lava flow is considered an anti-pattern, a commonly encountered phenomenon leading to poor design. References Anti-patterns
https://en.wikipedia.org/wiki/Circuit%20diagram
A circuit diagram (or: wiring diagram, electrical diagram, elementary diagram, electronic schematic) is a graphical representation of an electrical circuit. A pictorial circuit diagram uses simple images of components, while a schematic diagram shows the components and interconnections of the circuit using standardized symbolic representations. The presentation of the interconnections between circuit components in the schematic diagram does not necessarily correspond to the physical arrangements in the finished device. Unlike a block diagram or layout diagram, a circuit diagram shows the actual electrical connections. A drawing meant to depict the physical arrangement of the wires and the components they connect is called artwork or layout, physical design, or wiring diagram. Circuit diagrams are used for the design (circuit design), construction (such as PCB layout), and maintenance of electrical and electronic equipment. In computer science, circuit diagrams are useful when visualizing expressions using Boolean algebra. Symbols Circuit diagrams are pictures with symbols that have differed from country to country and have changed over time, but are now to a large extent internationally standardized. Simple components often had symbols intended to represent some feature of the physical construction of the device. For example, the symbol for a resistor dates back to the time when that component was made from a long piece of wire wrapped in such a manner as to not produce inductance, which would have made it a coil. These wirewound resistors are now used only in high-power applications, smaller resistors being cast from carbon composition (a mixture of carbon and filler) or fabricated as an insulating tube or chip coated with a metal film. The internationally standardized symbol for a resistor is therefore now simplified to an oblong, sometimes with the value in ohms written inside, instead of the zig-zag symbol. A less common symbol is simply a series of peaks o
https://en.wikipedia.org/wiki/List%20of%20theorems
This is a list of notable theorems. Lists of theorems and similar statements include:
List of fundamental theorems
List of lemmas
List of conjectures
List of inequalities
List of mathematical proofs
List of misnamed theorems

Most of the results below come from pure mathematics, but some are from theoretical physics, economics, and other applied fields.

0–9
2-factor theorem (graph theory)
15 and 290 theorems (number theory)
2π theorem (Riemannian geometry)

Theorems
https://en.wikipedia.org/wiki/Nerve%20complex
In topology, the nerve complex of a set family is an abstract complex that records the pattern of intersections between the sets in the family. It was introduced by Pavel Alexandrov and now has many variants and generalisations, among them the Čech nerve of a cover, which in turn is generalised by hypercoverings. It captures many of the interesting topological properties in an algorithmic or combinatorial way. Basic definition Let I be a set of indices and C be a family of sets (U_i)_{i∈I}. The nerve of C is a set of finite subsets of the index set I. It contains all finite subsets J ⊆ I such that the intersection of the U_i whose subindices are in J is non-empty: N(C) = { J ⊆ I : J finite, ∩_{i∈J} U_i ≠ ∅ }. In Alexandrov's original definition, the sets U_i are open subsets of some topological space X. The set N(C) may contain singletons (elements i ∈ I such that U_i is non-empty), pairs (pairs of elements i, j ∈ I such that U_i ∩ U_j ≠ ∅), triplets, and so on. If J ∈ N(C), then any subset of J is also in N(C), making N(C) an abstract simplicial complex. Hence N(C) is often called the nerve complex of C. Examples Let X be the circle and C = {U_1, U_2}, where U_1 is an arc covering the upper half of X and U_2 is an arc covering its lower half, with some overlap at both sides (they must overlap at both sides in order to cover all of X). Then N(C) = { {1}, {2}, {1, 2} }, which is an abstract 1-simplex. Let X be the circle and C = {U_1, U_2, U_3}, where each U_i is an arc covering one third of X, with some overlap with the adjacent U_j. Then N(C) = { {1}, {2}, {3}, {1, 2}, {2, 3}, {1, 3} }. Note that {1, 2, 3} is not in N(C) since the common intersection of all three sets is empty; so N(C) is an unfilled triangle. The Čech nerve Given an open cover C = {U_i : i ∈ I} of a topological space X, or more generally a cover in a site, we can consider the pairwise fibre products U_i ×_X U_j, which in the case of a topological space are precisely the intersections U_i ∩ U_j. The collection of all such intersections can be referred to as U ×_X U and the triple intersections as U ×_X U ×_X U. By considering the natural maps U ×_X U → U and U → U ×_X U, we can construct a simplicial object S(U)_• defined by S(U)_n = U ×_X ⋯ ×_X U, the n-fold fibre product. This is the Čech nerve. By taking connected components we get a simplicial s
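For finite families the nerve can be computed directly from the definition. A brute-force sketch (the three sets model the three-arc circle example above, discretized to six points):

# Compute the nerve of a finite family of finite sets.
from itertools import combinations

def nerve(sets: dict) -> set:
    """All index subsets J (as frozensets) whose members share a common element."""
    result = set()
    indices = list(sets)
    for r in range(1, len(indices) + 1):
        for J in combinations(indices, r):
            if set.intersection(*(sets[i] for i in J)):
                result.add(frozenset(J))
    return result

# Three arcs around a discretized circle, each overlapping only its neighbours:
C = {1: {0, 1, 2}, 2: {2, 3, 4}, 3: {4, 5, 0}}
print(sorted(map(sorted, nerve(C))))   # all pairs appear, but not {1, 2, 3}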
https://en.wikipedia.org/wiki/Secure%20copy%20protocol
Secure copy protocol (SCP) is a means of securely transferring computer files between a local host and a remote host or between two remote hosts. It is based on the Secure Shell (SSH) protocol. "SCP" commonly refers to both the Secure Copy Protocol and the program itself. According to OpenSSH developers in April 2019, SCP is outdated, inflexible and not readily fixed; they recommend the use of more modern protocols like SFTP and rsync for file transfer. As of OpenSSH version 9.0, the scp client therefore uses SFTP for file transfers by default instead of the legacy SCP/RCP protocol. Secure Copy Protocol SCP is a network protocol, based on the BSD RCP protocol, which supports file transfers between hosts on a network. SCP uses Secure Shell (SSH) for data transfer and uses the same mechanisms for authentication, thereby ensuring the authenticity and confidentiality of the data in transit. A client can send (upload) files to a server, optionally including their basic attributes (permissions, timestamps). Clients can also request files or directories from a server (download). SCP runs over TCP port 22 by default. Like RCP, there is no RFC that defines the specifics of the protocol. Function Normally, a client initiates an SSH connection to the remote host, and requests an SCP process to be started on the remote server. The remote SCP process can operate in one of two modes: source mode, which reads files (usually from disk) and sends them back to the client, or sink mode, which accepts the files sent by the client and writes them (usually to disk) on the remote host. For most SCP clients, source mode is generally triggered with the -f flag (from), while sink mode is triggered with -t (to). These flags are used internally and are not documented outside the SCP source code. Remote to remote mode In the past, in remote-to-remote secure copy, the SCP client opened an SSH connection to the source host and requested that it, in turn, open an SCP connection to the
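Typical client-side usage of the scp program looks like the following; the host name and paths are placeholders, and the subprocess wrapper is just one convenient way to invoke it from Python.

# Upload and download with the scp command (host and paths are placeholders).
# On OpenSSH 9.0+ these transfers run over SFTP by default, as noted above.
import subprocess

# send (upload) a local file to a remote host:
subprocess.run(["scp", "report.pdf", "alice@example.com:/home/alice/"], check=True)

# request (download) a remote file into the current directory:
subprocess.run(["scp", "alice@example.com:/home/alice/report.pdf", "."], check=True)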
https://en.wikipedia.org/wiki/Wastewater%20treatment
Wastewater treatment is a process which removes contaminants from wastewater and converts this into an effluent that can be returned to the water cycle. Once returned to the water cycle, the effluent creates an acceptable impact on the environment or is reused for various purposes (called water reclamation). The treatment process takes place in a wastewater treatment plant. There are several kinds of wastewater which are treated at the appropriate type of wastewater treatment plant. For domestic wastewater (also called municipal wastewater or sewage), the treatment plant is called a sewage treatment plant. For industrial wastewater, treatment either takes place in a separate industrial wastewater treatment plant, or in a sewage treatment plant (usually after some form of pre-treatment). Further types of wastewater treatment plants include agricultural wastewater treatment plants and leachate treatment plants. Processes commonly used in wastewater treatment include phase separation (such as sedimentation), biological and chemical processes (such as oxidation) or polishing. The main by-product from wastewater treatment plants is a type of sludge that is usually treated in the same or another wastewater treatment plant. Biogas can be another by-product if anaerobic treatment processes are used. Treated wastewater can be reused as reclaimed water. The main purpose of wastewater treatment is for the treated wastewater to be able to be disposed of or reused safely. However, before it is treated, the options for disposal or reuse must be considered so the correct treatment process is used on the wastewater. Bangladesh has officially inaugurated the largest single sewage treatment plant (STP) in South Asia, located in the Khilgaon area of Dhaka. With a capacity to treat five million sewage per day, the STP marks a significant step towards addressing the country's wastewater management challenges. The term "wastewater treatment" is often used to mean "sewage treatment". Typ
https://en.wikipedia.org/wiki/SM%20EVM
SM EVM (СМ ЭВМ, abbreviation of Система Малых ЭВМ—literally System of Mini Computers) are several types of Soviet and Comecon minicomputers produced from 1975 through the 1980s. Most types of SM EVM are clones of DEC PDP-11 and VAX. SM-1 and SM-2 are clones of Hewlett-Packard minicomputers. The common operating systems for the PDP-11 clones are translated versions of RSX-11 (ОС РВ) for the higher spec models and RT-11 (РАФОС, ФОДОС) for lower spec models. Also available for the high-end PDP-11 clones is MOS, a clone of UNIX. See also SM-4 SM-1420 SM-1600 SM-1710 SM-1720 References Computer-related introductions in 1975 Minicomputers Soviet computer systems PDP-11
https://en.wikipedia.org/wiki/Kakeya%20set
In mathematics, a Kakeya set, or Besicovitch set, is a set of points in Euclidean space which contains a unit line segment in every direction. For instance, a disk of radius 1/2 in the Euclidean plane, or a ball of radius 1/2 in three-dimensional space, forms a Kakeya set. Much of the research in this area has studied the problem of how small such sets can be. Besicovitch showed that there are Besicovitch sets of measure zero. A Kakeya needle set (sometimes also known as a Kakeya set) is a (Besicovitch) set in the plane with a stronger property, that a unit line segment can be rotated continuously through 180 degrees within it, returning to its original position with reversed orientation. Again, the disk of radius 1/2 is an example of a Kakeya needle set. Kakeya needle problem The Kakeya needle problem asks whether there is a minimum area of a region in the plane, in which a needle of unit length can be turned through 360°. This question was first posed, for convex regions, by Sōichi Kakeya. The minimum area for convex sets is achieved by an equilateral triangle of height 1 and area 1/√3, as Pál showed. Kakeya seems to have suggested that the Kakeya set of minimum area, without the convexity restriction, would be a three-pointed deltoid shape. However, this is false; there are smaller non-convex Kakeya sets. Besicovitch needle sets Besicovitch was able to show that there is no lower bound > 0 for the area of such a region in which a needle of unit length can be turned around. That is, for every ε > 0, there is a region of area less than ε within which the needle can be moved through a continuous motion that rotates it a full 360 degrees. This built on earlier work of his, on plane sets which contain a unit segment in each orientation. Such a set is now called a Besicovitch set. Besicovitch's work showing such a set could have arbitrarily small measure was from 1919. The problem may have been considered by analysts before that. One method of constructing a Besicovitch set (see figure for co
https://en.wikipedia.org/wiki/ES%20PEVM
ES PEVM (ЕС ПЭВМ) was a Soviet clone of the IBM PC in the 1980s. The ES PEVM model lineup also included analogues of the IBM PC XT, IBM PC AT, and IBM XT/370. The computers and software were adapted in Minsk, Belarus, at the Scientific Research Institute of Electronic Computer Machines (НИИ ЭВМ). They were manufactured in Minsk as well, at Minsk Production Group for Computing Machinery (Минское производственное объединение вычислительной техники (МПО ВТ)). Description The first models of ES PEVM (ES-1840, ES-1841, ES-1842), unlike the IBM PC, had two units: a system unit and a floppy-drive unit. These models used a backplane instead of the main board. Although the system bus was compatible with the ISA bus, it used a different type of connector, so the IBM PC expansion cards could not be installed in the ES PEVM. Later models (ES-1843, ES-1849 etc.) were fully compatible with IBM PC XT and IBM PC AT. Unlike the IBM PC, which used the Intel 8088 processor, the early ES PEVM models used the K1810VM86 processor with a 16-bit bus and a clock frequency of 5 MHz. The processor was placed on a separate board. Early versions of the board did not have a socket for the floating-point coprocessor. The following boards were produced for the ES-1840 and ES-1841:
CPU board (containing 54 chips). It was installed in every configuration of the computer.
128 KiB RAM board. It was installed in early models of ES PEVM.
512 KiB RAM board (containing 110 chips). Variants with 256 KiB or 128 KiB were available.
MDA adapter board (containing 91 chips). It was built using the Bulgarian CM607 chip (MC6845 clone).
CGA adapter board (containing 94 chips).
Floppy-drive adapter board (containing 32 chips). It was built using the Bulgarian CM609 chip (Intel 8272 clone).
Serial interface adapter board (containing 56 chips). It was electrically incompatible with the IBM PC COM port and was not supported in the BIOS.
Volume of production for a number of models: Software The computers were shipped with AlphaDOS,
https://en.wikipedia.org/wiki/Hardening%20%28computing%29
In computer security, hardening is usually the process of securing a system by reducing its surface of vulnerability, which is larger when a system performs more functions; in principle a single-function system is more secure than a multipurpose one. Reducing available ways of attack typically includes changing default passwords, the removal of unnecessary software, unnecessary usernames or logins, and the disabling or removal of unnecessary services. There are various methods of hardening Unix and Linux systems. This may involve, among other measures, applying a patch to the kernel such as Exec Shield or PaX; closing open network ports; and setting up intrusion detection systems, firewalls and intrusion-prevention systems. There are also hardening scripts and tools like Lynis, Bastille Linux, JASS for Solaris systems and Apache/PHP Hardener that can, for example, deactivate unneeded features in configuration files or perform various other protective measures. Binary hardening Binary hardening is a security technique in which binary files are analyzed and modified to protect against common exploits. Binary hardening is independent of compilers and involves the entire toolchain. For example, one binary hardening technique is to detect potential buffer overflows and to substitute the existing code with safer code. The advantage of manipulating binaries is that vulnerabilities in legacy code can be fixed automatically without the need for source code, which may be unavailable or obfuscated. Secondly, the same techniques can be applied to binaries from multiple compilers, some of which may be less secure than others. Binary hardening often involves the non-deterministic modification of control flow and instruction addresses so as to prevent attackers from successfully reusing program code to perform exploits. Common hardening techniques are: Buffer overflow protection Stack overwriting protection Position independent executables and address space layout randomi
https://en.wikipedia.org/wiki/Second%20Life
Second Life is an online multimedia platform that allows people to create an avatar for themselves and then interact with other users and user-created content within a multi-user online virtual world. Developed and owned by the San Francisco–based firm Linden Lab and launched on June 23, 2003, it saw rapid growth for some years and in 2013 it had approximately one million regular users. Growth eventually stabilized, and by the end of 2017 the active user count had declined to "between 800,000 and 900,000". In many ways, Second Life is similar to massively multiplayer online role-playing games; nevertheless, Linden Lab is emphatic that their creation is not a game: "There is no manufactured conflict, no set objective". The virtual world can be accessed freely via Linden Lab's own client software or via alternative third-party viewers. Second Life users, also called 'residents', create virtual representations of themselves, called avatars, and are able to interact with places, objects and other avatars. They can explore the world (known as the grid), meet other residents, socialize, participate in both individual and group activities, build, create, shop, and trade virtual property and services with one another. The platform principally features 3D-based user-generated content. Second Life also has its own virtual currency, the Linden Dollar (L$), which is exchangeable with real world currency. Second Life is intended for people ages 16 and over, with the exception of 13–15-year-old users, who are restricted to the Second Life region of a sponsoring institution (e.g., a school). History Philip Rosedale formed Linden Lab in 1999 with the intention of developing computer hardware to allow people to become immersed in a virtual world. In its earliest form, the company struggled to produce a commercial version of the hardware, known as "The Rig", which in prototype form was seen as a clunky steel contraption with computer monitors worn on shoulders. That vision chan
https://en.wikipedia.org/wiki/Bimetallic%20strip
A bimetallic strip is used to convert a temperature change into mechanical displacement. The strip consists of two strips of different metals which expand at different rates as they are heated. The different expansions force the flat strip to bend one way if heated, and in the opposite direction if cooled below its initial temperature. The metal with the higher coefficient of thermal expansion is on the outer side of the curve when the strip is heated and on the inner side when cooled. The invention of the bimetallic strip is generally credited to John Harrison, an eighteenth-century clockmaker who made it for his third marine chronometer (H3) of 1759 to compensate for temperature-induced changes in the balance spring. Harrison's invention is recognized in the memorial to him in Westminster Abbey, England. This effect is used in a range of mechanical and electrical devices. Characteristics The strip consists of two strips of different metals which expand at different rates as they are heated, usually steel and copper, or in some cases steel and brass. The strips are joined together throughout their length by riveting, brazing or welding. The different expansions force the flat strip to bend one way if heated, and in the opposite direction if cooled below its initial temperature. The metal with the higher coefficient of thermal expansion is on the outer side of the curve when the strip is heated and on the inner side when cooled. The sideways displacement of the strip is much larger than the small lengthways expansion in either of the two metals. In some applications, the bimetal strip is used in the flat form. In others, it is wrapped into a coil for compactness. The greater length of the coiled version gives improved sensitivity. The radius of curvature of a bimetallic strip depends on temperature according to a formula derived by French physicist Yvon Villarceau in 1863 in his research on improving the precision of clocks; the formula gives the curvature in terms of the expansion coefficients of the two metals, the temperature change, and the total thickness of the strip (a simplified special case is written out below).
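As orientation only, a commonly quoted simplified special case, assuming the two layers have equal thickness and equal Young's modulus (this follows Timoshenko's classical analysis rather than Villarceau's original derivation), reads:

% Simplified bimetallic curvature: alpha_1, alpha_2 are the expansion coefficients,
% Delta T the temperature change from the flat state, h the total strip thickness.
\[
  \kappa = \frac{1}{\rho} = \frac{3\,(\alpha_2 - \alpha_1)\,\Delta T}{2h}.
\]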
https://en.wikipedia.org/wiki/Jean-Christophe%20Yoccoz
Jean-Christophe Yoccoz (29 May 1957 – 3 September 2016) was a French mathematician. He was awarded a Fields Medal in 1994 for his work on dynamical systems. Yoccoz died on 3 September 2016 at the age of 59. Biography Yoccoz attended the Lycée Louis-le-Grand, during which time he was a silver medalist at the 1973 International Mathematical Olympiad and a gold medalist in 1974. He entered the École Normale Supérieure in 1975, and completed an agrégation in mathematics in 1977. After completing military service in Brazil, he completed his PhD under Michael Herman in 1985 at the Centre de mathématiques Laurent-Schwartz, which is a research unit jointly operated by the French National Center for Scientific Research (CNRS) and École Polytechnique. He took up a position at the University of Paris-Sud in 1987, and became a professor at the Collège de France in 1997, where he remained until his death. He was a member of Bourbaki. Yoccoz won the Salem Prize in 1988. He was an invited speaker at the International Congress of Mathematicians in 1990 at Kyoto, and was awarded the Fields Medal at the International Congress of Mathematicians in 1994 in Zürich. He joined the French Academy of Sciences and Brazilian Academy of Sciences in 1994, became a chevalier in the French Legion of Honor in 1995, and was awarded the Grand Cross of the Brazilian National Order of Scientific Merit in 1998. Mathematical work Yoccoz worked on the theory of dynamical systems. His contributions include advances to KAM theory, and the introduction of the method of Yoccoz puzzles, a combinatorial technique which proved useful to the study of Julia sets. Notable publications Yoccoz, J.-C. Conjugaison différentiable des difféomorphismes du cercle dont le nombre de rotation vérifie une condition diophantienne. Ann. Sci. École Norm. Sup. (4) 17 (1984), no. 3, 333–359. doi:10.24033/asens.1475 Yoccoz, Jean-Christophe. Théorème de Siegel, nombres de Bruno et polynômes quadratiques. Petits diviseurs en
https://en.wikipedia.org/wiki/Sexual%20maturity
Sexual maturity is the capability of an organism to reproduce. In humans, it is related to both puberty and adulthood. However, puberty is the process of biological sexual maturation, while the concept of adulthood is generally based on broader cultural definitions. Most multicellular organisms are unable to sexually reproduce at birth (animals) or germination (e.g. plants): depending on the species, it may be days, weeks, or years until they have developed enough to be able to do so. Also, certain cues may trigger an organism to become sexually mature. They may be external, such as drought (certain plants), or internal, such as percentage of body fat (certain animals). (Such internal cues are not to be confused with hormones, which directly produce sexual maturity – the production/release of those hormones is triggered by such cues.) Role of reproductive organs Sexual maturity is brought about by a maturing of the reproductive organs and the production of gametes. It may also be accompanied by a growth spurt or other physical changes which distinguish the immature organism from its adult form. In animals these are termed secondary sex characteristics, and often represent an increase in sexual dimorphism. After sexual maturity is achieved, some organisms become infertile, or even change their sex. Some organisms are hermaphrodites and may or may not be able to "completely" mature and/or to produce viable offspring. Also, while in many organisms sexual maturity is strongly linked to age, many other factors are involved, and it is possible for some to display most or all of the characteristics of the adult form without being sexually mature. Conversely, it is also possible for the "immature" form of an organism to reproduce. This is called progenesis, in which sexual development occurs faster than other physiological development (in contrast, the term neoteny refers to the case in which non-sexual development is slowed – but the result is the same – the retention of juvenile c
https://en.wikipedia.org/wiki/ViewSonic
ViewSonic Corporation is a Taiwanese-American privately held multinational electronics company with headquarters in Brea, California, United States and a research & development center in New Taipei City, Taiwan. The company was founded in 1987 as Keypoint Technology Corporation by James Chu and was renamed to its present name in 1993, after a brand name of monitors launched in 1990. Today, ViewSonic specializes in visual display hardware—including liquid-crystal displays, projectors, and interactive whiteboards—as well as digital whiteboarding software. The company trades in three key markets: education, enterprise, and entertainment. ViewSonic is a nationally certified minority-owned business by the Southern California Minority Supplier Development Council. Company history The company was initially founded as Keypoint Technology Corporation in 1987 by James Chu. In 1990 it launched the ViewSonic line of color computer monitors, and shortly afterward the company renamed itself after its monitor brand. The ViewSonic logo features Gouldian finches, colorful birds native to Australia. In the mid-1990s, ViewSonic rose to become one of the top-rated makers of computer CRT monitors, alongside Sony, NEC, MAG InnoVision, and Panasonic. ViewSonic soon displaced the rest of these companies to emerge as the largest display manufacturer from America/Japan at the turn of the millennium. In 2000, ViewSonic acquired the Nokia Display Products' branded business. In 2002 ViewSonic announced a 3840×2400 WQUXGA, 22.2-inch monitor, the VP2290. In 2005, ViewSonic and Tatung won a British patent lawsuit filed against them by LG Philips in a dispute over which company created technology for rear mounting of LCDs in a mobile PC (U.K. Patent GB2346464B, titled “portable computer”). On July 2, 2007, the company filed with the Securities and Exchange Commission to raise up to $143.8M in an IPO on NASDAQ. On March 5, 2008, the company filed a withdrawal request with the Securities and Exch
https://en.wikipedia.org/wiki/Strong%20topology
In mathematics, a strong topology is a topology which is stronger than some other "default" topology. This term is used to describe different topologies depending on context, and it may refer to: the final topology on the disjoint union, the topology arising from a norm, the strong operator topology, and the strong topology (polar topology), which subsumes all the topologies above. A topology τ is stronger than a topology σ (i.e., is a finer topology) if τ contains all the open sets of σ. In algebraic geometry, it usually means the topology of an algebraic variety as a complex manifold or as a subspace of complex projective space, as opposed to the Zariski topology (which is rarely even a Hausdorff space). See also Weak topology Topology
https://en.wikipedia.org/wiki/Swingometer
The swingometer is a graphics device that shows the effects of the swing from one party to another on British election results programmes. It is used to estimate the number of seats that will be won by different parties, given a particular national swing (in percentage points) in the vote towards or away from a given party, and assuming that that percentage change in the vote will apply in each constituency. The device was invented by Peter Milne, and later refined by David Butler and Robert McKenzie. The first outing on British television was during a regional output from the BBC studios in Bristol during the 1955 general election (the first UK general election to be televised) and was used to show the swing in the two constituencies of Southampton Itchen and Southampton Test. Following this use in 1955, the BBC adopted the swingometer on a national basis and it was unveiled in the national broadcasts for the 1959 general election. This swingometer merely showed the national swing in Britain but not the implications of that swing for the composition of parliament. These issues were not addressed until the 1964 general election. The swingometer for that election showed not only the national swing, but also the implications of that national swing. So for instance, a 3.5% swing to Labour would see Labour form a majority government whilst any swing to the Conservatives would see Sir Alec Douglas-Home reelected as Prime Minister with a huge parliamentary majority. In the end the result was a Labour overall majority of 4, and so when the 1966 general election came around, a new element had to be added (namely the prospect of a hung parliament). At the 1970 general election, the swingometer entered the age of colour television and showed the traditional party colours of red for Labour and blue for Conservative and had to be extended due to the success of the Conservative party at that election. However, following the success of the Liberals in the by-elections held
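The uniform-swing assumption described above is easy to make concrete. The sketch below is a toy illustration with invented constituencies and two-party vote shares: a swing of s points to Labour adds s to Labour's share and subtracts s from the Conservatives' in every seat, after which the seats are re-tallied.

```python
# Toy illustration of the uniform national swing assumption. All
# constituency names and two-party vote shares are invented.

def apply_swing(constituencies, swing):
    """swing > 0 means `swing` percentage points move from Con to Lab
    in every constituency; returns the resulting seat tally."""
    seats = {"Lab": 0, "Con": 0}
    for lab, con in constituencies.values():
        winner = "Lab" if lab + swing > con - swing else "Con"
        seats[winner] += 1
    return seats

constituencies = {
    "Northtown": (48.0, 52.0),   # (Lab %, Con %)
    "Southvale": (44.5, 55.5),
    "Eastport":  (51.0, 49.0),
    "Westfield": (49.5, 50.5),
}

for swing in (0.0, 1.0, 3.5):
    print(f"{swing:+.1f} point swing to Lab -> {apply_swing(constituencies, swing)}")
```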
https://en.wikipedia.org/wiki/Gel%20electrophoresis%20of%20proteins
Protein electrophoresis is a method for analysing the proteins in a fluid or an extract. The electrophoresis may be performed with a small volume of sample in a number of alternative ways with or without a supporting medium, namely agarose or polyacrylamide. Variants of gel electrophoresis include SDS-PAGE, free-flow electrophoresis, electrofocusing, isotachophoresis, affinity electrophoresis, immunoelectrophoresis, counterelectrophoresis, and capillary electrophoresis. Each variant has many subtypes with individual advantages and limitations. Gel electrophoresis is often performed in combination with electroblotting or immunoblotting to give additional information about a specific protein. Denaturing gel methods SDS-PAGE SDS-PAGE, sodium dodecyl sulfate polyacrylamide gel electrophoresis, describes a collection of related techniques to separate proteins according to their electrophoretic mobility (a function of the molecular weight of a polypeptide chain) while in the denatured (unfolded) state. In most proteins, the binding of SDS to the polypeptide chain imparts an even distribution of charge per unit mass, thereby resulting in a fractionation by approximate size during electrophoresis. SDS is a strong detergent agent used to denature native proteins to unfolded, individual polypeptides. When a protein mixture is heated to 100 °C in the presence of SDS, the detergent wraps around the polypeptide backbone. In this process, the intrinsic charges of the polypeptides become negligible when compared to the negative charges contributed by SDS. Thus polypeptides after treatment become rod-like structures possessing a uniform charge density, that is, the same net negative charge per unit length. The electrophoretic mobilities of these proteins will be a linear function of the logarithms of their molecular weights. Native gel methods Native gels, also known as non-denaturing gels, analyze proteins that are still in their folded state. Thus, the electrophoretic mobility depends
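The log-linear mobility relation is what makes molecular-weight estimation from a gel possible: run marker proteins of known weight alongside the sample, fit log(MW) against migration distance, and read off the unknowns. The sketch below is illustrative only; the marker distances and weights are invented.

```python
# Hypothetical SDS-PAGE standard curve: least-squares fit of
# log10(molecular weight) against migration distance, then estimation
# of an unknown band. All numbers below are invented for illustration.
import math

# (migration distance in cm, molecular weight in kDa) for marker proteins
markers = [(1.0, 116.0), (2.1, 66.2), (3.4, 45.0), (5.0, 25.0), (6.3, 14.4)]

# Least-squares fit of log10(MW) = a * distance + b
n = len(markers)
sx = sum(d for d, _ in markers)
sy = sum(math.log10(mw) for _, mw in markers)
sxx = sum(d * d for d, _ in markers)
sxy = sum(d * math.log10(mw) for d, mw in markers)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

unknown_distance = 4.2  # cm, a band between the 45 and 25 kDa markers
print(f"estimated MW: {10 ** (a * unknown_distance + b):.1f} kDa")
```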
https://en.wikipedia.org/wiki/Trend%20Micro
Trend Micro Inc. is a Japanese cyber security software company. The company has globally dispersed R&D in 16 locations across every continent excluding Antarctica. The company develops enterprise security software for servers, containers, and cloud computing environments, networks, and endpoints. Its cloud and virtualization security products provide automated security for customers of VMware, Amazon AWS, Microsoft Azure, and Google Cloud Platform. Eva Chen, one of the company's founders, currently serves as Trend Micro's chief executive officer, a position she has held since 2005. She succeeded founding CEO Steve Chang, who now serves as chairman. History 1988–1999 The company was founded in 1988 in Los Angeles by Steve Chang, his wife, Jenny Chang, and her sister, Eva Chen (陳怡樺). The company was established with proceeds from Steve Chang's previous sale of a copy protection dongle to the United States-based Rainbow Technologies. Shortly after establishing the company, its founders moved headquarters to Taipei. In 1992, Trend Micro took over a Japanese software firm to form Trend Micro Devices and established headquarters in Tokyo. It then made an agreement with CPU maker Intel, under which it produced an anti-virus product for local area networks (LANs) for sale under the Intel brand. Intel paid royalties to Trend Micro for sales of LANDesk Virus Protect in the United States and Europe, while Trend paid royalties to Intel for sales in Asia. In 1993, Novell began bundling the product with its network operating system. In 1996 the two companies agreed to a two-year continuation of the agreement in which Trend was allowed to globally market the ServerProtect product under its own brand alongside Intel's LANDesk brand. Trend Micro was listed on the Tokyo Stock Exchange in 1998 under the ticker 4704. The company began trading on the United States-based NASDAQ stock exchange in July 1999. 2000s In 2004, founding chief executive officer Steve Chang decided to split the responsibilities of CEO an
https://en.wikipedia.org/wiki/ESET
ESET, s.r.o., is a Slovak software company specializing in cybersecurity. ESET's security products are made in Europe and are available in over 200 countries and territories worldwide, and its software is localized into more than 30 languages. The company was founded in 1992 in Bratislava, Slovakia. However, its history dates back to 1987, when two of the company's founders, Miroslav Trnka and Peter Paško, developed their first antivirus program, called NOD. This sparked an idea between the friends to help protect PC users, which soon grew into an antivirus software company. At present, ESET is recognized as Europe's biggest privately held cybersecurity company. History 1987–1992 The product NOD was launched in Czechoslovakia when the country was part of the Soviet Union's sphere of influence. Under the communist regime, private entrepreneurship was banned. It was not until 1992 that Miroslav Trnka and Peter Paško, together with Rudolf Hrubý, established ESET as a privately owned limited liability company in the former Czechoslovakia. In parallel with NOD, the company also started developing Perspekt. 2003–2017 In 2013, ESET launched WeLiveSecurity, a blog site dedicated to a vast spectrum of security-related topics. December 2017 marked the 30th anniversary of the company's first security product. To mark its accomplishments, the company released a short documentary describing the company's evolution from the perspective of founders Miroslav Trnka and Peter Paško. In the same year, the company partnered with Google to integrate its technology into Chrome Cleanup. 2018–present In December 2018, ESET partnered with No More Ransom, a global initiative that provides victims of ransomware with decryption keys, thus removing the pressure to pay attackers. The initiative is supported by Interpol and has been joined by various national police forces. ESET has developed technologies to address the threat of ransomware and has produced papers documenting its evoluti
https://en.wikipedia.org/wiki/Busdma
In computing, busdma, bus_dma and bus_space are a set of application programming interfaces designed to help make device drivers less dependent on platform-specific code, thereby allowing the host operating system to be more easily ported to new computer hardware. This is accomplished by having abstractions for direct memory access (DMA) mapping across popular machine-independent computer buses like PCI, which are used on distinct architectures from IA-32 (NetBSD/i386) to DEC Alpha (NetBSD/alpha). Additionally, some devices may come in multiple flavours supporting more than one bus, e.g., ISA, EISA, VESA Local Bus and PCI, still sharing the same core logic irrespective of the bus, and such device drivers would also benefit from this same abstraction. Thus the rationale of busdma is to facilitate maximum code reuse across a wide range of platforms. Circa 2006, bus and DMA abstractions made it possible for NetBSD to support 50 hardware platforms and 14 CPU architectures out of a single source tree, compared to the forking model used by Linux ports. Originally implemented as the "bus_dma" APIs by the developers of the NetBSD operating system, busdma has been adopted by OpenBSD, FreeBSD and their derivatives; with FreeBSD incorporating it under a busdma umbrella (without an underscore). Both NetBSD and OpenBSD have additional "bus_space" APIs that have been amalgamated into the version of busdma incorporated into FreeBSD. DragonFly BSD developers are also slowly converting their drivers to use busdma. References External links — NetBSD, FreeBSD, OpenBSD and DragonFly BSD Kernel Developer's Manuals FreeBSD busdma and SMPng driver conversion project page Application programming interfaces BSD software NetBSD FreeBSD OpenBSD DragonFly BSD Operating system APIs Operating system technology
https://en.wikipedia.org/wiki/Haversine%20formula
The haversine formula determines the great-circle distance between two points on a sphere given their longitudes and latitudes. Important in navigation, it is a special case of a more general formula in spherical trigonometry, the law of haversines, that relates the sides and angles of spherical triangles. The first table of haversines in English was published by James Andrew in 1805, but Florian Cajori credits an earlier use by José de Mendoza y Ríos in 1801. The term haversine was coined in 1835 by James Inman. These names follow from the fact that they are customarily written in terms of the haversine function, given by hav(θ) = sin²(θ/2). The formulas could equally be written in terms of any multiple of the haversine, such as the older versine function (twice the haversine). Prior to the advent of computers, the elimination of division and multiplication by factors of two proved convenient enough that tables of haversine values and logarithms were included in 19th- and early 20th-century navigation and trigonometric texts. These days, the haversine form is also convenient in that it has no coefficient in front of the sin² function. Formulation Let the central angle θ between any two points on a sphere be: θ = d/r, where: d is the distance between the two points along a great circle of the sphere (see spherical distance), r is the radius of the sphere. The haversine formula allows the haversine of θ (that is, hav(θ)) to be computed directly from the latitude (represented by φ) and longitude (represented by λ) of the two points: hav(θ) = hav(φ₂ − φ₁) + cos(φ₁) cos(φ₂) hav(λ₂ − λ₁), where φ₁, φ₂ are the latitude of point 1 and latitude of point 2, λ₁, λ₂ are the longitude of point 1 and longitude of point 2. Finally, the haversine function hav(θ), applied above to both the central angle θ and the differences in latitude and longitude, is hav(θ) = sin²(θ/2) = (1 − cos θ)/2. The haversine function computes half a versine of the angle θ. To solve for the distance d, apply the archaversine (inverse haversine) to h = hav(θ) or use the arcsine (inverse sine) function: d = r archav(h) = 2r arcsin(√h), or more explicitly: d = 2r arcsin(√(sin²((φ₂ − φ₁)/2) + cos(φ₁) cos(φ₂) sin²((λ₂ − λ₁)/2))). When using these f
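The explicit form above transcribes directly into code. A minimal sketch, using a mean Earth radius as an assumed default:

```python
# Direct transcription of the haversine formula for a sphere of radius r.
from math import radians, sin, cos, asin, sqrt

def haversine_distance(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance between two (lat, lon) points given in
    degrees; r defaults to a mean Earth radius in kilometres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    h = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * r * asin(sqrt(h))

# Example: London (51.5074 N, 0.1278 W) to Paris (48.8566 N, 2.3522 E)
print(f"{haversine_distance(51.5074, -0.1278, 48.8566, 2.3522):.0f} km")  # ~343 km
```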
https://en.wikipedia.org/wiki/Intermodulation
Intermodulation (IM) or intermodulation distortion (IMD) is the amplitude modulation of signals containing two or more different frequencies, caused by nonlinearities or time variance in a system. The intermodulation between frequency components will form additional components at frequencies that are not just at harmonic frequencies (integer multiples) of either, like harmonic distortion, but also at the sum and difference frequencies of the original frequencies and at sums and differences of multiples of those frequencies. Intermodulation is caused by non-linear behaviour of the signal processing (physical equipment or even algorithms) being used. The theoretical outcome of these non-linearities can be calculated by generating a Volterra series of the characteristic, or more approximately by a Taylor series. Practically all audio equipment has some non-linearity, so it will exhibit some amount of IMD, which however may be low enough to be imperceptible by humans. Due to the characteristics of the human auditory system, the same percentage of IMD is perceived as more bothersome when compared to the same amount of harmonic distortion. Intermodulation is also usually undesirable in radio, as it creates unwanted spurious emissions, often in the form of sidebands. For radio transmissions this increases the occupied bandwidth, leading to adjacent channel interference, which can reduce audio clarity or increase spectrum usage. IMD is only distinct from harmonic distortion in that the stimulus signal is different. The same nonlinear system will produce both total harmonic distortion (with a solitary sine wave input) and IMD (with more complex tones). In music, for instance, IMD is intentionally applied to electric guitars using overdriven amplifiers or effects pedals to produce new tones at subharmonics of the tones being played on the instrument. See Power chord#Analysis. IMD is also distinct from intentional modulation (such as a frequency mixer in superheterody
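The spectral picture described above can be reproduced numerically: feed two tones through a memoryless polynomial nonlinearity and inspect the resulting spectrum. The frequencies, polynomial coefficients, and threshold below are arbitrary illustrative choices.

```python
# Numerical illustration: pass two tones through a memoryless polynomial
# nonlinearity and list the strongest spectral components. Frequencies,
# coefficients and the 0.01 threshold are arbitrary illustrative choices.
import numpy as np

fs, n = 8000, 8000                       # 8000 samples at 8 kHz -> 1 Hz bins
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 1200 * t)

y = x + 0.3 * x**2 + 0.2 * x**3          # weakly nonlinear "amplifier"

spectrum = np.abs(np.fft.rfft(y)) / n
freqs = np.fft.rfftfreq(n, 1 / fs)
print(sorted(freqs[spectrum > 0.01]))
# Besides the fundamentals (1000, 1200 Hz) and their harmonics (2000,
# 2400, 3000, 3600 Hz), the output contains intermodulation products
# such as 200 (f2 - f1), 2200 (f1 + f2), 800 (2*f1 - f2) and
# 1400 Hz (2*f2 - f1), plus a DC offset from the even-order term.
```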
https://en.wikipedia.org/wiki/Computational%20irreducibility
Computational irreducibility is one of the main ideas proposed by Stephen Wolfram in his 2002 book A New Kind of Science, although the concept goes back to studies from the 1980s. The idea Many physical systems are complex enough that they cannot be effectively measured. Even simple programs contain a great diversity of behavior. Therefore, no model can predict, using only initial conditions, exactly what will occur in a given physical system before an experiment is conducted. Because of this problem of undecidability in the formal language of computation, Wolfram terms this inability to "shortcut" a system (or "program"), or otherwise describe its behavior in a simple way, "computational irreducibility." The idea demonstrates that there are occurrences where a theory's predictions are effectively impossible. Wolfram states that several phenomena are normally computationally irreducible. Computational irreducibility explains observed limitations of existing mainstream science. In cases of computational irreducibility, only observation and experiment can be used. Implications There is no easy theory for any behavior that seems complex. Complex behavior features can be captured with models that have simple underlying structures. An overall system's behavior based on simple structures can still exhibit behavior indescribable by reasonably "simple" laws. Analysis Navot Israeli and Nigel Goldenfeld found that some less complex systems behaved simply and predictably (thus, they allowed approximations). However, more complex systems were still computationally irreducible and unpredictable. It is unknown what conditions would allow complex phenomena to be described simply and predictably. Compatibilism Marius Krumm and Markus P. Müller tie computational irreducibility to compatibilism. They refine concepts via the intermediate requirement of a new concept called computational sourcehood that demands essentially full and almost-exact representation of features associat
https://en.wikipedia.org/wiki/Kirchhoff%27s%20circuit%20laws
Kirchhoff's circuit laws are two equalities that deal with the current and potential difference (commonly known as voltage) in the lumped element model of electrical circuits. They were first described in 1845 by German physicist Gustav Kirchhoff. This generalized the work of Georg Ohm and preceded the work of James Clerk Maxwell. Widely used in electrical engineering, they are also called Kirchhoff's rules or simply Kirchhoff's laws. These laws can be applied in time and frequency domains and form the basis for network analysis. Both of Kirchhoff's laws can be understood as corollaries of Maxwell's equations in the low-frequency limit. They are accurate for DC circuits, and for AC circuits at frequencies where the wavelengths of electromagnetic radiation are very large compared to the circuits. Kirchhoff's current law This law, also called Kirchhoff's first law, or Kirchhoff's junction rule, states that, for any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node; or equivalently: The algebraic sum of currents in a network of conductors meeting at a point is zero. Recalling that current is a signed (positive or negative) quantity reflecting direction towards or away from a node, this principle can be succinctly stated as: i₁ + i₂ + ⋯ + iₙ = 0, where n is the total number of branches with currents flowing towards or away from the node. Kirchhoff's circuit laws were originally obtained from experimental results. However, the current law can be viewed as an extension of the conservation of charge, since charge is the product of current and the time the current has been flowing. If the net charge in a region is constant, the current law will hold on the boundaries of the region. This means that the current law relies on the fact that the net charge in the wires and components is constant. Uses A matrix version of Kirchhoff's current law is the basis of most circuit simulation software, such
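Writing the current law at every node of a linear resistive circuit yields a system of linear equations, which is the matrix form mentioned above. A minimal sketch, for an invented two-node circuit:

```python
# Minimal nodal-analysis sketch: Kirchhoff's current law written as
# G v = i for an invented two-node resistive circuit.
# Topology: R1 between node 1 and node 2, R3 from node 1 to ground,
# R2 from node 2 to ground, and a 1 mA current source into node 1.
import numpy as np

R1, R2, R3 = 1e3, 2e3, 1e3          # resistances in ohms
G = np.array([
    [1/R1 + 1/R3, -1/R1       ],    # KCL at node 1
    [-1/R1,        1/R1 + 1/R2],    # KCL at node 2
])
i = np.array([1e-3, 0.0])           # injected currents in amperes

v = np.linalg.solve(G, i)
print(f"node voltages: v1 = {v[0]:.3f} V, v2 = {v[1]:.3f} V")
# -> v1 = 0.750 V, v2 = 0.500 V
```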
https://en.wikipedia.org/wiki/Automatic%20transmission%20system
An automatic transmission system (ATS) is an automated system designed to keep a broadcast radio or television station's transmitter and antenna system running without direct human oversight or attention for long periods. Such systems are occasionally referred to as automated transmission systems to avoid confusion with the automatic transmission of an automobile. History Traditionally, radio and television stations were required to have a licensed operator, technician or electrical engineer available to tend to a transmitter at all times it was operating or capable of operating. Any condition (such as distorted or off-frequency transmission) that could interfere with other broadcast services would require immediate manual intervention to correct the fault or take the transmitter off the air. Facilities also had to be monitored for any fault conditions which could impair the transmitted signal or cause damage to the transmitting equipment. Because broadcast transmitters were often at a different location from the broadcast studios, attended operation required an operator to be physically located at the transmitter site. In the 1950s and 1960s, remote control systems were introduced to allow an operator at the studio to power the transmitter on or off. At the same time, an early remote control system, the Automon, was developed by RCA engineers in Montréal; it included a relay system that automatically detected if the transmitter was operating outside of its allowed parameters. The Automon could send the studio an alarm if the transmitter was out of tolerance and, if contact to the studio was lost, it could automatically power down the transmitter. A similar system was developed in 1953 by Paul Schafer in California, using a rotary telephone to raise or lower transmitter parameters remotely. As technology improved, transmitters became more reliable, and electromechanical means of checking and later correcting problems became commonplace. Regulations eventually ca
https://en.wikipedia.org/wiki/Telebit
Telebit Corporation was a US-based modem manufacturer, known for its TrailBlazer series of high-speed modems. One of the first modems to routinely exceed 9600 bit/s speeds, the TrailBlazer used a proprietary modulation scheme that proved highly resilient to interference, earning the product an almost legendary reputation for reliability despite mediocre (or worse) line quality. They were particularly common in Unix installations in the 1980s and 1990s. The high price of the Telebit modems was initially not a concern, as their performance was equally far ahead of competing systems. However, as new designs using V.32 and V.32bis began to arrive in the early 1990s, Telebit's price/performance ratio was seriously eroded. A series of new designs followed, but these never regained their performance lead. By the mid-1990s the company had been part of a series of mergers and eventually disappeared in 1998 after being acquired by Digi International. Startup Telebit was founded by Paul Baran, one of the inventors of the packet switching networking concept. Baran had recently started a networking company known as Packet Technologies on Bubb Road in Cupertino, California, which was working on systems for interactive television. While working there, he hit on the idea for a new way to implement high-speed modems, and started Telebit across the street. Packet Technologies was a major beta customer for Telebit in late 1985. Packet Technologies later failed, and several of their employees were folded into Telebit, while most of the others formed StrataCom, makers of the first Asynchronous Transfer Mode (ATM) switches. PEP and the TrailBlazer In contrast to then-existing ITU Telecommunication Standardization Sector (ITU-T) V-series protocols, such as the common 2400 bit/s V.22bis, the TrailBlazers used a proprietary modulation system known as Packetized Ensemble Protocol (PEP), based on orthogonal frequency-division multiplexing (OFDM). It employed a large number (initially u
https://en.wikipedia.org/wiki/Trigonometric%20polynomial
In the mathematical subfields of numerical analysis and mathematical analysis, a trigonometric polynomial is a finite linear combination of functions sin(nx) and cos(nx) with n taking on the values of one or more natural numbers. The coefficients may be taken as real numbers, for real-valued functions. For complex coefficients, there is no difference between such a function and a finite Fourier series. Trigonometric polynomials are widely used, for example in trigonometric interpolation applied to the interpolation of periodic functions. They are used also in the discrete Fourier transform. The term trigonometric polynomial for the real-valued case can be seen as using the analogy: the functions sin(nx) and cos(nx) are similar to the monomial basis for polynomials. In the complex case the trigonometric polynomials are spanned by the positive and negative powers of e^{ix}, that is, Laurent polynomials in z under the change of variables z = e^{ix}. Formal definition Any function T of the form T(x) = a_0 + Σ_{n=1}^{N} (a_n cos(nx) + b_n sin(nx)), with coefficients a_n, b_n ∈ ℂ for 1 ≤ n ≤ N, is called a complex trigonometric polynomial of degree N. Using Euler's formula the polynomial can be rewritten as T(x) = Σ_{n=−N}^{N} c_n e^{inx}. Analogously, letting a_n, b_n ∈ ℝ and a_N ≠ 0 or b_N ≠ 0, then t(x) = a_0 + Σ_{n=1}^{N} (a_n cos(nx) + b_n sin(nx)) is called a real trigonometric polynomial of degree N. Properties A trigonometric polynomial can be considered a periodic function on the real line, with period some divisor of 2π, or as a function on the unit circle. A basic result is that the trigonometric polynomials are dense in the space of continuous functions on the unit circle, with the uniform norm; this is a special case of the Stone–Weierstrass theorem. More concretely, for every continuous function f and every ε > 0, there exists a trigonometric polynomial T such that |f(z) − T(z)| < ε for all z. Fejér's theorem states that the arithmetic means of the partial sums of the Fourier series of f converge uniformly to f, provided f is continuous on the circle, thus giving an explicit way to find an approximating trigonometric polynomial T. A trigonometric polynomial o
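To make the density statement concrete, the sketch below evaluates partial Fourier sums of the continuous periodic function f(x) = |x| on [−π, π]; its classical cosine coefficients (π/2 for the constant term, −4/(πn²) for odd n, zero otherwise) are standard, and the uniform error shrinks as the degree grows.

```python
# Sketch: trigonometric polynomials approximating a continuous periodic
# function, here f(x) = |x| on [-pi, pi]. Classical Fourier series:
# |x| ~ pi/2 - (4/pi) * sum over odd n of cos(n x) / n^2.
import numpy as np

def partial_sum(x, degree):
    """Degree-N trigonometric polynomial: partial Fourier sum of |x|."""
    s = np.full_like(x, np.pi / 2)
    for n in range(1, degree + 1, 2):          # odd n only
        s -= (4 / np.pi) * np.cos(n * x) / n**2
    return s

x = np.linspace(-np.pi, np.pi, 2001)
for degree in (1, 3, 9, 27):
    err = np.max(np.abs(partial_sum(x, degree) - np.abs(x)))
    print(f"degree {degree:2d}: max error {err:.4f}")
# The uniform error decreases with the degree, illustrating density of
# trigonometric polynomials among continuous functions on the circle.
```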
https://en.wikipedia.org/wiki/Szemer%C3%A9di%27s%20theorem
In arithmetic combinatorics, Szemerédi's theorem is a result concerning arithmetic progressions in subsets of the integers. In 1936, Erdős and Turán conjectured that every set of integers A with positive natural density contains a k-term arithmetic progression for every k. Endre Szemerédi proved the conjecture in 1975. Statement A subset A of the natural numbers is said to have positive upper density if limsup_{n→∞} |A ∩ {1, 2, ..., n}| / n > 0. Szemerédi's theorem asserts that a subset of the natural numbers with positive upper density contains infinitely many arithmetic progressions of length k for all positive integers k. An often-used equivalent finitary version of the theorem states that for every positive integer k and real number δ > 0, there exists a positive integer N = N(k, δ) such that every subset of {1, 2, ..., N} of size at least δN contains an arithmetic progression of length k. Another formulation uses the function rk(N), the size of the largest subset of {1, 2, ..., N} without an arithmetic progression of length k. Szemerédi's theorem is equivalent to the asymptotic bound rk(N) = o(N). That is, rk(N) grows less than linearly with N. History Van der Waerden's theorem, a precursor of Szemerédi's theorem, was proven in 1927. The cases k = 1 and k = 2 of Szemerédi's theorem are trivial. The case k = 3, known as Roth's theorem, was established in 1953 by Klaus Roth via an adaptation of the Hardy–Littlewood circle method. Endre Szemerédi proved the case k = 4 through combinatorics. Using an approach similar to the one he used for the case k = 3, Roth gave a second proof for this in 1972. The general case was settled in 1975, also by Szemerédi, who developed an ingenious and complicated extension of his previous combinatorial argument for k = 4 (called "a masterpiece of combinatorial reasoning" by Erdős). Several other proofs are now known, the most important being those by Hillel Furstenberg in 1977, using ergodic theory, and by Timothy Gowers in 2001, using both Fourier analysis and combinatorics. Terence Tao has
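For very small N the quantity rk(N) can be computed by brute force, which makes the finitary statement tangible. The exhaustive search below is purely illustrative and exponential in N.

```python
# Brute-force computation of r_k(N) for tiny N: the size of the largest
# subset of {1, ..., N} containing no k-term arithmetic progression.
# Purely illustrative; the search is exponential in N.
from itertools import combinations

def has_ap(subset, k):
    """True if `subset` contains a k-term arithmetic progression."""
    s = set(subset)
    for a in subset:
        for d in range(1, max(s)):
            if all(a + i * d in s for i in range(k)):
                return True
    return False

def r_k(N, k):
    universe = range(1, N + 1)
    for size in range(N, 0, -1):
        for cand in combinations(universe, size):
            if not has_ap(cand, k):
                return size
    return 0

for N in range(1, 11):
    print(f"r_3({N}) = {r_k(N, 3)}")
# Known small values: r_3(N) = 1, 2, 2, 3, 4, 4, 4, 4, 5, 5 for N = 1..10.
```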
https://en.wikipedia.org/wiki/Index%20of%20information%20theory%20articles
This is a list of information theory topics. A Mathematical Theory of Communication algorithmic information theory arithmetic coding channel capacity Communication Theory of Secrecy Systems conditional entropy conditional quantum entropy confusion and diffusion cross-entropy data compression entropic uncertainty (Hirschman uncertainty) entropy encoding entropy (information theory) Fisher information Hick's law Huffman coding information bottleneck method information theoretic security information theory joint entropy Kullback–Leibler divergence lossless compression negentropy noisy-channel coding theorem (Shannon's theorem) principle of maximum entropy quantum information science range encoding redundancy (information theory) Rényi entropy self-information Shannon–Hartley theorem Information theory Information theory topics
https://en.wikipedia.org/wiki/Local%20multipoint%20distribution%20service
Local multipoint distribution service (LMDS) is a broadband wireless access technology originally designed for digital television transmission (DTV). It was conceived as a fixed wireless, point-to-multipoint technology for utilization in the last mile. LMDS commonly operates on microwave frequencies across the 26 GHz and 29 GHz bands. In the United States, frequencies from 31.0 through 31.3 GHz are also considered LMDS frequencies. Throughput capacity and reliable distance of the link depend on common radio link constraints and the modulation method used, either phase-shift keying or amplitude modulation. Distance is typically limited to about due to rain fade attenuation constraints. Deployment links of up to from the base station are possible in some circumstances such as in point-to-point systems that can reach slightly farther distances due to increased antenna gain. History and outlook United States There was interest in LMDS in the late 1990s and it became known in some circles as "wireless cable" for its potential to compete with cable companies for provision of broadband television to the home. The Federal Communications Commission auctioned spectrum for LMDS in 1998 and 1999. Despite its early potential and the hype that surrounded the technology, LMDS was slow to find commercial traction. Many equipment and technology vendors simply abandoned their LMDS product portfolios. Industry observers believe that the window for LMDS has closed with newer technologies replacing it. Major telecommunications companies have been aggressive about deploying alternative technologies such as IPTV and fiber to the premises, also called "fiber optics". Moreover, LMDS has been surpassed in both technological and commercial potential by the LTE, WiMax and 5G NR standards. Europe and worldwide Although some operators use LMDS to provide access services, LMDS is more commonly used for high-capacity backhaul for interconnection of networks such as GSM, UMTS, LTE and
https://en.wikipedia.org/wiki/Dependency%20ratio
The dependency ratio is an age-population ratio of those typically not in the labor force (the dependent part, ages 0 to 14 and 65+) and those typically in the labor force (the productive part, ages 15 to 64). It is used to measure the pressure on the productive population. Consideration of the dependency ratio is essential for governments, economists, bankers, business, industry, universities and all other major economic segments which can benefit from understanding the impacts of changes in population structure. A low dependency ratio means that there are sufficient people working who can support the dependent population. A lower ratio could allow for better pensions and better health care for citizens. A higher ratio indicates more financial stress on working people and possible political instability. While the strategies of increasing fertility and of allowing immigration, especially of younger working-age people, have been formulas for lowering dependency ratios, future job reductions through automation may impact the effectiveness of those strategies. Formula In published international statistics, the dependent part usually includes those under the age of 15 and over the age of 64. The productive part makes up the population in between, ages 15–64. It is normally expressed as a percentage: (total) dependency ratio = ((population aged 0–14 + population aged 65 and over) / population aged 15–64) × 100. As the ratio increases there may be an increased burden on the productive part of the population to maintain the upbringing and pensions of the economically dependent. This results in direct impacts on financial expenditures on things like social security, as well as many indirect consequences. The (total) dependency ratio can be decomposed into the child dependency ratio, (population aged 0–14 / population aged 15–64) × 100, and the aged dependency ratio, (population aged 65 and over / population aged 15–64) × 100. Total dependency ratio by regions Projections Below is a table constructed from data provided by the UN Population Division. It shows a historical ratio for the regions shown for the period 1950–2010. Columns to the right show projections of the ratio. Each number
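The formulas above transcribe directly into a short calculation; the population figures below are invented for illustration.

```python
# Direct transcription of the dependency ratios defined above, using
# invented population figures (in thousands) for illustration.

def dependency_ratios(pop_0_14, pop_15_64, pop_65_plus):
    child = 100 * pop_0_14 / pop_15_64
    aged = 100 * pop_65_plus / pop_15_64
    return {"child": child, "aged": aged, "total": child + aged}

ratios = dependency_ratios(pop_0_14=12_000, pop_15_64=40_000, pop_65_plus=8_000)
for name, value in ratios.items():
    print(f"{name} dependency ratio: {value:.1f}%")
# total = child + aged = 30.0% + 20.0% = 50.0%
```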
https://en.wikipedia.org/wiki/Cryptographic%20protocol
A cryptographic protocol is an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives. A protocol describes how the algorithms should be used and includes details about data structures and representations, at which point it can be used to implement multiple, interoperable versions of a program. Cryptographic protocols are widely used for secure application-level data transport. A cryptographic protocol usually incorporates at least some of these aspects: Key agreement or establishment Entity authentication Symmetric encryption and message authentication material construction Secured application-level data transport Non-repudiation methods Secret sharing methods Secure multi-party computation For example, Transport Layer Security (TLS) is a cryptographic protocol that is used to secure web (HTTPS) connections. It has an entity authentication mechanism, based on the X.509 system; a key setup phase, where a symmetric encryption key is formed by employing public-key cryptography; and an application-level data transport function. These three aspects have important interconnections. Standard TLS does not have non-repudiation support. There are other types of cryptographic protocols as well, and even the term itself has various readings. Cryptographic application protocols often use one or more underlying key agreement methods, which are also sometimes themselves referred to as "cryptographic protocols". For instance, TLS employs what is known as the Diffie–Hellman key exchange; although this is only a part of TLS per se, Diffie–Hellman may be seen as a complete cryptographic protocol in itself for other applications. Advanced cryptographic protocols A wide variety of cryptographic protocols go beyond the traditional goals of data confidentiality, integrity, and authentication to also secure a variety of other desired characteristics of computer-mediated co
https://en.wikipedia.org/wiki/Code%20folding
Code or text folding, or less commonly holophrasting, is a feature of some graphical user interfaces that allows the user to selectively hide ("fold") or display ("unfold") parts of a document. This allows the user to manage large amounts of text while viewing only those subsections that are currently of interest. It is typically used with documents which have a natural tree structure consisting of nested elements. Other names for these features include expand and collapse, code hiding, and outlining. In Microsoft Word, the feature is called "collapsible outlining". Many user interfaces provide disclosure widgets for code folding in a sidebar, indicated for example by a triangle that points sideways (if collapsed) or down (if expanded), or by a [-] box for collapsible (expanded) text, and a [+] box for expandable (collapsed) text. Code folding is found in text editors, source code editors, and IDEs. The folding structure typically follows the syntax tree of the program defined by the computer language. It may also be defined by levels of indentation, or be specified explicitly using an in-band marker (saved as part of the source code) or out-of-band. Text folding is a similar feature used on ordinary text, where the nested elements consist of paragraphs, sections, or outline levels. Programs offering this include folding editors, outliners, and some word processors. Data folding is found in some hex editors and is used to structure a binary file or hide inaccessible data sections. Folding is also frequently used in data comparison, to select one version or another, or only the differences. History The earliest known example of code folding in an editor is in NLS. Probably the first widely available folding editor was the 1974 Structured Programming Facility (SPF) editor for IBM 370 mainframes, which could hide lines based on their indentation. It displayed on character-mapped 3270 terminals. It was very useful for prolix languages like COBOL. It evolved into
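One concrete form of the in-band markers mentioned above is shown below: a sketch of a Python file annotated with fold markers that, for example, Vim's marker fold method (:set foldmethod=marker) recognizes by default. The code itself is an invented illustration.

```python
# Illustrative Python source annotated with in-band fold markers.
# Editors that fold on explicit markers saved in the file (e.g. Vim
# with ":set foldmethod=marker", whose default markers are {{{ and }}})
# can collapse each fenced region to a single line.

# {{{ configuration
DEFAULTS = {
    "verbose": False,
    "retries": 3,
}
# }}}

# {{{ helpers
def merge(base, override):
    """Return base updated with override, leaving both inputs intact."""
    merged = dict(base)
    merged.update(override)
    return merged
# }}}

print(merge(DEFAULTS, {"verbose": True}))
```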
https://en.wikipedia.org/wiki/Even%20and%20odd%20functions
In mathematics, even functions and odd functions are functions which satisfy particular symmetry relations, with respect to taking additive inverses. They are important in many areas of mathematical analysis, especially the theory of power series and Fourier series. They are named for the parity of the powers of the power functions which satisfy each condition: the function f(x) = x^n is an even function if n is an even integer, and it is an odd function if n is an odd integer. Definition and examples Evenness and oddness are generally considered for real functions, that is real-valued functions of a real variable. However, the concepts may be more generally defined for functions whose domain and codomain both have a notion of additive inverse. This includes abelian groups, all rings, all fields, and all vector spaces. Thus, for example, a real function could be odd or even (or neither), as could a complex-valued function of a vector variable, and so on. The given examples are real functions, to illustrate the symmetry of their graphs. Even functions Let f be a real-valued function of a real variable. Then f is even if the following equation holds for all x such that x and −x are in the domain of f: f(−x) = f(x), or equivalently if the following equation holds for all such x: f(x) − f(−x) = 0. Geometrically, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis. Examples of even functions are: The absolute value cosine hyperbolic cosine Gaussian function Odd functions Again, let f be a real-valued function of a real variable. Then f is odd if the following equation holds for all x such that x and −x are in the domain of f: f(−x) = −f(x), or equivalently if the following equation holds for all such x: f(x) + f(−x) = 0. Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. Examples of odd functions are: T
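A quick numerical check of the defining equations, together with the standard decomposition of an arbitrary function into even and odd parts, f(x) = (f(x) + f(−x))/2 + (f(x) − f(−x))/2:

```python
# Numerical check of the defining equations, plus the standard
# decomposition of an arbitrary function into even and odd parts:
# f(x) = (f(x) + f(-x))/2  +  (f(x) - f(-x))/2.
import math

def is_even(f, xs, tol=1e-12):
    return all(abs(f(-x) - f(x)) < tol for x in xs)

def is_odd(f, xs, tol=1e-12):
    return all(abs(f(-x) + f(x)) < tol for x in xs)

xs = [0.1 * k for k in range(1, 50)]
print(is_even(abs, xs), is_even(math.cos, xs))            # True True
print(is_odd(math.sin, xs), is_odd(lambda x: x**3, xs))   # True True

def even_part(x):
    return (math.exp(x) + math.exp(-x)) / 2   # = cosh(x)

def odd_part(x):
    return (math.exp(x) - math.exp(-x)) / 2   # = sinh(x)

# exp is neither even nor odd, but splits into an even and an odd part.
print(is_even(even_part, xs), is_odd(odd_part, xs))       # True True
```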
https://en.wikipedia.org/wiki/Altered%20Beast
Altered Beast is a 1988 beat 'em up arcade video game developed and manufactured by Sega. The game is set in Ancient Greece and follows a player character chosen by Zeus to rescue his daughter Athena from the demonic ruler of the underworld, Neff. Through the use of power-ups, the player character can assume the form of different magical beasts (wolf, dragon, bear, tiger, and golden wolf). It was ported to several home video game consoles and home computers. It was the pack-in game for the Mega Drive when that system launched in 1988. The game was developed by Makoto Uchida, who made it his first project as a lead developer. Uchida and his team used the System 16 arcade system board for its graphical capabilities with sprites. Altered Beast was ported numerous times in addition to its Genesis conversion, including for the Master System by Sega and to several computer systems and video game consoles by various third parties. Altered Beast's arcade release and its various ports have all received mixed reviews, mainly targeting the game's gameplay and graphics. The game has been re-released several times for various consoles and as part of video game compilations, and two sequels to the game have been developed. Gameplay Altered Beast is a side scrolling beat 'em up game with light platform elements. It has five levels and can be played by up to two players simultaneously. Combat takes place across five levels set in Ancient Greece and populated by aggressive undead creatures and monsters resembling those from Greek mythology. The demonic god Neff waits at the end of each level. Between each level are small animations giving the player glimpses of Athena's peril. Players can punch, kick and jump. The game's premise is that Neff, ruler of the underworld, captures the goddess Athena. Angry, her father, the Olympian god Zeus, decides to choose a champion to save her. Respecting the bravery of Roman Centurions, Zeus resurrects one of them and empowers him
https://en.wikipedia.org/wiki/Declassification
Declassification is the process of ceasing a protective classification, often under the principle of freedom of information. Procedures for declassification vary by country. Papers may be withheld without being classified as secret, and eventually made available. United Kingdom Classified information has been governed by various Official Secrets Acts, the latest being the Official Secrets Act 1989. Until 1989 requested information was routinely kept secret invoking the public interest defence; this was largely removed by the 1989 Act. The Freedom of Information Act 2000 largely requires information to be disclosed unless there are good reasons for secrecy. Confidential government papers such as the yearly cabinet papers used routinely to be withheld formally, although not necessarily classified as secret, for 30 years under the thirty year rule, and released usually on a New Year's Day; freedom of information legislation has relaxed this rigid approach. United States Executive Order 13526 establishes the mechanisms for most declassifications, within the laws passed by Congress. The originating agency assigns a declassification date, by default 25 years. After 25 years, declassification review is automatic with nine narrow exceptions that allow information to remain as classified. At 50 years, there are two exceptions, and classifications beyond 75 years require special permission. Because of changes in policy and circumstances, agencies are expected to actively review documents that have been classified for fewer than 25 years. They must also respond to Mandatory Declassification Review and Freedom of Information Act requests. The National Archives and Records Administration houses the National Declassification Center to coordinate reviews and Information Security Oversight Office to promulgate rules and enforce quality measures across all agencies. NARA reviews documents on behalf of defunct agencies and permanently stores declassified documents for public i
https://en.wikipedia.org/wiki/Padding%20%28cryptography%29
In cryptography, padding is any of a number of distinct practices which all include adding data to the beginning, middle, or end of a message prior to encryption. In classical cryptography, padding may include adding nonsense phrases to a message to obscure the fact that many messages end in predictable ways, e.g. sincerely yours. Classical cryptography Official messages often start and end in predictable ways: My dear ambassador, Weather report, Sincerely yours, etc. The primary use of padding with classical ciphers is to prevent the cryptanalyst from using that predictability to find known plaintext that aids in breaking the encryption. Random length padding also prevents an attacker from knowing the exact length of the plaintext message. A famous example of classical padding which caused a great misunderstanding is "the world wonders" incident, which nearly caused an Allied loss at the WWII Battle off Samar, part of the larger Battle of Leyte Gulf. In that example, Admiral Chester Nimitz, the Commander in Chief, U.S. Pacific Fleet in World War II, sent the following message to Admiral Bull Halsey, commander of Task Force Thirty Four (the main Allied fleet) at the Battle of Leyte Gulf, on October 25, 1944: "Where is, repeat, where is Task Force Thirty Four?" With padding (the leading phrase "Turkey trots to water" and the trailing phrase "the world wonders") and metadata added, the message became: "TURKEY TROTS TO WATER GG FROM CINCPAC ACTION COM THIRD FLEET INFO COMINCH CTF SEVENTY-SEVEN X WHERE IS RPT WHERE IS TASK FORCE THIRTY FOUR RR THE WORLD WONDERS". Halsey's radio operator mistook some of the padding for the message and so Admiral Halsey ended up reading the following message: "Where is, repeat, where is Task Force Thirty Four? The world wonders." Admiral Halsey interpreted the padding phrase "the world wonders" as a sarcastic reprimand, which caused him to have an emotional outburst and then lock himself in his bridge and sulk for an hour before he moved his forces to assist at the Battle off Samar. Halsey's radio operator should have been tipped off by the letters RR that "the world wonders" was padding; all other radio operators who received Admiral Nimitz's message correctly removed both padding phrases. Many classical ciphers arrange the plaintext into particular patterns (e.g., squares, rectangles, etc.)
https://en.wikipedia.org/wiki/The%20world%20wonders
"The world wonders" is a phrase which rose to notoriety following its use during World War II when it appeared as part of a decoded message sent by Fleet Admiral Chester Nimitz, Commander in Chief, U.S. Pacific Fleet, to Admiral William Halsey Jr. at the height of the Battle of Leyte Gulf on October 25, 1944. The words, intended to be without meaning, were added as security padding in an encrypted message to hinder Japanese attempts at cryptanalysis, but were mistakenly included in the decoded text given to Halsey. Halsey interpreted the phrase as a harsh and sarcastic rebuke, and as a consequence dropped his futile pursuit of a decoy Japanese carrier task force, and, belatedly, reversed some of his ships in a fruitless effort to aid United States forces in the Battle off Samar. Encryption strategy Encryption cyphers can be defeated when easily guessed common patterns are recognized in the messages. Messages typically have common intros and salutations such as "Dear" and "Sincerely" which can lead to the defeat of the cypher. To remove common phrases from the start and the end, in World War II the US Navy would add unique non-relevant padding phrases separated from the main text by a word of two characters. The padding would be added before encoding and stripped after decoding. For example, a simple message such as "Halsey: Come home. - CINCPAC" might become "Road less taken nn Halsey: Come home. - CINCPAC rr bacon and eggs" during encrypted transmission. Background On October 20, 1944, United States troops invaded the island of Leyte as part of a strategy aimed at isolating Japan from the resource-rich territory it had occupied in South East Asia, and in particular depriving its forces and industry of vital oil supplies. The Imperial Japanese Navy (IJN) mobilized nearly all of its remaining major naval vessels in an attempt to defeat the Allied invasion. In the ensuing Battle of Leyte Gulf the Japanese intended to use ships commanded by Vice-Admiral Jisaburō
https://en.wikipedia.org/wiki/Network%20security
Network security consists of the policies, processes and practices adopted to prevent, detect and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. Network security involves the authorization of access to data in a network, which is controlled by the network administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses, government agencies and individuals. Networks can be private, such as within a company, and others which might be open to public access. Network security is involved in organizations, enterprises, and other types of institutions. It does what its title explains: it secures the network, and it protects and oversees the operations being carried out on it. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password. Network security concept Network security starts with authentication, commonly with a username and a password. Since this requires just one detail authenticating the user name—i.e., the password—this is sometimes termed one-factor authentication. With two-factor authentication, something the user 'has' is also used (e.g., a security token or 'dongle', an ATM card, or a mobile phone); and with three-factor authentication, something the user 'is' is also used (e.g., a fingerprint or retinal scan). Once authenticated, a firewall enforces access policies such as what services are allowed to be accessed by the network users. Though effective in preventing unauthorized access, this component may fail to check potentially harmful content such as computer worms or Trojans being transmitted over the network. Anti-virus software or an intrusion prevent
https://en.wikipedia.org/wiki/Gate%20array
A gate array is an approach to the design and manufacture of application-specific integrated circuits (ASICs) using a prefabricated chip with components that are later interconnected into logic devices (e.g. NAND gates, flip-flops, etc.) according to custom order by adding metal interconnect layers in the factory. It was popular during the upheaval in the semiconductor industry in the 1980s, and its usage declined by the end of the 1990s. Similar technologies have also been employed to design and manufacture analog, analog-digital, and structured arrays, but, in general, these are not called gate arrays. Gate arrays have also been known as uncommitted logic arrays (ULAs), which also offered linear circuit functions, and semi-custom chips. History Development Gate arrays had several concurrent development paths. Ferranti in the UK pioneered commercializing bipolar ULA technology, offering circuits of "100 to 10,000 gates and above" by 1983. The company's early lead in semi-custom chips, with the initial application of a ULA integrated circuit involving a camera from Rollei in 1972, expanding to "practically all European camera manufacturers" as users of the technology, led to the company's dominance in this particular market throughout the 1970s. However, by 1982, as many as 30 companies had started to compete with Ferranti, reducing the company's market share to around 30 percent. Ferranti's "major competitors" were other British companies such as Marconi and Plessey, both of which had licensed technology from another British company, Micro Circuit Engineering. A contemporary initiative, UK5000, also sought to produce a CMOS gate array with "5,000 usable gates", with involvement from British Telecom and a number of other major British technology companies. IBM developed proprietary bipolar master slices that it used in mainframe manufacturing in the late 1970s and early 1980s, but never commercialized them externally. Fairchild Semiconductor also flirted brief
https://en.wikipedia.org/wiki/Secure%20channel
In cryptography, a secure channel is a means of data transmission that is resistant to overhearing and tampering. A confidential channel is a means of data transmission that is resistant to overhearing, or eavesdropping (e.g., reading the content), but not necessarily resistant to tampering (i.e., manipulating the content). An authentic channel is a means of data transmission that is resistant to tampering but not necessarily resistant to overhearing. In contrast to a secure channel, an insecure channel is unencrypted and may be subject to eavesdropping and tampering. Secure communications are possible over an insecure channel if the content to be communicated is encrypted prior to transmission. Secure channels in the real world There are no perfectly secure channels in the real world. There are, at best, only ways to make insecure channels (e.g., couriers, homing pigeons, diplomatic bags, etc.) less insecure: padlocks (between courier wrists and a briefcase), loyalty tests, security investigations, and guns for courier personnel, diplomatic immunity for diplomatic bags, and so forth. In 1976, two researchers proposed a key exchange technique (now named after them)—Diffie–Hellman key exchange (D-H). This protocol allows two parties to generate a key only known to them, under the assumption that a certain mathematical problem (e.g., the Diffie–Hellman problem in their proposal) is computationally infeasible (i.e., very, very hard) to solve, and that the two parties have access to an authentic channel. In short, that an eavesdropper—conventionally termed 'Eve', who can listen to all messages exchanged by the two parties, but who cannot modify the messages—will not learn the exchanged key. Such a key exchange was impossible with any previously known cryptographic schemes based on symmetric ciphers, because with these schemes it is necessary that the two parties exchange a secret key at some prior time, hence they require a confidential channel at that time which is
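A toy transcription of the exchange just described, using textbook-sized numbers (p = 23, g = 5) that are trivially breakable and serve only to show the mechanics:

```python
# Toy Diffie-Hellman exchange over an authentic but public channel.
# The textbook-sized parameters (p = 23, g = 5) are trivially breakable;
# real deployments use carefully chosen groups of 2048 bits or more.
import secrets

p, g = 23, 5                       # public parameters: prime modulus, generator

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)                   # sent over the channel; Eve sees this
B = pow(g, b, p)                   # sent over the channel; Eve sees this

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # both ends derive the same secret
print("shared secret:", shared_alice)
```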
https://en.wikipedia.org/wiki/FTP%20server
An FTP server is computer software consisting of one or more programs that can execute commands given by remote clients, such as receiving, sending, and deleting files, and creating or removing directories. The software may run as a component of a larger program, as a standalone program, or as one or more background processes. An FTP server plays the role of server in a client–server model using the FTP, FTPS, and/or SFTP network protocols. The term can also refer to a computer that runs FTP server software to host collections of files. Large FTP sites may be run on many computers so that the desired maximum number of connected clients can be served. A client program connects to an FTP server and then, unless anonymous access is enabled, must authenticate itself by sending a username and password; after that it can retrieve files from and/or send files to the server, along with other operations depending on the user's privileges. See also Server (computing) File Transfer Protocol Comparison of FTP server software packages References Servers (computing) FTP server software
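As an illustration of a typical client session, the following sketch uses Python's standard-library ftplib; the host name, credentials, and file name are placeholders, not a real server.

```python
# Minimal FTP client session: connect, authenticate, list, download.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:            # placeholder host
    ftp.login(user="alice", passwd="secret")   # anonymous access: ftp.login()
    print(ftp.nlst())                          # list the current directory
    with open("report.txt", "wb") as f:
        # issue a RETR command and write the received bytes to a local file
        ftp.retrbinary("RETR report.txt", f.write)
```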
https://en.wikipedia.org/wiki/Hydrogen%20hypothesis
The hydrogen hypothesis is a model proposed by William F. Martin and Miklós Müller in 1998 that describes a possible way in which the mitochondrion arose as an endosymbiont within a prokaryotic host in the archaea, giving rise to a symbiotic association of two cells from which the first eukaryotic cell could have arisen (symbiogenesis). According to the hydrogen hypothesis: The hosts that acquired the mitochondria were hydrogen-dependent archaea, possibly similar in physiology to modern methanogenic archaea, which use hydrogen and carbon dioxide to produce methane; The future mitochondrion was a facultatively anaerobic eubacterium which produced hydrogen and carbon dioxide as byproducts of anaerobic respiration; A symbiotic relationship between the two started, based on the host's hydrogen dependence (anaerobic syntrophy). Mechanism The hypothesis differs from many alternative views within the endosymbiotic theory framework, which suggest that the first eukaryotic cells evolved a nucleus but lacked mitochondria, the latter arising when a eukaryote engulfed a primitive bacterium that eventually became the mitochondrion. The hypothesis attaches evolutionary significance to hydrogenosomes and provides a rationale for their common ancestry with mitochondria. Hydrogenosomes are anaerobic mitochondria that produce ATP by, as a rule, converting pyruvate into hydrogen, carbon dioxide and acetate. Examples from modern biology are known where methanogens cluster around hydrogenosomes within eukaryotic cells. Most theories within the endosymbiotic theory framework do not address the common ancestry of mitochondria and hydrogenosomes. The hypothesis provides a straightforward explanation for the observation that eukaryotes are genetic chimeras with genes of archaeal and eubacterial ancestry. Furthermore, it would imply that archaea and eukarya split only after the modern groups of archaea had appeared. Most theories within the endosymbiotic theory framework predict that some eukaryo
https://en.wikipedia.org/wiki/CPU%20socket
In computer hardware, a CPU socket or CPU slot contains one or more mechanical components providing mechanical and electrical connections between a microprocessor and a printed circuit board (PCB). This allows the central processing unit (CPU) to be placed and replaced without soldering. Common sockets have retention clips that apply a constant force, which must be overcome when a device is inserted. For chips with many pins, zero insertion force (ZIF) sockets are preferred. Common sockets include Pin Grid Array (PGA) and Land Grid Array (LGA). These designs apply a compression force once either a handle (PGA type) or a surface plate (LGA type) is put into place. This provides superior mechanical retention while avoiding the risk of bending pins when inserting the chip into the socket. Certain devices use Ball Grid Array (BGA) sockets, although these require soldering and are generally not considered user-replaceable. CPU sockets are used on the motherboard in desktop and server computers. Because they allow easy swapping of components, they are also used for prototyping new circuits. Laptops typically use surface-mount CPUs, which take up less space on the motherboard than a socketed part. As pin density increases in modern sockets, increasing demands are placed on printed circuit board fabrication techniques if the large number of signals is to be successfully routed to nearby components. Likewise, within the chip carrier, the wire bonding technology becomes more demanding with increasing pin counts and pin densities. Each socket technology has specific reflow soldering requirements. As CPU and memory frequencies increase beyond roughly 30 MHz, electrical signalling increasingly shifts from single-ended parallel buses to differential signaling, bringing a new set of signal integrity challenges. The evolution of the CPU socket thus amounts to a coevolution of all of these technologies in tandem. Modern CPU sockets are almost always designed in con
https://en.wikipedia.org/wiki/Point%20%28geometry%29
In classical Euclidean geometry, a point is a primitive notion that models an exact location in space, and has no length, width, or thickness. In modern mathematics, a point is considered as an element of some set, a point set. A space is a point set with some additional structure. An isolated point has no other neighboring points in a given subset. Being a primitive notion means that a point cannot be defined in terms of previously defined objects. That is, a point is defined only by some properties, called axioms, that it must satisfy; for example, "there is exactly one line that passes through two different points". Points in Euclidean geometry Points, considered within the framework of Euclidean geometry, are one of the most fundamental objects. Euclid originally defined the point as "that which has no part". In the two-dimensional Euclidean plane, a point is represented by an ordered pair (x, y) of numbers, where the first number conventionally represents the horizontal and is often denoted by x, and the second number conventionally represents the vertical and is often denoted by y. This idea is easily generalized to three-dimensional Euclidean space, where a point is represented by an ordered triplet (x, y, z) with the additional third number representing depth and often denoted by z. Further generalizations are represented by an ordered tuplet of n terms, (a1, a2, ..., an), where n is the dimension of the space in which the point is located. Many constructs within Euclidean geometry consist of an infinite collection of points that conform to certain axioms. This is usually represented by a set of points; as an example, a line is an infinite set of points of the form L = {(a1, a2, ..., an) : a1c1 + a2c2 + ... + ancn = d}, where c1 through cn and d are constants and n is the dimension of the space. Similar constructions exist that define the plane, line segment, and other related concepts. A line segment consisting of only a single point is called a degenerate line segment. In addition to defining points and constructs related to points, Eucli
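A quick sketch of the set-based line definition above, with illustrative coefficients not taken from the source: a point belongs to the line exactly when its coordinates satisfy the linear equation.

```python
# A point (a1, ..., an) lies on L = {a : a1*c1 + ... + an*cn = d}
# iff its coordinates satisfy the linear equation.
def on_line(point, coeffs, d):
    return sum(a * c for a, c in zip(point, coeffs)) == d

# In the plane (n = 2), coefficients c = (1, -1) and d = 0
# give the diagonal line y = x.
assert on_line((3, 3), (1, -1), 0)
assert not on_line((3, 4), (1, -1), 0)
```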
https://en.wikipedia.org/wiki/S/MIME
S/MIME (Secure/Multipurpose Internet Mail Extensions) is a standard for public-key encryption and signing of MIME data. S/MIME is on an IETF standards track and defined in a number of documents. It was originally developed by RSA Data Security, and the original specification used the IETF MIME specification with the de facto industry standard PKCS #7 secure message format. Change control to S/MIME has since been vested in the IETF, and the specification is now layered on Cryptographic Message Syntax (CMS), an IETF specification that is identical in most respects to PKCS #7. S/MIME functionality is built into the majority of modern email software, and implementations interoperate with one another. Since it is built on CMS, S/MIME can also hold an advanced digital signature. Function S/MIME provides the following cryptographic security services for electronic messaging applications: Authentication Message integrity Non-repudiation of origin (using digital signatures) Privacy Data security (using encryption) S/MIME specifies the MIME type application/pkcs7-mime (smime-type "enveloped-data") for data enveloping (encrypting), where the whole (prepared) MIME entity to be enveloped is encrypted and packed into an object which is subsequently inserted into an application/pkcs7-mime MIME entity. S/MIME certificates Before S/MIME can be used in any of the above applications, one must obtain and install an individual key/certificate either from one's in-house certificate authority (CA) or from a public CA. The accepted best practice is to use separate private keys (and associated certificates) for signature and for encryption, as this permits escrow of the encryption key without compromising the non-repudiation property of the signature key. Encryption requires having the destination party's certificate on hand (which is typically obtained automatically upon receiving a message from the party with a valid signing certificate). While it is technically possible to send a message enc
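As a sketch of the signing service, the following uses the third-party Python package cryptography, whose pkcs7 module can emit S/MIME-formatted output; the certificate and key file names are placeholders, and a real message would be a full MIME entity rather than the short byte string used here.

```python
# Produce an S/MIME (PKCS #7 / CMS) detached signature over a message body.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.serialization import pkcs7

# Placeholder files assumed to hold the sender's certificate and private key.
with open("signer.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("signer.key", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=None)

signed = (
    pkcs7.PKCS7SignatureBuilder()
    .set_data(b"Subject: hello\r\n\r\nThis body will be signed.")
    .add_signer(cert, key, hashes.SHA256())
    .sign(serialization.Encoding.SMIME, [pkcs7.PKCS7Options.DetachedSignature])
)
print(signed.decode())  # a multipart/signed MIME entity, readable without S/MIME
```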
https://en.wikipedia.org/wiki/Unimodular%20matrix
In mathematics, a unimodular matrix M is a square integer matrix having determinant +1 or −1. Equivalently, it is an integer matrix that is invertible over the integers: there is an integer matrix N that is its inverse (these are equivalent under Cramer's rule). Thus every equation Mx = b, where M and b both have integer components and M is unimodular, has an integer solution x. The n × n unimodular matrices form a group called the n × n general linear group over the integers Z, which is denoted GL(n, Z). Examples of unimodular matrices Unimodular matrices form a subgroup of the general linear group under matrix multiplication, i.e. the following matrices are unimodular: Identity matrix The inverse of a unimodular matrix The product of two unimodular matrices Other examples include: Pascal matrices Permutation matrices the three transformation matrices in the ternary tree of primitive Pythagorean triples Certain transformation matrices for rotation, shearing (both with determinant 1) and reflection (determinant −1). The unimodular matrix used (possibly implicitly) in lattice reduction and in the Hermite normal form of matrices. The Kronecker product of two unimodular matrices is also unimodular. This follows since det(A ⊗ B) = (det A)^q (det B)^p, where p and q are the dimensions of A and B, respectively. Total unimodularity A totally unimodular matrix (TU matrix) is a matrix for which every square non-singular submatrix is unimodular. Equivalently, every square submatrix has determinant 0, +1 or −1. A totally unimodular matrix need not be square itself. From the definition it follows that any submatrix of a totally unimodular matrix is itself totally unimodular (TU). Furthermore, it follows that any TU matrix has only 0, +1 or −1 entries. The converse is not true, i.e., a matrix with only 0, +1 or −1 entries is not necessarily totally unimodular. A matrix is TU if and only if its transpose is TU. Totally unimodular matrices are extremely important in polyhedral combinatorics and combinatorial optimization since they give a
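A small SymPy sketch of the defining property, using exact integer arithmetic; the example matrix is illustrative, not from the source.

```python
# Check unimodularity: integer matrix, determinant +1 or -1, integer inverse.
from sympy import Matrix

M = Matrix([[2, 3], [1, 2]])      # det = 2*2 - 3*1 = 1, so M is unimodular
assert M.det() in (1, -1)

# The inverse exists over the integers, exactly as the definition promises.
Minv = M.inv()                    # Matrix([[2, -3], [-1, 2]])
assert all(x.is_integer for x in Minv)
assert Minv * M == Matrix.eye(2)
```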
https://en.wikipedia.org/wiki/Simple%20Authentication%20and%20Security%20Layer
Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. It decouples authentication mechanisms from application protocols, in theory allowing any authentication mechanism supported by SASL to be used in any application protocol that uses SASL. Authentication mechanisms can also support proxy authorization, a facility allowing one user to assume the identity of another. They can also provide a data security layer offering data integrity and data confidentiality services. DIGEST-MD5 provides an example of a mechanism which can provide a data-security layer. Application protocols that support SASL typically also support Transport Layer Security (TLS) to complement the services offered by SASL. John Gardiner Myers wrote the original SASL specification (RFC 2222) in 1997. In 2006, that document was replaced by RFC 4422, authored by Alexey Melnikov and Kurt D. Zeilenga. SASL, as defined by RFC 4422, is an IETF Standard Track protocol and is a Proposed Standard. SASL mechanisms A SASL mechanism implements a series of challenges and responses. Defined SASL mechanisms include EXTERNAL, ANONYMOUS, PLAIN, OTP, CRAM-MD5, DIGEST-MD5, SCRAM, GSSAPI, and OAUTHBEARER. SASL-aware application protocols Application protocols define their representation of SASL exchanges with a profile. A protocol has a service name such as "ldap" in a registry shared with GSSAPI and Kerberos. Protocols currently supporting SASL include: Application Configuration Access Protocol Advanced Message Queuing Protocol (AMQP) Blocks Extensible Exchange Protocol Internet Message Access Protocol (IMAP) Internet Message Support Protocol Internet Relay Chat (IRC) (with IRCX or the IRCv3 SASL extension) Lightweight Directory Access Protocol (LDAP) libvirt ManageSieve (RFC 5804) memcached Post Office Protocol (POP) Remote framebuffer protocol used by VNC Simple Mail Transfer Protocol (SMTP) Subversion protocol Extensible Messaging and Presence Protocol (XMPP) See also Transport Layer Security (TLS) References
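As a concrete example of one mechanism's wire format, the PLAIN mechanism (RFC 4616) sends a single base64-encoded string consisting of the authorization identity, authentication identity, and password separated by NUL bytes. A minimal sketch with placeholder credentials:

```python
# Build the SASL PLAIN initial response: [authzid] NUL authcid NUL password.
import base64

authzid, authcid, password = "", "alice", "secret"   # empty authzid is typical
response = base64.b64encode(
    "\0".join([authzid, authcid, password]).encode("utf-8")
)
print(response.decode())  # e.g. sent after "AUTH PLAIN" in an SMTP session
```

Because PLAIN exposes the password to the server, it is normally used only over a TLS-protected connection, which is one reason SASL-aware protocols typically support TLS as noted above.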
https://en.wikipedia.org/wiki/Lucas%20chain
In mathematics, a Lucas chain is a restricted type of addition chain, named for the French mathematician Édouard Lucas. It is a sequence a0, a1, a2, a3, ... that satisfies a0 = 1, and for each k > 0: ak = ai + aj, where either ai = aj or |ai − aj| = am, for some i, j, m < k. The sequence of powers of 2 (1, 2, 4, 8, 16, ...) and the Fibonacci sequence (with a slight adjustment of the starting point: 1, 2, 3, 5, 8, ...) are simple examples of Lucas chains. Lucas chains were introduced by Peter Montgomery in 1983. If L(n) is the length of the shortest Lucas chain for n, then Kutz has shown that most n do not have L(n) < (1 − ε) log_φ n, where φ is the golden ratio. References Integer sequences Addition chains
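The definition can be checked mechanically; the following brute-force sketch (an illustration, not from the source) tests whether a given sequence satisfies the Lucas-chain condition.

```python
# Verify a Lucas chain: a0 = 1, and each ak = ai + aj with either ai = aj
# or |ai - aj| appearing earlier in the chain.
def is_lucas_chain(seq):
    if not seq or seq[0] != 1:
        return False
    for k in range(1, len(seq)):
        prev = seq[:k]
        ok = any(
            seq[k] == prev[i] + prev[j]
            and (prev[i] == prev[j] or abs(prev[i] - prev[j]) in prev)
            for i in range(k)
            for j in range(k)
        )
        if not ok:
            return False
    return True

assert is_lucas_chain([1, 2, 4, 8, 16])  # powers of 2: each step doubles
assert is_lucas_chain([1, 2, 3, 5, 8])   # Fibonacci: |ai - aj| is an earlier term
assert not is_lucas_chain([1, 2, 4, 7])  # no pair of earlier terms sums to 7
```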
https://en.wikipedia.org/wiki/List%20of%20postal%20codes
This list shows an overview of postal code notation schemes for all countries that have postal or ZIP Code systems. List [Lists of postal codes]. Legend: A = letter; N = number; ? = letter or number; CC = ISO 3166-1 alpha-2 country code. On the use of country codes The use of country codes in conjunction with postal codes started as a recommendation from CEPT (European Conference of Postal and Telecommunications Administrations) in the 1960s. In the original CEPT recommendation, the distinguishing signs of motor vehicles in international traffic ("car codes") were placed before the postal code and separated from it by a "-" (dash). The codes were only used on international mail and were hardly ever used internally in each country. Since the late 1980s, however, a number of postal administrations have changed the recommended codes to the two-letter country codes of ISO 3166. This allows a universal, standardized code set to be used and brings it in line with country codes used elsewhere in the UPU (Universal Postal Union). Attempts were also made (without success) to make this part of the official address guidelines of the UPU. Recently introduced postal code systems where the UPU has been involved have included the ISO 3166 country code as an integral part of the postal code. At present there are no universal guidelines as to which code set to use, and recommendations vary from country to country. In some cases, the applied country code will differ according to recommendations of the sender's postal administration. The UPU recommends that the country name always be included as the last line of the address. In the list above, the following principles have been applied: Integral country codes have been included in the code format, in bold type and without brackets. These are also used on internal mail in the respective countries. The ISO 3166 codes are used alone for countries that have explicitly recommended them. Where there is no explicit recommendation
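The legend's notation lends itself to mechanical validation; the sketch below (an illustration, not part of the list) turns a format string such as "AANN NAA" into a regular expression. It handles only the A, N, and ? symbols plus literal characters; the bold CC prefixes would need separate treatment.

```python
# Convert the legend's A/N/? notation into a validation regex.
import re

def format_to_regex(fmt):
    parts = {"A": "[A-Z]", "N": "[0-9]", "?": "[A-Z0-9]"}
    return "^" + "".join(parts.get(c, re.escape(c)) for c in fmt) + "$"

# e.g. the UK-style pattern "AANN NAA" (one of several UK formats)
pattern = re.compile(format_to_regex("AANN NAA"))
assert pattern.match("SW19 2AB")
assert not pattern.match("12345")
```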
https://en.wikipedia.org/wiki/Vietnamese%20Quoted-Readable
Vietnamese Quoted-Readable (usually abbreviated VIQR), also known as Vietnet, is a convention for writing Vietnamese using ASCII characters encoded in only 7 bits, making it possible for Vietnamese to be supported in computing and communication systems of the time. Because the Vietnamese alphabet contains a complex system of diacritical marks, VIQR requires the user to type a base letter, followed by one or two characters that represent the diacritical marks. Syntax VIQR uses the following convention: ' for the acute accent, ` for the grave accent, ? for the hook above, ~ for the tilde, . for the dot below, ^ for the circumflex, ( for the breve, and + for the horn. VIQR uses DD or Dd for the Vietnamese letter Đ, and dd for the Vietnamese letter đ. To type certain punctuation marks (namely, the period, question mark, apostrophe, forward slash, opening parenthesis, or tilde) directly after most Vietnamese words, a backslash (\) must be typed directly before the punctuation mark, functioning as an escape character, so that it will not be interpreted as a diacritical mark. For example: O^ng te^n gi`\? (What is your name [Sir]?) To^i te^n la` Tra^`n Va(n Hie^'u. (My name is Trần Văn Hiếu.) Software support VIQR is primarily used as a Vietnamese input method in software that supports Unicode. Similar input methods include Telex and VNI. Input method editors such as VPSKeys convert VIQR sequences to Unicode precomposed characters as one types, typically allowing modifier keys to be input after all the base letters of each word. However, in the absence of input method software or Unicode support, VIQR can still be input using a standard keyboard and read as plain ASCII text without suffering from mojibake. Unlike the VISCII and VPS code pages, VIQR is rarely used as a character encoding. While VIQR is registered with the Internet Assigned Numbers Authority as a MIME charset, MIME-compliant software is not required to support it. Nevertheless, the Mozilla Vietnamese Enabling Project once produced builds of the open source version of Netscape Communicator, as well as its successor, the Mozilla Application Suite, that were capable of decoding VIQR-encoded webpages, e-mails, and ne
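A minimal sketch of VIQR decoding in Python, mapping each mark to its Unicode combining character and letting NFC normalization compose the result. It implements only the rules described above and ignores corner cases such as a literal "dd" inside non-Vietnamese text.

```python
# Tiny VIQR decoder: base letter + mark characters -> precomposed Unicode.
import unicodedata

MARKS = {
    "'": "\u0301",  # acute
    "`": "\u0300",  # grave
    "?": "\u0309",  # hook above
    "~": "\u0303",  # tilde
    ".": "\u0323",  # dot below
    "^": "\u0302",  # circumflex
    "(": "\u0306",  # breve
    "+": "\u031B",  # horn
}

def viqr_to_unicode(text):
    out, i = [], 0
    while i < len(text):
        c = text[i]
        prev = out[-1] if out else ""
        if c == "\\" and i + 1 < len(text):            # escape: keep punctuation
            out.append(text[i + 1]); i += 2
        elif text[i:i + 2].lower() == "dd" and c in "dD":
            out.append("đ" if c == "d" else "Đ"); i += 2
        elif c in MARKS and (prev.isalpha() or prev in MARKS.values()):
            out.append(MARKS[c]); i += 1               # mark follows a letter
        else:
            out.append(c); i += 1
    return unicodedata.normalize("NFC", "".join(out))  # compose base + marks

print(viqr_to_unicode("Tra^`n Va(n Hie^'u"))  # -> Trần Văn Hiếu
```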