https://en.wikipedia.org/wiki/Broadcasting%20%28networking%29
In computer networking, telecommunication and information theory, broadcasting is a method of transferring a message to all recipients simultaneously. Broadcasting can be performed as a high-level operation in a program, for example, broadcasting in Message Passing Interface, or it may be a low-level networking operation, for example broadcasting on Ethernet. All-to-all communication is a computer communication method in which each sender transmits messages to all receivers within a group. In networking this can be accomplished using broadcast or multicast. This is in contrast with the point-to-point method in which each sender communicates with one receiver. Addressing methods There are four principal addressing methods in the Internet Protocol: Overview In computer networking, broadcasting refers to transmitting a packet that will be received by every device on the network. In practice, the scope of the broadcast is limited to a broadcast domain. Broadcasting is the most general communication method and is also the most intensive, in the sense that many messages may be required and many network devices are involved. This is in contrast to unicast addressing in which a host sends datagrams to another single host, identified by a unique address. All-to-all communication may be performed as an all-scatter, in which each sender performs its own scatter so that the messages are distinct for each receiver, or an all-broadcast, in which they are the same. The MPI message passing method, which is the de facto standard on large computer clusters, includes the MPI_Alltoall method. Not all network technologies support broadcast addressing; for example, neither X.25 nor Frame Relay has broadcast capability. The Internet Protocol Version 4 (IPv4), which is the primary networking protocol in use today on the Internet and all networks connected to it, supports broadcast, but the broadcast domain is the broadcasting host's subnet, which is typically small; there is no way to do an Internet-
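A minimal sketch of the two high-level operations mentioned above, using the mpi4py binding (an assumed tool choice; the text itself only names MPI): broadcast delivers one rank's message to every rank, while all-to-all has every rank send a distinct message to every other rank.

from mpi4py import MPI  # assumed installed; run with e.g.: mpiexec -n 4 python demo.py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Broadcast: rank 0's message reaches every rank in the communicator.
msg = "hello" if rank == 0 else None
msg = comm.bcast(msg, root=0)

# All-to-all (the MPI_Alltoall pattern): each rank sends a distinct
# item to every rank and receives one distinct item from each.
sendbuf = [f"{rank}->{dest}" for dest in range(size)]
recvbuf = comm.alltoall(sendbuf)
print(rank, msg, recvbuf)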
https://en.wikipedia.org/wiki/Awn%20%28botany%29
In botany, an awn is either a hair- or bristle-like appendage on a larger structure, or in the case of the Asteraceae, a stiff needle-like element of the pappus. Awns are characteristic of various plant families, including Geraniaceae and many grasses (Poaceae). A common name for awns is foxtails, because they tend to stick to animals passing by the plant. Description In grasses, awns typically extend from the lemmas of the florets. This often gives the grass synflorescence a hairy appearance. Awns may be long (several centimeters) or short, straight or curved, single or multiple per floret. Some biological genera are named after their awns, such as the three-awns (Aristida). In some species, the awns can contribute significantly to photosynthesis, as, for example, in barley. The awns of wild emmer-wheat spikelets effectively self-cultivate by propelling themselves mechanically into soils. During a period of increased humidity during the night, the awns of the spikelet become erect and draw together, and in the process push the grain into the soil. During the daytime the humidity drops and the awns slacken back again; however, fine silica hairs on the awns act as ratchet hooks in the soil and prevent the spikelets from reversing back out. Over the course of alternating stages of daytime and nighttime humidity, the awns' pumping movements, which resemble a swimming frog's kick, drill the spikelet as much as an inch into the soil. When awns occur in the Geraniaceae, they form the distal (rostral) points of the five carpels, lying parallel in the style above the ovary. Depending on the species, such awns have various seed-dispersal functions: dispersing the seed by flinging it out (seed ejection); flinging away the entire carpel so that it snaps off (carpel projection); entangling the awn or bristles on passing animals (zoochory); or possibly burying the seed by twisting as it lies on soft soil.
https://en.wikipedia.org/wiki/Air%20preheater
An air preheater is any device designed to heat air before another process (for example, combustion in a boiler), with the primary objective of increasing the thermal efficiency of the process. They may be used alone or to replace a recuperative heat system or to replace a steam coil. In particular, this article describes the combustion air preheaters used in large boilers found in thermal power stations producing electric power from e.g. fossil fuels, biomass or waste. As the Ljungström air preheater has been credited with worldwide fuel savings estimated at 4,960,000,000 tons of oil, "few inventions have been as successful in saving fuel as the Ljungström Air Preheater", which was marked as the 44th International Historic Mechanical Engineering Landmark by the American Society of Mechanical Engineers. The purpose of the air preheater is to recover the heat from the boiler flue gas, which increases the thermal efficiency of the boiler by reducing the useful heat lost in the flue gas. As a consequence, the flue gases are also conveyed to the flue gas stack (or chimney) at a lower temperature, allowing simplified design of the conveyance system and the flue gas stack. It also allows control over the temperature of gases leaving the stack (to meet emissions regulations, for example). It is installed between the economizer and the chimney. Types There are two types of air preheaters for use in steam generators in thermal power stations: one is a tubular type built into the boiler flue gas ducting, and the other is a regenerative air preheater. These may be arranged so the gas flows horizontally or vertically across the axis of rotation. Another type of air preheater is the regenerator used in iron or glass manufacture. Tubular type Construction features Tubular preheaters consist of straight tube bundles which pass through the outlet ducting of the boiler and open at each end outside of the ducting. Inside the ducting, the hot furnace gases pass around the preheater t
https://en.wikipedia.org/wiki/Electronic%20speed%20control
An electronic speed control (ESC) is an electronic circuit that controls and regulates the speed of an electric motor. It may also provide reversing of the motor and dynamic braking. Miniature electronic speed controls are used in electrically powered radio controlled models. Full-size electric vehicles also have systems to control the speed of their drive motors. Function An electronic speed control follows a speed reference signal (derived from a throttle lever, joystick, or other manual input) and varies the switching rate of a network of field effect transistors (FETs). By adjusting the duty cycle or switching frequency of the transistors, the speed of the motor is changed. The rapid switching of the current flowing through the motor is what causes the motor itself to emit its characteristic high-pitched whine, especially noticeable at lower speeds. Different types of speed controls are required for brushed DC motors and brushless DC motors. A brushed motor can have its speed controlled by varying the voltage on its armature. (Industrially, motors with electromagnet field windings instead of permanent magnets can also have their speed controlled by adjusting the strength of the motor field current.) A brushless motor requires a different operating principle. The speed of the motor is varied by adjusting the timing of pulses of current delivered to the several windings of the motor. Brushless ESC systems basically create three-phase AC power, like a variable frequency drive, to run brushless motors. Brushless motors are popular with radio controlled airplane hobbyists because of their efficiency, power, longevity and light weight in comparison to traditional brushed motors. Brushless DC motor controllers are much more complicated than brushed motor controllers. The correct phase of the current fed to the motor varies with the motor rotation, which is to be taken into account by the ESC: Usually, back EMF from the motor windings is used to detect this
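A rough illustration of the duty-cycle idea described above (a sketch under assumed numbers, not any particular ESC's firmware): for a brushed DC motor, the average voltage the motor sees is the supply voltage scaled by the PWM duty cycle.

# Illustrative only: throttle fraction -> PWM duty cycle -> average motor voltage.
def duty_cycle(throttle: float) -> float:
    # Clamp the normalized throttle input to [0, 1]; duty equals the fraction.
    return max(0.0, min(1.0, throttle))

V_SUPPLY = 11.1  # volts; a 3-cell LiPo pack is an assumed example value
for throttle in (0.25, 0.5, 1.0):
    d = duty_cycle(throttle)
    print(f"throttle {throttle:.2f} -> duty {d:.2f} -> {d * V_SUPPLY:.2f} V average")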
https://en.wikipedia.org/wiki/System%20requirements%20specification
A System Requirements Specification (SyRS) (abbreviated SysRS when it needs to be distinguished from a software requirements specification (SRS)) is a structured collection of information that embodies the requirements of a system. A business analyst (BA), sometimes titled system analyst, is responsible for analyzing the business needs of their clients and stakeholders to help identify business problems and propose solutions. Within the systems development life cycle domain, the BA typically performs a liaison function between the business side of an enterprise and the information technology department or external service providers. See also Business analysis Business process reengineering Business requirements Concept of operations Data modeling Information technology Process modeling Requirement Requirements analysis Software requirements specification Systems analysis Use case
https://en.wikipedia.org/wiki/F-ATPase
F-ATPase, also known as F-Type ATPase, is an ATPase/synthase found in bacterial plasma membranes, in mitochondrial inner membranes (in oxidative phosphorylation, where it is known as Complex V), and in chloroplast thylakoid membranes. It uses a proton gradient to drive ATP synthesis by allowing the passive flux of protons across the membrane down their electrochemical gradient and using the energy released by the transport reaction to release newly formed ATP from the active site of F-ATPase. Together with V-ATPases and A-ATPases, F-ATPases belong to the superfamily of related rotary ATPases. F-ATPase consists of two domains: the Fo domain, which is integral in the membrane and is composed of 3 different types of integral proteins classified as a, b and c, and the F1 domain, which is peripheral (on the side of the membrane that the protons are moving into). F1 is composed of five types of polypeptide subunits, α3β3γδε, that bind to the surface of the Fo domain. F-ATPases usually work as ATP synthases instead of ATPases in cellular environments. That is to say, they usually make ATP from the proton gradient instead of working in the other direction, as V-ATPases typically do. They do occasionally run in reverse as ATPases in bacteria. Structure Fo-F1 particles are mainly formed of polypeptides. The F1 particle contains 5 types of polypeptides, with the composition ratio 3α:3β:1δ:1γ:1ε. The Fo has the 1a:2b:12c composition. Together they form a rotary motor. As the protons bind to the subunits of the Fo domain, they cause parts of it to rotate. This rotation is propagated by a 'camshaft' to the F1 domain. ADP and Pi (inorganic phosphate) bind spontaneously to the three β subunits of the F1 domain, so that every time it goes through a 120° rotation ATP is released (rotational catalysis). The Fo domain sits within the membrane, spanning the phospholipid bilayer, while the F1 domain extends into the cytosol of the cell to facilitate the use of newly synthesized ATP. The Bovine Mitochondrial F1-ATPa
https://en.wikipedia.org/wiki/Two-graph
In mathematics, a two-graph is a set of (unordered) triples chosen from a finite vertex set X, such that every (unordered) quadruple from X contains an even number of triples of the two-graph. A regular two-graph has the property that every pair of vertices lies in the same number of triples of the two-graph. Two-graphs have been studied because of their connection with equiangular lines and, for regular two-graphs, strongly regular graphs, and also finite groups because many regular two-graphs have interesting automorphism groups. A two-graph is not a graph and should not be confused with other objects called 2-graphs in graph theory, such as 2-regular graphs. Examples On the set of vertices {1,...,6} the following collection of unordered triples is a two-graph: 123  124  135  146  156  236  245  256  345  346 This two-graph is a regular two-graph since each pair of distinct vertices appears together in exactly two triples. Given a simple graph G = (V,E), the set of triples of the vertex set V whose induced subgraph has an odd number of edges forms a two-graph on the set V. Every two-graph can be represented in this way. This example is referred to as the standard construction of a two-graph from a simple graph. As a more complex example, let T be a tree with edge set E. The set of all triples of E that are not contained in a path of T form a two-graph on the set E. Switching and graphs A two-graph is equivalent to a switching class of graphs and also to a (signed) switching class of signed complete graphs. Switching a set of vertices in a (simple) graph means reversing the adjacencies of each pair of vertices, one in the set and the other not in the set: thus the edge set is changed so that an adjacent pair becomes nonadjacent and a nonadjacent pair becomes adjacent. The edges whose endpoints are both in the set, or both not in the set, are not changed. Graphs are switching equivalent if one can be obtained from the other by switching. An equivalence c
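A small Python check (a sketch, not part of the article) of the defining axiom, run on the example two-graph above and on the standard construction from a simple graph:

from itertools import combinations

def is_two_graph(vertices, triples):
    # Every unordered quadruple must contain an even number of the triples.
    return all(sum(t in triples for t in combinations(q, 3)) % 2 == 0
               for q in combinations(sorted(vertices), 4))

def odd_triples(vertices, edges):
    # Standard construction: triples whose induced subgraph has an odd edge count.
    E = {frozenset(e) for e in edges}
    return {t for t in combinations(sorted(vertices), 3)
            if sum(frozenset(p) in E for p in combinations(t, 2)) % 2 == 1}

# The regular two-graph on {1,...,6} listed above:
T = {(1,2,3),(1,2,4),(1,3,5),(1,4,6),(1,5,6),
     (2,3,6),(2,4,5),(2,5,6),(3,4,5),(3,4,6)}
assert is_two_graph(range(1, 7), T)

# Standard construction applied to the path graph 1-2-3-4:
assert is_two_graph(range(1, 5), odd_triples(range(1, 5), [(1,2),(2,3),(3,4)]))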
https://en.wikipedia.org/wiki/Equiangular%20lines
In geometry, a set of lines is called equiangular if all the lines intersect at a single point, and every pair of lines makes the same angle. Equiangular lines in Euclidean space Computing the maximum number of equiangular lines in n-dimensional Euclidean space is a difficult problem, and unsolved in general, though bounds are known. The maximal number of equiangular lines in 2-dimensional Euclidean space is 3: we can take the lines through opposite vertices of a regular hexagon, each at an angle of 120 degrees from the other two. The maximum in 3 dimensions is 6: we can take lines through opposite vertices of an icosahedron. It is known that the maximum number in any dimension n is less than or equal to n(n+1)/2. This upper bound is tight up to a constant factor, by a construction of de Caen. The maximum in dimensions 1 through 16 is listed in the On-Line Encyclopedia of Integer Sequences as follows: 1, 3, 6, 6, 10, 16, 28, 28, 28, 28, 28, 28, 28, 28, 36, 40, ... . In particular, the maximum number of equiangular lines in 7 dimensions is 28. We can obtain these lines as follows. Take the vector (−3,−3,1,1,1,1,1,1) in R8, and form all 28 vectors obtained by permuting the components of this. The dot product of two of these vectors is 8 if both have a component −3 in the same place, or −8 otherwise. Thus, the lines through the origin containing these vectors are equiangular. Moreover, all 28 vectors are orthogonal to the vector (1,1,1,1,1,1,1,1) in R8, so they lie in a 7-dimensional space. In fact, these 28 vectors and their negatives are, up to rotation and dilation, the 56 vertices of the 321 polytope. In other words, they are the weight vectors of the 56-dimensional representation of the Lie group E7. Equiangular lines are equivalent to two-graphs. Given a set of equiangular lines, let c be the cosine of the common angle. We assume that the angle is not 90°, since that case is trivial (i.e., not interesting, because the lines are just coordinate axes); thus, c is nonzer
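The 7-dimensional construction is easy to verify mechanically; the following sketch (not from the article) generates the 28 vectors and checks the claimed dot products and the orthogonality to (1,...,1):

from itertools import combinations

vectors = []
for i, j in combinations(range(8), 2):   # choose the two positions holding -3
    v = [1] * 8
    v[i] = v[j] = -3
    vectors.append(tuple(v))

assert len(vectors) == 28
dots = {sum(a * b for a, b in zip(u, v)) for u, v in combinations(vectors, 2)}
assert dots == {8, -8}                    # |cos angle| = 8/24 = 1/3 for every pair
assert all(sum(v) == 0 for v in vectors)  # orthogonal to (1,...,1), so 7-dimensional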
https://en.wikipedia.org/wiki/Registered%20state%20change%20notification
In Fibre Channel protocol, a registered state change notification (RSCN) is a Fibre Channel fabric's notification sent to all specified nodes in case of any major fabric changes. This allows nodes to immediately gain knowledge about the fabric and react accordingly. Overview Implementation of this function is obligatory for each Fibre Channel switch, but is optional for a node. This function belongs to the second level of the protocol, FC-2. Some events that trigger notifications are: nodes joining or leaving the fabric (the most common usage), switches joining or leaving the fabric, and changing the switch name. The nodes wishing to be notified in this way need to register themselves first at the Fabric Controller, which is a standardized FC virtual address present at each switch. RSCN and zoning If a fabric has some zones configured for additional security, notifications do not cross zone boundaries where not needed: there is no need to notify a node about a change that it cannot see anyway (because it happened in a separate zone). Example For example, let's assume there is a fabric with just one node, namely a server's FC-compatible HBA. First it registers itself for notifications. Then a human administrator connects another node, such as a disk array, to the fabric. This event is known at first only to a single switch, the one that detected one of its ports going online. The switch, however, has a list of registered nodes (currently containing only the HBA node) and notifies every one of them. As the HBA receives the notification, it chooses to query the nearest switch about the current list of nodes. It detects a new disk array and starts to communicate with it on a SCSI level, asking for a list of SCSI LUNs. Then it notifies the server's operating system that there is a new SCSI target containing some LUNs. The operating system auto-configures those as new block devices, ready for use. See also Storage area network Fibre Channel Fibre Channel fabric Fibre Chann
https://en.wikipedia.org/wiki/Jim%20Horning
James Jay Horning (24 August 1942 – 18 January 2013) was an American computer scientist and ACM Fellow. Overview Jim Horning received a PhD in computer science from Stanford University in 1969 for a thesis entitled A Study of Grammatical Inference. He was a founding member, and later chairman, of the Computer Systems Research Group at the University of Toronto, Canada, from 1969 until 1977. He was then a Research Fellow at the Xerox Palo Alto Research Center (PARC) from 1977 until 1984, and a founding member and senior consultant at DEC Systems Research Center (DEC/SRC) from 1984 until 1996. He was founder and director of STAR Lab from 1997 until 2001 at InterTrust Technologies Corp. Peter G. Neumann reported on 22 January 2013 in the RISKS Digest, Volume 27, Issue 14, that Horning had died on 18 January 2013. Horning's interests included programming languages, programming methodology, specification, formal methods, digital rights management and computer/network security. A major contribution was his involvement with the Larch approach to formal specification with John Guttag (MIT) et al. Selected publications A Compiler Generator (with William M. McKeeman and D. B. Wortman), Prentice Hall (1970).
https://en.wikipedia.org/wiki/Trantor%3A%20The%20Last%20Stormtrooper
Trantor: The Last Stormtrooper is a video game for the ZX Spectrum, Commodore 64, MSX, Amstrad CPC and Atari ST released by Go! (a label of U.S. Gold) in 1987. A version for MS-DOS was released by KeyPunch Software. It was produced by Probe Software (the team consisted of David Quinn, Nick Bruty and David Perry). It was released in Spain (as "Trantor") by Erbe Software. The game is a mix between a shoot 'em up and a platform game, but it was mostly known for its large and well-animated sprites. Bruty, who had previously produced graphics within tight limits on other projects, decided instead to focus on artwork and keep other aspects of the game simple to fit the constraints of the platforms. Gameplay The player controls the titular stormtrooper, who is the only survivor of the destruction of his spaceship (hence the title). Gameplay revolves around exploring the play area and collecting code letters. The play area consists of several different floors which can be explored freely via connecting lifts. However, Trantor is up against a very strict time limit. The levels are infested by various aliens and small flying robots which sap Trantor's strength if he touches them. Fortunately, Trantor is armed with a flamethrower with which to destroy these pests. Unfortunately, fuel for it is limited, although he can refill it at fuel points located on many of the floors. Whenever Trantor finds a code letter, his timer countdown is reset and then counts down again until he finds another letter. For this reason, much of the gameplay is a race against time. There are also lockers scattered around the floors which contain pick-ups to assist Trantor. These include hamburgers (restoring strength) and clocks (resetting the timer, as finding a code letter would). The game ends when Trantor's energy runs out or if the timer reaches zero. The player's performance is shown as a percentage of the game completed, along with a short comment. The comment for nine percent is "Is that
https://en.wikipedia.org/wiki/Perfect%20Developer
Perfect Developer (PD) is a tool for developing computer programs in a rigorous manner. It is used to develop applications in areas including IT systems and airborne critical systems. The principle is to develop a formal specification and refine the specification to code. Even though the tool is founded on formal methods, the suppliers claim that advanced mathematical knowledge is not a prerequisite. PD supports the Verified Design by Contract paradigm, which is an extension of Design by contract. In Verified Design by Contract, the contracts are verified by static analysis and automated theorem proving, so that it is certain that they will not fail at runtime. The Perfect specification language used has an object-oriented style, producing code in programming languages including Java, C# and C++. It has been developed by the UK company Escher Technologies Ltd. They note on their website that their claim is not that the language itself is perfect, but that it can be used to produce code which perfectly implements a precise specification. See also JML Safety Integrity Level External links Perfect Developer Escher Technologies Defence Standards Formal methods tools Formal specification languages
https://en.wikipedia.org/wiki/Nord-10
Nord-10 was a medium-sized general-purpose 16-bit minicomputer designed for multilingual time-sharing applications and for real-time multi-program systems, produced by Norsk Data. It was introduced in 1973. The later follow-up model, the Nord-10/S, introduced in 1975, added CPU cache, paging, and other miscellaneous improvements. The CPU had a microprocessor, which was defined in the manual as a portmanteau of "microcode processor", not to be confused with the then-nascent microprocessor. The CPU additionally contained the instructions, operator communication, bootstrap loaders, and hardware test programs, implemented in a 1K read-only memory. The microprocessor also allowed customer-specified instructions to be built in. Nord-10 had a memory management system with hardware paging extending the memory size from 64K to 256K 16-bit words, and two independent protection systems, one acting on each page and one on the mode of instructions. The interrupt system had 16 program levels in hardware, each with its own set of general-purpose registers. Note: Much of the following information is taken from a document written by Norsk Data introducing the Nord-10. Some information, particularly about the memory system, may be inaccurate for the later Nord-10/S. Central processor The central processing unit (CPU) consisted of a total of 24 printed circuit boards. The last eight positions in the rack were used for input/output (I/O) devices operated by program control, such as the console teleprinter (teletype), paper punched tape and punched card reader and punch, line printer, display, operator's panel, and a real-time clock. The Nord-10 had 160 processor registers, of which 128 were available to programs, eight on each of the 16 program levels. Six of those registers were general registers, one was the program counter, and the other contained status information. Floating point arithmetic operations were standard. The instructions could operate on five different forma
https://en.wikipedia.org/wiki/Nord-1
Nord-1 was Norsk Data's first minicomputer and the first commercially available computer made in Norway. It was a 16-bit system, developed in 1967 from the Simulation for Automatic Machinery. The first Nord-1 installed (serial number 2) was at the heart of a complete ship system aboard a Japanese-built cargo liner, the Taimyr. The system included bridge control, power management, load condition monitoring, and the first ever computer-controlled, radar-sensed anti-collision system (Automatic Radar Plotting Aid). Taimyr's Nord-1 proved reliable for the time, with more than a year between failures. It was probably the first minicomputer to feature floating-point arithmetic equipment as standard, and had an unusually rich complement of hardware registers for its time. It also featured relative addressing, and a fully automatic context-switched interrupt system. It was also the first minicomputer to offer virtual memory, offered as an option by 1969. It was succeeded by the Nord-10. Remaining machines The Nord-1 has been unusually well-preserved. Approximately 60 machines seem to have been produced, and at the very least ten machines have been preserved, including serial numbers 2, 4, and 5. This may be because the company Norsk Data was already a very large and very rapidly growing corporation by the time many of these machines were decommissioned.
https://en.wikipedia.org/wiki/John%20Rushby
John Rushby (born 1949) is a British computer scientist now based in the United States and working for SRI International. He previously taught and did research for Manchester University and later Newcastle University. Early life and education John Rushby was born and brought up in London, where he attended Dartford Grammar School. He studied at Newcastle University in the United Kingdom, gaining his computer science BSc there in 1971 and his PhD in 1977. Career From 1974 to 1975, he was a lecturer in the Computer Science Department at Manchester University. From 1979 to 1982, he was a research associate in the Department of Computing Science at the Newcastle University. Rushby joined SRI International in Menlo Park, California in 1983. Currently he is Program Director for Formal Methods and Dependable Systems in the Computer Science Laboratory at SRI. He developed the Prototype Verification System, which is a theorem prover. Awards and memberships Rushby was the recipient of the 2011 Harlan D. Mills Award from the IEEE Computer Society.
https://en.wikipedia.org/wiki/1%3A144%20scale
1:144 scale is a scale used for some scale models such as micro/mini armor. 1:144 means that the dimensions of the model are 1/144 (0.00694) of the dimensions of the original life-sized object; this equates to a scale of 1/2 inch per 6 feet of original dimension. For instance, an airplane in length would be a mere long as a 1:144 scale model. 1:144 scale finished and semi-finished models are becoming a popular trend not only in Asia, but in the West as well. Many European and American collectors are welcoming them for both model military vehicle display and miniature wargaming purposes. It is twice as large as traditional micro armor / mini armor of the 1:285 and 1:300 scales, yet practically just as useful. 1:144 scale modeling and miniatures are considered closely related to N scale (1:148–1:160) and many pieces from both scales can be used interchangeably. Dollhouses and miniatures In the construction and use of dollhouses, 1:144 scale represents the scale that a 1:12 scale dollhouse would have in a 1:12 scale dollhouse. This is often called a Dolls' dollhouse or Dollhouse's dollhouse. At this scale, a typical house is about across. Making internal parts for such a house is a difficult task for the home hobbyist. Commercial manufacturers often use laser cutting technology to ensure clean lines. Die-cast models 1:144 is a popular scale for die-cast model airplanes. This scale is usually for large aircraft such as airliners and bombers. Racing Champions also made many lines of micro cars and trucks during the late 1990s. These models included NASCAR stock cars, NHRA funny cars and top fuel rail dragsters, classic automobiles, sought-after muscle cars, and even semi trucks. Although these Ertl-manufactured models are described as 1:144 scale, they are actually replicas. Action figures 1:144 scale is also the primary scale of High Grade and Real Grade Gundam model kits and toys. Plastic military models There are
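A quick check of the stated equivalence (illustrative arithmetic only): 6 feet is 72 inches, and 72/144 = 0.5 inch on the model.

def model_inches(real_feet: float, scale: int = 144) -> float:
    # Convert a real-world length in feet to model inches at the given scale.
    return real_feet * 12 / scale

print(model_inches(6))   # 0.5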
https://en.wikipedia.org/wiki/VACTERL%20association
The VACTERL association (also VATER association, and less accurately VACTERL syndrome) refers to a recognized group of birth defects which tend to co-occur (see below). This pattern is a recognized association, as opposed to a syndrome, because there is no known pathogenetic cause to explain the grouped incidence. Each child with this condition is unique; at present the condition is treated after birth, with issues being approached one at a time. Some infants are born with symptoms that cannot be treated, and they do not survive. Also, VACTERL association can be linked to other, similar conditions such as Klippel-Feil syndrome and Goldenhar syndrome, including crossovers of conditions. No specific genetic or chromosome problem has been identified with VACTERL association. VACTERL can be seen with some chromosomal defects such as Trisomy 18 and is more frequently seen in babies of diabetic mothers. VACTERL association, however, is most likely caused by multiple factors. VACTERL association specifically refers to abnormalities in structures derived from the embryonic mesoderm. Signs and symptoms The following features are observed with VACTERL association:

V - Vertebral anomalies
A - Anorectal malformations
C - Cardiovascular anomalies
T - Tracheoesophageal fistula
E - Esophageal atresia
R - Renal (kidney) and/or radial anomalies
L - Limb defects

Although it has not been settled conclusively whether VACTERL should be defined by at least two or at least three component defects, it is typically defined by the presence of at least three of the above congenital malformations. Spine Vertebral anomalies, or defects of the spinal column, usually consist of small (hypoplastic) vertebrae or hemivertebrae where only one half of the bone is formed. About 80 percent of patients with VACTERL association will have vertebral anomalies. In early life these rarely cause any difficulties, although the presence of these defects on a chest x-ray may alert the physician to other defects associated with
https://en.wikipedia.org/wiki/Butterfly%20diagram
In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (DFTs) into a larger DFT, or vice versa (breaking a larger DFT up into subtransforms). The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case, as described below. The earliest occurrence in print of the term is thought to be in a 1969 MIT technical report. The same structure can also be found in the Viterbi algorithm, used for finding the most likely sequence of hidden states. Most commonly, the term "butterfly" appears in the context of the Cooley–Tukey FFT algorithm, which recursively breaks down a DFT of composite size n = rm into r smaller transforms of size m, where r is the "radix" of the transform. These smaller DFTs are then combined via size-r butterflies, which themselves are DFTs of size r (performed m times on corresponding outputs of the sub-transforms) pre-multiplied by roots of unity (known as twiddle factors). (This is the "decimation in time" case; one can also perform the steps in reverse, known as "decimation in frequency", where the butterflies come first and are post-multiplied by twiddle factors. See also the Cooley–Tukey FFT article.) Radix-2 butterfly diagram In the case of the radix-2 Cooley–Tukey algorithm, the butterfly is simply a size-2 DFT that takes two inputs (x0, x1) (corresponding outputs of the two sub-transforms) and gives two outputs (y0, y1) by the formula (not including twiddle factors): y0 = x0 + x1, y1 = x0 − x1. If one draws the data-flow diagram for this pair of operations, the (x0, x1) to (y0, y1) lines cross and resemble the wings of a butterfly, hence the name. More specifically, a radix-2 decimation-in-time FFT algorithm on n = 2^p inputs with respect to a primitive n-th root of unity ω relies on O(n log2 n) butterflies of the form: y0 = x0 + x1 ω^k, y1 = x0 − x1 ω^k, where k is an integer depending on the part of the transform being computed. Wh
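A compact recursive radix-2 decimation-in-time FFT built from exactly this butterfly (a sketch for illustration, not an optimized implementation):

import cmath

def fft(x):
    # x: list of complex samples; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    y = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)  # twiddle factor omega^k
        y[k]          = even[k] + w * odd[k]   # butterfly: y0 = x0 + w*x1
        y[k + n // 2] = even[k] - w * odd[k]   #            y1 = x0 - w*x1
    return y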
https://en.wikipedia.org/wiki/Twiddle%20factor
A twiddle factor, in fast Fourier transform (FFT) algorithms, is any of the trigonometric constant coefficients that are multiplied by the data in the course of the algorithm. This term was apparently coined by Gentleman & Sande in 1966, and has since become widespread in thousands of papers of the FFT literature. More specifically, "twiddle factors" originally referred to the root-of-unity complex multiplicative constants in the butterfly operations of the Cooley–Tukey FFT algorithm, used to recursively combine smaller discrete Fourier transforms. This remains the term's most common meaning, but it may also be used for any data-independent multiplicative constant in an FFT. The prime-factor FFT algorithm is one unusual case in which an FFT can be performed without twiddle factors, albeit only for restricted factorizations of the transform size. For example, W8^2 = e^(−2πi·2/8) = −i is a twiddle factor used in an 8-point radix-2 FFT.
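In the usual convention W_N^k = e^(−2πik/N), so the example value can be checked in a few lines of Python (illustrative only):

import cmath

def twiddle(N: int, k: int) -> complex:
    # W_N^k = e^(-2*pi*i*k/N), the root-of-unity constant of the butterfly step.
    return cmath.exp(-2j * cmath.pi * k / N)

print(twiddle(8, 2))   # approximately -1j, i.e. W_8^2 = -i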
https://en.wikipedia.org/wiki/Code%20bloat
In computer programming, code bloat is the production of program code (source code or machine code) that is perceived as unnecessarily long, slow, or otherwise wasteful of resources. Code bloat can be caused by inadequacies in the programming language in which the code is written, the compiler used to compile it, or the programmer writing it. Thus, while code bloat generally refers to source code size (as produced by the programmer), it can be used to refer instead to the generated code size or even the binary file size. Examples The following JavaScript algorithm has a large number of redundant variables, unnecessary logic and inefficient string concatenation:

// Complex
function TK2getImageHTML(size, zoom, sensor, markers) {
    var strFinalImage = "";
    var strHTMLStart = '<img src="';
    var strHTMLEnd = '" alt="The map"/>';
    var strURL = "http://maps.google.com/maps/api/staticmap?center=";
    var strSize = '&size=' + size;
    var strZoom = '&zoom=' + zoom;
    var strSensor = '&sensor=' + sensor;

    strURL += markers[0].latitude;
    strURL += ",";
    strURL += markers[0].longitude;
    strURL += strSize;
    strURL += strZoom;
    strURL += strSensor;

    for (var i = 0; i < markers.length; i++) {
        strURL += markers[i].addMarker();
    }

    strFinalImage = strHTMLStart + strURL + strHTMLEnd;
    return strFinalImage;
}

The same logic can be stated more efficiently as follows:

// Simplified
const TK2getImageHTML = (size, zoom, sensor, markers) => {
    const [{ latitude, longitude }] = markers;
    let url = `http://maps.google.com/maps/api/staticmap?center=${ latitude },${ longitude }&size=${ size }&zoom=${ zoom }&sensor=${ sensor }`;
    markers.forEach(marker => url += marker.addMarker());
    return `<img src="${ url }" alt="The map" />`;
};

Code density of different languages The difference in code density between various computer languages is so great that often less memory is needed to hold both a progr
https://en.wikipedia.org/wiki/Condorcet%27s%20jury%20theorem
Condorcet's jury theorem is a political science theorem about the relative probability of a given group of individuals arriving at a correct decision. The theorem was first expressed by the Marquis de Condorcet in his 1785 work Essay on the Application of Analysis to the Probability of Majority Decisions. The assumptions of the theorem are that a group wishes to reach a decision by majority vote. One of the two outcomes of the vote is correct, and each voter has an independent probability p of voting for the correct decision. The theorem asks how many voters we should include in the group. The result depends on whether p is greater than or less than 1/2: If p is greater than 1/2 (each voter is more likely to vote correctly), then adding more voters increases the probability that the majority decision is correct. In the limit, the probability that the majority votes correctly approaches 1 as the number of voters increases. On the other hand, if p is less than 1/2 (each voter is more likely to vote incorrectly), then adding more voters makes things worse: the optimal jury consists of a single voter. Since Condorcet, many other researchers have proved various other jury theorems, relaxing some or all of Condorcet's assumptions. Proofs Proof 1: Calculating the probability that two additional voters change the outcome To avoid the need for a tie-breaking rule, we assume n is odd. Essentially the same argument works for even n if ties are broken by adding a single voter. Now suppose we start with n voters, and let m of these voters vote correctly. Consider what happens when we add two more voters (to keep the total number odd). The majority vote changes in only two cases: m was one vote too small to get a majority of the n votes, but both new voters voted correctly. m was just equal to a majority of the n votes, but both new voters voted incorrectly. The rest of the time, either the new votes cancel out, only increase the gap, or don't make enough of a differe
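The limiting behaviour is easy to see numerically (a sketch, not part of the original text): the probability that a majority of n independent voters is correct is a binomial tail sum.

from math import comb

def majority_correct(n: int, p: float) -> float:
    # n odd: probability that more than half of the n voters vote correctly.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(n, round(majority_correct(n, 0.6), 4))   # climbs toward 1 when p > 1/2
# With p = 0.4 the same sweep falls toward 0, so a single voter is optimal.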
https://en.wikipedia.org/wiki/Edge%20disjoint%20shortest%20pair%20algorithm
Edge disjoint shortest pair algorithm is an algorithm in computer network routing. The algorithm is used for generating the shortest pair of edge-disjoint paths between a given pair of vertices. For an undirected graph G(V, E), it is stated as follows:

1. Run the shortest path algorithm for the given pair of vertices.
2. Replace each edge of the shortest path (equivalent to two oppositely directed arcs) by a single arc directed towards the source vertex.
3. Make the length of each of the above arcs negative.
4. Run the shortest path algorithm again (note: the algorithm should accept negative costs).
5. Erase the overlapping edges of the two paths found, and reverse the direction of the remaining arcs on the first shortest path such that each arc on it is now directed towards the destination vertex. The desired pair of paths results.

In lieu of the general-purpose Ford's shortest path algorithm, valid for negative arcs present anywhere in a graph (with no negative cycles), Bhandari provides two different algorithms, either one of which can be used in Step 4. One algorithm is a slight modification of the traditional Dijkstra's algorithm, and the other, called the Breadth-First-Search (BFS) algorithm, is a variant of Moore's algorithm. Because the negative arcs are only on the first shortest path, no negative cycle arises in the transformed graph (Steps 2 and 3). In a nonnegative graph, the modified Dijkstra algorithm reduces to the traditional Dijkstra's algorithm, and can therefore be used in Step 1 of the above algorithm (and similarly, the BFS algorithm). The modified Dijkstra algorithm (Bhandari, Ramesh (1994), "Optimal Diverse Routing in Telecommunication Fiber Networks", Proc. of IEEE INFOCOM, Toronto, Canada, pp. 1498–1508) uses the following notation:

G = (V, E)
d(i) – the distance of vertex i (i ∈ V) from source vertex A; it is the sum of arcs in a possible path from vertex A to vertex i. Note that d(A) = 0;
P(i) – the predecessor of vertex i on the same path.
Z – the destination vertex

Step 1. Sta
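A sketch of the five steps in Python on top of networkx (an assumed library choice, not part of Bhandari's text; the final grouping of the surviving edges into two explicit paths is omitted for brevity):

import networkx as nx

def edge_disjoint_shortest_pair(G, s, t):
    # Step 1: first shortest path on the undirected weighted graph.
    p1 = nx.shortest_path(G, s, t, weight="weight")
    # Steps 2-3: directed copy; each edge of p1 becomes one reversed, negative arc.
    D = nx.DiGraph()
    for u, v, w in G.edges(data="weight"):
        D.add_edge(u, v, weight=w)
        D.add_edge(v, u, weight=w)
    for u, v in zip(p1, p1[1:]):
        w = G[u][v]["weight"]
        D.remove_edge(u, v)
        D.remove_edge(v, u)
        D.add_edge(v, u, weight=-w)
    # Step 4: shortest path that tolerates negative arcs (Bellman-Ford here).
    p2 = nx.bellman_ford_path(D, s, t, weight="weight")
    # Step 5: erase overlapping edges; what remains forms the disjoint pair.
    e1 = {frozenset(e) for e in zip(p1, p1[1:])}
    e2 = {frozenset(e) for e in zip(p2, p2[1:])}
    return e1 ^ e2   # symmetric difference: the edges of the two disjoint paths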
https://en.wikipedia.org/wiki/Slow-wave%20sleep
Slow-wave sleep (SWS), often referred to as deep sleep, consists of stage three of non-rapid eye movement sleep. It usually lasts between 70 and 90 minutes and takes place during the first hours of the night. Initially, SWS consisted of both Stage 3, which has 20–50 percent delta wave activity, and Stage 4, which has more than 50 percent delta wave activity. Overview This period of sleep is called slow-wave sleep because the EEG activity is synchronized, characterised by slow waves with a frequency range of 0.5–4.5 Hz and relatively high amplitude, with peak-to-peak amplitude greater than 75 µV. The first section of the wave signifies a "down state", an inhibition or hyperpolarizing phase in which the neurons in the neocortex are silent. This is the period when the neocortical neurons are able to rest. The second section of the wave signifies an "up state", an excitation or depolarizing phase in which the neurons fire briefly at a high rate. The principal characteristics during slow-wave sleep that contrast with REM sleep are moderate muscle tone, slow or absent eye movement, and lack of genital activity. Slow-wave sleep is considered important for memory consolidation. This is sometimes referred to as "sleep-dependent memory processing". Impaired memory consolidation has been seen in individuals with primary insomnia, who thus do not perform as well as healthy individuals in memory tasks following a period of sleep. Furthermore, slow-wave sleep improves declarative memory (which includes semantic and episodic memory). A central model has been hypothesized in which long-term memory storage is facilitated by an interaction between the hippocampal and neocortical networks. In several studies, after subjects had training to learn a declarative memory task, the density of human sleep spindles present was significantly higher than the signals observed during the control tasks, which involved similar visual stimulation and cognitively demanding tasks but di
https://en.wikipedia.org/wiki/Mashup%20%28web%20application%20hybrid%29
A mashup (computer industry jargon), in web development, is a web page or web application that uses content from more than one source to create a single new service displayed in a single graphical interface. For example, a user could combine the addresses and photographs of their library branches with a Google map to create a map mashup. The term implies easy, fast integration, frequently using open application programming interfaces (open APIs) and data sources to produce enriched results that were not necessarily the original reason for producing the raw source data. The term mashup originally comes from creating something by combining elements from two or more sources. The main characteristics of a mashup are combination, visualization, and aggregation; the point is to make existing data more useful, for personal and professional use. To be able to permanently access the data of other services, mashups are generally client applications or hosted online. In recent years, more and more Web applications have published APIs that enable software developers to easily integrate data and functions the SOA way, instead of building them themselves. Mashups can be considered to have an active role in the evolution of social software and Web 2.0. Mashup composition tools are usually simple enough to be used by end-users. They generally do not require programming skills and instead support visual wiring of GUI widgets, services and components together. Therefore, these tools contribute to a new vision of the Web, where users are able to contribute. The term "mashup" is not formally defined by any standard-setting body. History The broader context of the history of the Web provides a background for the development of mashups. Under the Web 1.0 model, organizations stored consumer data on portals and updated them regularly. They controlled all the consumer data, and the consumer had to use their products and services to get the information. The advent of Web 2.0 intr
https://en.wikipedia.org/wiki/Nine%20Views
Nine Views is an ambient installation in Zagreb, Croatia which, together with the sculpture Prizemljeno Sunce (The Grounded Sun), comprises a scale model of the Solar System. Prizemljeno Sunce by Ivan Kožarić was first displayed in 1971 by the building of the Croatian National Theatre, and has since changed location a few times. Since 1994, it has been situated in Bogovićeva Street. It is a bronze sphere around in diameter. In 2004, artist Davor Preis had a two-week exhibition in the Josip Račić Exhibition Hall in Margaretska Street in Zagreb, and afterwards, he placed 9 models of the planets of the Solar System around Zagreb, to complete a model of the entire solar system. The models' sizes as well as their distances from the Prizemljeno Sunce are all in the same scale as the Prizemljeno Sunce itself. Preis did this installation with very little or no publicity, so his installation is not well known among citizens of Zagreb. On a few occasions, individuals or small groups of people, particularly physics students, "discovered" that there was a model of the Solar System in Zagreb. One of the earliest efforts to find all of the planets was started in November 2004 on the web forum of the student section of the Croatian Physics Society. The locations of the planets are as follows:

Mercury - 3 Margaretska Street
Venus - 3 Ban Josip Jelačić Square
Earth - 9 Varšavska Street
Mars - 21 Tkalčićeva Street
Jupiter - 71 Voćarska Street
Saturn - 1 Račićeva Street
Uranus - 9 Siget (not at the residential building but at the garage across the street)
Neptune - Kozari 17
Pluto - Bologna Alley (underpass) - included in the installation before being demoted to dwarf planet (someone has since ripped Pluto off; however, the plaque remains)

The system is at scale 1:680 000 000. Earth's model is about in diameter and is distance from the Sun's model, while Pluto's model is away from it. See also Monument to the Sun, a Solar System model in Zadar, Croatia Solar
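The stated 1:680,000,000 scale can be sanity-checked against standard astronomical figures (the model sizes in the text above were lost in extraction; the real-world values below are common reference numbers, not taken from the article):

SCALE = 680_000_000

real_metres = {
    "Sun diameter":       1.392e9,   # standard reference values, assumed here
    "Earth diameter":     1.2742e7,
    "Earth-Sun distance": 1.496e11,
}
for name, metres in real_metres.items():
    print(f"{name}: {metres / SCALE:.3f} m in the model")
# Sun diameter comes out at about 2.05 m, consistent with a roughly
# two-metre Grounded Sun sphere; Earth is under 2 cm, some 220 m away.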
https://en.wikipedia.org/wiki/Copper%28I%29%20iodide
Copper(I) iodide is the inorganic compound with the formula CuI. It is also known as cuprous iodide. It is useful in a variety of applications ranging from organic synthesis to cloud seeding. Copper(I) iodide is white, but samples often appear tan or even, when found in nature as the rare mineral marshite, reddish brown; such color is due to the presence of impurities. It is common for samples of iodide-containing compounds to become discolored due to the facile aerobic oxidation of the iodide anion to molecular iodine. Structure Copper(I) iodide, like most binary (containing only two elements) metal halides, is an inorganic polymer. It has a rich phase diagram, meaning that it exists in several crystalline forms. It adopts a zinc blende structure below 390 °C (γ-CuI), a wurtzite structure between 390 and 440 °C (β-CuI), and a rock salt structure above 440 °C (α-CuI). The ions are tetrahedrally coordinated when in the zinc blende or the wurtzite structure, with a Cu-I distance of 2.338 Å. Copper(I) bromide and copper(I) chloride also transform from the zinc blende structure to the wurtzite structure at 405 and 435 °C, respectively. Therefore, the longer the copper-halide bond length, the lower the temperature needs to be to change the structure from the zinc blende structure to the wurtzite structure. The interatomic distances in copper(I) bromide and copper(I) chloride are 2.173 and 2.051 Å, respectively. Consistent with its covalency, CuI is a p-type semiconductor. Preparation Copper(I) iodide can be prepared by heating iodine and copper in concentrated hydriodic acid. In the laboratory, however, copper(I) iodide is prepared by simply mixing an aqueous solution of potassium iodide and a soluble copper(II) salt such as copper sulfate: Cu2+ + 2I− → CuI + 0.5I2 Reactions Cuprous iodide, which degrades on standing, can be purified by dissolution into a concentrated solution of potassium iodide followed by dilution: CuI + I− ⇌ CuI2− Copper(I) iodide reacts
https://en.wikipedia.org/wiki/Internet%20Experiment%20Note
An Internet Experiment Note (IEN) is a sequentially numbered document in a series of technical publications issued by the participants of the early development work groups that created the precursors of the modern Internet. After DARPA began the Internet program in earnest in 1977, the project members were in need of communication and documentation of their work in order to realize the concepts laid out by Bob Kahn and Vint Cerf some years before. The Request for Comments (RFC) series was considered the province of the ARPANET project and the Network Working Group (NWG) which defined the network protocols used on it. Thus, the members of the Internet project decided on publishing their own series of documents, Internet Experiment Notes, which were modeled after the RFCs. Jon Postel became the editor of the new series, in addition to his existing role of administering the long-standing RFC series. Between March, 1977, and September, 1982, 206 IENs were published. After that, with the plan to terminate support of the Network Control Protocol (NCP) on the ARPANET and switch to TCP/IP, the production of IENs was discontinued, and all further publication was conducted within the existing RFC system. External links Internet Experiment Notes index at postel.org IEN archive at postel.org (plain text) IEN archive at postel.org (PDF) IEN index at rfc-editor.org History of the Internet Internet Standards
https://en.wikipedia.org/wiki/Hazy%20Sighted%20Link%20State%20Routing%20Protocol
The Hazy-Sighted Link State Routing Protocol (HSLS) is a wireless mesh network routing protocol being developed by the CUWiN Foundation. This is an algorithm allowing computers communicating via digital radio in a mesh network to forward messages to computers that are out of reach of direct radio contact. Its network overhead is theoretically optimal, utilizing both proactive and reactive link-state routing to limit network updates in space and time. Its inventors believe it is a more efficient protocol for routing wired networks as well. HSLS was invented by researchers at BBN Technologies. Efficiency HSLS was made to scale well to networks of over a thousand nodes, and on larger networks begins to exceed the efficiencies of other routing algorithms. This is accomplished by using a carefully designed balance of update frequency and update extent in order to propagate link-state information optimally, as sketched below. Unlike traditional methods, HSLS does not flood the network with link-state information to attempt to cope with moving nodes that change connections with the rest of the network. Further, HSLS does not require each node to have the same view of the network. Why a link-state protocol? Link-state algorithms are theoretically attractive because they find optimal routes, reducing waste of transmission capacity. The inventors of HSLS claim that routing protocols fall into three basically different schemes: proactive (such as OLSR), reactive (such as AODV), and algorithms that accept sub-optimal routings. Protocols become less efficient the more purely they follow any single strategy and the larger the network grows; the best algorithms seem to be in a sweet spot in the middle. The routing information is called a "link state update." The distance that a link-state update is copied is the "time to live" and is a count of the number of times it may be copied from one node to the next. HSLS is said to optimally balance the features of proactive, reactive, and subo
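As a loose illustration of that balance (a hypothetical sketch of the commonly described HSLS pacing rule, not the BBN specification): at the i-th update interval a node floods a link-state update whose hop-limited scope is the largest power of two dividing i, so nearby nodes hear frequent updates while distant nodes hear rare, far-reaching ones.

def lsu_scope(i: int) -> int:
    # Largest power of two dividing i; assumed pacing rule for illustration.
    return i & -i

print([lsu_scope(i) for i in range(1, 17)])
# [1, 2, 1, 4, 1, 2, 1, 8, 1, 2, 1, 4, 1, 2, 1, 16]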
https://en.wikipedia.org/wiki/Egon%20B%C3%B6rger
Egon Börger (born 13 May 1946) is a German-born computer scientist based in Italy. Life and work Börger was born in Bad Laer, Westphalia, Lower Saxony, Germany. Between 1965 and 1971 he studied at the Sorbonne, Paris (France), Université Catholique de Louvain, Institut Supérieur de Philosophie de Louvain and University of Münster (Germany). Between 1972 and 1976, he was at the Università di Salerno in Italy, where he taught the first courses in the newly established computer science degree. Since 1985 he has held a Chair in computer science at the University of Pisa, Italy. Since September 2010, he has been an elected member of the Academia Europaea. Egon Börger is a pioneer of applying logical methods in computer science. He is co-founder of the international conference series CSL. He is also one of the founders of the Abstract State Machines (ASM) formal method for accurate and controlled design and analysis of computer-based systems, and cofounder of the series of international ASM workshops, which in 2008 merged with the regular meetings of the B and Z User Groups to form the international ABZ conference. Börger contributed to the theoretical foundations of the method and initiated its industrial applications in a variety of fields, in particular programming languages, system architecture, requirements and software (re-)engineering, control systems, protocols, and web services. To date, he is one of the leading scientists in ASM-based modeling and verification technology, which he has crucially shaped by his activities. In 2007, he received the Humboldt Research Award. Festschrifts were produced for Börger's 60th and 75th birthdays. Selected publications Egon Börger and Robert Stärk, Abstract State Machines: A Method for High-Level System Design and Analysis, Springer-Verlag, 2003. Egon Börger, Computability, Complexity, Logic (North-Holland, Amsterdam 1989; translated from the German original from 1985; Italian translation Bollati Boringhieri 1989) Egon Börge
https://en.wikipedia.org/wiki/Duxelles
Duxelles () is a French cuisine term that refers to a mince of mushrooms, onions, herbs (such as thyme or parsley), and black pepper, sautéed in butter and reduced to a paste. Cream is sometimes used, and some recipes add a dash of madeira or sherry. It is a basic preparation used in stuffings and sauces (notably, Beef Wellington) or as a garnish. It can also be filled into a pocket of raw pastry and baked as a savory tart. The flavor depends on the mushrooms used. For example, wild porcini mushrooms have a much stronger flavor than white or brown mushrooms. Duxelles is said to have been created by the 17th-century French chef François Pierre La Varenne (1615–1678) and to have been named after his employer, Nicolas Chalon du Blé, marquis d'Uxelles, maréchal de France. Some classical cookbooks call for dehydrated mushrooms. According to Auguste Escoffier, dehydration enhances flavor and prevents water vapor from building up pressure that could cause a pastry to crack or even explode. See also Sautéed mushrooms List of mushroom dishes
https://en.wikipedia.org/wiki/Delay-tolerant%20networking
Delay-tolerant networking (DTN) is an approach to computer network architecture that seeks to address the technical issues in heterogeneous networks that may lack continuous network connectivity. Examples of such networks are those operating in mobile or extreme terrestrial environments, or planned networks in space. Recently, the term disruption-tolerant networking has gained currency in the United States due to support from DARPA, which has funded many DTN projects. Disruption may occur because of the limits of wireless radio range, sparsity of mobile nodes, energy resources, attack, and noise. History In the 1970s, spurred by the decreasing size of computers, researchers began developing technology for routing between non-fixed locations of computers. While the field of ad hoc routing was inactive throughout the 1980s, the widespread use of wireless protocols reinvigorated the field in the 1990s as mobile ad hoc networking (MANET) and vehicular ad hoc networking became areas of increasing interest. Concurrently with (but separate from) the MANET activities, DARPA had funded NASA, MITRE and others to develop a proposal for the Interplanetary Internet (IPN). Internet pioneer Vint Cerf and others developed the initial IPN architecture, relating to the necessity of networking technologies that can cope with the significant delays and packet corruption of deep-space communications. In 2002, Kevin Fall started to adapt some of the ideas in the IPN design to terrestrial networks and coined the term delay-tolerant networking and the DTN acronym. A paper published at the 2003 SIGCOMM conference gives the motivation for DTNs. The mid-2000s brought about increased interest in DTNs, including a growing number of academic conferences on delay and disruption-tolerant networking, and growing interest in combining work from sensor networks and MANETs with the work on DTN. This field saw many optimizations on classic ad hoc and delay-tolerant networking algorithms and began to e
https://en.wikipedia.org/wiki/Libipq
libipq is a development library for iptables userspace packet queuing. Libipq provides an API for communicating with ip_queue. Libipq has been deprecated in favour of the newer libnetfilter_queue in Linux kernel-2.6.14 onwards. Use in widely used software applications libipq has been used by some widely deployed applications as their interface to the Linux kernel-space iptables packet filter. Snort - Snort is an Intrusion Detection System which runs in user-space and uses libipq to interface with Linux's iptables packet filter. External links iptables at netfilter.org libipq subversion repository Linux Man Page A quick intro to libipq Libipq network simulator example Linux kernel features Linux security software
https://en.wikipedia.org/wiki/Physics%20education
Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied by demonstrations, hands-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning, for example with hands-on experiments, learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education. Ancient Greece Aristotle wrote what is now considered the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas. Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts. Hong Kong High schools In Hong Kong, physics is a subject for public examination. Local students in Form 6 take the public exam of the Hong Kong Diploma of Secondary Education (HKDSE). Compared to other syllabuses, such as GCSE and GCE, which cover a wider and broader range of topics, the Hong Kong syllabus covers fewer topics in greater depth and with more challenging calculations. Topics are narrowed down to a smaller number compared to the A-level due to the insufficient teachi
https://en.wikipedia.org/wiki/Tidal%20tensor
In Newton's theory of gravitation and in various relativistic classical theories of gravitation, such as general relativity, the tidal tensor represents tidal accelerations of a cloud of (electrically neutral, nonspinning) test particles, or tidal stresses in a small object immersed in an ambient gravitational field. The tidal tensor represents the relative acceleration due to gravity of two test masses separated by an infinitesimal distance. The component $E_{ij}$ represents the relative acceleration in the $\hat{e}_i$ direction produced by a displacement in the $\hat{e}_j$ direction. Tidal tensor for a spherical body The most common example of tides is the tidal force around a spherical body (e.g., a planet or a moon). Here we compute the tidal tensor for the gravitational field outside an isolated spherically symmetric massive object. According to Newton's gravitational law, the acceleration a at a distance r from a central mass m is $a = -m/r^2$ (to simplify the math, in the following derivations we use the convention of setting the gravitational constant G to one. To calculate the differential accelerations, the results are to be multiplied by G.) Let us adopt a frame in polar coordinates for our three-dimensional Euclidean space, and consider infinitesimal displacements in the radial direction $\hat{r}$ and in the angular directions $\hat{\theta}$ and $\hat{\phi}$, which are given the subscripts 1, 2, and 3 respectively. We will directly compute each component of the tidal tensor, expressed in this frame. First, compare the gravitational forces on two nearby objects lying on the same radial line at distances from the central body differing by a distance h: $\frac{m}{(r+h)^2} - \frac{m}{r^2} \approx -\frac{2m}{r^3}\,h + \frac{3m}{r^4}\,h^2 - \cdots$ Because in discussing tensors we are dealing with multilinear algebra, we retain only first-order terms, so $E_{11} = -\frac{2m}{r^3}$. Since there is no acceleration in the $\hat{\theta}$ or $\hat{\phi}$ direction due to a displacement in the radial direction, the other radial terms are zero: $E_{12} = E_{13} = 0$. Similarly, we can compare the gravitational force on two nearby observers lying at the same radius but displaced by an (infinitesimal) distance h
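To make the derivation concrete, the tensor can be checked numerically as the Hessian of the Newtonian potential Φ = −m/r. The following is an illustrative sketch only (NumPy assumed; G = 1 as in the text; the sample point and step size are arbitrary choices): it estimates the second derivatives by central finite differences and recovers diag(−2m/r³, m/r³, m/r³).

```python
import numpy as np

def tidal_tensor(point, m=1.0, eps=1e-4):
    """Estimate E_ij = d^2(Phi)/dx_i dx_j for Phi = -m/r (G = 1)
    by central finite differences."""
    def phi(p):
        return -m / np.linalg.norm(p)
    E = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            pp = np.array(point, float); pp[i] += eps; pp[j] += eps
            pm = np.array(point, float); pm[i] += eps; pm[j] -= eps
            mp = np.array(point, float); mp[i] -= eps; mp[j] += eps
            mm = np.array(point, float); mm[i] -= eps; mm[j] -= eps
            E[i, j] = (phi(pp) - phi(pm) - phi(mp) + phi(mm)) / (4 * eps**2)
    return E

# On the x-axis at r = 2 with m = 1: expect diag(-2/8, 1/8, 1/8)
print(np.round(tidal_tensor([2.0, 0.0, 0.0]), 4))
```

The trace of the result is zero, as it must be for the vacuum field outside the mass.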
https://en.wikipedia.org/wiki/Disclosure%20and%20Barring%20Service
The Disclosure and Barring Service (DBS) is a non-departmental public body of the Home Office of the United Kingdom. The DBS enables organisations in the public, private and voluntary sectors to make safer recruitment decisions by identifying candidates who may be unsuitable for certain work, especially involving children or vulnerable adults, and provides wider access to criminal record information through its disclosure service for England and Wales. Legal context It is a legal requirement in the UK for regulated activity employers to notify the DBS if a person leaves or changes their job in relation to having harmed someone. It is an offence for any person who has been barred by the DBS to work or apply to work in regulated activity (whether paid or voluntary) with the group (children or adults) from which they are barred. It is also an offence for an employer to knowingly employ a barred person in regulated activity with the group from which they are barred. An organisation which is entitled to ask exempted questions (under the Rehabilitation of Offenders Act 1974) must register with the DBS, or a registered DBS Umbrella Body, before they can request a DBS check on an applicant. The applicant applies to the DBS with their application countersigned by the DBS Registered Organisation or Umbrella Body. The applicant's criminal record is then accessed from the Police National Computer (PNC), as well as checked, if appropriate, against lists of people considered unsuitable to work with children and vulnerable people maintained by the DBS (formerly maintained by the Independent Safeguarding Authority). A copy of the completed certificate is sent to the applicant's home address. If an individual or organisation has safeguarding concerns regarding a member of staff, they can make a safeguarding referral to the DBS, who will work with multiple agencies to assess whether that individual should be barred from working in regulated activity with children and/or vulnerable gr
https://en.wikipedia.org/wiki/MCM6
DNA replication licensing factor MCM6 is a protein that in humans is encoded by the MCM6 gene. MCM6 is one of the highly conserved mini-chromosome maintenance proteins (MCM) that are essential for the initiation of eukaryotic genome replication. Function The MCM complex consisting of MCM6 (this protein) and MCM2, 4 and 7 possesses DNA helicase activity, and may act as a DNA unwinding enzyme. The hexameric protein complex formed by the MCM proteins is a key component of the pre-replication complex (pre-RC) and may be involved in the formation of replication forks and in the recruitment of other DNA replication related proteins. The phosphorylation of the complex by CDC2 kinase reduces the helicase activity, suggesting a role in the regulation of DNA replication. MCM6 has recently been shown to interact strongly with Cdt1 at defined residues; by mutating these target residues, Wei et al. observed a lack of Cdt1 recruitment of Mcm2-7 to the pre-RC. Gene The MCM6 gene, MCM6, is expressed at a very high level. MCM6 contains 18 introns. There are 2 non-overlapping alternative last exons. The transcripts appear to differ by truncation of the 3' end, presence or absence of 2 cassette exons, and common exons with different boundaries. MCM6 produces, by alternative splicing, 3 different transcripts, all with introns, putatively encoding 3 different protein isoforms. MCM6 contains two of the regulatory regions for LCT, the gene encoding the protein lactase, located in two of the MCM6 introns, approximately 14 kb and 22 kb upstream of LCT. A substitution of thymine for cytosine in the first region (at -13910), in particular, has been shown to function in vitro as an enhancer element capable of differentially activating transcription of the LCT promoter. Mutations in these regions are associated with lactose tolerance into adult life: "Two variants were associated with lactase persistence..." Interactions MCM6 has been shown to interact with: CDC45-related protein, MCM2, MCM4
https://en.wikipedia.org/wiki/Open%20Media%20Network
The Open Media Network (OMN) was a P2PTV service and application which provided distribution of educational and public service programs. The network was founded in 2005 by Netscape pioneers Mike Homer and Marc Andreessen. After operating for an extended beta period, development ended with the serious illness and subsequent death in 2009 of founder Homer. The OMN network operated as a large, centrally controlled grid network for the distribution of free radio and TV content over P2P, described as "part TiVo, part BitTorrent file swapping". The Open Media Network client application was available for Apple Mac OS X (but not Intel-based Macs as of October 2007) and Microsoft Windows (XP and 2000, but not Vista as of October 2007). The OMN infrastructure was powered by Kontiki grid network technology, a commercial alternative to BitTorrent. The U.S. Public Broadcasting Service (PBS) launched a "download to own" initiative with OMN and Google which allowed viewers to purchase episodes of popular PBS programs via the Internet for viewing anytime, anywhere. The fees for downloading videos ranged from about $2 to about $8 (U.S.). Video files were made available in whatever format the producer chose, including WMV, QuickTime and Google's GVI format. See also PPLive Cybersky-TV Octoshape Miro
https://en.wikipedia.org/wiki/P2PTV
P2PTV refers to peer-to-peer (P2P) software applications designed to redistribute video streams in real time on a P2P network; the distributed video streams are typically TV channels from all over the world but may also come from other sources. These applications are attractive because they have the potential to make any TV channel globally available: any individual can feed a stream into the network, and each peer that joins to watch the video also acts as a relay for other peer viewers, allowing scalable distribution among a large audience with no incremental cost for the source. Technology and use In a P2PTV system, each user, while downloading a video stream, is simultaneously also uploading that stream to other users, thus contributing to the overall available bandwidth. The arriving streams are typically a few minutes time-delayed compared to the original sources. The video quality of the channels usually depends on how many users are watching; the video quality is better if there are more users. The architecture of many P2PTV networks can be thought of as real-time versions of BitTorrent: if a user wishes to view a certain channel, the P2PTV software contacts a "tracker server" for that channel in order to obtain addresses of peers who distribute that channel; it then contacts these peers to receive the feed. The tracker records the user's address, so that it can be given to other users who wish to view the same channel. In effect, this creates an overlay network on top of the regular internet for the distribution of real-time video content. The need for a tracker can also be eliminated by the use of distributed hash table technology. Some applications allow users to broadcast their own streams, whether self-produced, obtained from a video file, or through a TV tuner card or video capture card. Many of the commercial P2PTV applications were developed in China (TVUPlayer, PPLive, QQLive, PPStream). The majority of available applications broadcast mainly
https://en.wikipedia.org/wiki/Guess%20value
In mathematical modeling, a guess value is more commonly called a starting value or initial value. These are necessary for most optimization problems which use search algorithms, because those algorithms are mainly deterministic and iterative, and they need to start somewhere. One common type of application is nonlinear regression. Use The quality of the initial values can have a considerable impact on the success or failure of the search algorithm. This is because the fitness function or objective function (in many cases a sum of squared errors (SSE)) can have difficult shapes. In some parts of the search region, the function may increase exponentially, in others quadratically, and there may be regions where the function asymptotes to a plateau. Starting values that fall in an exponential region can lead to algorithm failure because of arithmetic overflow. Starting values that fall in the asymptotic plateau region can lead to algorithm failure because of "dithering". Deterministic search algorithms may use a slope function to go to a minimum. If the slope is very small, then underflow errors can cause the algorithm to wander, seemingly aimlessly; this is dithering. Finding value Guess values can be determined a number of ways. Guessing is one of them. If one is familiar with the type of problem, then this is an educated guess or guesstimate. Other techniques include linearization, solving simultaneous equations, reducing dimensions, treating the problem as a time series, converting the problem to a (hopefully) linear differential equation, and using mean values. Further methods for determining starting values and optimal values in their own right come from stochastic methods, the most commonly known of these being evolutionary algorithms and particularly genetic algorithms. Mathematical optimization Regression analysis Computational statistics
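As an illustration of how starting values steer a nonlinear fit, the sketch below (assumptions: SciPy's curve_fit, a synthetic exponential-decay data set, and arbitrarily chosen parameter values) converges from a reasonable starting point but typically stalls when started on the flat plateau of the objective surface:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 40)
y = model(x, 2.5, 1.3) + 0.05 * rng.standard_normal(x.size)

# A reasonable starting value converges to roughly the true parameters.
good, _ = curve_fit(model, x, y, p0=[1.0, 1.0])
print(good)  # approx. [2.5, 1.3]

# Started deep on the plateau (b huge, so model ~ 0 for all x > 0), the
# slope with respect to b is nearly zero and the search typically stalls
# far from the optimum instead of finding the true parameters.
bad, _ = curve_fit(model, x, y, p0=[1.0, 500.0])
print(bad)
```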
https://en.wikipedia.org/wiki/Bitonic%20sorter
Bitonic mergesort is a parallel algorithm for sorting. It is also used as a construction method for building a sorting network. The algorithm was devised by Ken Batcher. The resulting sorting networks consist of $O(n \log^2 n)$ comparators and have a delay of $O(\log^2 n)$, where $n$ is the number of items to be sorted. This makes it a popular choice for sorting large numbers of elements on an architecture which itself contains a large number of parallel execution units running in lockstep, such as a typical GPU. A sorted sequence is a monotonically non-decreasing (or non-increasing) sequence. A bitonic sequence is a sequence with $x_1 \le \cdots \le x_k \ge \cdots \ge x_n$ for some $k$, $1 \le k \le n$, or a circular shift of such a sequence. Complexity Let $p = \log_2 n$ for $n$ a power of 2. It is evident from the construction algorithm that the number of rounds of parallel comparisons is given by $p(p+1)/2$. It follows that the number of comparators is bounded by $n\,p(p+1)/4$ (which establishes an exact value when $n$ is a power of 2). Although the absolute number of comparisons is typically higher than Batcher's odd-even sort, many of the consecutive operations in a bitonic sort retain a locality of reference, making implementations more cache-friendly and typically more efficient in practice. How the algorithm works The following is a bitonic sorting network with 16 inputs: The 16 numbers enter as the inputs at the left end, slide along each of the 16 horizontal wires, and exit at the outputs at the right end. The network is designed to sort the elements, with the largest number at the bottom. The arrows are comparators. Whenever two numbers reach the two ends of an arrow, they are compared to ensure that the arrow points toward the larger number. If they are out of order, they are swapped. The colored boxes are just for illustration and have no effect on the algorithm. Every red box has the same structure: each input in the top half is compared to the corresponding input in the bottom half, with all arrows pointing down (dark red) or all up (light red). If the inputs happen to form a biton
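For reference, here is a minimal sequential Python sketch of the algorithm (for illustration only; it assumes the input length is a power of two, as the classic construction does):

```python
def bitonic_sort(seq, ascending=True):
    """Sort a list whose length is a power of two."""
    if len(seq) <= 1:
        return list(seq)
    mid = len(seq) // 2
    # Build a bitonic sequence: first half ascending, second half descending.
    left = bitonic_sort(seq[:mid], True)
    right = bitonic_sort(seq[mid:], False)
    return bitonic_merge(left + right, ascending)

def bitonic_merge(seq, ascending):
    """Merge a bitonic sequence into a sorted one."""
    if len(seq) == 1:
        return list(seq)
    seq = list(seq)
    mid = len(seq) // 2
    for i in range(mid):  # one round of comparators, all independent
        if (seq[i] > seq[i + mid]) == ascending:
            seq[i], seq[i + mid] = seq[i + mid], seq[i]
    return bitonic_merge(seq[:mid], ascending) + bitonic_merge(seq[mid:], ascending)

print(bitonic_sort([10, 30, 11, 20, 4, 330, 21, 110]))
```

The inner loop of bitonic_merge corresponds to one round of independent comparators; those comparisons can all run at once, which is what makes the network attractive for lockstep parallel hardware.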
https://en.wikipedia.org/wiki/UNOS%20%28operating%20system%29
UNOS is the first, now discontinued, 32-bit Unix-like real-time operating system (RTOS) with real-time extensions. It was developed by Jeffery Goldberg, M.S., who left Bell Labs after using Unix and became VP of engineering for Charles River Data Systems (CRDS), now defunct. UNOS was written to capitalize on the first 32-bit microprocessor, the Motorola 68k central processing unit (CPU). CRDS sold a UNOS-based 68K system, and sold porting services and licenses to other manufacturers who had embedded CPUs. History Jeff Goldberg created an experimental OS using only eventcounts for synchronization, which allowed a preemptive kernel, for a Charles River Data Systems (CRDS) PDP-11. CRDS hired Goldberg to create UNOS and began selling it in 1981. UNOS was written for the Motorola 68000 series processors. While compatible with Version 7 Unix, it is also an RTOS. CRDS supported it on the company's Universe 68 computers, as did Motorola's Versabus systems. CRDS's primary market was OEMs embedding the CRDS unit within a larger pile of hardware, often requiring better real-time response than Unix could deliver. UNOS had a cleaner kernel interface than the UNIX of 1981; there was, e.g., a system call to obtain ps information instead of reading /dev/kmem. UNOS required memory protection, with the 68000 using an MMU developed by CRDS, and only used Motorola MMUs after UNOS 7 on the 68020 (CRDS System CP20) (using the MC68851 PMMU). UNOS was written in the programming languages C and assembly language, and supported Fortran, COBOL, Pascal, and Business Basic. Limits UNOS from CRDS never supported paged virtual memory, and multiprocessor support had not been built in from the start, so the kernel remained mostly single-threaded on the few multiprocessor systems built. A UNOS variant enhanced by H. Berthold AG under the name vBertOS added demand-paged loading and paged processes in 1984, but was given up in favor of SunOS because of the missing GUI and the missing networking code in
https://en.wikipedia.org/wiki/A%20Requiem%20for%20Homo%20Sapiens
A Requiem for Homo Sapiens is a trilogy of science fiction novels by American writer David Zindell, made up of The Broken God (1992), The Wild (1995), and War in Heaven (1998). The trilogy is a sequel to the standalone novel Neverness (1988). The series has been described as containing "some of the most striking writing, vivid spectacles, memorable characters, and insightful presentations of philosophy and religion seen in SF for many a year." David Langford commented on similarities between the trilogy's hero Danlo and Paul Atreides, protagonist of Frank Herbert's Dune. Books The Broken God Set 10 years after the events of Neverness, and narrated by its protagonist, Mallory Ringess, this book tells the story of the early life of his son, Danlo. After Danlo's tribe, the Devaki, is destroyed by a plague, he undertakes a perilous journey to Neverness City, where he is taken in and instructed by an alien Fravashi named "Old Father", joins the Academy, and becomes a pilot like his father. A new religion forms around the various tales told about Mallory Ringess, and Danlo comes into conflict with his former friend, Hanuman li Tosh, who assumes control of the "Way of Ringess" for his own purposes. The Wild Danlo's story continues as he explores the galaxy on a dual quest: first, to locate the home of the Architects of the Universal Cybernetic Church and persuade them to stop the Program of Increase that has resulted in the continual explosions of stars in the Vild (or Wild); and second, to find the cure for the engineered plague that killed the Devaki and will kill the rest of the primitive Alaloi back on Neverness. Like his father, Danlo penetrates the Solid State Entity and interacts with her. Based on her information, he seeks out a remnant of the great cybernetic god Ede. With Ede's help, Danlo at last reaches the distant planet of Tannahill, home of the Architects. His coming sparks a bloody war between various factions. The defeated faction escapes with a s
https://en.wikipedia.org/wiki/Symmetry%20in%20mathematics
Symmetry occurs not only in geometry, but also in other branches of mathematics. Symmetry is a type of invariance: the property that a mathematical object remains unchanged under a set of operations or transformations. Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This can occur in many ways; for example, if X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups. If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (i.e., an isometry). In general, every kind of structure in mathematics will have its own kind of symmetry, many of which are illustrated in the points given above. Symmetry in geometry The types of symmetry considered in basic geometry include reflectional symmetry, rotation symmetry, translational symmetry and glide reflection symmetry, which are described more fully in the main article Symmetry (geometry). Symmetry in calculus Even and odd functions Even functions Let f(x) be a real-valued function of a real variable, then f is even if the following equation holds for all x and -x in the domain of f: $f(x) = f(-x)$. Geometrically speaking, the graph of an even function is symmetric with respect to the y-axis, meaning that its graph remains unchanged after reflection about the y-axis. Examples of even functions include $|x|$, $x^2$, $x^4$, $\cos(x)$, and $\cosh(x)$. Odd functions Again, let f be a real-valued function of a real variable, then f is odd if the following equation holds for all x and -x in the domain of f: $-f(x) = f(-x)$. That is, $f(-x) = -f(x)$. Geometrically, the graph of an odd function has rotational symmetry with respect to the origin, meaning that its graph remains unchanged after rotation of 180 degrees about the origin. Examples of odd functions are $x$, $x^3$, $\sin(x)$, $\sinh(x)$, and
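A quick numerical check of these symmetry definitions (an illustrative sketch; the sample points and tolerance are arbitrary choices):

```python
import math

def is_even(f, xs, tol=1e-9):
    """f(-x) = f(x) on the sampled points."""
    return all(abs(f(-x) - f(x)) <= tol for x in xs)

def is_odd(f, xs, tol=1e-9):
    """f(-x) = -f(x) on the sampled points."""
    return all(abs(f(-x) + f(x)) <= tol for x in xs)

sample = [0.1 * k for k in range(1, 50)]
print(is_even(math.cos, sample), is_odd(math.sin, sample))  # True True
print(is_even(lambda t: t ** 3, sample))                    # False: x^3 is odd
```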
https://en.wikipedia.org/wiki/3D%20scanning
3D scanning is the process of analyzing a real-world object or environment to collect three-dimensional data of its shape and possibly its appearance (e.g. color). The collected data can then be used to construct digital 3D models. A 3D scanner can be based on many different technologies, each with its own limitations, advantages and costs. Many limitations in the kind of objects that can be digitised are still present. For example, optical technology may encounter many difficulties with dark, shiny, reflective or transparent objects. For example, industrial computed tomography scanning, structured-light 3D scanners, LiDAR and time-of-flight 3D scanners can be used to construct digital 3D models, without destructive testing. Collected 3D data is useful for a wide variety of applications. These devices are used extensively by the entertainment industry in the production of movies and video games, including virtual reality. Other common applications of this technology include augmented reality, motion capture, gesture recognition, robotic mapping, industrial design, orthotics and prosthetics, reverse engineering and prototyping, quality control/inspection and the digitization of cultural artifacts. Functionality The purpose of a 3D scanner is usually to create a 3D model. This 3D model consists of a polygon mesh or point cloud of geometric samples on the surface of the subject. These points can then be used to extrapolate the shape of the subject (a process called reconstruction). If colour information is collected at each point, then the colours or textures on the surface of the subject can also be determined. 3D scanners share several traits with cameras. Like most cameras, they have a cone-like field of view, and like cameras, they can only collect information about surfaces that are not obscured. While a camera collects colour information about surfaces within its field of view, a 3D scanner collects distance information about surfaces within its field of vie
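As a small illustration of the distance information a scanner returns, a single range sample taken at known azimuth and elevation angles can be converted to one Cartesian point of the cloud (a sketch under assumed angle conventions; the sample values are hypothetical):

```python
import math

def range_sample_to_point(r, azimuth, elevation):
    """Convert one (range, azimuth, elevation) sample from a scanner
    at the origin into Cartesian x, y, z (angles in radians)."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return (x, y, z)

# Hypothetical samples sweeping a small patch of a surface about 2 m away
samples = [(2.00, 0.00, 0.00), (2.10, 0.05, 0.00), (2.05, 0.00, 0.05)]
point_cloud = [range_sample_to_point(*s) for s in samples]
print(point_cloud)
```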
https://en.wikipedia.org/wiki/Local%20Inter-Process%20Communication
The Local Inter-Process Communication (LPC, often also referred to as Local Procedure Call or Lightweight Procedure Call) is an internal, undocumented inter-process communication facility provided by the Microsoft Windows NT kernel for lightweight IPC between processes on the same computer. As of Windows Vista, LPC has been rewritten as Asynchronous Local Inter-Process Communication (ALPC, often also Advanced Local Procedure Call) in order to provide a high-speed scalable communication mechanism required to efficiently implement User-Mode Driver Framework (UMDF), whose user-mode parts require an efficient communication channel with UMDF's components in the executive. The (A)LPC interface is part of Windows NT's undocumented Native API, and as such is not available to applications for direct use. However, it can be used indirectly in the following instances: when using the Microsoft RPC API to communicate locally, i.e. between the processes on the same machine by calling Windows APIs that are implemented with (A)LPC (see below) Implementation (A)LPC is implemented using kernel "port" objects, which are securable (with ACLs, allowing e.g. only specific SIDs to use them) and allow identification of the process on the other side of the connection. Individual messages are also securable: applications can set per-message SIDs, and also test for changes of the security context in the token associated with the (A)LPC message. The typical communication scenario between the server and the client is as follows: A server process first creates a named server connection port object, and waits for clients to connect. A client requests a connection to that named port by sending a connect message. If the server accepts the connection, two unnamed ports are created: client communication port - used by client threads to communicate with a particular server server communication port - used by the server to communicate with a particular client; one such port per client is cre
https://en.wikipedia.org/wiki/Butyraldehyde
Butyraldehyde, also known as butanal, is an organic compound with the formula CH3(CH2)2CHO. This compound is the aldehyde derivative of butane. It is a colorless flammable liquid with an unpleasant smell. It is miscible with most organic solvents. Production Butyraldehyde is produced almost exclusively by the hydroformylation of propylene: CH3CH=CH2 + H2 + CO → CH3CH2CH2CHO Traditionally, hydroformylation was catalyzed by cobalt carbonyl and later rhodium complexes of triphenylphosphine. The dominant technology involves the use of rhodium catalysts derived from the water-soluble ligand tppts. An aqueous solution of the rhodium catalyst converts the propylene to the aldehyde, which forms a lighter immiscible phase. About 6 billion kilograms are produced annually by hydroformylation. Butyraldehyde can be produced by the catalytic dehydrogenation of n-butanol. At one time, it was produced industrially by the catalytic hydrogenation of crotonaldehyde, which is derived from acetaldehyde. Reactions Butyraldehyde undergoes reactions typical of alkyl aldehydes, and these define many of the uses of this compound. Important reactions include hydrogenation to the alcohol, oxidation to the acid, and base-catalyzed condensation. Uses Aldol condensation in the presence of a base forms 2-ethyl-2-hexenal, which is then hydrogenated to form 2-ethylhexanol, a precursor to the plasticizer bis(2-ethylhexyl) phthalate. Butyraldehyde is a precursor in the two-step synthesis of trimethylolpropane, which is used for the production of alkyd resins.
https://en.wikipedia.org/wiki/Applications%20of%20randomness
Randomness has many uses in science, art, statistics, cryptography, gaming, gambling, and other fields. For example, random assignment in randomized controlled trials helps scientists to test hypotheses, and random numbers or pseudorandom numbers help video games such as video poker. These uses have different levels of requirements, which leads to the use of different methods. Mathematically, there are distinctions between randomization, pseudorandomization, and quasirandomization, as well as between random number generators and pseudorandom number generators. For example, applications in cryptography usually have strict requirements, whereas other uses (such as generating a "quote of the day") can use a looser standard of pseudorandomness. Early uses Games Unpredictable (by the humans involved) numbers (usually taken to be random numbers) were first investigated in the context of gambling, sometimes developing pathological forms like apophenia. Many randomizing devices, such as dice, shuffled playing cards, and roulette wheels, seem to have been developed for use in games of chance. Electronic gambling equipment cannot use these, so theoretical problems are less easy to avoid; methods of creating the needed numbers are sometimes regulated by governmental gaming commissions. Modern electronic casino games often contain one or more random number generators which decide the outcome of a trial in the game. Even in modern slot machines, where mechanical reels seem to spin on the screen, the reels are actually spinning for entertainment value only. They eventually stop exactly where the machine's software decided they would stop when the handle was first pulled. It has been alleged that some gaming machines' software is deliberately biased to prevent true randomness, in the interests of maximizing their owners' revenue; the history of biased machines in the gambling industry is the reason government inspectors attempt to supervise the machines—electronic equipment has extende
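The distinction between a pseudorandom generator and a cryptographically strong one can be shown in a few lines (a Python sketch; the seed and the die-roll framing are arbitrary):

```python
import random   # deterministic pseudorandom generator (Mersenne Twister)
import secrets  # cryptographically strong randomness from the OS

# Seeded pseudorandomness is reproducible: fine for games and simulations.
rng = random.Random(42)
print([rng.randint(1, 6) for _ in range(5)])  # the same five rolls every run

# OS-backed randomness is unpredictable: needed for security-critical uses.
print(secrets.randbelow(6) + 1)  # one die roll, different each run
```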
https://en.wikipedia.org/wiki/Neuro-fuzzy
In the field of artificial intelligence, the designation neuro-fuzzy refers to combinations of artificial neural networks and fuzzy logic. Overview Neuro-fuzzy hybridization results in a hybrid intelligent system that combines the human-like reasoning style of fuzzy systems with the learning and connectionist structure of neural networks. Neuro-fuzzy hybridization is widely termed a fuzzy neural network (FNN) or neuro-fuzzy system (NFS) in the literature. A neuro-fuzzy system (the more popular term is used henceforth) incorporates the human-like reasoning style of fuzzy systems through the use of fuzzy sets and a linguistic model consisting of a set of IF-THEN fuzzy rules. The main strength of neuro-fuzzy systems is that they are universal approximators with the ability to solicit interpretable IF-THEN rules. The strength of neuro-fuzzy systems involves two contradictory requirements in fuzzy modeling: interpretability versus accuracy. In practice, one of the two properties prevails. The neuro-fuzzy research field in fuzzy modeling is divided into two areas: linguistic fuzzy modeling that is focused on interpretability, mainly the Mamdani model; and precise fuzzy modeling that is focused on accuracy, mainly the Takagi-Sugeno-Kang (TSK) model. Although generally assumed to be the realization of a fuzzy system through connectionist networks, this term is also used to describe some other configurations including: Deriving fuzzy rules from trained RBF networks. Fuzzy logic based tuning of neural network training parameters. Fuzzy logic criteria for increasing a network size. Realising fuzzy membership functions through clustering algorithms in unsupervised learning in SOMs and neural networks. Representing fuzzification, fuzzy inference and defuzzification through multi-layer feed-forward connectionist networks. It must be pointed out that interpretability of the Mamdani-type neuro-fuzzy systems can be lost. To improve the interpretability of neuro-fuzzy systems, c
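To make the rule-based side concrete, here is a tiny zero-order Takagi–Sugeno-style evaluation of two IF-THEN rules (an illustrative sketch only: the membership functions, rule outputs, and the heater framing are invented; in a neuro-fuzzy system these parameters would be tuned by learning rather than fixed by hand):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_power(temp):
    """Rule 1: IF temp is cold THEN power = 80.
       Rule 2: IF temp is warm THEN power = 20."""
    cold = tri(temp, -10.0, 0.0, 15.0)
    warm = tri(temp, 5.0, 20.0, 35.0)
    firing = [(cold, 80.0), (warm, 20.0)]
    num = sum(degree * out for degree, out in firing)
    den = sum(degree for degree, _ in firing)
    return num / den if den else 0.0

print(round(heater_power(8.0), 1))  # blends both rules (about 62)
```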
https://en.wikipedia.org/wiki/Eric%20Hehner
Eric "Rick" C. R. Hehner (born 16 September 1947) is a Canadian computer scientist. He was born in Ottawa. He studied mathematics and physics at Carleton University, graduating with a Bachelor of Science (B.Sc.) in 1969. He studied computer science at the University of Toronto, graduating with a Master of Science (M.Sc.) in 1970, and a Doctor of Philosophy (Ph.D.) in 1974. He then joined the faculty there, becoming a full professor in 1983. He became the Bell University Chair in software engineering in 2001, and retired in 2012. Hehner's main research area is formal methods of software design. His method, initially called predicative programming, later called Practical Theory of Programming, is to consider each specification to be a binary (boolean) expression, and each programming construct to be a binary expression specifying the effect of executing the programming construct. Refinement is just implication. This is the simplest formal method, and the most general, applying to sequential, parallel, stand-alone, communicating, terminating, nonterminating, natural-time, real-time, deterministic, and probabilistic programs, and includes time and space bounds. This idea has influenced other computer science researchers, including Tony Hoare. Hehner's other research areas include probabilistic programming, unified algebra, and high-level circuit design. In 1979, Hehner invented a generalization of radix complement called quote notation, which is a representation of the rational numbers that allows easier arithmetic and precludes roundoff error. He was involved with developing international standards in programming and informatics, as a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi, which specified, maintains, and supports the programming languages ALGOL 60 and ALGOL 68. and of IFIP Working Group 2.3 on Programming Methodology.
https://en.wikipedia.org/wiki/Prime%20signature
In mathematics, the prime signature of a number is the multiset of (nonzero) exponents of its prime factorization. The prime signature of a number having prime factorization $n = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$ is the multiset $\{a_1, a_2, \dots, a_k\}$. For example, all prime numbers have a prime signature of {1}, the squares of primes have a prime signature of {2}, the products of 2 distinct primes have a prime signature of {1,1} and the products of a square of a prime and a different prime (e.g. 12, 18, 20, ...) have a prime signature of {2,1}. Properties The divisor function τ(n), the Möbius function μ(n), the number of distinct prime divisors ω(n) of n, the number of prime divisors Ω(n) of n, the indicator function of the squarefree integers, and many other important functions in number theory, are functions of the prime signature of n. In particular, τ(n) equals the product of the exponents from the prime signature of n, each incremented by 1. For example, 20 has prime signature {2,1} and so the number of divisors is (2+1) × (1+1) = 6. Indeed, there are six divisors: 1, 2, 4, 5, 10 and 20. The smallest number of each prime signature is a product of primorials. The first few are: 1, 2, 4, 6, 8, 12, 16, 24, 30, 32, 36, 48, 60, 64, 72, 96, 120, 128, 144, 180, 192, 210, 216, ... . A number cannot divide another unless its prime signature is included in the other number's prime signature, in the sense of Young's lattice. Numbers with same prime signature Sequences defined by their prime signature Given a number with prime signature S, it is A prime number if S = {1}, A square if gcd S is even, A cube if gcd S is divisible by 3, A square-free integer if max S = 1, A cube-free integer if max S ≤ 2, A powerful number if min S ≥ 2, A perfect power if gcd S > 1, An Achilles number if min S ≥ 2 and gcd S = 1, k-almost prime if sum S = k. See also Canonical representation of a positive integer
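A direct Python sketch of the definition and of the τ(n) property (trial-division factorization, adequate for small n):

```python
from collections import Counter

def prime_signature(n):
    """Multiset of exponents in the prime factorization of n."""
    exponents = []
    d = 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            exponents.append(e)
        d += 1
    if n > 1:
        exponents.append(1)  # leftover prime factor
    return Counter(exponents)

def tau(n):
    """Number of divisors: product of (exponent + 1) over the signature."""
    result = 1
    for exponent, count in prime_signature(n).items():
        result *= (exponent + 1) ** count
    return result

print(sorted(prime_signature(20).elements()))  # [1, 2]  (20 = 2^2 * 5)
print(tau(20))                                 # 6
```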
https://en.wikipedia.org/wiki/Robin%20Wilson%20%28mathematician%29
Robin James Wilson (born 5 December 1943) is an emeritus professor in the Department of Mathematics at the Open University, having previously been Head of the Pure Mathematics Department and Dean of the Faculty. He was a stipendiary lecturer at Pembroke College, Oxford and Gresham Professor of Geometry at Gresham College, London, where he has also been a visiting professor. On occasion, he teaches at Colorado College in the United States. He is also a long-standing fellow of Keble College, Oxford. Professor Wilson is a son of former British Prime Minister Harold Wilson and his wife, Mary. Early life and education Wilson was born in 1943 to the politician Harold Wilson, who later became Prime Minister, and his wife the poet Mary Wilson (née Baldwin). He has a younger brother, Giles, who in his 50s gave up a career as a teacher to be a train driver. Wilson attended University College School in Hampstead, North London. He achieved a BA First Class Honours in Mathematics from Balliol College, Oxford, an MA from the University of Pennsylvania, a PhD from the University of Pennsylvania (1965–1968) and a BA First Class Honours in Humanities with Music from the Open University. In a Guardian interview in 2008, Wilson spoke of the fact that he grew up known to everyone primarily as a son of the Labour Party leader and Prime Minister Harold Wilson: "I hated the attention and I still dislike being introduced as Harold Wilson's son. I feel uncomfortable talking about it to strangers even now." Mathematics career Wilson's academic interests lie in graph theory, particularly in colouring problems, e.g. the four colour problem, and algebraic properties of graphs. He also researches the history of mathematics, particularly British mathematics and mathematics in the 17th century and the period 1860 to 1940, and the history of graph theory and combinatorics. In 1974, he won the Lester R. Ford Award from the Mathematical Association of America for his expository article An introduc
https://en.wikipedia.org/wiki/Symmetry%20%28physics%29
In physics, a symmetry of a physical system is a physical or mathematical feature of the system (observed or intrinsic) that is preserved or remains unchanged under some transformation. A family of particular transformations may be continuous (such as rotation of a circle) or discrete (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see Symmetry group). These two concepts, Lie and finite groups, are the foundation for the fundamental theories of modern physics. Symmetries are frequently amenable to mathematical formulations such as group representations and can, in addition, be exploited to simplify many problems. Arguably the most important example of a symmetry in physics is that the speed of light has the same value in all frames of reference, which is described in special relativity by a group of transformations of the spacetime known as the Poincaré group. Another important example is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations, which is an important idea in general relativity. As a kind of invariance Invariance is specified mathematically by transformations that leave some property (e.g. quantity) unchanged. This idea can apply to basic real-world observations. For example, temperature may be homogeneous throughout a room. Since the temperature does not depend on the position of an observer within the room, we say that the temperature is invariant under a shift in an observer's position within the room. Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve how the sphere "looks". Invariance in force The above
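A minimal numerical illustration of invariance (a sketch; the vector and angles are arbitrary): the length of a vector is unchanged by rotations, just as the sphere above "looks the same" after being turned.

```python
import math

def rotate_2d(v, angle):
    """Rotate a 2-D vector about the origin."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

v = (3.0, 4.0)
for angle in (0.3, 1.2, math.pi / 2):
    # The Euclidean length is invariant under every rotation: always 5.0
    print(round(math.hypot(*rotate_2d(v, angle)), 12))
```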
https://en.wikipedia.org/wiki/John%20C.%20Reynolds
John Charles Reynolds (June 1, 1935 – April 28, 2013) was an American computer scientist. Education and affiliations John Reynolds studied at Purdue University and then earned a Doctor of Philosophy (Ph.D.) in theoretical physics from Harvard University in 1961. He was a professor of information science at Syracuse University from 1970 to 1986. From then until his death, he was a professor of computer science at Carnegie Mellon University. He also held visiting positions at Aarhus University (Denmark), The University of Edinburgh, Imperial College London, Microsoft Research (Cambridge, UK) and Queen Mary University of London. Academic work Reynolds's main research interest was in the area of programming language design and associated specification languages, especially concerning formal semantics. He invented the polymorphic lambda calculus (System F) and formulated the property of semantic parametricity; the same calculus was independently discovered by Jean-Yves Girard. He wrote a seminal paper on definitional interpreters, which clarified early work on continuations and introduced the technique of defunctionalization. He applied category theory to programming language semantics. He defined the programming languages Gedanken and Forsythe, known for their use of intersection types. He worked on a separation logic to describe and reason about shared mutable data structures. Reynolds created an elegant, idealized formulation of the programming language ALGOL, which exhibits ALGOL's syntactic and semantic purity, and is used in programming language research. It also made a convincing methodological argument regarding the suitability of local effects in the context of call-by-name languages, in contrast with the global effects used by call-by-value languages such as ML. The conceptual integrity of the language made it one of the main objects of semantic research, along with Programming Computable Functions (PCF) and ML. He was an editor of journals such as the Commun
https://en.wikipedia.org/wiki/NA60%20experiment
The NA60 experiment was a high energy heavy ions experiment at the CERN Super Proton Synchrotron. It studied "prompt dimuon and charm production with proton and heavy ion beams". The spokesperson for the experiment is Gianluca Usai. The experiment was proposed on 7 March 2000 and accepted on 15 June 2000. The experiment ran from October 2001 to 15 November 2004. External links NA60 website CERN-NA-60 experiment record on INSPIRE-HEP Grey Book entry CERN experiments Particle experiments
https://en.wikipedia.org/wiki/Kahlenberg%20Transmitter
The Kahlenberg Transmitter is a facility for FM and TV broadcasting on the Kahlenberg near Vienna. It was established in 1953 and until 1956 used an antenna on the Stefaniewarte observation tower. From 1956 to 1974 a 129-metre-high guyed mast built of lattice steel was used. Since 1974 a 165-metre-high guyed steel tube mast has been used, which is equipped with rooms for technical equipment. See also List of masts
https://en.wikipedia.org/wiki/4-Aminosalicylic%20acid
4-Aminosalicylic acid, also known as para-aminosalicylic acid (PAS) and sold under the brand name Paser among others, is an antibiotic primarily used to treat tuberculosis. Specifically it is used to treat active drug-resistant tuberculosis together with other antituberculosis medications. It has also been used as a second-line agent to sulfasalazine in people with inflammatory bowel disease such as ulcerative colitis and Crohn's disease. It is typically taken by mouth. Common side effects include nausea, abdominal pain, and diarrhea. Other side effects may include liver inflammation and allergic reactions. It is not recommended in people with end-stage kidney disease. While there does not appear to be harm with use during pregnancy, it has not been well studied in this population. 4-Aminosalicylic acid is believed to work by blocking the ability of bacteria to make folic acid. 4-Aminosalicylic acid was first made in 1902, and came into medical use in 1943. It is on the World Health Organization's List of Essential Medicines. Medical uses The main use for 4-aminosalicylic acid is for the treatment of tuberculosis infections. In the United States, 4-aminosalicylic acid is indicated for the treatment of tuberculosis in combination with other active agents. In the European Union, it is used in combination with other medicines to treat adults and children from 28 days of age who have multi-drug resistant tuberculosis when combinations without this medicine cannot be used, either because the disease is resistant to them or because of their side effects. Tuberculosis Aminosalicylic acid was introduced to clinical use in 1944. It was the second antibiotic found to be effective in the treatment of tuberculosis, after streptomycin. PAS formed part of the standard treatment for tuberculosis prior to the introduction of rifampicin and pyrazinamide. Its potency is less than that of the current five first-line drugs (isoniazid, rifampicin, ethambutol, pyrazinamide, and str
https://en.wikipedia.org/wiki/Primefree%20sequence
In mathematics, a primefree sequence is a sequence of integers that does not contain any prime numbers. More specifically, it usually means a sequence defined by the same recurrence relation as the Fibonacci numbers, but with different initial conditions causing all members of the sequence to be composite numbers that do not all have a common divisor. To put it algebraically, a sequence of this type is defined by an appropriate choice of two composite numbers $a_1$ and $a_2$, such that the greatest common divisor $\gcd(a_1, a_2)$ is equal to 1, and such that for $n > 2$ there are no primes in the sequence of numbers calculated from the formula $a_n = a_{n-1} + a_{n-2}$. The first primefree sequence of this type was published by Ronald Graham in 1964. Wilf's sequence A primefree sequence found by Herbert Wilf has initial terms $a_1 = 20615674205555510$, $a_2 = 3794765361567513$. The proof that every term of this sequence is composite relies on the periodicity of Fibonacci-like number sequences modulo the members of a finite set of primes. For each prime $p$, the positions in the sequence where the numbers are divisible by $p$ repeat in a periodic pattern, and different primes in the set have overlapping patterns that result in a covering set for the whole sequence. Nontriviality The requirement that the initial terms of a primefree sequence be coprime is necessary for the question to be non-trivial. If the initial terms share a prime factor $p$ (e.g., set $a_1 = xp$ and $a_2 = yp$ for some $x$ and $y$ both greater than 1), then due to the distributive property of multiplication $a_3 = a_1 + a_2 = xp + yp = (x + y)p$, and more generally all subsequent values in the sequence will be multiples of $p$. In this case, all the numbers in the sequence will be composite, but for a trivial reason. The order of the initial terms is also important. In Paul Hoffman's biography of Paul Erdős, The man who loved only numbers, the Wilf sequence is cited but with the initial terms switched. The resulting sequence appears primefree for the first hundred terms or so, but term 138 is a 45-digit prime. Other sequences Several other primefree sequences are known:
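The compositeness claim is easy to spot-check by machine (a sketch assuming SymPy's isprime; the established covering-set proof guarantees the assertion holds for every term, not just the ones checked here):

```python
from sympy import isprime  # primality test that handles big integers

# Wilf's initial terms; each later term is the sum of the previous two.
a, b = 20615674205555510, 3794765361567513
for n in range(3, 103):
    a, b = b, a + b
    assert not isprime(b), f"term {n} is prime"
print("terms 3..102 are all composite")
```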
https://en.wikipedia.org/wiki/Patrilocal%20residence
In social anthropology, patrilocal residence or patrilocality, also known as virilocal residence or virilocality, are terms referring to the social system in which a married couple resides with or near the husband's parents. The concept of location may extend to a larger area such as a village, town or clan territory. The practice has been found in around 70 percent of the world's modern human cultures that have been described ethnographically. Archaeological evidence for patrilocality has also been found among Neanderthal remains in Spain and for ancient hominids in Africa. Description In a patrilocal society, when a man marries, his wife joins him in his father's home or compound, where they raise their children. These children will follow the same pattern. Sons will stay and daughters will move in with their husbands' families. Families living in a patrilocal residence generally assume joint ownership of domestic resources. The household is led by a senior member, who also directs the labor of all other members. Matrilocal residence may be regarded as the opposite of patrilocal residence. Early theories explaining the determinants of postmarital residence (e.g., Lewis Henry Morgan, Edward Tylor, or George Peter Murdock) connected it with the sexual division of labor. However, to date, cross-cultural tests of this hypothesis using worldwide samples have failed to find any significant relationship between these two variables. Korotayev's tests show that the female contribution to subsistence does correlate significantly with matrilocal (as opposed to patrilocal) residence in general; however, this correlation is masked by a general polygyny factor. Although an increase in the female contribution to subsistence tends to lead to matrilocal residence, it also tends simultaneously to lead to general non-sororal polygyny which effectively destroys matrilocality, and pushes a social system toward patrilocality. If this polygyny factor is controlled (e.g., th
https://en.wikipedia.org/wiki/Chimpanzee%20genome%20project
The Chimpanzee Genome Project was an effort to determine the DNA sequence of the chimpanzee genome. Sequencing began in 2005 and by 2013 twenty-four individual chimpanzees had been sequenced. This project was folded into the Great Ape Genome Project. In 2013 high resolution sequences were published from each of the four recognized chimpanzee subspecies: Central chimpanzee, Pan troglodytes troglodytes, 10 sequences; Western chimpanzee, Pan troglodytes verus, 6 sequences; Nigeria-Cameroon chimpanzee, Pan troglodytes ellioti, 4 sequences; and Eastern chimpanzee, Pan troglodytes schweinfurthii, 4 sequences. They were all sequenced to a mean of 25-fold coverage per individual. The research showed considerable genome diversity in chimpanzees with many population-specific traits. The central chimpanzees retain the highest diversity in the chimpanzee lineage, whereas the other subspecies demonstrate signs of population bottlenecks. Background Human and chimpanzee chromosomes are very alike. The primary difference is that humans have one fewer pair of chromosomes than do other great apes. Humans have 23 pairs of chromosomes and other great apes have 24 pairs of chromosomes. In the human evolutionary lineage, two ancestral ape chromosomes fused at their telomeres, producing human chromosome 2. There are nine other major chromosomal differences between chimpanzees and humans: chromosome segment inversions on human chromosomes 1, 4, 5, 9, 12, 15, 16, 17, and 18. After the completion of the Human genome project, a common chimpanzee genome project was initiated. In December 2003, a preliminary analysis of 7600 genes shared between the two genomes confirmed that certain genes such as the forkhead-box P2 transcription factor, which is involved in speech development, are different in the human lineage. Several genes involved in hearing were also found to have changed during human evolution, suggesting selection involving human language-related behavior. Differences between ind
https://en.wikipedia.org/wiki/Fischertechnik
Fischertechnik is a brand of construction toy. It was invented by Artur Fischer and is produced by fischertechnik GmbH in Waldachtal, Germany. Fans often refer to Fischertechnik as "FT" or "ft". It is used in education for teaching about simple machines, as well as motorization and mechanisms. The company also offers computer interface technology, which can be used to teach the theory of automation and robotics. Origin The company is a German manufacturer of fasteners, and the original Fischertechnik set was intended as a Christmas 1964 novelty gift for engineers and buyers at industrial clients. The gifts proved popular, so for Christmas 1965, the company introduced its first building set for retail sale in Germany. In part, it has been claimed to foster education and interest in technology and science among the young. By about 1970, the construction sets were being sold in the United States at upscale toy retailers such as FAO Schwarz. Building blocks The basic building blocks were of channel-and-groove design, manufactured of hard nylon. Basic blocks came in 15×15×15 and 15×15×30 millimeter sizes. A peg on one side of each block could be attached into a channel on any of the other five sides of a similar block, producing a tightly-fitting assembly that could assume almost any shape. Red cladding plates could be used to complete the exterior surfaces of the models. Accessories The original blocks were characteristically gray with red accessories such as wheels and angled blocks. Electric motors, power sources, and gears were soon added to mobilize models. Additional building pieces such as struts were added in “statics” sets, allowing the construction of realistic-looking bridges and tower cranes. A few Fischertechnik girders actually are made of aluminum. At least one company made Fischertechnik-compatible aluminum bars of any desired length. To teach the physics of such models, some sets included measuring devices, so that trigonometric vectors could be ca
https://en.wikipedia.org/wiki/Hexazinone
Hexazinone is an organic compound that is used as a broad-spectrum herbicide. It is a colorless solid. It exhibits some solubility in water but is highly soluble in most organic solvents except alkanes. A member of the triazine class of herbicides, it is manufactured by DuPont and sold under the trade name Velpar. It functions by inhibiting photosynthesis and thus is a nonselective herbicide. It is used to control grasses, broadleaf, and woody plants. In the United States approximately 33% is used on alfalfa, 31% in forestry, 29% in industrial areas, 4% on rangeland and pastures, and < 2% on sugarcane. Hexazinone is a pervasive groundwater contaminant. Use of hexazinone causes groundwater to be at high risk of contamination due to the high leaching potential it exhibits. History Hexazinone is widely used as a herbicide. It is a non-selective herbicide from the triazine family. It is used to control weeds in a broad range of settings, from sugarcane and pineapple plantations and forestry field nurseries to highway and railway verges and industrial plant sites. Hexazinone was first registered in 1975 for the overall control of weeds and later for uses in crops. Structure and reactivity Triazines like hexazinone can bind to the D-1 quinone protein of the electron transport chain in photosystem II to inhibit photosynthesis. These diverted electrons can thereby damage membranes and destroy cells. Synthesis Hexazinone can be synthesized by two different reaction processes. One process starts with a reaction of methyl chloroformate with cyanamide, forming hexazinone after a five-step pathway. A second synthesis starts with methylthiourea. Degradation The degradation of hexazinone has long been studied. It degrades approximately 10% in five weeks when exposed to artificial sunlight in distilled water. However, degradation in natural waters can be three to seven times greater. Surprisingly, the pH and the
https://en.wikipedia.org/wiki/ISO%2011783
ISO 11783, known as Tractors and machinery for agriculture and forestry—Serial control and communications data network (commonly referred to as "ISO Bus" or "ISOBUS"), is a communication protocol for the agriculture industry based on the SAE J1939 protocol (which includes CAN bus). It is managed by the ISOBUS group in VDMA. The ISOBUS standard specifies a serial data network for control and communications on forestry or agricultural tractors and implements. Parts The standard comes in 14 parts: ISO 11783-1: General standard for mobile data communication ISO 11783-2: Physical layer ISO 11783-3: Data link layer ISO 11783-4: Network layer ISO 11783-5: Network management ISO 11783-6: Virtual terminal ISO 11783-7: Implement messages application layer ISO 11783-8: Power train messages ISO 11783-9: Tractor ECU ISO 11783-10: Task controller and management information system data interchange ISO 11783-11: Mobile data element dictionary ISO 11783-12: Diagnostics services ISO 11783-13: File server ISO 11783-14: Sequence control Agricultural Industry Electronics Foundation and ISOBUS The Agricultural Industry Electronics Foundation works to promote ISOBUS and coordinate enhanced certification tests for the ISO 11783 standard. External links ISO 11783-1:2017 Official VDMA page for ISOBUS Open-source PoolEdit editor for creating ISOBUS user interfaces 11783 Network protocols
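Since ISOBUS inherits the SAE J1939 identifier layout, decoding a 29-bit extended CAN identifier is a useful illustration of the data link layer (a sketch of the standard J1939 bit layout; the example identifier is an arbitrary choice):

```python
def parse_j1939_id(can_id):
    """Split a 29-bit extended CAN identifier into J1939/ISOBUS fields."""
    priority = (can_id >> 26) & 0x7
    edp      = (can_id >> 25) & 0x1   # extended data page
    dp       = (can_id >> 24) & 0x1   # data page
    pf       = (can_id >> 16) & 0xFF  # PDU format
    ps       = (can_id >> 8)  & 0xFF  # destination address or group extension
    sa       =  can_id        & 0xFF  # source address
    # For PF >= 240 ("PDU2", broadcast) PS is part of the parameter group number
    pgn = (edp << 17) | (dp << 16) | (pf << 8) | (ps if pf >= 240 else 0)
    return {"priority": priority, "pgn": pgn, "source_address": sa}

print(parse_j1939_id(0x18FEF100))  # PGN 65265, source address 0
```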
https://en.wikipedia.org/wiki/Intraspecific%20competition
Intraspecific competition is an interaction in population ecology, whereby members of the same species compete for limited resources. This leads to a reduction in fitness for both individuals, but the more fit individual survives and is able to reproduce. By contrast, interspecific competition occurs when members of different species compete for a shared resource. Members of the same species have rather similar requirements for resources, whereas different species have a smaller contested resource overlap, resulting in intraspecific competition generally being a stronger force than interspecific competition. Individuals can compete for food, water, space, light, mates, or any other resource which is required for survival or reproduction. The resource must be limited for competition to occur; if every member of the species can obtain a sufficient amount of every resource then individuals do not compete and the population grows exponentially. Prolonged exponential growth is rare in nature because resources are finite and so not every individual in a population can survive, leading to intraspecific competition for the scarce resources. When resources are limited, an increase in population size reduces the quantity of resources available for each individual, reducing the per capita fitness in the population. As a result, the growth rate of a population slows as intraspecific competition becomes more intense, making it a negatively density-dependent process. The falling population growth rate as population increases can be modelled effectively with the logistic growth model. The rate of change of population density eventually falls to zero, the point ecologists have termed the carrying capacity (K). However, a population can only grow to a very limited number within an environment. The carrying capacity, denoted by the variable K, of an environment is the maximum number of individuals or species an environment can sustain and support over a longer period of time. The r
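The logistic model mentioned above is simple to simulate (an illustrative sketch using Euler steps; the parameter values are arbitrary):

```python
def logistic_step(n, r, K, dt):
    """One Euler step of dN/dt = r * N * (1 - N / K)."""
    return n + r * n * (1 - n / K) * dt

n, r, K, dt = 10.0, 0.5, 1000.0, 0.1
for _ in range(400):  # simulate 40 time units
    n = logistic_step(n, r, K, dt)
print(round(n))  # approaches the carrying capacity K = 1000
```

Early on, when n is far below K, the term (1 - n/K) is close to 1 and growth is nearly exponential; as n approaches K the growth rate falls to zero, which is the density-dependent slowdown the text describes.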
https://en.wikipedia.org/wiki/Uncinate%20process%20of%20ethmoid%20bone
In the ethmoid bone, a sickle-shaped projection, the uncinate process, projects posteroinferiorly from the ethmoid labyrinth. Between the posterior edge of this process and the anterior surface of the ethmoid bulla there is a two-dimensional space resembling a crescent shape. This space continues laterally as a three-dimensional slit-like space - the ethmoidal infundibulum. This is bounded medially by the uncinate process, laterally by the orbital lamina of the ethmoid bone (lamina papyracea), and posterosuperiorly by the ethmoidal bulla. This concept is easier to understand if one imagines the infundibulum as a prism whose medial face is the hiatus semilunaris. The "lateral face" of this infundibulum contains the ostium of the maxillary sinus, which, therefore, opens into the infundibulum. Variations The uncinate process can be attached to the lateral nasal wall at the lamina papyracea (50% of cases), to the anterior cranial fossa at the ethmoidal roof (25%), or to the middle concha (25%). The superior attachment of the uncinate process determines the drainage pattern of the frontal sinus. In the first case, the infundibulum and the frontal recess are separated from each other, forcing the frontal sinus to drain directly into the middle meatus and not into the ethmoidal infundibulum. With the other configurations, the sinus drains first into the infundibulum.
https://en.wikipedia.org/wiki/Sticky%20bead%20argument
In general relativity, the sticky bead argument is a simple thought experiment designed to show that gravitational radiation is indeed predicted by general relativity and can have physical effects. These claims were not widely accepted prior to about 1955, but after the introduction of the bead argument, any remaining doubts soon disappeared from the research literature. The argument is often credited to Hermann Bondi, who popularized it, but it was originally proposed by Richard Feynman. Description The thought experiment was first described by Feynman in 1957 at a conference at Chapel Hill, North Carolina, and later addressed in a private letter to Victor Weisskopf. As gravitational waves are mainly transverse, the rod has to be oriented perpendicular to the propagation direction of the wave. History of arguments on the properties of gravitational waves Einstein's double reversal The creator of the theory of general relativity, Albert Einstein, argued in 1916 that gravitational radiation should be produced, according to his theory, by any mass-energy configuration that has a time-varying quadrupole moment (or higher multipole moment). Using a linearized field equation (appropriate for the study of weak gravitational fields), he derived the famous quadrupole formula quantifying the rate at which such radiation should carry away energy. Examples of systems with time-varying quadrupole moments include vibrating strings, bars rotating about an axis perpendicular to the symmetry axis of the bar, and binary star systems, but not rotating disks. In 1922, Arthur Stanley Eddington wrote a paper expressing (apparently for the first time) the view that gravitational waves are in essence ripples in coordinates, and have no physical meaning. He did not appreciate Einstein's arguments that the waves are real. In 1936, together with Nathan Rosen, Einstein rediscovered the Beck vacuums, a family of exact gravitational wave solutions with cylindrical symmetry (so
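In modern notation, the quadrupole formula mentioned above is usually written as follows (this standard form and normalization are supplied here for reference, not quoted from the excerpt):

\[ P = \frac{G}{5c^5}\left\langle \dddot{Q}_{ij}\,\dddot{Q}_{ij} \right\rangle \]

where Q_ij is the traceless mass quadrupole moment of the source, the dots denote time derivatives, and the angle brackets denote an average over several wave periods.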
https://en.wikipedia.org/wiki/Richard%20Swann%20Lull
Richard Swann Lull (November 6, 1867 – April 22, 1957) was an American paleontologist and Sterling Professor at Yale University who is largely remembered now for championing a non-Darwinian view of evolution, whereby mutations could unlock presumed "genetic drives" that, over time, would lead populations to increasingly extreme phenotypes (and perhaps, ultimately, to extinction). Life Lull was born in Annapolis, Maryland, the son of naval officer Edward Phelps Lull and Elizabeth Burton, daughter of General Henry Burton. He married Clara Coles Boggs, with whom he had a daughter, Dorothy. He majored in zoology at Rutgers College, where he received both his undergraduate and master's degrees (M.S. 1896). He worked for the Division of Entomology of the United States Department of Agriculture, but in 1894 became an assistant professor of zoology at the State Agricultural College in Amherst, Massachusetts (now the University of Massachusetts Amherst). Lull's interest in fossil footprints began at Amherst College, renowned for its collection of fossil footprints, and eventually led him to switch from entomology to paleontology. In 1899 Lull worked as a member of the American Museum of Natural History's expedition to Bone Cabin Quarry, Wyoming, helping to collect that museum's brontosaur skeleton. In 1902 he again joined an American Museum team in Montana, then studied under Columbia University professor Henry Fairfield Osborn. In 1903 he received his Ph.D. from Columbia University, and in 1906, after a brief time at Amherst, was named Assistant Professor of Vertebrate Paleontology in Yale College and Associate Curator of Vertebrate Paleontology at the Peabody Museum of Natural History. He stayed at Yale for the next 50 years. In 1933 Lull was awarded the Daniel Giraud Elliot Medal from the National Academy of Sciences. One famous example he used to support his non-Darwinian evolution theory concerned the enormous antlers of the Irish elk: he argued that these could not possibly b
https://en.wikipedia.org/wiki/Institute%20of%20Mathematical%20Sciences%2C%20Chennai
The Institute of Mathematical Sciences (IMSc) (sometimes also referred to as Matscience) is a research centre located in Chennai, India. It is a constituent institute of the Homi Bhabha National Institute. IMSc is a national institute for fundamental research in frontier disciplines of the mathematical and physical sciences: theoretical computer science, mathematics, theoretical physics, and computational biology. It is funded mainly by the Department of Atomic Energy. The institute operates the Kabru supercomputer. History The institute was founded by Alladi Ramakrishnan in 1962. It is modelled after the Institute for Advanced Study, Princeton, New Jersey, United States. It went through a phase of expansion when E. C. G. Sudarshan in the 1980s and R. Ramachandran in the 1990s were directors. The current director of the institute is V. Ravindran. Academics The institute has a graduate research program to which a group of students is admitted each year to work towards a Ph.D. degree. IMSc hosts scientists at the post-doctoral level and supports a visiting scientist program in areas of research in the institute. Campus Located in South Chennai, in the Adyar-Taramani area, the institute is on the Central Institutes of Technology (CIT) campus. The institute maintains a student hostel, flatlets for long-term visitors, married students and post-doctoral fellows, and the institute guest house. IMSc has its own faculty housing in Tiruvanmiyur near the seashore. Notable people Ramachandran Balasubramanian, mathematician Ganapathy Baskaran, physicist Indumathi D., physicist Rajiah Simon, physicist Radha Balakrishnan, physicist
https://en.wikipedia.org/wiki/Melt%20pond
Melt ponds are pools of open water that form on sea ice in the warmer months of spring and summer. The ponds are also found on glacial ice and ice shelves. Ponds of melted water can also develop under the ice, which may lead to the formation of thin underwater ice layers called false bottoms. Melt ponds are usually darker than the surrounding ice, and their distribution and size are highly variable. They absorb solar radiation rather than reflecting it as ice does and, thereby, have a significant influence on Earth's radiation balance. This differential, which had not been scientifically investigated until recently, has a large effect on the rate of ice melting and the extent of ice cover. Melt ponds can melt through to the ocean's surface. Seawater entering the pond increases the melt rate because the salty water of the ocean is warmer than the fresh water of the pond. The increase in salinity also depresses the water's freezing point. Water from melt ponds over land surfaces can run into crevasses or moulins – tubes leading under ice sheets or glaciers – turning into meltwater. The water may reach the underlying rock. The effect is an increase in the rate of ice flow to the oceans, as the fluid behaves like a lubricant in the basal sliding of glaciers. Effects of melt ponds The effects of melt ponds are diverse (this subsection refers to melt ponds on ice sheets and ice shelves). Research by Ted Scambos, of the National Snow and Ice Data Center, has supported the meltwater fracturing theory, which suggests the melting process associated with melt ponds has a substantial effect on ice shelf disintegration. Seasonal meltwater that ponds and penetrates beneath glaciers produces seasonal acceleration and deceleration of ice flows, affecting whole ice sheets. The cumulative effects of ponding on ice sheets appear in the earthquake record of Greenland and other glaciers: "Quakes ranged from six to 15 per year from 1993 to 2002, then jumped to 20 in 2003, 23 in 2004, and 32 in th
https://en.wikipedia.org/wiki/Channel%20use
Channel use is a quantity used in signal processing and telecommunication, related to symbol rate and channel capacity. Capacity is measured in bits per input symbol into the channel (bits per channel use). If the capacity is C bits per channel use and a symbol enters the channel every Ts seconds (one symbol transmitted per symbol period), the channel capacity in bits per second is C/Ts. The phrase "1 bit per channel use" denotes the transmission of 1 symbol (of duration Ts) containing 1 data bit. See also Adaptive communications End instrument Spectral efficiency and modulation efficiency in (bit/s)/Hz Data transmission Information theory
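A short numerical illustration (the figures are invented for this sketch):

    C = 2.0        # capacity in bits per channel use
    Ts = 1e-6      # symbol period in seconds: one channel use every microsecond
    print(C / Ts)  # 2000000.0 -- a capacity of 2 Mbit/s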
https://en.wikipedia.org/wiki/Sampling%20fraction
In sampling theory, the sampling fraction is the ratio of sample size to population size or, in the context of stratified sampling, the ratio of the sample size to the size of the stratum. The formula for the sampling fraction is f = n/N, where n is the sample size and N is the population size. A sampling fraction value close to 1 will occur if the sample size is relatively close to the population size. When sampling from a finite population without replacement, this may cause dependence between individual samples. To correct for this dependence when calculating the sample variance, a finite population correction (or finite population multiplier) of (N-n)/(N-1) may be used. If the sampling fraction is small, less than 0.05, then the sample variance is not appreciably affected by dependence, and the finite population correction may be ignored.
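A minimal sketch of these quantities in Python (the sample and population sizes are invented for illustration):

    def sampling_fraction(n, N):
        return n / N

    def finite_population_correction(n, N):
        return (N - n) / (N - 1)

    n, N = 200, 1000
    print(sampling_fraction(n, N))             # 0.2 -- well above the 0.05 rule of thumb
    print(finite_population_correction(n, N))  # 0.8008..., scales the variance of the sample mean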
https://en.wikipedia.org/wiki/Pidgeon%20process
The Pidgeon process is a practical method for smelting magnesium. The most common method involves the raw material, dolomite, being fed into an externally heated reduction tank and then thermally reduced to metallic magnesium using 75% ferrosilicon as a reducing agent in a vacuum. Overall, the steps in magnesium smelting via the Pidgeon process are dolomite calcination, grinding and pelleting, and vacuum thermal reduction. Besides the Pidgeon process, electrolysis of magnesium chloride is also used for commercial production of magnesium, at one point in time accounting for 75% of the world's magnesium production. Chemistry The general reaction that occurs in the Pidgeon process is: 2MgO*CaO + Si(Fe) -> 2Mg + Ca2SiO4 For industrial use, ferrosilicon is used because it is cheaper and more readily available than silicon. The iron from the alloy is a spectator in the reaction. CaC2 may also be used as an even cheaper alternative to silicon and ferrosilicon, but is disadvantageous because it decreases the magnesium yield slightly. The magnesium raw material of this type of reaction is magnesium oxide, which can be obtained in many ways. In all cases, the raw materials have to be calcined to remove both water and carbon dioxide; otherwise these would be released as gases at reaction temperatures and could even reverse the reaction. Magnesium oxide can be obtained from sea or lake water: the magnesium chloride it contains is hydrolyzed to the hydroxide, which is then calcined to magnesium oxide by removing water. Another option is to use mined magnesite (MgCO3) calcined to magnesium oxide by carbon dioxide removal. The most used raw material is mined dolomite, a mixed (Ca,Mg)CO3, where the calcium oxide present in the reaction zone scavenges the silica formed, releasing heat and consuming one of the products, ultimately helping push the equilibrium to the right. (1) Dolomite calcination CaCO3*MgCO3 -> MgO*CaO + 2CO2 (2) Reduction 2MgO*CaO + Si(Fe) -> 2Mg + Ca2SiO4 The Pidgeon process is an endoth
https://en.wikipedia.org/wiki/Computational%20epidemiology
Computational epidemiology is a multidisciplinary field that uses techniques from computer science, mathematics, geographic information science and public health to better understand issues central to epidemiology, such as the spread of diseases or the effectiveness of a public health intervention. Computational epidemiology traces its origins to mathematical epidemiology, but began to experience significant growth with the rise of big data and the democratization of high-performance computing through cloud computing. Introduction In contrast with traditional epidemiology, computational epidemiology looks for patterns in unstructured sources of data, such as social media. It can be thought of as the hypothesis-generating antecedent to hypothesis-testing methods such as national surveys and randomized controlled trials. A mathematical model is developed which describes the observed behavior of the pathogen, based on the available data. Simulations of the model are then performed to understand the possible outcomes given the model used. These simulations produce projections which can then be used to make predictions, verify them against observed facts, and plan interventions and measures to control the disease's spread.
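As one concrete example of the modeling-and-simulation loop described above, here is a minimal sketch of the classic SIR compartmental model (a common, but not the only, choice of model; parameter values are invented for illustration):

    # S, I, R are fractions of the population: susceptible, infectious, recovered.
    def sir_step(S, I, R, beta, gamma, dt):
        new_inf = beta * S * I * dt   # new infections this step
        new_rec = gamma * I * dt      # new recoveries this step
        return S - new_inf, I + new_inf - new_rec, R + new_rec

    S, I, R = 0.99, 0.01, 0.0         # initial conditions
    beta, gamma, dt = 0.3, 0.1, 1.0   # transmission rate, recovery rate, 1-day step
    for day in range(160):
        S, I, R = sir_step(S, I, R, beta, gamma, dt)
    print(round(S, 3), round(I, 3), round(R, 3))  # projected final epidemic state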
https://en.wikipedia.org/wiki/Undefined%20variable
An undefined variable in the source code of a computer program is a variable that is accessed in the code but has not been declared by that code. In some programming languages, an implicit declaration is provided the first time such a variable is encountered at compile time. In other languages such a usage is considered to be sufficiently serious that a diagnostic is issued and the compilation fails. Some language definitions initially used the implicit declaration behavior and as they matured provided an option to disable it (e.g. Perl's "use warnings" or Visual Basic's "Option Explicit"). Examples The following provides some examples of how various programming language implementations respond to undefined variables. Each code snippet is followed by an error message (if any). CLISP (setf y x) *** - EVAL: variable X has no value C int main() { int y = x; return 0; } foo.c: In function `main': foo.c:2: error: `x' undeclared (first use in this function) foo.c:2: error: (Each undeclared identifier is reported only once foo.c:2: error: for each function it appears in.) JavaScript A ReferenceError only happens if the same piece of executed code has a let or a const (but not a var) declaration later on, or if the code is executed in strict mode. In all other cases, the variable will have the special value undefined. "use strict"; let y = x; let y = x; let x; // causes error on line 1 ReferenceError: x is not defined Source File: file:///c:/temp/foo.js Lua y = x (no error, continuing) print(y) nil ML (Standard ML of New Jersey) val y = x; stdIn:1.9 Error: unbound variable or constructor: x MUMPS Set Y=X <UNDEF> OCaml let y = x;; Unbound value x Perl my $y = ($x // 0) + 1; # defined-or operator (no error) PHP 5 $y = $x; (no error) $y=""; $x=""; error_reporting(E_ALL); $y = $x; PHP Notice: Undefined variable: x in foo.php on line 3 Python 2.4 >>> x = y Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'y' is
https://en.wikipedia.org/wiki/Team%20OS/2
Team OS/2 was an advocacy group formed to promote IBM's OS/2 operating system. Originally internal to IBM with no formal IBM support, Team OS/2 successfully converted to a grassroots movement formally supported (but not directed) by IBM, consisting of well over ten thousand OS/2 enthusiasts both inside and outside IBM. It is one of the earliest examples of both an online viral phenomenon and a cause attracting supporters primarily through online communications. The decline of Team OS/2 largely coincided with IBM's abandonment of OS/2 and the coinciding attacks orchestrated by Microsoft on OS/2, Team OS/2, and IBM's early attempts at online evangelism. History Beginnings Team OS/2 was a significant factor in the spread and acceptance of OS/2. Formed in February 1992, Team OS/2 began when IBM employee Dave Whittle, recently appointed by IBM to evangelize OS/2 online, formed an internal IBM discussion group titled TEAMOS2 FORUM on IBM's worldwide network, which at the time served more individuals than did the more academic Internet. The forum header stated its purpose. The forum went viral as increasing numbers of IBMers worldwide began to contribute a wide variety of ideas as to how IBM could effectively compete with Microsoft to establish OS/2 as the industry standard desktop operating system. Within a short time, thousands of IBM employees had added the word TEAMOS2 to their internal phone directory listing, which enabled anyone within IBM to find like-minded OS/2 enthusiasts within the company and work together to overcome the challenges posed by IBM's size, insularity, and top-down marketing style. TEAMOS2 FORUM quickly caught the attention of some IBM executives, including Lee Reiswig and Lucy Baney, who after initial scepticism offered moral and financial support for Whittle's grassroots and online marketing efforts. IBM's official program for generating word-of-mouth enthusiasm was called the "OS/2 Ambassador Program", where OS/2 enthusias
https://en.wikipedia.org/wiki/Lacticaseibacillus%20casei
Lacticaseibacillus casei is an organism that belongs to the largest genus in the family Lactobacillaceae, a lactic acid bacterium (LAB), that was previously classified as Lactobacillus casei. This bacterium is facultatively anaerobic or microaerophilic, acid-tolerant, and non-spore-forming. The taxonomy of this group has been debated for several years because researchers struggled to differentiate between the strains of L. casei and L. paracasei. It has recently been accepted as a single species with five subspecies: L. casei subsp. rhamnosus, L. casei subsp. alactosus, L. casei subsp. casei, L. casei subsp. tolerans, and L. casei subsp. pseudoplantarum. The taxonomy of this genus was determined according to phenotypic, physiological, and biochemical similarities. This species is a non-sporing, rod-shaped, gram-positive microorganism that can be found within the reproductive and digestive tracts of the human body. Since L. casei can survive in a variety of environmental habitats, it has been, and continues to be, extensively studied by health scientists. Commercially, L. casei is used in fermenting dairy products and as a probiotic. Uses Dairy The most common application of L. casei is industrial, specifically for dairy production. Lacticaseibacillus casei is typically the dominant species of nonstarter lactic acid bacteria (i.e. contaminant bacteria) present in ripening cheddar cheese, and, recently, the complete genome sequence of L. casei ATCC 334 has become available. L. casei is also the dominant species in naturally fermented Sicilian green olives. Medical A commercial beverage containing L. casei strain Shirota has been shown to inhibit the in vivo growth of Helicobacter pylori, but when the same beverage was consumed by humans in a small trial, H. pylori colonization decreased only slightly, and the trend was not statistically significant. Some L. casei strains are considered to be probiotic, and may be effective in
https://en.wikipedia.org/wiki/Data%20analysis
Data analysis is the process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science, and social science domains. In today's business world, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively. Data mining is a particular data analysis technique that focuses on statistical modeling and knowledge discovery for predictive rather than purely descriptive purposes, while business intelligence covers data analysis that relies heavily on aggregation, focusing mainly on business information. In statistical applications, data analysis can be divided into descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA). EDA focuses on discovering new features in the data while CDA focuses on confirming or falsifying existing hypotheses. Predictive analytics focuses on the application of statistical models for predictive forecasting or classification, while text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a species of unstructured data. All of the above are varieties of data analysis. Data integration is a precursor to data analysis, and data analysis is closely linked to data visualization and data dissemination. The process of data analysis Analysis refers to dividing a whole into its separate components for individual examination. Data analysis is a process for obtaining raw data and subsequently converting it into information useful for decision-making by users. Data is collected and analyzed to answer questions, test hypotheses, or disprove theories. Statistician John Tukey defined data analysis in 1961 as: "Procedures for analyzing data, techni
https://en.wikipedia.org/wiki/WWJE-DT
WWJE-DT (channel 50) is a television station licensed to Derry, New Hampshire, United States, serving the Boston area as an affiliate of True Crime Network. It is owned by TelevisaUnivision alongside Marlborough, Massachusetts–licensed Univision-owned station WUNI (channel 66). The two stations share main studios and transmitter facilities on Parmenter Road in Hudson, Massachusetts. WWJE is operated separately from WUNI's joint sales agreement (JSA) with Entravision Communications–owned UniMás affiliate WUTF-TV (channel 27) in Worcester. WWJE formerly broadcast local newscasts from a studio located in Concord, branded as the NH1 News Network or NH1 News. Besides WBIN, sister radio station WNNH also used the NH1 News branding from August 2015 to August 2017. WBIN-TV was one of only two television stations based in the state of New Hampshire to broadcast local newscasts (alongside WMUR-TV), as much of the state is part of the Boston media market. On February 17, 2017, WBIN canceled its newscasts as part of a wind-down of the station's operations following the sale of its spectrum in the Federal Communications Commission (FCC)'s incentive auction. The station shut down its channel 35 transmitter on Merrill Hill in Hudson, New Hampshire on September 15, 2017, and began operating on channel 27 through a channel sharing agreement with channel 66 (then WUTF-DT); the WBIN-TV license was subsequently sold by Carlisle One Media, a company controlled by Bill Binnie, to WUNI's owner, Univision Communications. History Prior history of channel 50 in Boston The channel 50 allocation in the Boston market originally belonged to WXPO-TV, which launched in October 1969. It operated from two studios: its offices and master production facilities were located on Dutton Street in downtown Lowell, Massachusetts; however, its transmitter and "main" studio were on Governor Dinsmore Road in Windham, New Hampshire, to comply with FCC regulations requiring that a station's transmitter be l
https://en.wikipedia.org/wiki/Diane%20Pozefsky
Diane P. Pozefsky is a research professor at the University of North Carolina in the Department of Computer Science. Pozefsky was awarded the Women in Technology International (WITI) 2011 Hall of Fame Award for contributions to the fields of science and technology. Education Pozefsky earned an A.B. in applied mathematics from Brown University in 1972 and her Ph.D. from the Department of Computer Science at UNC in 1979 under the supervision of Mehdi Jazayeri. Career Pozefsky joined IBM Corporation, Raleigh, NC, in 1979 as a member of the Communication Systems Architecture Department, working on the specification and application of the Systems Network Architecture (SNA), a large and complex feature-rich network architecture developed by IBM in the 1970s. SNA is similar in some respects to the OSI reference model, but with a number of differences, and is essentially composed of seven layers. She worked for IBM for 25 years and was named an IBM Fellow in 1994 in recognition of her work on APPN and AnyNet architectures and development. She was tasked with the network and application design for the 1998 and 2000 Olympics. Her work life has largely been focused on networking and software engineering, including developing networking protocols, deploying the network at the Nagano Olympics, development processes, storage networking, application development, and mobile computing. She has worked in development, design, and architecture, and two areas that she has become particularly interested in later in her career are improving quality and blending theory and practice. Pozefsky returned to UNC after retiring from IBM in June 2004. Publications Pozefsky's publications include: "Storage Networking: More than an SNA Anagram" in NCP and 3745/46 Today, Summer 2001. "MPTN Transport Gateway", with D. Ogle, in SNA and TCP/IP Enterprise Networking, Manning Publications Co, 1997. "Multiprotocol Transport Networking: Eliminating Application Dependencies on Communications Prot
https://en.wikipedia.org/wiki/Social%20return%20on%20investment
Social return on investment (SROI) is a principles-based method for measuring extra-financial value (such as environmental or social value not currently reflected or involved in conventional financial accounts). It can be used by any entity to evaluate impact on stakeholders, identify ways to improve performance, and enhance the performance of investments. The SROI method as it has been standardized by Social Value UK provides a consistent quantitative approach to understanding and managing the impacts of a project, business, organisation, fund or policy. It accounts for stakeholders' views of impact, and puts financial 'proxy' values on all those impacts identified by stakeholders which do not typically have market values. The aim is to include the values of people that are often excluded from markets in the same terms as used in markets, that is money, in order to give people a voice in resource allocation decisions. Some SROI users employ a version of the method that does not require that all impacts be assigned a financial proxy. Instead the "numerator" includes monetized, quantitative but not monetized, qualitative, and narrative types of information about value. A network was formed in 2008 to facilitate the continued evolution of the method. Some 2,000 members globally belong to this network, called Social Value International (formerly the SROI Network). Development While the term SROI exists in cost–benefit analysis, a methodology for calculating social return on investment in the context of social enterprise was first documented in 2000 by REDF (formerly the Roberts Enterprise Development Fund), a San Francisco-based philanthropic fund that makes long-term grants to organizations that run businesses for social benefit. Since then the approach has evolved to take into account developments in corporate sustainability reporting as well as development in the field of accounting for social and environmental impact. Interest has been fuelled by the increasing r
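A toy sketch of an SROI-style calculation: monetized impacts discounted to present value, divided by the investment. All figures and the 3.5% discount rate are invented, and this is a simplification, not Social Value UK's full method:

    def present_value(cashflows, rate):
        # Discount a list of yearly amounts back to the present.
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

    impact_per_year = [40_000, 40_000, 40_000]  # financial proxies for social value
    investment = 75_000
    sroi_ratio = present_value(impact_per_year, 0.035) / investment
    print(round(sroi_ratio, 2))  # ~1.49: each unit invested yields ~1.49 in social value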
https://en.wikipedia.org/wiki/Stochastic%20hill%20climbing
Stochastic hill climbing is a variant of the basic hill climbing method. While basic hill climbing always chooses the steepest uphill move, "stochastic hill climbing chooses at random from among the uphill moves; the probability of selection can vary with the steepness of the uphill move." See also Stochastic gradient descent
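A minimal Python sketch of the idea, using one reasonable reading of the rule quoted above: the selection probability is proportional to how much each uphill move improves the objective. The toy objective and step size are our own illustration:

    import random

    def stochastic_hill_climb(f, x, neighbors, iters=1000):
        for _ in range(iters):
            uphill = [n for n in neighbors(x) if f(n) > f(x)]
            if not uphill:
                return x  # no uphill move: a local optimum has been reached
            weights = [f(n) - f(x) for n in uphill]  # steeper moves weigh more
            x = random.choices(uphill, weights=weights, k=1)[0]
        return x

    f = lambda x: -(x - 3) ** 2                 # maximum at x = 3
    neighbors = lambda x: [x - 0.1, x + 0.1]    # two candidate moves per step
    print(round(stochastic_hill_climb(f, 0.0), 1))  # -> 3.0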
https://en.wikipedia.org/wiki/Osmotrophy
Osmotrophy is a feeding mechanism involving the movement of dissolved organic compounds by osmosis for nutrition. Organisms that use osmotrophy are called osmotrophs. Some mixotrophic microorganisms use osmotrophy to derive some of their energy. Osmotrophy is used by many diverse organisms, including bacteria, many species of protists and most fungi. Some macroscopic animals like molluscs, sponges, corals, brachiopods and echinoderms may use osmotrophic feeding as a supplemental food source. Process Osmotrophy as a means of gathering nutrients in microscopic organisms relies on cellular surface area to ensure that proper diffusion of nutrients into the cell occurs. In other words, an osmotroph is an organism that has its "stomach" outside of its body. Osmotrophs may still have an internal digestive system in addition to using osmosis as a way to gain supplemental nutrients. As organisms increase in size, the surface-area-to-volume ratio drops and osmotrophy becomes insufficient to meet nutrient demands. Larger macroscopic organisms that rely on osmotrophy can compensate for a reduced surface-area-to-volume ratio with a very flat, thin body; a tapeworm is an example of such an adaptation. In stagnant waters, photoautotrophs have a relative advantage over heterotrophic osmotrophs, since the flux of photons as an energy source is not hindered at low temperatures, whereas an osmotroph depends on Brownian diffusion for mass acquisition. Osmotrophy differs from other cellular feeding mechanisms but can be found in many organisms, allowing them to use osmosis in different environments. Fungi Fungi are the biggest osmotrophic specialists, since they are major degraders in all ecosystems. For organisms like fungi, osmotrophy facilitates the decomposition process, as osmotrophic feeding yields metabolites that sustain further growth. See also Autotrophy Heterotrophy Mixo
https://en.wikipedia.org/wiki/Polarizer
A polarizer or polariser (see spelling differences) is an optical filter that lets light waves of a specific polarization pass through while blocking light waves of other polarizations. It can filter a beam of light of undefined or mixed polarization into a beam of well-defined polarization, that is, polarized light. The common types of polarizers are linear polarizers and circular polarizers. Polarizers are used in many optical techniques and instruments, and polarizing filters find applications in photography and LCD technology. Polarizers can also be made for other types of electromagnetic waves besides visible light, such as radio waves, microwaves, and X-rays. Linear polarizers Linear polarizers can be divided into two general categories: absorptive polarizers, where the unwanted polarization states are absorbed by the device, and beam-splitting polarizers, where the unpolarized beam is split into two beams with opposite polarization states. Polarizers which maintain the same axes of polarization with varying angles of incidence are often called Cartesian polarizers, since the polarization vectors can be described with simple Cartesian coordinates (for example, horizontal vs. vertical) independent from the orientation of the polarizer surface. When the two polarization states are relative to the direction of a surface (usually found with Fresnel reflection), they are usually termed s and p. This distinction between Cartesian and s–p polarization can be negligible in many cases, but it becomes significant for achieving high contrast and with wide angular spreads of the incident light. Absorptive polarizers Certain crystals, due to the effects described by crystal optics, show dichroism, preferential absorption of light which is polarized in particular directions. They can therefore be used as linear polarizers. The best known crystal of this type is tourmaline. However, this crystal is seldom used as a polarizer, since the dichroic effect is strongly wavelen
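For an ideal linear polarizer, the transmitted intensity of linearly polarized light follows Malus's law, a standard result added here for reference (it is not quoted from the excerpt above):

\[ I = I_0 \cos^2\theta \]

where I_0 is the incident intensity and θ is the angle between the light's polarization direction and the polarizer's transmission axis; ideally polarized light aligned with the axis passes fully, light at 90° is blocked, and unpolarized light is transmitted at I_0/2 on average.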
https://en.wikipedia.org/wiki/Fragaria%20%C3%97%20Comarum%20hybrids
There are several commercially important hybrids between Fragaria and Comarum species in existence. A name for Fragaria × Comarum hybrids is available as × Comagaria Büscher & G.H. Loos in Veröff. Bochumer Bot. Ver. 2(1): 6. 2010, along with the combination × Comagaria rosea (Mabb.) Büscher & G.H. Loos. The first-generation hybrids have been recorded as heptaploid, i.e. with seven sets of chromosomes; four sets of chromosomes came from their octoploid strawberry parent, and three from their hexaploid Comarum parent. Commercial cultivars All commercial cultivars resemble strawberries more closely than they do Comarum. They are all vigorous, and produce runners profusely. 'Frel', also known as , is a patented hybrid strawberry that is the result of crossing the garden strawberry Fragaria × ananassa subsp. cuneifolia (syn. Fragaria grandiflora) with Marsh Cinquefoil, Comarum palustre (formerly Potentilla palustris), followed by backcrossing to strawberry. The plant is grown for ornamental purposes. It has bright pink flowers (in contrast to the white flowers of naturally occurring strawberry species) and it produces a small number of strawberries. 'Franor' (marketed as ) developed as a sport of 'Frel', and features a more intense red color in the flowers. 'Gerald Straley' is a seedling of 'Frel', selected at Heronswood in Washington for its bright red flowers. It was named after the former curator of the University of British Columbia Botanical Gardens. 'Lipstick' is a variety developed in 1966 from a cross between the Marsh Cinquefoil, Comarum palustre, and the Garden Strawberry, Fragaria × ananassa. It has deep pink to red flowers, and slightly larger, more flavorful berries than 'Frel'. It, too, is grown for ornamental purposes.
https://en.wikipedia.org/wiki/ProClarity
ProClarity Corporation was a software company specializing in business intelligence and data analysis applications. The company was founded in 1995 as Knosys Inc. in Boise, Idaho. The company was renamed ProClarity after its primary commercial software product, "ProClarity", in 2001. ProClarity's software products integrated tightly with Microsoft Analysis Services. Among ProClarity's more than 2,000 global clients were AT&T, Ericsson, Hewlett-Packard, Home Depot, Pennzoil QuakerState, Reckitt Benckiser, Roche, Siemens, USDA, Verizon, and Wells Fargo. On April 3, 2006, Microsoft announced the acquisition of ProClarity. The company was gradually folded into Microsoft's Business Division while a final update to the software product, version 6.3, was released in 2007. Additional business intelligence components, such as PerformancePoint Services for SharePoint 2010, and business intelligence improvements in Excel were released by the division in subsequent years.
https://en.wikipedia.org/wiki/Group%20code
In coding theory, group codes are a type of code. Group codes consist of linear block codes which are subgroups of G^n, where G is a finite Abelian group. A systematic group code is a code over G of order |G|^k defined by n − k homomorphisms which determine the parity check bits. The remaining k bits are the information bits themselves. Construction Group codes can be constructed by special generator matrices which resemble generator matrices of linear block codes, except that the elements of those matrices are endomorphisms of the group G instead of symbols from the code's alphabet. In this scenario, each codeword is obtained as the image of the information symbols under the entries of the generator matrix. See also Group coded recording (GCR)
https://en.wikipedia.org/wiki/Genetic%20distance
Genetic distance is a measure of the genetic divergence between species or between populations within a species, whether the distance measures time from a common ancestor or degree of differentiation. Populations with many similar alleles have small genetic distances. This indicates that they are closely related and have a recent common ancestor. Genetic distance is useful for reconstructing the history of populations, such as the multiple human expansions out of Africa. It is also used for understanding the origin of biodiversity. For example, the genetic distances between different breeds of domesticated animals are often investigated in order to determine which breeds should be protected to maintain genetic diversity. Biological foundation In the genome of an organism, each gene is located at a specific place called the locus for that gene. Allelic variations at these loci cause phenotypic variation within species (e.g. hair colour, eye colour). However, most alleles do not have an observable impact on the phenotype. Within a population, new alleles generated by mutation either die out or spread throughout the population. When a population is split into different isolated populations (by either geographical or ecological factors), mutations that occur after the split will be present only in the isolated population. Random fluctuation of allele frequencies also produces genetic differentiation between populations. This process is known as genetic drift. By examining the differences in allele frequencies between the populations and computing genetic distance, we can estimate how long ago the two populations were separated. Measures Although it is simple to define genetic distance as a measure of genetic divergence, there are several different statistical measures that have been proposed. This has happened because different authors considered different evolutionary models. The most commonly used are Nei's genetic distance, Cavalli-Sforza and Edwards measure, an
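A single-locus sketch of Nei's standard genetic distance, the most commonly used measure named above (the allele frequencies are invented for illustration; real analyses average the J terms over many loci):

    import math

    def nei_distance(x, y):
        """x, y: allele frequency lists for the same locus in two populations."""
        jxy = sum(a * b for a, b in zip(x, y))   # mean probability of identity between pops
        jx = sum(a * a for a in x)               # within population 1
        jy = sum(b * b for b in y)               # within population 2
        return -math.log(jxy / math.sqrt(jx * jy))

    pop1 = [0.7, 0.2, 0.1]
    pop2 = [0.4, 0.4, 0.2]
    print(round(nei_distance(pop1, pop2), 4))  # 0.1487; identical populations give 0.0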
https://en.wikipedia.org/wiki/Replication%20%28computing%29
Replication in computing involves sharing information so as to ensure consistency between redundant resources, such as software or hardware components, to improve reliability, fault-tolerance, or accessibility. Terminology Replication in computing can refer to: Data replication, where the same data is stored on multiple storage devices Computation replication, where the same computing task is executed many times. Computational tasks may be: Replicated in space, where tasks are executed on separate devices Replicated in time, where tasks are executed repeatedly on a single device Replication in space or in time is often linked to scheduling algorithms. Access to a replicated entity is typically uniform with access to a single non-replicated entity. The replication itself should be transparent to an external user. In a failure scenario, a failover of replicas should be hidden as much as possible with respect to quality of service. Computer scientists further describe replication as being either: Active replication, which is performed by processing the same request at every replica Passive replication, which involves processing every request on a single replica and transferring the result to the other replicas When one leader replica is designated via leader election to process all the requests, the system is using a primary-backup or primary-replica scheme, which is predominant in high-availability clusters. In comparison, if any replica can process a request and distribute a new state, the system is using a multi-primary or multi-master scheme. In the latter case, some form of distributed concurrency control must be used, such as a distributed lock manager. Load balancing differs from task replication, since it distributes a load of different computations across machines, and allows a single computation to be dropped in case of failure. Load balancing, however, sometimes uses data replication (especially multi-master replication) internally, to distrib
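A toy sketch of the passive (primary-backup) scheme described above; the class names and structure are our own illustration, not any particular system's API:

    class Replica:
        def __init__(self):
            self.state = {}

        def apply_state(self, state):
            self.state = dict(state)   # backups simply install the shipped state

    class Primary(Replica):
        def __init__(self, backups):
            super().__init__()
            self.backups = backups

        def handle(self, key, value):
            self.state[key] = value    # process the request once, on the primary only
            for b in self.backups:     # then transfer the result to every backup
                b.apply_state(self.state)

    backups = [Replica(), Replica()]
    primary = Primary(backups)
    primary.handle("x", 42)
    print(all(b.state == primary.state for b in backups))  # True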
https://en.wikipedia.org/wiki/Synchronization%20in%20telecommunications
Many services running on modern digital telecommunications networks require accurate synchronization for correct operation. For example, if telephone exchanges are not synchronized, then bit slips will occur and degrade performance. Telecommunication networks rely on the use of highly accurate primary reference clocks which are distributed network-wide using synchronization links and synchronization supply units. Ideally, clocks in a telecommunications network are synchronous, controlled to run at identical rates, or at the same mean rate with a fixed relative phase displacement, within a specified limited range. However, they may be mesochronous in practice. In common usage, mesochronous networks are often described as synchronous. Components Primary reference clock (PRC) Modern telecommunications networks use highly accurate primary master clocks that must meet the international standards requirement for long-term frequency accuracy better than 1 part in 10^11. To get this performance, atomic clocks or GPS disciplined oscillators are normally used. Synchronization supply unit Synchronization supply units (SSU) are used to ensure reliable synchronisation distribution. They have a number of key functions: They filter the synchronisation signal they receive to remove the higher frequency phase noise. They provide distribution by providing a scalable number of outputs to synchronise other local equipment. They provide a capability to carry on producing a high quality output even when their input reference is lost; this is referred to as holdover mode. Quality metrics In telecoms networks two key parameters are used for measurement of synchronisation performance. These parameters are defined by the International Telecommunication Union in its recommendation G.811, by the European Telecommunications Standards Institute in its standard EN 300 462-1-1, and by the ANSI Synchronization Interface Standard T1.101, which defines profiles for clock accuracy at each stratum level, and b
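To put that requirement in perspective (our own arithmetic, not from the source): a clock accurate to 1 part in 10^11 can accumulate at most about 10^-11 × 86,400 s ≈ 0.86 µs of time error per day, i.e. less than a microsecond of drift each day.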
https://en.wikipedia.org/wiki/Industrial%20fermentation
Industrial fermentation is the intentional use of fermentation in manufacturing processes. In addition to the mass production of fermented foods and drinks, industrial fermentation has widespread applications in the chemical industry. Commodity chemicals, such as acetic acid, citric acid, and ethanol are made by fermentation. Moreover, nearly all commercially produced industrial enzymes, such as lipase, invertase and rennet, are made by fermentation with genetically modified microbes. In some cases, production of biomass itself is the objective, as is the case for single-cell proteins, baker's yeast, and starter cultures for lactic acid bacteria used in cheesemaking. In general, fermentations can be divided into four types: Production of biomass (viable cellular material) Production of extracellular metabolites (chemical compounds) Production of intracellular components (enzymes and other proteins) Transformation of substrate (in which the transformed substrate is itself the product) These types are not necessarily disjoint from each other, but provide a framework for understanding the differences in approach. The organisms used are typically microorganisms, particularly bacteria, algae, and fungi, such as yeasts and molds, but industrial fermentation may also involve cell cultures from plants and animals, such as CHO cells and insect cells. Special considerations are required for the specific organisms used in the fermentation, such as the dissolved oxygen level, nutrient levels, and temperature. The rate of fermentation depends on the concentration of microorganisms, cells, cellular components, and enzymes as well as temperature, pH and, for aerobic fermentation, the level of oxygen. Product recovery frequently involves the concentration of the dilute solution. General process overview In most industrial fermentations, the organisms or eukaryotic cells are submerged in a liquid medium; in others, such as the fermentation of cocoa beans, coffee cherries, and miso,
https://en.wikipedia.org/wiki/Loop%20bin%20duplicator
A loop bin duplicator is a specialized audio tape machine used in the duplication of pre-recorded audio cassettes and 8-track cartridges. Loop bin duplicators were first introduced in the early 1990s. They had fewer moving parts than previous systems, so were more reliable to operate. Analog loop bin duplicator An analog loop bin uses a long loop of either 1/2"-wide tape (for cassette duplication) or 1"-wide tape (for 8-track tape duplication), loaded into a large bin located in the front of the duplicator. This loop master tape is loaded into the duplicator's bin from a traditional open reel of tape, where the program material has been recorded to it beforehand in real time using a studio-type multitrack tape recorder. The loop master tape for cassette duplication carries 4 tracks (2 stereo tracks for Side A recorded in one direction, and the other 2 for Side B recorded in the opposite direction); for 8-tracks, all 8 tracks (4 two-track stereo programs) are recorded in one direction. The loop-bin master tape is read by the duplicator at a very high speed. For cassettes, either 32, 64, 80, or 100 times the normal playback speed (1.875 ips) of an audio cassette (60, 120, 150, and 187.5 ips respectively) is used, and 10 or 20 times the normal playback speed (3.75 ips) is used for 8-track duplication (37.5 and 75 ips respectively). While this loop is being played back, the audio signals for the A and B sides (or all 4 programs for 8-track) are sent to a "slave" recorder or an audio bus that contains multiple "slaves". The "slave" records from the loop bin master tape the 4 tracks for both A and B sides to an open-faced "pancake" reel (similar to motion picture film wound on a plastic core) of raw 1/8" audio tape (for cassettes), or all 8 tape tracks to back-lubricated 1/4" audio tape (for 8-track cartridges) also wound on a "pancake" reel, at the same high speed. After it is recorded, this pancake of tape is then loaded onto special machin
https://en.wikipedia.org/wiki/Resolution%20%28logic%29
In mathematical logic and automated theorem proving, resolution is a rule of inference leading to a refutation-complete theorem-proving technique for sentences in propositional logic and first-order logic. For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the (complement of the) Boolean satisfiability problem. For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem. The resolution rule can be traced back to Davis and Putnam (1960); however, their algorithm required trying all ground instances of the given formula. This source of combinatorial explosion was eliminated in 1965 by John Alan Robinson's syntactical unification algorithm, which allowed one to instantiate the formula during the proof "on demand" just as far as needed to keep refutation completeness. The clause produced by a resolution rule is sometimes called a resolvent. Resolution in propositional logic Resolution rule The resolution rule in propositional logic is a single valid inference rule that produces a new clause implied by two clauses containing complementary literals. A literal is a propositional variable or the negation of a propositional variable. Two literals are said to be complements if one is the negation of the other (in the following, ¬c is taken to be the complement to c). The resulting clause contains all the literals that do not have complements. Formally: \[ \frac{a_1 \lor \cdots \lor a_i \lor c, \qquad b_1 \lor \cdots \lor b_j \lor \lnot c}{a_1 \lor \cdots \lor a_i \lor b_1 \lor \cdots \lor b_j} \] where all a_1, ..., a_i, b_1, ..., b_j, and c are literals, and the dividing line stands for "entails". We have the following terminology: The clauses a_1 ∨ ⋯ ∨ a_i ∨ c and b_1 ∨ ⋯ ∨ b_j ∨ ¬c are the inference's premises; a_1 ∨ ⋯ ∨ a_i ∨ b_1 ∨ ⋯ ∨ b_j (the resolvent of the premises) is its conclusion. The literal c is the left resolved literal, the literal ¬c is the right resolved literal, and c is the resolved atom or
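A small sketch of one propositional resolution step in Python, with clauses represented as frozensets of string literals and "~" marking negation (the representation is our own choice):

    def complement(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolve(c1, c2):
        """Return all resolvents of two clauses."""
        resolvents = []
        for lit in c1:
            if complement(lit) in c2:
                # Drop the complementary pair, keep everything else.
                resolvents.append((c1 - {lit}) | (c2 - {complement(lit)}))
        return resolvents

    c1 = frozenset({"a", "c"})    # a OR c
    c2 = frozenset({"b", "~c"})   # b OR NOT c
    print(resolve(c1, c2))        # [frozenset({'a', 'b'})], i.e. a OR b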
https://en.wikipedia.org/wiki/Specman
Specman is an EDA tool that provides advanced automated functional verification of hardware designs. It provides an environment for working with, compiling, and debugging testbench environments written in the e Hardware Verification Language. Specman also offers automated testbench generation to boost productivity in the context of block, chip, and system verification. The Specman tool itself does not include an HDL simulator (for design languages such as VHDL or Verilog). To simulate an e testbench with a design written in VHDL/Verilog, Specman must be run in conjunction with a separate HDL simulation tool. Specman is a feature of Cadence's Xcelium simulator, where tighter product integration offers both faster runtime performance and debug capabilities not available with other HDL simulators. In principle, Specman can co-simulate with any HDL simulator supporting the standard PLI or VHPI interface, such as Synopsys's VCS or Mentor's Questa. History Specman was originally developed at Verisity, an Israel-based company, which was acquired by Cadence on April 7, 2005. It is now part of Cadence's functional verification suite.
https://en.wikipedia.org/wiki/Kneser%20graph
In graph theory, the Kneser graph K(n, k) (alternatively KG(n, k)) is the graph whose vertices correspond to the k-element subsets of a set of n elements, and where two vertices are adjacent if and only if the two corresponding sets are disjoint. Kneser graphs are named after Martin Kneser, who first investigated them in 1956. Examples The Kneser graph K(n, 1) is the complete graph on n vertices. The Kneser graph K(n, 2) is the complement of the line graph of the complete graph on n vertices. The Kneser graph K(2k + 1, k) is the odd graph O(k + 1); in particular, O(3) = K(5, 2) is the Petersen graph (see top right figure). Properties Basic properties The Kneser graph K(n, k) has C(n, k) vertices, where C(n, k) denotes the binomial coefficient. Each vertex has exactly C(n − k, k) neighbors. The Kneser graph is vertex transitive and arc transitive. When k = 2, the Kneser graph is a strongly regular graph, with parameters (C(n, 2), C(n − 2, 2), C(n − 4, 2), C(n − 3, 2)). However, it is not strongly regular when k > 2, as different pairs of nonadjacent vertices have different numbers of common neighbors depending on the size of the intersection of the corresponding pairs of sets. Because Kneser graphs are regular and edge-transitive, their vertex connectivity equals their degree, except for K(2k, k), which is disconnected. More precisely, the connectivity of K(n, k) is C(n − k, k), the same as the number of neighbors per vertex. Chromatic number As Kneser conjectured, the chromatic number of the Kneser graph K(n, k) for n ≥ 2k is exactly n − 2k + 2; for instance, the Petersen graph requires three colors in any proper coloring. This conjecture was proved in several ways. László Lovász proved this in 1978 using topological methods, giving rise to the field of topological combinatorics. Soon thereafter Imre Bárány gave a simple proof, using the Borsuk–Ulam theorem and a lemma of David Gale. Joshua E. Greene won the 2002 Morgan Prize for outstanding undergraduate research for his further simplified but still topological proof. In 2004, Jiří Matoušek found a purely combinatorial proof. In contrast, the fractional chromatic number of these graphs is n/k. When n < 2k, the Kneser graph K(n, k) has no
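A short sketch that constructs K(n, k) directly from the definition above; the vertex and edge counts for K(5, 2) match the Petersen graph:

    from itertools import combinations

    def kneser_graph(n, k):
        # Vertices are the k-element subsets of {0, ..., n-1};
        # edges join pairs of subsets that are disjoint.
        vertices = list(combinations(range(n), k))
        edges = [(u, v) for u, v in combinations(vertices, 2)
                 if not set(u) & set(v)]
        return vertices, edges

    vertices, edges = kneser_graph(5, 2)   # K(5, 2) is the Petersen graph
    print(len(vertices), len(edges))       # 10 15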
https://en.wikipedia.org/wiki/Swain%20equation
The Swain equation relates the kinetic isotope effect for the protium/tritium combination with that of the protium/deuterium combination according to: \[ \frac{k_H}{k_T} = \left(\frac{k_H}{k_D}\right)^{1.442} \] where k_H, k_D, and k_T are the reaction rate constants for the protonated, deuterated, and tritiated reactants respectively. External links Applied Swain equation
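As a quick worked illustration (our own numbers, not from the source): a deuterium isotope effect of k_H/k_D = 7 corresponds under this relation to k_H/k_T = 7^1.442 ≈ 16.5.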
https://en.wikipedia.org/wiki/Bone%20crusher
A bone crusher is a device regularly used for crushing animal bones. Bones obtained during slaughter are cleaned, boiled in water and dried for several months. After that, they are suitable for crushing with the special machine into a relatively dry, gritty powder which is used as fertilizer. The machine described here is powered by a water wheel. It contains eight S-shaped pairs of cams that raise the crushers alternately and let them fall into the material to be crushed. The simple transmission increases the rotation speed of the crusher wheel to 21 rpm from the water wheel speed of about 7 rpm. Bone meal has been used since about 1790 as a fertilizer supplement to ordinary farmyard manure. From about 1880 onwards it was supplanted by chemical fertilizers. See also Stamp mill, a similar machine used to crush ore in mining situations. Grinding mills
https://en.wikipedia.org/wiki/Software%20craftsmanship
Software craftsmanship is an approach to software development that emphasizes the coding skills of the software developers. It is a response by software developers to the perceived ills of the mainstream software industry, including the prioritization of financial concerns over developer accountability. Historically, programmers have been encouraged to see themselves as practitioners of a scientific approach, grounded in well-defined statistical analysis, mathematical rigor, and computational theory. This has changed to an engineering approach with connotations of precision, predictability, measurement, risk mitigation, and professionalism. The practice of engineering led to calls for licensing, certification and codified bodies of knowledge as mechanisms for spreading engineering knowledge and maturing the field. The Agile Manifesto, with its emphasis on "individuals and interactions over processes and tools", questioned some of these assumptions. The Software Craftsmanship Manifesto extends and challenges further the assumptions of the Agile Manifesto, drawing a metaphor between modern software development and the apprenticeship model of medieval Europe. Overview The movement traces its roots to the ideas expressed in written works. The Pragmatic Programmer by Andy Hunt and Dave Thomas and Software Craftsmanship by Pete McBreen explicitly position software development as heir to the guild traditions of medieval Europe. The philosopher Richard Sennett wrote about software as a modern craft in his book The Craftsman. Freeman Dyson, in his essay "Science as a Craft Industry", expands software crafts to include mastery of using software as a driver for economic benefit: "In spite of the rise of Microsoft and other giant producers, software remains in large part a craft industry. Because of the enormous variety of specialized applications, there will always be room for individuals to write software based on their unique knowledge. There will always be niche markets to keep
https://en.wikipedia.org/wiki/Dudeney%20number
In number theory, a Dudeney number in a given number base b is a natural number equal to the perfect cube of another natural number such that the digit sum of the first natural number is equal to the second. The name derives from Henry Dudeney, who noted the existence of these numbers in one of his puzzles, Root Extraction, where a professor in retirement at Colney Hatch postulates this as a general method for root extraction. Mathematical definition Let n be a natural number. We define the Dudeney function for base b > 1 and power p > 0, F_{p,b}(n), to be the digit sum of n^p written in base b; the sum runs over k digit positions, where k is p times the number of digits of n in base b (an upper bound on the number of digits of n^p, with leading zeros contributing nothing). A natural number n is a Dudeney root if it is a fixed point for F_{p,b}, which occurs if F_{p,b}(n) = n. The natural number m = n^p is a generalised Dudeney number, and for p = 3, the numbers are known as Dudeney numbers. 0 and 1 are trivial Dudeney numbers for all b and p; all other Dudeney numbers are nontrivial Dudeney numbers. For p = 3 and b = 10, there are exactly six such integers: 1, 512, 4913, 5832, 17576, and 19683. A natural number n is a sociable Dudeney root if it is a periodic point for F_{p,b}, where F_{p,b}^k(n) = n for a positive integer k, and forms a cycle of period k. A Dudeney root is a sociable Dudeney root with k = 1, and an amicable Dudeney root is a sociable Dudeney root with k = 2. Sociable Dudeney numbers and amicable Dudeney numbers are the powers of their respective roots. The number of iterations i needed for F_{p,b}^i(n) to reach a fixed point is the Dudeney function's persistence of n, and is undefined if it never reaches a fixed point. It can be shown that, given a number base b and power p, the maximum Dudeney root has to satisfy the bound n ≤ (b − 1)(p ⌊log_b n⌋ + p), since the digit sum of a d-digit number in base b is at most (b − 1)d; this implies a finite number of Dudeney roots and Dudeney numbers for each order p and base b. For p = 1, F_{1,b} is the digit sum function; the only Dudeney numbers are then the single-digit numbers in base b, and there are no periodic points with prime period greater than 1. Dudeney numbers, roots, and cycles of F_{p,b} for specific p and b All numbers are represented in base b. Extension to negative integers Dudeney numbers can b
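A short sketch that finds the base-10 Dudeney numbers (p = 3) by checking each candidate root against the fixed-point condition described above:

    def digit_sum(m, b=10):
        s = 0
        while m:
            s += m % b
            m //= b
        return s

    # n is a Dudeney root when digit_sum(n**3) == n; the cube is the Dudeney number.
    dudeney = [n ** 3 for n in range(1, 100) if digit_sum(n ** 3) == n]
    print(dudeney)  # [1, 512, 4913, 5832, 17576, 19683]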
https://en.wikipedia.org/wiki/Cray-4
The Cray-4 was intended to be Cray Computer Corporation's successor to the failed Cray-3 supercomputer. It was marketed to compete with the T90 from Cray Research. CCC went bankrupt in 1995 before any Cray-4 had been delivered. Design The earlier Cray-3 was the first major application of gallium arsenide (GaAs) semiconductors in computing. It was not considered a success, and only one Cray-3 was delivered. Seymour Cray moved on to the Cray-4 design, announcing the design in 1994. The Cray-4 was essentially a shrunk and sped-up version of the Cray-3, consisting of a number of vector processors attached to a fast memory. The Cray-3 supported from four to sixteen processors running at 474 MHz, while the Cray-4 scaled from four to sixty-four processors running at 1 GHz. The final packaging for the Cray-4 was intended to fit into , and was to be tested in the smaller one-CPU "tanks" from the Cray-3. A midrange system included 16 processors, 1,024 megawords (8192 MB) of memory and provided 32 gigaflops for $11 million. The local memory architecture used on the Cray-2 and Cray-3 was dropped, returning to the mass of B- and T- registers on earlier designs, owing to Seymour's lack of success using the local memory effectively. 1994 "Significant technical progress was made during 1994 on the CRAY-4, which takes advantage of technologies and manufacturing processes developed during the design and manufacture of the CRAY-3. The Company announced introduction of the CRAY-4 to the market on November 10, 1994. Several single processor CRAY-4 prototype systems, each with 64 megawords of memory, were undergoing diagnostic testing prior to the Company filing for bankruptcy. The Company began testing individual CRAY-4 modules at the start of 1994 and planned to be able to deliver a 4-processor CRAY-4 prototype system by approximately the end of the second quarter of 1995. Upon filing of bankruptcy, the Company stopped work on the CRAY-4." Legacy The processor with serial number 0
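As a back-of-the-envelope check of the mid-range configuration quoted above, the following assumes 64-bit (8-byte) words, the word size used on Cray vector machines; that assumption is not stated in the text itself.

# Sanity-check of the quoted mid-range Cray-4 figures.
megawords = 1024                      # 1,024 megawords of memory
bytes_per_word = 8                    # assumption: 64-bit Cray words
memory_mb = megawords * bytes_per_word
print(memory_mb)                      # 8192 MB, matching the text

processors = 16
total_gflops = 32
per_cpu_gflops = total_gflops / processors
print(per_cpu_gflops)                 # 2.0 GFLOPS per 1 GHz processor,
                                      # i.e. two floating-point results per
                                      # clock, consistent with separate add
                                      # and multiply pipelines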
https://en.wikipedia.org/wiki/Input/output%20Buffer%20Information%20Specification
Input/output Buffer Information Specification (IBIS) is a specification of a method for integrated circuit vendors to provide information about the input/output buffers of their product to their prospective customers without revealing the intellectual property of their implementation and without requiring proprietary encryption keys. From version 5.0, the specification contains two separate types of models, "traditional IBIS" and "IBIS-AMI". The traditional model is generated in text format and consists of a number of tables that capture the current vs. voltage (IV) and voltage vs. time (Vt) characteristics of the buffer, as well as the values of certain parasitic components. It is a standard data exchange format for exchanging modeling information among semiconductor device suppliers, simulation software suppliers, and end users. Traditional IBIS models are generally used instead of SPICE models to perform various board-level signal integrity (SI) simulations and timing analyses. IBIS models can be used to verify signal integrity requirements, especially for high-speed products. IBIS-AMI models run in a special-purpose SerDes channel simulator, not in a SPICE-like simulator, and consist of two text files (*.ibs and *.ami) plus a platform-specific machine code executable file (*.dll on Windows, *.so on Linux). IBIS-AMI models support statistical and so-called time-domain channel simulations, and three types of IC model ("impulse-only," "GetWave-only," and "dual mode"). History Intel initiated IBIS in the early 1990s. Intel needed all of its divisions to present a common standardized model format to external customers. This prompted Intel to solicit EDA vendors to participate in the development of a common model format. The first IBIS model, version 1.0, was aimed at describing CMOS circuits and TTL I/O buffers. As IBIS evolved with the participation of more companies and industry members, an IBIS Open Forum was created to promote the application of IBIS as a s
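As an illustration of the "traditional IBIS" tables described above, here is a minimal Python sketch that extracts one current-vs-voltage table from IBIS-style text. It relies only on a few conventions of the text format ('|' starts a comment, keywords sit in square brackets, IV tables hold a voltage plus typ/min/max currents per line, and 'NA' marks an uncharacterized corner); the [Pulldown] excerpt itself is hypothetical, not taken from any real model.

from typing import Optional

# Hypothetical excerpt in the style of a traditional .ibs file.
SAMPLE = """\
[Pulldown]
| voltage   I(typ)     I(min)     I(max)
  -5.0V     -50.0mA    -40.0mA    -60.0mA
   0.0V       0.0mA      0.0mA      0.0mA
   5.0V      50.0mA     40.0mA     NA
"""

_SCALE = {"m": 1e-3, "u": 1e-6, "n": 1e-9, "k": 1e3}

def parse_value(token: str) -> Optional[float]:
    # Convert tokens like '-50.0mA', '5.0V' or 'NA' to floats in SI units.
    if token.upper() == "NA":
        return None                      # corner not characterized
    token = token.rstrip("AV")           # drop the unit letter (amps/volts)
    if token and token[-1] in _SCALE:
        return float(token[:-1]) * _SCALE[token[-1]]
    return float(token)

def parse_iv_table(text: str, keyword: str = "[Pulldown]"):
    # Collect (voltage, I_typ, I_min, I_max) rows under the given keyword.
    rows, in_table = [], False
    for line in text.splitlines():
        line = line.split("|", 1)[0].strip()   # '|' starts a comment
        if not line:
            continue
        if line.startswith("["):               # a new keyword section
            in_table = (line == keyword)
            continue
        if in_table:
            v, typ, mn, mx = (parse_value(t) for t in line.split())
            rows.append((v, typ, mn, mx))
    return rows

for row in parse_iv_table(SAMPLE):
    print(row)   # e.g. (-5.0, -0.05, -0.04, -0.06)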
https://en.wikipedia.org/wiki/Uncleftish%20Beholding
"Uncleftish Beholding" (1989) is a short text by Poul Anderson, included in his anthology "All One Universe". It is designed to illustrate what English might look like without its large number of loanwords from languages such as French, Greek, and Latin, especially with regard to the proportion of scientific words with origins in those languages. Written as a demonstration of linguistic purism in English, the work explains atomic theory using Germanic words almost exclusively and coining new words when necessary; many of these new words have cognates in modern German, an important scientific language in its own right. The title phrase uncleftish beholding calques "atomic theory." To illustrate, the text begins: It goes on to define firststuffs (chemical elements), such as waterstuff (hydrogen), sourstuff (oxygen), and ymirstuff (uranium), as well as bulkbits (molecules), bindings (compounds), and several other terms important to uncleftish worldken (atomic science). and are the modern German words for hydrogen and oxygen, and in Dutch the modern equivalents are and . Sunstuff refers to helium, which derives from , the Ancient Greek word for 'sun'. Ymirstuff references Ymir, a giant in Norse mythology similar to Uranus in Greek mythology. Glossary The vocabulary used in "Uncleftish Beholding" does not completely derive from Anglo-Saxon. Around, from Old French (Modern French ), completely displaced Old English (modern English (now obsolete), cognate to German and Latin ) and left no "native" English word for this concept. The text also contains the French-derived words rest, ordinary and sort. The text gained increased exposure and popularity after being circulated around the Internet, and has served as inspiration for some inventors of Germanic English conlangs. Douglas Hofstadter, in discussing the piece in his book , jocularly refers to the use of only Germanic roots for scientific pieces as "Ander-Saxon." See also Anglish Thing Explainer