https://en.wikipedia.org/wiki/Patch%20panel
A patch panel is a device or unit featuring a number of jacks, usually of the same or similar type, used to connect and route circuits for monitoring, interconnecting, and testing in a convenient, flexible manner. Patch panels are commonly used in computer networking, recording studios, and radio and television. The term patch came from early use in telephony and radio studios, where extra equipment kept on standby could be temporarily substituted for failed devices. This reconnection was done via patch cords and patch panels, like the jack fields of cord-type telephone switchboards. Terminology Patch panels are also referred to as patch bays, patch fields, jack panels or jack fields. Uses and connectors In recording studios, television and radio broadcast studios, and concert sound reinforcement systems, patchbays are widely used to facilitate the connection of different devices, such as microphones, electric or electronic instruments, effects (e.g. compression, reverb, etc.), recording gear, amplifiers, or broadcasting equipment. Patchbays make it easier to connect different devices in different orders for different projects, because all of the changes can be made at the patchbay. Additionally, patchbays make it easier to troubleshoot problems such as ground loops; even small home studios and amateur project studios often use patchbays, because they group all of the input jacks into one location. This means that devices mounted in racks or keyboard instruments can be connected without having to hunt around behind the rack or instrument with a flashlight for the right jack. Using a patchbay also saves wear and tear on the input jacks of studio gear and instruments, because all of the connections are made with the patchbay. Patch panels are used increasingly in domestic installations, owing to the popularity of "structured wiring" installs. They are also found more and more in home cinema installations. Normalization It is convent
https://en.wikipedia.org/wiki/Agitator%20%28device%29
An agitator is a device or mechanism to put something into motion by shaking or stirring. There are several types of agitation machines, including washing machine agitators (which rotate back and forth) and magnetic agitators (which contain a magnetic bar rotating in a magnetic field). Agitators can come in many sizes and varieties, depending on the application. In general, agitators usually consist of an impeller and a shaft. An impeller is a rotor located within a tube or conduit attached to the shaft; it raises the pressure in order to drive the flow of the fluid. Modern industrial agitators incorporate process control to maintain better control over the mixing process. Washing machine agitator In a top load washing machine the agitator projects from the bottom of the wash basket and creates the wash action by rotating back and forth, rolling garments from the top of the load, down to the bottom, then back up again. There are several types of agitators, with the most common being the "straight-vane" and "dual-action" agitators. The "straight-vane" is a one-part agitator with bottom and side fins that usually turns back and forth. The "dual-action" is a two-part agitator that has bottom washer fins that move back and forth and a spiral top that rotates clockwise to help guide the clothes to the bottom washer fins. The modern dual-action agitator was introduced in Kenmore washing machines in the 1980s and remains in use to the present. These agitators are known by the company as dual-rollover and triple-rollover action agitators. Magnetic agitator This is a device formed by a metallic bar (called the agitation bar), which is normally covered by a plastic layer, and a sheet that has underneath it a rotary magnet or a series of electromagnets arranged in a circular form to create a rotating magnetic field. Commonly, the sheet has an arrangement of electric resistances that can heat some chemical solutions. During the operation of a typical magnetic agita
https://en.wikipedia.org/wiki/Orthometric%20height
The orthometric height is the vertical distance H along the plumb line from a point of interest to a reference surface known as the geoid, the vertical datum that approximates mean sea level. Orthometric height is one of the scientific formalizations of a layperson's "height above sea level", along with other types of heights in geodesy. In the US, the current NAVD88 datum is tied to a defined elevation at one point rather than to any location's exact mean sea level. Orthometric heights are usually used in the US for engineering work, although dynamic height may be chosen for large-scale hydrological purposes. Heights for measured points are shown on National Geodetic Survey data sheets, data that was gathered over many decades by precise spirit leveling over thousands of miles. Alternatives to orthometric height include dynamic height and normal height, and various countries may choose to operate with those definitions instead of orthometric. They may also adopt slightly different but similar definitions for their reference surface. Since gravity is not constant over large areas, the orthometric height of a level (equipotential) surface other than the reference surface is not constant, and orthometric heights need to be corrected for that effect. For example, gravity is 0.1% stronger in the northern United States than in the southern, so a level surface that has an orthometric height of 1000 meters in one place will be about 1001 meters high in other places. For this reason, dynamic height is the most appropriate height measure when working with the level of water over a large geographic area. Orthometric heights may be obtained from differential leveling height differences by correcting for gravity variations. Practical applications must use a model rather than measurements to calculate the change in gravitational potential versus depth in the earth, since the geoid is below most of the land surface (e.g., the Helmert orthometric heights of NAVD88). GPS measurements give
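A minimal sketch of the relationship between geopotential numbers and the two height types discussed above: orthometric height divides the geopotential number by the mean gravity along the plumb line, while dynamic height divides it by a single conventional gravity value. The function names and sample values are illustrative, not from the article.

```python
# Sketch: converting a geopotential number C (m^2/s^2) to orthometric and
# dynamic height. GAMMA_45 is the conventional normal gravity at 45 degrees
# latitude used for dynamic heights (value quoted here as an assumption).
GAMMA_45 = 9.80620  # m/s^2

def orthometric_height(C, g_mean):
    """H = C / g_mean, where g_mean is mean gravity along the plumb line."""
    return C / g_mean

def dynamic_height(C):
    """Dynamic height uses one conventional gravity value for all points."""
    return C / GAMMA_45

C = 9806.2  # made-up geopotential number for a point ~1000 m above the geoid
print(dynamic_height(C))               # exactly 1000.0 by construction
print(orthometric_height(C, 9.8100))   # slightly less, since g_mean is larger
```

The two results differ because g_mean varies from place to place while GAMMA_45 does not, which is precisely why a level surface does not have constant orthometric height.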
https://en.wikipedia.org/wiki/363%20%28number%29
363 (three hundred [and] sixty-three) is the natural number following 362 and preceding 364. In mathematics It is an odd, composite, positive, real integer, composed of a prime (3) and a prime squared (11²). 363 is a deficient number and a perfect totient number. 363 is a palindromic number in bases 3, 10, 11 and 32. 363 is a repdigit (BB) in base 32. The Mertens function of 363 returns 0. Any number formed from a subset of its digits is divisible by three. 363 is the sum of nine consecutive primes (23 + 29 + 31 + 37 + 41 + 43 + 47 + 53 + 59). 363 is the sum of five consecutive powers of 3 (3 + 9 + 27 + 81 + 243). 363 can be expressed as the sum of three squares in four different ways: 11² + 11² + 11², 5² + 7² + 17², 1² + 1² + 19², and 13² + 13² + 5². 363 cubits is the solution given to Rhind Mathematical Papyrus question 50 – find the side length of an octagon with the same area as a circle 9 khet in diameter. References Integers
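Most of the arithmetic claims above can be checked mechanically; the following sketch verifies the factorization, the prime and power sums, the three-square representations, and the base-32 repdigit.

```python
# Sanity checks (standard definitions assumed) for properties of 363.
assert 363 == 3 * 11**2                                    # prime times prime squared
assert sum([23, 29, 31, 37, 41, 43, 47, 53, 59]) == 363    # nine consecutive primes
assert sum(3**k for k in range(1, 6)) == 363               # 3 + 9 + 27 + 81 + 243
assert str(363) == str(363)[::-1]                          # palindromic in base 10

# the four representations as a sum of three squares
reps = [(11, 11, 11), (5, 7, 17), (1, 1, 19), (13, 13, 5)]
assert all(a*a + b*b + c*c == 363 for a, b, c in reps)

# repdigit "BB" in base 32: digit 11 twice
assert 11 * 32 + 11 == 363
print("all checks pass")
```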
https://en.wikipedia.org/wiki/Elliptic%20surface
In mathematics, an elliptic surface is a surface that has an elliptic fibration, in other words a proper morphism with connected fibers to an algebraic curve such that almost all fibers are smooth curves of genus 1. (Over an algebraically closed field such as the complex numbers, these fibers are elliptic curves, perhaps without a chosen origin.) This is equivalent to the generic fiber being a smooth curve of genus one; the equivalence follows from proper base change. The surface and the base curve are assumed to be non-singular (complex manifolds or regular schemes, depending on the context). The fibers that are not elliptic curves are called the singular fibers and were classified by Kunihiko Kodaira. Both elliptic and singular fibers are important in string theory, especially in F-theory. Elliptic surfaces form a large class of surfaces that contains many of the interesting examples of surfaces, and are relatively well understood in the theories of complex manifolds and smooth 4-manifolds. They are analogous to elliptic curves over number fields. Examples The product of any elliptic curve with any curve is an elliptic surface (with no singular fibers). All surfaces of Kodaira dimension 1 are elliptic surfaces. Every complex Enriques surface is elliptic, and has an elliptic fibration over the projective line. Kodaira surfaces Dolgachev surfaces Shioda modular surfaces Kodaira's table of singular fibers Most of the fibers of an elliptic fibration are (non-singular) elliptic curves. The remaining fibers are called singular fibers: there are a finite number of them, and each one consists of a union of rational curves, possibly with singularities or non-zero multiplicities (so the fibers may be non-reduced schemes). Kodaira and Néron independently classified the possible fibers, and Tate's algorithm can be used to find the type of the fibers of an elliptic curve over a number field. The following table lists the possible fibers of a minimal el
https://en.wikipedia.org/wiki/Why%27s%20%28poignant%29%20Guide%20to%20Ruby
why's (poignant) Guide to Ruby, sometimes called w(p)GtR or just "the poignant guide", is an introductory book to the Ruby programming language, written by why the lucky stiff. The book is distributed under the Creative Commons Attribution-ShareAlike license. The book is unusual among programming books in that it includes much strange humor and many narrative side tracks which are sometimes completely unrelated to the topic. Many motifs have become inside jokes in the Ruby community, such as references to the words "chunky bacon". The book includes many characters which have become popular as well, particularly the cartoon foxes and Trady Blix, a large black feline friend of why's, who acts as a guide to the foxes (and occasionally teaches them some Ruby). The book is published in HTML and PDF. Chapter three was reprinted in The Best Software Writing I: Selected and Introduced by Joel Spolsky (Apress, 2005). Contents About this book Kon'nichi wa, Ruby A Quick (and Hopefully Painless) Ride Through Ruby (with Cartoon Foxes): basic introduction to central Ruby concepts Floating Little Leaves of Code: evaluation and values, hashes and lists Them What Make the Rules and Them What Live the Dream: case/when, while/until, variable scope, blocks, methods, class definitions, class attributes, objects, modules, introspection in IRB, dup, self, module Downtown: metaprogramming, regular expressions When You Wish Upon a Beard: send method, new methods in existing classes The following chapters are "Expansion Packs": The Tiger's Vest (with a Basic Introduction to IRB): discusses IRB, the interactive Ruby interpreter. External links Original Site Actively maintained fork 3rd-party PDF version: Ruby Inside Computer programming books Creative Commons-licensed books Ruby (programming language) Books about free software
https://en.wikipedia.org/wiki/Push%20Proxy%20Gateway
A Push Proxy Gateway is a component of WAP gateways that pushes URL notifications to mobile handsets. Notifications typically include MMS, email, IM, ringtone downloads, and new device firmware notifications. Most notifications will have an audible alert to the user of the device. The notification will typically be a text string with a URL link. Note that only a notification is pushed to the device; the device must do something with the notification in order to download or view the content associated with it. Technical specifications PUSH to PPG A push message is sent as an HTTP POST to the Push Proxy Gateway. The POST will be a multipart XML document, with the first part being the PAP (Push Access Protocol) section and the second part being either a Service Indication or a Service Loading.

+---------------------------------------------+
|                  HTTP POST                  |  \
+---------------------------------------------+   | WAP
|                   PAP XML                   |   | PUSH
+---------------------------------------------+   | Flow
|  Service Indication or Service Loading XML  |  /
+---------------------------------------------+

POST The POST contains at a minimum the URL being posted to (this is not standard across different PPG vendors), and the content type. An example of a PPG POST: PAP The PAP XML contains at the minimum a <pap> element, a <push-message> element, and an <address> element. An example of a PAP XML: --someboundarymesg Content-Type: application/xml The important parts of this PAP message are the address value and type. The value is typically an MSISDN and the type indicates whether to send to an MSISDN (the typical case) or to an IP address. The TYPE is almost always MSISDN, as the Push Initiator (PI) will not typically have the Mobile Station's IP address, which is generally dynamic. In the case of an IP address: TYPE=USER@a.b.c.d Additional capability of PAP can be found in the PAP article. Service In
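Since the PAP body itself is not reproduced above, here is an illustrative sketch of what a minimal PAP XML part might look like, using only the <pap>, <push-message>, and <address> elements the text names; the push-id, MSISDN, and host are made up, and TYPE=PLMN is the conventional type for MSISDN addressing.

```xml
<?xml version="1.0"?>
<pap>
  <push-message push-id="example-push-0001">
    <!-- address-value format follows the TYPE convention mentioned in the
         text; the phone number and PPG host shown are fabricated examples -->
    <address address-value="WAPPUSH=447700900123/TYPE=PLMN@ppg.example.net"/>
  </push-message>
</pap>
```

This fragment would form the first multipart section of the POST, with the Service Indication or Service Loading XML as the second section.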
https://en.wikipedia.org/wiki/Partial%20order%20reduction
In computer science, partial order reduction is a technique for reducing the size of the state-space to be searched by a model checking or automated planning and scheduling algorithm. It exploits the commutativity of concurrently executed transitions that result in the same state when executed in different orders. In explicit state space exploration, partial order reduction usually refers to the specific technique of expanding a representative subset of all enabled transitions. This technique has also been described as model checking with representatives. There are various versions of the method, the so-called stubborn set method, ample set method, and persistent set method. Ample sets Ample sets are an example of model checking with representatives. Their formulation relies on a separate notion of dependency. Two transitions are considered independent only if, whenever they are mutually enabled, they cannot disable one another and the execution of both results in a unique state regardless of the order in which they are executed. Transitions that are not independent are dependent. In practice, dependency is approximated using static analysis. Ample sets for different purposes can be defined by giving conditions as to when a set of transitions ample(s) is "ample" in a given state s. C0 ample(s) is empty exactly when there are no enabled transitions in s. C1 If a transition depends on some transition in ample(s), that transition cannot be invoked until some transition in the ample set is executed. Conditions C0 and C1 are sufficient for preserving all the deadlocks in the state space. Further restrictions are needed in order to preserve more nuanced properties. For instance, in order to preserve properties of linear temporal logic, the following two conditions are needed: C2 If ample(s) does not contain every transition enabled in s, each transition in the ample set is invisible. C3 A cycle is not allowed if it contains a state in which some transition is enabled, but is never included in ample(s) for any states s on the cycle. These conditions are sufficient for an ample set, but not necessary
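A toy Python illustration of the commutativity the technique exploits (not a full ample-set implementation): two transitions touching disjoint variables reach the same state in either order, so expanding a single representative interleaving is sound.

```python
# Two independent transitions (they touch disjoint variables, so neither can
# disable the other): executing them in either order yields the same state.
from itertools import permutations

def t1(state):  # increment x
    return {**state, "x": state["x"] + 1}

def t2(state):  # increment y
    return {**state, "y": state["y"] + 1}

init = {"x": 0, "y": 0}
finals = set()
for order in permutations([t1, t2]):
    s = init
    for t in order:
        s = t(s)
    finals.add(tuple(sorted(s.items())))

print(finals)  # a single final state: both interleavings commute
assert len(finals) == 1
```

Because both interleavings collapse to one state, a reduced search that expands only one of them still reaches every reachable deadlock, which is what conditions C0 and C1 guarantee in general.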
https://en.wikipedia.org/wiki/Orbital%20plane
The orbital plane of a revolving body is the geometric plane in which its orbit lies. Three non-collinear points in space suffice to determine an orbital plane. A common example would be the positions of the centers of a massive body (host) and of an orbiting celestial body at two different times/points of its orbit. The orbital plane is defined in relation to a reference plane by two parameters: inclination (i) and longitude of the ascending node (Ω). By definition, the reference plane for the Solar System is usually considered to be Earth's orbital plane, which defines the ecliptic, the circular path on the celestial sphere that the Sun appears to follow over the course of a year. In other cases, for instance for a moon or artificial satellite orbiting another planet, it is convenient to define the inclination of its orbit as the angle between its orbital plane and the planet's equatorial plane. The coordinate system that uses the orbital plane as its fundamental plane is known as the perifocal coordinate system. Artificial satellites around the Earth For launch vehicles and artificial satellites, the orbital plane is a defining parameter of an orbit, since, in general, it takes a very large amount of propellant to change the orbital plane of an object. Other parameters, such as the orbital period, the eccentricity of the orbit and the phase of the orbit, are more easily changed by propulsion systems. Orbital planes of satellites are perturbed by the non-spherical nature of the Earth's gravity. This causes the orbital plane of the satellite's orbit to slowly rotate around the Earth, depending on the angle the plane makes with the Earth's equator. For planes that are at a critical angle this can mean that the plane will track the Sun around the Earth, forming a Sun-synchronous orbit. A launch vehicle's launch window is usually determined by the times when the target orbital plane passes through the launch site. See also Earth-centered inertial coordinate sys
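As a sketch of the three-point construction described above, the plane normal can be computed with a cross product and the inclination read off as the angle between that normal and the reference plane's normal; the position values are made up.

```python
# Sketch: orbital plane from three non-collinear points (host center plus the
# orbiting body at two times), and the inclination i of that plane relative
# to a reference plane. Pure stdlib; coordinates are illustrative.
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def norm(a): return math.sqrt(sum(x*x for x in a))

host = (0.0, 0.0, 0.0)
r1 = (7000.0, 0.0, 0.0)       # satellite position at time 1
r2 = (0.0, 6062.2, 3500.0)    # at time 2, about 30 deg out of the xy-plane

n = cross(sub(r1, host), sub(r2, host))   # normal to the orbital plane
k = (0.0, 0.0, 1.0)                       # normal to the reference plane
cos_i = sum(a*b for a, b in zip(n, k)) / norm(n)
i = math.degrees(math.acos(cos_i))
print(round(i, 1))  # -> 30.0 (degrees)
```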
https://en.wikipedia.org/wiki/MOS%20Technology%208568
The MOS Technology 8568 Video Display Controller (VDC) was the graphics processor responsible for the 80 column or RGBI display on the Commodore 128DCR personal computer. In the Commodore 128 service manual, this part was referred to as the "80 column CRT controller." The 8568 embodied many of the features of the older 6545E monochrome CRT controller plus RGBI color. The original ("flat") Commodore 128 and the Commodore 128D (European plastic housing) used the 8563 video controller to generate the 80 column display. The 8568 was essentially an updated version of the 8563, combining the latter's functionality with glue logic that had been implemented by discrete components in physical proximity to the 8563. Unlike the 8563, the 8568 included an unused (in the C-128) active low interrupt request line (/INTR), which was asserted when the "ready" bit in the 8568's status register changed from 0 to 1. Reading the control register would automatically deassert /INTR. Owing to differences in pin assignments and circuit interfacing, the 8563 and 8568 are not electrically interchangeable. The Commodore 128 had two video display modes, which were usually used singly, but could be used simultaneously if the computer was connected to two compatible video monitors. The VIC-II chip, also found in the Commodore 64, was mapped directly into main memory—the video memory and the CPUs (the 8502 and Z80A processors) shared a common 128 KB RAM, and the VIC-II control registers were accessed as memory locations (that is, they were memory mapped). Unlike the VIC-II, the 8568 had its own local video RAM: 64 KB in the C-128DCR model (sold in North America) and, depending on the date of manufacture of the particular machine, either 16 or 64 KB in the C-128D model (marketed in Europe). Addressing the VDC's internal registers and dedicated video memory must be accomplished by indirect means. First the program must tell the VDC which of its 37 internal registers is to be accessed. Next t
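The two-step indirect access can be modeled as follows. This is a Python sketch of the access pattern, not real C-128 code; the register number and value used are illustrative.

```python
# Model of the VDC's indirect register access: first select one of the 37
# internal registers, then read or write it through a separate data port.
# On real hardware both ports are memory-mapped I/O locations.
class VDC:
    def __init__(self):
        self.registers = [0] * 37   # the 37 internal registers
        self._selected = 0

    def select(self, regnum):
        """Write to the address/status port: choose which register to access."""
        assert 0 <= regnum < 37
        self._selected = regnum

    def write_data(self, value):
        """Subsequent data-port writes hit the selected register."""
        self.registers[self._selected] = value & 0xFF

    def read_data(self):
        """Subsequent data-port reads come from the selected register."""
        return self.registers[self._selected]

vdc = VDC()
vdc.select(12)         # register number chosen arbitrarily for illustration
vdc.write_data(0x10)
print(hex(vdc.read_data()))  # -> 0x10
```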
https://en.wikipedia.org/wiki/Instruction%20set%20simulator
An instruction set simulator (ISS) is a simulation model, usually coded in a high-level programming language, which mimics the behavior of a mainframe or microprocessor by "reading" instructions and maintaining internal variables which represent the processor's registers. Instruction simulation is a methodology employed for one of several possible reasons: To simulate the instruction set architecture (ISA) of a future processor to allow software development and test to proceed without waiting for the development and production of the hardware to finish. This is often known as "shift-left" or "pre-silicon support" in the hardware development field. A full system simulator or virtual platform for the future hardware typically includes one or more instruction set simulators. To simulate the machine code of another hardware device or entire computer for upward compatibility. For example, the IBM 1401 was simulated on the later IBM/360 through use of microcode emulation. To monitor and execute the machine code instructions (but treated as an input stream) on the same hardware for test and debugging purposes, e.g. with memory protection (which protects against accidental or deliberate buffer overflow). To improve the speed performance—compared to a slower cycle-accurate simulator—of simulations involving a processor core where the processor itself is not one of the elements being verified; in hardware description language design using Verilog where simulation with tools like ISS can be run faster by means of "PLI" (not to be confused with PL/1, which is a programming language). Implementation Instruction-set simulators can be implemented using three main techniques: Interpretation, where each instruction is executed directly by the ISS. Just-in-time compilation (JIT), where the code to be executed is first translated into the instruction set of the host computer. This is typically about ten times faster than a well-optimized interpreter. Virtualization, wher
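A minimal interpretive ISS in the spirit of the first implementation technique above: the simulator "reads" instructions of a made-up three-instruction machine and maintains variables standing in for the processor's registers and program counter. The instruction set is invented purely for illustration.

```python
# Minimal interpretive instruction set simulator for a toy machine with
# four registers and three instructions: LOADI, ADD, and JNZ.
def simulate(program):
    regs = [0] * 4          # general-purpose registers
    pc = 0                  # program counter
    while pc < len(program):
        op, a, b = program[pc]
        if op == "LOADI":   # load immediate: regs[a] = b
            regs[a] = b
        elif op == "ADD":   # regs[a] += regs[b]
            regs[a] += regs[b]
        elif op == "JNZ":   # jump to address b if regs[a] != 0
            if regs[a] != 0:
                pc = b
                continue
        pc += 1
    return regs

# Sum 1..5 by looping: r0 = counter, r1 = accumulator, r2 = -1 step
prog = [
    ("LOADI", 0, 5),
    ("LOADI", 1, 0),
    ("LOADI", 2, -1),
    ("ADD", 1, 0),     # accumulator += counter
    ("ADD", 0, 2),     # counter -= 1
    ("JNZ", 0, 3),     # loop while counter != 0
]
print(simulate(prog))  # -> [0, 15, -1, 0]
```

A JIT-based ISS would instead translate `prog` into host machine code once and execute that, trading startup cost for speed on hot loops.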
https://en.wikipedia.org/wiki/Machine%20epsilon
Machine epsilon or machine precision is an upper bound on the relative approximation error due to rounding in floating point arithmetic. This value characterizes computer arithmetic in the field of numerical analysis, and by extension in the subject of computational science. The quantity is also called macheps and it is denoted by the Greek letter epsilon, ε. There are two prevailing definitions. In numerical analysis, machine epsilon is dependent on the type of rounding used and is also called unit roundoff, which has the symbol bold Roman u. However, by a less formal, but more widely used definition, machine epsilon is independent of rounding method and may be equivalent to u or 2u. Values for standard hardware arithmetics The following table lists machine epsilon values for standard floating-point formats. Each format uses round-to-nearest. Formal definition Rounding is a procedure for choosing the representation of a real number in a floating point number system. For a number system and a rounding procedure, machine epsilon is the maximum relative error of the chosen rounding procedure. Some background is needed to determine a value from this definition. A floating point number system is characterized by a radix b, which is also called the base, and by the precision p, i.e. the number of radix digits of the significand (including any leading implicit bit). All the numbers with the same exponent e have the spacing b^(e−p+1). The spacing changes at the numbers that are perfect powers of b; the spacing on the side of larger magnitude is b times larger than the spacing on the side of smaller magnitude. Since machine epsilon is a bound for relative error, it suffices to consider numbers with exponent e = 0. It also suffices to consider positive numbers. For the usual round-to-nearest kind of rounding, the absolute rounding error is at most half the spacing, or b^(1−p)/2. This value is the biggest possible numerator for the relative error. The denominator in the relative error is the number be
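For IEEE 754 double precision (b = 2, p = 53), the widely used definition gives ε = 2^−52, which can be measured directly by halving a candidate until adding it to 1.0 no longer changes the result; a small sketch:

```python
# Measure machine epsilon for Python floats (IEEE 754 binary64) by halving
# until 1.0 + eps/2 is rounded back to 1.0 under round-to-nearest.
import sys

def measure_epsilon():
    eps = 1.0
    while 1.0 + eps / 2 > 1.0:
        eps /= 2
    return eps

print(measure_epsilon())          # 2.220446049250313e-16, i.e. 2**-52
print(sys.float_info.epsilon)     # the same value, reported by the runtime
```

The unit roundoff u under the formal definition is half this value, 2^−53, matching the b^(1−p)/2 bound above with b = 2 and p = 53.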
https://en.wikipedia.org/wiki/Purlin
A purlin (or historically purline, purloyne, purling, perling) is a longitudinal, horizontal, structural member in a roof. In traditional timber framing there are three basic types of purlin: purlin plate, principal purlin, and common purlin. Purlins also appear in steel frame construction. Steel purlins may be painted or greased for protection from the environment. Etymology Information on the origin of the term "purlin" is scant. The Oxford Dictionary suggests a French origin, with the earliest quote using a variation of purlin in 1447, though the accuracy of this claim has been disputed. In wood construction Purlin plate A purlin plate in wood construction is also called an "arcade plate" in European English, "under purlin", and "principal purlin". The term plate means a major, horizontal, supporting timber. Purlin plates are beams which support the mid-span of rafters and are supported by posts. By supporting the rafters they allow longer spans than the rafters alone could span, thus allowing a wider building. Purlin plates are very commonly found in large old barns in North America. A crown plate has similarities to a purlin plate but supports collar beams in the middle of a timber-framed building. Principal purlin Principal purlins in wood construction, also called "major purlins" and "side purlins", are supported by principal rafters and support common rafters in what is known as a "double roof" (a roof framed with a layer of principal rafters and a layer of common rafters). Principal purlins are further classified by how they connect to the principal rafters: "through purlins" pass over the top; "butt purlins" tenon into the sides of the principal rafters; and "clasped purlins", of which only one historic U.S. example is known, are captured by a collar beam. Through purlins are further categorized as "trenched", "back", or "clasped"; butt purlins are classified as "threaded", "tenoned", and/or "staggered". Common purlin Common purlins in wood cons
https://en.wikipedia.org/wiki/Metal%20Slug%203
Metal Slug 3 is a run and gun video game developed by SNK. It was originally released in 2000 for the Neo-Geo MVS arcade platform as the sequel to Metal Slug 2/Metal Slug X. The music of the game was developed by Noise Factory. The game was ported to the PlayStation 2, Xbox, Microsoft Windows, iOS, Android, Wii, PlayStation Portable, PlayStation 3, PlayStation 4, PlayStation Vita, and Nintendo Switch. The game adds several features to the gameplay of the original Metal Slug and Metal Slug 2, such as new weapons and vehicles, as well as introducing branching paths into the series. It received generally positive reviews. Gameplay The gameplay mechanics are the same as in previous Metal Slug games; the player must shoot constantly at a continual stream of enemies in order to reach the end of each level. At this point, the player confronts a boss, who is usually considerably larger and tougher than regular enemies. On the way through each level, the player can find numerous weapon upgrades and "Metal Slug" tanks. The tank is known as the SV-001 ("SV" stands for Super Vehicle), which increases the player's offense and considerably adds to their defense. In addition to shooting, the player can perform melee attacks by using a knife and/or kicking. The player does not die by coming into contact with enemies, and correspondingly, many of the enemy troops have melee attacks. Much of the game's scenery is destructible, and occasionally, this reveals extra items or power-ups. During the course of a level, the player encounters prisoners of war (POWs), who, if freed, offer the player bonuses in the form of random items or weapons. At the end of each level, the player receives a scoring bonus based on the number of freed POWs. If the player dies before the end of the level, the tally of freed POWs reverts to zero. A new feature in Metal Slug 3 is the branching path system; in most missions, there are forking paths from which the player must choose one, each with their own obstacles, and
https://en.wikipedia.org/wiki/Fenchel%27s%20duality%20theorem
In mathematics, Fenchel's duality theorem is a result in the theory of convex functions named after Werner Fenchel. Let ƒ be a proper convex function on Rn and let g be a proper concave function on Rn. Then, if regularity conditions are satisfied, inf_x (ƒ(x) − g(x)) = sup_p (g∗(p) − ƒ∗(p)), where ƒ∗ is the convex conjugate of ƒ (also referred to as the Fenchel–Legendre transform) and g∗ is the concave conjugate of g. That is, ƒ∗(p) = sup_x {⟨p, x⟩ − ƒ(x)} and g∗(p) = inf_x {⟨p, x⟩ − g(x)}. Mathematical theorem Let X and Y be Banach spaces, f : X → R ∪ {+∞} and g : Y → R ∪ {+∞} be convex functions and A : X → Y be a bounded linear map. Then the Fenchel problems p∗ = inf_{x ∈ X} {f(x) + g(Ax)} and d∗ = sup_{y∗ ∈ Y∗} {−f∗(A∗y∗) − g∗(−y∗)} satisfy weak duality, i.e. p∗ ≥ d∗. Note that f∗, g∗ are the convex conjugates of f, g respectively, and A∗ is the adjoint operator. The perturbation function for this dual problem is given by F(x, y) = f(x) + g(Ax − y). Suppose that f, g, and A satisfy either (a) f and g are lower semi-continuous and 0 ∈ core(dom g − A dom f), where core is the algebraic interior and dom h, where h is some function, is the set {z : h(z) < +∞}, or (b) A dom f ∩ cont g ≠ ∅, where cont g is the set of points where the function g is continuous. Then strong duality holds, i.e. p∗ = d∗. If d∗ ∈ R then the supremum is attained. One-dimensional illustration In the following figure, the minimization problem on the left side of the equation is illustrated. One seeks to vary x such that the vertical distance between the convex and concave curves at x is as small as possible. The position of the vertical line in the figure is the (approximate) optimum. The next figure illustrates the maximization problem on the right hand side of the above equation. Tangents are drawn to each of the two curves such that both tangents have the same slope p. The problem is to adjust p in such a way that the two tangents are as far away from each other as possible (more precisely, such that the points where they intersect the y-axis are as far from each other as possible). Imagine the two tangents as metal bars with vertical springs between them that push them apart and against the two parabolas that are fixed in place. Fenchel's theorem states that the two problems have the same solution. The points having the minimum vertical sepa
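A numeric sanity check of the one-dimensional duality on a grid; the choice f(x) = x² (convex) and g(x) = −(x − 1)² (concave) is illustrative, not from the article. Both sides evaluate to 1/2, attained at x = 1/2 and p = 1.

```python
# Conjugates computed by brute force on a grid:
#   f*(p) = sup_x (p x - f(x)),  g*(p) = inf_x (p x - g(x)).
xs = [i / 100.0 for i in range(-400, 401)]
ps = xs

f = lambda x: x * x               # convex
g = lambda x: -(x - 1) ** 2       # concave

primal = min(f(x) - g(x) for x in xs)            # inf_x f(x) - g(x)
f_star = lambda p: max(p * x - f(x) for x in xs)
g_star = lambda p: min(p * x - g(x) for x in xs)
dual = max(g_star(p) - f_star(p) for p in ps)    # sup_p g*(p) - f*(p)

print(round(primal, 3), round(dual, 3))  # -> 0.5 0.5
```

Analytically f*(p) = p²/4 and g*(p) = p − p²/4, so the dual objective p − p²/2 peaks at p = 1 with value 1/2, matching the primal minimum of x² + (x − 1)² at x = 1/2.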
https://en.wikipedia.org/wiki/Kiss%20%28cryptanalysis%29
In cryptanalysis, a kiss is a pair of identical messages sent using different ciphers, one of which has been broken. The term was used at Bletchley Park during World War II. A deciphered message in the breakable system provided a "crib" (piece of known plaintext) which could then be used to read the unbroken messages. One example was where messages read in a German meteorological cipher could be used to provide cribs for reading the difficult 4-wheel Naval Enigma cipher. As one account puts it, "cribs from re-encipherments ... were known as 'kisses' in Bletchley Park parlance because the relevant signals were marked with 'xx'". See also Cryptanalysis of the Enigma Known-plaintext attack References Smith, Michael and Erskine, Ralph (editors): Action this Day (2001, Bantam London) Bletchley Park Classical cryptography Cryptographic attacks
https://en.wikipedia.org/wiki/Radio%20teleswitch
A radio teleswitch is a device used in the United Kingdom primarily to allow electricity suppliers to switch large numbers of electricity meters between different tariff rates, by embedding a switching signal within broadcast radio transmissions. Radio teleswitches are also used to switch consumer appliances on and off to make use of cheaper differential tariffs such as Economy 7. Service role The typical use of a teleswitch is to manage the start and end times of off-peak charging periods associated with tariffs such as Economy 7 and Economy 10. This includes switching between 'peak' and 'off-peak' meter registers as well as controlling the supply to dedicated off-peak loads such as night storage heating. The use of dynamic switching instead of a fixed timer allows some additional demand management, such as by flexing start and finish times for electric heating loads according to prevailing overall demand levels. Some suppliers also offer more sophisticated heating control using the radio teleswitch network. For example, Scottish Power's 'Weathercall' and SSE's 'Total Heat Total Control' both dynamically vary the length of time storage heating is energised each night depending on the forecast temperature for the following day, to help maintain a consistent household temperature. Teleswitching has also been used to help level out demand in areas where the supply network is close to capacity. In the 1990s, Manweb used such a system to provide different households with different off-peak periods on a weekly alternating basis. By spreading out the high peak demand associated with electric storage heating in Mid Wales, the company avoided upgrade costs of over a million pounds and saved £200,000 a year in use-of-system charges. In the north of Scotland, the radio teleswitch service is also used to help control the local electricity distribution network for resilience purposes. Operation Each of the user companies (the RTS Users, or Service Providers) has its own database
https://en.wikipedia.org/wiki/Multidrop%20bus
A multidrop bus (MDB) is a computer bus in which all components are connected to the electrical circuit. A process of arbitration determines which device sends information at any point. The other devices listen for the data they are intended to receive. Multidrop buses have the advantage of simplicity and extensibility, but their differing electrical characteristics make them relatively unsuitable for high frequency or high bandwidth applications. In computing Since 2000, multidrop standards such as PCI and Parallel ATA have increasingly been replaced by point-to-point systems such as PCI Express and SATA. Modern SDRAM chips exemplify the problem of electrical impedance discontinuity. Fully Buffered DIMM is an alternative approach to connecting multiple DRAM modules to a memory controller. For vending machines MDB/ICP MDB/ICP (formerly known as MDB) is a multidrop bus computer networking protocol used within the vending machine industry, currently published by the American National Automatic Merchandising Association. ccTalk The ccTalk multidrop bus protocol uses a TTL-level asynchronous serial protocol. It uses address randomization to allow multiple similar devices on the bus (after randomisation the devices can be distinguished by their serial number). ccTalk was developed by CoinControls, but is used by multiple vendors. See also Bus network topology EIA-485 1-Wire Open collector External links IBM Journal of Research and Development Computer buses
https://en.wikipedia.org/wiki/Carothers%20equation
In step-growth polymerization, the Carothers equation (or Carothers' equation) gives the degree of polymerization, $\bar{X}_n$, for a given fractional monomer conversion, $p$. There are several versions of this equation, proposed by Wallace Carothers, who invented nylon in 1935. Linear polymers: two monomers in equimolar quantities The simplest case refers to the formation of a strictly linear polymer by the reaction (usually by condensation) of two monomers in equimolar quantities. An example is the synthesis of nylon-6,6, whose formula is $[-\text{NH}-(\text{CH}_2)_6-\text{NH}-\text{CO}-(\text{CH}_2)_4-\text{CO}-]_n$, from one mole of hexamethylenediamine, $\text{H}_2\text{N}(\text{CH}_2)_6\text{NH}_2$, and one mole of adipic acid, $\text{HOOC}(\text{CH}_2)_4\text{COOH}$. For this case $\bar{X}_n = \frac{1}{1-p}$. In this equation $\bar{X}_n$ is the number-average value of the degree of polymerization, equal to the average number of monomer units in a polymer molecule. For the example of nylon-6,6, the monomer units are the diamine units and the diacid units. $p$ is the extent of reaction (or conversion to polymer), defined by $p = \frac{N_0 - N}{N_0}$, where $N_0$ is the number of molecules present initially as monomer and $N$ is the number of molecules present after time $t$. The total includes all degrees of polymerization: monomers, oligomers and polymers. This equation shows that a high monomer conversion is required to achieve a high degree of polymerization. For example, a monomer conversion, $p$, of 98% is required for $\bar{X}_n = 50$, and $p = 99\%$ is required for $\bar{X}_n = 100$. Linear polymers: one monomer in excess If one monomer is present in stoichiometric excess, then the equation becomes $\bar{X}_n = \frac{1+r}{1+r-2rp}$, where $r$ is the stoichiometric ratio of reactants; the excess reactant is conventionally the denominator so that $r < 1$. If neither monomer is in excess, then $r = 1$ and the equation reduces to the equimolar case above. The effect of the excess reactant is to reduce the degree of polymerization for a given value of $p$. In the limit of complete conversion of the limiting reagent monomer, $p \to 1$ and $\bar{X}_n \to \frac{1+r}{1-r}$. Thus for a 1% excess of one monomer, $r = 0.99$ and the limiting degree of polymerization is 199, compared to infinity for the equimolar case. An excess of one reactant can be used to c
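The worked values quoted above can be checked numerically. The following is a minimal sketch of both forms of the equation; the function names are illustrative choices, not terminology from the article:

```python
def carothers_equimolar(p):
    """Number-average degree of polymerization, X_n = 1 / (1 - p)."""
    return 1.0 / (1.0 - p)

def carothers_excess(p, r):
    """Degree of polymerization with one monomer in excess (r < 1):
    X_n = (1 + r) / (1 + r - 2*r*p)."""
    return (1.0 + r) / (1.0 + r - 2.0 * r * p)

# The cases quoted in the text:
print(carothers_equimolar(0.98))    # 98% conversion -> X_n = 50
print(carothers_equimolar(0.99))    # 99% conversion -> X_n = 100
print(carothers_excess(1.0, 0.99))  # complete conversion, 1% excess -> 199
```
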
https://en.wikipedia.org/wiki/Hilario%20Fern%C3%A1ndez%20Long
Hilario Fernández Long (12 September 1918 – 23 December 2002) was an Argentine structural engineer and educator. He was born in Bahía Blanca and was of Spanish and Volga German descent. He graduated as a Civil Engineer from the University of Buenos Aires in 1941 and his professional life was centered on structural engineering. He participated in, among other projects, the construction of the Argentine National Library, the Buenos Aires IBM Building and the Zárate–Brazo Largo and Chaco-Corrientes bridges. He pioneered the use of computer tools in his discipline. He coauthored many technical articles, and published an Introduction to Go and a Go Manual that were instrumental in the introduction of the game in Argentina. Besides his teaching activities in several universities, he was the dean of the Engineering School and Rector (i.e. President) of the University of Buenos Aires. After Juan Carlos Onganía's coup d'état in 1966, he resigned when the university's autonomy was violated in the Noche de los Bastones Largos (Night of the Long Batons), when the police forcibly entered the campuses, beating students and professors and effectively ending the so-called "Golden Age" of the University of Buenos Aires. His political commitment was reaffirmed when President Raúl Alfonsín nominated him as a member of the CONADEP commission that investigated the fate of the desaparecidos. After retirement, he was distinguished as Emeritus Professor of both the University of Buenos Aires and the Pontifical Catholic University of Argentina and Doctor Honoris Causa of the University of Buenos Aires. He died in Necochea. He belonged to several organizations: Structural Engineer Association (honor member) National Academy of Exact, Physical and Natural Sciences National Education Academy American Society of Civil Engineers (life member) Argentine Go Association (founding member) Argentine Center of Engineers Asamblea Permanente pro Derechos Humanos – APDH (Permanent Assembly for Human Rights –
https://en.wikipedia.org/wiki/High-level%20design
High-level design (HLD) explains the architecture that would be used to develop a system. The architecture diagram provides an overview of an entire system, identifying the main components that would be developed for the product and their interfaces. The HLD can use non-technical to mildly technical terms which should be understandable to the administrators of the system. In contrast, low-level design further exposes the logical detailed design of each of these elements for use by engineers and programmers. HLD documentation should cover the planned implementation of both software and hardware. Purpose Preliminary design: In the preliminary stages of system development, the need is to size the project and to identify those parts which might be risky or time-consuming. Design overview: As the project proceeds, the need is to provide an overview of how the various sub-systems and components of the system fit together. In both cases, the high-level design should be a complete view of the entire system, breaking it down into smaller parts that are more easily understood. To minimize the maintenance overhead as construction proceeds and the lower-level design is done, it is best that the high-level design is elaborated only to the degree needed to satisfy these needs. High-level design document A high-level design document or HLDD adds the necessary details to the current project description to represent a suitable model for building. This document includes a high-level architecture diagram depicting the structure of the system, such as the hardware, database architecture, application architecture (layers), application flow (navigation), security architecture and technology architecture. Design overview A high-level design provides an overview of a system, product, service, or process. Such an overview helps supporting components be compatible with others. The highest-level design should briefly describe all platforms, systems, products, services, and processes
https://en.wikipedia.org/wiki/Processing%20medium
In industrial engineering, a processing medium is a gaseous, vaporous, fluid or shapeless solid material that plays an active role in manufacturing processes - comparable to that of a tool. Examples A processing medium for washing is a soap solution, a processing medium for steel melting is a plasma, and a processing medium for steam drying is superheated steam. Synonyms Operating medium Working medium. Engineering concepts
https://en.wikipedia.org/wiki/P%E2%80%93n%20junction%20isolation
p–n junction isolation is a method used to electrically isolate electronic components, such as transistors, on an integrated circuit (IC) by surrounding the components with reverse biased p–n junctions. Introduction By surrounding a transistor, resistor, capacitor or other component on an IC with semiconductor material which is doped using an opposite species of the substrate dopant, and connecting this surrounding material to a voltage which reverse-biases the p–n junction that forms, it is possible to create a region which forms an electrically isolated "well" around the component. Operation Assume that the semiconductor wafer is p-type material. Also assume a ring of n-type material is placed around a transistor, and placed beneath the transistor. If the p-type material within the n-type ring is now connected to the negative terminal of the power supply and the n-type ring is connected to the positive terminal, the 'holes' in the p-type region are pulled away from the p–n junction, causing the width of the nonconducting depletion region to increase. Similarly, because the n-type region is connected to the positive terminal, the electrons will also be pulled away from the junction. This effectively increases the potential barrier and greatly increases the electrical resistance against the flow of charge carriers. For this reason there will be no (or minimal) electric current across the junction. At the middle of the junction of the p–n material, a depletion region is created to stand-off the reverse voltage. The width of the depletion region grows larger with higher voltage. The electric field grows as the reverse voltage increases. When the electric field increases beyond a critical level, the junction breaks down and current begins to flow by avalanche breakdown. Therefore, care must be taken that circuit voltages do not exceed the breakdown voltage or electrical isolation ceases. History In an article entitled "Microelectronics", published in Scientifi
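The behaviour described above — a depletion region that widens as the reverse bias grows, strengthening the isolation until breakdown — can be illustrated with the textbook abrupt-junction depletion-width formula. The formula is standard semiconductor physics rather than something stated in this article, and the silicon constants and doping levels below are illustrative assumptions:

```python
import math

Q = 1.602e-19               # elementary charge, C
EPS_SI = 11.7 * 8.854e-12   # permittivity of silicon, F/m

def depletion_width(v_reverse, na=1e22, nd=1e21, v_bi=0.7):
    """Total depletion width (m) of an abrupt p-n junction:
    W = sqrt(2*eps*(V_bi + V_R)/q * (1/Na + 1/Nd)).
    na, nd are acceptor/donor densities in m^-3 (assumed values)."""
    return math.sqrt(2 * EPS_SI * (v_bi + v_reverse) / Q * (1 / na + 1 / nd))

# The width grows monotonically with reverse voltage, as the text describes:
print(depletion_width(0.0) < depletion_width(5.0) < depletion_width(50.0))  # True
```
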
https://en.wikipedia.org/wiki/Origination%20of%20Organismal%20Form
Origination of Organismal Form: Beyond the Gene in Developmental and Evolutionary Biology is an anthology published in 2003, edited by Gerd B. Müller and Stuart A. Newman. The book is the outcome of the 4th Altenberg Workshop in Theoretical Biology on "Origins of Organismal Form: Beyond the Gene Paradigm", hosted in 1999 at the Konrad Lorenz Institute for Evolution and Cognition Research. It has been cited over 200 times and has had a major influence on extended evolutionary synthesis research. Description of the book The book explores the multiple factors that may have been responsible for the origination of biological form in multicellular life. These biological forms include limbs, segmented structures, and different body symmetries. It explores why the basic body plans of nearly all multicellular life arose in the relatively short time span of the Cambrian Explosion. The authors focus on physical factors (structuralism) other than changes in an organism's genome that may have caused multicellular life to form new structures. These physical factors include differential adhesion of cells and feedback oscillations between cells. The book also presents recent experimental results that examine how the same embryonic tissues or tumor cells can be coaxed into forming dramatically different structures under different environmental conditions. One of the goals of the book is to stimulate research that may lead to a more comprehensive theory of evolution. It is frequently cited as foundational to the development of the extended evolutionary synthesis. List of contributions Origination of Organismal Form: The Forgotten Cause in Evolutionary Theory, Gerd B. Müller and Stuart A. Newman The Cambrian "Explosion" of Metazoans, Simon Conway Morris Convergence and Homoplasy in the Evolution of Organismal Form, Pat Willmer Homology: The Evolution of Morphological Organization, Gerd B. Müller Only Details Determine, Roy J. Britten The Reactive Genome, Scott F. Gilbert Tis
https://en.wikipedia.org/wiki/Amazon%20Mechanical%20Turk
Amazon Mechanical Turk (MTurk) is a crowdsourcing website with which businesses can hire remotely located "crowdworkers" to perform discrete on-demand tasks that computers are currently unable to do as economically. It is operated under Amazon Web Services, and is owned by Amazon. Employers (known as requesters) post jobs known as Human Intelligence Tasks (HITs), such as identifying specific content in an image or video, writing product descriptions, or answering survey questions. Workers, colloquially known as Turkers or crowdworkers, browse among existing jobs and complete them in exchange for a fee set by the employer. To place jobs, requesters use an open application programming interface (API), or the more limited MTurk Requester site. Requesters could register from 49 approved countries. History The service was conceived by Venky Harinarayan in a U.S. patent disclosure in 2001. Amazon coined the term artificial artificial intelligence for processes that outsource some parts of a computer program to humans, for tasks that humans carry out much faster than computers. It is claimed that Jeff Bezos was responsible for proposing the development of Amazon's Mechanical Turk to realize this process. The name Mechanical Turk was inspired by "The Turk", an 18th-century chess-playing automaton made by Wolfgang von Kempelen that toured Europe, and beat both Napoleon Bonaparte and Benjamin Franklin. It was later revealed that this "machine" was not an automaton, but a human chess master hidden in the cabinet beneath the board and controlling the movements of a humanoid dummy. Analogously, the Mechanical Turk online service uses remote human labor hidden behind a computer interface to help employers perform tasks that are not possible using a true machine. MTurk launched publicly on November 2, 2005. Its user base grew quickly. In early- to mid-November 2005, there were tens of thousands of jobs, all uploaded to the system by Amazon itself for some of its internal tas
https://en.wikipedia.org/wiki/VMOS
A VMOS (vertical metal oxide semiconductor or V-groove MOS) transistor is a type of metal–oxide–semiconductor field-effect transistor (MOSFET). VMOS is also used to describe the V-groove shape vertically cut into the substrate material. The "V" shape of the MOSFET's gate allows the device to deliver a higher amount of current from the source to the drain of the device. The shape of the depletion region creates a wider channel, allowing more current to flow through it. During operation in blocking mode, the highest electric field occurs at the N+/p+ junction. The presence of a sharp corner at the bottom of the groove enhances the electric field at the edge of the channel in the depletion region, thus reducing the breakdown voltage of the device. This electric field launches electrons into the gate oxide and consequently, the trapped electrons shift the threshold voltage of the MOSFET. For this reason, the V-groove architecture is no longer used in commercial devices. The VMOS was used as a power device until more suitable geometries, like the UMOS (or trench-gate MOS), were introduced to lower the maximum electric field at the top of the V shape, allowing higher maximum voltages than the VMOS. History The first MOSFET (without a V-groove) was invented by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959. The V-groove construction was pioneered by Jun-ichi Nishizawa in 1969, initially for the static induction transistor (SIT), a type of junction field-effect transistor (JFET). The VMOS was invented by Hitachi in 1969, when they introduced the first vertical power MOSFET in Japan. T. J. Rodgers, while he was a student at Stanford University, filed a US patent for a VMOS in 1973. Siliconix commercially introduced a VMOS in 1975. The VMOS later developed into what became known as the vertical DMOS (VDMOS). In 1978, American Microsystems (AMI) released the S2811. It was the first integrated circuit chip specifically designed as a d
https://en.wikipedia.org/wiki/Software%20assurance
Software assurance (SwA) is a critical process in software development that ensures the reliability, safety, and security of software products. It involves a variety of activities, including requirements analysis, design reviews, code inspections, testing, and formal verification. One crucial component of software assurance is secure coding practices, which follow industry-accepted standards and best practices, such as those outlined by the Software Engineering Institute (SEI) in their CERT Secure Coding Standards (SCS). Another vital aspect of software assurance is testing, which should be conducted at various stages of the software development process and can include functional testing, performance testing, and security testing. Testing helps to identify any defects or vulnerabilities in software products before they are released. Furthermore, software assurance involves organizational and management practices like risk management and quality management to ensure that software products meet the needs and expectations of stakeholders. Software assurance aims to ensure that software is free from vulnerabilities and functions as intended, conforming to all requirements and standards governing the software development process. Additionally, software assurance aims to produce software-intensive systems that are more secure. To achieve this, a preventive dynamic and static analysis of potential vulnerabilities is required, and a holistic, system-level understanding is recommended. Architectural risk analysis plays an essential role in any software security program, as design flaws account for 50% of security problems, and they cannot be found by staring at code alone. By following industry-accepted standards and best practices, incorporating testing and management practices, and conducting architectural risk analysis, software assurance can minimize the risk of system failures and security breaches, making it a critical aspect of software development. Initiatives
https://en.wikipedia.org/wiki/Linear%20network%20coding
In computer networking, linear network coding is a technique in which intermediate nodes transmit data from source nodes to sink nodes by means of linear combinations. Linear network coding may be used to improve a network's throughput, efficiency, and scalability, as well as to reduce attacks and eavesdropping. The nodes of a network take several packets and combine them for transmission. This process may be used to attain the maximum possible information flow in a network. It has been proven that, theoretically, linear coding is enough to achieve the upper bound in multicast problems with one source. However, linear coding is not sufficient in general, even for more general versions of linearity such as convolutional coding and filter-bank coding. Finding optimal coding solutions for general network problems with arbitrary demands is a hard problem, which can be NP-hard and even undecidable. Encoding and decoding In a linear network coding problem, a group of nodes are involved in moving the data from source nodes to sink nodes. Each node generates new packets which are linear combinations of past received packets, multiplying them by coefficients chosen from a finite field, typically of size $2^8$. More formally, each node, with indegree $S$, generates a message $X_k$ from the linear combination of received messages $M_i$ by the formula $X_k = \sum_{i=1}^{S} g_k^i \cdot M_i$, where the values $g_k^i$ are coefficients selected from the finite field. Since operations are computed in a finite field, the generated message is of the same length as the original messages. Each node forwards the computed value $X_k$ along with the coefficients $g_k^i$ used in level $k$. Sink nodes receive these network coded messages, and collect them in a matrix. The original messages can be recovered by performing Gaussian elimination on the matrix. In reduced row echelon form, decoded packets correspond to rows whose coefficient vector is a standard unit vector, with the payload giving one original message. Background A network is represented by a directed graph $G = (V, E)$. $V$ is the set of nodes or vertices, $E$ is the set of directed links (or edges), and
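The encode/decode cycle above can be sketched with a toy implementation over GF(2), where field multiplication and addition reduce to AND and XOR. Real systems draw coefficients at random from a larger field such as GF(2^8); the fixed coefficient vectors below are chosen only so that the example's coefficient matrix is invertible and decoding is guaranteed to succeed:

```python
def combine(coeffs, messages):
    """Linear combination over GF(2): XOR together the messages
    whose coefficient bit is 1."""
    payload = 0
    for c, m in zip(coeffs, messages):
        if c:
            payload ^= m
    return payload

def decode(packets, n):
    """Gaussian elimination over GF(2) on (coefficients | payload) rows,
    as a sink node would do; returns the n original messages in order."""
    rows = [(list(c), p) for c, p in packets]
    for col in range(n):
        pivot = next(i for i in range(col, len(rows)) if rows[i][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           rows[i][1] ^ rows[col][1])
    return [rows[i][1] for i in range(n)]

messages = [0xCAFE, 0xBEEF, 0x1234]
# Linearly independent coefficient vectors (hand-picked for this example):
coeff_sets = [[1, 1, 0], [0, 1, 1], [1, 1, 1]]
packets = [(c, combine(c, messages)) for c in coeff_sets]
print(decode(packets, 3) == messages)  # True
```
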
https://en.wikipedia.org/wiki/Party%20wall
A party wall (occasionally parti-wall or parting wall, shared wall, also known as common wall or as a demising wall) is a wall shared by two adjoining properties. Typically, the builder lays the wall along a property line dividing two terraced houses, so that one half of the wall's thickness lies on each side. This type of wall is usually structural. Party walls can also be formed by two abutting walls built at different times. The term can be also used to describe a division between separate units within a multi-unit apartment complex. Very often the wall in this case is non-structural but designed to meet established criteria for sound and/or fire protection, i.e. a firewall. Waterproofing A waterproofing membrane can extend 6" up a demising wall as well as under the wall. England and Wales While party walls are effectively in common ownership of two or more immediately adjacent owners, there are various possibilities for legal ownership: the wall may belong to both tenants (in common), to one tenant or the other, or partly to one, partly to the other. In cases where the ownership is not shared, both parties have use of the wall, if not ownership. Other party structures can exist, such as floors dividing flats or apartments. Apart from special statutory definitions, the term "Party Wall" may be used in four different legal senses. It may mean: a wall of which the adjoining owners are tenants in common; a wall divided longitudinally into two strips, one belonging to each of the neighbouring owners; a wall which belongs entirely to one of the adjoining owners, but is subject to an easement or right in the other to have it maintained as a dividing wall between the two tenements; a wall divided longitudinally into two moieties, each moiety being subject to a cross easement, in favour of the owner of the other moiety. In English law the party wall does not confirm a boundary at its median point and there are instances where the legal boundary between adjo
https://en.wikipedia.org/wiki/Semi-differentiability
In calculus, a branch of mathematics, the notions of one-sided differentiability and semi-differentiability of a real-valued function f of a real variable are weaker than differentiability. Specifically, the function f is said to be right differentiable at a point a if, roughly speaking, a derivative can be defined as the function's argument x moves to a from the right, and left differentiable at a if the derivative can be defined as x moves to a from the left. One-dimensional case In mathematics, a left derivative and a right derivative are derivatives (rates of change of a function) defined for movement in one direction only (left or right; that is, to lower or higher values) by the argument of a function. Definitions Let f denote a real-valued function defined on a subset I of the real numbers. If $a$ is a limit point of $I \cap [a, \infty)$ and the one-sided limit $\partial_+ f(a) = \lim_{x \to a^+} \frac{f(x) - f(a)}{x - a}$ exists as a real number, then f is called right differentiable at a and the limit $\partial_+ f(a)$ is called the right derivative of f at a. If $a$ is a limit point of $I \cap (-\infty, a]$ and the one-sided limit $\partial_- f(a) = \lim_{x \to a^-} \frac{f(x) - f(a)}{x - a}$ exists as a real number, then f is called left differentiable at a and the limit $\partial_- f(a)$ is called the left derivative of f at a. If $a$ is a limit point of both $I \cap [a, \infty)$ and $I \cap (-\infty, a]$, and if f is left and right differentiable at a, then f is called semi-differentiable at a. If the left and right derivatives are equal, then they have the same value as the usual ("bidirectional") derivative. One can also define a symmetric derivative, which equals the arithmetic mean of the left and right derivatives (when they both exist), so the symmetric derivative may exist when the usual derivative does not. Remarks and examples A function is differentiable at an interior point a of its domain if and only if it is semi-differentiable at a and the left derivative is equal to the right derivative. An example of a semi-differentiable function which is not differentiable is the absolute value function $f(x) = |x|$ at a = 0. We find easily $\partial_- f(0) = -1$ and $\partial_+ f(0) = 1$. If a function is semi-differentiable at a
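The absolute-value example can be checked numerically with one-sided difference quotients, which serve as a numerical stand-in for the one-sided limits in the definitions (a finite step h approximates the limit, it does not compute it exactly):

```python
def one_sided_derivative(f, a, side, h=1e-7):
    """Difference quotient approaching a from the right (side=+1)
    or from the left (side=-1)."""
    step = side * h
    return (f(a + step) - f(a)) / step

# |x| is semi-differentiable at 0 but not differentiable there:
# the two one-sided derivatives exist but disagree.
right = one_sided_derivative(abs, 0.0, +1)  # approx +1
left = one_sided_derivative(abs, 0.0, -1)   # approx -1
print(round(right), round(left))
```
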
https://en.wikipedia.org/wiki/Electric%20bicycle%20laws
Many countries have enacted electric vehicle laws to regulate the use of electric bicycles, also termed e-bikes. Some jurisdictions have regulations governing safety requirements and standards of manufacture. The members of the European Union and other regions have wider-ranging legislation covering use and safety. Laws and terminology are diverse. Some countries have national regulations with additional regional regulations for each state, province, or municipality. Systems of classification and nomenclature vary. Jurisdictions may address "power-assisted bicycles" (Canada) or "electric pedal-assisted cycles" (European Union and United Kingdom) or simply "electric bicycles". Some classify pedelecs as being distinct from other bicycles using electric power. Consequently, any particular e-bike may be subject to different classifications and regulations in different jurisdictions. Australia In Australia, the e-bike is defined by the Australian Vehicle Standards as a bicycle that has an auxiliary motor with a maximum power output not exceeding 250 W without consideration for speed limits or pedal sensors. Each state is responsible for deciding how to treat such a vehicle and currently all states agree that such a vehicle does not require licensing or registration. Some states have their own rules, such as no riding under electric power on bike paths and through built-up areas, so riders should view the state laws regarding their use. There is no license and no registration required for e-bike use. Since 30 May 2012, Australia has an additional new e-bike category using the European Union model of a pedelec as per the CE EN15194 standard. This means the e-bike can have a motor of 250 W of continuous rated power which can only be activated by pedalling (if above 6 km/h) and must cut out over 25 km/h – if so it is classed as a normal bicycle. The state of Victoria is the first to amend its local road rules, see below. Road vehicles in Australia must comply with all app
https://en.wikipedia.org/wiki/SpamBayes
SpamBayes is a Bayesian spam filter written in Python which uses techniques laid out by Paul Graham in his essay "A Plan for Spam". It has subsequently been improved by Gary Robinson and Tim Peters, among others. The most notable difference between a conventional Bayesian filter and the filter used by SpamBayes is that there are three classifications rather than two: spam, non-spam (called ham in SpamBayes), and unsure. The user trains a message as being either ham or spam; when filtering a message, the spam filters generate one score for ham and another for spam. If the spam score is high and the ham score is low, the message will be classified as spam. If the spam score is low and the ham score is high, the message will be classified as ham. If the scores are both high or both low, the message will be classified as unsure. This approach leads to a low number of false positives and false negatives, but it may result in a number of unsures which need a human decision. Web filtering Some work has gone into applying SpamBayes to filter internet content via a proxy web server. References External links Paul Graham's original idea Essay discussing improvements on Graham's original idea Explaining how SpamBayes works Paper on SpamBayes for the Conference on E-mail and Anti-Spam Winning the War on spam: Comparison of Bayesian spam filters Spam filtering Anti-spam Email Free software programmed in Python
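The three-way decision rule described above can be sketched as follows. The threshold and the way the two scores are compared here are illustrative only; they do not reproduce SpamBayes's actual chi-squared score combination:

```python
def classify(spam_score, ham_score, threshold=0.9):
    """Three-way classification in the spirit of SpamBayes:
    spam, ham, or unsure (illustrative thresholds, not the real ones)."""
    if spam_score > threshold and ham_score < 1 - threshold:
        return "spam"       # spam score high, ham score low
    if ham_score > threshold and spam_score < 1 - threshold:
        return "ham"        # ham score high, spam score low
    return "unsure"         # both high or both low: defer to a human

print(classify(0.99, 0.02))  # spam
print(classify(0.03, 0.97))  # ham
print(classify(0.95, 0.92))  # unsure
```
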
https://en.wikipedia.org/wiki/Teenage%20Mutant%20Ninja%20Turtles%20%28NES%20video%20game%29
Teenage Mutant Ninja Turtles, known as in Japan and Teenage Mutant Hero Turtles in Europe, is a 1989 side-scrolling action-platform game for the Nintendo Entertainment System released by Konami. It was published under Konami's Ultra Games imprint in North America and the equivalent PALCOM brand in Europe and Australia. Alongside the arcade game (also developed by Konami), it was one of the first video games based on the 1987 Teenage Mutant Ninja Turtles animated series, being released after the show's second season. The game sold more than cartridges worldwide. Plot Shredder kidnaps April and gains the Life Transformer Gun, a weapon capable of returning Splinter to his human form. In order to save April, the turtles (Leo, Mikey, Donny and Raph) embark on the streets of New York to confront the Foot Clan. While traversing the sewers, the turtles encounter Bebop, a mutated pig, and Rocksteady, a mutant rhino. Though the turtles defeat Bebop, Rocksteady escapes with April O'Neil. The turtles then chase Rocksteady to an abandoned warehouse, fight him, and rescue April. After the turtles disable bombs in the Hudson River dam, Shredder captures Splinter, so the turtles give chase in the Party Wagon. Hot in pursuit, the turtles scour the city and eventually find that Splinter is held captive by the robotic Mecaturtle on a skyscraper rooftop. After the turtles save Splinter, Shredder escapes in a helicopter. The turtles give chase, tracking him to JFK airport, where they encounter Big Mouser. After defeating Big Mouser, the turtles head to Shredder's secret Foot Clan base in the South Bronx via the Turtle Blimp. Once there, they locate and battle the Technodrome underground. The turtles descend into the Technodrome's reactor and ultimately defeat Shredder. With the Life Transformer Gun, the turtles help Splinter return to his human form. With a tough mission accomplished, the turtles and April celebrate with a pizza. Gameplay Teenage Mutant Ninja Turtles is a s
https://en.wikipedia.org/wiki/Serre%20spectral%20sequence
In mathematics, the Serre spectral sequence (sometimes Leray–Serre spectral sequence to acknowledge earlier work of Jean Leray in the Leray spectral sequence) is an important tool in algebraic topology. It expresses, in the language of homological algebra, the singular (co)homology of the total space X of a (Serre) fibration in terms of the (co)homology of the base space B and the fiber F. The result is due to Jean-Pierre Serre in his doctoral dissertation. Cohomology spectral sequence Let $f : X \to B$ be a Serre fibration of topological spaces, and let F be the (path-connected) fiber. The Serre cohomology spectral sequence is the following: $E_2^{p,q} = H^p(B, H^q(F)) \Rightarrow H^{p+q}(X)$. Here, at least under standard simplifying conditions, the coefficient group in the $E_2$-term is the q-th integral cohomology group of F, and the outer group is the singular cohomology of B with coefficients in that group. Strictly speaking, what is meant is cohomology with respect to the local coefficient system on B given by the cohomology of the various fibers. Assuming for example, that B is simply connected, this collapses to the usual cohomology. For a path connected base, all the different fibers are homotopy equivalent. In particular, their cohomology is isomorphic, so the choice of "the" fiber does not give any ambiguity. The abutment means integral cohomology of the total space X. This spectral sequence can be derived from an exact couple built out of the long exact sequences of the cohomology of the pair $(X_p, X_{p-1})$, where $X_p = f^{-1}(B_p)$ is the restriction of the fibration over the p-skeleton $B_p$ of B. More precisely, using this notation, f is defined by restricting each piece on to , g is defined using the coboundary map in the long exact sequence of the pair, and h is defined by restricting to There is a multiplicative structure coinciding on the $E_2$-term with $(-1)^{qs}$ times the cup product, and with respect to which the differentials are (graded) derivations inducing the product on the $E_{r+1}$-page from the one on the $E_r$-page. Homology spectral sequenc
https://en.wikipedia.org/wiki/Gibbs%20measure
In mathematics, the Gibbs measure, named after Josiah Willard Gibbs, is a probability measure frequently seen in many problems of probability theory and statistical mechanics. It is a generalization of the canonical ensemble to infinite systems. The canonical ensemble gives the probability of the system X being in state x (equivalently, of the random variable X having value x) as Here, is a function from the space of states to the real numbers; in physics applications, is interpreted as the energy of the configuration x. The parameter is a free parameter; in physics, it is the inverse temperature. The normalizing constant is the partition function. However, in infinite systems, the total energy is no longer a finite number and cannot be used in the traditional construction of the probability distribution of a canonical ensemble. Traditional approaches in statistical physics studied the limit of intensive properties as the size of a finite system approaches infinity (the thermodynamic limit). When the energy function can be written as a sum of terms that each involve only variables from a finite subsystem, the notion of a Gibbs measure provides an alternative approach. Gibbs measures were proposed by probability theorists such as Dobrushin, Lanford, and Ruelle and provided a framework to directly study infinite systems, instead of taking the limit of finite systems. A measure is a Gibbs measure if the conditional probabilities it induces on each finite subsystem satisfy a consistency condition: if all degrees of freedom outside the finite subsystem are frozen, the canonical ensemble for the subsystem subject to these boundary conditions matches the probabilities in the Gibbs measure conditional on the frozen degrees of freedom. The Hammersley–Clifford theorem implies that any probability measure that satisfies a Markov property is a Gibbs measure for an appropriate choice of (locally defined) energy function. Therefore, the Gibbs measure applies to widespread
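For a finite state space, the canonical-ensemble formula described above can be computed directly. A minimal sketch (function and state names are illustrative, not from the article; the article's point is precisely that this direct construction breaks down for infinite systems):

```python
import math

def gibbs_distribution(energies, beta):
    """Canonical-ensemble probabilities p(x) = exp(-beta * E(x)) / Z
    for a FINITE state space given as {state: energy}."""
    weights = {x: math.exp(-beta * e) for x, e in energies.items()}
    Z = sum(weights.values())                # partition function
    return {x: w / Z for x, w in weights.items()}

# Two-state toy system at inverse temperature beta = 1:
# the lower-energy state gets higher probability, and the
# probabilities sum to 1.
p = gibbs_distribution({"ground": 0.0, "excited": 1.0}, beta=1.0)
```

The Gibbs-measure approach replaces this global normalization with the consistency condition on conditional distributions of finite subsystems described above.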
https://en.wikipedia.org/wiki/RAM%20parity
RAM parity checking is the storing of a redundant parity bit representing the parity (odd or even) of a small amount of computer data (typically one byte) stored in random-access memory, and the subsequent comparison of the stored and the computed parity to detect whether a data error has occurred. The parity bit was originally stored in additional individual memory chips; with the introduction of plug-in DIMM, SIMM, etc. modules, they became available in non-parity and parity (with an extra bit per byte, storing 9 bits for every 8 bits of actual data) versions. History Early computers sometimes required the use of parity RAM, and parity-checking could not be disabled. A parity error typically caused the machine to halt, with loss of unsaved data; this is usually a better option than saving corrupt data. Logic parity RAM, also known as fake parity RAM, is non-parity RAM that can be used in computers that require parity RAM. Logic parity RAM recalculates an always-valid parity bit each time a byte is read from memory, instead of storing the parity bit when the memory is written to; the calculated parity bit, which will not reveal if the data has been corrupted (hence the name "fake parity"), is presented to the parity-checking logic. It is a means of using cheaper 8-bit RAM in a system designed to use only 9-bit parity RAM. Memory errors In the 1970s-80s, RAM reliability was often less-than-perfect; in particular, the 4116 DRAMs which were an industry standard from 1975 to 1983 had a considerable failure rate as they used triple voltages (-5, +5, and +12) which resulted in high operating temperatures. By the mid-1980s, these had given way to single voltage DRAM such as the 4164 and 41256 with the result of improved reliability. However, RAM did not achieve modern standards of reliability until the 1990s. Since then errors have become less visible as simple parity RAM has fallen out of use; either they are invisible as they are not detected, or they are corrected
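The stored-versus-computed comparison described above can be sketched in a few lines. This is an illustrative model of even-parity checking over one byte, not the logic of any particular memory controller:

```python
def parity_bit(byte, even=True):
    """Parity bit for one byte: with even parity, the stored 9 bits
    (8 data + 1 parity) always contain an even number of 1s."""
    ones = bin(byte & 0xFF).count("1")
    bit = ones % 2                  # makes the total even
    return bit if even else bit ^ 1

def check(byte, stored_parity, even=True):
    """Recompute parity on read and compare with the stored bit;
    a mismatch signals a single-bit (or any odd-bit) error."""
    return parity_bit(byte, even) == stored_parity

data = 0b10110010               # 4 ones -> even parity bit is 0
p = parity_bit(data)
assert check(data, p)           # clean read passes
corrupted = data ^ 0b00000100   # flip one bit
assert not check(corrupted, p)  # single-bit error is detected
```

Note that a two-bit error leaves the parity unchanged and goes undetected, which is the basic limitation of parity compared with ECC.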
https://en.wikipedia.org/wiki/Burr%E2%80%93Erd%C5%91s%20conjecture
In mathematics, the Burr–Erdős conjecture was a problem concerning the Ramsey number of sparse graphs. The conjecture is named after Stefan Burr and Paul Erdős, and is one of many conjectures named after Erdős; it states that the Ramsey number of graphs in any sparse family of graphs should grow linearly in the number of vertices of the graph. The conjecture was proven by Choongbum Lee, and is thus now a theorem. Definitions If G is an undirected graph, then the degeneracy of G is the minimum number p such that every subgraph of G contains a vertex of degree p or smaller. A graph with degeneracy p is called p-degenerate. Equivalently, a p-degenerate graph is a graph that can be reduced to the empty graph by repeatedly removing a vertex of degree p or smaller. It follows from Ramsey's theorem that for any graph G there exists a least integer r(G), the Ramsey number of G, such that any complete graph on at least r(G) vertices whose edges are coloured red or blue contains a monochromatic copy of G. For instance, the Ramsey number of a triangle is 6: no matter how the edges of a complete graph on six vertices are colored red or blue, there is always either a red triangle or a blue triangle. The conjecture In 1973, Stefan Burr and Paul Erdős made the following conjecture: For every integer p there exists a constant cp so that any p-degenerate graph G on n vertices has Ramsey number at most cp n. That is, if an n-vertex graph G is p-degenerate, then a monochromatic copy of G should exist in every two-edge-colored complete graph on cp n vertices. Known results Before the full conjecture was proved, it was settled in some special cases. It was first proven for bounded-degree graphs; that proof led to a very high value of cp, and improvements to this constant were made in later work. More generally, the conjecture is known to be true for p-arrangeable graphs, which includes graphs with bounded maximum degree, planar graphs and graphs that do not contain a subdivision of Kp.
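The equivalent characterization of p-degeneracy, repeatedly removing a vertex of minimum degree, translates directly into an algorithm. A small sketch (the adjacency-dict representation is an assumption for illustration):

```python
def degeneracy(adj):
    """Degeneracy of an undirected graph given as {vertex: set(neighbours)}:
    the minimum p such that every subgraph has a vertex of degree <= p,
    computed by repeatedly deleting a vertex of minimum degree and
    recording the largest degree seen at deletion time."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # current minimum degree
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d

# A cycle is 2-degenerate; a path (like any tree) is 1-degenerate.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
```

With a bucket queue this greedy procedure runs in linear time, but the simple version above suffices to illustrate the definition.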
https://en.wikipedia.org/wiki/Seshadri%20constant
In algebraic geometry, a Seshadri constant is an invariant of an ample line bundle L at a point P on an algebraic variety. It was introduced by Demailly to measure a certain rate of growth of the tensor powers of L, in terms of the jets of the sections of L^k; his motivation was the study of the Fujita conjecture. The name is in honour of the Indian mathematician C. S. Seshadri. It is known that Nagata's conjecture on algebraic curves is equivalent to the assertion that for more than nine general points, the Seshadri constants of the projective plane are maximal. There is a general conjecture for algebraic surfaces, the Nagata–Biran conjecture. Definition Let X be a smooth projective variety, L an ample line bundle on it, P a point of X, and C(P) = { all irreducible curves passing through P }. Define ε(L; P) = inf over C in C(P) of (L · C) / mult_P(C). Here, L · C denotes the intersection number of L and C, and mult_P(C) measures how many times C passes through P. One says that ε(L; P) is the Seshadri constant of L at the point P; it is a real number. When X is an abelian variety, it can be shown that ε(L; P) is independent of the point chosen, and it is written simply ε(L). References Algebraic varieties Vector bundles Mathematical constants
https://en.wikipedia.org/wiki/Fujita%20conjecture
In mathematics, Fujita's conjecture is a problem in the theories of algebraic geometry and complex manifolds, still unsolved. It is named after Takao Fujita, who formulated it in 1985. Statement In complex geometry, the conjecture states that for a positive holomorphic line bundle L on a compact complex manifold M, the line bundle KM ⊗ L⊗m (where KM is the canonical line bundle of M) is spanned by sections when m ≥ n + 1, and very ample when m ≥ n + 2, where n is the complex dimension of M. Note that for large m the line bundle KM ⊗ L⊗m is very ample by the standard Serre vanishing theorem (and its complex analytic variant). The Fujita conjecture provides an explicit bound on m, which is optimal for projective spaces. Known cases For surfaces the Fujita conjecture follows from Reider's theorem. For three-dimensional algebraic varieties, Ein and Lazarsfeld in 1993 proved the first part of the Fujita conjecture, i.e. that m ≥ 4 implies global generation. See also Ohsawa–Takegoshi L2 extension theorem References External links Algebraic geometry Complex manifolds Conjectures Unsolved problems in geometry
https://en.wikipedia.org/wiki/Computation%20in%20the%20limit
In computability theory, a function is called limit computable if it is the limit of a uniformly computable sequence of functions. The terms computable in the limit, limit recursive and recursively approximable are also used. One can think of limit computable functions as those admitting an eventually correct computable guessing procedure at their true value. A set is limit computable just when its characteristic function is limit computable. If the sequence is uniformly computable relative to D, then the function is limit computable in D. Formal definition A total function r(x) is limit computable if there is a total computable function φ(x, s) such that r(x) = lim s→∞ φ(x, s). The total function r(x) is limit computable in D if there is a total function φ(x, s) computable in D also satisfying r(x) = lim s→∞ φ(x, s). A set of natural numbers is defined to be computable in the limit if and only if its characteristic function is computable in the limit. In contrast, the set is computable if and only if it is computable in the limit by a function φ(x, s) and there is a second computable function that takes input i and returns a value of t large enough that φ(i, s) has stabilized for all s ≥ t. Limit lemma The limit lemma states that a set of natural numbers is limit computable if and only if the set is computable from 0′ (the Turing jump of the empty set). The relativized limit lemma states that a set is limit computable in D if and only if it is computable from D′. Moreover, the limit lemma (and its relativization) hold uniformly. Thus one can go from an index for an approximating function φ(x, s) to an index for its limit relative to 0′. One can also go from an index for a set relative to 0′ to an index for some φ(x, s) whose limit is its characteristic function. Proof As 0′ is a computably enumerable set, it must be computable in the limit itself, as a computable function can be defined whose limit as t goes to infinity is the characteristic function of 0′. It therefore suffices to show that limit computability is preserved by Turing reduction, as this will show that all sets computable from 0′ are limit computable.
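The "eventually correct guessing procedure" view can be illustrated with a toy model of the halting set. The table of halting times below is assumed data for illustration only; producing it is exactly the non-computable halting problem. What the sketch shows is how a stage-t guess stabilizes for each fixed input:

```python
# Toy model: "program" i is described by the number of steps it takes
# to halt, or None if it never halts.  (Assumed data -- not computable
# in reality.)
halt_steps = [3, None, 7, None]

def guess(i, t):
    """Stage-t guess at whether program i halts: simulate for t steps.
    For each fixed i the guess changes at most once (from 0 to 1) and
    then stabilizes, so its limit over t is the characteristic function
    of the halting set -- limit computable, though not computable."""
    s = halt_steps[i]
    return 1 if s is not None and s <= t else 0

# For every i, guess(i, t) is eventually constant in t:
limits = [guess(i, 1000) for i in range(4)]
```

The missing ingredient for outright computability is a computable bound on when each guess has stabilized, which is exactly the second function mentioned in the definition above.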
https://en.wikipedia.org/wiki/Myrmecochory
Myrmecochory (sometimes myrmechory; from múrmēx ("ant") and khoreíā ("circular dance")) is seed dispersal by ants, an ecologically significant ant–plant interaction with worldwide distribution. Most myrmecochorous plants produce seeds with elaiosomes, a term encompassing various external appendages or "food bodies" rich in lipids, amino acids, or other nutrients that are attractive to ants. The seed with its attached elaiosome is collectively known as a diaspore. Seed dispersal by ants is typically accomplished when foraging workers carry diaspores back to the ant colony, after which the elaiosome is removed or fed directly to ant larvae. Once the elaiosome is consumed, the seed is usually discarded in underground middens or ejected from the nest. Although diaspores are seldom distributed far from the parent plant, myrmecochores also benefit from this predominantly mutualistic interaction through dispersal to favourable locations for germination, as well as escape from seed predation. Distribution and diversity Myrmecochory is exhibited by more than 3,000 plant species worldwide and is present in every major biome on all continents except Antarctica. Seed dispersal by ants is particularly common in the dry heath and sclerophyll woodlands of Australia (1,500 species) and the South African fynbos (1,000 species). Both regions have a Mediterranean climate and largely infertile soils (characterized by low phosphorus availability), two factors that are often cited to explain the distribution of myrmecochory. Myrmecochory is also present in mesic forests in temperate regions of the Northern Hemisphere (i.e. in Europe and in eastern North America), as well as in tropical forests and dry deserts, though to a lesser degree. Estimates for the true biodiversity of myrmecochorous plants range from 11,000 to as high as 23,000 species worldwide, or about 5% of all flowering plant species. Evolutionary history Myrmecochory has evolved independently many times in a large number
https://en.wikipedia.org/wiki/Inquiline
In zoology, an inquiline (from Latin inquilinus, "lodger" or "tenant") is an animal that lives commensally in the nest, burrow, or dwelling place of an animal of another species. For example, some organisms such as insects may live in the homes of gophers or the garages of humans and feed on debris, fungi, roots, etc. The most widely distributed types of inquiline are those found in association with the nests of social insects, especially ants and termites – a single colony may support dozens of different inquiline species. The distinctions between parasites, social parasites, and inquilines are subtle, and many species may fulfill the criteria for more than one of these, as inquilines do exhibit many of the same characteristics as parasites. However, parasites are specifically not inquilines, because by definition they have a deleterious effect on the host species, while inquilines have not been confirmed to do so. In the specific case of termites, the term "inquiline" is restricted to termite species that inhabit other termite species' nests whereas other arthropods cohabiting termitaria are called "termitophiles". It is important to reiterate that inquilinism in termites (Blattodea, formerly Isoptera) contrasts with the inquilinism observed in other eusocial insects such as ants and bees (Hymenoptera), even though the term "inquiline" has been adopted in both cases. A major distinction is that, while in the former the species mostly resemble forms of commensalism, the latter includes species currently confirmed as social parasites, thus, being closely related to parasitism. Inquilines are known especially among the gall wasps (Cynipidae family). In the sub-family Synerginae, this mode of life predominates. These insects are similar in structure to the true gall-inducing wasp but do not produce galls, instead, they deposit their eggs within those of other species. They infest certain species of galls, such as those of the blackberry and some oak galls, in larg
https://en.wikipedia.org/wiki/Ralph%20P.%20Boas%20Jr.
Ralph Philip Boas Jr. (August 8, 1912 – July 25, 1992) was a mathematician, teacher, and journal editor. He wrote over 200 papers, mainly in the fields of real and complex analysis. Biography He was born in Walla Walla, Washington, the son of an English professor at Whitman College, but moved frequently as a child; his younger sister, Marie Boas Hall, later to become a historian of science, was born in Springfield, Massachusetts, where his father had become a high school teacher. He was home-schooled until the age of eight, began his formal schooling in the sixth grade, and graduated from high school while still only 15. After a gap year auditing classes at Mount Holyoke College (where his father had become a professor) he entered Harvard, intending to major in chemistry and go into medicine, but ended up studying mathematics instead. His first mathematics publication was written as an undergraduate, after he discovered an incorrect proof in another paper. He got his A.B. degree in 1933, received a Sheldon Fellowship for a year of travel, and returned to Harvard for his doctoral studies in 1934. He earned his doctorate there in 1937, under the supervision of David Widder. After postdoctoral studies at Princeton University with Salomon Bochner, and then the University of Cambridge in England, he began a two-year instructorship at Duke University, where he met his future wife, Mary Layne, also a mathematics instructor at Duke. They were married in 1941, and when the United States entered World War II later that year, Boas moved to the Navy Pre-flight School in Chapel Hill, North Carolina. In 1942, he interviewed for a position in the Manhattan Project, at the Los Alamos National Laboratory, but ended up returning to Harvard to teach in a Navy instruction program there, while his wife taught at Tufts University. Beginning when he was an instructor at Duke University, Boas had become a prolific reviewer for Mathematical Reviews, and at the end of the war he took a
https://en.wikipedia.org/wiki/Profinet
Profinet (usually styled as PROFINET, as a portmanteau for Process Field Network) is an industry technical standard for data communication over Industrial Ethernet, designed for collecting data from, and controlling equipment in industrial systems, with a particular strength in delivering data under tight time constraints. The standard is maintained and supported by Profibus and Profinet International, an umbrella organization headquartered in Karlsruhe, Germany. Functionalities Overview Profinet implements the interfacing to peripherals. It defines the communication with field connected peripheral devices. Its basis is a cascading real-time concept. Profinet defines the entire data exchange between controllers (called "IO-Controllers") and the devices (called "IO-Devices"), as well as parameter setting and diagnosis. IO-Controllers are typically a PLC, DCS, or IPC; whereas IO-Devices can be varied: I/O blocks, drives, sensors, or actuators. The Profinet protocol is designed for the fast data exchange between Ethernet-based field devices and follows the provider-consumer model. Field devices in a subordinate Profibus line can be integrated in the Profinet system seamlessly via an IO-Proxy (representative of a subordinate bus system). Conformance Classes (CC) Applications with Profinet can be divided according to the international standard IEC 61784-2 into four conformance classes: In Conformance Class A (CC-A), only the devices are certified. A manufacturer certificate is sufficient for the network infrastructure. This is why structured cabling or a wireless local area network for mobile subscribers can also be used. Typical applications can be found in infrastructure (e.g. motorway or railway tunnels) or in building automation. Conformance Class B (CC-B) stipulates that the network infrastructure also includes certified products and is structured according to the guidelines of Profinet. Shielded cables increase robustness and switches with management funct
https://en.wikipedia.org/wiki/List%20of%20complex%20and%20algebraic%20surfaces
This is a list of named algebraic surfaces, compact complex surfaces, and families thereof, sorted according to their Kodaira dimension following Enriques–Kodaira classification. Kodaira dimension −∞ Rational surfaces Projective plane Quadric surfaces Cone (geometry) Cylinder Ellipsoid Hyperboloid Paraboloid Sphere Spheroid Rational cubic surfaces Cayley nodal cubic surface, a certain cubic surface with 4 nodes Cayley's ruled cubic surface Clebsch surface or Klein icosahedral surface Fermat cubic Monkey saddle Parabolic conoid Plücker's conoid Whitney umbrella Rational quartic surfaces Châtelet surfaces Dupin cyclides, inversions of a cylinder, torus, or double cone in a sphere Gabriel's horn Right circular conoid Roman surface or Steiner surface, a realization of the real projective plane in real affine space Tori, surfaces of revolution generated by a circle about a coplanar axis Other rational surfaces in space Boy's surface, a sextic realization of the real projective plane in real affine space Enneper surface, a nonic minimal surface Henneberg surface, a minimal surface of degree 15 Bour's minimal surface, a surface of degree 16 Richmond surfaces, a family of minimal surfaces of variable degree Other families of rational surfaces Coble surfaces Del Pezzo surfaces, surfaces with an ample anticanonical divisor Hirzebruch surfaces, rational ruled surfaces Segre surfaces, intersections of two quadrics in projective 4-space Unirational surfaces of characteristic 0 Veronese surface, the Veronese embedding of the projective plane into projective 5-space White surfaces, the blow-up of the projective plane at points by the linear system of degree- curves through those points Bordiga surfaces, the White surfaces determined by families of quartic curves Non-rational ruled surfaces Class VII surfaces Vanishing second Betti number: Hopf surfaces Inoue surfaces; several other families discovered by Inoue have also been called "
https://en.wikipedia.org/wiki/Bit%20cell
A bit cell is the length of tape, the area of disc surface, or the part of an integrated circuit in which a single bit is recorded. The smaller the bit cells are, the greater the storage density of the medium is. In magnetic storage, the magnetic flux or magnetization doesn't necessarily change at the boundaries of bit cells to indicate bit states. For example, the presence of a magnetic transition within a bit cell might record state 1, and the lack of such a transition might record state 0. Other encodings are also possible. See also Computer data storage References Software Preservation Society glossary entry for bit cell Hitachi research page on patterned magnetic media Computer data storage
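The transition-based encoding described above (a flux transition within a cell records a 1, its absence a 0) can be sketched as follows. This is an illustrative NRZI-style model, not a description of any specific drive or medium:

```python
def encode_transitions(bits, start=0):
    """Map a bit sequence to a magnetization level per bit cell:
    a 1 is recorded as a transition inside the cell (the level flips),
    a 0 as no transition (the level is held)."""
    level, cells = start, []
    for b in bits:
        if b == 1:
            level ^= 1          # transition within this cell
        cells.append(level)
    return cells

def decode_transitions(cells, start=0):
    """Recover the bits by comparing each cell's level with the
    previous one: a change means 1, no change means 0."""
    bits, prev = [], start
    for level in cells:
        bits.append(0 if level == prev else 1)
        prev = level
    return bits

data = [1, 0, 1, 1, 0]
assert decode_transitions(encode_transitions(data)) == data
```

One practical consequence visible here is that long runs of 0s produce no transitions at all, which is why real recording codes add clock-recovery constraints on run lengths.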
https://en.wikipedia.org/wiki/Operation%20CHAOS
Operation CHAOS or Operation MHCHAOS was a Central Intelligence Agency (CIA) domestic espionage project targeting American citizens operating from 1967 to 1974, established by President Lyndon B. Johnson and expanded under President Richard Nixon, whose mission was to uncover possible foreign influence on domestic race, anti-war, and other protest movements. The operation was launched under Director of Central Intelligence (DCI) Richard Helms by chief of counter-intelligence James Jesus Angleton, and headed by Richard Ober. The "MH" designation is to signify the program had a global area of operations. Background The CIA was charged with the collection, correlation, and evaluation of intelligence. While the Act does not specify a prohibition on collecting domestic intelligence, or a restriction to only collect foreign intelligence, Executive Order 12333 of 1981 added prohibitions to limit CIA activities. The CIA began domestic recruiting operations in 1959 in the process of finding Cuban exiles who could be used in the campaign against Cuba and President Fidel Castro. As these operations expanded, the CIA formed a Domestic Operations Division in 1964. In 1965, President Lyndon Johnson requested that the CIA begin its own investigation into domestic dissent—independent of the FBI's ongoing COINTELPRO. The CIA developed numerous operations targeting American dissidents in the US. Many of these programs operated under the CIA's Office of Security, including: HTLINGUAL – Directed at letters passing between the United States and the then Soviet Union; the program involved the examination of correspondence to and from individuals or organizations placed on a watchlist. Project 2 – Directed at infiltration of foreign intelligence targets by agents posing as dissident sympathizers and which, like CHAOS, had placed agents within domestic radical organizations for the purposes of training and establishment of dissident credentials. Project MERRIMAC – Designed to infiltrate
https://en.wikipedia.org/wiki/Interlock%20%28engineering%29
An interlock is a feature that makes the state of two mechanisms or functions mutually dependent. It may consist of any electrical, or mechanical devices or systems. In most applications, an interlock is used to help prevent any damage to the machine or to the operator handling the machine. For example, elevators are equipped with an interlock that prevents the moving elevator from opening its doors and prevents the stationary elevator (with open doors) from moving. Interlocks may include sophisticated elements such as curtains of infrared beams, photodetectors, simple switches, and locks. It can also be a computer containing an interlocking computer program with digital or analogue electronics. Trapped-key interlocking Trapped-key interlocking is a method of ensuring safety in industrial environments by forcing the operator through a predetermined sequence using a defined selection of keys, locks and switches. It is called trapped key as it works by releasing and trapping keys in a predetermined sequence. After the control or power has been isolated, a key is released that can be used to grant access to individual or multiple doors. Below is an example of what a trapped key interlock transfer block would look like. This is a part of a trapped key interlocking system. In order to obtain the keys in this system, a key must be inserted and turned (like the key at the bottom of the system of the picture). Once the key is turned, the operator may retrieve the remaining keys that will be used to open other doors. Once all keys are returned, then the operator will be allowed to take out the original key from the beginning. The key will not turn unless the remaining keys are put back in its place. Another example is an electric kiln. To prevent access to the inside of an electric kiln, a trapped key system may be used to interlock a disconnecting switch and the kiln door. While the switch is turned on, the key is held by the interlock attached to the disconnecting
https://en.wikipedia.org/wiki/MacUpdate
MacUpdate is a Mac software download website founded in 1996. History In the Inc. 5000 list of private American companies with the fastest revenue growth, MacUpdate was listed 319th in 2008, 114th in 2009, and 233rd in 2010. MacUpdate has offered several "bundles" offering Mac software at a discounted price. The company offered an application called MacUpdate Desktop ($20/year with a 10 day trial) which automatically downloaded and installed updates to other installed applications on a user's Mac. MacUpdate Desktop has since been discontinued. In 2020, MacUpdate was acquired by Clario Tech ltd., a London-Kyiv based cybersecurity company. References External links MacUpdate website Macintosh websites Download websites
https://en.wikipedia.org/wiki/Project%20MERRIMAC
Project MERRIMAC was a domestic espionage operation coordinated under the Office of Security of the CIA. It involved information gathering procedures via infiltration and surveillance on Washington-based anti-war groups that might pose potential threats to the CIA. However, the type of data gathered also included general information on the infrastructure of targeted communities. Project MERRIMAC and its twin program, Project RESISTANCE, were both coordinated by the CIA Office of Security. In addition, the twin projects were branch operations that relayed civilian information to their parent program, Operation CHAOS. The Assassination Archives and Research Center believes that Project MERRIMAC began in February 1967. See also Operation CHAOS Project RESISTANCE COINTELPRO References External links Project MERRIMAC in the Internet Archive CHAOS, MERRIMAC, and RESISTANCE | PDF Development of Surveillance Technology & Risk of Abuse of Economic Information | PDF History of cryptography Government databases in the United States
https://en.wikipedia.org/wiki/Rubble
Rubble is broken stone, of irregular size, shape and texture; undressed especially as a filling-in. Rubble naturally found in the soil is known also as 'brash' (compare cornbrash). Where present, it becomes more noticeable when the land is ploughed or worked. Building "Rubble-work" is a name applied to several types of masonry. One kind, where the stones are loosely thrown together in a wall between boards and grouted with mortar almost like concrete, is called in Italian "muraglia di getto" and in French "bocage". In Pakistan, walls made of rubble and concrete, cast in a formwork, are called 'situ', which probably derives from Sanskrit (similar to the Latin 'in situ' meaning 'made on the spot'). Work executed with more or less large stones put together without any attempt at courses is called rubble walling. Where similar work is laid in courses, it is known as coursed rubble. Dry-stone walling is somewhat similar work done without the use of mortar. It is bound together by the fit of the stones and the regular placement of stones which extend through the thickness of the wall. A rubble wall built with mortar will be stronger if assembled in this way. Rubble walls in Malta Rubble walls () are found all over the island of Malta. Similar walls are also frequently found in Sicily and the Arab countries. The various shapes and sizes of the stones used to build these walls look like stones that were found in the area lying on the ground or in the soil. It is most probable that the practice of building these walls around the field was inspired by the Arabs during their rule in Malta, as in Sicily who were also ruled by the Arabs around the same period. The Maltese farmer found that the technique of these walls was very useful especially during an era where resources were limited. Rubble walls are used to serve as borders between the property of one farm from the other. A great advantage that rubble walls offered is that when heavy rain falls, their structure woul
https://en.wikipedia.org/wiki/Radio%20over%20IP
Radio over Internet Protocol, or RoIP, is similar to Voice over IP (VoIP), but augments two-way radio communications rather than telephone calls. From the system point of view, it is essentially VoIP with push-to-talk. To the user it can be implemented like any other radio network. With RoIP, at least one node of a network is a radio (or a radio with an IP interface device) connected via IP to other nodes in the radio network. The other nodes can be two-way radios, but could also be dispatch consoles either traditional (hardware) or modern (software on a PC), POTS telephones, softphone applications running on a computer such as Skype phone, PDA, smartphone, or some other communications device accessible over IP. RoIP can be deployed over private networks as well as the public Internet. It is useful in land mobile radio systems used by public safety departments and fleets of utilities spread over a broad geographic area. Like other centralized radio systems such as trunked radio systems, issues of delay or latency and reliance on centralized infrastructure can be impediments to adoption by public safety agencies. RoIP is not a proprietary or protocol-limited construct but a basic concept that has been implemented in a number of ways. Several systems have been implemented in the amateur radio community such as Galaxy PTT Comms, AllStar Link, BroadNet, IRLP, and EchoLink that have demonstrated the utility of RoIP in a partly or entirely open-source environment. Many commercial radio systems vendors such as Motorola and Harris have adopted RoIP as part of their system designs. The motivation to deploy RoIP technology is usually driven by one of three factors: first, the need to span large geographic areas or operate in areas without sufficient coverage from radio towers; second, the desire to provide more reliable, or at least more repairable links in radio systems; and third, to support the use of many base station users, that is, voice communications from station
https://en.wikipedia.org/wiki/Pad%C3%A9%20approximant
In mathematics, a Padé approximant is the "best" approximation of a function near a specific point by a rational function of given order. Under this technique, the approximant's power series agrees with the power series of the function it is approximating. The technique was developed around 1890 by Henri Padé, but goes back to Georg Frobenius, who introduced the idea and investigated the features of rational approximations of power series. The Padé approximant often gives better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. For these reasons Padé approximants are used extensively in computer calculations. They have also been used as auxiliary functions in Diophantine approximation and transcendental number theory, though for sharp results ad hoc methods—in some sense inspired by the Padé theory—typically replace them. Since Padé approximant is a rational function, an artificial singular point may occur as an approximation, but this can be avoided by Borel–Padé analysis. The reason the Padé approximant tends to be a better approximation than a truncating Taylor series is clear from the viewpoint of the multi-point summation method. Since there are many cases in which the asymptotic expansion at infinity becomes 0 or a constant, it can be interpreted as the "incomplete two-point Padé approximation", in which the ordinary Padé approximation improves the method truncating a Taylor series. Definition Given a function f and two integers m ≥ 0 and n ≥ 1, the Padé approximant of order [m/n] is the rational function which agrees with f(x) to the highest possible order, which amounts to Equivalently, if is expanded in a Maclaurin series (Taylor series at 0), its first terms would equal the first terms of , and thus When it exists, the Padé approximant is unique as a formal power series for the given m and n. The Padé approximant defined above is also denoted as Computation F
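The defining condition, that q(x)f(x) − p(x) vanish to order m + n + 1 with q(0) = 1, reduces the computation to a linear system for the denominator coefficients, after which the numerator follows by convolution. A minimal exact-arithmetic sketch (function and helper names are illustrative, and it assumes the generic case where the linear system is solvable):

```python
from fractions import Fraction

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n] of f:
    solve q(x)*f(x) - p(x) = O(x^(m+n+1)) with q0 = 1 in exact
    arithmetic.  Returns the coefficient lists (p, q)."""
    c = [Fraction(x) for x in c]
    # n linear equations for q1..qn:  sum_j c[k-j]*qj = -c[k],  k = m+1..m+n
    A = [[c[k - j] if k - j >= 0 else Fraction(0) for j in range(1, n + 1)]
         for k in range(m + 1, m + n + 1)]
    b = [-c[k] for k in range(m + 1, m + n + 1)]
    q = [Fraction(1)] + _solve(A, b)
    # numerator coefficients follow directly:  pk = sum_j qj * c[k-j]
    p = [sum(q[j] * c[k - j] for j in range(min(k, n) + 1)) for k in range(m + 1)]
    return p, q

def _solve(A, b):
    """Gaussian elimination with pivoting over Fractions (helper name
    is illustrative)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * x for a, x in zip(M[r], M[i])]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# exp(x) with Taylor coefficients 1, 1, 1/2 gives the classic [1/1]
# approximant (1 + x/2) / (1 - x/2):
p, q = pade([1, 1, Fraction(1, 2)], 1, 1)
```

Production implementations avoid the dense solve (e.g. via the epsilon algorithm or continued fractions), but the linear system above is the definition made concrete.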
https://en.wikipedia.org/wiki/Proofs%20of%20quadratic%20reciprocity
In number theory, the law of quadratic reciprocity, like the Pythagorean theorem, has lent itself to an unusually large number of proofs. Several hundred proofs of the law of quadratic reciprocity have been published. Proof synopsis Of the elementary combinatorial proofs, there are two which apply types of double counting. One by Gotthold Eisenstein counts lattice points. Another applies Zolotarev's lemma to Z/pqZ, expressed by the Chinese remainder theorem as Z/pqZ ≅ Z/pZ × Z/qZ, and calculates the signature of a permutation. The shortest known proof also uses a simplified version of double counting, namely double counting modulo a fixed prime. Eisenstein's proof Eisenstein's proof of quadratic reciprocity is a simplification of Gauss's third proof. It is more geometrically intuitive and requires less technical manipulation. The point of departure is "Eisenstein's lemma", which states that for distinct odd primes p, q, (q/p) = (−1)^(Σu ⌊qu/p⌋), where ⌊x⌋ denotes the floor function (the largest integer less than or equal to x), and where the sum is taken over the even integers u = 2, 4, 6, ..., p−1. For example, This result is very similar to Gauss's lemma, and can be proved in a similar fashion (proof given below). Using this representation of (q/p), the main argument is quite elegant. The sum counts the number of lattice points with even x-coordinate in the interior of the triangle ABC in the following diagram: Because each column has an even number of points (namely q−1 points), the number of such lattice points in the region BCYX is the same modulo 2 as the number of such points in the region CZY: Then by flipping the diagram in both axes, we see that the number of points with even x-coordinate inside CZY is the same as the number of points inside AXY having odd x-coordinates. This can be justified mathematically by noting that . The conclusion is that (q/p) = (−1)^μ, where μ is the total number of lattice points in the interior of AXY. Switching p and q, the same argument shows that (p/q) = (−1)^ν, where ν is the number of la
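Eisenstein's lemma and the reciprocity law itself are easy to check numerically for small primes. A minimal Python sketch, using Euler's criterion for the Legendre symbol (the helper names are illustrative, not standard library functions):

```python
def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion, for p an odd prime:
    a^((p-1)/2) mod p is 1 for residues and p-1 for non-residues."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def eisenstein_sum(q, p):
    """The sum of floor(q*u/p) over even u = 2, 4, ..., p-1,
    the exponent appearing in Eisenstein's lemma."""
    return sum((q * u) // p for u in range(2, p, 2))

# Eisenstein's lemma predicts (q/p) = (-1)^eisenstein_sum(q, p);
# quadratic reciprocity predicts
# (p/q)(q/p) = (-1)^(((p-1)/2) * ((q-1)/2)).
```

Looping these checks over a few pairs of distinct odd primes confirms both identities.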
https://en.wikipedia.org/wiki/Glan%E2%80%93Taylor%20prism
A Glan–Taylor prism is a type of prism which is used as a polarizer or polarizing beam splitter. It is one of the most common types of modern polarizing prism. It was first described by Archard and Taylor in 1948. The prism is made of two right-angled prisms of calcite (or sometimes other birefringent materials) separated on their long faces with an air gap. The optical axes of the calcite crystals are aligned parallel to the plane of reflection. Total internal reflection of s-polarized light at the air gap ensures that only p-polarized light is transmitted by the device. Because the angle of incidence at the gap can be reasonably close to Brewster's angle, unwanted reflection of p-polarized light is reduced, giving the Glan–Taylor prism better transmission than the Glan–Foucault design. Note that while the transmitted beam is completely polarized, the reflected beam is not. The sides of the crystal can be polished to allow the reflected beam to exit or can be blackened to absorb it. The latter reduces unwanted Fresnel reflection of the rejected beam. A variant of the design exists called a Glan–laser prism. This is a Glan–Taylor prism with a steeper angle for the cut in the prism, which decreases reflection loss at the expense of a reduced angular field of view. These polarizers are also typically designed to tolerate very high beam intensities, such as those produced by a laser. The differences may include using calcite selected for low scattering loss, improved polish quality on the faces and especially on the sides of the crystal, and better antireflection coatings. Prisms with irradiance damage thresholds greater than 1 GW/cm2 are commercially available. See also Glan–Foucault prism Glan–Thompson prism References Polarization (waves) Prisms (optics)
https://en.wikipedia.org/wiki/Regenerative%20Satellite%20Mesh%20%E2%80%93%20A
Regenerative Satellite Mesh – A (RSM-A) is an internationally standardized satellite communications protocol by the Telecommunications Industry Association and the European Telecommunications Standards Institute. It is based upon the Spaceway Ka-band communications system developed by Hughes Network Systems. It is expected to be utilized by the Hughes Network Systems satellite called Spaceway-3. The standard is meant to provide broadband capabilities of up to 512 kbit/s, 2 Mbit/s, and 16 Mbit/s uplink data communication rates with fixed Ka-band satellite terminal antennas sized as small as 77 cm. The standard consists of the following documents: TIA-1040.1.01 Physical Layer Specification; Part 1: General Description TIA-1040.1.02 Physical Layer Specification; Part 2: Frame Structure TIA-1040.1.03 Physical Layer Specification; Part 3: Channel Coding TIA-1040.1.04 Physical Layer Specification; Part 4: Modulation TIA-1040.1.05 Physical Layer Specification; Part 5: Radio Transmission and Reception TIA-1040.1.06 Physical Layer Specification; Part 6: Radio Link Control TIA-1040.1.07 Physical Layer Specification; Part 7: Synchronization TIA-1040.2.01 MAC/SLC Layer Specification; Part 1: General Description TIA-1040.2.02 MAC/SLC Layer Specification; Part 2: SLC Layer TIA-1040.2.03 MAC/SLC Layer Specification; Part 3: ST-SAM interface General Description The standard describes the various segments involved in an RSM-A satellite system including: Satellite Terminal: fixed satellite terminal for satellite communication linked to terrestrial hosts via connected LANs Satellite Payload: geosynchronous regenerative satellite payload and antennas Network Operations Control Center: ground facility responsible for network management and resource management The uplink consists of a multi-frequency time-division multiple access (MF-TDMA) scheme where individual uplink spotbeams are assigned frequency channels out of the satellite's frequency band. Satellite Terminals transmit on timeslots o
https://en.wikipedia.org/wiki/Cycler
A cycler is a potential spacecraft on a closed transfer orbit that would pass close to two celestial bodies at regular intervals. Cyclers could be used for carrying heavy supplies, life support and radiation shielding. Free return trajectory A free-return trajectory is a symmetrical orbit past the Moon and Earth that was first analysed by Arthur Schwaniger. Lunar cycler A lunar cycler or Earth–Moon cycler is a cycler orbit, or spacecraft therein, which periodically passes close by the Earth and the Moon, using gravity assists and occasional propellant-powered corrections to maintain its trajectory between the two. If the fuel required to reach a particular cycler orbit from both the Earth and the Moon is modest, and the travel time between the two along the cycler is reasonable, then having a spacecraft in the cycler can provide an efficient and regular method for space transportation. Mars cycler A Mars cycler or Earth–Mars cycler is a spacecraft trajectory that encounters the Earth and Mars on a regular basis, or a spacecraft on such a trajectory. Interstellar cycler An interstellar cycler or Schroeder cycler is a theoretical spacecraft trajectory that encounters two or more stars on a regular basis, or a spacecraft on such a trajectory. References Space Spacecraft
https://en.wikipedia.org/wiki/SGI%20Prism
The Silicon Graphics Prism is a series of visualization computer systems developed and manufactured by Silicon Graphics (SGI). Released in April 2005, the Prism's basic system architecture is based on the Altix 3000 servers, but with graphics hardware. The Prism uses the Linux operating system and the OpenGL software library. The SGI Prism was offered at three levels: Power, Team and Extreme. The Power level supports two to eight Itanium 2 processors, up to 96 GB of memory and two to four graphics pipelines. The Team level supports 8 to 16 Itanium 2 processors, up to 192 GB of memory and four to eight graphics pipelines. The Extreme level supports 16 to 256 Itanium 2 processors, up to 3 TB of memory and 4 to 16 graphics pipelines. The graphics pipelines for the Prism are ATI FireGL cards based on either the R350 or R420 GPUs. References Prism Prism Very long instruction word computing 64-bit computers
https://en.wikipedia.org/wiki/Logical%20equality
Logical equality is a logical operator that corresponds to equality in Boolean algebra and to the logical biconditional in propositional calculus. It gives the functional value true if both functional arguments have the same logical value, and false if they are different. It is customary practice in various applications, if not always technically precise, to indicate the operation of logical equality on the logical operands x and y by any of the following forms: Some logicians, however, draw a firm distinction between a functional form, like those in the left column, which they interpret as an application of a function to a pair of arguments — and thus a mere indication that the value of the compound expression depends on the values of the component expressions — and an equational form, like those in the right column, which they interpret as an assertion that the arguments have equal values, in other words, that the functional value of the compound expression is true. Definition Logical equality is an operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both operands are false or both operands are true. The truth table of p EQ q (also written as p = q, p ↔ q, Epq, p ≡ q, or p == q) is as follows: {| class="wikitable" style="text-align:center" |+ Logical equality ! p ! q ! p = q |- | 0 || 0 || 1 |- | 0 || 1 || 0 |- | 1 || 0 || 0 |- | 1 || 1 || 1 |} Alternative descriptions The form (x = y) is equivalent to the form (x ∧ y) ∨ (¬x ∧ ¬y). For the operands x and y, the truth table of the logical equality operator is as follows: {| class="wikitable" ! colspan="2" rowspan="2" | !! colspan="2" | y |- ! T !! F |- ! rowspan="2" | x !! T | style="padding: 1em;" | T | style="padding: 1em;" | F |- ! F | style="padding: 1em;" | F | style="padding: 1em;" | T |} Inequality In mathematics, the plus sign "+" almost invariably indicates an operation that satisfies the axioms assigned to addition in the t
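In a programming language, logical equality is the complement of exclusive-or (XNOR); a small Python illustration of the truth table above:

```python
def eq(p, q):
    """Logical equality (XNOR): true iff p and q have the same truth value."""
    return not (p ^ q)

# The same operator expressed as (p AND q) OR (NOT p AND NOT q),
# matching the alternative description in the text.
def eq_alt(p, q):
    return (p and q) or (not p and not q)

# Enumerate the truth table: rows (0,0), (0,1), (1,0), (1,1).
table = [(p, q, eq(bool(p), bool(q))) for p in (0, 1) for q in (0, 1)]
```

The resulting column of values is 1, 0, 0, 1, in agreement with the table above.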
https://en.wikipedia.org/wiki/Bit%20slicing
Bit slicing is a technique for constructing a processor from modules of processors of smaller bit width, for the purpose of increasing the word length; in theory to make an arbitrary n-bit central processing unit (CPU). Each of these component modules processes one bit field or "slice" of an operand. The grouped processing components would then have the capability to process the chosen full word-length of a given software design. Bit slicing more or less died out due to the advent of the microprocessor. Recently it has been used in arithmetic logic units (ALUs) for quantum computers and as a software technique, e.g. for cryptography in x86 CPUs. Operational details Bit-slice processors (BSPs) usually include a 1-, 2-, 4-, 8- or 16-bit arithmetic logic unit (ALU) and control lines (including carry or overflow signals that are internal to the processor in non-bitsliced CPU designs). For example, two 4-bit ALU chips could be arranged side by side, with control lines between them, to form an 8-bit ALU (the result need not be a power of two, e.g. three 1-bit units can make a 3-bit ALU and thus a 3-bit or, in general, n-bit CPU, although no 3-bit CPU, or any CPU with a higher odd number of bits, has been manufactured and sold in volume). Four 4-bit ALU chips could be used to build a 16-bit ALU. It would take eight chips to build a 32-bit word ALU. The designer could add as many slices as required to manipulate longer word lengths. A microsequencer or control ROM would be used to execute logic to provide data and control signals to regulate function of the component ALUs. Known bit-slice microprocessors: 2-bit slice: Intel 3000 family (1974, now discontinued), e.g. Intel 3002 with Intel 3001, second-sourced by Signetics and Intersil Signetics 8X02 family (1977, now discontinued) 4-bit slice: National IMP family, consisting primarily of the IMP-00A/520 RALU (also known as MM5750) and various masked ROM microcode and control chips (CROMs, also known as MM5751) National GPC/P / IMP-4 (1973), seco
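The carry chaining between slices described above can be sketched in a few lines. This Python model is purely illustrative (it is not modeled on any specific chip); it builds an 8-bit adder from two 4-bit "slices", with the low slice's carry-out wired to the high slice's carry-in:

```python
def alu4_add(a, b, carry_in):
    """A 4-bit ALU 'slice': add two 4-bit operands plus a carry-in,
    returning (4-bit sum, carry-out) as the chip's control line would."""
    total = (a & 0xF) + (b & 0xF) + carry_in
    return total & 0xF, total >> 4

def add8(a, b):
    """Chain two 4-bit slices into an 8-bit adder: the low slice's
    carry-out feeds the high slice's carry-in, exactly the role of the
    control lines between bit-slice ALU chips."""
    lo, c = alu4_add(a & 0xF, b & 0xF, 0)
    hi, c = alu4_add(a >> 4, b >> 4, c)
    return (hi << 4) | lo, c

value, carry = add8(0x7F, 0x01)   # 127 + 1 = 128, no carry out
```

Adding more slices in the same fashion widens the word: four slices give a 16-bit adder, eight give 32 bits.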
https://en.wikipedia.org/wiki/Dolores%20Project
The Dolores Project, located in the Dolores and San Juan River basins in southwestern Colorado, uses water from the Dolores River for irrigation, municipal and industrial use, recreation, fish and wildlife, and production of hydroelectric power. It also provides flood control and aids in economic redevelopment. The primary storage of Dolores River flows for all project purposes is provided by the McPhee Reservoir. Service is provided to the northwest Dove Creek area, central Montezuma Valley area, and south to the Towaoc area on the Ute Mountain Ute Indian Reservation. Irrigation water is available for . External links U.S. Department of the Interior, Bureau of Reclamation United States Bureau of Reclamation Colorado River Storage Project
https://en.wikipedia.org/wiki/Lancelot%20Hogben
Lancelot Thomas Hogben FRS FRSE (9 December 1895 – 22 August 1975) was a British experimental zoologist and medical statistician. He developed the African clawed frog (Xenopus laevis) as a model organism for biological research in his early career, attacked the eugenics movement in the middle of his career, and wrote popular books on science, mathematics and language in his later career. Early life and education Hogben was born and raised in Southsea near Portsmouth in Hampshire. His parents were Methodists. He attended Tottenham County School in London, his family having moved to Stoke Newington, where his mother had grown up, in 1907, and then as a medical student studied physiology at Trinity College, Cambridge. Hogben had matriculated into the University of London as an external student before he could apply to Cambridge and he graduated as a Bachelor of Science (BSc) in 1914. He took his Cambridge degree in 1915, graduating with an Ordinary BA. He had acquired socialist convictions, changing the name of the university's Fabian Society to Socialist Society and went on to become an active member of the Independent Labour Party. Later in life he preferred to describe himself as 'a scientific humanist'. In the First World War he was a pacifist, and joined the Quakers. He worked for six months with the Red Cross in France, under the auspices of the Friends' War Victims Relief Service and then the Friends' Ambulance Unit. He then returned to Cambridge, and was imprisoned in Wormwood Scrubs as a conscientious objector in 1916. His health collapsed and he was released in 1917. His brother George was also a conscientious objector, serving with the Friends' Ambulance Unit. Career After a year's convalescence he took lecturing positions in London universities and in 1921 he became a Doctor of Science (D.Sc.) in Zoology of the University of London. He moved in 1922 to the University of Edinburgh and its Animal Breeding Research Department. In 1923, Hogben was a founder
https://en.wikipedia.org/wiki/Degree%20of%20a%20continuous%20mapping
In topology, the degree of a continuous mapping between two compact oriented manifolds of the same dimension is a number that represents the number of times that the domain manifold wraps around the range manifold under the mapping. The degree is always an integer, but may be positive or negative depending on the orientations. The degree of a map was first defined by Brouwer, who showed that the degree is homotopy invariant (invariant among homotopies), and used it to prove the Brouwer fixed point theorem. In modern mathematics, the degree of a map plays an important role in topology and geometry. In physics, the degree of a continuous map (for instance a map from space to some order parameter set) is one example of a topological quantum number. Definitions of the degree From Sn to Sn The simplest and most important case is the degree of a continuous map from the n-sphere Sn to itself (in the case n = 1, this is called the winding number): Let f : Sn → Sn be a continuous map. Then f induces a homomorphism f∗ : Hn(Sn) → Hn(Sn), where Hn(Sn) is the nth homology group. Considering the fact that Hn(Sn) ≅ Z, we see that f∗ must be of the form f∗ : x ↦ αx for some fixed integer α. This α is then called the degree of f. Between manifolds Algebraic topology Let X and Y be closed connected oriented m-dimensional manifolds. Orientability of a manifold implies that its top homology group is isomorphic to Z. Choosing an orientation means choosing a generator of the top homology group. A continuous map f : X → Y induces a homomorphism f∗ from Hm(X) to Hm(Y). Let [X], resp. [Y] be the chosen generator of Hm(X), resp. Hm(Y) (or the fundamental class of X, Y). Then the degree of f is defined to be f∗([X]). In other words, f∗([X]) = deg(f) [Y]. If y in Y and f −1(y) is a finite set, the degree of f can be computed by considering the m-th local homology groups of X at each point in f −1(y). Differential topology In the language of differential topology, the degree of a smooth map can be defined as follows: If f is a smooth map whose domain is a compact manifold and p
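For maps of the circle (the case n = 1 above), the degree is the winding number and can be estimated numerically by accumulating the lifted phase change around one loop. This sampling approach is an illustration of the concept, not an algorithm from the text:

```python
import math

def degree(f, samples=2000):
    """Estimate the degree (winding number) of a continuous map
    f: S^1 -> S^1, given as a function returning an angle.  Accumulate
    the unwrapped phase change of f over one loop; the total divided
    by 2*pi is an integer, the degree."""
    total = 0.0
    prev = f(0.0)
    for i in range(1, samples + 1):
        cur = f(2 * math.pi * i / samples)
        d = cur - prev
        d = (d + math.pi) % (2 * math.pi) - math.pi   # unwrap into [-pi, pi)
        total += d
        prev = cur
    return round(total / (2 * math.pi))

# z -> z^3 on the unit circle (theta -> 3*theta) has degree 3;
# an orientation-reversing double cover (theta -> -2*theta) has degree -2;
# a constant map has degree 0.
```

The sampling must be fine enough that consecutive phase differences stay below pi, otherwise the unwrapping is ambiguous.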
https://en.wikipedia.org/wiki/Castelnuovo%E2%80%93de%20Franchis%20theorem
In mathematics, the Castelnuovo–de Franchis theorem is a classical result on complex algebraic surfaces. Let X be such a surface, projective and non-singular, and let ω1 and ω2 be two differentials of the first kind on X which are linearly independent but with wedge product 0. Then this data can be represented as a pullback of an algebraic curve: there is a non-singular algebraic curve C, a morphism φ: X → C, and differentials of the first kind ω′1 and ω′2 on C such that φ*(ω′1) = ω1 and φ*(ω′2) = ω2. This result is due to Guido Castelnuovo and Michele de Franchis (1875–1946). The converse, that two such pullbacks would have wedge product 0, is immediate. See also de Franchis theorem References Algebraic surfaces Theorems in geometry
https://en.wikipedia.org/wiki/De%20Franchis%20theorem
In mathematics, the de Franchis theorem is one of a number of closely related statements applying to compact Riemann surfaces, or, more generally, algebraic curves, X and Y, in the case of genus g > 1. The simplest is that the automorphism group of X is finite (though see Hurwitz's automorphisms theorem). More generally, the set of non-constant morphisms from X to Y is finite; fixing X, for all but a finite number of such Y, there is no non-constant morphism from X to Y. These results are named for Michele de Franchis (1875–1946). It is sometimes referenced as the de Franchis–Severi theorem. It was used in an important way by Gerd Faltings to prove the Mordell conjecture. See also Castelnuovo–de Franchis theorem References M. De Franchis: Un teorema sulle involuzioni irrazionali, Rend. Circ. Mat. Palermo 36 (1913), 368 Algebraic curves Riemann surfaces Theorems in algebraic geometry Theorems in algebraic topology
https://en.wikipedia.org/wiki/Tate%20twist
In number theory and algebraic geometry, the Tate twist, named after John Tate, is an operation on Galois modules. For example, if K is a field, GK is its absolute Galois group, and ρ : GK → AutQp(V) is a representation of GK on a finite-dimensional vector space V over the field Qp of p-adic numbers, then the Tate twist of V, denoted V(1), is the representation on the tensor product V⊗Qp(1), where Qp(1) is the p-adic cyclotomic character (i.e. the Tate module of the group of roots of unity in the separable closure Ks of K). More generally, if m is a positive integer, the mth Tate twist of V, denoted V(m), is the tensor product of V with the m-fold tensor product of Qp(1). Denoting by Qp(−1) the dual representation of Qp(1), the (−m)th Tate twist of V can be defined as V(−m) = V ⊗ Qp(−1)⊗m. References Number theory Algebraic geometry
https://en.wikipedia.org/wiki/Axiality%20and%20rhombicity
In physics and mathematics, axiality and rhombicity are two characteristics of a symmetric second-rank tensor in three-dimensional Euclidean space, describing its directional asymmetry. Let A denote a second-rank tensor in R3, which can be represented by a 3-by-3 matrix. We assume that A is symmetric. This implies that A has three real eigenvalues, which we denote by , and . We assume that they are ordered such that The axiality of A is defined by The rhombicity is the difference between the smallest and the second-smallest eigenvalue: Other definitions of axiality and rhombicity differ from the ones given above by constant factors which depend on the context. For example, when using them as parameters in the irreducible spherical tensor expansion, it is most convenient to divide the above definition of axiality by and that of rhombicity by . Applications The description of physical interactions in terms of axiality and rhombicity is frequently encountered in spin dynamics and, in particular, in spin relaxation theory, where many traceless bilinear interaction Hamiltonians, having the (eigenframe) form (hats denote spin projection operators) may be conveniently rotated using rank 2 irreducible spherical tensor operators: where are Wigner functions, are Euler angles, and the expressions for the rank 2 irreducible spherical tensor operators are: Defining Hamiltonian rotations in this way (axiality, rhombicity, three angles) significantly simplifies calculations, since the properties of Wigner functions are well understood. References D.M. Brink and G.R. Satchler, Angular momentum, 3rd edition, 1993, Oxford: Clarendon Press. D.A. Varshalovich, A.N. Moskalev, V.K. Khersonski, Quantum theory of angular momentum: irreducible tensors, spherical harmonics, vector coupling coefficients, 3nj symbols, 1988, Singapore: World Scientific Publications. I. Kuprov, N. Wagner-Rundell, P.J. Hore, J. Magn. Reson., 2007 (184) 196-206. Article Tensors
https://en.wikipedia.org/wiki/Fano%20variety
In algebraic geometry, a Fano variety, introduced by Gino Fano in , is a complete variety X whose anticanonical bundle KX* is ample. In this definition, one could assume that X is smooth over a field, but the minimal model program has also led to the study of Fano varieties with various types of singularities, such as terminal or klt singularities. Recently, techniques in differential geometry have been applied to the study of Fano varieties over the complex numbers, and success has been found in constructing moduli spaces of Fano varieties and proving the existence of Kähler–Einstein metrics on them through the study of K-stability of Fano varieties. Examples The fundamental examples of Fano varieties are the projective spaces: the anticanonical line bundle of Pn over a field k is O(n+1), which is very ample (over the complex numbers, its curvature is n+1 times the Fubini–Study symplectic form). Let D be a smooth codimension-1 subvariety in Pn. The adjunction formula implies that KD = (KX + D)|D = (−(n+1)H + deg(D)H)|D, where H is the class of a hyperplane. The hypersurface D is therefore Fano if and only if deg(D) < n+1. More generally, a smooth complete intersection of hypersurfaces in n-dimensional projective space is Fano if and only if the sum of their degrees is at most n. Weighted projective space P(a0,...,an) is a singular (klt) Fano variety. This is the projective scheme associated to a graded polynomial ring whose generators have degrees a0,...,an. If this is well formed, in the sense that no n of the numbers a have a common factor greater than 1, then any complete intersection of hypersurfaces such that the sum of their degrees is less than a0+...+an is a Fano variety. Every projective variety in characteristic zero that is homogeneous under a linear algebraic group is Fano. Some properties The existence of some ample line bundle on X is equivalent to X being a projective variety, so a Fano variety is always projective. For a Fano variety X over t
https://en.wikipedia.org/wiki/The%20Number%20Devil
The Number Devil: A Mathematical Adventure () is a book for children and young adults that explores mathematics. It was originally written in 1997 in German by Hans Magnus Enzensberger and illustrated by Rotraut Susanne Berner. The book follows a young boy named Robert, who is taught mathematics by a sly "number devil" called Teplotaxl over the course of twelve dreams. The book was met with mostly positive reviews from critics, approving its description of math while praising its simplicity. Its colorful use of fictional mathematical terms and its creative descriptions of concepts have made it a suggested book for both children and adults troubled with math. The Number Devil was a bestseller in Europe, and has been translated into English by Michael Henry Heim. Plot Robert is a young boy who suffers from mathematical anxiety due to his boredom in school. His mother is Mrs. Wilson. He also experiences recurring dreams—including falling down an endless slide or being eaten by a giant fish—but is interrupted from this sleep habit one night by a small devil creature who introduces himself as the Number Devil. Although there are many Number Devils (from Number Heaven), Robert only knows him as the Number Devil before learning of his actual name, Teplotaxl, later in the story. Over the course of twelve dreams, the Number Devil teaches Robert mathematical principles. On the first night, the Number Devil appears to Robert in an oversized world and introduces the number one. The next night, the Number Devil emerges in a forest of trees shaped like "ones" and explains the necessity of the number zero, negative numbers, and introduces hopping, a fictional term to describe exponentiation. On the third night, the Number Devil brings Robert to a cave and reveals how prima-donna numbers (prime numbers) can only be divided by themselves and one without a remainder. Later, on the fourth night, the Number Devil teaches Robert about rutabagas, another fictional term to depict squar
https://en.wikipedia.org/wiki/List%20of%20large%20cardinal%20properties
This page includes a list of cardinals with large cardinal properties. It is arranged roughly in order of the consistency strength of the axiom asserting the existence of cardinals with the given property. Existence of a cardinal number κ of a given type implies the existence of cardinals of most of the types listed above that type, and for most listed cardinal descriptions φ of lesser consistency strength, Vκ satisfies "there is an unbounded class of cardinals satisfying φ". The following table usually arranges cardinals in order of consistency strength, with size of the cardinal used as a tiebreaker. In a few cases (such as strongly compact cardinals) the exact consistency strength is not known and the table uses the current best guess. "Small" cardinals: 0, 1, 2, ..., ,..., , ... (see Aleph number) worldly cardinals weakly and strongly inaccessible, α-inaccessible, and hyper inaccessible cardinals weakly and strongly Mahlo, α-Mahlo, and hyper Mahlo cardinals. reflecting cardinals weakly compact (= Π-indescribable), Π-indescribable, totally indescribable cardinals λ-unfoldable, unfoldable cardinals, ν-indescribable cardinals and λ-shrewd, shrewd cardinals (not clear how these relate to each other). ethereal cardinals, subtle cardinals almost ineffable, ineffable, n-ineffable, totally ineffable cardinals remarkable cardinals α-Erdős cardinals (for countable α), 0# (not a cardinal), γ-iterable, γ-Erdős cardinals (for uncountable γ) almost Ramsey, Jónsson, Rowbottom, Ramsey, ineffably Ramsey, completely Ramsey, strongly Ramsey, super Ramsey cardinals measurable cardinals, 0† λ-strong, strong cardinals, tall cardinals Woodin, weakly hyper-Woodin, Shelah, hyper-Woodin cardinals superstrong cardinals (=1-superstrong; for n-superstrong for n≥2 see further down.) subcompact, strongly compact (Woodin< strongly compact≤supercompact), supercompact, hypercompact cardinals η-extendible, extendible cardinals Vopěnka cardinals, Shelah for supercompactness,
https://en.wikipedia.org/wiki/Amphotropism
Amphotropism or amphotropic indicates that a pathogen like a virus or a bacterium has a wide host range and can infect more than one species or cell culture line. See also Tropism, a list of tropisms Ecotropism, indicating a narrow host range Ecology terminology
https://en.wikipedia.org/wiki/Data%20Interchange%20Format
Data Interchange Format (.dif) is a text file format used to import/export single spreadsheets between spreadsheet programs. Applications that still support the DIF format are Collabora Online, Excel, Gnumeric, and LibreOffice Calc. Historical applications that supported it until they reached end of life or no longer acknowledge support of the format are dBase, FileMaker, Framework, Lotus 1-2-3, Multiplan, OpenOffice.org Calc and StarCalc. A limitation of the DIF format is that it cannot handle multiple spreadsheets in a single workbook. Due to the similarity in abbreviation and in age (both date to the early 1980s), the DIF spreadsheet format is often confused with Navy DIF; Navy DIF, however, is an unrelated "document interchange format" for word processors. History DIF was developed by Software Arts, Inc. (the developers of the VisiCalc program) in the early 1980s. The specification was included in many copies of VisiCalc, and published in Byte Magazine. Bob Frankston developed the format, with input from others, including Mitch Kapor, who helped so that it could work with his VisiPlot program. (Kapor later went on to found Lotus and make Lotus 1-2-3 happen.) The specification was copyright 1981. DIF was a registered trademark of Software Arts Products Corp. (a legal name for Software Arts at the time). Syntax DIF stores everything in an ASCII text file to mitigate many of the cross-platform issues of its day. However, modern spreadsheet software, e.g. OpenOffice.org Calc and Gnumeric, offers more character encodings for export/import. The file is divided into 2 sections: header and data. Everything in DIF is represented by a 2- or 3-line chunk. Headers get a 3-line chunk; data, 2. Header chunks start with a text identifier that is all caps, only alphabetic characters, and fewer than 32 letters. The following line must be a pair of numbers, and the third line must be a quoted string. On the other hand, data chunks start with a number pair and
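The chunk structure described above is simple enough to parse by hand. Below is a minimal Python sketch of a header reader; the function name and the sample data are illustrative, not part of the specification:

```python
def parse_dif_header(text):
    """Parse DIF header items, each a 3-line chunk: an all-caps
    identifier, a 'vector,value' number pair, and a quoted string.
    Stops after the DATA item, which marks the start of the data
    section.  A minimal sketch, not a full DIF reader."""
    lines = text.splitlines()
    items = {}
    for i in range(0, len(lines) - 2, 3):
        topic = lines[i]
        vector, value = (int(n) for n in lines[i + 1].split(","))
        label = lines[i + 2].strip('"')
        items[topic] = (vector, value, label)
        if topic == "DATA":        # header ends; data chunks follow
            break
    return items

# A tiny hand-written header, as a typical exporter might emit it.
sample = ('TABLE\n0,1\n"EXCEL"\n'
          'VECTORS\n0,2\n""\n'
          'TUPLES\n0,3\n""\n'
          'DATA\n0,0\n""\n')
hdr = parse_dif_header(sample)
```

A real reader would go on to consume the 2-line data chunks that follow the DATA item.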
https://en.wikipedia.org/wiki/Runtime%20verification
Runtime verification is a computing system analysis and execution approach based on extracting information from a running system and using it to detect and possibly react to observed behaviors satisfying or violating certain properties. Some very particular properties, such as datarace and deadlock freedom, are typically desired to be satisfied by all systems and may be best implemented algorithmically. Other properties can be more conveniently captured as formal specifications. Runtime verification specifications are typically expressed in trace predicate formalisms, such as finite state machines, regular expressions, context-free patterns, linear temporal logics, etc., or extensions of these. This allows for a less ad-hoc approach than normal testing. However, any mechanism for monitoring an executing system is considered runtime verification, including verifying against test oracles and reference implementations. When formal requirements specifications are provided, monitors are synthesized from them and infused within the system by means of instrumentation. Runtime verification can be used for many purposes, such as security or safety policy monitoring, debugging, testing, verification, validation, profiling, fault protection, behavior modification (e.g., recovery), etc. Runtime verification avoids the complexity of traditional formal verification techniques, such as model checking and theorem proving, by analyzing only one or a few execution traces and by working directly with the actual system, thus scaling up relatively well and giving more confidence in the results of the analysis (because it avoids the tedious and error-prone step of formally modelling the system), at the expense of less coverage. Moreover, through its reflective capabilities runtime verification can be made an integral part of the target system, monitoring and guiding its execution during deployment. History and context Checking formally or informally specified properties again
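A monitor synthesized from a formal specification can be as simple as a finite state machine fed by instrumented events. The Python sketch below monitors an invented safety property, "no access after close"; the property, the state names, and the event names are all illustrative:

```python
class Monitor:
    """A minimal runtime-verification monitor: a finite state machine
    (written by hand here; in practice synthesized from a spec) that
    consumes events emitted by an instrumented system and flags a
    violation as soon as the property 'no access after close' breaks."""
    def __init__(self):
        self.state = "open"
        self.violated = False

    def step(self, event):
        if self.state == "open" and event == "close":
            self.state = "closed"
        elif self.state == "closed" and event in ("read", "write"):
            self.state = "error"      # trap state: property violated
            self.violated = True
        return self.violated

m = Monitor()
for e in ["read", "write", "close", "write"]:   # an instrumented event trace
    m.step(e)
```

The final "write" after "close" drives the monitor into its trap state, at which point a real system could log, recover, or halt.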
https://en.wikipedia.org/wiki/Kerala%20school%20of%20astronomy%20and%20mathematics
The Kerala school of astronomy and mathematics or the Kerala school was a school of mathematics and astronomy founded by Madhava of Sangamagrama in Tirur, Malappuram, Kerala, India, which included among its members: Parameshvara, Neelakanta Somayaji, Jyeshtadeva, Achyuta Pisharati, Melpathur Narayana Bhattathiri and Achyuta Panikkar. The school flourished between the 14th and 16th centuries and its original discoveries seem to have ended with Narayana Bhattathiri (1559–1632). In attempting to solve astronomical problems, the Kerala school independently discovered a number of important mathematical concepts. Their most important results—series expansion for trigonometric functions—were described in Sanskrit verse in a book by Neelakanta called Tantrasangraha, and again in a commentary on this work, called Tantrasangraha-vakhya, of unknown authorship. The theorems were stated without proof, but proofs for the series for sine, cosine, and inverse tangent were provided a century later in the work Yuktibhasa, written in Malayalam, by Jyesthadeva, and also in a commentary on Tantrasangraha. Their work, completed two centuries before the invention of calculus in Europe, provided what is now considered the first example of a power series (apart from geometric series). Background Islamic scholars nearly developed a general formula for finding integrals of polynomials by 1000 AD—and evidently could find such a formula for any polynomial in which they were interested. But, it appears, they were not interested in any polynomial of degree higher than four, at least in any of the material that has come down to us. Indian scholars, on the other hand, were by the year 1600 able to use formulas similar to ibn al-Haytham's sum formula for arbitrary integral powers in calculating power series for the functions in which they were interested. By the same time, they also knew how to calculate the differentials of these functions. So some of the basic ideas of calculus were know
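In modern notation, the series for sine, cosine, and inverse tangent mentioned above (now commonly called the Madhava series) read:

```latex
% Modern statements of the series attributed to the Kerala school:
\begin{align}
  \sin x    &= x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \\
  \cos x    &= 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots \\
  \arctan x &= x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots,
              \qquad |x| \le 1 .
\end{align}
```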
https://en.wikipedia.org/wiki/Super%20I/O
Super I/O is a class of I/O controller integrated circuits that began to be used on personal computer motherboards in the late 1980s, originally as add-in cards, later embedded on the motherboards. A super I/O chip combines interfaces for a variety of low-bandwidth devices. Today its functions are mostly merged into the embedded controller (EC). The functions below are usually provided by the super I/O if they are on the motherboard: A floppy-disk controller An IEEE 1284-compatible parallel port (commonly used for printers) One or more 16C550-compatible serial port UARTs Keyboard controller for PS/2 keyboard and/or mouse Most Super I/O chips include some additional low-speed devices, such as: Temperature, voltage, and fan speed interface Thermal Zone Chassis intrusion detection Mainboard power management LED management PWM fan speed control An IrDA port controller A game port (no longer provided by recent super I/O chips, since Windows XP was the last Windows OS to support a game port, unless the vendor supplies a custom driver for a later OS) A watchdog timer A consumer IR receiver A MIDI port Some GPIO pins Legacy Plug and Play or ACPI support for the included devices By combining many functions in a single chip, the number of parts needed on a motherboard is reduced, thus reducing the cost of production. The original super I/O chips communicated with the central processing unit via the ISA bus. With the evolution away from ISA towards use of the PCI bus, the Super I/O chip was often the biggest remaining reason for continuing inclusion of ISA on the motherboard. Later super I/O chips use the LPC bus instead of ISA for communication with the central processing unit. This normally occurs through an LPC interface on the southbridge chip of the motherboard. Since Intel is replacing the LPC bus with the eSPI bus, super I/O chips that connect to that bus have appeared on the market. Companies that make super I/O controllers include Nuvoton (which has incorporated Winbond), Fintek Inc., ENE
https://en.wikipedia.org/wiki/Gonality%20of%20an%20algebraic%20curve
In mathematics, the gonality of an algebraic curve C is defined as the lowest degree of a nonconstant rational map from C to the projective line. In more algebraic terms, if C is defined over the field K and K(C) denotes the function field of C, then the gonality is the minimum value taken by the degrees of field extensions K(C)/K(f) of the function field over its subfields generated by single functions f. If K is algebraically closed, then the gonality is 1 precisely for curves of genus 0. The gonality is 2 for curves of genus 1 (elliptic curves) and for hyperelliptic curves (this includes all curves of genus 2). For genus g ≥ 3 it is no longer the case that the genus determines the gonality. The gonality of the generic curve of genus g is the floor function of (g + 3)/2. Trigonal curves are those with gonality 3, and this case gave rise to the name in general. Trigonal curves include the Picard curves, of genus three and given by an equation y3 = Q(x) where Q is of degree 4. The gonality conjecture, of M. Green and R. Lazarsfeld, predicts that the gonality of the algebraic curve C can be calculated by homological algebra means, from a minimal resolution of an invertible sheaf of high degree. In many cases the gonality is two more than the Clifford index. The Green–Lazarsfeld conjecture is an exact formula in terms of the graded Betti numbers for a degree d embedding in r dimensions, for d large with respect to the genus. Writing b(C), with respect to a given such embedding of C and the minimal free resolution for its homogeneous coordinate ring, for the minimum index i for which βi, i + 1 is zero, then the conjectured formula for the gonality is r + 1 − b(C). According to the 1900 ICM talk of Federico Amodeo, the notion (but not the terminology) originated in Section V of Riemann's Theory of Abelian Functions. Amodeo used the term "gonalità" as early as 1893.
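Restating the two equivalent definitions and the generic-genus formula from the prose above in symbols:

```latex
% Gonality as minimal map degree, and as minimal function-field degree:
\operatorname{gon}(C)
  = \min\{\deg f \mid f \colon C \to \mathbb{P}^1 \text{ nonconstant}\}
  = \min_{f \in K(C) \setminus K} [K(C) : K(f)],
% and for the generic curve of genus g:
\operatorname{gon}(C) = \left\lfloor \frac{g+3}{2} \right\rfloor .
```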
https://en.wikipedia.org/wiki/K%C3%B6the%20conjecture
In mathematics, the Köthe conjecture is a problem in ring theory that remains open. It is formulated in various ways. Suppose that R is a ring. One way to state the conjecture is that if R has no nil ideal, other than {0}, then it has no nil one-sided ideal, other than {0}. This question was posed in 1930 by Gottfried Köthe (1905–1989). The Köthe conjecture has been shown to be true for various classes of rings, such as polynomial identity rings and right Noetherian rings, but a general solution remains elusive. Equivalent formulations The conjecture has several different formulations: (Köthe conjecture) In any ring, the sum of two nil left ideals is nil. In any ring, the sum of two one-sided nil ideals is nil. In any ring, every nil left or right ideal of the ring is contained in the upper nil radical of the ring. For any ring R and for any nil ideal J of R, the matrix ideal Mn(J) is a nil ideal of Mn(R) for every n. For any ring R and for any nil ideal J of R, the matrix ideal M2(J) is a nil ideal of M2(R). For any ring R, the upper nilradical of Mn(R) is the set of matrices with entries from the upper nilradical of R for every positive integer n. For any ring R and for any nil ideal J of R, the polynomials with indeterminate x and coefficients from J lie in the Jacobson radical of the polynomial ring R[x]. For any ring R, the Jacobson radical of R[x] consists of the polynomials with coefficients from the upper nilradical of R. Related problems A conjecture by Amitsur read: "If J is a nil ideal in R, then J[x] is a nil ideal of the polynomial ring R[x]." This conjecture, if true, would have proven the Köthe conjecture through the equivalent statements above, however a counterexample was produced by Agata Smoktunowicz. While not a disproof of the Köthe conjecture, this fueled suspicions that the Köthe conjecture may be false. Kegel proved that a ring which is the direct sum of two nilpotent subrings is itself nilpotent. The question arose whether or not "nilpoten
https://en.wikipedia.org/wiki/List%20of%20Fibre%20Channel%20switches
Major manufacturers of Fibre Channel switches are: Brocade (Broadcom): Switches: G630, G620, G610, 6520, 6510, 6505, 5300, 5100, VA-40FC, 5000, 4900, 2400, 2800, 3800, 3900, 4100, 300, 200E Directors: DCX X6-8, DCX X6-4, DCX 8510-8, DCX 8510-4, DCX, DCX-4S, 48000, 24000, 12000 More complete list in Brocade Communications Systems article. Cisco: Switches: Cisco MDS 9020, 9120, 9124, 9124e, 9134, 9140, 9148, 9216, 9216A, 9216i, 9222i, 9250i, 9148S, 914T, 9396S, 9396T Nexus 5672UP, 5672UP-16G Directors: Cisco MDS 9506, 9509, 9513, 9706, 9710 and 9718 Juniper Networks: Switches: QFabric QFX3500-48S4Q-ACR, QFX3008-CHASA-BASE, QFX3008-SF16Q, QFX3100-GBE-ACR McDATA (acquired and rebranded by Brocade, now Broadcom): Switches: 3232, 4500, 4700 Directors: 6064, 6140, 10000 QLogic (Marvell): Switches: SANbox 5800, 5802, 5600, 5602, 5200, 3050, 1400 Directors / Modular Chassis Switches: SANbox 9000
https://en.wikipedia.org/wiki/Locally%20nilpotent
In the mathematical field of commutative algebra, an ideal I in a commutative ring A is locally nilpotent at a prime ideal p if Ip, the localization of I at p, is a nilpotent ideal in Ap. In non-commutative algebra and group theory, an algebra or group is locally nilpotent if and only if every finitely generated subalgebra or subgroup is nilpotent. The subgroup generated by the normal locally nilpotent subgroups is called the Hirsch–Plotkin radical and is the generalization of the Fitting subgroup to groups without the ascending chain condition on normal subgroups. A locally nilpotent ring is one in which every finitely generated subring is nilpotent: locally nilpotent rings form a radical class, giving rise to the Levitzki radical.
https://en.wikipedia.org/wiki/Sandin%20Image%20Processor
The Sandin Image Processor is a video synthesizer invented by Dan Sandin and designed between 1971 and 1974. Some called it the "video equivalent of a Moog audio synthesizer." It accepted basic video signals and mixed and modified them in a fashion similar to what a Moog synthesizer did with audio. An analog, modular, real-time video-processing instrument, it provided video-processing performance and produced subtle and delicate video effects of a complexity not seen again until well into the digital video revolution. Its real-time nature led to its use in live theater performance, including "Electronic Visualization Events" where it was seen processing the output of Tom DeFanti's Graphics Symbiosis System. The Sandin Image Processor fostered many imaginative videotapes seen, for example, at early SIGGRAPH conferences. Sandin's instrument, and his personally delivered instruction in video, trained many of the people who were later to engineer the digital video revolution. Physically, an Image Processor system would be built out of modules. Several types of modules were defined; a typical module was an aluminum box containing a circuit board, with video connectors and knobs on the front of the box and a power connector on the back. The modules would be organized in rows. Individual systems could vary in size and increase in power with the addition of more modules. Typical modules would be signal sources, combiners and modifiers, effects modules, sync, color encoder, color decoder, and NTSC video interface. Sandin was an advocate of education and a "copy it right distribution religion". Accordingly, he placed the circuit board layouts with a commercial circuit board company, where anyone could buy them for ordinary manufacturing costs, and freely published schematics and other documentation. A following of video artists, students, and others interested in video electronics would assemble these modules kit style and try to build up the
https://en.wikipedia.org/wiki/Ian%20G.%20Macdonald
Ian Grant Macdonald (11 October 1928 – 8 August 2023) was a British mathematician known for his contributions to symmetric functions, special functions, Lie algebra theory and other aspects of algebra, algebraic combinatorics, and combinatorics. Early life and education Born in London, he was educated at Winchester College and Trinity College, Cambridge, graduating in 1952. Career He then spent five years as a civil servant. He was offered a position at Manchester University in 1957 by Max Newman, on the basis of work he had done while outside academia. In 1960 he moved to the University of Exeter, and in 1963 became a Fellow of Magdalen College, Oxford. Macdonald became Fielden Professor at Manchester in 1972, and professor at Queen Mary College, University of London, in 1976. He worked on symmetric products of algebraic curves, Jordan algebras and the representation theory of groups over local fields. In 1972 he proved the Macdonald identities, after a pattern known to Freeman Dyson. His 1979 book Symmetric Functions and Hall Polynomials has become a classic. Symmetric functions are an old theory, part of the theory of equations, to which both K-theory and representation theory lead. His was the first text to integrate much classical theory, such as Hall polynomials, Schur functions, the Littlewood–Richardson rule, with the abstract algebra approach. It was both an expository work and, in part, a research monograph, and had a major impact in the field. The Macdonald polynomials are now named after him. The Macdonald conjectures from 1982 also proved most influential. Macdonald was elected a Fellow of the Royal Society in 1979. He was an invited speaker in 1970 at the International Congress of Mathematicians (ICM) in Nice and a plenary speaker in 1998 at the ICM in Berlin. In 1991 he received the Pólya Prize of the London Mathematical Society. He was awarded the 2009 Steele Prize for Mathematical Exposition. In 2012 he became a fellow of the American Mathemati
https://en.wikipedia.org/wiki/Ore%20condition
In mathematics, especially in the area of algebra known as ring theory, the Ore condition is a condition introduced by Øystein Ore, in connection with the question of extending beyond commutative rings the construction of a field of fractions, or more generally localization of a ring. The right Ore condition for a multiplicative subset S of a ring R is that for a ∈ R and s ∈ S, the intersection aS ∩ sR is non-empty. A (non-commutative) domain for which the set of non-zero elements satisfies the right Ore condition is called a right Ore domain. The left case is defined similarly. General idea The goal is to construct the right ring of fractions R[S−1] with respect to a multiplicative subset S. In other words, we want to work with elements of the form as−1 and have a ring structure on the set R[S−1]. The problem is that there is no obvious interpretation of the product (as−1)(bt−1); indeed, we need a method to "move" s−1 past b. This means that we need to be able to rewrite s−1b as a product b1s1−1. Suppose s−1b = b1s1−1; then multiplying on the left by s and on the right by s1, we get bs1 = sb1. Hence we see the necessity, for a given a and s, of the existence of a1 ∈ R and s1 ∈ S such that as1 = sa1. Application Since it is well known that each integral domain is a subring of a field of fractions (via an embedding) in such a way that every element is of the form rs−1 with s nonzero, it is natural to ask if the same construction can take a noncommutative domain and associate a division ring (a noncommutative field) with the same property. It turns out that the answer is sometimes "no", that is, there are domains which do not have an analogous "right division ring of fractions". For every right Ore domain R, there is a unique (up to natural R-isomorphism) division ring D containing R as a subring such that every element of D is of the form rs−1 for r in R and s nonzero in R. Such a division ring D is called a ring of right fractions of R, and R is called a right order in D. The notion of a ring of left fractions and le
https://en.wikipedia.org/wiki/Ore%27s%20theorem
Ore's theorem is a result in graph theory proved in 1960 by Norwegian mathematician Øystein Ore. It gives a sufficient condition for a graph to be Hamiltonian, essentially stating that a graph with sufficiently many edges must contain a Hamilton cycle. Specifically, the theorem considers the sum of the degrees of pairs of non-adjacent vertices: if every such pair has a sum that at least equals the total number of vertices in the graph, then the graph is Hamiltonian. Formal statement Let G be a (finite and simple) graph with n ≥ 3 vertices. We denote by deg v the degree of a vertex v in G, i.e. the number of edges in G incident to v. Then, Ore's theorem states that if (∗) deg v + deg w ≥ n for every pair of distinct non-adjacent vertices v and w of G, then G is Hamiltonian. Proof It is equivalent to show that every non-Hamiltonian graph G does not obey condition (∗). Accordingly, let G be a graph on n ≥ 3 vertices that is not Hamiltonian, and let H be formed from G by adding edges one at a time that do not create a Hamiltonian cycle, until no more edges can be added. Let x and y be any two non-adjacent vertices in H. Then adding edge xy to H would create at least one new Hamiltonian cycle, and the edges other than xy in such a cycle must form a Hamiltonian path v1v2⋯vn in H with x = v1 and y = vn. For each index i in the range 2 ≤ i ≤ n, consider the two possible edges in H from v1 to vi and from vi−1 to vn. At most one of these two edges can be present in H, for otherwise the cycle v1v2⋯vi−1vnvn−1⋯vi would be a Hamiltonian cycle. Thus, the total number of edges incident to either v1 or vn is at most equal to the number of choices of i, which is n − 1. Therefore, H does not obey property (∗), which requires that this total number of edges (deg v1 + deg vn) be greater than or equal to n. Since the vertex degrees in G are at most equal to the degrees in H, it follows that G also does not obey property (∗). Algorithm The following simple algorithm constructs a Hamiltonian cycle in a graph meeting Ore's condition. Arrange the vertices arbitrarily into a cycle, ignoring adjacencies in the graph. While the cycle contains two consecutive vertices vi and v
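A Python sketch of the cycle-repair algorithm the section begins to describe. Since the description above is cut off, the segment-reversal step is completed here following the standard presentation of this algorithm, which is an assumption rather than text from this article: while two consecutive vertices of the working cycle are non-adjacent, find a position where reversing a segment replaces the bad pair with two adjacent pairs.

```python
# Sketch: construct a Hamiltonian cycle in a graph satisfying Ore's
# condition. adj[v] is the set of neighbours of vertex v; vertices 0..n-1.
# Each reversal strictly decreases the number of non-adjacent consecutive
# pairs, so the loop terminates.

def hamiltonian_cycle_ore(adj, n):
    cycle = list(range(n))

    def first_bad():
        # position i whose successor on the cycle is not a graph neighbour
        for i in range(n):
            if cycle[(i + 1) % n] not in adj[cycle[i]]:
                return i
        return None

    while (i := first_bad()) is not None:
        # Find j with cycle[i] ~ cycle[j] and cycle[i+1] ~ cycle[j+1];
        # Ore's condition guarantees such a j exists (counting argument).
        for j in range(n):
            if (j != i
                    and cycle[j] in adj[cycle[i]]
                    and cycle[(j + 1) % n] in adj[cycle[(i + 1) % n]]):
                break
        else:
            raise ValueError("graph does not satisfy Ore's condition")
        # Reverse the cyclic segment from position i+1 to position j.
        length = (j - (i + 1)) % n + 1
        pos = [(i + 1 + k) % n for k in range(length)]
        vals = [cycle[p] for p in pos]
        for p, v in zip(pos, reversed(vals)):
            cycle[p] = v
    return cycle
```

For example, the complete bipartite graph K3,3 meets Ore's condition with equality (every non-adjacent pair has degree sum 6 = n), and the procedure returns one of its Hamiltonian cycles.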
https://en.wikipedia.org/wiki/List%20of%20medical%20journals
Medical journals are published regularly to communicate new research to clinicians, medical scientists, and other healthcare workers. This article lists academic journals that focus on the practice of medicine or any medical specialty. Journals are listed alphabetically by journal name, and also grouped by the subfield of medicine they focus on. Journals for other fields of healthcare can be found at List of healthcare journals. Journals by name Journals by specialty Allergy Allergy Journal of Asthma Journal of Asthma & Allergy Educators Anesthesiology Acta Anaesthesiologica Scandinavica Anaesthesia Anesthesia & Analgesia Annals of Cardiac Anaesthesia British Journal of Anaesthesia The Clinical Journal of Pain Current Opinion in Anesthesiology European Journal of Anaesthesiology Korean Journal of Anesthesiology Pain Le Praticien en Anesthésie Réanimation Seminars in Cardiothoracic and Vascular Anesthesia Cleft palate and craniofacial anomalies The Cleft Palate-Craniofacial Journal Dentistry List of dental journals Pharmaceutical sciences Advanced Drug Delivery Reviews Health Economics International Journal of Medical Sciences International Journal of Pharmaceutics Journal of Controlled Release Journal of Enzyme Inhibition and Medicinal Chemistry Pharmacology Alimentary Pharmacology & Therapeutics The Annals of Pharmacotherapy Clinical Pharmacology: Advances and Applications Clinical Pharmacology & Therapeutics Expert Opinion on Drug Delivery Expert Opinion on Drug Discovery Expert Opinion on Drug Metabolism & Toxicology Expert Opinion on Drug Safety Expert Opinion on Emerging Drugs Expert Opinion on Investigational Drugs Expert Opinion on Pharmacotherapy Indian Journal of Pharmacology The Medical Letter on Drugs and Therapeutics Scientia Pharmaceutica Plastic Surgery Annals of Plastic Surgery Plastic and Reconstructive Surgery Psychiatry American Journal of Psychiatry Archives of General Psychiatry Biological Ps
https://en.wikipedia.org/wiki/Code%3A%3ABlocks
Code::Blocks is a free, open-source cross-platform IDE that supports multiple compilers including GCC, Clang and Visual C++. It is developed in C++ using wxWidgets as the GUI toolkit. Using a plugin architecture, its capabilities and features are defined by the provided plugins. Currently, Code::Blocks is oriented towards C, C++, and Fortran. It has a custom build system and optional Make support. Code::Blocks is being developed for Windows and Linux and has been ported to FreeBSD, OpenBSD and Solaris. The latest binary provided for macOS is version 13.12, released on 2013-12-26 (compatible with Mac OS X 10.6 and later), but more recent versions can be compiled, and MacPorts supplies version 17.12. History After releasing two release candidate versions, 1.0rc1 on July 25, 2005 and 1.0rc2 on October 25, 2005, instead of making a final release, the project developers started adding many new features, with the final release being repeatedly postponed. Instead, nightly builds of the latest SVN version were made available. The first stable release was on February 28, 2008, with the version number changed to 8.02. The versioning scheme was changed to that of Ubuntu, with the major and minor number representing the year and month of the release. Version 20.03 is the latest stable release; however, for the most up-to-date version the user can download the relatively stable nightly build or download the source code from SVN. In April 2020, a critical software vulnerability was found in the Code::Blocks IDE v17.12, identified by CVE-2020-10814. Jennic Limited distributes a version of Code::Blocks customized to work with its microcontrollers. Features Compilers Code::Blocks supports multiple compilers, including GCC, MinGW, Digital Mars, Microsoft Visual C++, Borland C++, LLVM Clang, Watcom, LCC and the Intel C++ compiler. Although the IDE was designed for the C++ language, there is some support for other languages, including Fortran and D. A plug-i
https://en.wikipedia.org/wiki/Stationary%20ergodic%20process
In probability theory, a stationary ergodic process is a stochastic process which exhibits both stationarity and ergodicity. In essence this implies that the random process will not change its statistical properties with time and that its statistical properties (such as the theoretical mean and variance of the process) can be deduced from a single, sufficiently long sample (realization) of the process. Stationarity is the property of a random process which guarantees that its statistical properties, such as the mean value, its moments and variance, will not change over time. A stationary process is one whose probability distribution is the same at all times. For more information see stationary process. An ergodic process is one which conforms to the ergodic theorem. The theorem allows the time average of a conforming process to equal the ensemble average. In practice this means that statistical sampling can be performed at one instant across a group of identical processes or sampled over time on a single process with no change in the measured result. A simple example of a violation of ergodicity is a measured process which is the superposition of two underlying processes, each with its own statistical properties. Although the measured process may be stationary in the long term, it is not appropriate to consider the sampled distribution to be the reflection of a single (ergodic) process: the ensemble average is meaningless. Also see ergodic theory and ergodic process. See also Measure-preserving dynamical system References Peebles, P. Z., 2001, Probability, Random Variables and Random Signal Principles, McGraw-Hill Inc., Boston.
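The superposition example above can be illustrated numerically: take each realization to be X_t = C + noise, where C is +1 or −1 chosen once per realization. The process is stationary, yet the time average of a single realization converges to its own C rather than to the ensemble mean 0, so the process is not ergodic. The specific distributions and parameters here are illustrative choices.

```python
# Stationary but non-ergodic: time average != ensemble average.
import numpy as np

rng = np.random.default_rng(0)
T, M = 10_000, 10_000   # samples per realization, number of realizations

def realization():
    c = rng.choice([-1.0, 1.0])              # offset fixed for this realization
    return c + 0.1 * rng.standard_normal(T)  # X_t = C + small noise

# Time average over one realization: close to that realization's C (+/-1).
time_avg = realization().mean()

# Ensemble average at a single instant across M realizations: close to 0.
cs = rng.choice([-1.0, 1.0], size=M)
ensemble_avg = (cs + 0.1 * rng.standard_normal(M)).mean()
```

The two averages disagree by about 1 here, which is exactly the failure of ergodicity: no amount of extra time in a single realization recovers the ensemble statistics.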
https://en.wikipedia.org/wiki/Storage%20service%20provider
A Storage service provider (SSP) is any company that provides computer storage space and related management services. SSPs may also offer periodic backup and archiving. Advantages of managed storage are that more space can be ordered as required. Depending upon the SSP, backups may also be managed. Faster data access can be ordered as required. Also, maintenance costs may be reduced, particularly for larger organizations that store large or increasing volumes of data. Another advantage is that best practices are likely to be followed. Disadvantages are that the cost may be prohibitive for small organizations or individuals who deal with smaller or static volumes of data, and that there is less control of data systems. Types of managed storage Data owners normally access managed storage via a network (LAN), or through a series of networks (Internet). However, managed storage may be directly attached to a workstation or server that is not managed by the SSP. Managed storage generally falls into one of the following categories: locally managed storage remotely managed storage Locally managed storage Advantages of this type of storage include high-speed access to data and greater control over data availability. A disadvantage is that additional space is required at a local site to store the data, as well as limitations of the on-site area. Remotely managed storage Advantages of this type of storage are that it may be used as an off-site backup, it offers global access (depending upon configuration), and adding storage will not require additional space at the local site. However, if the network providing connectivity to the remote data is interrupted, there will be data availability issues, unless distributed file systems are in use. In cloud computing, Storage as a Service (SaaS) involves the provision of off-site storage for data and information. This approach may offer greater reliability, but at a higher cost. See also Application service provider Int
https://en.wikipedia.org/wiki/Thomson%20problem
The objective of the Thomson problem is to determine the minimum electrostatic potential energy configuration of electrons constrained to the surface of a unit sphere that repel each other with a force given by Coulomb's law. The physicist J. J. Thomson posed the problem in 1904 after proposing an atomic model, later called the plum pudding model, based on his knowledge of the existence of negatively charged electrons within neutrally-charged atoms. Related problems include the study of the geometry of the minimum energy configuration and the study of the large-N behavior of the minimum energy. Mathematical statement The electrostatic interaction energy occurring between each pair of electrons of equal charges (qi = qj = −e, with e the elementary charge of an electron) is given by Coulomb's law, Uij = e²/(4πε0 rij), where ε0 is the electric constant and rij = |ri − rj| is the distance between each pair of electrons located at points on the sphere defined by vectors ri and rj, respectively. Simplified units of e = 1 and ke = 1/(4πε0) = 1 (the Coulomb constant) are used without loss of generality. Then, Uij = 1/rij. The total electrostatic potential energy of each N-electron configuration may then be expressed as the sum of all pair-wise interaction energies: U(N) = Σi<j 1/rij. The global minimization of U(N) over all possible configurations of N distinct points is typically found by numerical minimization algorithms. Thomson's problem is related to the 7th of the eighteen unsolved mathematics problems proposed by the mathematician Steve Smale — "Distribution of points on the 2-sphere". The main difference is that in Smale's problem the function to minimise is not the electrostatic potential but a logarithmic potential given by V(N) = Σi<j log(1/rij). A second difference is that Smale's question is about the asymptotic behaviour of the total potential when the number N of points goes to infinity, not for concrete values of N. Example The solution of the Thomson problem for two electrons is obtained when both electrons are as far apart as possible on opposite sides of the origin, r12 = 2, or K
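A minimal numerical sketch of the minimization described above, in the simplified units Uij = 1/rij: projected gradient descent on the unit sphere. The optimizer and its parameters (step size, iteration count, random start) are illustrative choices, not part of the problem statement.

```python
# Projected gradient descent for the Thomson problem on the unit sphere.
import numpy as np

def thomson_energy(x):
    """U(N) = sum over pairs of 1/r_ij for points x (shape (N, 3))."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    iu = np.triu_indices(len(x), k=1)
    return (1.0 / d[iu]).sum()

def minimize_thomson(n, steps=20_000, lr=0.01, seed=1):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)        # start on sphere
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]
        r = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(r, np.inf)                      # ignore self-pairs
        grad = -(diff / r[:, :, None] ** 3).sum(axis=1)  # dU/dx_i
        x -= lr * grad                                   # descend
        x /= np.linalg.norm(x, axis=1, keepdims=True)    # project back
    return thomson_energy(x)
```

For N = 2 this converges to antipodal points with U(2) = 1/2, and for N = 3 to an equilateral triangle on a great circle with U(3) = √3.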
https://en.wikipedia.org/wiki/Glossary%20of%20architecture
This page is a glossary of architecture. See also Outline of architecture List of classical architecture terms Classical order List of architectural vaults List of structural elements Glossary of engineering
https://en.wikipedia.org/wiki/Disk%20pack
Disk packs and disk cartridges were early forms of removable media for computer data storage, introduced in the 1960s. Disk pack A disk pack is a layered grouping of hard disk platters (circular, rigid discs coated with a magnetic data storage surface). A disk pack is the core component of a hard disk drive. In modern hard disks, the disk pack is permanently sealed inside the drive. In many early hard disks, the disk pack was a removable unit, and would be supplied with a protective canister featuring a lifting handle. The protective cover consisted of two parts, a plastic shell, with a handle in the center, that enclosed the top and sides of the disks and a separate bottom that completed the sealed package. To remove the disk pack, the drive would be taken off line and allowed to spin down. Its access door could then be opened and an empty shell inserted and twisted to unlock the disk platter from the drive and secure it to the shell. The assembly would then be lifted out and the bottom cover attached. A different disk pack could then be inserted by removing the bottom and placing the disk pack with its shell into the drive. Turning the handle would lock the disk pack in place and free the shell for removal. The first removable disk pack was invented in 1961 by IBM engineers R. E. Pattison as part of the LCF (Low Cost File) project headed by Jack Harker. The 14-inch (356 mm) diameter disks introduced by IBM became a de facto standard, with many vendors producing disk drives using 14-inch disks in disk packs and cartridges into the 1980s. Examples of disk drives that employed removable disk packs include the IBM 1311, IBM 2311, and the Digital RP04. Disk cartridge An early disk cartridge was a single hard disk platter encased in a protective plastic shell. When the removable cartridge was inserted into the cartridge drive peripheral device, the read/write heads of the drive could access the magnetic data storage surface of the platter through holes in the sh
https://en.wikipedia.org/wiki/Phoning%20home
In computing, phoning home is a term often used to refer to the behavior of security systems that report network location, username, or other such data to another computer. Phoning home may be useful for the proprietor in tracking a missing or stolen computer. In this way, it is frequently performed by mobile computers at corporations. It typically involves a software agent which is difficult to detect or remove. However, phoning home can also be malicious, as in surreptitious communication between end-user applications or hardware and its manufacturers or developers. The traffic may be encrypted to make it difficult or impractical for the end user to determine what data are being transmitted. The Stuxnet attack on Iran's nuclear facilities was facilitated by phone-home technology as reported by The New York Times. Legally phoning home Some uses for the practice are legal in some countries. For example, phoning home could be for access restriction, such as transmitting an authorization key. This was done with the Adobe Creative Suite: Each time one of the programs is opened, it phones home with the serial number. If the serial number is already in use, or a fake, then the program will present the user with the option of entering the correct serial number. If the user refuses, the next time the program loads, it will operate in trial mode until a valid serial number has been entered. However, the method can be thwarted by either disabling the internet connection when starting the program or adding a firewall or Hosts file rule to prevent the program from communicating with the verification server. Phoning home could also be for marketing purposes, such as the "Sony BMG rootkit", which transmits a hash of the currently playing CD back to Sony, or a digital video recorder (DVR) reporting on viewing habits. High-end computing systems such as mainframes have been able to phone home for many years, to alert the manufacturer of hardware problems with the mainframes or
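The hosts-file countermeasure mentioned above amounts to a single entry mapping the verification server's name to an unroutable address; the server name below is hypothetical, for illustration only:

```
# /etc/hosts entry (hypothetical server name) that blackholes a
# program's activation lookups so it cannot phone home:
0.0.0.0 activation.example.com
```

A firewall rule blocking outbound connections to the server achieves the same effect without editing system files.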
https://en.wikipedia.org/wiki/Temporal%20mean
The temporal mean is the arithmetic mean of a series of values over a time period. Assuming equidistant measuring or sampling times, it can be computed as the sum of the values over a period divided by the number of values. A simple moving average can be considered to be a sequence of temporal means over periods of equal duration. (If the time variable is continuous, the average value during the time period is the integral over the period divided by the duration of the period.) See also Moving average References Means
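As a minimal illustration, the temporal mean of equally spaced samples, and a simple moving average as a sequence of temporal means over equal-length windows, can be computed as:

```python
def temporal_mean(values):
    # Arithmetic mean of equally spaced samples over the whole period.
    return sum(values) / len(values)

def moving_average(values, window):
    # Sequence of temporal means over consecutive periods of equal duration.
    return [temporal_mean(values[i:i + window])
            for i in range(len(values) - window + 1)]

samples = [2.0, 4.0, 6.0, 8.0]
print(temporal_mean(samples))      # 5.0
print(moving_average(samples, 2))  # [3.0, 5.0, 7.0]
```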
https://en.wikipedia.org/wiki/Iconoscope
The iconoscope (from the Greek: εἰκών "image" and σκοπεῖν "to look, to see") was the first practical video camera tube to be used in early television cameras. The iconoscope produced a much stronger signal than earlier mechanical designs, and could be used under any well-lit conditions. This was the first fully electronic system to replace earlier cameras, which used special spotlights or spinning disks to capture light from a single very brightly lit spot. Some of the principles of this apparatus were described when Vladimir Zworykin filed two patents for a television system in 1923 and 1925. A research group at Westinghouse Electric Company headed by Zworykin presented the iconoscope to the general public in a press conference in June 1933, and two detailed technical papers were published in September and October of the same year. The German company Telefunken bought the rights from RCA and built the superikonoskop camera used for the historical TV transmission at the 1936 Summer Olympics in Berlin. The iconoscope was replaced in Europe around 1936 by the much more sensitive Super-Emitron and Superikonoskop, while in the United States the iconoscope was the leading camera tube used for broadcasting from 1936 until 1946, when it was replaced by the image orthicon tube. Discovery of a New Physical Phenomenon In a Technikatörténeti Szemle article, subsequently reissued on the internet, entitled The Iconoscope: Kálmán Tihanyi and the Development of Modern Television, Tihanyi's daughter Katalin Tihanyi Glass notes that her father found the "storage principle" included a "new physical phenomenon", the photoconductive effect. Operation The main image forming element in the iconoscope was a mica plate with a pattern of photosensitive granules deposited on the front using an electrically insulating glue. The granules were typically made of silver grains covered with caesium or caesium oxide. The back of the mica plate, opposite the granules, was covered with a thin
https://en.wikipedia.org/wiki/Piezochromism
Piezochromism, from the Greek piezô "to squeeze, to press" and chroma "color", describes the tendency of certain materials to change color with the application of pressure. This effect is closely related to the electronic band gap change, which can be found in plastics, semiconductors (e.g. hybrid perovskites) and hydrocarbons. One simple molecule displaying this property is 5-methyl-2-[(2-nitrophenyl)amino]-3-thiophenecarbonitrile, also known as ROY owing to its red, orange and yellow crystalline forms. Individual yellow and pale orange versions transform reversibly to red at high pressure. References External links Piezochromism Chromism
https://en.wikipedia.org/wiki/Confounding
In causal inference, a confounder (also confounding variable, confounding factor, extraneous determinant or lurking variable) is a variable that influences both the dependent variable and independent variable, causing a spurious association. Confounding is a causal concept, and as such, cannot be described in terms of correlations or associations. The existence of confounders is an important quantitative explanation why correlation does not imply causation. Some notations are explicitly designed to identify the existence, possible existence, or non-existence of confounders in causal relationships between elements of a system. Confounds are threats to internal validity. Definition Confounding is defined in terms of the data generating model. Let X be some independent variable, and Y some dependent variable. To estimate the effect of X on Y, the statistician must suppress the effects of extraneous variables that influence both X and Y. We say that X and Y are confounded by some other variable Z whenever Z causally influences both X and Y. Let P(y | do(x)) be the probability of event Y = y under the hypothetical intervention X = x. X and Y are not confounded if and only if the following holds: P(y | do(x)) = P(y | x) for all values X = x and Y = y, where P(y | x) is the conditional probability upon seeing X = x. Intuitively, this equality states that X and Y are not confounded whenever the observationally witnessed association between them is the same as the association that would be measured in a controlled experiment, with x randomized. In principle, the defining equality P(y | do(x)) = P(y | x) can be verified from the data generating model, assuming we have all the equations and probabilities associated with the model. This is done by simulating an intervention do(X = x) (see Bayesian network) and checking whether the resulting probability of Y equals the conditional probability P(y | x). It turns out, however, that graph structure alone is sufficient for verifying the equality P(y | do(x)) = P(y | x). Control Consider a researcher attempting to assess the effe
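On a small discrete model, the interventional and observational probabilities can be compared by direct arithmetic. The probabilities below are hypothetical, chosen only so that Z causally influences both X and Y; the point is that P(y | do(x)) and P(y | x) then disagree:

```python
# Hypothetical discrete model where Z confounds X and Y.
p_z = {0: 0.5, 1: 0.5}                       # P(Z = z)
p_x1_given_z = {0: 0.2, 1: 0.8}              # P(X = 1 | Z = z): Z pushes X up
p_y1_given_xz = {(0, 0): 0.3, (0, 1): 0.7,   # P(Y = 1 | X = x, Z = z)
                 (1, 0): 0.5, (1, 1): 0.9}

def p_x_given_z(x, z):
    return p_x1_given_z[z] if x == 1 else 1 - p_x1_given_z[z]

def p_y1_do_x(x):
    # Interventional: do(X = x) severs the Z -> X arrow, so Z keeps its
    # marginal distribution and we simply average over P(z).
    return sum(p_z[z] * p_y1_given_xz[(x, z)] for z in (0, 1))

def p_y1_given_x(x):
    # Observational: conditioning on X = x tilts the distribution of Z
    # (Bayes' rule), which is exactly what creates the spurious association.
    p_x = sum(p_z[z] * p_x_given_z(x, z) for z in (0, 1))
    return sum(p_z[z] * p_x_given_z(x, z) * p_y1_given_xz[(x, z)]
               for z in (0, 1)) / p_x

print(p_y1_do_x(1))     # about 0.70: causal effect of setting X = 1
print(p_y1_given_x(1))  # about 0.82: inflated, since X = 1 is evidence for Z = 1
```

Because the two quantities differ, X and Y are confounded in this model; randomizing X would recover the 0.70 figure.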
https://en.wikipedia.org/wiki/Rankine%20vortex
The Rankine vortex is a simple mathematical model of a vortex in a viscous fluid. It is named after its discoverer, William John Macquorn Rankine. The vortices observed in nature are usually modelled with an irrotational (potential or free) vortex. However, in a potential vortex, the velocity becomes infinite at the vortex center. In reality, very close to the origin, the motion resembles a solid body rotation. The Rankine vortex model assumes a solid-body rotation inside a cylinder of radius a and a potential vortex outside the cylinder. The radius a is referred to as the vortex-core radius. The velocity components of the Rankine vortex, expressed in terms of the cylindrical-coordinate system (r, θ, z), are given by u_r = 0, u_z = 0, and u_θ(r) = Γr/(2πa²) for r ≤ a, u_θ(r) = Γ/(2πr) for r > a, where Γ is the circulation strength of the Rankine vortex. Since solid-body rotation is characterized by an azimuthal velocity u_θ = ωr, where ω is the constant angular velocity, one can also use the parameter ω = Γ/(2πa²) to characterize the vortex. The vorticity field associated with the Rankine vortex is ω_z = 2ω for r < a and ω_z = 0 for r ≥ a. At all points inside the core of the Rankine vortex, the vorticity is uniform at twice the angular velocity of the core; whereas vorticity is zero at all points outside the core because the flow there is irrotational. In reality, vortex cores are not always circular; and vorticity is not exactly uniform throughout the vortex core. See also Kaufmann (Scully) vortex – an alternative mathematical simplification for a vortex, with a smoother transition. Lamb–Oseen vortex – the exact solution for a free vortex decaying due to viscosity. Burgers vortex References External links Streamlines vs. Trajectories in a Translating Rankine Vortex: an example of a Rankine vortex imposed on a constant velocity field, with animation. Equations of fluid dynamics Vortices
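A minimal sketch of the velocity profile, using the standard Rankine formulas (u_θ = Γr/(2πa²) inside the core, Γ/(2πr) outside); the numeric values are illustrative:

```python
from math import pi

def rankine_u_theta(r, gamma, a):
    """Azimuthal velocity of a Rankine vortex with circulation gamma and
    core radius a: solid-body rotation inside the core, potential (free)
    vortex outside."""
    if r <= a:
        return gamma * r / (2 * pi * a**2)  # u_theta = omega * r inside
    return gamma / (2 * pi * r)             # u_theta ~ 1/r outside

# The two branches agree at r = a, so the profile is continuous there:
print(rankine_u_theta(1.0, 2 * pi, 1.0))  # 1.0 (value at the core radius)
print(rankine_u_theta(2.0, 2 * pi, 1.0))  # 0.5 (outer branch decays as 1/r)
```

Unlike a pure potential vortex, the velocity stays finite at the center: the inner branch goes to zero linearly as r → 0.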
https://en.wikipedia.org/wiki/RE%20%28complexity%29
In computability theory and computational complexity theory, RE (recursively enumerable) is the class of decision problems for which a 'yes' answer can be verified by a Turing machine in a finite amount of time. Informally, it means that if the answer to a problem instance is 'yes', then there is some procedure that takes finite time to determine this, and this procedure never falsely reports 'yes' when the true answer is 'no'. However, when the true answer is 'no', the procedure is not required to halt; it may go into an "infinite loop" for some 'no' cases. Such a procedure is sometimes called a semi-algorithm, to distinguish it from an algorithm, defined as a complete solution to a decision problem. Similarly, co-RE is the set of all languages that are complements of a language in RE. In a sense, co-RE contains languages of which membership can be disproved in a finite amount of time, but proving membership might take forever. Equivalent definition Equivalently, RE is the class of decision problems for which a Turing machine can list all the 'yes' instances, one by one (this is what 'enumerable' means). Each member of RE is a recursively enumerable set and therefore a Diophantine set. To show this is equivalent, note that if there is a machine E that enumerates all accepted inputs, another machine that takes in a string s can run E and accept if s is ever enumerated. Conversely, if a machine M accepts when an input is in a language, another machine can enumerate all strings in the language by interleaving simulations of M on every input and outputting strings that are accepted (there is an order of execution that will eventually get to every execution step because there are countably many ordered pairs of inputs and steps). Relations to other classes The set of recursive languages (R) is a subset of both RE and co-RE. In fact, it is the intersection of those two classes, because we can decide any problem for which there exists a recogniser and also a co-recogni
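The recogniser-to-enumerator direction can be sketched concretely. Below, a toy semi-decider is modelled as a Python generator (each yield is one "step"; returning True means accept), and the dovetailing schedule advances every running machine by one step per round, so no non-halting run can block the enumeration. The language and machine encoding are illustrative, not a Turing-machine formalism:

```python
from itertools import count, islice, product

def recognizer(s):
    """Toy semi-decider as a generator: yields are computation steps,
    returning True means accept. Language: even-length strings over {a, b}.
    On 'no' instances the machine loops forever, as RE membership allows."""
    if len(s) % 2 == 0:
        for _ in range(len(s)):  # do some work, then accept
            yield
        return True
    while True:                  # never halts on odd-length strings
        yield

def all_strings():
    # All strings over {a, b} in length order: "", "a", "b", "aa", ...
    yield ""
    for n in count(1):
        for letters in product("ab", repeat=n):
            yield "".join(letters)

def enumerate_language():
    """Dovetailing: each round starts the recognizer on one new string and
    advances every running machine by a single step, yielding a string the
    moment its machine halts with acceptance."""
    machines, accepted = [], set()
    inputs = all_strings()
    while True:
        s = next(inputs)
        machines.append((s, recognizer(s)))
        for t, m in machines:
            if t in accepted:
                continue
            try:
                next(m)                # one more step for this machine
            except StopIteration as halt:
                if halt.value:         # halted and accepted
                    accepted.add(t)
                    yield t

print(list(islice(enumerate_language(), 5)))  # ['', 'aa', 'ab', 'ba', 'bb']
```

The enumeration order reflects the dovetailing schedule rather than the length order of the inputs, which is exactly the "order of execution" argument in the text.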
https://en.wikipedia.org/wiki/R%20%28complexity%29
In computational complexity theory, R is the class of decision problems solvable by a Turing machine, which is the set of all recursive languages (also called decidable languages). Equivalent formulations R is equivalent to the set of all total computable functions in the sense that: a decision problem is in R if and only if its indicator function is computable, a total function is computable if and only if its graph is in R. Relationship with other classes Since we can decide any problem for which there exists a recogniser and also a co-recogniser by simply interleaving them until one obtains a result, the class is equal to RE ∩ co-RE. References Blum, Lenore, Mike Shub, and Steve Smale, (1989), "On a theory of computation and complexity over the real numbers: NP-completeness, recursive functions and universal machines", Bulletin of the American Mathematical Society, New Series, 21 (1): 1-46. External links Complexity classes
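The interleaving argument for R = RE ∩ co-RE can be sketched with toy semi-deciders modelled as generators (each yield is one "step"; returning means accept). The predicates and language are illustrative only:

```python
def semi(pred):
    """Build a toy semi-decider as a generator factory: the machine halts
    (accepts) when pred(s) is true and loops forever otherwise."""
    def machine(s):
        while not pred(s):
            yield  # one 'step'; never exits on 'no' instances
    return machine

def decide(s, recognizer, co_recognizer):
    """Alternate single steps of a recogniser for L and one for its
    complement. Exactly one of the two must eventually halt on s, so this
    always terminates: L is decidable."""
    yes, no = recognizer(s), co_recognizer(s)
    while True:
        try:
            next(yes)
        except StopIteration:
            return True    # the L-recogniser halted: s is in L
        try:
            next(no)
        except StopIteration:
            return False   # the complement recogniser halted: s is not in L

# Toy language: even-length strings; its complement: odd-length strings.
recog = semi(lambda s: len(s) % 2 == 0)
co_recog = semi(lambda s: len(s) % 2 == 1)
print(decide("ab", recog, co_recog))  # True
print(decide("a", recog, co_recog))   # False
```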
https://en.wikipedia.org/wiki/G-Market
Gmarket is an e-commerce website based in South Korea. The company was founded in 2000 as a subsidiary of Interpark, and was acquired by eBay in 2009, which subsequently sold it to Shinsegae for 3.4 trillion Korean won. History and Incidents The predecessor of Gmarket was founded in 1999 by Young Bae Ku. At the time, it was part of the online auction company Interpark. In 2000, it spun off as its own website, known as Goodsdaq. In 2003, the website was renamed Gmarket and adopted a customer-to-customer e-commerce business model. In 2006, Gmarket became the first South Korean online company to be listed on the NASDAQ. That same year, it launched its global website with product listings in English. In 2009, eBay acquired Gmarket for approximately 1.2 billion USD after buying Gmarket shares from Interpark and Yahoo. Following the acquisition, Gmarket was delisted from the NASDAQ. After the acquisition, Gmarket underwent an on-site investigation by the Fair Trade Commission over allegations of unfair trade, following a report by 11street accusing it of abusing its market-dominant position; the commission issued a corrective order and a fine, and the company was referred for prosecution. However, the following year, the 6th Division of the Seoul Central District Prosecutor's Office dismissed the case. An official at the prosecution said, "The number of sellers who actually stopped trading with '11street' was so small that the effect of restricting competition was not reached, and it was acknowledged that the eBay Market side performed its usual management and supervision duties to prevent unfair trade, so it was disposed of without charge." Items and services Collectibles, appliances, computers, furniture, equipment, vehicles, and other miscellaneous items are listed, bought, and sold. Large international companies such as LG and Samsung also sell their newest products and offer services using competitive auctions and fixed-priced storefronts. R
https://en.wikipedia.org/wiki/Klee%27s%20measure%20problem
In computational geometry, Klee's measure problem is the problem of determining how efficiently the measure of a union of (multidimensional) rectangular ranges can be computed. Here, a d-dimensional rectangular range is defined to be a Cartesian product of d intervals of real numbers, which is a subset of Rd. The problem is named after Victor Klee, who gave an algorithm for computing the length of a union of intervals (the case d = 1) which was later shown to be optimally efficient in the sense of computational complexity theory. The computational complexity of computing the area of a union of 2-dimensional rectangular ranges is now also known, but the case d ≥ 3 remains an open problem. History and algorithms In 1977, Victor Klee considered the following problem: given a collection of n intervals in the real line, compute the length of their union. He then presented an algorithm to solve this problem with computational complexity (or "running time") O(n log n) — see Big O notation for the meaning of this statement. This algorithm, based on sorting the intervals, was later shown by Michael Fredman and Bruce Weide (1978) to be optimal. Later in 1977, Jon Bentley considered a 2-dimensional analogue of this problem: given a collection of n rectangles, find the area of their union. He also obtained an O(n log n) complexity algorithm, now known as Bentley's algorithm, based on reducing the problem to n 1-dimensional problems: this is done by sweeping a vertical line across the area. Using this method, the area of the union can be computed without explicitly constructing the union itself. Bentley's algorithm is now also known to be optimal (in the 2-dimensional case), and is used in computer graphics, among other areas. These two problems are the 1- and 2-dimensional cases of a more general question: given a collection of n d-dimensional rectangular ranges, compute the measure of their union. This general problem is Klee's measure problem. When generalized to the d-dimensional case, B
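The 1-dimensional case is short enough to sketch: sort the 2n endpoints and sweep left to right, accumulating the stretches covered by at least one interval. This is a minimal O(n log n) illustration in the spirit of Klee's algorithm, not his original presentation:

```python
def union_length(intervals):
    """Length of the union of closed intervals [a, b], via a sweep over
    the sorted endpoints (O(n log n) from the sort)."""
    events = []
    for a, b in intervals:
        events.append((a, +1))  # an interval opens at a
        events.append((b, -1))  # an interval closes at b
    events.sort()
    total, depth, last = 0.0, 0, None
    for x, delta in events:
        if depth > 0:           # at least one interval covers (last, x)
            total += x - last
        depth += delta
        last = x
    return total

print(union_length([(1, 3), (2, 5), (7, 8)]))  # 5.0: [1,5] and [7,8]
print(union_length([(0, 1), (1, 2)]))          # 2.0: touching intervals merge
```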