https://en.wikipedia.org/wiki/HVDC%20converter%20station
An HVDC converter station (or simply converter station) is a specialised type of substation which forms the terminal equipment for a high-voltage direct current (HVDC) transmission line. It converts direct current to alternating current or the reverse. In addition to the converter, the station usually contains: three-phase alternating current switchgear; transformers; capacitors or synchronous condensers for reactive power; filters for harmonic suppression; and direct current switchgear.

Components

Converter

The converter is usually installed in a building called the valve hall. Early HVDC systems used mercury-arc valves, but since the mid-1970s, solid state devices such as thyristors have been used. Converters using thyristors or mercury-arc valves are known as line-commutated converters. In thyristor-based converters, many thyristors are connected in series to form a thyristor valve, and each converter normally consists of six or twelve thyristor valves. The thyristor valves are usually grouped in pairs or groups of four and can stand on insulators on the floor or hang from insulators from the ceiling. Line-commutated converters require voltage from the AC network for commutation, but since the late 1990s, voltage-sourced converters have started to be used for HVDC. Voltage-sourced converters use insulated-gate bipolar transistors instead of thyristors, and these can provide power to a de-energized AC system. Almost all converters used for HVDC are intrinsically able to operate with power conversion in either direction. Power conversion from AC to DC is called rectification and conversion from DC to AC is called inversion.

DC equipment

The direct current equipment often includes a coil (called a reactor) that adds inductance in series with the DC line to help smooth the direct current. The inductance typically amounts to between 0.1 H and 1 H. The smoothing reactor can have either an air core or an iron core. Iron-core coils look like oil-filled high
https://en.wikipedia.org/wiki/TLS%20acceleration
TLS acceleration (formerly known as SSL acceleration) is a method of offloading processor-intensive public-key encryption for Transport Layer Security (TLS) and its predecessor Secure Sockets Layer (SSL) to a hardware accelerator. Typically this means having a separate card that plugs into a PCI slot in a computer and contains one or more coprocessors able to handle much of the SSL processing. TLS accelerators may use off-the-shelf CPUs, but most use custom ASIC and RISC chips to do most of the difficult computational work.

Principle of TLS acceleration operation

The most computationally expensive part of a TLS session is the TLS handshake, in which the TLS server (usually a webserver) and the TLS client (usually a web browser) agree on a number of parameters that establish the security of the connection. During the TLS handshake the server and the client establish session keys (symmetric keys, used for the duration of a given session), but the encryption and signing of the TLS handshake messages themselves is done using asymmetric keys, which requires more computational power than the symmetric cryptography used for the encryption/decryption of the session data. Typically a hardware TLS accelerator will offload processing of the TLS handshake while leaving it to the server software to process the less intense symmetric cryptography of the actual TLS data exchange, but some accelerators handle all TLS operations and terminate the TLS connection, leaving the server to see only decrypted connections. Sometimes data centers employ dedicated servers for TLS acceleration in a reverse proxy configuration.

Central processor support

Modern x86 CPUs support Advanced Encryption Standard (AES) encoding and decoding in hardware, using the AES instruction set proposed by Intel in March 2008. Allwinner Technology provides a hardware cryptographic accelerator in its A10, A20, A30 and A80 ARM system-on-chip series, and all ARM CPUs have acceleration in the later ARMv8 arch
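The cost asymmetry that motivates handshake offload is easy to measure in software. The sketch below uses Python's `cryptography` package (an assumption; any crypto library would do) to time RSA private-key signatures, the kind of per-handshake operation accelerators target, against AES bulk encryption.

```python
import os
import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Asymmetric: one RSA-2048 signature per iteration (handshake-like work).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
msg = os.urandom(32)
t0 = time.perf_counter()
for _ in range(100):
    key.sign(msg, padding.PKCS1v15(), hashes.SHA256())
rsa_per_op = (time.perf_counter() - t0) / 100

# Symmetric: AES-CTR over 16 KiB blocks (session-data-like work).
enc = Cipher(algorithms.AES(os.urandom(32)), modes.CTR(os.urandom(16))).encryptor()
block = os.urandom(16384)
t0 = time.perf_counter()
for _ in range(1000):
    enc.update(block)
aes_per_op = (time.perf_counter() - t0) / 1000

print(f"RSA-2048 sign: {rsa_per_op * 1e6:.0f} us/op")
print(f"AES-CTR, 16 KiB: {aes_per_op * 1e6:.0f} us/op")
```

On commodity hardware the asymmetric operation is typically orders of magnitude slower per call, which is exactly the work a dedicated accelerator or reverse proxy takes off the server.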
https://en.wikipedia.org/wiki/Train%20track%20%28mathematics%29
In the mathematical area of topology, a train track is a family of curves embedded on a surface, meeting the following conditions: The curves meet at a finite set of vertices called switches. Away from the switches, the curves are smooth and do not touch each other. At each switch, three curves meet with the same tangent line, with two curves entering from one direction and one from the other. The main application of train tracks in mathematics is to study laminations of surfaces, that is, partitions of closed subsets of surfaces into unions of smooth curves. Train tracks have also been used in graph drawing.

Train tracks and laminations

A lamination of a surface is a partition of a closed subset of the surface into smooth curves. The study of train tracks was originally motivated by the following observation: if a generic lamination on a surface is looked at from a distance by a myopic person, it will look like a train track. A switch in a train track models a point where two families of parallel curves in the lamination merge to become a single family, as shown in the illustration. Although the switch consists of three curves ending in and intersecting at a single point, the curves in the lamination do not have endpoints and do not intersect each other. For this application of train tracks to laminations, it is often important to constrain the shapes that can be formed by connected components of the surface between the curves of the track. For instance, Penner and Harer require that each such component, when glued to a copy of itself along its boundary to form a smooth surface with cusps, have negative cusped Euler characteristic. A train track with weights, or weighted train track or measured train track, consists of a train track with a non-negative real number, called a weight, assigned to each branch. The weights can be used to model which of the curves in a parallel family of curves from a lamination are split to which sides of the switch. Weights mus
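The sentence above is cut off, but the standard compatibility requirement on weights (an assumption based on the usual definition, not recovered from this excerpt) is the switch condition: at each switch, the weight of the single branch on one side equals the sum of the weights of the two branches on the other side.

```latex
% Switch condition at a switch where a single branch of weight w_1
% meets two branches of weights w_2 and w_3:
\[
  w_1 \;=\; w_2 + w_3, \qquad w_i \ge 0 .
\]
```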
https://en.wikipedia.org/wiki/Stacker
A stacker is a large machine used in bulk material handling. Its function is to pile bulk material such as limestone, ores, coal and cereals on to a stockpile. A reclaimer can be used to recover the material. Gold dredges in Alaska had a stacker that was a fixed part of the dredge. It carried over-size material to the tailings pile. Stackers are nominally rated for capacity in tonnes per hour (tph). They normally travel on a rail between stockpiles in the stockyard. A stacker can usually move in at least two directions: horizontally along the rail and vertically by luffing (raising and lowering) its boom. Luffing of the boom minimises dust by reducing the distance that material such as coal needs to fall to the top of the stockpile. The boom is luffed upwards as the height of the stockpile increases. Some stackers can rotate the boom. This allows a single stacker to form two stockpiles, one on either side of the conveyor. Stackers are used to stack in different patterns, such as cone stacking and chevron stacking. Stacking in a single cone tends to cause size segregation, with coarser material moving out towards the base. In raw cone ply stacking, additional cones are added next to the first cone. In chevron stacking, the stacker travels along the length of the stockpile adding layer upon layer of material. Stackers and reclaimers were originally manually controlled, with no means of remote control. Modern machines are typically semi-automatic or fully automated, with parameters remotely set. The control system used is typically a programmable logic controller, with a human-machine interface for display, connected to a central control system. Other than stacking, a stacker has three basic movements: Luffing: This is vertical movement. Stackers use either a winch mechanism with metal wire, or hydraulic cylinders, generally two. Winch mechanisms are highly reliable compared to hydraulic actuators and remain widely used, particularly in large stackers.
https://en.wikipedia.org/wiki/Thermogravimetric%20analysis
Thermogravimetric analysis or thermal gravimetric analysis (TGA) is a method of thermal analysis in which the mass of a sample is measured over time as the temperature changes. This measurement provides information about physical phenomena, such as phase transitions, absorption, adsorption and desorption, as well as chemical phenomena including chemisorption, thermal decomposition, and solid-gas reactions (e.g., oxidation or reduction).

Thermogravimetric analyzer

Thermogravimetric analysis (TGA) is conducted on an instrument referred to as a thermogravimetric analyzer. A thermogravimetric analyzer continuously measures mass while the temperature of a sample is changed over time. Mass, temperature, and time are considered base measurements in thermogravimetric analysis, while many additional measures may be derived from these three base measurements. A typical thermogravimetric analyzer consists of a precision balance with a sample pan located inside a furnace with programmable temperature control. The temperature is generally increased at a constant rate (or, for some applications, the temperature is controlled for a constant mass loss) to incur a thermal reaction. The thermal reaction may occur under a variety of atmospheres, including: ambient air, vacuum, inert gas, oxidizing/reducing gases, corrosive gases, carburizing gases, vapors of liquids or "self-generated atmosphere"; as well as a variety of pressures, including: a high vacuum, high pressure, constant pressure, or a controlled pressure. The thermogravimetric data collected from a thermal reaction is compiled into a plot of mass or percentage of initial mass on the y-axis versus either temperature or time on the x-axis. This plot, which is often smoothed, is referred to as a TGA curve. The first derivative of the TGA curve (the DTG curve) may be plotted to determine inflection points useful for in-depth interpretations as well as differential thermal analysis. A TGA can be used for materials character
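As a sketch of the last point: the DTG curve is just the first derivative of the mass signal, so with sampled data it can be approximated numerically. The data below is synthetic (a single sigmoidal decomposition step, invented for illustration).

```python
import numpy as np

# Synthetic TGA curve: a 10 mg sample losing 4 mg in one decomposition
# step centred at 300 C (illustrative numbers only).
temp = np.linspace(25, 600, 500)                       # degrees C
mass = 10.0 - 4.0 / (1.0 + np.exp(-(temp - 300) / 15))  # mg

dtg = np.gradient(mass, temp)                          # DTG: d(mass)/d(temp)
print(f"peak decomposition rate at {temp[np.argmin(dtg)]:.0f} C")  # ~300 C
```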
https://en.wikipedia.org/wiki/CQ%20%28call%29
CQ is a station code used by wireless operators, derived from long-established telegraphic practice on undersea cables and landlines. It is particularly used by those communicating in Morse code, but also by voice operators, to make a general call (called a CQ call). Transmitting the letters CQ on a particular radio frequency means that the transmission is a broadcast or "general call" to anyone listening, and when the operator sends "K" or says "go ahead" it is an invitation for any licensed radio station listening on that frequency to respond. Its use on radio matched the existing use on Morse landline telegraphy and dates from the earliest wireless stations. It was widely used in point-to-point diplomatic and press services, maritime, aviation, and police services until those services eliminated Morse radiotelegraphy. It is still widely used in amateur radio, which still has active use of Morse radiotelegraphy.

History and usage

The CQ station code was originally used by landline and undersea cable telegraphy operators in the United Kingdom. The oldest reference found to the station code CQ is contained in The Telegraphist (edited by W. Lynd, Volume 1, 1886), which states on p. 15 under "Alphabetical Codes and Abbreviations": "CQ All Stations". CQ was adopted by the Marconi Company in 1904 for use in wireless telegraphy by spark-gap transmitter, was adopted internationally at the 1912 London International Radiotelegraph Convention, and is still used. A variant of the CQ call, CQD, was the first code used as a distress signal. It was proposed by the Marconi Company and adopted in 1904, but was replaced between 1906 and 1908 by the SOS code. When the Titanic sank in 1912, it initially transmitted the distress call "CQD DE MGY" (with "MGY" being the ship's call sign). Titanic's radio operator subsequently alternated between SOS and CQD calls
https://en.wikipedia.org/wiki/Related%20rates
In differential calculus, related rates problems involve finding a rate at which a quantity changes by relating that quantity to other quantities whose rates of change are known. The rate of change is usually with respect to time. Because science and engineering often relate quantities to each other, the methods of related rates have broad applications in these fields. Differentiation with respect to time or one of the other variables requires application of the chain rule, since most problems involve several variables. Fundamentally, if a function F is defined such that F = f(x), then the derivative of F can be taken with respect to another variable. We assume x is a function of t, i.e. x = g(t). Then F = f(g(t)), so F'(t) = f'(g(t))·g'(t). Written in Leibniz notation, this is: dF/dt = (df/dx)·(dx/dt). Thus, if it is known how x changes with respect to t, then we can determine how F changes with respect to t, and vice versa. We can extend this application of the chain rule with the sum, difference, product and quotient rules of calculus, etc. For example, if F(x) = G(y) + H(z), then dF/dx = (dG/dy)·(dy/dx) + (dH/dz)·(dz/dx).

Procedure

The most common way to approach related rates problems is the following: Identify the known variables, including rates of change and the rate of change that is to be found. (Drawing a picture or representation of the problem can help to keep everything in order.) Construct an equation relating the quantities whose rates of change are known to the quantity whose rate of change is to be found. Differentiate both sides of the equation with respect to time (or other rate of change); often, the chain rule is employed at this step. Substitute the known rates of change and the known quantities into the equation. Solve for the wanted rate of change. Errors in this procedure are often caused by plugging in the known values for the variables before (rather than after) finding the derivative with respect to time. Doing so yields an incorrect result, since if those values are substituted for the variables before differentiation, those variables will beco
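A worked instance of this procedure, with invented numbers and sympy assumed as the CAS: air is pumped into a spherical balloon at 100 cm³/s; how fast is the radius growing when r = 25 cm?

```python
import sympy as sp

t = sp.symbols('t', positive=True)
r = sp.Function('r')                       # step 1: r(t) is the unknown quantity
V = sp.Rational(4, 3) * sp.pi * r(t)**3    # step 2: equation relating V and r

dVdt = sp.diff(V, t)                       # step 3: differentiate w.r.t. time
# dVdt is 4*pi*r(t)**2 * dr/dt -- the chain rule did the work here

drdt = sp.Symbol('drdt')
known = dVdt.subs(sp.Derivative(r(t), t), drdt).subs(r(t), 25)  # step 4: knowns
print(sp.solve(sp.Eq(known, 100), drdt)[0])                     # step 5: 1/(25*pi) cm/s
```

Note that r(t) is only replaced by 25 after differentiating, in line with the warning above about substituting values too early.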
https://en.wikipedia.org/wiki/Semantic%20grid
A semantic grid is an approach to grid computing in which information, computing resources and services are described using the semantic data model. In this model, the data and metadata are expressed through facts (small sentences), making them directly understandable for humans. This makes it easier for resources to be discovered and combined automatically to create virtual organizations (VOs). The descriptions constitute metadata and are typically represented using the technologies of the Semantic Web, such as the Resource Description Framework (RDF). Like the Semantic Web, the semantic grid can be defined as "an extension of the current grid in which information and services are given well-defined meaning, better enabling computers and people to work in cooperation." This notion of the semantic grid was first articulated in the context of e-Science, observing that such an approach is necessary to achieve a high degree of easy-to-use and seamless automation, enabling flexible collaborations and computations on a global scale. The use of semantic web and other knowledge technologies in grid applications is sometimes described as the knowledge grid. The semantic grid extends this by also applying these technologies within the grid middleware. Some semantic grid activities are coordinated through the Semantic Grid Research Group of the Global Grid Forum.

See also

Business Intelligence 2.0
LSID
Semantic Web Rule Language
Semantic Grid System (a CSS grid framework)

External links

ONTOGRID: EU-funded research project for enabling semantic grid applications
Semantic Grid Dagstuhl Seminar
A semantic grid oriented to e-tourism
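A tiny sketch of what "describing resources as facts" looks like in practice, using the rdflib library (an assumption) and an invented vocabulary; nothing here is a real semantic grid ontology.

```python
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/grid#")        # hypothetical vocabulary
g = Graph()
node = URIRef("http://example.org/grid/cluster42")

g.add((node, EX.hasCores, Literal(512)))          # fact: cluster42 has 512 cores
g.add((node, EX.providesService, EX.Rendering))   # fact: it provides rendering

# Discovery: match resources by their described capabilities.
for subject, _, _ in g.triples((None, EX.providesService, EX.Rendering)):
    print(subject)
```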
https://en.wikipedia.org/wiki/History%20of%20radar
The history of radar (where radar stands for radio detection and ranging) started with experiments by Heinrich Hertz in the late 19th century that showed that radio waves were reflected by metallic objects. This possibility was suggested in James Clerk Maxwell's seminal work on electromagnetism. However, it was not until the early 20th century that systems able to use these principles became widely available, and it was German inventor Christian Hülsmeyer who first used them to build a simple ship detection device intended to help avoid collisions in fog (Reichspatent Nr. 165546). True radar systems, such as the British Chain Home early warning system, which provided directional information about objects over short ranges, were developed over the next two decades. The development of systems able to produce short pulses of radio energy was the key advance that allowed modern radar systems to come into existence. By timing the pulses on an oscilloscope, the range could be determined, and the direction of the antenna revealed the angular location of the targets. The two, combined, produced a "fix", locating the target relative to the antenna. In the 1934–1939 period, eight nations developed independently, and in great secrecy, systems of this type: the United Kingdom, Germany, the United States, the USSR, Japan, the Netherlands, France, and Italy. In addition, Britain shared its information with the United States and four Commonwealth countries: Australia, Canada, New Zealand, and South Africa, and these countries also developed their own radar systems. During the war, Hungary was added to this list. The term RADAR was coined in 1939 by the United States Signal Corps as it worked on these systems for the Navy. Progress during the war was rapid and of great importance, probably one of the decisive factors for the victory of the Allies. A key development was the magnetron in the UK, which allowed the creation of relatively small systems with sub-meter resolution. By the end of
https://en.wikipedia.org/wiki/The%20Story%20of%20Mel
The Story of Mel is an archetypical piece of computer programming folklore. Its subject, Melvin Kaye, is an exemplary "Real Programmer" whose subtle techniques fascinate his colleagues.

Story

Ed Nather's The Story of Mel details the extraordinary programming prowess of a former colleague of his, "Mel", at the Royal McBee Computer Corporation. Although originally written in prose, Nather's story was modified by someone into a "free verse" form which has become widespread. Little is known about Mel Kaye, beyond the fact that he was credited with doing the "bulk of the programming" on the 1959 ACT-1 compiler for the Royal McBee LGP-30 computer. In Nather's story, Kaye is portrayed as being prone to avoiding optimizing assemblers in favor of crafting code to take advantage of hardware quirks, for example exploiting the rotation of the LGP-30's drum memory to avoid writing delay loops into the code. The story, as written by Nather, involved Kaye's work on rewriting a blackjack program from the LGP-30 to a newer Royal McBee system, the RPC-4000; company sales executives had requested that the program be modified so that they could flip a front panel switch and cause the program to lose (and the user to win). Kaye reluctantly acceded to the request but, to his own delight, got the test backwards, so the switch would instead cause the program to win every time (and the user to lose). Subsequent to Kaye's departure, Nather was asked to fix the bug. While examining the code, he was puzzled to discover that it contained what appeared to be an infinite loop, yet control did not remain inside the loop. Eventually he realized that Kaye was using self-modifying code to process elements of an array, and had coded the loop in such a way as to take advantage of an integer overflow. Adding 1 to the address field of an instruction that referred to address x normally just changed the address to x+1. But when x was already the highest possible address, not only did the address wrap ar
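In modern terms the wraparound trick reads like the sketch below, which assumes a hypothetical word layout (not the RPC-4000's actual instruction format, and invented opcodes): with the opcode field sitting above the address field, incrementing an instruction whose address bits are already all ones carries into the opcode.

```python
# Hypothetical 16-bit instruction word: 4-bit opcode | 12-bit address.
ADDR_BITS = 12
ADDR_MASK = (1 << ADDR_BITS) - 1

LOAD, JUMP = 0b0001, 0b0010          # invented opcodes for illustration

def decode(word: int):
    return word >> ADDR_BITS, word & ADDR_MASK

insn = (LOAD << ADDR_BITS) | ADDR_MASK  # "load from the highest address"
insn += 1                               # the loop's self-modifying increment
print(decode(insn))                     # -> (2, 0): the LOAD became a JUMP
```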
https://en.wikipedia.org/wiki/500%20kHz
From early in the 20th century, the radio frequency of 500 kilohertz (500 kHz) was an international calling and distress frequency for Morse code maritime communication. For much of its early history, this frequency was referred to by its equivalent wavelength, 600 meters, or, using the earlier frequency unit name, 500 kilocycles (per second) or 500 kc. Maritime authorities of many nations, including the Maritime and Coastguard Agency and the United States Coast Guard, once maintained 24-hour watches on this frequency, staffed by skilled radio operators. Many SOS calls and medical emergencies at sea were handled via this frequency. However, as the use of Morse code over radio is now obsolete in commercial shipping, 500 kHz is obsolete as a Morse distress frequency. Beginning in the late 1990s, most nations ended monitoring of transmissions on 500 kHz, and emergency traffic on 500 kHz has been replaced by the Global Maritime Distress Safety System (GMDSS).

Current status

The 500 kHz frequency has now been allocated to the maritime Navigational Data (NAVDAT) broadcast system. The nearby frequencies of 518 kHz and 490 kHz are used for the NAVTEX component of GMDSS. Proposals to allocate frequencies at or near 500 kHz to amateur radio use resulted in the international allocation of 472–479 kHz to the 630-meter amateur radio band, now implemented in many countries.

Initial adoption

International standards for the use of 500 kHz first appeared in the first International Radiotelegraph Convention in Berlin, which was signed 3 November 1906 and became effective 1 July 1908. The second service regulation affixed to this Convention designated 500 kHz as one of the standard frequencies to be employed by shore stations, specifying that "Two wave lengths, one of 300 meters [1 MHz] and the other of 600 meters, are authorized for general public service. Every coastal station opened to such service shall use one or the other of these two wave lengths." These regulations
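The wavelength/frequency equivalences quoted above follow from λ = c/f; with the rounded c = 3×10⁸ m/s used in the era's conversions, 600 m corresponds exactly to 500 kHz and 300 m to 1 MHz. A quick sketch:

```python
C = 3.0e8                    # m/s, the rounded value used historically
for metres in (600, 300):
    print(f"{metres} m  ->  {C / metres / 1e3:.0f} kHz")
# 600 m -> 500 kHz, 300 m -> 1000 kHz
```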
https://en.wikipedia.org/wiki/Alexanderson%20alternator
An Alexanderson alternator is a rotating machine invented by Ernst Alexanderson in 1904 for the generation of high-frequency alternating current for use as a radio transmitter. It was one of the first devices capable of generating the continuous radio waves needed for transmission of amplitude modulated signals by radio. It was used from about 1910 in a few "superpower" longwave radiotelegraphy stations to transmit transoceanic message traffic by Morse code to similar stations all over the world. Although superseded in the early 1920s by the development of vacuum-tube transmitters, the Alexanderson alternator continued to be used until World War II. It is on the list of IEEE Milestones as a key achievement in electrical engineering.

History

Prior developments

After radio waves were discovered in 1887, the first generation of radio transmitters, the spark gap transmitters, produced strings of damped waves: pulses of radio waves which died out to zero quickly. By the 1890s it was realized that damped waves had disadvantages; their energy was spread over a broad frequency band, so transmitters on different frequencies interfered with each other, and they could not be modulated with an audio signal to transmit sound. Efforts were made to invent transmitters that would produce continuous waves: a sinusoidal alternating current at a single frequency. In an 1891 lecture, Frederick Thomas Trouton pointed out that, if an electrical alternator were run at a great enough cycle speed (that is, if it turned fast enough and was built with a large enough number of magnetic poles on its armature), it would generate continuous waves at radio frequency. Starting with Elihu Thomson in 1889, a series of researchers built high-frequency alternators: Nikola Tesla (1891, 15 kHz), Salomons and Pyke (1891, 9 kHz), Parsons and Ewing (1892, 14 kHz), Siemens (5 kHz), B. G. Lamme (1902, 10 kHz); but none was able to reach the frequencies required for radio transmission, above 20
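Trouton's point can be made quantitative with the standard alternator formula f = poles × rpm / 120. The numbers below are illustrative only, not the specifications of any machine named above; they show why radio frequencies demand extreme pole counts and shaft speeds.

```python
# Electrical output frequency of an alternator (standard machine formula).
def alternator_hz(poles: int, rpm: float) -> float:
    return poles * rpm / 120

print(alternator_hz(8, 3600))      # 240 Hz: ordinary power-plant territory
print(alternator_hz(600, 2000))    # 10_000 Hz: audio-frequency experiments
print(alternator_hz(600, 20000))   # 100_000 Hz: the longwave radio range
```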
https://en.wikipedia.org/wiki/Hilbert%27s%20syzygy%20theorem
In mathematics, Hilbert's syzygy theorem is one of the three fundamental theorems about polynomial rings over fields, first proved by David Hilbert in 1890, which were introduced for solving important open questions in invariant theory, and are at the basis of modern algebraic geometry. The two other theorems are Hilbert's basis theorem, which asserts that all ideals of polynomial rings over a field are finitely generated, and Hilbert's Nullstellensatz, which establishes a bijective correspondence between affine algebraic varieties and prime ideals of polynomial rings. Hilbert's syzygy theorem concerns the relations, or syzygies in Hilbert's terminology, between the generators of an ideal, or, more generally, a module. As the relations form a module, one may consider the relations between the relations; the theorem asserts that, if one continues in this way, starting with a module over a polynomial ring in n indeterminates over a field, one eventually finds a zero module of relations, after at most n steps. Hilbert's syzygy theorem is now considered to be an early result of homological algebra. It is the starting point of the use of homological methods in commutative algebra and algebraic geometry.

History

The syzygy theorem first appeared in Hilbert's seminal paper "Über die Theorie der algebraischen Formen" (1890). The paper is split into five parts: part I proves Hilbert's basis theorem over a field, while part II proves it over the integers. Part III contains the syzygy theorem (Theorem III), which is used in part IV to discuss the Hilbert polynomial. The last part, part V, proves finite generation of certain rings of invariants. Incidentally, part III also contains a special case of the Hilbert–Burch theorem.

Syzygies (relations)

Originally, Hilbert defined syzygies for ideals in polynomial rings, but the concept generalizes trivially to (left) modules over any ring. Given a generating set g1, ..., gk of a module M over a ring R, a relation or first syzygy between the gen
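In homological language (a standard restatement under the usual definitions, not recovered from the truncated text), the theorem says every finitely generated module M over k[x1, ..., xn] has a free resolution of length at most n:

```latex
\[
  0 \longrightarrow F_n \longrightarrow \cdots \longrightarrow F_1
    \longrightarrow F_0 \longrightarrow M \longrightarrow 0
\]
% with every F_i free: after taking modules of relations at most n times,
% one reaches a free module, so the next module of relations can be taken
% to be zero.
```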
https://en.wikipedia.org/wiki/Brother%20Industries
Brother Industries, Ltd. is a Japanese multinational electronics and electrical equipment company headquartered in Nagoya, Japan. Its products include printers, multifunction printers, desktop computers, consumer and industrial sewing machines, large machine tools, label printers, typewriters, fax machines, and other computer-related electronics. Brother distributes its products both under its own name and under OEM agreements with other companies.

History

Brother's history began in 1908, when the company was founded as Yasui Sewing Machine Co. in Nagoya, Japan. In 1955, Brother International Corporation (US) was established as its first overseas sales affiliate. In 1958 a European regional sales company was established in Dublin. The corporate name was changed to Brother Industries, Ltd. in 1962. Brother entered the printer market during its long association with Centronics. In 1968 the company moved its UK headquarters to Audenshaw, Manchester, after acquiring the Jones Sewing Machine Company, a long-established British sewing machine maker. In March 2005, "Brother Communication Space" (now the Brother Museum), a corporate museum that also serves as a public relations facility, opened in Nagoya. In December 2011, Brother diversified its offerings by acquiring Nefsis, a developer of web-based remote collaboration and conferencing software. In November 2012, Brother announced that it had built the last UK-made typewriter at its north Wales factory. It had made 5.9 million typewriters in its Wrexham factory since it opened in 1985. Brother donated the last machine to London's Science Museum. As of 31 March 2020, Brother's annual sales revenue had reached 637,259 million yen (US$6,044,666,710 at October 2020 exchange rates).

Sewing and embroidery machines

In 2010, the sewing divisions of Brother Industries around Europe were consolidated into one larger company called "Brother Sewing Machines Europe GmbH". With a turnover in excess of €80 million, it is the 4th largest company under
https://en.wikipedia.org/wiki/Resistance%20thermometer
Resistance thermometers, also called resistance temperature detectors (RTDs), are sensors used to measure temperature. Many RTD elements consist of a length of fine wire wrapped around a heat-resistant ceramic or glass core, but other constructions are also used. The RTD wire is a pure material, typically platinum (Pt), nickel (Ni), or copper (Cu). The material has an accurate resistance/temperature relationship which is used to provide an indication of temperature. As RTD elements are fragile, they are often housed in protective probes. RTDs, which have higher accuracy and repeatability, are slowly replacing thermocouples in industrial applications below 600 °C.

Resistance/temperature relationship of metals

Common RTD sensing elements for biomedical application, constructed of platinum (Pt), nickel (Ni), or copper (Cu), have a repeatable resistance-versus-temperature relationship (R vs T) and operating temperature range. The R vs T relationship is defined as the amount of resistance change of the sensor per degree of temperature change. The relative change in resistance (temperature coefficient of resistance) varies only slightly over the useful range of the sensor. Platinum was proposed by Sir William Siemens as an element for a resistance temperature detector at the Bakerian lecture in 1871: it is a noble metal and has the most stable resistance–temperature relationship over the largest temperature range. Nickel elements have a limited temperature range because the amount of change in resistance per degree of change in temperature becomes very non-linear at temperatures over 300 °C (572 °F). Copper has a very linear resistance–temperature relationship; however, copper oxidizes at moderate temperatures and cannot be used over 150 °C (302 °F). The significant characteristic of metals used as resistive elements is the linear approximation of the resistance versus temperature relationship between 0 and 100 °C. This temperature coefficient of resistance is denote
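A minimal sketch of that linear approximation for a Pt100 element, assuming the standard values R0 = 100 Ω at 0 °C and α ≈ 0.00385 Ω/(Ω·°C) from IEC 60751 (both assumed here, not taken from this article):

```python
R0, ALPHA = 100.0, 0.00385   # Pt100 nominal resistance and coefficient

def resistance(temp_c: float) -> float:
    """R(T) = R0 * (1 + alpha*T), valid roughly between 0 and 100 C."""
    return R0 * (1 + ALPHA * temp_c)

def temperature(r_ohm: float) -> float:
    """Invert the linear approximation to read temperature from resistance."""
    return (r_ohm / R0 - 1) / ALPHA

print(resistance(100.0))     # 138.5 ohm at 100 C
print(temperature(119.25))   # ~50.0 C
```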
https://en.wikipedia.org/wiki/Peak%20signal-to-noise%20ratio
Peak signal-to-noise ratio (PSNR) is an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed as a logarithmic quantity using the decibel scale. PSNR is commonly used to quantify reconstruction quality for images and video subject to lossy compression.

Definition

PSNR is most easily defined via the mean squared error (MSE). Given a noise-free m×n monochrome image I and its noisy approximation K, MSE is defined as MSE = (1/(m·n)) · Σᵢ Σⱼ [I(i,j) − K(i,j)]². The PSNR (in dB) is then defined as PSNR = 10·log₁₀(MAX_I² / MSE) = 20·log₁₀(MAX_I) − 10·log₁₀(MSE). Here, MAX_I is the maximum possible pixel value of the image. When the pixels are represented using 8 bits per sample, this is 255. More generally, when samples are represented using linear PCM with B bits per sample, MAX_I is 2^B − 1.

Application in color images

For color images with three RGB values per pixel, the definition of PSNR is the same except that the MSE is the sum over all squared value differences (now for each color, i.e. three times as many differences as in a monochrome image) divided by image size and by three. Alternately, for color images the image is converted to a different color space and PSNR is reported against each channel of that color space, e.g., YCbCr or HSL.

Quality estimation with PSNR

PSNR is most commonly used to measure the quality of reconstruction of lossy compression codecs (e.g., for image compression). The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression codecs, PSNR is an approximation to human perception of reconstruction quality. Typical values for the PSNR in lossy image and video compression are between 30 and 50 dB, provided the bit depth is 8 bits, where higher is better. The processing quality of 12-bit images is considered high when the PSNR value is 60 dB or higher. For 16-bit data typical values for the PSNR ar
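A minimal numpy sketch of the definition above, assuming 8-bit samples (so MAX_I = 255) and synthetic test data:

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_i: float = 255.0) -> float:
    """PSNR in dB between a reference image and its distorted approximation."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')    # identical images: PSNR is unbounded
    return 10 * np.log10(max_i ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255).astype(np.uint8)
print(f"{psnr(img, noisy):.1f} dB")   # ~34 dB for sigma = 5 Gaussian noise
```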
https://en.wikipedia.org/wiki/Underpinning
In construction or renovation, underpinning is the process of strengthening the foundation of an existing building or other structure. Underpinning may be necessary for a variety of reasons:

The original foundation is not strong or stable enough.
The usage of the structure has changed.
The properties of the soil supporting the foundation may have changed (possibly through subsidence) or were mischaracterized during design.
The construction of nearby structures necessitates the excavation of soil supporting existing foundations.
It is necessary to increase the depth or load capacity of existing foundations to support the addition of another storey to the building (above or below grade).
It is more economical, due to land price or otherwise, to work on the present structure's foundation than to build a new one.
Earthquake, flood, drought or other natural causes have caused the structure to move, requiring stabilisation of foundation soils and/or footings.

Underpinning may be accomplished by extending the foundation in depth or breadth so it either rests on a more supportive soil stratum or distributes its load across a greater area. Use of micropiles and jet grouting are common methods in underpinning. Underpinning may be necessary where P class (problem) soils in certain areas of the site are encountered. Through semantic change, the word underpinning has evolved to encompass all abstract concepts that serve as a foundation.

Mass concrete underpinning

Mass concrete underpinning is one of the simplest forms of remedial underpinning at shallow depths. This type of underpinning is done by excavating "bays" along and under the existing foundation and filling them with mass concrete. It is sometimes called a "traditional" method to distinguish it from other types of underpinning like piling and needling. The latter often require underpinning specialists and may use proprietary underpinning systems. Mass concrete underpinning work is performed in compliance with the Party Wall
https://en.wikipedia.org/wiki/Orthogonal%20instruction%20set
In computer engineering, an orthogonal instruction set is an instruction set architecture where all instruction types can use all addressing modes. It is "orthogonal" in the sense that the instruction type and the addressing mode vary independently. An orthogonal instruction set does not impose a limitation that requires a certain instruction to use a specific register, so there is little overlap of instruction functionality. Orthogonality was considered a major goal for processor designers in the 1970s, and the VAX-11 is often used as the benchmark for this concept. However, the introduction of RISC design philosophies in the 1980s significantly reversed the trend toward greater orthogonality. Modern CPUs often simulate orthogonality in a preprocessing step before performing the actual tasks in a RISC-like core. This "simulated orthogonality" is a broader concept, encompassing the notions of decoupling and completeness in function libraries, as in the mathematical concept: an orthogonal set of functions is easy to use as a basis for expanding other functions, and changing one part does not affect the others.

Basic concepts

At their core, all general-purpose computers work in the same underlying fashion; data stored in a main memory is read by the central processing unit (CPU) into a fast temporary memory (e.g. CPU registers), acted on, and then written back to main memory. Memory consists of a collection of data values, encoded as numbers and referred to by their addresses, also a numerical value. This means the same operations applied to the data can be applied to the addresses themselves. While being worked on, data can be temporarily held in processor registers, scratchpad values that can be accessed very quickly. Registers are used, for example, when adding up strings of numbers into a total.

Single instruction, single operand

In early computers, the instruction set architecture (ISA) often used a single register, in which case it was kn
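The independence of the two fields is the whole idea, and it is easy to sketch. The encoding below is invented for illustration (it is not any real ISA): the operation and the addressing mode occupy separate bit fields that never constrain each other, so every (op, mode) pair is a legal instruction.

```python
OPS   = {"add": 0, "sub": 1, "cmp": 2}                       # invented opcodes
MODES = {"register": 0, "immediate": 1, "memory": 2, "indexed": 3}

def encode(op: str, mode: str, operand: int) -> int:
    # 4-bit opcode | 2-bit mode | 10-bit operand: the fields are independent,
    # which is exactly what "orthogonal" means here.
    return (OPS[op] << 12) | (MODES[mode] << 10) | (operand & 0x3FF)

# Every combination of operation and addressing mode is valid:
words = [encode(o, m, 42) for o in OPS for m in MODES]
print(len(words))   # 12 = 3 ops x 4 modes
```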
https://en.wikipedia.org/wiki/Jarzynski%20equality
The Jarzynski equality (JE) is an equation in statistical mechanics that relates free energy differences between two states and the irreversible work along an ensemble of trajectories joining the same states. It is named after the physicist Christopher Jarzynski (then at the University of Washington and Los Alamos National Laboratory, currently at the University of Maryland) who derived it in 1996. Fundamentally, the Jarzynski equality points to the fact that the fluctuations in the work satisfy certain constraints separately from the average value of the work that occurs in some process.

Overview

In thermodynamics, the free energy difference ΔF = F_B − F_A between two states A and B is connected to the work W done on the system through the inequality ΔF ≤ W, with equality holding only in the case of a quasistatic process, i.e. when one takes the system from A to B infinitely slowly (such that all intermediate states are in thermodynamic equilibrium). In contrast to the thermodynamic statement above, the JE remains valid no matter how fast the process happens. The JE states: exp(−ΔF/kT) = ⟨exp(−W/kT)⟩. Here k is the Boltzmann constant and T is the temperature of the system in the equilibrium state A or, equivalently, the temperature of the heat reservoir with which the system was thermalized before the process took place. The angle brackets indicate an average over all possible realizations of an external process that takes the system from the equilibrium state A to a new, generally nonequilibrium state under the same external conditions as that of the equilibrium state B. This average over possible realizations is an average over different possible fluctuations that could occur during the process (due to Brownian motion, for example), each of which will cause a slightly different value for the work done on the system. In the limit of an infinitely slow process, the work W performed on the system in each realization is numerically the same, so the average becomes irrelevant and the Jarzynski equality re
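A toy numerical check, not a physical simulation: assume (an assumption for the sketch) a Gaussian work distribution, for which the equality gives ΔF = ⟨W⟩ − σ²/(2kT) exactly. Sampling confirms that the exponential average recovers ΔF from the fluctuations, while the mean work alone overestimates it, as the inequality requires.

```python
import numpy as np

kT = 1.0
mean_W, sigma = 2.0, 1.0                    # illustrative numbers
dF_exact = mean_W - sigma**2 / (2 * kT)     # Gaussian-case result: 1.5

rng = np.random.default_rng(1)
W = rng.normal(mean_W, sigma, size=1_000_000)     # an "ensemble" of work values
dF_estimate = -kT * np.log(np.mean(np.exp(-W / kT)))

print(dF_exact, dF_estimate)   # both ~1.5, while <W> = 2.0 > dF
```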
https://en.wikipedia.org/wiki/F%20%28programming%20language%29
F is a modular, compiled, numeric programming language, designed for scientific programming and scientific computation. F was developed as a modern Fortran, making it a subset of Fortran 95. It combines both numerical and data abstraction features from these languages. F is also backwards compatible with Fortran 77, allowing calls to Fortran 77 programs. F was implemented on top of compilers from NAG, Fujitsu, Salford Software and Absoft. It was later included in the g95 compiler.

Overview

F is designed to be a minimal subset of Fortran, with only about one hundred intrinsic procedures. Language keywords and intrinsic function names are reserved keywords in F, and no other names may take this exact form. F uses the same character set as Fortran 90/95, with a line-length limit of 132 characters. Reserved words are always written in lowercase. Any uppercase letter may appear in a character constant. Variable names are not restricted in case and can include upper- and lowercase characters.

Operators

F supports many of the standard operators used in Fortran. The operators supported by F are:

Arithmetic operators: +, -, *, /, **
Relational operators: <, <=, ==, /=, >, >=
Logical operators: .not., .and., .or., .eqv., .neqv.
Character concatenation: //

The assignment operator is denoted by the equal sign =. In addition, pointer assignment is denoted by =>. Comments are denoted by the ! symbol:

variable = expression ! assignment
pointer => target ! pointer assignment

Data types

Similar to Fortran, a type specification is made up of a type, a list of attributes for the declared variables, and the variable list. F provides the same types as Fortran, except that double precision floating point variables must be declared as real with a kind parameter:

! type [,attribute list] :: entity declaration list
real :: x, y ! declaring variables of type real x, y without an attribute list
integer (kind = long), dimension (100) :: x ! declaring variable of type big
https://en.wikipedia.org/wiki/Orthant
In geometry, an orthant or hyperoctant is the analogue in n-dimensional Euclidean space of a quadrant in the plane or an octant in three dimensions. In general an orthant in n dimensions can be considered the intersection of n mutually orthogonal half-spaces. By independent selections of half-space signs, there are 2^n orthants in n-dimensional space. More specifically, a closed orthant in R^n is a subset defined by constraining each Cartesian coordinate to be nonnegative or nonpositive. Such a subset is defined by a system of inequalities: ε1x1 ≥ 0, ε2x2 ≥ 0, ..., εnxn ≥ 0, where each εi is +1 or −1. Similarly, an open orthant in R^n is a subset defined by a system of strict inequalities ε1x1 > 0, ε2x2 > 0, ..., εnxn > 0, where each εi is +1 or −1.

By dimension:

In one dimension, an orthant is a ray.
In two dimensions, an orthant is a quadrant.
In three dimensions, an orthant is an octant.

John Conway defined the term n-orthoplex from orthant complex as a regular polytope in n dimensions with 2^n simplex facets, one per orthant. The nonnegative orthant is the generalization of the first quadrant to n dimensions and is important in many constrained optimization problems.

See also

Cross polytope (or orthoplex) – a family of regular polytopes in n dimensions which can be constructed with one simplex facet in each orthant.
Measure polytope (or hypercube) – a family of regular polytopes in n dimensions which can be constructed with one vertex in each orthant.
Orthotope – generalization of a rectangle in n dimensions, with one vertex in each orthant.

Further reading

Gorini, Catherine A. (2003). The Facts On File Geometry Handbook, p. 113.
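A small sketch of the correspondence between the 2^n orthants and sign vectors (ε1, ..., εn) in {+1, −1}^n:

```python
from itertools import product

def orthant_of(point):
    """Sign vector of the open orthant containing a point with no zero coordinate."""
    return tuple(1 if x > 0 else -1 for x in point)

n = 3
print(len(list(product((1, -1), repeat=n))))  # 8 = 2**3 octants of R^3
print(orthant_of((2.5, -1.0, 0.7)))           # (1, -1, 1)
```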
https://en.wikipedia.org/wiki/Wireless%20WAN
Wireless wide area network (WWAN) is a form of wireless network. The larger size of a wide area network compared to a local area network requires differences in technology. Wireless networks of different sizes deliver data in the form of telephone calls, web pages, and video streaming. A WWAN often differs from a wireless local area network (WLAN) by using mobile telecommunication cellular network technologies such as 2G, 3G, 4G LTE, and 5G to transfer data. It is sometimes referred to as mobile broadband. These technologies are offered regionally, nationwide, or even globally and are provided by a wireless service provider. WWAN connectivity allows a user with a laptop and a WWAN card to surf the web, check email, or connect to a virtual private network (VPN) from anywhere within the regional boundaries of cellular service. Various computers can have integrated WWAN capabilities. A WWAN may also be a closed network that covers a large geographic area. For example, a mesh network or MANET with nodes on buildings, towers, trucks, and planes could also be considered a WWAN. A WWAN may also be a low-power, low-bit-rate wireless WAN (LPWAN), intended to carry small packets of information between things, often in the form of battery-operated sensors. Since radio communications systems do not provide a physically secure connection path, WWANs typically incorporate encryption and authentication methods to make them more secure. Some of the early GSM encryption techniques were flawed, and security experts have issued warnings that cellular communication, including WWAN, is no longer secure. UMTS (3G) encryption was developed later and has yet to be broken.

See also

Private Shared Wireless Network
Wide area network
Wireless LAN
Wi-Fi
Satellite Internet access
https://en.wikipedia.org/wiki/Elementary%20class
In model theory, a branch of mathematical logic, an elementary class (or axiomatizable class) is a class consisting of all structures satisfying a fixed first-order theory.

Definition

A class K of structures of a signature σ is called an elementary class if there is a first-order theory T of signature σ such that K consists of all models of T, i.e., of all σ-structures that satisfy T. If T can be chosen as a theory consisting of a single first-order sentence, then K is called a basic elementary class. More generally, K is a pseudo-elementary class if there is a first-order theory T of a signature that extends σ, such that K consists of all σ-structures that are reducts to σ of models of T. In other words, a class K of σ-structures is pseudo-elementary if and only if there is an elementary class K' such that K consists of precisely the reducts to σ of the structures in K'. For obvious reasons, elementary classes are also called axiomatizable in first-order logic, and basic elementary classes are called finitely axiomatizable in first-order logic. These definitions extend to other logics in the obvious way, but since the first-order case is by far the most important, axiomatizable implicitly refers to this case when no other logic is specified.

Conflicting and alternative terminology

While the above is nowadays standard terminology in "infinite" model theory, the slightly different earlier definitions are still in use in finite model theory, where an elementary class may be called a Δ-elementary class, and the terms elementary class and first-order axiomatizable class are reserved for basic elementary classes (Ebbinghaus et al. 1994, Ebbinghaus and Flum 2005). Hodges calls elementary classes axiomatizable classes, and he refers to basic elementary classes as definable classes. He also uses the respective synonyms EC_Δ class and EC class (Hodges, 1993). There are good reasons for this diverging terminology. The signatures that are considered in general model theo
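A standard illustration of the distinction (not from this excerpt): the class of infinite structures over the empty signature is elementary but not basic elementary.

```latex
% It is axiomatized by infinitely many sentences saying "there are at
% least n distinct elements":
\[
  T \;=\; \{\lambda_n : n \ge 2\}, \qquad
  \lambda_n \;=\; \exists x_1 \cdots \exists x_n
      \bigwedge_{1 \le i < j \le n} x_i \ne x_j .
\]
% Every model of T is infinite, but by compactness no single first-order
% sentence has exactly the infinite structures as models, so this
% elementary class is not basic elementary.
```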
https://en.wikipedia.org/wiki/Dun%20%26%20Bradstreet
The Dun & Bradstreet Corporation is an American company that provides commercial data, analytics, and insights for businesses. Headquartered in Jacksonville, Florida, the company offers a wide range of products and services for risk and financial analysis, operations and supply, and sales and marketing professionals, as well as research and insights on global business issues. It serves customers in government and in industries such as communications, technology, strategic financial services, retail, telecommunications, and manufacturing. Often referred to as D&B, the company maintains a database of over 500 million business records worldwide.

History

1800s

Dun & Bradstreet traces its history to July 20, 1841, with the formation of The Mercantile Agency in New York City by Lewis Tappan. Recognizing the need for a centralized credit reporting system, Tappan formed the company to create a network of correspondents who would provide reliable, objective credit information to subscribers. As an advocate for civil rights, Tappan used his abolitionist connections to expand and update the company's credit information. Despite accusations of invasion of personal privacy, by 1844 the Mercantile Agency had over 280 clients. The agency continued to expand, opening offices in Boston, Philadelphia, and Baltimore. By 1849, Tappan had retired, allowing Benjamin Douglass to take over the booming business. In 1859, Douglass transferred the company to Robert Graham Dun, who immediately changed the firm's name to R. G. Dun & Company. Over the next 40 years, Dun continued to expand the business across international boundaries.

1900s

In March 1933, R. G. Dun & Company merged with competitor the Bradstreet Company to form today's Dun & Bradstreet. The merger was engineered by Dun's CEO, Arthur Whiteside. Whiteside's successor, J. Wilson Newman, worked to extend the company's range of products and services, expanding the company dramatically during the 1960s by engineering ways to apply new technologies
https://en.wikipedia.org/wiki/Hemagglutination%20assay
The hemagglutination assay or haemagglutination assay (HA) and the hemagglutination inhibition assay (HI or HAI) were developed in 1941–42 by American virologist George Hirst as methods for quantifying the relative concentration of viruses, bacteria, or antibodies. HA and HAI apply the process of hemagglutination, in which sialic acid receptors on the surface of red blood cells (RBCs) bind to the hemagglutinin glycoprotein found on the surface of influenza virus (and several other viruses) and create a network, or lattice structure, of interconnected RBCs and virus particles. The agglutinated lattice maintains the RBCs in a suspended distribution, typically viewed as a diffuse reddish solution. The formation of the lattice depends on the concentrations of the virus and RBCs; when the relative virus concentration is too low, the RBCs are not constrained by the lattice and settle to the bottom of the well. Hemagglutination is also observed in the presence of staphylococci, vibrios, and other bacterial species, through a mechanism similar to the one viruses use to cause agglutination of erythrocytes. The RBCs used in HA and HI assays are typically from chickens, turkeys, horses, guinea pigs, or humans, depending on the selectivity of the targeted virus or bacterium and the associated surface receptors on the RBC.

Procedure

A general procedure for HA is as follows: a serial dilution of virus is prepared across the rows of a U- or V-bottom 96-well microtiter plate. The most concentrated sample in the first well is often diluted to 1/5 of the stock, and subsequent wells are typically two-fold dilutions (1/10, 1/20, 1/40, etc.). The final well serves as a negative control with no virus. Each row of the plate typically has a different virus and the same pattern of dilutions. After serial dilution, a standardized concentration of RBCs is added to each well and mixed gently. The plate is incubated for 30 minutes at room temperature. Following the incubation period, the assay can
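A sketch of reading out the result, under the common convention (an assumption, since the excerpt is cut off) that the HA titre is the reciprocal of the last dilution that still shows full agglutination:

```python
# One row of the plate: dilution factors and whether each well agglutinated.
dilutions = [5, 10, 20, 40, 80, 160, 320, 640]
agglutinated = [True, True, True, True, True, False, False, False]

def ha_titre(dilutions, agglutinated):
    """Reciprocal of the last dilution showing agglutination (None if no well does)."""
    last = None
    for d, agg in zip(dilutions, agglutinated):
        if agg:
            last = d
    return last

print(ha_titre(dilutions, agglutinated))   # 80, reported as a titre of 1:80
```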
https://en.wikipedia.org/wiki/Esterel
Esterel is a synchronous programming language for the development of complex reactive systems. The imperative programming style of Esterel allows the simple expression of parallelism and preemption. As a consequence, it is well suited for control-dominated model designs. The development of the language started in the early 1980s and was mainly carried out by a team of the Ecole des Mines de Paris and INRIA led by Gérard Berry in France. Current compilers take Esterel programs and generate C code or hardware (RTL) implementations (VHDL or Verilog). The language is still under development, with several compilers available. The commercial version of Esterel is the development environment Esterel Studio. The company commercializing it (Synfora) initiated a standardization process with the IEEE in April 2007; however, the working group (P1778) dissolved in March 2011. The reference manual is publicly available.

The Multiform Notion of Time

The notion of time used in Esterel differs from that of non-synchronous languages in the following way: the notion of physical time is replaced with the notion of order. Only the simultaneity and precedence of events are considered. This means that physical time does not play any special role. This is called the multiform notion of time. An Esterel program describes a totally ordered sequence of logical instants. At each instant, an arbitrary number of events occur (including zero). Event occurrences that happen at the same logical instant are considered simultaneous. Other events are ordered according to the instants at which they occur. There are two types of statements: those that take zero time (execute and terminate in the same instant) and those that delay for a prescribed number of cycles.

Signals

Signals are the only means of communication. There are valued and non-valued signals. They are further categorized as being input, output, or local signals. A signal has the property of being either present or absent in an instant. Valued signals also contai
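A toy Python sketch (not Esterel itself) of the synchronous model described above: execution is a sequence of logical instants, each signal is simply present or absent at each instant, and simultaneity within an instant replaces physical time.

```python
def reactive_program(instants):
    """Emit O at every logical instant in which both A and B are present."""
    for i, present_signals in enumerate(instants):
        if "A" in present_signals and "B" in present_signals:
            print(f"instant {i}: emit O")

# Each set is the environment's input signals for one logical instant.
reactive_program([{"A"}, {"A", "B"}, set(), {"B"}, {"A", "B"}])
# -> emit O at instants 1 and 4
```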
https://en.wikipedia.org/wiki/De-escalation
De-escalation is a human behavior that is intended to prevent the escalation of conflicts. It may also refer to approaches in conflict resolution. People may become committed to behaviors that tend to escalate conflict, so specific measures must be taken to avoid such escalation.

Overview

Verbal de-escalation in psychiatric settings

In psychiatric settings, de-escalation is aimed at calmly communicating with an agitated client in order to understand, manage and resolve their concerns. Ultimately, these actions should help reduce the client's agitation and potential for future aggression or violence. An inadequate intervention, or one occurring too late, may leave staff needing to use coercive measures to manage an aggressive or violent client. Coercive measures, such as chemical or mechanical restraints, or seclusion, are damaging to the therapeutic relationship and harmful to clients and staff. A review of the literature conducted by Mavandadi, Bieling and Madsen (2016) identified 19 articles that defined or provided a model of de-escalation. The articles converge on a number of themes (i.e. de-escalation should involve safely, calmly and empathetically supporting the client with their concerns). Hankin et al.'s (2011) review of four de-escalation studies reflected the somewhat unclear state of de-escalation research. Their review settled on eight goals, seven elements, 15 general techniques and 15 other techniques divided into three subheadings. In addition, an attempt to synthesize the various models and definitions was made by Price & Baker (2012). Thematic analysis of 11 eligible studies converged on seven themes: three related to staff skills (e.g. empathetic concern, calm appearance and gentle tone of voice) and four related to the process of intervening (e.g. establish rapport, maintain safety, problem solve and set limits). The available literature provides clinical descriptions of effective de-escalation based on qualitative data and professional obs
https://en.wikipedia.org/wiki/Picketing
Picketing is a form of protest in which people (called pickets or picketers) congregate outside a place of work or a location where an event is taking place. Often, this is done in an attempt to dissuade others from going in ("crossing the picket line"), but it can also be done to draw public attention to a cause. Picketers normally endeavor to be non-violent. Picketing can have a number of aims, but is generally intended to put pressure on the party targeted to meet particular demands or cease operations. This pressure is achieved by harming the business through loss of customers and negative publicity, or by discouraging or preventing workers or customers from entering the site and thereby preventing the business from operating normally. Picketing is a common tactic used by trade unions during strikes, which will try to prevent dissident members of the union, members of other unions and non-unionised workers from working. Those who cross the picket line and work despite the strike are known pejoratively as scabs.

Types of picket

Informational picketing is the legal name given to awareness-raising picketing. Per Merriam-Webster's Dictionary of Law, it entails picketing by a group, typically a labour or trade union, which informs the public about a matter of concern to it. In almost all cases this is a disliked policy or practice of the business or organisation. It is a popular picketing technique for nurses to use outside healthcare facilities. For example, on April 5, 2006, nurses of the UMass Memorial Medical Center (UMMHC) took part in two separate such events to protect the quality of their nursing program. Informational picketing was used to gain public support and promote further bargaining with management. It may also be a spur or auxiliary to a petition to government to seek regulatory intervention, reliefs, dispensations or funds. A mass picket is an attempt to bring as many people as possible to a picket line to demonstrate support for the cause. It is primarily used whe
https://en.wikipedia.org/wiki/Geometry%20template
A geometry template is a piece of clear plastic with cut-out shapes for use in mathematics and other subjects from primary school through secondary school. It also has various measurements on its sides to be used like a ruler. In Australia, popular brands include Mathomat and MathAid.

Brands

Mathomat and MathAid

Mathomat is a trademark used for a plastic stencil developed in Australia by Craig Young in 1969. Young originally worked as an engineering tradesperson in the Government Aircraft Factories (GAF) in Melbourne before retraining and working as head of mathematics in a secondary school in Melbourne. Young designed Mathomat to address what he perceived as limitations of traditional mathematics drawing sets in classrooms, mainly caused by students losing parts of the sets. The Mathomat stencil combines a large number of geometric shape stencils with the functions of a technical drawing set (rulers, set squares, a protractor and circle stencils to replace a compass). The template made use of polycarbonate, a new type of thermoplastic polymer when Mathomat first came out, which was strong and transparent enough to allow a large number of stencil shapes to be included in the design without breaking or tearing. The first template was exhibited in 1970 at a mathematics conference in Melbourne along with a series of popular mathematics teaching lesson plans; it became an immediate success, with a large number of schools specifying it as a required student purchase. As of 2017, the stencil is widely specified in Australian schools, chiefly for students at early secondary school level. The manufacturing of Mathomat was taken over in 1989 by the W&G drawing instrument company, which had a factory in Melbourne for the manufacture of technical drawing instruments. Young also developed MathAid, which was initially produced by him when he was living in Ringwood, Victoria. He later sold the company. W&G published a series of teacher resource books for Mathomat authored by
https://en.wikipedia.org/wiki/Subgame
In game theory, a subgame is any part (a subset) of a game that meets the following criteria (the following terms allude to a game described in extensive form): It has a single initial node that is the only member of that node's information set (i.e. the initial node is in a singleton information set). If a node is contained in the subgame then so are all of its successors. If a node in a particular information set is in the subgame then all members of that information set belong to the subgame. It is a notion used in the solution concept of subgame perfect Nash equilibrium, a refinement of the Nash equilibrium that eliminates non-credible threats. The key feature of a subgame is that, when seen in isolation, it constitutes a game in its own right. When the initial node of a subgame is reached in a larger game, players can concentrate only on that subgame; they can ignore the history of the rest of the game (provided they know what subgame they are playing). This is the intuition behind the definition of a subgame given above. It must contain an initial node that is a singleton information set, since this is a requirement of a game. Otherwise, it would be unclear where the player with the first move should start at the beginning of a game (but see nature's choice). Even if it is clear in the context of the larger game which node of a non-singleton information set has been reached, players could not ignore the history of the larger game once they reached the initial node of a subgame if subgames cut across information sets. Furthermore, a subgame can be treated as a game in its own right, but it must reflect the strategies available to players in the larger game of which it is a subset. This is the reasoning behind criteria 2 and 3 of the definition. All the strategies (or subsets of strategies) available to a player at a node in a game must be available to that player in the subgame whose initial node is that node. Subgame perfection One of the principal uses of the n
https://en.wikipedia.org/wiki/Capability%20Maturity%20Model%20Integration
Capability Maturity Model Integration (CMMI) is a process level improvement training and appraisal program. Administered by the CMMI Institute, a subsidiary of ISACA, it was developed at Carnegie Mellon University (CMU). It is required by many U.S. Government contracts, especially in software development. CMU claims CMMI can be used to guide process improvement across a project, division, or an entire organization. CMMI defines the following maturity levels for processes: Initial, Managed, Defined, Quantitatively Managed, and Optimizing. Version 2.0 was published in 2018 (Version 1.3 was published in 2010, and is the reference model for the rest of the information in this article). CMMI is registered in the U.S. Patent and Trademark Office by CMU. Overview CMMI originally addressed three areas of interest: Product and service development – CMMI for Development (CMMI-DEV), Service establishment and management – CMMI for Services (CMMI-SVC), and Product and service acquisition – CMMI for Acquisition (CMMI-ACQ). In version 2.0 these three areas (which previously had a separate model each) were merged into a single model. CMMI was developed by a group from industry, government, and the Software Engineering Institute (SEI) at CMU. CMMI models provide guidance for developing or improving processes that meet the business goals of an organization. A CMMI model may also be used as a framework for appraising the process maturity of the organization. By January 2013, the entire CMMI product suite had been transferred from the SEI to the CMMI Institute, a newly created organization at Carnegie Mellon. History CMMI was developed by the CMMI project, which aimed to improve the usability of maturity models by integrating many different models into one framework. The project consisted of members of industry, government and the Carnegie Mellon Software Engineering Institute (SEI). The main sponsors included the Office of the Secretary of Defense (OSD) and the National Defense Industri
https://en.wikipedia.org/wiki/Stueckelberg%20action
In field theory, the Stueckelberg action (named after Ernst Stueckelberg) describes a massive spin-1 field as an R (the real numbers are the Lie algebra of U(1)) Yang–Mills theory coupled to a real scalar field φ. This scalar field takes on values in a real 1D affine representation of R, with the mass m as the coupling strength. This is a special case of the Higgs mechanism, where, in effect, the Higgs self-coupling and thus the mass of the Higgs scalar excitation have been taken to infinity, so the Higgs has decoupled and can be ignored, resulting in a nonlinear, affine representation of the field, instead of a linear representation — in contemporary terminology, a U(1) nonlinear σ-model. Gauge-fixing φ = 0 yields the Proca action. This explains why, unlike the case for non-abelian vector fields, quantum electrodynamics with a massive photon is, in fact, renormalizable, even though it is not manifestly gauge invariant (after the Stückelberg scalar has been eliminated in the Proca action). Stueckelberg extension of the Standard Model The Stueckelberg extension of the Standard Model (StSM) consists of a gauge invariant kinetic term for a massive U(1) gauge field. Such a term can be implemented into the Lagrangian of the Standard Model without destroying the renormalizability of the theory and further provides a mechanism for mass generation that is distinct from the Higgs mechanism in the context of Abelian gauge theories. The model involves a non-trivial mixing of the Stueckelberg and the Standard Model sectors by including an additional term in the effective Lagrangian of the Standard Model; the first term of this addition is the Stueckelberg field strength, topological mass parameters enter the mass mixing, and an axion field σ completes the gauge invariant combination. After symmetry breaking in the electroweak sector the photon remains massless. The model predicts a new type of gauge boson, the Stueckelberg Z′, which has a very distinct narrow decay width in this model. The Stueckelberg sector of the StSM decouples from the SM in the limit in which the topological mass mixing vanishes. Stueckelberg type couplings arise
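For concreteness, a standard minimal form of the Stueckelberg Lagrangian (a representative sketch in common conventions; signs and normalizations vary across the literature, and this is not the article's elided formula) is

\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + \tfrac{1}{2}\left(\partial_{\mu}\varphi + m A_{\mu}\right)\left(\partial^{\mu}\varphi + m A^{\mu}\right),

which is invariant under A_{\mu} \to A_{\mu} - \partial_{\mu}\epsilon together with \varphi \to \varphi + m\epsilon. Choosing the gauge \varphi = 0 removes the scalar and leaves the Proca Lagrangian -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} + \tfrac{1}{2} m^{2} A_{\mu} A^{\mu}.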
https://en.wikipedia.org/wiki/Word%20metric
In group theory, a word metric on a discrete group G is a way to measure distance between any two elements of G. As the name suggests, the word metric is a metric on G, assigning to any two elements g, h of G a distance that measures how efficiently their difference can be expressed as a word whose letters come from a generating set for the group. The word metric on G is very closely related to the Cayley graph of G: the word metric measures the length of the shortest path in the Cayley graph between two elements of G. A generating set for G must first be chosen before a word metric on G is specified. Different choices of a generating set will typically yield different word metrics. While this seems at first to be a weakness in the concept of the word metric, it can be exploited to prove theorems about geometric properties of groups, as is done in geometric group theory. Examples The group of integers ℤ The group of integers ℤ is generated by the set {-1,+1}. The integer -3 can be expressed as -1-1-1+1-1, a word of length 5 in these generators. But the word that expresses -3 most efficiently is -1-1-1, a word of length 3. The distance between 0 and -3 in the word metric is therefore equal to 3. More generally, the distance between two integers m and n in the word metric is equal to |m-n|, because the shortest word representing the difference m-n has length equal to |m-n|. The group ℤ² For a more illustrative example, the elements of the group ℤ² can be thought of as vectors in the Cartesian plane with integer coefficients. The group ℤ² is generated by the standard unit vectors (1,0), (0,1) and their inverses (-1,0), (0,-1). The Cayley graph of ℤ² is the so-called taxicab geometry. It can be pictured in the plane as an infinite square grid of city streets, where each horizontal and vertical line with integer coordinates is a street, and each point of ℤ² lies at the intersection of a horizontal and a vertical street. Each horizontal segment between two vertices represents the generating vector or
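As a small illustration of the ℤ² example above, the word metric with respect to the four standard generators is exactly the taxicab distance. A minimal sketch in C (the function name is illustrative only):

#include <stdio.h>
#include <stdlib.h>

/* Word metric on Z^2 with generating set {(1,0),(-1,0),(0,1),(0,-1)}:
   the shortest word expressing h - g has length |dx| + |dy|. */
static int word_distance(int gx, int gy, int hx, int hy)
{
    return abs(hx - gx) + abs(hy - gy);
}

int main(void)
{
    /* Distance between (1,2) and (-2,4): |-3| + |2| = 5. */
    printf("%d\n", word_distance(1, 2, -2, 4)); /* prints 5 */
    return 0;
}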
https://en.wikipedia.org/wiki/FISH%20%28cipher%29
The FISH (FIbonacci SHrinking) stream cipher is a fast software-based stream cipher using lagged Fibonacci generators, plus a concept from the shrinking generator cipher. It was published by Siemens in 1993. FISH is quite fast in software and has a huge key length. However, in the same paper where he proposed Pike, Ross Anderson showed that FISH can be broken with just a few thousand bits of known plaintext.
https://en.wikipedia.org/wiki/Nested%20function
In computer programming, a nested function (or nested procedure or subroutine) is a function which is defined within another function, the enclosing function. Due to simple recursive scope rules, a nested function is itself invisible outside of its immediately enclosing function, but can see (access) all local objects (data, functions, types, etc.) of its immediately enclosing function as well as of any function(s) which, in turn, encloses that function. The nesting is theoretically possible to unlimited depth, although only a few levels are normally used in practical programs. Nested functions are used in many approaches to structured programming, including early ones, such as ALGOL, Simula 67 and Pascal, and also in many modern dynamic languages and functional languages. However, they are traditionally not supported in the (originally simple) C-family of languages. Effects Nested functions assume function scope or block scope. The scope of a nested function is inside the enclosing function, i.e. inside one of the constituent blocks of that function, which means that it is invisible outside that block and also outside the enclosing function. A nested function can access other local functions, variables, constants, types, classes, etc. that are in the same scope, or in any enclosing scope, without explicit parameter passing, which greatly simplifies passing data into and out of the nested function. This is typically allowed for both reading and writing. Nested functions may in certain situations (and languages) lead to the creation of a closure. If it is possible for the nested function to escape the enclosing function, for example if functions are first class objects and a nested function is passed to another function or returned from the enclosing function, then a closure is created and calls to this function can access the environment of the original function. The frame of the immediately enclosing function must continue to be alive until the last referencing
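As a concrete sketch, GNU C supports nested functions as a language extension (a GCC extension, not part of standard C); the nested function can read and write the locals of its enclosing function, as described above:

#include <stdio.h>

/* Nested functions are a GNU C extension (compile with gcc);
   standard C does not support them. */
int sum_of_squares(int n)
{
    int total = 0;            /* local of the enclosing function */

    void add_square(int i)    /* nested: visible only inside sum_of_squares */
    {
        total += i * i;       /* accesses the enclosing function's local */
    }

    for (int i = 1; i <= n; i++)
        add_square(i);
    return total;
}

int main(void)
{
    printf("%d\n", sum_of_squares(3)); /* 1 + 4 + 9 = 14 */
    return 0;
}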
https://en.wikipedia.org/wiki/CableCARD
CableCARD is a special-use PC Card device that allows consumers in the United States to view and record digital cable television channels on digital video recorders, personal computers and television sets on equipment such as a set-top box not provided by a cable television company. The card is usually provided by the local cable operator, typically for a nominal monthly fee. In a broader context, CableCARD refers to a set of technologies created by the United States cable television industry to allow devices from non-cable companies to access content on the cable networks. Some technologies not only refer to the physical card, but also to a device ("Host") that uses the card. Some CableCARD technologies can be used with devices that have no physical CableCARD. The CableCARD was the outcome of a U.S. federal government objective, directed in the Telecommunications Act of 1996, to provide a robust competitive retail market for set-top boxes so consumers did not have to use proprietary equipment from the cable operators. It was believed that this would provide consumers with more choices and lower costs. A 2020 FCC decision removed the requirement for cable companies to provide CableCARDs, but they are still required to provide consumer access options via "separable security". Background The portion of the Telecommunications Act of 1996 which resulted in the creation of CableCARDs is known as Section 629, instructing the Federal Communications Commission (FCC) to assure the commercial availability of navigation devices from vendors not affiliated with any multichannel video programming distributor. Multichannel video programming refers to cable or satellite television. A driving motivation of this passage was to foster the kind of consumer choices that resulted after the federal government's landmark Carterfone ruling requiring telephone companies to allow consumers to purchase third-party telephones for attachment to the phone company network. The thought was that consumers would benefit from wider choices due to competition between consumer electronics (CE) manufacturers unaffiliated with cable c
https://en.wikipedia.org/wiki/Phase%20correlation
Phase correlation is an approach to estimate the relative translative offset between two similar images (digital image correlation) or other data sets. It is commonly used in image registration and relies on a frequency-domain representation of the data, usually calculated by fast Fourier transforms. The term is applied particularly to a subset of cross-correlation techniques that isolate the phase information from the Fourier-space representation of the cross-correlogram. Example The following image demonstrates the usage of phase correlation to determine relative translative movement between two images corrupted by independent Gaussian noise. The image was translated by (30,33) pixels. Accordingly, one can clearly see a peak in the phase-correlation representation at approximately (30,33). Method Given two input images g_a and g_b: Apply a window function (e.g., a Hamming window) on both images to reduce edge effects (this may be optional depending on the image characteristics). Then, calculate the discrete 2D Fourier transform of both images: G_a = F{g_a}, G_b = F{g_b}. Calculate the cross-power spectrum by taking the complex conjugate of the second result, multiplying the Fourier transforms together elementwise, and normalizing this product elementwise: R = (G_a ∘ G_b*) / |G_a ∘ G_b*|, where ∘ is the Hadamard product (entry-wise product) and the absolute values are taken entry-wise as well. Written out entry-wise for element index (j,k): R(j,k) = G_a(j,k) G_b*(j,k) / |G_a(j,k) G_b*(j,k)|. Obtain the normalized cross-correlation by applying the inverse Fourier transform: r = F^{-1}{R}. Determine the location of the peak in r: (Δx,Δy) = arg max over (x,y) of r(x,y). Commonly, interpolation methods are used to estimate the peak location in the cross-correlogram to non-integer values, despite the fact that the data are discrete, and this procedure is often termed 'subpixel registration'. A large variety of subpixel interpolation methods are given in the technical literature. Common peak interpolation methods such as parabolic interpolation have been used, and the OpenCV computer vision package uses a centroid-based method, though the
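The pipeline above is short enough to sketch directly. A minimal C implementation using the FFTW library, assuming real-valued row-major images; the function name, tolerance, and the omission of windowing are illustrative choices, not part of any standard API:

#include <complex.h>
#include <math.h>
#include <fftw3.h>

/* Sketch: estimate the integer shift of gb relative to ga (W x H images).
   The sign convention of the result depends on which spectrum is
   conjugated; error handling is omitted for brevity. */
void phase_correlate(const double *ga, const double *gb, int W, int H,
                     int *dx, int *dy)
{
    int n = W * H;
    fftw_complex *A = fftw_malloc(sizeof(fftw_complex) * n);
    fftw_complex *B = fftw_malloc(sizeof(fftw_complex) * n);
    fftw_complex *R = fftw_malloc(sizeof(fftw_complex) * n);

    for (int i = 0; i < n; i++) { A[i] = ga[i]; B[i] = gb[i]; }

    /* Forward DFTs: G_a and G_b (computed in place). */
    fftw_plan pa = fftw_plan_dft_2d(H, W, A, A, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_plan pb = fftw_plan_dft_2d(H, W, B, B, FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(pa); fftw_execute(pb);

    /* Normalized cross-power spectrum R = (G_a . conj(G_b)) / |G_a . conj(G_b)|. */
    for (int i = 0; i < n; i++) {
        double complex c = A[i] * conj(B[i]);
        double m = cabs(c);
        R[i] = (m > 1e-12) ? c / m : 0.0;
    }

    /* Inverse DFT gives the phase-correlation surface r; locate its peak. */
    fftw_plan pr = fftw_plan_dft_2d(H, W, R, R, FFTW_BACKWARD, FFTW_ESTIMATE);
    fftw_execute(pr);

    int best = 0;
    for (int i = 1; i < n; i++)
        if (creal(R[i]) > creal(R[best])) best = i;
    *dx = best % W;  *dy = best / W;
    /* Peaks past the midpoint correspond to negative shifts (wrap-around). */
    if (*dx > W / 2) *dx -= W;
    if (*dy > H / 2) *dy -= H;

    fftw_destroy_plan(pa); fftw_destroy_plan(pb); fftw_destroy_plan(pr);
    fftw_free(A); fftw_free(B); fftw_free(R);
}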
https://en.wikipedia.org/wiki/Logic%20Made%20Easy
Logic Made Easy: How to Know When Language Deceives You is a 2004 book by Deborah J. Bennett published by W.W. Norton & Company (). Its theme is the analysis of what common words such as "some", "all", and "not" mean, and how logic relates to speech and writing. It discusses eliminating problems such as ambiguity and imprecise language from communications, such as in technical writing.
https://en.wikipedia.org/wiki/Wax%20motor
A wax motor is a linear actuator device that converts thermal energy into mechanical energy by exploiting the phase-change behaviour of waxes. During melting, wax typically expands in volume by 5–20%. A wide range of waxes can be used in wax motors, ranging from highly refined hydrocarbons to waxes extracted from vegetable matter. Specific examples include paraffin waxes in the straight-chain n-alkanes series. These melt and solidify over a well-defined and narrow temperature range. Design The principal components of a wax motor are: An enclosed volume of wax A plunger or stroke-rod to convert the thermo-hydraulic force from the wax into a useful mechanical output A source of heat such as: Electric current; typically a PTC thermistor, that heats the wax Solar radiation; e.g. greenhouse vents Combustion heat; e.g. excess heat from internal combustion engines Ambient heat A sink to reject heat energy such as: Convection to cooler ambient air Peltier effect device arranged to transfer heat energy away When the heat source is energized, the wax block is heated and it expands, driving the plunger outwards by volume displacement. When the heat source is removed, the wax block contracts as it cools and the wax solidifies. For the plunger to withdraw, a biasing force is usually required to overcome the mechanical resistance of seals that contain the liquid wax. The biasing force is typically 20% to 30% of the operating force and often provided by a mechanical spring or a gravity-fed dead weight applied externally to the wax motor. Depending on the particular application, wax motors potentially have advantages over magnetic solenoids: They provide a large hydraulic force from the expansion of the wax, in the order of 4000 N (corresponding to roughly 400 kg or 900 lb at standard gravity). Both the application and the release of the wax motor are not instantaneous but rather smooth and gentle. Because the wax motor is a resistive load rather than an inductive load, wax
https://en.wikipedia.org/wiki/Steam%20rupture
A steam rupture occurs within a pressurized system of supercritical water when the pressure exceeds the design specification plus its safety margin. A steam rupture can occur in any high-temperature pressurized system, including, but not limited to: automobile cooling systems, stationary power plants, mobile power plants, steam-driven tools (such as some trip hammers), and even the delivery systems for application processes such as cleaning and fabric fulling.
https://en.wikipedia.org/wiki/Linear%20phase
In signal processing, linear phase is a property of a filter where the phase response of the filter is a linear function of frequency. The result is that all frequency components of the input signal are shifted in time (usually delayed) by the same constant amount (the slope of the linear function), which is referred to as the group delay. Consequently, there is no phase distortion due to the time delay of frequencies relative to one another. For discrete-time signals, perfect linear phase is easily achieved with a finite impulse response (FIR) filter by having coefficients which are symmetric or anti-symmetric. Approximations can be achieved with infinite impulse response (IIR) designs, which are more computationally efficient. Several techniques are: a Bessel transfer function which has a maximally flat group delay approximation function a phase equalizer Definition A filter is called a linear phase filter if the phase component of the frequency response is a linear function of frequency. For a continuous-time application, the frequency response of the filter is the Fourier transform of the filter's impulse response, and a linear phase version has the form: H(ω) = A(ω) e^{−jωτ}, where: A(ω) is a real-valued function, and τ is the group delay. For a discrete-time application, the discrete-time Fourier transform of the linear phase impulse response has the form: H_{2π}(ω) = A(ω) e^{−jωk/2}, where: A(ω) is a real-valued function with 2π periodicity, k is an integer, and k/2 is the group delay in units of samples. H_{2π}(ω) is a Fourier series that can also be expressed in terms of the Z-transform of the filter impulse response, i.e.: H_{2π}(ω) = Ĥ(e^{jω}), where the notation Ĥ distinguishes the Z-transform from the Fourier transform. Examples When a sinusoid sin(ωt + θ) passes through a filter with constant (frequency-independent) group delay τ, the result is: A(ω)·sin(ω(t − τ) + θ), where: A(ω) is a frequency-dependent amplitude multiplier. The phase shift −ωτ is a linear function of angular frequency ω, and −τ is the slope. It follows that a complex exponential function e^{jωt} is transf
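To make the FIR point concrete, here is a minimal C sketch (the coefficients are chosen only for illustration): a filter whose taps are symmetric, h[n] = h[N−1−n], delays every frequency component by the same (N−1)/2 samples.

#include <stdio.h>

/* A symmetric FIR filter has exactly linear phase: its group delay is
   (N-1)/2 samples for N taps. */
#define N 5

static const double h[N] = { 0.1, 0.2, 0.4, 0.2, 0.1 }; /* h[n] = h[N-1-n] */

/* y[n] = sum over k of h[k] * x[n-k]; samples outside [0,len) treated as 0. */
static void fir_filter(const double *x, double *y, int len)
{
    for (int n = 0; n < len; n++) {
        double acc = 0.0;
        for (int k = 0; k < N; k++)
            if (n - k >= 0)
                acc += h[k] * x[n - k];
        y[n] = acc;
    }
}

int main(void)
{
    double x[10] = { 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 }; /* impulse at n = 2 */
    double y[10];
    fir_filter(x, y, 10);
    /* The output is h[] centered (N-1)/2 = 2 samples later: every frequency
       component of the input is delayed by the same 2 samples. */
    for (int n = 0; n < 10; n++)
        printf("y[%d] = %.2f\n", n, y[n]);
    return 0;
}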
https://en.wikipedia.org/wiki/ICL%20Distributed%20Array%20Processor
The Distributed Array Processor (DAP) produced by International Computers Limited (ICL) was the world's first commercial massively parallel computer. The original paper study was completed in 1972 and building of the prototype began in 1974. The first machine was delivered to Queen Mary College in 1979. Development The initial 'Pilot DAP' was designed and implemented by Dr Stewart F Reddaway with the aid of David J Hunt and Peter M Flanders at the ICL Stevenage Labs. Their manager and a major contributor was John K Iliffe, who had designed the Basic Language Machine—he is well known nowadays for Iliffe vectors. The ICL DAP had 64×64 single bit processing elements (PEs) with 4096 bits of storage per PE. It was attached to an ICL mainframe and its memory was mapped into the mainframe's memory. Programs for the DAP were written in DAP FORTRAN, which was FORTRAN extended with 64×64 matrix and 64 element vector primitives. DAP FORTRAN compiled to an assembly language called APAL (Array Processor Assembly Language). The DAP had a single instruction, multiple data (SIMD) architecture. Each operation could be performed under the control of a mask which controlled which elements were affected. Array programs were executed as subroutines of normal mainframe FORTRAN programs and IO was handled by the mainframe. Operationally, there was an overhead to transfer computational data into and out of the array, and problems which did not fit the 64×64 matrix imposed additional complexity to handle the boundaries (65×65 was perhaps the worst case!)—but for problems which suited the architecture, it could outperform the contemporary Cray pipeline architectures by two orders of magnitude. The ICL 2980 was not a popular machine, and this held back the use of the DAP, which as an attached processor was restricted initially to this one range. The design as described in Reddaway's 1973 paper is very much what was implemented in the first commercial version, except the facility to supply address
https://en.wikipedia.org/wiki/Time-invariant%20system
In control theory, a time-invariant (TI) system has a time-dependent system function that is not a direct function of time. Such systems are regarded as a class of systems in the field of system analysis. The time-dependent system function is a function of the time-dependent input function. If this function depends only indirectly on the time-domain (via the input function, for example), then that is a system that would be considered time-invariant. Conversely, any direct dependence on the time-domain of the system function could be considered as a "time-varying system". Mathematically speaking, "time-invariance" of a system is the following property: Given a system with a time-dependent output function y(t) and a time-dependent input function x(t), the system will be considered time-invariant if a time-delay on the input directly equates to a time-delay of the output function; that is, the delayed input x(t − δ) produces the delayed output y(t − δ). For example, if time t is "elapsed time", then "time-invariance" implies that the relationship between the input function and the output function is constant with respect to time. In the language of signal processing, this property can be satisfied if the transfer function of the system is not a direct function of time except as expressed by the input and output. In the context of a system schematic, this property can also be stated as follows, as shown in the figure to the right: If a system is time-invariant then the system block commutes with an arbitrary delay. If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. Nonlinear time-invariant systems lack a comprehensive, governing theory. Discrete time-invariant systems are known as shift-invariant systems. Systems which lack the time-invariant property are studied as time-variant systems. Simple example To demonstrate how to determine if a syst
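The delay-commutation test described above is easy to check numerically. A minimal C sketch (the system definitions and names are illustrative only), comparing a time-invariant gain against a system with an explicit time dependence:

#include <stdio.h>
#include <math.h>

/* System A: y[n] = 10 * x[n]   (time-invariant)
   System B: y[n] = n  * x[n]   (time-varying: explicit n dependence) */

#define LEN 8
#define DELAY 2

static double sysA(const double *x, int n) { return 10.0 * x[n]; }
static double sysB(const double *x, int n) { return (double)n * x[n]; }

/* Returns 1 if delaying the input by DELAY equals delaying the output. */
static int is_time_invariant(double (*sys)(const double *, int))
{
    double x[LEN]  = { 1, 2, 3, 4, 5, 6, 7, 8 };
    double xd[LEN] = { 0 };                 /* delayed input */
    for (int n = DELAY; n < LEN; n++)
        xd[n] = x[n - DELAY];

    for (int n = DELAY; n < LEN; n++) {
        double shifted_output   = sys(x, n - DELAY); /* delay after system */
        double output_of_shifted = sys(xd, n);       /* delay before system */
        if (fabs(shifted_output - output_of_shifted) > 1e-9)
            return 0;
    }
    return 1;
}

int main(void)
{
    printf("System A time-invariant: %d\n", is_time_invariant(sysA)); /* 1 */
    printf("System B time-invariant: %d\n", is_time_invariant(sysB)); /* 0 */
    return 0;
}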
https://en.wikipedia.org/wiki/Reinventing%20the%20wheel
To reinvent the wheel is to attempt to duplicate—most likely with inferior results—a basic method that has already previously been created or optimized by others. The inspiration for this idiomatic metaphor is that the wheel is an ancient archetype of human ingenuity (one so profound that it continues to underlie much of modern technology). As it has already been invented and is not considered to have any inherent flaws, an attempt to reinvent it would add no value to it and be a waste of time, diverting the investigator's resources from possibly more worthy goals. Usage The phrase is sometimes used without derision when a person's activities might be perceived as merely reinventing the wheel when they actually possess additional value. For example, "reinventing the wheel" is an important tool in the instruction of complex ideas. Rather than providing students simply with a list of known facts and techniques and expecting them to incorporate these ideas perfectly and rapidly, the instructor instead will build up the material anew, leaving the student to work out those key steps which embody the reasoning characteristic of the field. "Reinventing the wheel" may be an ironic cliche – it is not clear when the wheel itself was actually invented. The modern "invention" of the wheel might actually be a "re-invention" of an age-old invention. Additionally, many different wheels featuring enhancements on existing wheels (such as the many types of available tires) are regularly developed and marketed. The metaphor emphasizes understanding existing solutions, but not necessarily settling for them. In software development In software development, reinventing the wheel is often necessary in order to work around software licensing incompatibilities or around technical and policy limitations present in parts or modules provided by third parties. An example would be to implement a quicksort for a script written in JavaScript and destined to be embedded in a web page.
https://en.wikipedia.org/wiki/Time-variant%20system
A time-variant system is a system whose output response depends on the moment of observation as well as the moment of input signal application. In other words, a time delay or time advance of the input not only shifts the output signal in time but also changes other parameters and behavior. Time-variant systems respond differently to the same input at different times. The opposite is true for time-invariant (TIV) systems. Overview There are many well-developed techniques for dealing with the response of linear time-invariant systems, such as Laplace and Fourier transforms. However, these techniques are not strictly valid for time-varying systems. A system undergoing slow time variation in comparison to its time constants can usually be considered to be time-invariant: it is close to time-invariant on a small scale. An example of this is the aging and wear of electronic components, which happens on a scale of years, and thus does not result in any behaviour qualitatively different from that observed in a time-invariant system: day-to-day, they are effectively time-invariant, though year to year, the parameters may change. Other linear time-variant systems may behave more like nonlinear systems, if the system changes quickly – significantly differing between measurements. The following things can be said about a time-variant system: It has explicit dependence on time. It does not have an impulse response in the normal sense; the system can be characterized by an impulse response only if the impulse response is known at each and every time instant. It is not stationary. Linear time-variant systems Linear time-variant (LTV) systems are those whose parameters vary with time according to previously specified laws. Mathematically, there is a well-defined dependence of the system over time and over the input parameters that change over time. In order to solve time-variant systems, the algebraic methods consider initial conditions of the system i.e. whether the syst
https://en.wikipedia.org/wiki/Walsh%20matrix
In mathematics, a Walsh matrix is a specific square matrix of dimensions 2^n, where n is some particular natural number. The entries of the matrix are either +1 or −1 and its rows as well as columns are orthogonal, i.e. the dot product of any two distinct rows (or columns) is zero. The Walsh matrix was proposed by Joseph L. Walsh in 1923. Each row of a Walsh matrix corresponds to a Walsh function. The Walsh matrices are a special case of Hadamard matrices. The naturally ordered Hadamard matrix is defined by the recursive formula below, and the sequency-ordered Hadamard matrix is formed by rearranging the rows so that the number of sign changes in a row is in increasing order. Confusingly, different sources refer to either matrix as the Walsh matrix. The Walsh matrix (and Walsh functions) are used in computing the Walsh transform and have applications in the efficient implementation of certain signal processing operations. Formula The Hadamard matrices of dimension 2^k for k ∈ N are given by the recursive formula (the lowest order of Hadamard matrix is 2): H(2^1) = [[1, 1], [1, −1]], and in general H(2^k) = H(2) ⊗ H(2^(k−1)) for 2 ≤ k ∈ N, where ⊗ denotes the Kronecker product. Permutation Rearrange the rows of the matrix according to the number of sign changes of each row. For example, in H(4) the successive rows have 0, 3, 1, and 2 sign changes. If we rearrange the rows in sequency ordering, then the successive rows have 0, 1, 2, and 3 sign changes. Alternative forms of the Walsh matrix Sequency ordering The sequency ordering of the rows of the Walsh matrix can be derived from the ordering of the Hadamard matrix by first applying the bit-reversal permutation and then the Gray-code permutation, after which the successive rows have 0, 1, 2, 3, 4, 5, 6, and 7 sign changes. Dyadic ordering In dyadic ordering the successive rows have 0, 1, 3, 2, 7, 6, 4, and 5 sign changes. Natural ordering In natural ordering the successive rows have 0, 7, 3, 4, 1, 6, 2, and 5 sign changes. See also Haar wavelet Quincunx matrix Hadamard transform Code-division multiple access () – rows of the (n
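A naturally ordered Hadamard matrix of order 2^n can also be generated directly: entry (i, j) is (−1) raised to the number of bit positions where i and j are both 1, which is equivalent to the Kronecker-product recursion above. A minimal C sketch:

#include <stdio.h>

/* Naturally ordered Walsh-Hadamard matrix of size 2^n:
   entry (i, j) is +1 if popcount(i & j) is even, -1 otherwise. */
static int entry(unsigned i, unsigned j)
{
    unsigned v = i & j, parity = 0;
    while (v) { parity ^= v & 1u; v >>= 1; }
    return parity ? -1 : 1;
}

int main(void)
{
    const unsigned n = 3, size = 1u << n;   /* prints the 8x8 matrix */
    for (unsigned i = 0; i < size; i++) {
        for (unsigned j = 0; j < size; j++)
            printf("%3d", entry(i, j));
        printf("\n");
    }
    return 0;
}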
https://en.wikipedia.org/wiki/TCL%20Technology
TCL Technology (originally an abbreviation for Telephone Communication Limited) is a Chinese partially state-owned electronics company headquartered in Huizhou, Guangdong Province. It designs, develops, manufactures, and sells consumer products including television sets, mobile phones, air conditioners, washing machines, refrigerators, and small electrical appliances. In 2010, it was the world's 25th-largest consumer electronics producer. It became the second-largest television manufacturer by market share by 2019. On 7 February 2020, TCL Corporation changed its name to TCL Technology. TCL comprises five listed companies: TCL Technology, listed on the Shenzhen Stock Exchange (), TCL Electronics Holdings, Ltd. (), TCL Communication Technology Holdings, Ltd. (former code ; delisted in 2016), China Display Optoelectronics Technology Holdings Ltd. (), and Tonly Electronics Holdings Ltd. (), listed on the Hong Kong Stock Exchange. TCL Technology's business structure is focused on three major sectors: semiconductor display, semiconductor and semiconductor photovoltaic, industrial finance and capital. History The company was founded in 1981 under the brand name TTK as an audio cassette manufacturer. It was founded as a state-owned enterprise. In 1985, after being sued by TDK for intellectual property violation, the company changed its brand name to TCL by taking the initials from Telephone Communication Limited. In 1999, TCL entered the Vietnamese market. On 19 September 2002, TCL announced the acquisition of all consumer electronics related assets of the former German company Schneider Rundfunkwerke, including the right to use its trademarks as Schneider, Dual, Albona, Joyce and Logix. In July 2003, TCL chairman Li Dongsheng formally announced a "Dragon and Tiger Plan" to establish two competitive TCL businesses in global markets ("Dragons") and three leading businesses inside China ("Tigers"). In November 2003, TCL and Vantiva (then-named Thomson SA) of France
https://en.wikipedia.org/wiki/Competitive%20Lotka%E2%80%93Volterra%20equations
The competitive Lotka–Volterra equations are a simple model of the population dynamics of species competing for some common resource. They can be further generalised to the generalized Lotka–Volterra equation to include trophic interactions. Overview The form is similar to the Lotka–Volterra equations for predation in that the equation for each species has one term for self-interaction and one term for the interaction with other species. In the equations for predation, the base population model is exponential. For the competition equations, the logistic equation is the basis. The logistic population model, when used by ecologists, often takes the following form: dx/dt = rx(1 − x/K). Here x is the size of the population at a given time, r is the inherent per-capita growth rate, and K is the carrying capacity. Two species Given two populations, x1 and x2, with logistic dynamics, the Lotka–Volterra formulation adds an additional term to account for the species' interactions. Thus the competitive Lotka–Volterra equations are: dx1/dt = r1·x1·(1 − (x1 + α12·x2)/K1) and dx2/dt = r2·x2·(1 − (x2 + α21·x1)/K2). Here, α12 represents the effect species 2 has on the population of species 1 and α21 represents the effect species 1 has on the population of species 2. These values do not have to be equal. Because this is the competitive version of the model, all interactions must be harmful (competition) and therefore all α-values are positive. Also, note that each species can have its own growth rate and carrying capacity. A complete classification of this dynamics, even for all sign patterns of the above coefficients, is available, which is based upon equivalence to the 3-type replicator equation. N species This model can be generalized to any number of species competing against each other. One can think of the populations and growth rates as vectors, and the αij's as a matrix. Then the equation for any species i becomes dxi/dt = ri·xi·(1 − Σj αij·xj/Ki), or, if the carrying capacity is pulled into the interaction matrix (this doesn't actually change the equations, only how the interaction matrix is defined), dxi/dt = ri·xi·(1 − Σj αij·xj), where N is the total nu
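A minimal numerical sketch of the two-species system in C, using forward-Euler integration (all parameter values are illustrative, not taken from the article):

#include <stdio.h>

/* Forward-Euler integration of the two-species competitive
   Lotka-Volterra equations:
   dx1/dt = r1*x1*(1 - (x1 + a12*x2)/K1)
   dx2/dt = r2*x2*(1 - (x2 + a21*x1)/K2) */
int main(void)
{
    double x1 = 10.0, x2 = 10.0;          /* initial populations */
    const double r1 = 1.0, r2 = 0.8;      /* growth rates */
    const double K1 = 100.0, K2 = 80.0;   /* carrying capacities */
    const double a12 = 0.5, a21 = 0.6;    /* competition coefficients */
    const double dt = 0.01;               /* time step */

    for (int step = 0; step <= 5000; step++) {
        if (step % 1000 == 0)
            printf("t=%5.1f  x1=%7.3f  x2=%7.3f\n", step * dt, x1, x2);
        double dx1 = r1 * x1 * (1.0 - (x1 + a12 * x2) / K1);
        double dx2 = r2 * x2 * (1.0 - (x2 + a21 * x1) / K2);
        x1 += dt * dx1;
        x2 += dt * dx2;
    }
    return 0;
}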
https://en.wikipedia.org/wiki/Sandbox%20%28computer%20security%29
In computer security, a sandbox is a security mechanism for separating running programs, usually in an effort to keep system failures and/or software vulnerabilities from spreading. The isolation metaphor is taken from the idea of children who do not play well together, so each is given their own sandbox to play in alone. It is often used to execute untested or untrusted programs or code, possibly from unverified or untrusted third parties, suppliers, users or websites, without risking harm to the host machine or operating system. A sandbox typically provides a tightly controlled set of resources for guest programs to run in, such as storage and memory scratch space. Network access, the ability to inspect the host system, or reading from input devices are usually disallowed or heavily restricted. In the sense of providing a highly controlled environment, sandboxes may be seen as a specific example of virtualization. Sandboxing is frequently used to test unverified programs that may contain a virus or other malicious code without allowing the software to harm the host device. Implementations A sandbox is implemented by executing the software in a restricted operating system environment, thus controlling the resources (e.g. file descriptors, memory, file system space, etc.) that a process may use. Examples of sandbox implementations include the following: Linux application sandboxing, built on Seccomp, cgroups and Linux namespaces, notably used by Systemd, Google Chrome, Firefox and Firejail. Android was the first mainstream operating system to implement full application sandboxing, built by assigning each application its own Linux user ID. Apple App Sandbox is required for apps distributed through Apple's Mac App Store and iOS/iPadOS App Store, and recommended for other signed apps. Windows Vista and later editions include a "low" mode process running, known as "User Account Control" (UAC), which only allows writing in a specific directory and registry keys.
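As a minimal illustration of the Linux mechanisms mentioned above, the following C sketch enters seccomp "strict mode" (a deliberately simple policy; production sandboxes combine seccomp-BPF filters, namespaces and cgroups):

#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

/* Linux-only sketch. After the prctl call below, the process may only
   read, write and exit; any other system call kills it with SIGKILL. */
int main(void)
{
    printf("entering sandbox\n");
    fflush(stdout);          /* flush buffered IO before restricting syscalls */

    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");
        return 1;
    }

    /* Still allowed: write(2) to an already-open descriptor. */
    const char msg[] = "sandboxed: write() still works\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);

    /* A disallowed syscall here (e.g. open()) would terminate the process. */
    _exit(0);
}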
https://en.wikipedia.org/wiki/Anti-aliasing%20filter
An anti-aliasing filter (AAF) is a filter used before a signal sampler to restrict the bandwidth of a signal to satisfy the Nyquist–Shannon sampling theorem over the band of interest. Since the theorem states that unambiguous reconstruction of the signal from its samples is possible when the power of frequencies above the Nyquist frequency is zero, a brick wall filter is an idealized but impractical AAF. A practical AAF makes a trade-off between reduced bandwidth and increased aliasing. A practical anti-aliasing filter will typically permit some aliasing to occur, or attenuate or otherwise distort some in-band frequencies close to the Nyquist limit. For this reason, many practical systems sample at a higher rate than would be theoretically required by a perfect AAF in order to ensure that all frequencies of interest can be reconstructed, a practice called oversampling. Optical applications In the case of optical image sampling, as by image sensors in digital cameras, the anti-aliasing filter is also known as an optical low-pass filter (OLPF), blur filter, or AA filter. The mathematics of sampling in two spatial dimensions is similar to the mathematics of time-domain sampling, but the filter implementation technologies are different. The typical implementation in digital cameras is two layers of birefringent material such as lithium niobate, which spreads each optical point into a cluster of four points. The choice of spot separation for such a filter involves a trade-off among sharpness, aliasing, and fill factor (the ratio of the active refracting area of a microlens array to the total contiguous area occupied by the array). In a monochrome or three-CCD or Foveon X3 camera, the microlens array alone, if near 100% effective, can provide a significant anti-aliasing function, while in color filter array (e.g. Bayer filter) cameras, an additional filter is generally needed to reduce aliasing to an acceptable level. Alternative implementations include the Pentax K-3's anti-a
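In the discrete-time setting, the same idea appears before any sample-rate reduction: remove everything above the new Nyquist frequency, then decimate. A minimal C sketch of a Hamming-windowed-sinc low-pass followed by 2:1 decimation (tap count and cutoff are illustrative choices, not a recommended design):

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TAPS 31

/* Windowed-sinc low-pass; cutoff in cycles/sample, below 0.5. */
static void design_lowpass(double *h, double cutoff)
{
    const int mid = (TAPS - 1) / 2;
    double sum = 0.0;
    for (int n = 0; n < TAPS; n++) {
        int k = n - mid;
        double sinc = (k == 0) ? 2.0 * cutoff
                               : sin(2.0 * M_PI * cutoff * k) / (M_PI * k);
        double w = 0.54 - 0.46 * cos(2.0 * M_PI * n / (TAPS - 1)); /* Hamming */
        h[n] = sinc * w;
        sum += h[n];
    }
    for (int n = 0; n < TAPS; n++) h[n] /= sum;  /* unity DC gain */
}

/* Filter, then keep every 2nd sample; inputs outside [0,len) treated as 0. */
static int filter_and_decimate(const double *x, int len, double *y)
{
    double h[TAPS];
    design_lowpass(h, 0.20);   /* safely below 0.25 = Nyquist of the new rate */
    int m = 0;
    for (int n = 0; n < len; n += 2) {
        double acc = 0.0;
        for (int k = 0; k < TAPS; k++)
            if (n - k >= 0 && n - k < len)
                acc += h[k] * x[n - k];
        y[m++] = acc;
    }
    return m;
}

int main(void)
{
    double x[256], y[128];
    for (int n = 0; n < 256; n++)
        x[n] = sin(2.0 * M_PI * 0.05 * n);   /* in-band test tone */
    int m = filter_and_decimate(x, 256, y);
    printf("produced %d output samples\n", m);
    return 0;
}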
https://en.wikipedia.org/wiki/Classical%20field%20theory
A classical field theory is a physical theory that predicts how one or more physical fields interact with matter through field equations, without considering effects of quantization; theories that incorporate quantum mechanics are called quantum field theories. In most contexts, 'classical field theory' is specifically intended to describe electromagnetism and gravitation, two of the fundamental forces of nature. A physical field can be thought of as the assignment of a physical quantity at each point of space and time. For example, in a weather forecast, the wind velocity during a day over a country is described by assigning a vector to each point in space. Each vector represents the direction of the movement of air at that point, so the set of all wind vectors in an area at a given point in time constitutes a vector field. As the day progresses, the directions in which the vectors point change as the directions of the wind change. The first field theories, Newtonian gravitation and Maxwell's equations of electromagnetic fields were developed in classical physics before the advent of relativity theory in 1905, and had to be revised to be consistent with that theory. Consequently, classical field theories are usually categorized as non-relativistic and relativistic. Modern field theories are usually expressed using the mathematics of tensor calculus. A more recent alternative mathematical formalism describes classical fields as sections of mathematical objects called fiber bundles. Non-relativistic field theories Some of the simplest physical fields are vector force fields. Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described. Newtonian gravitation The first field theory of gravity was Newton's theory of gravitation in which the mutual interaction between two masses obeys an inverse square law. This was very useful for predicting t
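As a worked form of the inverse-square statement (standard notation, supplied here because the article's formulas are not reproduced above): the magnitude of the mutual attraction of two masses m_1 and m_2 separated by a distance r is

F = \frac{G m_1 m_2}{r^2},

and in field form, a mass M produces the gravitational field \mathbf{g}(\mathbf{r}) = -\frac{GM}{r^2}\,\hat{\mathbf{r}}, with the force on a test mass m given by \mathbf{F} = m\,\mathbf{g}.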
https://en.wikipedia.org/wiki/Property%20P%20conjecture
In mathematics, the Property P conjecture is a statement about 3-manifolds obtained by Dehn surgery on a knot in the 3-sphere. A knot K in the 3-sphere is said to have Property P if every 3-manifold obtained by performing (non-trivial) Dehn surgery on the knot is not simply-connected. The conjecture states that all knots, except the unknot, have Property P. Research on Property P was started by R. H. Bing, who popularized the name and conjecture. This conjecture can be thought of as a first step to resolving the Poincaré conjecture, since the Lickorish–Wallace theorem says any closed, orientable 3-manifold results from Dehn surgery on a link. If a knot K has Property P, then one cannot construct a counterexample to the Poincaré conjecture by surgery along K. A proof was announced in 2004, as the combined result of efforts of mathematicians working in several different fields. Algebraic Formulation Let λ, μ denote elements corresponding to a preferred longitude and meridian of a tubular neighborhood of K. K has Property P if and only if its knot group is never trivialised by adjoining a relation of the form μ^p λ^q = 1 for some integers p and q with q ≠ 0. See also Property R conjecture
https://en.wikipedia.org/wiki/Insular%20biogeography
Insular biogeography or island biogeography is a field within biogeography that examines the factors that affect the species richness and diversification of isolated natural communities. The theory was originally developed to explain the pattern of the species–area relationship occurring in oceanic islands. Under either name it is now used in reference to any ecosystem (present or past) that is isolated due to being surrounded by unlike ecosystems, and has been extended to mountain peaks, seamounts, oases, fragmented forests, and even natural habitats isolated by human land development. The field was started in the 1960s by the ecologists Robert H. MacArthur and E. O. Wilson, who coined the term island biogeography in their inaugural contribution to Princeton's Monographs in Population Biology series, which attempted to predict the number of species that would exist on a newly created island. Definitions For biogeographical purposes, an insular environment or "island" is any area of habitat suitable for a specific ecosystem, surrounded by an expanse of unsuitable habitat. While this may be a traditional island—a mass of land surrounded by water—the term may also be applied to many nontraditional "islands", such as the peaks of mountains, isolated springs or lakes, and non-contiguous woodlands. The concept is often applied to natural habitats surrounded by human-altered landscapes, such as expanses of grassland surrounded by highways or housing tracts, and national parks. Additionally, what is insular for one organism may not be so for others: some organisms located on mountaintops may also be found in the valleys, while others may be restricted to the peaks. Theory The theory of insular biogeography proposes that the number of species found in an undisturbed insular environment ("island") is determined by immigration and extinction, and further that the isolated populations may follow different evolutionary routes, as shown by Darwin's observation of finche
https://en.wikipedia.org/wiki/Pressure-fed%20engine
The pressure-fed engine is a class of rocket engine designs. A separate gas supply, usually helium, pressurizes the propellant tanks to force fuel and oxidizer to the combustion chamber. To maintain adequate flow, the tank pressures must exceed the combustion chamber pressure. Pressure-fed engines have simple plumbing and have no need for complex and occasionally unreliable turbopumps. A typical startup procedure begins with opening a valve, often a one-shot pyrotechnic device, to allow the pressurizing gas to flow through check valves into the propellant tanks. Then the propellant valves in the engine itself are opened. If the fuel and oxidizer are hypergolic, they burn on contact; non-hypergolic fuels require an igniter. Multiple burns can be conducted by merely opening and closing the propellant valves as needed. If the pressurization system also has activating valves, they can be operated electrically, or by gas pressure controlled by smaller electrically operated valves. Care must be taken, especially during long burns, to avoid excessive cooling of the pressurizing gas due to adiabatic expansion. Cold helium will not liquefy, but it could freeze a propellant, decrease tank pressures, or damage components not designed for low temperatures. The Apollo Lunar Module Descent Propulsion System was unusual in storing its helium in a supercritical but very cold state. It was warmed, as it was withdrawn, by a heat exchanger using the ambient-temperature fuel. Spacecraft attitude control and orbital maneuvering thrusters are almost universally pressure-fed designs. Examples include the Reaction Control System (RCS) and Orbital Maneuvering System (OMS) engines of the Space Shuttle orbiter; the RCS and Service Propulsion System (SPS) engines on the Apollo Command/Service Module; the SuperDraco (in-flight abort) and Draco (RCS) engines on the SpaceX Dragon 2; and the RCS, ascent and descent engines on the Apollo Lunar Module. Some launcher upper stages also use pressure-fed engin
https://en.wikipedia.org/wiki/Event-driven%20finite-state%20machine
In computation, a finite-state machine (FSM) is event driven if the transition from one state to another is triggered by an event or a message. This is in contrast to the parsing-theory origins of the term finite-state machine where the machine is described as consuming characters or tokens. Often these machines are implemented as threads or processes communicating with one another as part of a larger application. For example, a telecommunication protocol is most of the time implemented as an event-driven finite-state machine. Example in C This code describes the state machine for a very basic car radio system. It is basically an infinite loop that reads incoming events. The state machine has only 2 states: radio mode or CD mode. The event is either a mode change between radio and CD, or a "go to next" (next preset for radio or next track for CD). The truncated original has been completed symmetrically for the CD state.
/********************************************************************/
#include <stdio.h>
/********************************************************************/
typedef enum {
    ST_RADIO,
    ST_CD
} STATES;

typedef enum {
    EVT_MODE,
    EVT_NEXT
} EVENTS;

EVENTS readEventFromMessageQueue(void);

/********************************************************************/
int main(void)
{
   /* Default state is radio */
   STATES state = ST_RADIO;
   int stationNumber = 0;
   int trackNumber = 0;

   /* Infinite loop */
   while (1) {
      /* Read the next incoming event. Usually this is a blocking function. */
      EVENTS event = readEventFromMessageQueue();

      /* Switch the state and the event to execute the right transition. */
      switch (state) {
         case ST_RADIO:
            switch (event) {
               case EVT_MODE:
                  /* Change the state */
                  state = ST_CD;
                  break;
               case EVT_NEXT:
                  /* Increase the station number */
                  stationNumber++;
                  break;
            }
            break;

         case ST_CD:
            switch (event) {
               case EVT_MODE:
                  /* Change the state */
                  state = ST_RADIO;
                  break;
               case EVT_NEXT:
                  /* Go to the next track */
                  trackNumber++;
                  break;
            }
            break;
      }
   }
}
https://en.wikipedia.org/wiki/Virtual%20finite-state%20machine
A virtual finite-state machine (VFSM) is a finite-state machine (FSM) defined in a Virtual Environment. The VFSM concept provides a software specification method to describe the behaviour of a control system using assigned names of input control properties and output actions. The VFSM method introduces an execution model and facilitates the idea of an executable specification. This technology is mainly used in complex machine control, instrumentation, and telecommunication applications. Why Implementing a state machine necessitates the generation of logical conditions (state transition conditions and action conditions). In the hardware environment, where state machines found their original use, this is trivial: all signals are Boolean. In contrast, state machines specified and implemented in software require logical conditions that are per se multivalued: Temperature could be Low, OK, High Commands may have several values: Init, Start, Stop, Break, Continue In a hierarchical control system the subordinate state machines can have many states that are used as conditions of the superior state machine In addition, input signals can be unknown due to errors or malfunctions, meaning that even digital input signals (considered as classical Boolean values) in fact take 3 values: Low, High, Unknown. A Positive Logical Algebra solves this problem via virtualization, by creating a Virtual Environment which allows the specification of state machines for software using multivalued variables. Control Properties A state variable in the VFSM environment may have one or more values which are relevant for the control—in such a case it is an input variable. Those values are the control properties of this variable. Control properties are not necessarily specific data values but are rather certain states of the variable. For instance, a digital variable could provide three control properties: TRUE, FALSE and UNKNOWN according to its possible boolean values. A numerical (analog) input vari
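A small C sketch of the "control properties" idea (the names and thresholds are invented for illustration; VFSM tools generate this kind of mapping from the specification): a raw input is virtualized into a small set of named values, and the state machine logic is written against those names rather than against raw data.

#include <stdio.h>

/* Control properties of a temperature input, including an explicit
   UNKNOWN value for a faulty sensor. */
typedef enum { TEMP_LOW, TEMP_OK, TEMP_HIGH, TEMP_UNKNOWN } TempProperty;

/* Map a raw sensor reading onto its control property. */
static TempProperty temp_property(double celsius, int sensor_ok)
{
    if (!sensor_ok)        return TEMP_UNKNOWN; /* malfunction is a value too */
    if (celsius < 18.0)    return TEMP_LOW;
    if (celsius > 25.0)    return TEMP_HIGH;
    return TEMP_OK;
}

int main(void)
{
    /* The control logic tests properties, not raw numbers. */
    switch (temp_property(27.3, 1)) {
    case TEMP_LOW:     puts("heating on");  break;
    case TEMP_OK:      puts("idle");        break;
    case TEMP_HIGH:    puts("cooling on");  break;
    case TEMP_UNKNOWN: puts("safe state");  break;
    }
    return 0;
}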
https://en.wikipedia.org/wiki/Media-independent%20interface
The media-independent interface (MII) was originally defined as a standard interface to connect a Fast Ethernet (i.e., ) media access control (MAC) block to a PHY chip. The MII is standardized by IEEE 802.3u and connects different types of PHYs to MACs. Being media independent means that different types of PHY devices for connecting to different media (i.e. twisted pair, fiber optic, etc.) can be used without redesigning or replacing the MAC hardware. Thus any MAC may be used with any PHY, independent of the network signal transmission media. The MII can be used to connect a MAC to an external PHY using a pluggable connector, or directly to a PHY chip on the same PCB. On a PC the CNR connector Type B carries MII signals. Network data on the interface is framed using the IEEE Ethernet standard. As such it consists of a preamble, start frame delimiter, Ethernet headers, protocol-specific data and a cyclic redundancy check (CRC). The original MII transfers network data using 4-bit nibbles in each direction (4 transmit data bits, 4 receive data bits). The data is clocked at 25 MHz to achieve 100 Mbit/s throughput. The original MII design has been extended to support reduced signals and increased speeds. Current variants include: Reduced media-independent interface (RMII) Gigabit media-independent interface (GMII) Reduced gigabit media-independent interface (RGMII) Serial media-independent interface (SMII) Serial gigabit media-independent interface (serial GMII, SGMII) High serial gigabit media-independent interface (HSGMII) Quad serial gigabit media-independent interface (QSGMII) 10-gigabit media-independent interface (XGMII) The Management Data Input/Output (MDIO) serial bus is a subset of the MII that is used to transfer management information between MAC and PHY. At power up, using autonegotiation, the PHY usually adapts to whatever it is connected to unless settings are altered via the MDIO interface. Standard MII The standard MII features a small set o
https://en.wikipedia.org/wiki/International%20Components%20for%20Unicode
International Components for Unicode (ICU) is an open-source project of mature C/C++ and Java libraries for Unicode support, software internationalization, and software globalization. ICU is widely portable to many operating systems and environments. It gives applications the same results on all platforms and between C, C++, and Java software. The ICU project is a technical committee of the Unicode Consortium and sponsored, supported, and used by IBM and many other companies. ICU provides the following services: Unicode text handling, full character properties, and character set conversions; Unicode regular expressions; full Unicode sets; character, word, and line boundaries; language-sensitive collation and searching; normalization, upper and lowercase conversion, and script transliterations; comprehensive locale data and resource bundle architecture via the Common Locale Data Repository (CLDR); multiple calendars and time zones; and rule-based formatting and parsing of dates, times, numbers, currencies, and messages. ICU provided complex text layout service for Arabic, Hebrew, Indic, and Thai historically, but that was deprecated in version 54, and was completely removed in version 58 in favor of HarfBuzz. ICU provides more extensive internationalization facilities than the standard libraries for C and C++. ICU 72 updates to the latest Unicode 15. "In many formatting patterns, ASCII spaces are replaced with Unicode spaces (e.g., a "thin space")." ICU (ICU4J) now requires Java 8 but "Most of the ICU 72 library code should still work with Java 7 / Android API level 21, but we no longer test with Java 7." ICU 71 added e.g. phrase-based line breaking for Japanese (earlier methods didn't work well for short Japanese text, such as in titles and headings) and support for Hindi written in Latin letters (hi_Latn), also referred to as "Hinglish" and updates to the time zone data version 2022a. ICU 70 added e.g. support for emoji properties of strings and can now be built
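As a small taste of the C API, here is a minimal sketch using ICU4C's u_strToUpper for locale-sensitive case conversion (link against the ICU common library, e.g. -licuuc; buffer sizes are kept deliberately small):

#include <stdio.h>
#include <unicode/utypes.h>
#include <unicode/ustring.h>

/* Locale-aware upper-casing: the Turkish locale "tr" maps 'i' to the
   dotted capital I (U+0130), unlike the root locale, which yields 'I'. */
int main(void)
{
    UChar src[] = { 'i', 0 };       /* UTF-16 string "i" */
    UChar dst[8];
    UErrorCode status = U_ZERO_ERROR;

    int32_t len = u_strToUpper(dst, 8, src, -1, "tr", &status);
    if (U_FAILURE(status)) {
        fprintf(stderr, "ICU error: %s\n", u_errorName(status));
        return 1;
    }
    printf("length %d, first code unit U+%04X\n", (int)len, (unsigned)dst[0]);
    return 0;                        /* prints U+0130 for locale "tr" */
}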
https://en.wikipedia.org/wiki/Ethyl%20oleate
Ethyl oleate is a fatty acid ester formed by the condensation of oleic acid and ethanol. It is a colorless oil, although degraded samples can appear yellow. Use and occurrence Additive Ethyl oleate is used by compounding pharmacies as a vehicle for intramuscular drug delivery, in some cases to prepare the daily doses of progesterone in support of pregnancy. Studies that document the safe use of ethyl oleate in pregnancy for both the mother and the fetus have never been performed. It is regulated as a food additive in the U.S. by the Food and Drug Administration. Ethyl oleate is used as a solvent for pharmaceutical drug preparations involving lipophilic substances such as steroids. It also finds use as a lubricant and a plasticizer. Louis Bouveault used ethyl oleate to demonstrate the Bouveault–Blanc reduction, producing oleyl alcohol and ethanol, a method which was subsequently refined and published in Organic Syntheses. Occurrence Ethyl oleate has been identified as a primer pheromone in honeybees. Precursor to other chemicals By the process of ethenolysis, the methyl ester of oleic acid (methyl oleate) converts to 1-decene and methyl 9-decenoate: MeO2C(CH2)7CH=CH(CH2)7CH3 + CH2=CH2 → CH3(CH2)7CH=CH2 + MeO2C(CH2)7CH=CH2 Medical aspects Ethyl oleate is produced by the body during ethanol intoxication. It is one of the fatty acid ethyl esters (FAEE) produced after ingestion of ethanol. Some research literature implicates FAEEs such as ethyl oleate as the toxic mediators of ethanol in the body (pancreas, liver, heart, and brain). Ethyl oleate may be the toxic mediator of alcohol in fetal alcohol syndrome. The oral ingestion of ethyl oleate has been carefully studied and, due to rapid degradation in the digestive tract, it appears safe for oral ingestion. See also Oleate
https://en.wikipedia.org/wiki/R%C3%B3zsa%20P%C3%A9ter
Rózsa Péter, born Rózsa Politzer, (17 February 1905 – 16 February 1977) was a Hungarian mathematician and logician. She is best known as the "founding mother of recursion theory". Early life and education Péter was born in Budapest, Hungary, as Rózsa Politzer (Hungarian: Politzer Rózsa). She attended Pázmány Péter University (now Eötvös Loránd University), originally studying chemistry but later switching to mathematics. She attended lectures by Lipót Fejér and József Kürschák. While at university, she met László Kalmár; they would collaborate in future years, and Kalmár encouraged her to pursue her love of mathematics. After graduating in 1927, Péter could not find a permanent teaching position, although she had passed her exams to qualify as a mathematics teacher. Due to the effects of the Great Depression, many university graduates could not find work and Péter began private tutoring. At this time, she also began her graduate studies. Professional career and research Initially, Péter began her graduate research on number theory. Upon discovering that her results had already been proven by the work of Robert Carmichael and L. E. Dickson, she abandoned mathematics to focus on poetry. However, she was convinced to return to mathematics by her friend László Kalmár, who suggested she research the work of Kurt Gödel on the theory of incompleteness. She prepared her own, different proofs of Gödel's results. Péter presented the results of her paper on recursive theory to the International Congress of Mathematicians in Zurich, Switzerland in 1932. In the summer of 1933, she worked with Paul Bernays in Göttingen, Germany, on the long chapter on recursive functions in the book that appeared in 1934 under the names of David Hilbert and Bernays. Her main results are summarised in the book and also appeared in several articles in a leading journal of mathematics, the first in 1934. Publication was under the name Politzer-Péter, as she had changed her Jewish surnam
https://en.wikipedia.org/wiki/No%20free%20lunch%20theorem
In mathematical folklore, the "no free lunch" (NFL) theorem (sometimes pluralized) of David Wolpert and William Macready alludes to the saying "no such thing as a free lunch", that is, there are no easy shortcuts to success. It appeared in their 1997 paper "No Free Lunch Theorems for Optimization". Wolpert had previously derived no free lunch theorems for machine learning (statistical inference). In 2005, Wolpert and Macready themselves indicated that the first theorem in their paper "state[s] that any two optimization algorithms are equivalent when their performance is averaged across all possible problems". The "no free lunch" (NFL) theorem is an easily stated and easily understood consequence of theorems Wolpert and Macready actually prove. It is weaker than the proven theorems, and thus does not encapsulate them. Various investigators have extended the work of Wolpert and Macready substantively. Within the associated research area, "no free lunch in search and optimization" names the field dedicated to mathematically analyzing when procedures are statistically identical in performance, particularly in search and optimization. While some scholars argue that NFL conveys important insight, others argue that NFL is of little relevance to machine learning research. Example Posit a toy universe that exists for exactly two days and on each day contains exactly one object, a square or a triangle. The universe has exactly four possible histories: (square, triangle): the universe contains a square on day 1, and a triangle on day 2 (square, square) (triangle, triangle) (triangle, square) Any prediction strategy that succeeds for history #2, by predicting a square on day 2 if there is a square on day 1, will fail on history #1, and vice versa. If all histories are equally likely, then any prediction strategy will score the same, with the same accuracy rate of 0.5 (see the sketch below). Origin Wolpert and Macready give two NFL theorems that are closely related to the
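The two-day example above is small enough to check exhaustively. The following sketch (a minimal illustration, not anything from the paper) enumerates every deterministic prediction strategy and scores it against all four equally likely histories:

```python
from itertools import product

# Enumerate all four possible histories of the toy universe:
# each history assigns "square" or "triangle" to day 1 and day 2.
histories = list(product(["square", "triangle"], repeat=2))

# A prediction strategy maps the day-1 observation to a day-2 guess.
# Enumerate all four deterministic strategies.
strategies = [
    {"square": s, "triangle": t}
    for s, t in product(["square", "triangle"], repeat=2)
]

# Score each strategy: fraction of histories where its day-2 guess is right.
for strategy in strategies:
    hits = sum(strategy[day1] == day2 for day1, day2 in histories)
    print(strategy, "accuracy =", hits / len(histories))
# Every strategy prints accuracy = 0.5, as the NFL argument predicts.
```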
https://en.wikipedia.org/wiki/No%20free%20lunch%20in%20search%20and%20optimization
In computational complexity and optimization, the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method. The name alludes to the saying "no such thing as a free lunch", that is, no method offers a "short cut". This holds under the assumption that all problems in the class are equally likely, so that the averaging is over a probability distribution on problems. It does not apply to the case where the search space has underlying structure (e.g., is a differentiable function) that can be exploited more efficiently (e.g., Newton's method in optimization) than random search or even has closed-form solutions (e.g., the extrema of a quadratic polynomial) that can be determined without search at all. For such probabilistic assumptions, the outputs of all procedures solving a particular type of problem are statistically identical. A colourful way of describing such a circumstance, introduced by David Wolpert and William G. Macready in connection with the problems of search and optimization, is to say that there is no free lunch. Wolpert had previously derived no free lunch theorems for machine learning (statistical inference). Before Wolpert's article was published, Cullen Schaffer independently proved a restricted version of one of Wolpert's theorems and used it to critique the current state of machine learning research on the problem of induction. In the "no free lunch" metaphor, each "restaurant" (problem-solving procedure) has a "menu" associating each "lunch plate" (problem) with a "price" (the performance of the procedure in solving the problem). The menus of restaurants are identical except in one regard – the prices are shuffled from one restaurant to the next. For an omnivore who is as likely to order each plate as any other, the average cost of lunch does not depend on the choice of restaurant. But a vegan who goes to lunch regularly with a carnivore who seeks economy might pay a high average cost for lunch.
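The averaging claim can be illustrated numerically on a toy problem. In this hedged sketch, a "solution method" is a fixed order in which points of a tiny search space are evaluated, performance is the number of evaluations needed to find the maximum, and the orders and set sizes are arbitrary choices:

```python
from itertools import product

X = range(4)              # tiny search space
Y = range(3)              # possible objective values

def evals_to_find_max(order, f):
    """Number of evaluations a fixed visiting order needs to hit f's maximum."""
    best = max(f)
    for k, x in enumerate(order, start=1):
        if f[x] == best:
            return k

order_a = [0, 1, 2, 3]    # two deterministic, non-repeating search orders
order_b = [3, 1, 0, 2]

functions = list(product(Y, repeat=len(X)))   # all |Y|^|X| objective functions

for order in (order_a, order_b):
    avg = sum(evals_to_find_max(order, f) for f in functions) / len(functions)
    print(order, "average evaluations:", avg)
# Both orders report the same average, illustrating the NFL result.
```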
https://en.wikipedia.org/wiki/Feature%20%28machine%20learning%29
In machine learning and pattern recognition, a feature is an individual measurable property or characteristic of a phenomenon. Choosing informative, discriminating and independent features is a crucial element of effective algorithms in pattern recognition, classification and regression. Features are usually numeric, but structural features such as strings and graphs are used in syntactic pattern recognition. The concept of "feature" is related to that of explanatory variable used in statistical techniques such as linear regression. Feature types In feature engineering, two types of features are commonly used: numerical and categorical. Numerical features are continuous values that can be measured on a scale. Examples of numerical features include age, height, weight, and income. Numerical features can be used in machine learning algorithms directly. Categorical features are discrete values that can be grouped into categories. Examples of categorical features include gender, color, and zip code. Categorical features typically need to be converted to numerical features before they can be used in machine learning algorithms. This can be done using a variety of techniques, such as one-hot encoding, label encoding, and ordinal encoding. The type of feature that is used in feature engineering depends on the specific machine learning algorithm that is being used. Some machine learning algorithms, such as decision trees, can handle both numerical and categorical features. Other machine learning algorithms, such as linear regression, can only handle numerical features. Classification A numeric feature can be conveniently described by a feature vector. One way to achieve binary classification is using a linear predictor function (related to the perceptron) with a feature vector as input. The method consists of calculating the scalar product between the feature vector and a vector of weights, qualifying those observations whose result exceeds a threshold. Algorithms
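As a concrete illustration of the encoding techniques and the linear predictor described above, here is a minimal sketch (the category order, weights and threshold are invented for the example):

```python
# One-hot encode a categorical feature, then score with a linear predictor.
colors = ["red", "green", "blue", "green"]

categories = sorted(set(colors))                      # fixed category order
one_hot = [[1.0 if c == cat else 0.0 for cat in categories] for c in colors]
print(categories)   # ['blue', 'green', 'red']
print(one_hot[0])   # red -> [0.0, 0.0, 1.0]

# Binary classification with a linear predictor function:
# predict the positive class when the weighted sum exceeds a threshold.
weights = [0.2, -0.4, 0.9]                            # illustrative values
threshold = 0.5

def predict(x):
    score = sum(w * xi for w, xi in zip(weights, x))  # scalar product
    return score > threshold

print([predict(x) for x in one_hot])  # [True, False, False, False]
```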
https://en.wikipedia.org/wiki/Language%20identification%20in%20the%20limit
Language identification in the limit is a formal model for inductive inference of formal languages, mainly by computers (see machine learning and induction of regular languages). It was introduced by E. Mark Gold in a technical report and a journal article with the same title. In this model, a teacher provides to a learner some presentation (i.e. a sequence of strings) of some formal language. The learning is seen as an infinite process. Each time the learner reads an element of the presentation, it should provide a representation (e.g. a formal grammar) for the language. Gold defines that a learner can identify in the limit a class of languages if, given any presentation of any language in the class, the learner will produce only a finite number of wrong representations, and then stick with the correct representation. However, the learner need not be able to announce its correctness; and the teacher might present a counterexample to any representation arbitrarily long after. Gold defined two types of presentations: Text (positive information): an enumeration of all strings the language consists of. Complete presentation (positive and negative information): an enumeration of all possible strings, each with a label indicating if the string belongs to the language or not. Learnability This model is an early attempt to formally capture the notion of learnability. Gold's journal article introduces for contrast the stronger models Finite identification (where the learner has to announce correctness after a finite number of steps), and Fixed-time identification (where correctness has to be reached after an a priori specified number of steps). A weaker formal model of learnability is the Probably approximately correct learning (PAC) model, introduced by Leslie Valiant in 1984. Examples It is instructive to look at concrete examples of the learning sessions that the definition of identification in the limit describes (one is sketched below). A fictitious session to learn a re
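A minimal sketch of such a session, under the simplifying assumption that the class being learned is the class of finite languages presented as text: the learner that always conjectures exactly the strings seen so far makes finitely many wrong guesses and then sticks with the correct one.

```python
# A learner that always conjectures "exactly the strings seen so far"
# identifies the class of *finite* languages in the limit from text.
def learner(presentation):
    seen = set()
    for string in presentation:
        seen.add(string)
        yield frozenset(seen)   # current representation of the language

language = {"a", "ab", "abb"}
# A text presentation: an enumeration (with repeats) of all strings in L.
presentation = ["a", "ab", "a", "abb", "ab", "abb", "a"]

for step, guess in enumerate(learner(presentation), start=1):
    print(step, sorted(guess),
          "correct" if guess == frozenset(language) else "wrong")
# Once every element of L has appeared (step 4), the guess never changes
# again and is correct -- identification in the limit.
```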
https://en.wikipedia.org/wiki/Unibus
The Unibus was the earliest of several computer bus and backplane designs used with PDP-11 and early VAX systems manufactured by the Digital Equipment Corporation (DEC) of Maynard, Massachusetts. The Unibus was developed around 1969 by Gordon Bell and student Harold McFarland while at Carnegie Mellon University. The name refers to the unified nature of the bus; Unibus was used both as a system bus allowing the central processing unit to communicate with main memory, as well as a peripheral bus, allowing peripherals to send and receive data. Unifying these formerly separate busses allowed external devices to easily perform direct memory access (DMA) and made the construction of device drivers easier as control and data exchange was all handled through memory-mapped I/O. Unibus was physically large, which led to the introduction of Q-bus, which multiplexed some signals to reduce pin count. Higher performance PDP systems used Fastbus, essentially two Unibusses in one. The system was later supplanted by Massbus, a dedicated I/O bus introduced on the VAX and late-model PDP-11s. Technical specifications The Unibus consists of 72 signals, usually connected via two 36-way edge connectors on each printed circuit board. When not counting the power and ground lines, it is usually referred to as a 56-line bus. It can exist within a backplane or on a cable. Up to 20 nodes (devices) can be connected to a single Unibus segment; additional segments can be connected via a bus repeater. The bus is completely asynchronous, allowing a mixture of fast and slow devices. It allows the overlapping of arbitration (selection of the next bus master) while the current bus master is still performing data transfers. The 18 address lines allow the addressing of a maximum of 256 kB (262,144 bytes). Typically, the top 8 kB is reserved for the registers of the memory-mapped I/O devices used in the PDP-11 architecture. The design deliberately minimizes the amount of redundant logic required in the system. For example, a
https://en.wikipedia.org/wiki/Q-Bus
The Q-bus, also known as the LSI-11 Bus, is one of several bus technologies used with PDP and MicroVAX computer systems previously manufactured by the Digital Equipment Corporation of Maynard, Massachusetts. The Q-bus is a less expensive version of Unibus using multiplexing so that address and data signals share the same wires. This allows both a physically smaller and less-expensive implementation of essentially the same functionality. Over time, the physical address range of the Q-bus was expanded from 16 to 18 and then 22 bits. Block transfer modes were also added to the Q-bus. Main features of the Q-bus Like the Unibus before it, the Q-bus uses: Memory-mapped I/O Byte addressing A strict master-slave relationship between devices on the bus Asynchronous signaling Memory-mapped I/O means that data cycles between any two devices, whether CPU, memory, or I/O devices, use the same protocols. On the Unibus, a range of physical addresses are dedicated for I/O devices. The Q-bus simplifies this design by providing a specific signal (originally called BBS7, Bus Bank Select 7 but later generalized to be called BBSIO, Bus Bank Select I/O) that selects the range of addresses used by the I/O devices. Byte addressing means that the physical address passed on the bus is interpreted as the address of a byte-sized quantity of data. Because the bus actually contains a data path that is two bytes wide, address bit [0] is subject to special interpretation and data on the bus has to travel in the correct byte lanes. A strict master-slave relationship means that at any point in time, only one device can be the master of the Q-bus. This master device can initiate data transactions which can then be responded to by a maximum of one selected slave device. (This has no effect on whether a given bus cycle is reading or writing data; the bus master can command either type of transaction.) At the end of the bus cycle, a bus arbitration protocol then selects the next device to
https://en.wikipedia.org/wiki/Fock%20matrix
In the Hartree–Fock method of quantum mechanics, the Fock matrix is a matrix approximating the single-electron energy operator of a given quantum system in a given set of basis vectors. It is most often formed in computational chemistry when attempting to solve the Roothaan equations for an atomic or molecular system. The Fock matrix is actually an approximation to the true Hamiltonian operator of the quantum system. It includes the effects of electron-electron repulsion only in an average way. Because the Fock operator is a one-electron operator, it does not include the electron correlation energy. The Fock matrix is defined by the Fock operator. In its general form the Fock operator writes: $\hat F(i) = \hat h(i) + \sum_{j=1}^{N} [\hat J_j(i) - \hat K_j(i)]$, where $i$ runs over the total $N$ spin orbitals. In the closed-shell case, it can be simplified by considering only the spatial orbitals, noting that the Coulomb terms are duplicated and that the exchange terms are null between different spins. For the restricted case which assumes closed-shell orbitals and single-determinantal wavefunctions, the Fock operator for the i-th electron is given by: $\hat F(i) = \hat h(i) + \sum_{j=1}^{n/2} [2\hat J_j(i) - \hat K_j(i)]$, where: $\hat F(i)$ is the Fock operator for the i-th electron in the system, $\hat h(i)$ is the one-electron Hamiltonian for the i-th electron, $n$ is the number of electrons and $n/2$ is the number of occupied orbitals in the closed-shell system, $\hat J_j$ is the Coulomb operator, defining the repulsive force between the j-th and i-th electrons in the system, $\hat K_j$ is the exchange operator, defining the quantum effect produced by exchanging two electrons. The Coulomb operator is multiplied by two since there are two electrons in each occupied orbital. The exchange operator is not multiplied by two since it has a non-zero result only for electrons which have the same spin as the i-th electron. For systems with unpaired electrons there are many choices of Fock matrices. See also Hartree–Fock method Unrestricted Hartree–Fock Restricted open-shell Hartree–Fock References Atomic, molecular, and optical physics Quantum chemistry Matrices
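A hedged numerical sketch of assembling the restricted (closed-shell) Fock matrix from its ingredients. The integrals and density below are random placeholders without the symmetries of real molecular integrals, and the factor-of-two convention depends on how the density matrix is normalized:

```python
import numpy as np

# Closed-shell (restricted) Fock matrix built from placeholder integrals:
#   F = Hcore + 2J - K,  J from the Coulomb operator, K from exchange.
# eri[p,q,r,s] is assumed to hold (pq|rs) in chemists' notation.
n = 4                                        # number of basis functions (toy)
rng = np.random.default_rng(0)

hcore = rng.standard_normal((n, n))
hcore = (hcore + hcore.T) / 2                # symmetrize the core Hamiltonian
eri = rng.standard_normal((n, n, n, n))      # placeholder two-electron integrals
density = rng.standard_normal((n, n))
density = (density + density.T) / 2          # symmetric placeholder density

J = np.einsum("pqrs,rs->pq", eri, density)   # Coulomb matrix
K = np.einsum("prqs,rs->pq", eri, density)   # exchange matrix
F = hcore + 2 * J - K                        # factor 2: doubly occupied orbitals
print(F.shape)                               # (4, 4)
```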
https://en.wikipedia.org/wiki/Multiflow
Multiflow Computer, Inc., founded in April, 1984 near New Haven, Connecticut, USA, was a manufacturer and seller of minisupercomputer hardware and software embodying the VLIW design style. Multiflow, incorporated in Delaware, ended operations in March, 1990, after selling about 125 VLIW minisupercomputers in the United States, Europe, and Japan. While Multiflow's commercial success was small and short-lived, its technical success and the dissemination of its technology and people had a great effect on the future of computer science and the computer industry. Multiflow's computers were arguably the most novel ever to be broadly sold, programmed, and used like conventional computers. (Other novel computers either required novel programming, or represented more incremental steps beyond existing computers.) Along with Cydrome, an attached-VLIW minisupercomputer company that had less commercial success, Multiflow demonstrated that the VLIW design style was practical, a conclusion surprising to many. While still controversial, VLIW has since been a force in high-performance embedded systems, and has been finding slow acceptance in general-purpose computing. Early history Technology roots The VLIW (for Very Long Instruction Word) design style was first proposed by Joseph A. (Josh) Fisher, a Yale University computer science professor, during the period 1979-1981. VLIW was motivated by a compiler scheduling technique, called trace scheduling, that Fisher had developed as a graduate student at the Courant Institute of Mathematical Sciences of New York University in 1978. Trace scheduling, unlike any prior compiler technique, exposed significant quantities of instruction-level parallelism (ILP) in ordinary computer programs, without laborious hand coding. This implied the practicality of processors for which the compiler could be relied upon to find and specify ILP. VLIW was put forward by Fisher as a way to build general-purpose instruction-level parallel processors expl
https://en.wikipedia.org/wiki/Vimentin
Vimentin is a structural protein that in humans is encoded by the VIM gene. Its name comes from the Latin vimentum which refers to an array of flexible rods. Vimentin is a type III intermediate filament (IF) protein that is expressed in mesenchymal cells. IF proteins are found in all animal cells as well as bacteria. Intermediate filaments, along with tubulin-based microtubules and actin-based microfilaments, comprise the cytoskeleton. All IF proteins are expressed in a highly developmentally-regulated fashion; vimentin is the major cytoskeletal component of mesenchymal cells. Because of this, vimentin is often used as a marker of mesenchymally-derived cells or cells undergoing an epithelial-to-mesenchymal transition (EMT) during both normal development and metastatic progression. Structure The assembly of the fibrous vimentin filament that forms the cytoskeleton follows a gradual sequence. The vimentin monomer has a central α-helical domain, capped on each end by non-helical amino (head) and carboxyl (tail) domains. Two monomers are likely co-translationally expressed in a way that facilitates their interaction forming a coiled-coil dimer, which is the basic subunit of vimentin assembly. A pair of coiled-coil dimers connect in an antiparallel fashion to form a tetramer. Eight tetramers join to form what is known as the unit-length filament (ULF); ULFs then stick to each other and elongate, followed by compaction to form the fibrous proteins. The α-helical sequences contain a pattern of hydrophobic amino acids that contribute to forming a "hydrophobic seal" on the surface of the helix. In addition, there is a periodic distribution of acidic and basic amino acids that seems to play an important role in stabilizing coiled-coil dimers. The spacing of the charged residues is optimal for ionic salt bridges, which allows for the stabilization of the α-helix structure. While this type of stabilization is intuitive for intrachain interactions, rather than interchain i
https://en.wikipedia.org/wiki/Alanine%20aminopeptidase
Membrane alanyl aminopeptidase (EC 3.4.11.2), also known as alanyl aminopeptidase (AAP) or aminopeptidase N (AP-N), is an enzyme that in humans is encoded by the ANPEP gene. Function Aminopeptidase N is located in the small-intestinal and renal microvillar membrane, and also in other plasma membranes. In the small intestine aminopeptidase N plays a role in the final digestion of peptides generated from hydrolysis of proteins by gastric and pancreatic proteases. Its function in proximal tubular epithelial cells and other cell types is less clear. The large extracellular carboxyterminal domain contains a pentapeptide consensus sequence characteristic of members of the zinc-binding metalloproteinase superfamily. Sequence comparisons with known enzymes of this class showed that CD13 and aminopeptidase N are identical. The latter enzyme was thought to be involved in the metabolism of regulatory peptides by diverse cell types, including small intestinal and renal tubular epithelial cells, macrophages, granulocytes, and synaptic membranes from the CNS. Defects in this gene appear to be a cause of various types of leukemia or lymphoma. AAP is also used by some viruses as a receptor to which these viruses bind in order to enter cells. It is a receptor for human coronavirus 229E, feline coronavirus serotype II (FCoV-II), TGEV, PEDV, canine coronavirus genotype II (CCoV-II), as well as several Deltacoronaviruses. References Further reading External links The MEROPS online database for peptidases and their inhibitors: M01.001 Biomarkers EC 3.4.11
https://en.wikipedia.org/wiki/DeWitt%20notation
Physics often deals with classical models where the dynamical variables are a collection of functions {φα}α over a d-dimensional space/spacetime manifold M where α is the "flavor" index. This involves functionals over the φs, functional derivatives, functional integrals, etc. From a functional point of view this is equivalent to working with an infinite-dimensional smooth manifold where its points are an assignment of a function for each α, and the procedure is in analogy with differential geometry where the coordinates for a point x of the manifold M are φα(x). In the DeWitt notation (named after theoretical physicist Bryce DeWitt), φα(x) is written as φi where i is now understood as an index covering both α and x. So, given a smooth functional A, A,i stands for the functional derivative of A with respect to φi; in other words, a "1-form" field over the infinite-dimensional "functional manifold". In integrals, the Einstein summation convention is used: a repeated DeWitt index implies both a summation over the flavor index α and an integration over the manifold M. References Quantum field theory Mathematical notation
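A hedged LaTeX reconstruction of the condensed-index convention the stripped formulas presumably expressed; the field $\eta$ is an arbitrary test field introduced here purely for illustration:

```latex
% Condensed DeWitt index: i = (alpha, x); a repeated index denotes a sum
% over alpha together with an integration over the manifold M.
\[
  A_{,i} \equiv \frac{\delta A}{\delta \varphi^{\alpha}(x)},
  \qquad
  A_{,i}\,\eta^{i} \equiv \sum_{\alpha} \int_{M}
    \frac{\delta A}{\delta \varphi^{\alpha}(x)}\,
    \eta^{\alpha}(x)\,\mathrm{d}^{d}x .
\]
```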
https://en.wikipedia.org/wiki/Two-port%20network
In electronics, a two-port network (a kind of four-terminal network or quadripole) is an electrical network (i.e. a circuit) or device with two pairs of terminals to connect to external circuits. Two terminals constitute a port if the currents applied to them satisfy the essential requirement known as the port condition: the current entering one terminal must equal the current emerging from the other terminal on the same port. The ports constitute interfaces where the network connects to other networks, the points where signals are applied or outputs are taken. In a two-port network, often port 1 is considered the input port and port 2 is considered the output port. It is commonly used in mathematical circuit analysis. Application The two-port network model is used in mathematical circuit analysis techniques to isolate portions of larger circuits. A two-port network is regarded as a "black box" with its properties specified by a matrix of numbers. This allows the response of the network to signals applied to the ports to be calculated easily, without solving for all the internal voltages and currents in the network. It also allows similar circuits or devices to be compared easily. For example, transistors are often regarded as two-ports, characterized by their h-parameters (see below) which are listed by the manufacturer. Any linear circuit with four terminals can be regarded as a two-port network provided that it does not contain an independent source and satisfies the port conditions. Examples of circuits analyzed as two-ports are filters, matching networks, transmission lines, transformers, and small-signal models for transistors (such as the hybrid-pi model). The analysis of passive two-port networks is an outgrowth of reciprocity theorems first derived by Lorentz. In two-port mathematical models, the network is described by a 2 by 2 square matrix of complex numbers. The common models that are used are referred to as z-parameters, y-parameters, h-parameters
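One common choice of the 2 × 2 description is the ABCD (transmission) parameter set, convenient because cascaded two-ports simply multiply. The sketch below (component values are invented) chains a series impedance and a shunt admittance and computes the input impedance seen at port 1 for a given load on port 2:

```python
import numpy as np

# Chaining two-ports: the ABCD (transmission) matrix of a cascade is the
# product of the individual ABCD matrices. Values below are illustrative.
def series_impedance(Z):
    return np.array([[1, Z], [0, 1]], dtype=complex)

def shunt_admittance(Y):
    return np.array([[1, 0], [Y, 1]], dtype=complex)

# An L-network: 50-ohm series resistor followed by a 0.02-S shunt conductance.
abcd = series_impedance(50.0) @ shunt_admittance(0.02)

# Input impedance when port 2 is terminated in a load ZL:
#   Zin = (A*ZL + B) / (C*ZL + D)
A, B, C, D = abcd.ravel()
ZL = 100.0
Zin = (A * ZL + B) / (C * ZL + D)
print(Zin)   # ~ (83.3+0j) ohms
```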
https://en.wikipedia.org/wiki/Augmented%20assignment
Augmented assignment (or compound assignment) is the name given to certain assignment operators in certain programming languages (especially those derived from C). An augmented assignment is generally used to replace a statement where an operator takes a variable as one of its arguments and then assigns the result back to the same variable. A simple example is x += 1 which is expanded to x = x + 1. Similar constructions are often available for various binary operators. In general, in languages offering this feature, most operators that can take a variable as one of their arguments and return a result of the same type have an augmented assignment equivalent that assigns the result back to the variable in place, including arithmetic operators, bitshift operators, and bitwise operators. Discussion For example, the following statement or some variation of it can be found in many programs: x = x + 1 This means "find the number stored in the variable x, add 1 to it, and store the result of the addition in the variable x." As simple as this seems, it may have an inefficiency, in that the location of variable x has to be looked up twice if the compiler does not recognize that two parts of the expression are identical: x might be a reference to some array element or other complexity. In comparison, here is the augmented assignment version: x += 1 With this version, there is no excuse for a compiler failing to generate code that looks up the location of variable x just once, and modifies it in place, if of course the machine code supports such a sequence. For instance, if x is a simple variable, the machine code sequence might be something like "Load x; Add 1; Store x", and the same code would be generated for both forms. But if there is a special op code, it might be "MDM x,1", meaning "Modify Memory" by adding 1 to x, and an optimizing compiler would generate the same code for both forms. Some machine codes offer INC and DEC operations (to add or subtract one), ot
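In a language with augmented assignment, such as Python, the feature looks like this; the dictionary example shows why computing the target's location only once can matter:

```python
# Augmented assignment evaluates the target expression once, which matters
# when locating the target has a cost (or a side effect).
counts = {}
key = "x"
counts[key] = counts.get(key, 0)  # ensure the entry exists

counts[key] = counts[key] + 1     # "counts[key]" is looked up twice
counts[key] += 1                  # target location is computed only once

x = 5
x += 1          # equivalent to x = x + 1
x <<= 2         # bit-shift operators have augmented forms too
x &= 0b111110   # ...as do bitwise operators
print(x)        # 24
```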
https://en.wikipedia.org/wiki/Leibniz%20formula%20for%20%CF%80
In mathematics, the Leibniz formula for $\pi$, named after Gottfried Wilhelm Leibniz, states that $\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}$, an alternating series. It is sometimes called the Madhava–Leibniz series as it was first discovered by the Indian mathematician Madhava of Sangamagrama or his followers in the 14th–15th century (see Madhava series), and was later independently rediscovered by James Gregory in 1671 and Leibniz in 1673. The Taylor series for the inverse tangent function, often called Gregory's series, is: $\arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \cdots = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{2k+1}$. The Leibniz formula is the special case $x = 1$. It also is the Dirichlet $L$-series of the non-principal Dirichlet character of modulus 4 evaluated at $s = 1$, and, therefore, the value $\beta(1)$ of the Dirichlet beta function. Proofs Proof 1 Integrating the finite geometric sum $\frac{1}{1+t^2} = 1 - t^2 + t^4 - \cdots + (-1)^n t^{2n} + \frac{(-1)^{n+1} t^{2n+2}}{1+t^2}$ from 0 to 1 gives $\frac{\pi}{4} = \sum_{k=0}^{n} \frac{(-1)^k}{2k+1} + (-1)^{n+1} \int_0^1 \frac{t^{2n+2}}{1+t^2} \, dt$. Considering only the integral in the last term, we have: $0 \le \int_0^1 \frac{t^{2n+2}}{1+t^2} \, dt \le \int_0^1 t^{2n+2} \, dt = \frac{1}{2n+3}$. Therefore, by the squeeze theorem, as $n \to \infty$, we are left with the Leibniz series: $\frac{\pi}{4} = \sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1}$. Proof 2 Let $f(x) = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{2k+1}$; when $|x| < 1$, the series converges uniformly on compact subsets and $f(x) = \arctan x$. The series also converges at $x = 1$ by the Leibniz test, and, as $x$ approaches 1 from within the Stolz angle, Abel's theorem gives $\sum_{k=0}^{\infty} \frac{(-1)^k}{2k+1} = \lim_{x \to 1^-} \arctan x = \frac{\pi}{4}$, which completes the proof. Convergence Leibniz's formula converges extremely slowly: it exhibits sublinear convergence. Calculating $\pi$ to 10 correct decimal places using direct summation of the series requires precisely five billion terms, because the error is comparable to the first omitted term (one needs to apply the Calabrese error bound). To get 4 correct decimal places (error of 0.00005) one needs 5000 terms. Error bounds even better than those of Calabrese or Johnsonbaugh are available. However, the Leibniz formula can be used to calculate $\pi$ to high precision (hundreds of digits or more) using various convergence acceleration techniques. For example, the Shanks transformation, Euler transform or Van Wijngaarden transformation, which are general methods for alternating series, can be applied effectively to the partial sums of the Leibniz series (see the sketch below). Further, combining terms pairwise gives the non-alternating series
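A short numerical sketch of both the slow convergence and one of the acceleration techniques named above (the Shanks transformation); the term counts are arbitrary choices:

```python
import math

# Partial sums of the Leibniz series 4 * (1 - 1/3 + 1/5 - ...) illustrate
# its sublinear convergence; a single Shanks transformation of three
# consecutive partial sums accelerates it dramatically.
def leibniz_partial(n):
    return 4.0 * sum((-1) ** k / (2 * k + 1) for k in range(n))

for n in (10, 1000, 100000):
    print(n, abs(leibniz_partial(n) - math.pi))   # error shrinks like 1/(2n)

# Shanks transformation of three consecutive partial sums:
s1, s2, s3 = (leibniz_partial(n) for n in (99, 100, 101))
shanks = s3 - (s3 - s2) ** 2 / ((s3 - s2) - (s2 - s1))
print(abs(shanks - math.pi))   # far smaller error than s2 itself
```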
https://en.wikipedia.org/wiki/Wallis%20product
In mathematics, the Wallis product for $\pi$, published in 1656 by John Wallis, states that $\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{4n^2}{4n^2-1} = \frac{2}{1} \cdot \frac{2}{3} \cdot \frac{4}{3} \cdot \frac{4}{5} \cdot \frac{6}{5} \cdot \frac{6}{7} \cdots$ Proof using integration Wallis derived this infinite product using interpolation, though his method is not regarded as rigorous. A modern derivation can be found by examining $I(n) = \int_0^\pi \sin^n x \, dx$ for even and odd values of $n$, and noting that for large $n$, increasing $n$ by 1 results in a change that becomes ever smaller as $n$ increases. Let $I(n) = \int_0^\pi \sin^n x \, dx$. (This is a form of Wallis' integrals.) Integrate by parts: $I(n) = \frac{n-1}{n} I(n-2)$ for $n \ge 2$. We obtain the values $I(0) = \pi$ and $I(1) = 2$ for later use. Now, we calculate $I(2n)$ for even values by repeatedly applying the recurrence relation result from the integration by parts: $I(2n) = \frac{2n-1}{2n} \cdot \frac{2n-3}{2n-2} \cdots \frac{1}{2} \cdot \pi$. Eventually, we get down to $I(0)$, which we have calculated. Repeating the process for odd values: $I(2n+1) = \frac{2n}{2n+1} \cdot \frac{2n-2}{2n-1} \cdots \frac{2}{3} \cdot 2$. We make the following observation, based on the fact that $\sin^{2n+1} x \le \sin^{2n} x \le \sin^{2n-1} x$ on $[0, \pi]$: $I(2n+1) \le I(2n) \le I(2n-1)$. Dividing by $I(2n+1)$: $1 \le \frac{I(2n)}{I(2n+1)} \le \frac{I(2n-1)}{I(2n+1)} = \frac{2n+1}{2n}$, where the equality comes from our recurrence relation. By the squeeze theorem, $\frac{I(2n)}{I(2n+1)} \to 1$, and writing out this ratio with the two closed forms above yields the Wallis product (a numerical check appears below). Proof using Laplace's method See the main page on Gaussian integral. Proof using Euler's infinite product for the sine function While the proof above is typically featured in modern calculus textbooks, the Wallis product is, in retrospect, an easy corollary of the later Euler infinite product for the sine function. Let $x = \frac{\pi}{2}$ in $\frac{\sin x}{x} = \prod_{n=1}^{\infty} \left(1 - \frac{x^2}{n^2 \pi^2}\right)$: then $\frac{2}{\pi} = \prod_{n=1}^{\infty} \left(1 - \frac{1}{4n^2}\right)$, the reciprocal of the Wallis product. Relation to Stirling's approximation Stirling's approximation for the factorial function asserts that $n! \sim \sqrt{2\pi n} \left(\frac{n}{e}\right)^n$. Consider now the finite approximations to the Wallis product, obtained by taking the first $k$ terms in the product: $p_k = \prod_{n=1}^{k} \frac{4n^2}{4n^2-1}$, where $p_k$ can be written as $p_k = \frac{2^{4k} (k!)^4}{((2k)!)^2 (2k+1)}$. Substituting Stirling's approximation in this expression (both for $k!$ and $(2k)!$) one can deduce (after a short calculation) that $p_k$ converges to $\frac{\pi}{2}$ as $k \to \infty$. Derivative of the Riemann zeta function at zero The Riemann zeta function and the Dirichlet eta function can be defined: $\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$ and $\eta(s) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n^s} = (1 - 2^{1-s}) \zeta(s)$. Applying an Euler transform to the latter series, a representation valid at $s = 0$ is obtained; its derivative there evaluates, via the Wallis product, to $\eta'(0) = \frac{1}{2} \ln \frac{\pi}{2}$, which is equivalent to $\zeta'(0) = -\frac{1}{2} \ln(2\pi)$. See also John Wallis, English mathematician who is given partial credit for the development of infinitesimal calculus and pi
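A direct numerical check of the finite approximations $p_k$ (the milestones at which the running product is printed are arbitrary):

```python
import math

# Finite Wallis products p_k = prod_{n=1..k} 4n^2/(4n^2 - 1) converge
# (slowly) to pi/2, so 2*p_k approaches pi.
p = 1.0
for n in range(1, 100001):
    p *= 4 * n * n / (4 * n * n - 1)
    if n in (10, 1000, 100000):
        print(n, 2 * p, "error:", abs(2 * p - math.pi))
```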
https://en.wikipedia.org/wiki/Last%20universal%20common%20ancestor
The last universal common ancestor (LUCA) is mostly hypothesized to have been a common ancestral cell from which the three domains of life, the Bacteria, the Archaea, and the Eukarya originated. It is suggested to have been a "cellular organism that had a lipid bilayer and used DNA, RNA, and protein". The LUCA has also been defined as "a hypothetical organism ancestral to all three domains". In general, the LUCA is considered the point or stage at which the three domains of life diverged from preceding forms of life (about 3.5–3.8 billion years ago). The nature of this point or stage of divergence remains a topic of research. All earlier forms of life preceding this divergence and, of course, all extant terrestrial organisms are generally thought to share common ancestry. On the basis of a formal statistical test, this theory of a universal common ancestry (UCA) is supported versus competing multiple-ancestry hypotheses. The genesis of viruses, before or after the LUCA, as well as the diversity of extant viruses and their hosts, are subjects of investigation. While no specific fossil evidence of the LUCA exists, the detailed biochemical similarity of all current life (divided into the three domains) makes it plausible. Its characteristics can be inferred from shared features of modern genomes. These shared genes describe a complex life form with many co-adapted features, including transcription and translation mechanisms to convert information from DNA to mRNA to proteins. The earliest forms of life probably lived in the high-temperature water of deep sea vents near ocean-floor magma flows around 4 billion years ago. Historical background A phylogenetic tree directly portrays the idea of evolution by descent from a single ancestor. An early tree of life was sketched by Jean-Baptiste Lamarck in his Philosophie zoologique in 1809. Charles Darwin more famously proposed the theory of universal common descent through an evolutionary process in his book On the Origin of Species.
https://en.wikipedia.org/wiki/Software%20quality
In the context of software engineering, software quality refers to two related but distinct notions: Software's functional quality reflects how well it complies with or conforms to a given design, based on functional requirements or specifications. That attribute can also be described as the fitness for purpose of a piece of software or how it compares to competitors in the marketplace as a worthwhile product. It is the degree to which the correct software was produced. Software structural quality refers to how it meets non-functional requirements that support the delivery of the functional requirements, such as robustness or maintainability. It is the degree to which the software works as needed. Many aspects of structural quality can be evaluated only statically, through analysis of the software's inner structure, its source code (see Software metrics), at the unit level and at the system level (sometimes referred to as end-to-end testing), which is in effect how its architecture adheres to sound principles of software architecture outlined in a paper on the topic by the Object Management Group (OMG). However, some structural qualities, such as usability, can be assessed only dynamically (users or others acting on their behalf interact with the software or, at least, some prototype or partial implementation; even the interaction with a mock version made in cardboard represents a dynamic test because such a version can be considered a prototype). Other aspects, such as reliability, might involve not only the software but also the underlying hardware, and can therefore be assessed both statically and dynamically (stress testing). Functional quality is typically assessed dynamically but it is also possible to use static tests (such as software reviews). Historically, the structure, classification and terminology of attributes and metrics applicable to software quality management have been derived or extracted from the ISO 9126 and the subsequent ISO/IEC 25000 standard.
https://en.wikipedia.org/wiki/Adaptive%20equalizer
An adaptive equalizer is an equalizer that automatically adapts to time-varying properties of the communication channel. It is frequently used with coherent modulations such as phase-shift keying, mitigating the effects of multipath propagation and Doppler spreading. Adaptive equalizers are a subclass of adaptive filters. The central idea is altering the filter's coefficients to optimize a filter characteristic. For example, in the case of linear discrete-time filters, the minimum mean square error (Wiener) solution $\mathbf{w}_{\text{opt}} = \mathbf{R}^{-1} \mathbf{p}$ can be used, where $\mathbf{w}$ is the vector of the filter's coefficients, $\mathbf{R}$ is the received signal covariance matrix and $\mathbf{p}$ is the cross-correlation vector between the tap-input vector and the desired response. In practice, the last quantities are not known and, if necessary, must be estimated during the equalization procedure either explicitly or implicitly. Many adaptation strategies exist. They include, e.g.: Least mean squares filter (LMS) Note that the receiver does not have access to the transmitted signal when it is not in training mode. If the probability that the equalizer makes a mistake is sufficiently small, the symbol decisions made by the equalizer may be substituted for the desired response. Stochastic gradient descent (SG) Recursive least squares filter (RLS) A well-known example is the decision feedback equalizer, a filter that uses feedback of detected symbols in addition to conventional equalization of future symbols. Some systems use predefined training sequences to provide reference points for the adaptation process (an LMS sketch follows below). See also Equalizer Intersymbol interference References Data transmission Digital signal processing
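A minimal LMS training-mode sketch under assumed conditions: a known BPSK training sequence, an invented 2-tap channel, and arbitrary step size, filter length and decision delay. It illustrates the update $\mathbf{w} \leftarrow \mathbf{w} + \mu\, e\, \mathbf{x}$, not a production equalizer:

```python
import numpy as np

# A 5-tap FIR equalizer adapted by LMS learns to undo a known 2-tap channel.
rng = np.random.default_rng(1)
symbols = rng.choice([-1.0, 1.0], size=2000)      # training sequence (BPSK)
channel = np.array([1.0, 0.4])                    # mild intersymbol interference
received = np.convolve(symbols, channel)[: len(symbols)]

taps, mu, delay = 5, 0.01, 2                      # filter order, step size, delay
w = np.zeros(taps)
for n in range(taps, len(symbols)):
    x = received[n - taps + 1 : n + 1][::-1]      # tap-input vector (newest first)
    y = w @ x                                     # equalizer output
    e = symbols[n - delay] - y                    # error vs. desired response
    w += mu * e * x                               # LMS coefficient update
print(w)   # converged tap weights approximating a delayed channel inverse
```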
https://en.wikipedia.org/wiki/Web%20services%20protocol%20stack
A web service protocol stack is a protocol stack (a stack of computer networking protocols) that is used to define, locate, implement, and make Web services interact with each other. A web service protocol stack typically stacks four protocols: (Service) Transport Protocol: responsible for transporting messages between network applications and includes protocols such as HTTP, SMTP, FTP, as well as the more recent Blocks Extensible Exchange Protocol (BEEP). (XML) Messaging Protocol: responsible for encoding messages in a common XML format so that they can be understood at either end of a network connection. Currently, this area includes such protocols as XML-RPC, WS-Addressing, and SOAP. (Service) Description Protocol: used for describing the public interface to a specific Web service. The WSDL interface format is typically used for this purpose. (Service) Discovery Protocol: centralizes services into a common registry so that network Web services can publish their location and description, and makes it easy to discover what services are available on the network. Universal Description Discovery and Integration (UDDI) was intended for this purpose, but it has not been widely adopted. The protocol stack can also include a range of higher-level protocols such as Business Process Execution Language (WS-BPEL) or WS-Security for security extensions. External links Alex Nghiem (2003) The Basic Web Services Stack Ethan Cerami (2002) Top Ten FAQs for Web Services innoQ (2007) Web Services Standards as of Q1 2007 Lawrence Wilkes (updated Feb 2005) The Web Services Protocol Stack Pavel Kulchenko (2002) Web Services Acronyms, Demystified Web services
https://en.wikipedia.org/wiki/Ethyl%20acetate
Ethyl acetate (systematically ethyl ethanoate, commonly abbreviated EtOAc, ETAC or EA) is the organic compound with the formula CH3CO2CH2CH3, simplified to C4H8O2. This colorless liquid has a characteristic sweet smell (similar to pear drops) and is used in glues, nail polish removers, and in the decaffeination process of tea and coffee. Ethyl acetate is the ester of ethanol and acetic acid; it is manufactured on a large scale for use as a solvent. Production and synthesis Ethyl acetate was first synthesized by the Count de Lauraguais in 1759 by distilling a mixture of ethanol and acetic acid. In 2004, an estimated 1.3 million tonnes were produced worldwide. The combined annual production in 1985 of Japan, North America, and Europe was about 400,000 tonnes. The global ethyl acetate market was valued at $3.3 billion in 2018. Ethyl acetate is synthesized in industry mainly via the classic Fischer esterification reaction of ethanol and acetic acid. This mixture converts to the ester in about 65% yield at room temperature: CH3CO2H + CH3CH2OH ⇌ CH3CO2CH2CH3 + H2O The reaction can be accelerated by acid catalysis and the equilibrium can be shifted to the right by removal of water. It is also prepared in industry using the Tishchenko reaction, by combining two equivalents of acetaldehyde in the presence of an alkoxide catalyst: 2 CH3CHO → CH3CO2CH2CH3 Silicotungstic acid is used to manufacture ethyl acetate by the alkylation of acetic acid by ethylene: C2H4 + CH3CO2H → CH3CO2CH2CH3 Uses Ethyl acetate is used primarily as a solvent and diluent, being favored because of its low cost, low toxicity, and agreeable odor. For example, it is commonly used to clean circuit boards and in some nail varnish removers (acetone is also used). Coffee beans and tea leaves are decaffeinated with this solvent. It is also used in paints as an activator or hardener. Ethyl acetate is present in confectionery, perfumes, and fruits. In perfumes it evaporates quickly, leaving the scent of the perfume on the skin. Ethyl acetate is an asphyxiant for use in insect collecting and study. In a killing
https://en.wikipedia.org/wiki/Flicky
is a platform game developed by Sega and released as an arcade video game in May 1984. It was licensed to Bally Midway for distribution in the United States. In Flicky, the player controls the eponymous blue bird and must gather all the small birds called Chirps in each round and bring them safely to the exit. There are cat and lizard enemies which can disperse the Chirps and kill the player, but Flicky can use items on the playing field to protect herself and the Chirps from danger. The idea for Flicky came from Sega senior leadership, who wanted to exceed the success of Namco's Mappy (1983). Yoji Ishii and Yoshiki Kawasaki developed Flicky at Sega over one year. Originally, the game simply had the player catch ambiguous dots in a maze. Taking inspiration from a popular song in a Japanese variety show, Kawasaki gave the game an urban theme and bird characters. The game was originally titled "Busty", then "Flippy", before finally settling on "Flicky". Flicky was first ported to the SG-1000 in Japan, and then later to other Japanese home consoles. In 1991, Flicky was released in North America and Europe on the Sega Genesis. The character has made cameo appearances in other Sega games, most notably within the Sonic the Hedgehog series. Gameplay Flicky is a platform game in which the player takes control of a flightless blue bird named Flicky. With only the ability to run side-to-side and jump, the player must collect all the small, yellow birds called "Chirps" and take them to the exit to clear each round. According to game artist Yoshiki Kawasaki, Flicky is just a friend to the Chirps although some players may think she is a mother to them. The Chirps follow Flicky in a chain until they are collected at the exit. Bonus points are awarded for bringing multiple Chirps back in a single chain. There are 48 total stages. Each stage takes place on a single wraparound screen that scrolls horizontally with Flicky always in the center. After all the stages are completed,
https://en.wikipedia.org/wiki/Electrostriction
In electromagnetism, electrostriction is a property of all electrical non-conductors, or dielectrics, that causes them to change their shape under the application of an electric field. It is the dual property to magnetostriction. Explanation Electrostriction is a property of all dielectric materials, and is caused by displacement of ions in the crystal lattice upon being exposed to an external electric field. Positive ions will be displaced in the direction of the field, while negative ions will be displaced in the opposite direction. This displacement will accumulate throughout the bulk material and result in an overall strain (elongation) in the direction of the field. The thickness will be reduced in the orthogonal directions characterized by Poisson's ratio. All insulating materials consisting of more than one type of atom will be ionic to some extent due to the difference of electronegativity of the atoms, and therefore exhibit electrostriction. The resulting strain (ratio of deformation to the original dimension) is proportional to the square of the polarization. Reversal of the electric field does not reverse the direction of the deformation (see the numerical sketch below). More formally, the electrostriction coefficient is a fourth rank tensor ($Q_{ijkl}$), relating the second-order strain tensor ($S_{ij}$) and the first-order electric polarization density ($P_k$), as $S_{ij} = Q_{ijkl} P_k P_l$. The related piezoelectric effect occurs only in a particular class of dielectrics. Electrostriction applies to all crystal symmetries, while the piezoelectric effect only applies to the 20 piezoelectric point groups. Electrostriction is a quadratic effect, unlike piezoelectricity, which is a linear effect. Materials Although all dielectrics exhibit some electrostriction, certain engineered ceramics, known as relaxor ferroelectrics, have extraordinarily high electrostrictive constants. The most commonly used are lead magnesium niobate (PMN), lead magnesium niobate–lead titanate (PMN-PT), and lead lanthanum zirconate titanate (PLZT) Magnitude of
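A toy computation of the quadratic dependence; the scalar coefficient stands in for a single component of the fourth-rank tensor and its magnitude is invented:

```python
# Electrostrictive strain grows as the square of polarization, so reversing
# the field (and hence P) leaves the deformation unchanged.
Q = 2.0e-2                           # m^4/C^2, illustrative magnitude only
for P in (0.05, -0.05, 0.10):        # polarization in C/m^2
    strain = Q * P ** 2
    print(f"P = {P:+.2f} C/m^2  ->  strain = {strain:.2e}")
# P = +0.05 and P = -0.05 give identical strain; doubling P quadruples it.
```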
https://en.wikipedia.org/wiki/Scan-Line%20Interleave
Scan-Line Interleave (SLI) is a multi-GPU method developed by 3dfx for linking two (or more) video cards or chips together to produce a single output. It is an application of parallel processing for computer graphics, meant to increase the processing power available for graphics. 3dfx's SLI technology was first introduced in 1998 with the Voodoo2 line of graphics accelerators. The original Voodoo Graphics card and the VSA-100 were also SLI-capable, however in the case of the former it was only used in arcades and professional applications. NVIDIA reintroduced the SLI acronym in 2004 as Scalable Link Interface. NVIDIA's SLI, compared to 3dfx's SLI, is modernized to use graphics cards interfaced over the PCI Express bus. Function 3dfx's SLI design was the first attempt, in the consumer PC market, at combining the rendering power of two video cards. The two 3dfx cards were connected by a small ribbon cable inside the PC. This cable shared graphics and synchronization information between the cards. Each 3dfx card rendered alternating horizontal lines of pixels composing a frame. See also Scalable Link Interface - Nvidia AMD CrossFireX - AMD References External links The 3dfx Help Page 3dfx Interactive Graphics cards
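The interleaving scheme can be sketched in miniature as follows; the frame size and the stand-in render function are, of course, invented:

```python
# Scan-line interleaving in miniature: card 0 renders even lines, card 1
# renders odd lines, and the outputs are merged into one frame.
HEIGHT, WIDTH = 8, 16

def render_line(card, y):
    # Stand-in for a GPU rendering scan line y; returns a row of "pixels".
    return [f"card{card}"] * WIDTH

frame = [render_line(y % 2, y) for y in range(HEIGHT)]
for y, row in enumerate(frame):
    print(y, row[0])   # lines alternate between card 0 and card 1
```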
https://en.wikipedia.org/wiki/Brownian%20noise
In science, Brownian noise, also known as Brown noise or red noise, is the type of signal noise produced by Brownian motion, hence its alternative name of random walk noise. The term "Brown noise" comes not from the color, but from Robert Brown, who documented the erratic motion for multiple types of inanimate particles in water. The term "red noise" comes from the "white noise"/"white light" analogy; red noise is strong in longer wavelengths, similar to the red end of the visible spectrum. Explanation The graphic representation of the sound signal mimics a Brownian pattern. Its spectral density is inversely proportional to f², meaning it has higher intensity at lower frequencies, even more so than pink noise. It decreases in intensity by 6 dB per octave (20 dB per decade) and, when heard, has a "damped" or "soft" quality compared to white and pink noise. The sound is a low roar resembling a waterfall or heavy rainfall. See also violet noise, which is a 6 dB increase per octave. Strictly, Brownian motion has a Gaussian probability distribution, but "red noise" could apply to any signal with the 1/f² frequency spectrum. Power spectrum A Brownian motion, also called a Wiener process, is obtained as the integral of a white noise signal: $B(t) = \int_0^t W(s) \, ds$, meaning that Brownian motion is the integral of the white noise $W(t)$, whose power spectral density is flat: $S_W(f) = \sigma^2$. Note that here $\mathcal{F}$ denotes the Fourier transform, and $\sigma^2$ is a constant. An important property of this transform is that the derivative of any distribution transforms as $\mathcal{F}[f'(t)](\omega) = i\omega \, \mathcal{F}[f(t)](\omega)$, from which we can conclude that the power spectrum of Brownian noise is $S_B(\omega) = \frac{\sigma^2}{\omega^2}$. An individual Brownian motion trajectory presents a spectrum $S(\omega) \propto 1/\omega^2$, where the amplitude is a random variable, even in the limit of an infinitely long trajectory. Production Brown noise can be produced by integrating white noise (see the sketch below). That is, whereas (digital) white noise can be produced by randomly choosing each sample independently, Brown noise can be produced by adding a random offset to each
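A sketch of this integration approach, with a spectral sanity check that the power falls off roughly as 1/f² (the frequency band used for the fit is an arbitrary choice):

```python
import numpy as np

# Brown noise as the running sum (discrete integral) of white noise.
rng = np.random.default_rng(0)
white = rng.standard_normal(2**16)
brown = np.cumsum(white)
brown /= np.max(np.abs(brown))        # normalize to [-1, 1] for audio use

# Spectrum check: power should fall roughly as 1/f^2 (-20 dB per decade).
spectrum = np.abs(np.fft.rfft(brown)) ** 2
freqs = np.fft.rfftfreq(brown.size)
band = (freqs > 0.001) & (freqs < 0.01)
slope = np.polyfit(np.log10(freqs[band]), np.log10(spectrum[band]), 1)[0]
print(round(slope, 1))                # close to -2.0 for this realization
```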
https://en.wikipedia.org/wiki/Vlog
A vlog, also known as a video blog or video log, is a form of blog for which the medium is video. Vlog entries often combine embedded video (or a video link) with supporting text, images, and other metadata. Entries can be recorded in one take or cut into multiple parts. The vlog category is popular on the video-sharing platform YouTube. In recent years, "vlogging" has spawned a large community on social media, becoming one of the most popular forms of digital entertainment. It is popularly believed that, alongside being entertaining, vlogs can deliver deeper context through imagery than written blogs can. Video logs (vlogs) also often take advantage of web syndication to allow for the distribution of video over the Internet using either the RSS or Atom syndication formats, for automatic aggregation and playback on mobile devices and personal computers (see video podcast). History In the 1980s, New York artist Nelson Sullivan documented his experiences travelling around New York City and South Carolina by recording videos in a distinctive vlog-like style. On January 2, 2000, Adam Kontras posted a video alongside a blog entry aimed at informing his friends and family of his cross-country move to Los Angeles in pursuit of show business, marking the first post on what would later become the longest-running video blog in history. In November of that year, Adrian Miles posted a video of changing text on a still image, coining the term vog to refer to his video blog. Filmmaker and musician Luuk Bouwman started in 2002 the now-defunct Tropisms.org site as a video diary of his post-college travels, one of the first sites to be called a vlog or videolog. In 2004, Steve Garfield launched his own video blog and declared that year "the year of the video blog". YouTube Vlogging saw a strong increase in popularity beginning in 2005. The most popular video sharing site, YouTube, was founded in February 2005. The site's co-founder Jawed Karim uploaded the first YouTube video
https://en.wikipedia.org/wiki/Wireless%20Multimedia%20Extensions
Wireless Multimedia Extensions (WME), also known as Wi-Fi Multimedia (WMM), is a Wi-Fi Alliance interoperability certification, based on the IEEE 802.11e standard. It provides basic Quality of service (QoS) features to IEEE 802.11 networks. WMM prioritizes traffic according to four Access Categories (AC): voice (AC_VO), video (AC_VI), best effort (AC_BE), and background (AC_BK). However, it does not provide guaranteed throughput. It is suitable for well-defined applications that require QoS, such as Voice over IP (VoIP) on Wi-Fi phones (VoWLAN). WMM replaces the Wi-Fi DCF distributed coordination function for CSMA/CA wireless frame transmission with Enhanced Distributed Coordination Function (EDCF). EDCF, according to version 1.1 of the WMM specifications by the Wi-Fi Alliance, defines Access Category labels AC_VO, AC_VI, AC_BE, and AC_BK for the Enhanced Distributed Channel Access (EDCA) parameters that are used by a WMM-enabled station to control how long it sets its Transmission Opportunity (TXOP), according to the information transmitted by the access point to the station. It is implemented for wireless QoS between RF media. Power Save Certification The Wi-Fi Alliance has added Power Save Certification to the WMM specification. Power Save uses mechanisms from 802.11e and legacy 802.11 to save power (for battery powered equipment) and fine-tune power consumption. The certification provides an indication that the certified product is targeted at power-critical applications like mobile/smart phones and portable devices, i.e. those that require a battery or recharging, such as smartphones. The underlying concept of WMM Power Save is that the station (STA) triggers the release of buffered data from the access point (AP) by sending an uplink data frame. Upon receipt of such a data (trigger) frame the AP releases previously buffered data stored in each of its queues. Queues may be configured to be trigger-enabled (i.e. a receipt of a data frame corresponding
https://en.wikipedia.org/wiki/Chomp
Chomp is a two-player strategy game played on a rectangular grid made up of smaller square cells, which can be thought of as the blocks of a chocolate bar. The players take it in turns to choose one block and "eat it" (remove from the board), together with those that are below it and to its right. The top left block is "poisoned" and the player who eats this loses. The chocolate-bar formulation of Chomp is due to David Gale, but an equivalent game expressed in terms of choosing divisors of a fixed integer was published earlier by Frederik Schuh. Chomp is a special case of a poset game where the partially ordered set on which the game is played is a product of total orders with the minimal element (poisonous block) removed. Example game The following shows the sequence of moves in a typical game starting with a 5 × 4 bar: Player A eats two blocks from the bottom right corner; Player B eats three from the bottom row; Player A picks the block to the right of the poisoned block and eats eleven blocks; Player B eats three blocks from the remaining column, leaving only the poisoned block. Player A must eat the last block and so loses. Note that since it is provable that player A can win when starting from a 5 × 4 bar, at least one of A's moves is a mistake. Positions of the game The intermediate positions in an m × n Chomp are integer-partitions (non-increasing sequences of positive integers) λ1 ≥ λ2 ≥ ··· ≥ λr, with λ1 ≤ n and r ≤ m. Their number is the binomial coefficient $\binom{m+n}{m}$, which grows exponentially with m and n. Winning the game Chomp belongs to the category of impartial two-player perfect information games, making it also analyzable by Nim because of the Sprague–Grundy theorem. For any rectangular starting position, other than 1×1, the first player can win (a brute-force check appears below). This can be shown using a strategy-stealing argument: assume that the second player has a winning strategy against any initial first-player move. Suppose then, that the first player takes only the bottom right hand block.
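The strategy-stealing argument is non-constructive, but small boards can be solved outright. A sketch using the partition representation described above, with the poisoned block at cell (0, 0):

```python
from functools import lru_cache

# Solve small Chomp positions. A position is a tuple of row lengths,
# non-increasing from the top row down.
@lru_cache(maxsize=None)
def first_player_wins(rows):
    if rows == (1,):
        return False            # only the poisoned block remains: must eat it
    for r in range(len(rows)):
        for c in range(rows[r]):
            if r == 0 and c == 0:
                continue        # eating the poison is never a useful move
            # Eat (r, c): every row at or below r is truncated to length c.
            new = tuple(min(rows[i], c) if i >= r else rows[i]
                        for i in range(len(rows)))
            new = tuple(x for x in new if x > 0)
            if not first_player_wins(new):
                return True     # found a move leaving the opponent a loss
    return False

for m, n in [(2, 2), (3, 4), (5, 4)]:
    print(m, "x", n, "first player wins:", first_player_wins((n,) * m))
# Every rectangle larger than 1 x 1 is a first-player win.
```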
https://en.wikipedia.org/wiki/Secure%20Remote%20Password%20protocol
The Secure Remote Password protocol (SRP) is an augmented password-authenticated key exchange (PAKE) protocol, specifically designed to work around existing patents. Like all PAKE protocols, an eavesdropper or man in the middle cannot obtain enough information to be able to brute-force guess a password or apply a dictionary attack without further interactions with the parties for each guess. Furthermore, being an augmented PAKE protocol, the server does not store password-equivalent data. This means that an attacker who steals the server data cannot masquerade as the client unless they first perform a brute force search for the password. In layman's terms, during SRP (or any other PAKE protocol) authentication, one party (the "client" or "user") demonstrates to another party (the "server") that they know the password, without sending the password itself nor any other information from which the password can be derived. The password never leaves the client and is unknown to the server. Furthermore, the server also needs to know about the password (but not the password itself) in order to instigate the secure connection. This means that the server also authenticates itself to the client which prevents phishing without reliance on the user parsing complex URLs. The only mathematically proven security property of SRP is that it is equivalent to Diffie-Hellman against a passive attacker. Newer PAKEs such as AuCPace and OPAQUE offer stronger guarantees. Overview The SRP protocol has a number of desirable properties: it allows a user to authenticate themselves to a server, it is resistant to dictionary attacks mounted by an eavesdropper, and it does not require a trusted third party. It effectively conveys a zero-knowledge password proof from the user to the server. In revision 6 of the protocol only one password can be guessed per connection attempt. One of the interesting properties of the protocol is that even if one or two of the cryptographic primitives it uses
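A toy sketch of just the registration step, to show concretely what "the server does not store password-equivalent data" means. The modulus, generator, and the derivation of x below are drastically simplified stand-ins, not the RFC 5054 group parameters or padding rules:

```python
import hashlib
import secrets

# Toy SRP registration: the server stores a salt and a verifier, never the
# password. N and g are tiny toy values (NOT secure); real deployments use
# large safe-prime groups.
N = 2267         # toy prime modulus
g = 2

def H(*parts):
    return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big")

username, password = b"alice", b"correct horse"
salt = secrets.token_bytes(16)
x = H(salt, username, password) % N        # private key derived from password
verifier = pow(g, x, N)                    # v = g^x mod N; the server stores
                                           # v instead of x or the password
print("server keeps:", salt.hex(), verifier)
# Recovering the password from v requires a discrete-log computation or a
# brute-force dictionary search -- the augmented-PAKE property noted above.
```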
https://en.wikipedia.org/wiki/Lang%20factor
The Lang Factor is an estimated ratio of the total cost of creating a process within a plant to the cost of all major technical components. It is widely used in industrial engineering to calculate the capital and operating costs of a plant. The factors were introduced by H. J. Lang and Dr Michael Bird in Chemical Engineering magazine in 1947 as a method for estimating the total installation cost for plants and equipment. Industries These factors are widely used in the refining and petrochemical industries to help estimate the cost of new facilities. A typical multiplier for a new unit within a refinery would be in the range of 5.0. When the purchase price of all the pumps, heat exchangers, pressure vessels, and other process equipment is multiplied by 5.0, a rough estimate of the total installed cost of the plant, including equipment, materials, construction, and engineering, is obtained (a worked example follows below). This estimating method is usually accurate to within ±35%. Guthrie factors The factors change over time because construction labor, bulk materials (concrete, pipe, etc.), engineering design, indirect costs, and major process equipment prices often do not change at the same rate. In the late 1960s and early 1970s Kenneth Guthrie further expanded on this concept, generating different factors for different types of process equipment (pumps, exchangers, vessels, etc.). These are sometimes referred to as "Guthrie factors". References Industrial engineering
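A worked toy example of the multiplier arithmetic (all equipment figures are invented):

```python
# Lang-factor estimate: total installed cost = factor x sum of major
# equipment purchase costs.
equipment_costs = {
    "pumps": 250_000,
    "heat exchangers": 400_000,
    "pressure vessels": 600_000,
}
lang_factor = 5.0                 # typical for a new refinery unit (see text)
total = lang_factor * sum(equipment_costs.values())
print(f"estimated installed cost: ${total:,.0f}")   # $6,250,000 (+/- 35%)
```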
https://en.wikipedia.org/wiki/Variable-width%20encoding
A variable-width encoding is a type of character encoding scheme in which codes of differing lengths are used to encode a character set (a repertoire of symbols) for representation, usually in a computer. Most common variable-width encodings are multibyte encodings, which use varying numbers of bytes (octets) to encode different characters. (Some authors, notably in Microsoft documentation, use the term multibyte character set, which is a misnomer, because representation size is an attribute of the encoding, not of the character set.) Early variable-width encodings using less than a byte per character were sometimes used to pack English text into fewer bytes in adventure games for early microcomputers. However, disks (which, unlike tapes, allowed random access so that text could be loaded on demand), increases in computer memory, and general-purpose compression algorithms have rendered such tricks largely obsolete. Multibyte encodings are usually the result of a need to increase the number of characters which can be encoded without breaking backward compatibility with an existing constraint. For example, with one byte (8 bits) per character, one can encode 256 possible characters; in order to encode more than 256 characters, the obvious choice would be to use two or more bytes per encoding unit. Two bytes (16 bits) would allow 65,536 possible characters, but such a change would break compatibility with existing systems and therefore might not be feasible at all. General structure Since the aim of a multibyte encoding system is to minimise changes to existing application software, some characters must retain their pre-existing single-unit codes, even while other characters have multiple units in their codes. The result is that there are three sorts of units in a variable-width encoding: singletons, which consist of a single unit, lead units, which come first in a multiunit sequence, and trail units, which come afterwards in a multiunit sequence (UTF-8's version of this split is sketched below). Input and display softw
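UTF-8 is a familiar instance of this three-way split, recognizable from each byte's high bits. A sketch (it ignores finer validity rules, e.g. that 0xC0 and 0xC1 never occur in well-formed UTF-8):

```python
# Classify the units (bytes) of UTF-8 as singletons, lead units, or trail
# units from their high bits.
def unit_class(byte):
    if byte < 0x80:
        return "singleton"        # 0xxxxxxx: a complete one-byte character
    if 0xC0 <= byte < 0xF8:
        return "lead"             # 110..../1110..../11110...: starts a sequence
    if 0x80 <= byte < 0xC0:
        return "trail"            # 10xxxxxx: continues a sequence
    return "invalid"

for ch in "aé€":
    encoded = ch.encode("utf-8")
    print(ch, [hex(b) for b in encoded], [unit_class(b) for b in encoded])
# a -> 1 singleton; é -> lead + trail; € -> lead + 2 trails
```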
https://en.wikipedia.org/wiki/Scale%20test%20car
A scale test car is a type of railroad car in maintenance-of-way service. Its purpose is to calibrate the weighing scales used to weigh loaded railroad cars. Scale test cars are of a precisely known weight so that the track scale can be calibrated against them.
Purposes
Cars are weighed for various purposes. These include:
Axle load limits
Cars are weighed to ensure they are within the axle load limits of the railroad.
Customer billing
Cars are weighed to determine the amount of cargo loaded (by subtracting the car's unloaded, or tare, weight from the total weight). This is used to bill the railroad's customers for the carriage of bulk commodities, so it is essential that the track scales be accurate.
Construction
Many scale test cars were small, old railroad cars carrying heavy metal weights as their superstructure. Scale test cars needed special handling so they would not suffer damage, which might alter their weight. They were reweighed periodically on accurate scales at the railroad's shops.
See also
Rail weighbridge
External links
NIST circa 1913 scale test car and its transporter rail car. One of its large calibration masses can be seen being hoisted onto the scale test car.
https://en.wikipedia.org/wiki/Mixing%20%28mathematics%29
In mathematics, mixing is an abstract concept originating from physics: the attempt to describe the irreversible thermodynamic process of mixing in the everyday world, e.g. mixing paint, mixing drinks, industrial mixing. The concept appears in ergodic theory—the study of stochastic processes and measure-preserving dynamical systems. Several different definitions for mixing exist, including strong mixing, weak mixing and topological mixing, with the last not requiring a measure to be defined. Some of the different definitions of mixing can be arranged in a hierarchical order; thus, strong mixing implies weak mixing. Furthermore, weak mixing (and thus also strong mixing) implies ergodicity: that is, every system that is weakly mixing is also ergodic (and so one says that mixing is a "stronger" condition than ergodicity).
Informal explanation
The mathematical definition of mixing aims to capture the ordinary every-day process of mixing, such as mixing paints, drinks, cooking ingredients, industrial process mixing, smoke in a smoke-filled room, and so on. To provide the mathematical rigor, such descriptions begin with the definition of a measure-preserving dynamical system, written as (X, 𝒜, μ, T).
The set X is understood to be the total space to be filled: the mixing bowl, the smoke-filled room, etc. The measure μ is understood to define the natural volume of the space X and of its subspaces. The collection of subspaces is denoted by 𝒜, and the size of any given subset A ⊆ X is μ(A); the size is its volume. Naively, one could imagine 𝒜 to be the power set of X; this doesn't quite work, as not all subsets of a space have a volume (famously, the Banach–Tarski paradox). Thus, conventionally, 𝒜 consists of the measurable subsets—the subsets that do have a volume. It is always taken to be the Borel σ-algebra—the collection of subsets that can be constructed by taking intersections, unions and set complements of open sets; these can always be taken to be measurable.
The time evolution of the system is described by a map or transformation T : X → X that preserves the measure, i.e. μ(T⁻¹(A)) = μ(A) for every measurable set A.
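The excerpt breaks off before the formal definitions are stated; for reference, the standard ones, in the notation of the system (X, 𝒜, μ, T) above, are the following. The system is strong mixing if

\lim_{n \to \infty} \mu\bigl(T^{-n}A \cap B\bigr) = \mu(A)\,\mu(B) \qquad \text{for all } A, B \in \mathcal{A},

and weak mixing if

\lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} \Bigl| \mu\bigl(T^{-k}A \cap B\bigr) - \mu(A)\,\mu(B) \Bigr| = 0.

Informally, strong mixing says that under repeated application of T any region B becomes statistically independent of any fixed region A: the smoke eventually fills every part of the room in proportion to its volume.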
https://en.wikipedia.org/wiki/TV-B-Gone
TV-B-Gone is a universal remote control device for turning off various brands of television sets. Released in 2004, its inventor referred to it as "an environmental management device". Although it can require up to 72 seconds for the device to find the proper code for a particular television receiver, the most popular televisions turn off in the first few seconds.
History
TV-B-Gone was invented by Mitch Altman and is sold by his company Cornfield Electronics. The standard model TV-B-Gone consists of an infra-red LED, two CR2032 cells and an integrated circuit containing the television power code database, all in a plastic case. The original case aesthetics and design were created by Robert Ellis.
Models
TV-B-Gone Pro SHP
The TV-B-Gone Pro SHP (Super High Power) is the latest TV-B-Gone to be announced. It is considerably more powerful than the standard model, using eight infra-red LEDs to allow TVs to be turned off from distances of up to 100 meters (about 330 feet). TV-B-Gone Pro SHP is switchable between its North American and European databases of POWER codes. Later, in 2009, Mitch Altman produced a new version of the TV-B-Gone Pro SHP: instead of being disguised as an iPhone, the new and improved model looks like an iPod Nano and has roughly ten yards more range than the old one. The recent invention of >1 W 850 and 970 nm IREDs makes a miniature long-range version of the TV-B-Gone feasible.
TV-B-Gone Kit
At several hacker conventions Mitch Altman has run workshops that allow participants to build their own TV-B-Gones using Adafruit Industries' microcontroller-based mini-POV kit. Around January 2008, Adafruit Industries released a kit to build an open-source TV-B-Gone.
Consumer Electronics Show controversy
During the 2008 Consumer Electronics Show, an individual associated with Gizmodo brought a TV-B-Gone remote control and shut off many display monitors at booths and during demos, affecting several companies. These actions caused the individual to be banned for life from the show.
https://en.wikipedia.org/wiki/P%20system
For the computer p-System, see UCSD p-System.
A P system is a computational model in the field of computer science that performs calculations using a biologically inspired process. P systems are based upon the structure of biological cells, abstracting from the way in which chemicals interact and cross cell membranes. The concept was first introduced in a 1998 report by the computer scientist Gheorghe Păun, whose last name is the origin of the letter P in 'P systems'. Variations on the P system model led to the formation of a branch of research known as 'membrane computing'.
Although inspired by biology, P systems are studied primarily for their use as a computational model rather than for biological modeling, although the latter is also being investigated.
Informal description
A P system is defined as a series of membranes containing chemicals (in finite quantities), catalysts and rules which determine possible ways in which chemicals may react with one another to form products. Rules may also cause chemicals to pass through membranes or even cause membranes to dissolve.
Just as in a biological cell, where a chemical reaction may only take place upon the chance event that the required chemical molecules collide and interact (possibly also with a catalyst), the rules in a P system are applied at random. This causes the computation to proceed in a non-deterministic manner, often resulting in multiple solutions being encountered if the computation is repeated.
A P system continues until it reaches a state where no further reactions are possible. At this point the result of the computation is all those chemicals that have been passed outside of the outermost membrane, or otherwise those passed into a designated 'result' membrane.
Components of a P system
Although many varieties of P system exist, most share the same basic components. Each element has a specific role to play, and each has a founding in the biological cell architecture upon which P systems are based.
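A deliberately simplified sketch in C of this run-until-no-rule-applies behaviour, using a single membrane, two hypothetical rules (a → bb and bb → c), and one randomly chosen rule application per step; a real P system applies rules in a maximally parallel way across nested membranes:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    /* Multiset of chemicals inside one membrane: start with 3 copies of 'a'. */
    int count[3] = { 3, 0, 0 };          /* count[0]=a, count[1]=b, count[2]=c */
    srand((unsigned)time(NULL));
    for (;;) {
        int can_r1 = count[0] >= 1;      /* rule 1 (a -> bb) applicable?  */
        int can_r2 = count[1] >= 2;      /* rule 2 (bb -> c) applicable?  */
        if (!can_r1 && !can_r2) break;   /* halt: no further reactions    */
        /* choose one applicable rule at random (non-determinism) */
        int pick = (can_r1 && can_r2) ? rand() % 2 : (can_r1 ? 0 : 1);
        if (pick == 0) { count[0] -= 1; count[1] += 2; }  /* a  -> bb */
        else           { count[1] -= 2; count[2] += 1; }  /* bb -> c  */
    }
    printf("halted with a=%d b=%d c=%d\n", count[0], count[1], count[2]);
    return 0;
}

With these particular rules every run halts with three copies of c, but the order in which the rules fire differs from run to run, illustrating the non-deterministic evolution described above.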
https://en.wikipedia.org/wiki/Finite%20thickness
In formal language theory, in particular in algorithmic learning theory, a class C of languages has finite thickness if every string is contained in at most finitely many languages in C. This condition was introduced by Dana Angluin as a sufficient condition for C being identifiable in the limit.
The related notion of M-finite thickness
Given a language L and an indexed class C = { L1, L2, L3, ... } of languages, a member language Lj ∈ C is called a minimal concept of L within C if L ⊆ Lj, but not L ⊆ Li ⊊ Lj for any Li ∈ C. The class C is said to satisfy the MEF-condition if every finite subset D of a member language Li ∈ C has a minimal concept Lj ⊆ Li. Symmetrically, C is said to satisfy the MFF-condition if every nonempty finite set D has at most finitely many minimal concepts in C. Finally, C is said to have M-finite thickness if it satisfies both the MEF- and the MFF-condition.
Finite thickness implies M-finite thickness. However, there are classes that are of M-finite thickness but not of finite thickness (for example, any class of languages C = { L1, L2, L3, ... } such that L1 ⊆ L2 ⊆ L3 ⊆ ...).
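In symbols, the opening definition can be restated as follows (with Σ* denoting the set of all strings):

C \text{ has finite thickness} \iff \forall s \in \Sigma^{*}: \ \bigl|\{\, L \in C : s \in L \,\}\bigr| < \infty.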
https://en.wikipedia.org/wiki/Iron%28II%2CIII%29%20oxide
Iron(II,III) oxide, or black iron oxide, is the chemical compound with formula Fe3O4. It occurs in nature as the mineral magnetite. It is one of a number of iron oxides, the others being iron(II) oxide (FeO), which is rare, and iron(III) oxide (Fe2O3), which also occurs naturally as the mineral hematite. It contains both Fe2+ and Fe3+ ions and is sometimes formulated as FeO·Fe2O3. This iron oxide is encountered in the laboratory as a black powder. It exhibits permanent magnetism and is ferrimagnetic, but is sometimes incorrectly described as ferromagnetic. Its most extensive use is as a black pigment (see: Mars Black). For this purpose, it is synthesized rather than extracted from the naturally occurring mineral, as the particle size and shape can be varied by the method of production.
Preparation
Heated iron metal reacts with steam to form iron oxide and hydrogen gas:
3 Fe + 4 H2O → Fe3O4 + 4 H2
Under anaerobic conditions, ferrous hydroxide (Fe(OH)2) can be oxidized by water to form magnetite and molecular hydrogen. This process is described by the Schikorr reaction:
3 Fe(OH)2 → Fe3O4 + H2 + 2 H2O
(ferrous hydroxide → magnetite + hydrogen + water)
This works because crystalline magnetite (Fe3O4) is thermodynamically more stable than amorphous ferrous hydroxide (Fe(OH)2).
The Massart method of preparing magnetite as a ferrofluid is convenient in the laboratory: mix iron(II) chloride and iron(III) chloride in the presence of sodium hydroxide. A more efficient method of preparing magnetite without troublesome residues of sodium is to use ammonia to promote chemical co-precipitation from the iron chlorides: first mix solutions of 0.1 M FeCl3·6H2O and FeCl2·4H2O with vigorous stirring at about 2000 rpm. The molar ratio of FeCl3:FeCl2 should be about 2:1. Heat the mix to 70 °C, then raise the speed of stirring to about 7500 rpm and quickly add a solution of NH4OH (10 volume %). A dark precipitate of magnetite nanoparticles forms immediately.
https://en.wikipedia.org/wiki/XE8000
The XE8000 series is a low-power microcontroller family from XEMICS (now a business unit of Semtech). Advanced analog features are combined with a proprietary RISC CPU named CoolRISC on all XE8000 devices. The CPU has an 8-bit data bus and a 22-bit instruction bus. All instructions (including 8×8-bit multiplication) execute in one clock cycle.
The analog features include the ZoomingADC, a type of delta-sigma modulator for analog-to-digital conversion that can amplify and offset the input signal during acquisition. UART, timers, RAM, MTP-ROM ("flash" program memory), watchdog timer, analog-to-digital converter, digital-to-analog converter, RC and XTAL oscillators, interrupts, I/O, drivers for seven-segment displays and an RF interface are possible on-chip features.
XE8000 devices
For sensor interfacing:
XE88LC01A
XE88LC02 (with display drivers)
XE88LC05A (with DAC)
For RF interfacing:
XE88LC06A
XE88LC07A
Typical target applications
Sensor interface
4–20 mA current loop
Battery-supplied devices
RF interface
External links
http://www.xemics.com/
http://www.raisonance.com/ (Compiler)
http://www.phyton.com/ (Compiler, programmer and emulators)
https://en.wikipedia.org/wiki/Function%20prototype
In computer programming, a function prototype or function interface is a declaration of a function that specifies the function's name and type signature (arity, data types of parameters, and return type), but omits the function body. While a function definition specifies how the function does what it does (the "implementation"), a function prototype merely specifies its interface, i.e. what data types go in and come out of it.
The term "function prototype" is particularly used in the context of the programming languages C and C++, where placing forward declarations of functions in header files allows for splitting a program into translation units, i.e. into parts that a compiler can separately translate into object files, to be combined by a linker into an executable or a library. The function declaration precedes the function definition, giving details of name, return type, and storage class along with other relevant attributes. Function prototypes can be used when either:
defining an ExternalType
creating an Interface part
In a prototype, parameter names are optional (and in C/C++ have function prototype scope, meaning their scope ends at the end of the prototype); however, the type is necessary, along with all modifiers (e.g. whether it is a pointer or a reference parameter). In object-oriented programming, interfaces and abstract methods serve much the same purpose.
Example
Consider the following function prototype:
void Sum( int a, int b );
or
void Sum( int, int );
or
auto Sum( int, int ) -> void; // C++ only
Function prototypes include the function signature: the name of the function, the return type and the access specifier. In this case the name of the function is "Sum". The function signature defines the number of parameters and their types. The return type is "void", meaning that the function does not return a value. Note that the parameter names in the first example are optional.
Uses
In early versions of C, if a function was called without a prior declaration, the compiler implicitly assumed that it returned int and made no assumptions about its arguments.
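A minimal sketch of the header-file pattern described above; the file and function names here are illustrative only:

/* sum.h — the prototype: the interface that other translation units compile against */
#ifndef SUM_H
#define SUM_H
void Sum(int a, int b);
#endif

/* sum.c — the definition, in its own translation unit */
#include <stdio.h>
#include "sum.h"
void Sum(int a, int b) {
    printf("%d\n", a + b);
}

/* main.c — a caller that needs only the prototype, not the definition */
#include "sum.h"
int main(void) {
    Sum(2, 3);   /* the call is type-checked against the prototype */
    return 0;
}

Compiling the two .c files separately and linking them (e.g. cc sum.c main.c) works because main.c needed only the prototype to type-check the call; the linker later resolves it to the definition in sum.c's object file.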
https://en.wikipedia.org/wiki/I%C2%B2S
I²S (Inter-IC Sound, pronounced "eye-squared-ess") is an electrical serial bus interface standard used for connecting digital audio devices together. It is used to communicate PCM audio data between integrated circuits in an electronic device. The I²S bus separates clock and serial data signals, resulting in simpler receivers than those required for asynchronous communications systems that need to recover the clock from the data stream. Alternatively, I²S is spelled I2S (pronounced "eye-two-ess") or IIS (pronounced "eye-eye-ess"). Despite the similar name, I²S is unrelated to the bidirectional I²C (IIC) bus.
History
The standard was introduced in 1986 by Philips Semiconductors (now NXP Semiconductors) and was first revised on June 5, 1996. The standard was last revised on February 17, 2022, updating the terms master and slave to controller and target.
Details
The I²S protocol outlines one specific type of PCM digital audio communication with defined parameters outlined in the Philips specification. The bus consists of at least three lines:
Bit clock line
Officially "continuous serial clock (SCK)"; typically written "bit clock (BCLK)".
Word clock line
Officially "word select (WS)"; typically called "left-right clock (LRCLK)" or "frame sync (FS)". 0 = left channel, 1 = right channel.
At least one multiplexed data line
Officially "serial data (SD)", but can be called SDATA, SDIN, SDOUT, DACDAT, ADCDAT, etc.
It may also include the following lines:
Master clock (typically 256 × LRCLK)
This is not part of the I²S standard, but is commonly included for synchronizing the internal operation of the analog/digital converters.
A multiplexed data line for upload
The bit clock pulses once for each discrete bit of data on the data lines. The bit clock frequency is the product of the sample rate, the number of bits per channel and the number of channels. So, for example, CD audio with a sample frequency of 44.1 kHz, 16 bits of precision and two channels (stereo) has a bit clock frequency of 44.1 kHz × 16 × 2 = 1.4112 MHz.
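As a check of that arithmetic, a trivial C computation using the CD-audio figures from the example above:

#include <stdio.h>

/* Bit clock frequency = sample rate x bits per channel x number of channels. */
int main(void) {
    const long sample_rate = 44100;  /* CD audio, Hz */
    const int  bit_depth   = 16;     /* bits per channel */
    const int  channels    = 2;      /* stereo */
    long bclk = sample_rate * bit_depth * channels;
    printf("BCLK = %ld Hz (%.4f MHz)\n", bclk, bclk / 1e6);  /* 1411200 Hz */
    return 0;
}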
https://en.wikipedia.org/wiki/Thiele/Small%20parameters
Thiele/Small parameters (commonly abbreviated T/S parameters, or TSP) are a set of electromechanical parameters that define the specified low-frequency performance of a loudspeaker driver. These parameters are published in specification sheets by driver manufacturers so that designers have a guide in selecting off-the-shelf drivers for loudspeaker designs. Using these parameters, a loudspeaker designer may simulate the position, velocity and acceleration of the diaphragm, the input impedance and the sound output of a system comprising a loudspeaker and enclosure. Many of the parameters are strictly defined only at the resonant frequency, but the approach is generally applicable in the frequency range where the diaphragm motion is largely pistonic, i.e., when the entire cone moves in and out as a unit without cone breakup.
Rather than purchase off-the-shelf components, loudspeaker design engineers often define desired performance and work backwards to a set of parameters, then manufacture a driver with those characteristics or order it from a driver manufacturer. This process of generating parameters from a target response is known as synthesis. Thiele/Small parameters are named after A. Neville Thiele of the Australian Broadcasting Commission, and Richard H. Small of the University of Sydney, who pioneered this line of analysis for loudspeakers. A common use of Thiele/Small parameters is in designing PA system and hi-fi speaker enclosures; the TSP calculations indicate to the speaker design professionals how large a speaker cabinet will need to be and how large and long the bass reflex port (if it is used) should be.
History
The 1925 paper of Chester W. Rice and Edward W. Kellogg, fueled by advances in radio and electronics, increased interest in direct radiator loudspeakers. In 1930, A. J. Thuras of Bell Labs patented (US Patent No. 1869178) his "Sound Translating Device" (essentially a vented box), which was evidence of the interest in many types of enclosure design.
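As one concrete example of such a parameter relation (not stated in the excerpt above, but a standard relation in driver datasheets; here Mms is the moving mass including the air load and Cms the suspension compliance), the driver's free-air resonant frequency is

f_s = \frac{1}{2\pi\sqrt{M_{ms}\,C_{ms}}},

so a heavier cone or a more compliant (looser) suspension lowers the resonance, which is why the parameters are specified together.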