https://en.wikipedia.org/wiki/Taxis
A taxis (plural: taxes) is the movement of an organism in response to a stimulus such as light or the presence of food. Taxes are innate behavioural responses. A taxis differs from a tropism (turning response, often growth towards or away from a stimulus) in that in the case of taxis, the organism has motility and demonstrates guided movement towards or away from the stimulus source. It is sometimes distinguished from a kinesis, a non-directional change in activity in response to a stimulus. Classification Taxes are classified based on the type of stimulus, and on whether the organism's response is to move towards or away from the stimulus. If the organism moves towards the stimulus the taxis is positive, while if it moves away the taxis is negative. For example, flagellate protozoans of the genus Euglena move towards a light source. This reaction or behavior is called positive phototaxis, since phototaxis refers to a response to light and the organism is moving towards the stimulus. Terminology derived from type of stimulus Many types of taxis have been identified, including: Aerotaxis (stimulation by oxygen) Anemotaxis (by wind) Barotaxis (by pressure) Chemotaxis or "gradient search" (by chemicals) Durotaxis (by stiffness) Electrotaxis or galvanotaxis (by electric current) Gravitaxis or geotaxis (by gravity) Hydrotaxis (by moisture) Magnetotaxis (by magnetic field) Phototaxis (by light) Rheotaxis (by fluid flow) Thermotaxis (by changes in temperature) Thigmotaxis (by physical contact) Depending on the type of sensory organs present, a taxis can be classified as a klinotaxis, where an organism continuously samples the environment to determine the direction of a stimulus; a tropotaxis, where bilateral sense organs are used to determine the stimulus direction; and a telotaxis, where a single organ suffices to establish the orientation of the stimulus. Terminology derived from taxis direction There are five types of taxes based on the movement of org
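The klinotaxis strategy described above, sequentially sampling the environment to find the stimulus direction, can be sketched as a toy one-dimensional simulation. Everything here (the function names, the stimulus field, the source position) is illustrative and not taken from the article:

```python
def klinotaxis_step(pos, stimulus, step=1.0):
    """One step of a crude klinotaxis strategy: sample the stimulus on
    either side of the current position and move toward the stronger reading."""
    return max((pos + step, pos - step), key=stimulus)

# Hypothetical stimulus: a light source at x = 10, intensity falling with distance.
intensity = lambda x: -abs(x - 10.0)

x = 0.0
for _ in range(20):
    x = klinotaxis_step(x, intensity)
# Positive phototaxis: the walker climbs the gradient and settles at the source.
```

This is deliberately deterministic; a kinesis, by contrast, would change activity level without using the sampled direction at all.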
https://en.wikipedia.org/wiki/Broadcast%20television%20systems
Broadcast television systems (or terrestrial television systems outside the US and Canada) are the encoding or formatting systems for the transmission and reception of terrestrial television signals. Analog television systems were standardized by the International Telecommunication Union (ITU) in 1961, with each system designated by a letter (A-N) in combination with the color standard used (NTSC, PAL or SECAM) - for example PAL-B, NTSC-M, etc. These analog systems for TV broadcasting dominated until the 2010s. With the introduction of digital terrestrial television (DTT), they were replaced by four main systems in use around the world: ATSC, DVB, ISDB and DTMB. Analog television systems Every analog television system bar one began as a black-and-white system. Each country, faced with local political, technical, and economic issues, adopted a color television standard which was grafted onto an existing monochrome system such as CCIR System M, using gaps in the video spectrum (explained below) to allow color transmission information to fit in the existing channels allotted. The grafting of the color transmission standards onto existing monochrome systems permitted existing monochrome television receivers predating the changeover to color television to continue to be operated as monochrome televisions. Because of this compatibility requirement, color standards added a second signal to the basic monochrome signal, which carries the color information. The color information is called chrominance, with the symbol C, while the black-and-white information is called the luminance, with the symbol Y. Monochrome television receivers only display the luminance, while color receivers process both signals. Though in theory any monochrome system could be adapted to a color system, in practice some of the original monochrome systems proved impractical to adapt to color and were abandoned when the switch to color broadcasting was made. All countries used one of three color standa
https://en.wikipedia.org/wiki/Visual%20cryptography
Visual cryptography is a cryptographic technique which allows visual information (pictures, text, etc.) to be encrypted in such a way that the decrypted information appears as a visual image. One of the best-known techniques has been credited to Moni Naor and Adi Shamir, who developed it in 1994. They demonstrated a visual secret sharing scheme, where an image was broken up into n shares so that only someone with all n shares could decrypt the image, while any n − 1 shares revealed no information about the original image. Each share was printed on a separate transparency, and decryption was performed by overlaying the shares. When all n shares were overlaid, the original image would appear. There are several generalizations of the basic scheme including k-out-of-n visual cryptography, and using opaque sheets but illuminating them by multiple sets of identical illumination patterns under the recording of only one single-pixel detector. Using a similar idea, transparencies can be used to implement a one-time pad encryption, where one transparency is a shared random pad, and another transparency acts as the ciphertext. Normally, there is an expansion of space requirement in visual cryptography. But if one of the two shares is structured recursively, the efficiency of visual cryptography can be increased to 100%. Some antecedents of visual cryptography are in patents from the 1960s. Other antecedents are in the work on perception and secure communication. Visual cryptography can be used to protect biometric templates in which decryption does not require any complex computations. Example In this example, the image has been split into two component images. Each component image has a pair of pixels for every pixel in the original image. These pixel pairs are shaded black or white according to the following rule: if the original image pixel was black, the pixel pairs in the component images must be complementary; randomly shade one ■□, and the other □■. When these comple
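The two-share pixel-pair rule from the example can be sketched directly; the function names are illustrative, but the scheme (same random pair for white, complementary pairs for black, OR-overlay for stacking) is the one the article describes:

```python
import random

def split_shares(image):
    """Split a binary image (1 = black, 0 = white) into two shares.

    Each pixel expands to a horizontal pair of subpixels per share:
    white -> both shares get the SAME random pair (overlay shows one black subpixel),
    black -> the shares get COMPLEMENTARY pairs (overlay is fully black).
    Either share alone is a uniformly random pattern, revealing nothing.
    """
    s1, s2 = [], []
    for row in image:
        r1, r2 = [], []
        for px in row:
            pair = random.choice([(1, 0), (0, 1)])
            r1.extend(pair)
            if px == 0:                              # white: identical pairs
                r2.extend(pair)
            else:                                    # black: complementary pairs
                r2.extend((1 - pair[0], 1 - pair[1]))
        s1.append(r1)
        s2.append(r2)
    return s1, s2

def overlay(s1, s2):
    """Stacking transparencies: a subpixel is black if it is black on either share."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

img = [[1, 0], [0, 1]]
a, b = split_shares(img)
stacked = overlay(a, b)
# Black source pixels come back as fully black pairs; white pixels are half black,
# which is the contrast loss (space expansion) the article mentions.
```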
https://en.wikipedia.org/wiki/Taguchi%20methods
Taguchi methods are statistical methods, sometimes called robust design methods, developed by Genichi Taguchi to improve the quality of manufactured goods, and more recently also applied to engineering, biotechnology, marketing and advertising. Professional statisticians have welcomed the goals and improvements brought about by Taguchi methods, particularly by Taguchi's development of designs for studying variation, but have criticized the inefficiency of some of Taguchi's proposals. Taguchi's work includes three principal contributions to statistics: a specific loss function; the philosophy of off-line quality control; and innovations in the design of experiments. Loss functions Loss functions in the statistical theory Traditionally, statistical methods have relied on mean-unbiased estimators of treatment effects: Under the conditions of the Gauss–Markov theorem, least squares estimators have minimum variance among all mean-unbiased linear estimators. The emphasis on comparisons of means also draws (limiting) comfort from the law of large numbers, according to which the sample means converge to the true mean. Fisher's textbook on the design of experiments emphasized comparisons of treatment means. However, loss functions were avoided by Ronald A. Fisher. Taguchi's use of loss functions Taguchi knew statistical theory mainly from the followers of Ronald A. Fisher, who also avoided loss functions. Reacting to Fisher's methods in the design of experiments, Taguchi interpreted Fisher's methods as being adapted for seeking to improve the mean outcome of a process. Indeed, Fisher's work had been largely motivated by programmes to compare agricultural yields under different treatments and blocks, and such experiments were done as part of a long-term programme to improve harvests. However, Taguchi realised that in much industrial production, there is a need to produce an outcome on target, for example, to machine a hole to a specified diameter, or to manufacture
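The "specific loss function" mentioned above is, in Taguchi's well-known formulation (not spelled out in this excerpt), quadratic in the deviation from target: cost grows continuously with distance from the nominal value rather than being zero inside specification limits and constant outside. A minimal sketch with illustrative numbers:

```python
def taguchi_loss(y, target, k=1.0):
    """Taguchi's quadratic loss L(y) = k * (y - target)**2.

    k is a cost-scaling constant; the values below are purely illustrative.
    On-target output incurs no loss; equal deviations in either direction
    incur equal loss, reflecting the "on target" goal described in the text.
    """
    return k * (y - target) ** 2

# Machining a hole to a 10 mm nominal diameter (hypothetical numbers):
loss_on_target = taguchi_loss(10.0, 10.0, k=50.0)
loss_oversize = taguchi_loss(10.25, 10.0, k=50.0)
```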
https://en.wikipedia.org/wiki/Computability
Computability is the ability to solve a problem in an effective manner. It is a key topic of the field of computability theory within mathematical logic and the theory of computation within computer science. The computability of a problem is closely linked to the existence of an algorithm to solve the problem. The most widely studied models of computability are the Turing-computable and μ-recursive functions, and the lambda calculus, all of which have computationally equivalent power. Other forms of computability are studied as well: computability notions weaker than Turing machines are studied in automata theory, while computability notions stronger than Turing machines are studied in the field of hypercomputation. Problems A central idea in computability is that of a (computational) problem, which is a task whose computability can be explored. There are two key types of problems: A decision problem fixes a set S, which may be a set of strings, natural numbers, or other objects taken from some larger set U. A particular instance of the problem is to decide, given an element u of U, whether u is in S. For example, let U be the set of natural numbers and S the set of prime numbers. The corresponding decision problem corresponds to primality testing. A function problem consists of a function f from a set U to a set V. An instance of the problem is to compute, given an element u in U, the corresponding element f(u) in V. For example, U and V may be the set of all finite binary strings, and f may take a string and return the string obtained by reversing the digits of the input (so f(0101) = 1010). Other types of problems include search problems and optimization problems. One goal of computability theory is to determine which problems, or classes of problems, can be solved in each model of computation. Formal models of computation A model of computation is a formal description of a particular type of computational process. The description often takes the form
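The two problem types above can be made concrete with the article's own examples, primality testing as a decision problem and binary-string reversal as a function problem (the trial-division tester is just one naive algorithm witnessing computability):

```python
def is_prime(n):
    """Decision problem: given u = n from U (the naturals), is n in S (the primes)?"""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def reverse_bits(s):
    """Function problem: f maps a finite binary string to the string of its
    digits reversed, so f(0101) = 1010 as in the text."""
    return s[::-1]
```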
https://en.wikipedia.org/wiki/Satellaview
The is a satellite modem peripheral produced by Nintendo for the Super Famicom in 1995. Containing 1 megabyte of ROM space and an additional 512 kB of RAM, Satellaview allowed players to download games, magazines, and other media through satellite broadcasts provided by Japanese company St.GIGA. Its heavy third-party support included Squaresoft, Taito, Konami, Capcom, and Seta. To use Satellaview, players purchased a special broadcast satellite (BS) tuner directly from St.GIGA or rented one for a six-month fee. It attaches to the expansion port on the bottom of the Super Famicom. Satellaview is the result of a collaboration between Nintendo and St.GIGA, the latter known in Japan for its "Tide of Sound" nature sound music. By 1994, St.GIGA was struggling financially due to the Japanese Recession affecting the demand for its music; Nintendo initiated a "rescue" plan by purchasing a stake in the company. Satellaview was produced by Nintendo Research & Development 2, the same team that designed the Super Famicom, and was made for a more adult-oriented market. By 1998, Nintendo's relationship with St.GIGA was beginning to collapse due to St.GIGA's refusal of a debt-management plan and failure to secure a government broadcasting license. Nintendo withdrew support for Satellaview in March 1999, with St.GIGA continuing to supply content until June 30, 2000, when it was fully discontinued. Consumer adoption of Satellaview was complicated by the rise of technologically superior fifth-generation consoles such as the Sega Saturn, PlayStation, and Nintendo 64, and by Satellaview's high cost, especially due to its exclusive availability via mail order and specific electronic store chains. However, St.GIGA reported more than 100,000 subscribers by March 1997. Retrospectively, Satellaview has been praised by critics for its technological accomplishments and its overall library quality, particularly of the Legend of Zelda series. In recent years, it has gained a strong cult follo
https://en.wikipedia.org/wiki/John%20Harsanyi
John Charles Harsanyi (; May 29, 1920 – August 9, 2000) was a Hungarian-American economist and the recipient of the Nobel Memorial Prize in Economic Sciences in 1994. He is best known for his contributions to the study of game theory and its application to economics, specifically for his developing the highly innovative analysis of games of incomplete information, so-called Bayesian games. He also made important contributions to the use of game theory and economic reasoning in political and moral philosophy (specifically utilitarian ethics) as well as contributing to the study of equilibrium selection. For his work, he was a co-recipient along with John Nash and Reinhard Selten of the 1994 Nobel Memorial Prize in Economic Sciences. He moved to the United States in 1956, and spent most of his life there. According to György Marx, he was one of The Martians. Early life Harsanyi was born on May 29, 1920, in Budapest, Hungary, the son of Alice Harsányi (née Gombos) and Károly Harsányi, a pharmacy owner. His parents converted from Judaism to Catholicism a year before he was born. He attended high school at the Lutheran Gymnasium in Budapest. In high school, he became one of the best problem solvers of the KöMaL, the Mathematical and Physical Monthly for Secondary Schools. Founded in 1893, this periodical is generally credited with a large share of Hungarian students' success in mathematics. He also won the first prize in the Eötvös mathematics competition for high school students. Although he wanted to study mathematics and philosophy, his father sent him to France in 1939 to enroll in chemical engineering at the University of Lyon. However, because of the start of World War II, Harsanyi returned to Hungary to study pharmacology at the University of Budapest (today: Eötvös Loránd University), earning a diploma in 1944. As a pharmacology student, Harsanyi escaped conscription into the Royal Hungarian Army which, as a person of Jewish descent, would have meant forced l
https://en.wikipedia.org/wiki/List%20of%20prime%20numbers
This is a list of articles about prime numbers. A prime number (or prime) is a natural number greater than 1 that has no positive divisors other than 1 and itself. By Euclid's theorem, there are an infinite number of prime numbers. Subsets of the prime numbers may be generated with various formulas for primes. The first 1000 primes are listed below, followed by lists of notable types of prime numbers in alphabetical order, giving their respective first terms. 1 is neither prime nor composite. The first 1000 prime numbers The following table lists the first 1000 primes, with 20 columns of consecutive primes in each of the 50 rows. The Goldbach conjecture verification project reports that it has computed all primes below 4×10^18. That means 95,676,260,903,887,607 primes (nearly 10^17), but they were not stored. There are known formulae to evaluate the prime-counting function (the number of primes below a given value) faster than computing the primes. This has been used to compute that there are 1,925,320,391,606,803,968,923 primes (roughly 2^70.7) below 10^23. A different computation found that there are 18,435,599,767,349,200,867,866 primes (roughly 2^73.9) below 10^24, if the Riemann hypothesis is true. Lists of primes by type Below are listed the first prime numbers of many named forms and types. More details are in the article for the name. n is a natural number (including 0) in the definitions. Balanced primes Primes with equal-sized prime gaps above and below them, so that they are equal to the arithmetic mean of the nearest primes above and below. 5, 53, 157, 173, 211, 257, 263, 373, 563, 593, 607, 653, 733, 947, 977, 1103, 1123, 1187, 1223, 1367, 1511, 1747, 1753, 1907, 2287, 2417, 2677, 2903, 2963, 3307, 3313, 3637, 3733, 4013, 4409, 4457, 4597, 4657, 4691, 4993, 5107, 5113, 5303, 5387, 5393 (). Bell primes Primes that are the number of partitions of a set with n members. 2, 5, 877, 27644437, 35742549198872617291353508656626642567, 3593340859686228310419601885980
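The balanced-primes definition (a prime equal to the arithmetic mean of its nearest prime neighbours) is easy to verify with a small sieve; this sketch recovers the first terms of the list above:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_up_to(1000)
# A balanced prime has equal-sized gaps to its prime neighbours,
# i.e. it equals their arithmetic mean: 2*p[i] == p[i-1] + p[i+1].
balanced = [ps[i] for i in range(1, len(ps) - 1)
            if 2 * ps[i] == ps[i - 1] + ps[i + 1]]
# First few terms: 5, 53, 157, 173, 211, ...
```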
https://en.wikipedia.org/wiki/Epigeal
Epigeal, epigean, epigeic and epigeous are biological terms describing an organism's activity above the soil surface. In botany, a seed is described as showing epigeal germination when the cotyledons of the germinating seed expand, throw off the seed shell and become photosynthetic above the ground. The opposite kind, where the cotyledons remain non-photosynthetic, inside the seed shell, and below ground, is hypogeal germination. The terms epigean, epigeic or epigeous are used for organisms that crawl (epigean), creep like a vine (epigeal), or grow (epigeous) on the soil surface; they are also used more generally for animals that neither burrow nor swim nor fly. The opposite terms are hypogean, hypogeic and hypogeous. "Epigeal nest" is the term for a termite mound, the above-ground nest of a colony of termites. See also List of plant morphology terms References Ecology Plant reproduction Plant morphology
https://en.wikipedia.org/wiki/1000%20%28number%29
1000 or one thousand is the natural number following 999 and preceding 1001. In most English-speaking countries, it can be written with or without a comma or sometimes a period separating the thousands digit: 1,000. A group of one thousand things is sometimes known, from Ancient Greek, as a chiliad. A period of one thousand years may be known as a chiliad or, more often from Latin, as a millennium. The number 1000 is also sometimes described as a short thousand in medieval contexts where it is necessary to distinguish the Germanic concept of 1200 as a long thousand. Notation The decimal representation for one thousand is 1000—a one followed by three zeros, in the general notation; 1 × 10^3—in engineering notation, which for this number coincides with: 1 × 10^3 exactly—in scientific normalized exponential notation; 1 E+3 exactly—in scientific E notation. The SI prefix for a thousand units is "kilo-", abbreviated to "k"—for instance, a kilogram or "kg" is a thousand grams. This is sometimes extended to non-SI contexts, such as "ka" (kiloannum) being used as a shorthand for periods of 1000 years. In computer science, however, "kilo-" is used more loosely to mean 2 to the 10th power (1024). In the SI writing style, a non-breaking space can be used as a thousands separator, i.e., to separate the digits of a number at every power of 1000. Multiples of thousands are occasionally represented by replacing their last three zeros with the letter "K" or "k": for instance, writing "$30k" for $30 000 or denoting the Y2K computer bug of the year 2000. A thousand units of currency, especially dollars or pounds, are colloquially called a grand. In the United States, this is sometimes abbreviated with a "G" suffix. Properties There are 168 prime numbers less than 1000. 1000 is the 10th icositetragonal number, or 24-gonal number. 1000 has a reduced totient value of 100, and totient of 400. It is equal to the sum of Euler's totient function over the first 57 integers, wi
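The notations listed above (thousands separator, scientific E notation, the loose binary "kilo") can be checked in one short snippet:

```python
n = 1000

# Comma as a thousands separator, as in "1,000".
with_separator = f"{n:,}"

# Scientific E notation: one thousand is 1 E+3 exactly.
e_notation = f"{n:.0e}"

# The loose computer-science "kilo-": 2 to the 10th power.
binary_kilo = 2 ** 10
```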
https://en.wikipedia.org/wiki/Cryptographic%20Message%20Syntax
The Cryptographic Message Syntax (CMS) is the IETF's standard for cryptographically protected messages. It can be used by cryptographic schemes and protocols to digitally sign, digest, authenticate or encrypt any form of digital data. CMS is based on the syntax of PKCS #7, which in turn is based on the Privacy-Enhanced Mail standard. The newest version of CMS () is specified in (but see also for updated ASN.1 modules conforming to ASN.1 2002). The architecture of CMS is built around certificate-based key management, such as the profile defined by the PKIX working group. CMS is used as the key cryptographic component of many other cryptographic standards, such as S/MIME, PKCS #12 and the digital timestamping protocol. OpenSSL is open source software that can encrypt, decrypt, sign and verify, compress and uncompress CMS documents, using the openssl-cms command. See also CAdES - CMS Advanced Electronic Signatures S/MIME PKCS #7 External links (Update to the Cryptographic Message Syntax (CMS) for Algorithm Identifier Protection) (Cryptographic Message Syntax (CMS), in use) (Cryptographic Message Syntax (CMS), obsolete) (Cryptographic Message Syntax (CMS), obsolete) (Cryptographic Message Syntax, obsolete) (New ASN.1 Modules for Cryptographic Message Syntax (CMS) and S/MIME, in use) (New ASN.1 Modules for Cryptographic Message Syntax (CMS) and S/MIME, updated) (Using Elliptic Curve Cryptography with CMS, in use) (Use of Elliptic Curve Cryptography (ECC) Algorithms in Cryptographic Message Syntax (CMS), obsolete) (Using AES-CCM and AES-GCM Authenticated Encryption in the Cryptographic Message Syntax (CMS), in use) Cryptographic protocols Internet Standards
https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20asset%20pricing
The fundamental theorems of asset pricing (also: of arbitrage, of finance), in both financial economics and mathematical finance, provide necessary and sufficient conditions for a market to be arbitrage-free, and for a market to be complete. An arbitrage opportunity is a way of making money with no initial investment without any possibility of loss. Though arbitrage opportunities do exist briefly in real life, it has been said that any sensible market model must avoid this type of profit. The first theorem is important in that it ensures a fundamental property of market models. Completeness is a common property of market models (for instance the Black–Scholes model). A complete market is one in which every contingent claim can be replicated. Though this property is common in models, it is not always considered desirable or realistic. Discrete markets In a discrete (i.e. finite state) market, the following hold: The First Fundamental Theorem of Asset Pricing: A discrete market on a discrete probability space is arbitrage-free if, and only if, there exists at least one risk neutral probability measure that is equivalent to the original probability measure, P. The Second Fundamental Theorem of Asset Pricing: An arbitrage-free market (S,B) consisting of a collection of stocks S and a risk-free bond B is complete if and only if there exists a unique risk-neutral measure that is equivalent to P and has numeraire B. In more general markets When stock price returns follow a single Brownian motion, there is a unique risk neutral measure. When the stock price process is assumed to follow a more general sigma-martingale or semimartingale, then the concept of arbitrage is too narrow, and a stronger concept such as no free lunch with vanishing risk (NFLVR) must be used to describe these opportunities in an infinite dimensional setting. In continuous time, a version of the fundamental theorems of asset pricing read: Let be a d-dimensional semimartingale market (a collectio
https://en.wikipedia.org/wiki/%C4%86uk%20converter
The Ćuk converter is a type of buck-boost converter with low ripple current. A Ćuk converter can be seen as a combination of boost converter and buck converter, having one switching device and a mutual capacitor, to couple the energy. Similar to the buck-boost converter with inverting topology, the output voltage of a non-isolated Ćuk converter is typically inverted, with lower or higher values with respect to the input voltage. Usually in DC converters, the inductor is used as a main energy-storage component. In the Ćuk converter, the main energy-storage component is the capacitor. It is named after Slobodan Ćuk of the California Institute of Technology, who first presented the design. Non-isolated Ćuk converter There are variations on the basic Ćuk converter. For example, the coils may share a single magnetic core, which drops the output ripple, and adds efficiency. Because the power transfer flows continuously via the capacitor, this type of switcher has minimized EMI radiation. The Ćuk converter allows energy to flow bidirectionally by using a diode and a switch. Operating principle A non-isolated Ćuk converter comprises two inductors, two capacitors, a switch (usually a transistor), and a diode. Its schematic can be seen in figure 1. It is an inverting converter, so the output voltage is negative with respect to the input voltage. The main advantage of this converter is the continuous currents at the input and output of the converter.  The main disadvantage is the high current stress on the switch. The capacitor C1 is used to transfer energy. It is connected alternately to the input and to the output of the converter via the commutation of the transistor and the diode (see figures 2 and 3). The two inductors L1 and L2 are used to convert respectively the input voltage source (Vs) and the output voltage source (Vo) into current sources. At a short time scale, an inductor can be considered as a current source as it maintains a constant current. This conversion
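The inverted, lower-or-higher output described above follows the standard ideal continuous-conduction result Vo = -Vs · D / (1 - D), where D is the switch duty cycle. That formula is not stated in this excerpt, so treat this as a sketch of the textbook ideal relation:

```python
def cuk_output_voltage(v_in, duty):
    """Ideal steady-state output of a non-isolated Ćuk converter.

    Assumes lossless components and continuous conduction. Like the inverting
    buck-boost, |Vo| is below Vin for D < 0.5 and above it for D > 0.5,
    and the sign is negative with respect to the input.
    """
    assert 0 < duty < 1, "duty cycle must be strictly between 0 and 1"
    return -v_in * duty / (1 - duty)

# Illustrative 12 V input at three duty cycles:
step_down = cuk_output_voltage(12.0, 0.25)   # magnitude below the input
unity = cuk_output_voltage(12.0, 0.5)        # magnitude equal to the input
step_up = cuk_output_voltage(12.0, 0.75)     # magnitude above the input
```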
https://en.wikipedia.org/wiki/Hardware%20abstraction
Hardware abstractions are sets of routines in software that provide programs with access to hardware resources through programming interfaces. The programming interface allows all devices in a particular class C of hardware devices to be accessed through identical interfaces even though C may contain different subclasses of devices that each provide a different hardware interface. Hardware abstractions often allow programmers to write device-independent, high performance applications by providing standard operating system (OS) calls to hardware. The process of abstracting pieces of hardware is often done from the perspective of a CPU. Each type of CPU has a specific instruction set architecture or ISA. The ISA represents the primitive operations of the machine that are available for use by assembly programmers and compiler writers. One of the main functions of a compiler is to allow a programmer to write an algorithm in a high-level language without having to care about CPU-specific instructions. Then it is the job of the compiler to generate a CPU-specific executable. The same type of abstraction is made in operating systems, but OS APIs now represent the primitive operations of the machine, rather than an ISA. This allows a programmer to use OS-level operations (e.g. task creation/deletion) in their programs while retaining portability over a variety of different platforms. Overview Many early computer systems did not have any form of hardware abstraction. This meant that anyone writing a program for such a system would have to know how each hardware device communicated with the rest of the system. This was a significant challenge to software developers since they then had to know how every hardware device in a system worked to ensure the software's compatibility. With hardware abstraction, rather than the program communicating directly with the hardware device, it communicates to the operating system what the device should do, which then generates a hard
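The idea of one programming interface serving a whole class C of devices, each with different underlying hardware, can be sketched with an abstract interface and one toy "driver" (all names here are hypothetical, not from any real OS):

```python
from abc import ABC, abstractmethod

class BlockDevice(ABC):
    """Abstract interface for a class of hardware devices: every subclass
    exposes identical calls even though the underlying hardware differs."""

    @abstractmethod
    def read_block(self, lba: int) -> bytes: ...

    @abstractmethod
    def write_block(self, lba: int, data: bytes) -> None: ...

class RamDisk(BlockDevice):
    """One hypothetical driver: backs blocks with an in-memory dict."""
    BLOCK_SIZE = 512

    def __init__(self):
        self._blocks = {}

    def read_block(self, lba):
        # Unwritten blocks read back as zeros, like a freshly formatted disk.
        return self._blocks.get(lba, bytes(self.BLOCK_SIZE))

    def write_block(self, lba, data):
        self._blocks[lba] = data

def copy_block(dev: BlockDevice, src: int, dst: int):
    """Device-independent code: works against any BlockDevice implementation,
    which is exactly the portability the abstraction layer buys."""
    dev.write_block(dst, dev.read_block(src))
```

A second subclass wrapping, say, real disk I/O could be dropped in without touching `copy_block`; that substitutability is the point of the abstraction.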
https://en.wikipedia.org/wiki/MS/8
MS/8 or The RL Monitor System is a discontinued computer operating system developed for the Digital Equipment Corporation PDP-8 in 1966 by Richard F. Lary. History RL Monitor System, as it was initially called, was developed on a 4K (12-bit) PDP-8 with "a Teletype that had a paper tape reader and punch and .. a single DECtape." It was a disk oriented system, faster than its predecessor, the PDP-8 4K Disk Monitor System, with tricks to make it run quickly on DECtape based systems. Still named RL, it was submitted to DECUS in 1970. MS/8 was replaced by P?S/8 and COS-310. See also PDP-8 Digital Equipment Corporation References External links "What is a PDP-8?". PDP-8 and DECmate documentation files. Free software operating systems 1966 software
https://en.wikipedia.org/wiki/Finite%20impulse%20response
In signal processing, a finite impulse response (FIR) filter is a filter whose impulse response (or response to any finite length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely (usually decaying). The impulse response (that is, the output in response to a Kronecker delta input) of an Nth-order discrete-time FIR filter lasts exactly N + 1 samples (from first nonzero element through last nonzero element) before it then settles to zero. FIR filters can be discrete-time or continuous-time, and digital or analog. Definition For a causal discrete-time FIR filter of order N, each value of the output sequence is a weighted sum of the most recent input values: y[n] = b_0 x[n] + b_1 x[n-1] + ... + b_N x[n-N] = Σ_{i=0}^{N} b_i x[n-i], where: x[n] is the input signal, y[n] is the output signal, N is the filter order; an Nth-order filter has N + 1 terms on the right-hand side, b_i is the value of the impulse response at the ith instant for 0 ≤ i ≤ N of an Nth-order FIR filter. If the filter is a direct form FIR filter then b_i is also a coefficient of the filter. This computation is also known as discrete convolution. The b_i in these terms are commonly referred to as taps, based on the structure of a tapped delay line that in many implementations or block diagrams provides the delayed inputs to the multiplication operations. One may speak of a 5th order/6-tap filter, for instance. The impulse response of the filter as defined is nonzero over a finite duration. Including zeros, the impulse response is the infinite sequence h[n] = b_n for 0 ≤ n ≤ N, and h[n] = 0 otherwise. If an FIR filter is non-causal, the range of nonzero values in its impulse response can start before n = 0, with the defining formula appropriately generalized. Properties An FIR filter has a number of useful properties which sometimes make it preferable to an infinite impulse response (IIR) filter. FIR filters: Require no feedback. This means that any rounding errors are not compounded by summed iterations.
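The direct-form weighted sum defining the filter can be sketched in a few lines; feeding it a Kronecker delta recovers the tap coefficients as the impulse response, exactly N + 1 nonzero samples:

```python
def fir_filter(b, x):
    """Direct-form FIR: y[n] = sum over i of b[i] * x[n - i], taking x[k] = 0
    for k < 0 (causal filter, zero initial conditions).

    len(b) = N + 1 taps for an Nth-order filter.
    """
    y = []
    for n in range(len(x)):
        acc = 0.0
        for i, bi in enumerate(b):
            if n - i >= 0:
                acc += bi * x[n - i]
        y.append(acc)
    return y

# 2nd-order (3-tap) moving-average filter, an illustrative choice of taps.
b = [1/3, 1/3, 1/3]
impulse = [1.0, 0.0, 0.0, 0.0, 0.0]   # Kronecker delta input
h = fir_filter(b, impulse)
# The impulse response lasts exactly N + 1 = 3 samples, then settles to zero.
```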
https://en.wikipedia.org/wiki/Key%20%28instrument%29
A key is a component of a musical instrument, the purpose and function of which depends on the instrument. However, the term is most often used in the context of keyboard instruments, in which case it refers to the exterior part of the instrument that the player physically interacts with in the process of sound production. On instruments equipped with tuning machines such as guitars or mandolins, a key is part of a tuning machine. It is a worm gear with a key-shaped end used to turn a cog, which, in turn, is attached to a post which winds the string. The key is used to make pitch adjustments to a string. With other instruments, zithers and drums, for example, a key is essentially a small wrench used to turn a tuning machine or lug. On woodwind instruments such as a flute or saxophone, keys are finger-operated levers used to open or close tone holes, the operation of which effectively shortens or lengthens the resonating tube of the instrument. By doing so, the player manipulates the range of resonant frequencies the instrument can produce, since each key configuration gives the tube a different "effective" length. The keys on the keyboard of a pipe organ also open and close various mechanical valves. However, rather than directly influencing the path the airflow takes within a single tube, the configuration of these valves instead determines which of the numerous separate organ pipes, each tuned to a specific note, the air stream flows through. The keys of an accordion direct the air flow from manually operated bellows across various tuned vibrating reeds. On other keyboard instruments, a key may be a lever which mechanically triggers a hammer to strike a group of strings, as on a piano, or an electric switch which energizes an audio oscillator as on an electronic organ or a synthesizer. References See also Musical keyboard Piano key frequencies Human–machine interaction Musical instrume
https://en.wikipedia.org/wiki/Covariant%20transformation
In physics, a covariant transformation is a rule that specifies how certain entities, such as vectors or tensors, change under a change of basis. The transformation that describes the new basis vectors as a linear combination of the old basis vectors is defined as a covariant transformation. Conventionally, indices identifying the basis vectors are placed as lower indices and so are all entities that transform in the same way. The inverse of a covariant transformation is a contravariant transformation. Whenever a vector should be invariant under a change of basis, that is to say it should represent the same geometrical or physical object having the same magnitude and direction as before, its components must transform according to the contravariant rule. Conventionally, indices identifying the components of a vector are placed as upper indices and so are all indices of entities that transform in the same way. The sum over pairwise matching indices of a product with the same lower and upper indices is invariant under a transformation. A vector itself is a geometrical quantity, in principle, independent (invariant) of the chosen basis. A vector v is given, say, in components vi on a chosen basis ei. On another basis, say e′j, the same vector v has different components v′j, and v = vi ei = v′j e′j (with summation over repeated indices implied). As a vector, v should be invariant to the chosen coordinate system and independent of any chosen basis, i.e. its "real world" direction and magnitude should appear the same regardless of the basis vectors. If we perform a change of basis by transforming the vectors ei into the basis vectors e′j, we must also ensure that the components vi transform into the new components v′j to compensate. The needed transformation of v is called the contravariant transformation rule. In the shown example, a vector is described by two different coordinate systems: a rectangular coordinate system (the black grid), and a radial coordinate system (the red grid). Basis vectors have been chosen for both coordina
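The compensating pair of transformations can be checked numerically. Below is a minimal sketch in Python using NumPy (the matrix A is an arbitrary invertible example, not from the article): the basis vectors transform covariantly with A, the components transform contravariantly with A's inverse, and the geometric vector they describe is unchanged.

```python
import numpy as np

# Columns of A express each new basis vector e'_j as a linear
# combination of the old basis vectors e_i (the covariant rule).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

e_old = np.eye(2)             # old basis vectors as columns
e_new = e_old @ A             # covariant transformation of the basis

v_old = np.array([4.0, 6.0])  # components of v in the old basis

# Components transform with the inverse matrix (contravariant rule),
# so the geometric vector itself stays the same.
v_new = np.linalg.solve(A, v_old)

# Same "real world" vector in both descriptions.
assert np.allclose(e_old @ v_old, e_new @ v_new)
```

The design point: applying A to the basis and A's inverse to the components makes the two effects cancel in the sum vi ei.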
https://en.wikipedia.org/wiki/Simple%20ring
In abstract algebra, a branch of mathematics, a simple ring is a non-zero ring that has no two-sided ideal besides the zero ideal and itself. In particular, a commutative ring is a simple ring if and only if it is a field. The center of a simple ring is necessarily a field. It follows that a simple ring is an associative algebra over this field. It is then called a simple algebra over this field. Several references (e.g., Lang (2002) or Bourbaki (2012)) require in addition that a simple ring be left or right Artinian (or equivalently semi-simple). Under such terminology a non-zero ring with no non-trivial two-sided ideals is called quasi-simple. Rings which are simple as rings but are not a simple module over themselves do exist: a full matrix ring over a field does not have any nontrivial two-sided ideals (since any ideal of Mn(R) is of the form Mn(I) with I an ideal of R), but it has nontrivial left ideals (for example, the sets of matrices which have some fixed zero columns). An immediate example of a simple ring is a division ring, where every nonzero element has a multiplicative inverse, for instance, the quaternions. Also, for any n ≥ 1, the algebra Mn(D) of n × n matrices with entries in a division ring D is simple. Joseph Wedderburn proved that if a ring R is a finite-dimensional simple algebra over a field k, it is isomorphic to a matrix algebra over some division algebra over k. In particular, the only simple rings that are finite-dimensional algebras over the real numbers are rings of matrices over either the real numbers, the complex numbers, or the quaternions. Wedderburn proved these results in 1907 in his doctoral thesis, On hypercomplex numbers, which appeared in the Proceedings of the London Mathematical Society. His thesis classified finite-dimensional simple and also semisimple algebras over fields. Simple algebras are building blocks of semisimple algebras: any finite-dimensional semisimple algebra is a Cartesian product, in the sense of algebras, of finite-dimensiona
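The claim that a matrix ring has nontrivial one-sided but no nontrivial two-sided ideals can be illustrated concretely. A sketch in Python with NumPy over the real numbers (the specific matrices are ad hoc examples): 2 × 2 matrices whose second column is zero form a left ideal of M2(R), but not a right ideal.

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_second_column(M):
    return np.allclose(M[:, 1], 0)

# L = set of 2x2 matrices with second column zero: a left ideal.
for _ in range(100):
    A = rng.standard_normal((2, 2))  # arbitrary ring element
    L = rng.standard_normal((2, 2))
    L[:, 1] = 0                      # element of the left ideal
    assert zero_second_column(A @ L) # closed under left multiplication

# But it is not a two-sided ideal: right multiplication can escape it.
L = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0], [0.0, 0.0]])
assert not zero_second_column(L @ B)
```

Left multiplication acts column-by-column ((AL)[:, 1] = A L[:, 1] = 0), which is exactly why the zero-column condition is preserved on one side only.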
https://en.wikipedia.org/wiki/Woodworking%20machine
A woodworking machine is a machine that is intended to process wood. These machines are usually powered by electric motors and are used extensively in woodworking. Sometimes grinding machines (used for grinding down to smaller pieces) are also considered a part of woodworking machinery. Types of woodworking machinery Artisanal and hobby machines These machines are used both in small-scale commercial production of timber products and by hobbyists. Most of these machines may be used on solid timber and on composite products. Machines can be divided into the bigger stationary machines, where the machine remains stationary while the material is moved over the machine, and hand-held power tools, where the tool is moved over the material. Hand-held power tools Biscuit joiner Domino jointer Chain saw Hand-held circular saw Electric drill Jig saw Miter saw Nail gun Hand-held electric plane Reciprocating saw Rotary tool Router Hand-held sanders, including belt sander, orbital sander, random orbit sander Stationary machines Bandsaw Combination machine Double side planer Four sided planer or timber sizer Drill press Drum sander Bench grinder Jointer Wood lathe Mortiser Panel saw Pin router or overhead router Radial arm saw Scroll saw Spindle moulder (wood shaper) Stationary sanders, including stroke sanders, oscillating spindle sander, belt sander, disc sander (and combination disc-belt sander). Table saw Tenoner or tenoning machine Thicknesser or thickness planer Round pole milling machine Round pole sanding machine Panel line woodworking machines These machines are used in large-scale manufacturing of cabinets and other wooden or panel products. Panel surface processing Panel dividing equipment Panel dividing equipment, classified by number of beams, loading system, saw carriage speed Double end tenoner Double end tenoner, classified by conveyor type Rolling chain system conveyor speed 40 to 120 m/min Sliding chain system conveyor s
https://en.wikipedia.org/wiki/Topological%20quantum%20field%20theory
In gauge theory and mathematical physics, a topological quantum field theory (or topological field theory or TQFT) is a quantum field theory which computes topological invariants. Although TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory and the theory of four-manifolds in algebraic topology, and to the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for mathematical work related to topological field theory. In condensed matter physics, topological quantum field theories are the low-energy effective theories of topologically ordered states, such as fractional quantum Hall states, string-net condensed states, and other strongly correlated quantum liquid states. Overview In a topological field theory, correlation functions do not depend on the metric of spacetime. This means that the theory is not sensitive to changes in the shape of spacetime; if spacetime warps or contracts, the correlation functions do not change. Consequently, they are topological invariants. Topological field theories are not very interesting on the flat Minkowski spacetime used in particle physics. Minkowski space can be contracted to a point, so a TQFT applied to Minkowski space results in trivial topological invariants. Consequently, TQFTs are usually applied to curved spacetimes, such as Riemann surfaces. Most of the known topological field theories are defined on spacetimes of dimension less than five. It seems that a few higher-dimensional theories exist, but they are not very well understood. Quantum gravity is believed to be background-independent (in some suitable sense), and TQFTs provide examples of background-independent quantum field theories. This has prompted ongoing theoretical investigations into this class of models. (Caveat: It is often said that TQFTs have only finitely many degrees of freedom. This is not a fundamental
https://en.wikipedia.org/wiki/List%20of%20numerical%20analysis%20topics
This is a list of numerical analysis topics. General Validated numerics Iterative method Rate of convergence — the speed at which a convergent sequence approaches its limit Order of accuracy — rate at which numerical solution of differential equation converges to exact solution Series acceleration — methods to accelerate the speed of convergence of a series Aitken's delta-squared process — most useful for linearly converging sequences Minimum polynomial extrapolation — for vector sequences Richardson extrapolation Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums Van Wijngaarden transformation — for accelerating the convergence of an alternating series Abramowitz and Stegun — book containing formulas and tables of many special functions Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun Curse of dimensionality Local convergence and global convergence — whether you need a good initial guess to get convergence Superconvergence Discretization Difference quotient Complexity: Computational complexity of mathematical operations Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs Symbolic-numeric computation — combination of symbolic and numeric methods Cultural and historical aspects: History of numerical solution of differential equations using computers Hundred-dollar, Hundred-digit Challenge problems — list of ten problems proposed by Nick Trefethen in 2002 International Workshops on Lattice QCD and Numerical Analysis Timeline of numerical analysis after 1945 General classes of methods: Collocation method — discretizes a continuous equation by requiring it only to hold at certain points Level-set method Level set (data structures) — data structures for representing level sets Sinc numerical methods — methods based on the sinc function, sinc(x) = sin(x) / x ABS methods Error Error analysis (mathematics) Approximat
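Several of the series-acceleration methods listed above are easy to sketch. Below is a hedged Python illustration of Aitken's delta-squared process applied to a linearly convergent fixed-point iteration (the iteration x -> cos(x) and the reference limit are illustrative choices, not taken from the list itself):

```python
import math

def aitken(seq):
    # Aitken's delta-squared process: from three consecutive terms
    # x0, x1, x2 build an accelerated estimate of the limit.
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2 * x1 + x0
        out.append(x2 - (x2 - x1) ** 2 / denom if denom else x2)
    return out

# The iteration x_{n+1} = cos(x_n) converges linearly to the
# fixed point of cos (the Dottie number, ~0.7390851).
xs = [1.0]
for _ in range(10):
    xs.append(math.cos(xs[-1]))

limit = 0.7390851332151607
plain_err = abs(xs[-1] - limit)
accel_err = abs(aitken(xs)[-1] - limit)
assert accel_err < plain_err  # accelerated sequence is much closer
```

Aitken's formula is exact for a purely geometric error term, which is why it is "most useful for linearly converging sequences", as noted above.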
https://en.wikipedia.org/wiki/Weierstrass%20M-test
In mathematics, the Weierstrass M-test is a test for determining whether an infinite series of functions converges uniformly and absolutely. It applies to series whose terms are bounded functions with real or complex values, and is analogous to the comparison test for determining the convergence of series of real or complex numbers. It is named after the German mathematician Karl Weierstrass (1815-1897). Statement Weierstrass M-test. Suppose that (fn) is a sequence of real- or complex-valued functions defined on a set A, and that there is a sequence of non-negative numbers (Mn) satisfying |fn(x)| ≤ Mn for all n ≥ 1 and all x ∈ A, and such that the series M1 + M2 + ... converges. Then the series f1(x) + f2(x) + ... converges absolutely and uniformly on A. The result is often used in combination with the uniform limit theorem. Together they say that if, in addition to the above conditions, the set A is a topological space and the functions fn are continuous on A, then the series converges to a continuous function. Proof Consider the sequence of partial sums Sn(x) = f1(x) + ... + fn(x). Since the series M1 + M2 + ... converges and Mn ≥ 0 for every n, then by the Cauchy criterion, for every ε > 0 there exists N such that Mn+1 + ... + Mm < ε for all m > n > N. For the chosen N and all m > n > N, |Sm(x) − Sn(x)| = |fn+1(x) + ... + fm(x)| ≤ |fn+1(x)| + ... + |fm(x)| ≤ Mn+1 + ... + Mm < ε. (The first inequality follows from the triangle inequality.) The sequence (Sn(x)) is thus a Cauchy sequence in R or C, and by completeness, it converges to some number S(x) that depends on x. For n > N we can write |S(x) − Sn(x)| = lim m→∞ |Sm(x) − Sn(x)| ≤ ε. Since N does not depend on x, this means that the sequence of partial sums (Sn) converges uniformly to the function S. Hence, by definition, the series f1(x) + f2(x) + ... converges uniformly. Analogously, one can prove that |f1(x)| + |f2(x)| + ... converges uniformly. Generalization A more general version of the Weierstrass M-test holds if the common codomain of the functions (fn) is a Banach space, in which case the premise |fn(x)| ≤ Mn is to be replaced by ||fn(x)|| ≤ Mn, where ||·|| is the norm on the Banach space. For an example of the use of this test on a Banach space, see the article Fréchet derivative. See also Example of Weierstrass M-test References Functional analysis Convergence tests Articles containing proofs
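As a numerical illustration (the series sin(nx)/n² and the cutoffs are example choices, not from the article): with Mn = 1/n², the tail of the convergent series sum Mn bounds the approximation error of the partial sums at every x simultaneously, which is exactly what uniform convergence buys.

```python
import math

# Example series: f_n(x) = sin(n*x) / n**2 with M_n = 1/n**2.
# |f_n(x)| <= M_n for every real x, and sum(M_n) converges,
# so by the M-test the series converges uniformly on R.
def partial_sum(x, n_terms):
    return sum(math.sin(n * x) / n**2 for n in range(1, n_terms + 1))

def tail_bound(n_from, n_to):
    # sum of M_n for n_from < n <= n_to
    return sum(1.0 / n**2 for n in range(n_from + 1, n_to + 1))

N, M = 50, 20000
bound = tail_bound(N, M)
# The same tail bound works at every x: that is the uniformity.
for x in [-3.0, -1.0, 0.0, 0.5, 2.0, 10.0]:
    assert abs(partial_sum(x, M) - partial_sum(x, N)) <= bound
```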
https://en.wikipedia.org/wiki/Oenology
Oenology (also enology; ) is the science and study of wine and winemaking. Oenology is distinct from viticulture, which is the science of the growing, cultivation, and harvesting of grapes. The English word oenology derives from the Greek word oinos (οἶνος) "wine" and the suffix –logia (-λογία) the "study of". An oenologist is an expert in the science of wine and of the arts and techniques for making wine. Education and training University programs in oenology and viticulture usually feature a concentration in science for the degree of Bachelor of Science (B.S., B.Sc., Sc.B.), and as a terminal master's degree — either in a scientific or in a research program for the degree of Master of Science (M.S., Sc.M.), e.g. the master of professional studies degree. Oenologists and viticulturists with doctorates often have a background in horticulture, plant physiology, and microbiology. Related to oenology are the professional titles of sommelier and master of wine, which are specific certifications in the restaurant business and in hospitality management. Occupationally, oenologists usually work as winemakers, as wine chemists in commercial laboratories, and in oenologic organisations, such as the Australian Wine Research Institute. Australia Schools in Australia tend to offer a "bachelor of viticulture" or "master of viticulture" degree. Charles Sturt University - Wagga Wagga, New South Wales Curtin University of Technology - Perth, Western Australia Melbourne Polytechnic/La Trobe University - Melbourne Australia Queensland College of Wine Tourism - Stanthorpe, Queensland University of Adelaide - Adelaide, South Australia Brazil Federal Institute of Rio Grande do Sul - Bento Gonçalves, Porto Alegre, Feliz, Sertão, Canoas, Porto Alegre-Restinga, Caxias do Sul, Osório, Erechim, and Rio Grande Federal University of Pampa - Dom Pedrito Campus, Rio Grande do Sul Canada Brock University - St. Catharines, Ontario France Official National Diploma of Oenology: Instit
https://en.wikipedia.org/wiki/Generative%20grammar
Generative grammar, or generativism, is a linguistic theory that regards linguistics as the study of a hypothesised innate grammatical structure. It is a biological or biologistic modification of earlier structuralist theories of linguistics, deriving from logical syntax and glossematics. Generative grammar considers grammar as a system of rules that generates exactly those combinations of words that form grammatical sentences in a given language. It is a system of explicit rules that may apply repeatedly to generate an indefinite number of sentences, which can be as long as one wants them to be. The difference from structural and functional models is that in generative grammar the object is base-generated within the verb phrase. This purportedly cognitive structure is thought of as being a part of a universal grammar, a syntactic structure which is caused by a genetic mutation in humans. Generativists have created numerous theories to make the NP VP (NP) analysis work in natural language description. That is, the subject and the verb phrase appear as independent constituents, and the object is placed within the verb phrase. A main point of interest remains how to appropriately analyse Wh-movement and other cases where the subject appears to separate the verb from the object. Although claimed by generativists to be a cognitively real structure, neuroscience has found no evidence for it. In other words, generative grammar encompasses proposed models of linguistic cognition, but there is still no specific indication that these are quite correct. Recent arguments have been made that the success of large language models undermines key claims of generative syntax, because they are based on markedly different assumptions, including gradient probability and memorized constructions, and outperform generative theories both in syntactic structure and in integration with cognition and neuroscience. Frameworks There are a number of different approaches to generative grammar.
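The idea of a grammar as explicit rewrite rules that apply repeatedly can be sketched in a few lines of Python. This toy grammar (the rules and vocabulary are invented for illustration, not a serious linguistic model) base-generates the object NP inside the VP, as in the NP VP (NP) analysis described above:

```python
import itertools

# Toy rewrite rules: S -> NP VP; the object NP sits inside the VP.
RULES = {
    "S":  [["NP", "VP"]],
    "VP": [["Vi"], ["Vt", "NP"]],  # intransitive, or transitive + object
    "NP": [["Alice"], ["Bob"]],
    "Vi": [["sleeps"]],
    "Vt": [["sees"]],
}

def generate(symbol):
    """Yield every terminal string derivable from symbol."""
    if symbol not in RULES:               # terminal word
        yield [symbol]
        return
    for expansion in RULES[symbol]:
        choices = [list(generate(s)) for s in expansion]
        for parts in itertools.product(*choices):
            yield [word for part in parts for word in part]

sentences = sorted(" ".join(words) for words in generate("S"))
# 2 subjects x (1 intransitive + 2 transitive objects) = 6 sentences.
assert "Alice sees Bob" in sentences and len(sentences) == 6
```

A finite rule set generating a combinatorial set of sentences is the core "generative" idea; recursive rules (omitted here) would make the set infinite.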
https://en.wikipedia.org/wiki/Vitruvian%20Man
The Vitruvian Man (; ) is a drawing by the Italian Renaissance artist and scientist Leonardo da Vinci, dated to . Inspired by the writings of the ancient Roman architect Vitruvius, the drawing depicts a nude man in two superimposed positions with his arms and legs apart and inscribed in both a circle and square. It was described by the art historian Carmen C. Bambach as "justly ranked among the all-time iconic images of Western civilization". Although not the only known drawing of a man inspired by the writings of Vitruvius, the work is a unique synthesis of artistic and scientific ideals and often considered an archetypal representation of the High Renaissance. The drawing represents Leonardo's conception of ideal body proportions, originally derived from Vitruvius but influenced by his own measurements, the drawings of his contemporaries, and the De pictura treatise by Leon Battista Alberti. Leonardo produced the Vitruvian Man in Milan and the work was probably passed to his student Francesco Melzi. It later came into the possession of Venanzio de Pagave, who convinced the engraver Carlo Giuseppe Gerli to include it in a book of Leonardo's drawings, which widely disseminated the previously little-known image. It was later owned by Giuseppe Bossi, who wrote early scholarship on it, and eventually sold to the Gallerie dell'Accademia of Venice in 1822, where it has remained since. Due to its sensitivity to light, the drawing rarely goes on public display, but it was borrowed by the Louvre in 2019 for their exhibition marking the 500th anniversary of Leonardo's death. Name The drawing is described by Leonardo's notes as , variously translated as The Proportions of the Human Figure after Vitruvius, or Proportional Study of a Man in the Manner of Vitruvius. It is much better known as the Vitruvian Man. The art historian Carlo Pedretti lists it as Homo Vitruvius, study of proportions with the human figure inscribed in a circle and a square, and later as simply Homo Vit
https://en.wikipedia.org/wiki/300%20%28number%29
300 (three hundred) is the natural number following 299 and preceding 301. Mathematical properties The number 300 is a triangular number and the sum of a pair of twin primes (149 + 151), as well as the sum of ten consecutive primes (13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47). It is palindromic in 3 consecutive bases: 300 in base 10 is 606 in base 7, 454 in base 8, and 363 in base 9; it is also palindromic in base 13. Its factorization is 300 = 2^2 × 3 × 5^2. 300^64 + 1 is prime. Integers from 301 to 399 300s 301 301 = 7 × 43. 301 is the sum of three consecutive primes (97 + 101 + 103), a happy number in base 10, and a lazy caterer number. 302 302 = 2 × 151. 302 is a nontotient, a happy number, and the number of partitions of 40 into prime parts. 303 303 = 3 × 101. 303 is a palindromic semiprime. The number of compositions of 10 which cannot be viewed as stacks is 303. 304 305 305 = 5 × 61. 305 is the convolution of the first 7 primes with themselves. 306 306 = 2 × 3^2 × 17. 306 is the sum of four consecutive primes (71 + 73 + 79 + 83), a pronic number, and an untouchable number. 307 307 is a prime number, a Chen prime, and the number of one-sided octiamonds. 308 308 = 2^2 × 7 × 11. 308 is a nontotient, the totient sum of the first 31 integers, a heptagonal pyramidal number, and the sum of two consecutive primes (151 + 157). 309 309 = 3 × 103, a Blum integer, and the number of primes <= 2^11. 310s 310 311 312 312 = 2^3 × 3 × 13, an idoneal number. 313 314 314 = 2 × 157. 314 is a nontotient and the smallest composite number in the Somos-4 sequence. 315 315 = 3^2 × 5 × 7, a rencontres number and a highly composite odd number, having 12 divisors. 316 316 = 2^2 × 79. 316 is a centered triangular number and a centered heptagonal number. 317 317 is a prime number, an Eisenstein prime with no imaginary part, a Chen prime, and a strictly non-palindromic number. 317 is the exponent (and number of ones) in the fourth base-10 repunit prime. 318 319 319 = 11 × 29. 319 is the sum of three consecutive primes (103 + 107 + 109), a Smith number, cannot be represented as the sum of
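The palindromic-base claims are easy to verify mechanically. A short Python check (the helper names are ad hoc):

```python
def digits(n, base):
    # Digits of n in the given base, most significant first.
    out = []
    while n:
        out.append(n % base)
        n //= base
    return out[::-1]

def is_palindrome(n, base):
    d = digits(n, base)
    return d == d[::-1]

# 300 is palindromic in three consecutive bases (7, 8, 9) and in base 13.
assert digits(300, 7) == [6, 0, 6]
assert digits(300, 8) == [4, 5, 4]
assert digits(300, 9) == [3, 6, 3]
assert all(is_palindrome(300, b) for b in (7, 8, 9, 13))
```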
https://en.wikipedia.org/wiki/Plural%20quantification
In mathematics and logic, plural quantification is the theory that an individual variable x may take on plural, as well as singular, values. As well as substituting individual objects such as Alice, the number 1, or the tallest building in London for x, we may substitute both Alice and Bob, or all the numbers between 0 and 10, or all the buildings in London over 20 stories. The point of the theory is to give first-order logic the power of set theory, but without any "existential commitment" to such objects as sets. The classic expositions are Boolos 1984 and Lewis 1991. History The view is commonly associated with George Boolos, though it is older (see notably Simons 1982), and is related to the view of classes defended by John Stuart Mill and other nominalist philosophers. Mill argued that universals or "classes" are not a peculiar kind of thing, having an objective existence distinct from the individual objects that fall under them, but that a class "is neither more nor less than the individual things in the class" (Mill 1904, II. ii. 2; also I. iv. 3). A similar position was also discussed by Bertrand Russell in chapter VI of Russell (1903), but later dropped in favour of a "no-classes" theory. See also Gottlob Frege 1895 for a critique of an earlier view defended by Ernst Schroeder. The general idea can be traced back to Leibniz (Levey 2011, pp. 129–133). Interest in plurals revived with work in linguistics in the 1970s by Remko Scha, Godehard Link, Fred Landman, Friederike Moltmann, Roger Schwarzschild, Peter Lasersohn and others, who developed ideas for a semantics of plurals. Background and motivation Multigrade (variably polyadic) predicates and relations Sentences like Alice and Bob cooperate. Alice, Bob and Carol cooperate. are said to involve a multigrade (also known as variably polyadic, also anadic) predicate or relation ("cooperate" in this example), meaning that they stand for the same concept even though they don't have a fixed arity (cf. Linnebo
https://en.wikipedia.org/wiki/StepMania
StepMania is a cross-platform rhythm video game and engine. It was originally developed as a clone of Konami's arcade game series Dance Dance Revolution, and has since evolved into an extensible rhythm game engine capable of supporting a variety of rhythm-based game types. Released under the MIT License, StepMania is open-source free software. Several video game series use StepMania as their game engine. These include In the Groove, Pump It Up Pro, Pump It Up Infinity, and StepManiaX. StepMania was included in a video game exhibition at New York's Museum of the Moving Image in 2005. Development StepMania was originally developed as an open-source clone of Konami's arcade game series Dance Dance Revolution (DDR). During the first three major versions, the interface was based heavily on DDR's. New versions were released relatively quickly at first, culminating in version 3.9 in 2005. In 2010, after almost 5 years of work without a stable release, StepMania creator Chris Danford forked a 2006 build of StepMania, paused development on the bleeding-edge branch, and labeled the new branch StepMania 4 beta. A separate development team called the Spinal Shark Collective forked the bleeding-edge branch and continued work on it, branding it sm-ssc. On 30 May 2011, sm-ssc gained official status and was renamed StepMania 5.0. Development of the next version, 5.1, stalled after a couple of betas were released on GitHub. Project OutFox (formerly known as StepMania 5.3, initially labeled FoxMania) is a currently closed-source fork of the 5.0 and 5.1 codebase. It was originally planned to be reintegrated into StepMania, but as development progressed it was decided it would become an independent project due to its larger scope of goals, while still sharing codebase improvements with future versions of StepMania. These improvements include modernizing the original codebase to improve performance and graphical fidelity, refurbishing aspects of the engine that h
https://en.wikipedia.org/wiki/IEEE%20802.7
IEEE 802.7 is a sub-standard of the IEEE 802 which covers broadband local area networks. The working group did issue a recommendation in 1989, but is currently inactive and in hibernation. IEEE 802.07 Working groups
https://en.wikipedia.org/wiki/Pitfall%21
Pitfall! is a video game developed by David Crane for the Atari Video Computer System (later renamed the Atari 2600) and released in 1982 by Activision. The player controls Pitfall Harry, who has a time limit of 20 minutes to seek treasure in a jungle. The game world is populated by enemies and hazards that variously cause the player to lose lives or points. Crane had made several games for both Atari, Inc. and Activision before working on Pitfall! in 1982. He started by creating a new realistic-style walking animation for a person on the Atari 2600 hardware. After completing it, he fashioned a game around it. He used a jungle setting with items to collect and enemies to avoid, and the result became Pitfall! Pitfall! received mostly positive reviews at the time of its release, praising both its gameplay and graphics. The game was influential in the platform game genre, and various publications have considered it one of the greatest video games of all time. It is also one of the best-selling Atari 2600 video games. The game was ported to several contemporary video game systems. It has been included in various Activision compilation games and was included as a secret extra in later Activision-published titles. Gameplay Pitfall! is a video game set in a jungle where the player controls Pitfall Harry, a fortune hunter and explorer. Pitfall! has been described as a platform game by Nick Montfort and Ian Bogost, authors of Racing the Beam. Similar to Superman (1979) and Adventure (1980), the game does not feature side-scrolling, but instead loads one screen at a time, with a new screen appearing when Harry moves to the edge. The goal is to get Harry as many points as possible within a twenty-minute time limit. The player starts the game with 2000 points and can collect a total of 32 treasures hidden among 255 different scenes to increase their total, ranging from a money bag worth 2000 points to a diamond ring worth 5000 points. Pitfall Harry moves left and right and can
https://en.wikipedia.org/wiki/Forests%20of%20Poland
Forests cover an estimated 38.5% of Poland's territory, and are mostly owned by the state. Forest cover is increasing: by 2035, Poland's forested share is projected to reach 42-46%. Western and northern parts of Poland, as well as the Carpathian Mountains in the extreme south, are much more forested than eastern and central provinces. The most forested administrative districts of the country are: Lubusz Voivodeship (60.2%), Subcarpathian Voivodeship (58.2%), and Pomeranian Voivodeship (50.1%). The least forested are: Łódź Voivodeship (36%), Masovian Voivodeship (34.6%), and Lublin Voivodeship (32.8%). Contemporary history At the end of the 18th century, forests covered around 40% of Poland. However, due to 19th-century economic exploitation during the partitions of Poland, as well as the Nazi German and Soviet occupations between 1939-1945, during which trees were shipped to battle fronts across Europe, deforestation and the slash-and-burn conditions of war shrank Polish forests to only 21% of the total area of the country (as of 1946). Furthermore, rich deciduous trees were replaced with fast-growing coniferous trees of lesser value meant for commerce, such as pine. After World War II, the government of Poland initiated the National Plan of Afforestation. By 1970, forests covered 29% of the country. As of 2009, 29.1% of Poland's territory was forested, amounting to 9,088,000 hectares. It is estimated that by 2050, the total area of forested land should increase to 33%. As much as 81.8% of the Polish forests are state-owned, the majority (77.8%) by the Polish State Forests (Lasy Państwowe), 2% constitute protected zones of Polish national parks, 2% are owned by other governmental entities (such as local self-government or the Agricultural Property Agency) and 18.2% belong to private owners. The high percentage of Polish forests owned by the state is the result of the nationalization of forests that occurred in the aftermath of World War II when Poland became a communist state (see People's Republic o
https://en.wikipedia.org/wiki/Reflection%20high-energy%20electron%20diffraction
Reflection high-energy electron diffraction (RHEED) is a technique used to characterize the surface of crystalline materials. RHEED systems gather information only from the surface layer of the sample, which distinguishes RHEED from other materials characterization methods that also rely on diffraction of high-energy electrons. Transmission electron microscopy, another common electron diffraction method, samples mainly the bulk of the sample due to the geometry of the system, although in special cases it can provide surface information. Low-energy electron diffraction (LEED) is also surface-sensitive, but LEED achieves surface sensitivity through the use of low-energy electrons. Introduction A RHEED system requires an electron source (gun), a photoluminescent detector screen and a sample with a clean surface, although modern RHEED systems have additional parts to optimize the technique. The electron gun generates a beam of electrons which strike the sample at a very small angle relative to the sample surface. Incident electrons diffract from atoms at the surface of the sample, and a small fraction of the diffracted electrons interfere constructively at specific angles and form regular patterns on the detector. The electrons interfere according to the position of atoms on the sample surface, so the diffraction pattern at the detector is a function of the sample surface. Figure 1 shows the most basic setup of a RHEED system. Surface diffraction In the RHEED setup, only atoms at the sample surface contribute to the RHEED pattern. The glancing angle of incident electrons allows them to escape the bulk of the sample and to reach the detector. Atoms at the sample surface diffract (scatter) the incident electrons due to the wavelike properties of electrons. The diffracted electrons interfere constructively at specific angles according to the crystal structure and spacing of the atoms at the sample surface and the wavelength of the incident electrons. Some of the electron
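The diffraction angles depend on the electron wavelength, which at RHEED energies is far smaller than atomic spacings. A hedged Python sketch computing the relativistic de Broglie wavelength (the 20 keV beam energy is a typical illustrative value, not taken from the article):

```python
import math

# Physical constants (SI units).
H = 6.62607015e-34          # Planck constant, J*s
ME = 9.1093837015e-31       # electron rest mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C
C = 2.99792458e8            # speed of light, m/s

def electron_wavelength_m(kinetic_energy_eV):
    """Relativistic de Broglie wavelength lambda = h / p."""
    E = kinetic_energy_eV * E_CHARGE
    # relativistic momentum: p = sqrt(2*m*E + (E/c)**2)
    p = math.sqrt(2 * ME * E + (E / C) ** 2)
    return H / p

lam = electron_wavelength_m(20e3)  # illustrative 20 keV beam
# lam is on the order of 0.01 nm, much smaller than typical
# interatomic spacings (~0.2-0.4 nm), hence the small glancing angles.
```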
https://en.wikipedia.org/wiki/Polylogarithmic%20function
In mathematics, a polylogarithmic function in n is a polynomial in the logarithm of n, such as a_k (log n)^k + ... + a_1 (log n) + a_0. The notation log^k n is often used as a shorthand for (log n)^k, analogous to sin^2 θ for (sin θ)^2. In computer science, polylogarithmic functions occur as the order of time or memory used by some algorithms (e.g., "it has polylogarithmic order"), such as in the definition of QPTAS (see PTAS). All polylogarithmic functions of n are o(n^ε) for every exponent ε > 0 (for the meaning of this symbol, see small o notation), that is, a polylogarithmic function grows more slowly than any positive exponent. This observation is the basis for the soft O notation Õ(n). References Mathematical analysis Polynomials Analysis of algorithms
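The o(n^ε) claim can be illustrated numerically. A small Python sketch (the exponent 0.1 and the sample points are arbitrary choices): the ratio (log n)^3 / n^0.1 grows for moderate n but eventually collapses toward zero.

```python
import math

# (log n)**3 is o(n**0.1): the ratio tends to 0 as n grows,
# even though it first increases for moderate n.
def ratio(n):
    return math.log(n) ** 3 / n ** 0.1

values = [ratio(10 ** k) for k in (10, 50, 100, 200)]
assert values[0] > values[1] > values[2] > values[3]
assert values[-1] < 1e-10  # essentially zero by n = 10**200
```

The crossover happening only at astronomically large n is a useful caution when reading asymptotic claims about polylogarithmic factors.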
https://en.wikipedia.org/wiki/Decision%20theory
Decision theory (or the theory of choice; not to be confused with choice theory) is a branch of applied probability theory and analytic philosophy concerned with the theory of making decisions based on assigning probabilities to various factors and assigning numerical consequences to the outcome. There are three branches of decision theory: Normative decision theory: Concerned with the identification of optimal decisions, where optimality is often determined by considering an ideal decision-maker who is able to calculate with perfect accuracy and is in some sense fully rational. Prescriptive decision theory: Concerned with describing observed behaviors through the use of conceptual models, under the assumption that those making the decisions are behaving under some consistent rules. Descriptive decision theory: Analyzes how individuals actually make the decisions that they do. Decision theory is a broad field from management sciences and is an interdisciplinary topic, studied by management scientists, medical researchers, mathematicians, data scientists, psychologists, biologists, social scientists, philosophers and computer scientists. Empirical applications of this theory are usually done with the help of statistical and discrete mathematical approaches from computer science. Normative and descriptive Normative decision theory is concerned with identification of optimal decisions where optimality is often determined by considering an ideal decision maker who is able to calculate with perfect accuracy and is in some sense fully rational. The practical application of this prescriptive approach (how people ought to make decisions) is called decision analysis and is aimed at finding tools, methodologies, and software (decision support systems) to help people make better decisions. In contrast, descriptive decision theory is concerned with describing observed behaviors often under the assumption that those making decisions are behaving under some consistent ru
https://en.wikipedia.org/wiki/Gene%20knockdown
Gene knockdown is an experimental technique by which the expression of one or more of an organism's genes is reduced. The reduction can occur either through genetic modification or by treatment with a reagent such as a short DNA or RNA oligonucleotide that has a sequence complementary to either a gene or an mRNA transcript. Versus transient knockdown If the DNA of an organism is genetically modified, the resulting organism is called a "knockdown organism." If the change in gene expression is caused by an oligonucleotide binding to an mRNA or temporarily binding to a gene, this leads to a temporary change in gene expression that does not modify the chromosomal DNA, and the result is referred to as a "transient knockdown". In a transient knockdown, the binding of this oligonucleotide to the active gene or its transcripts causes decreased expression through a variety of processes. Binding can occur either through the blocking of transcription (in the case of gene-binding), the degradation of the mRNA transcript (e.g. by small interfering RNA (siRNA) or RNase-H dependent antisense), or through the blocking of either mRNA translation, pre-mRNA splicing sites, or nuclease cleavage sites used for maturation of other functional RNAs, including miRNA (e.g. by morpholino oligos or other RNase-H independent antisense). The most direct use of transient knockdowns is for learning about a gene that has been sequenced but has an unknown or incompletely known function. This experimental approach is known as reverse genetics. Researchers draw inferences from how the knockdown organism differs from individuals in which the gene of interest is operational. Transient knockdowns are often used in developmental biology because oligos can be injected into single-celled zygotes and will be present in the daughter cells of the injected cell through embryonic development. The term gene knockdown first appeared in the literature in 1994. RNA interference RNA interference (RNAi) is a means of silenci
https://en.wikipedia.org/wiki/Modularity%20of%20mind
Modularity of mind is the notion that a mind may, at least in part, be composed of innate neural structures or mental modules which have distinct, established, and evolutionarily developed functions. However, different definitions of "module" have been proposed by different authors. According to Jerry Fodor, the author of Modularity of Mind, a system can be considered 'modular' if its functions are made of multiple dimensions or units to some degree. One example of modularity in the mind is binding. When one perceives an object, one takes in not only the individual features of the object but also the way those features are integrated, operating in sync or independently, into a whole. Instead of just seeing red, round, plastic, and moving, the subject may experience a rolling red ball. Binding may suggest that the mind is modular because it takes multiple cognitive processes to perceive one thing. Early investigations Historically, questions regarding the functional architecture of the mind have been divided into two different theories of the nature of the faculties. The first can be characterized as a horizontal view because it refers to mental processes as if they are interactions between faculties such as memory, imagination, judgement, and perception, which are not domain specific (e.g., a judgement remains a judgement whether it refers to a perceptual experience or to the conceptualization/comprehension process). The second can be characterized as a vertical view because it claims that the mental faculties are differentiated on the basis of domain specificity, are genetically determined, are associated with distinct neurological structures, and are computationally autonomous. The vertical vision goes back to the 19th-century movement called phrenology and its founder Franz Joseph Gall. Gall claimed that the individual mental faculties could be associated precisely, in a one-to-one correspondence, with specific physical areas of the brain. For example, someone's level of intell
https://en.wikipedia.org/wiki/Josephson%20effect
In physics, the Josephson effect is a phenomenon that occurs when two superconductors are placed in proximity, with some barrier or restriction between them. The effect is named after the British physicist Brian Josephson, who predicted in 1962 the mathematical relationships for the current and voltage across the weak link. It is an example of a macroscopic quantum phenomenon, where the effects of quantum mechanics are observable at ordinary, rather than atomic, scale. The Josephson effect has many practical applications because it exhibits a precise relationship between different physical measures, such as voltage and frequency, facilitating highly accurate measurements. The Josephson effect produces a current, known as a supercurrent, that flows continuously without any voltage applied, across a device known as a Josephson junction (JJ). These consist of two or more superconductors coupled by a weak link. The weak link can be a thin insulating barrier (known as a superconductor–insulator–superconductor junction, or S-I-S), a short section of non-superconducting metal (S-N-S), or a physical constriction that weakens the superconductivity at the point of contact (S-c-S). Josephson junctions have important applications in quantum-mechanical circuits, such as SQUIDs, superconducting qubits, and RSFQ digital electronics. The NIST standard for one volt is achieved by an array of 20,208 Josephson junctions in series. History The DC Josephson effect had been seen in experiments prior to 1962, but had been attributed to "super-shorts" or breaches in the insulating barrier leading to the direct conduction of electrons between the superconductors. In 1962 Brian Josephson became interested in superconducting tunneling. He was then a 23-year-old second-year graduate student of Brian Pippard at the Mond Laboratory of the University of Cambridge. That year, Josephson took a many-body theory course with Philip W. Anderson, a Bell Labs employee on sabbatical leave for
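The voltage-frequency relationship mentioned above is f = 2eV/h: a junction biased at one microvolt oscillates near 483.6 MHz, which is why junction arrays can define a voltage standard. A small sketch using the exact SI values of the constants (the function names are ours, not standard API):

```python
import math

# 2019 SI exact values
E = 1.602176634e-19  # elementary charge, coulombs
H = 6.62607015e-34   # Planck constant, joule-seconds

def josephson_frequency(voltage):
    """AC Josephson relation: oscillation frequency (Hz) at a DC bias (V)."""
    return 2 * E * voltage / H

def josephson_current(i_c, phase):
    """DC Josephson relation I = Ic*sin(phi) for phase difference phi."""
    return i_c * math.sin(phase)
```

For example, josephson_frequency(1e-6) gives about 4.836e8 Hz, i.e. the Josephson constant 2e/h is roughly 483.6 GHz per volt.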
https://en.wikipedia.org/wiki/360%20%28number%29
360 (three hundred sixty) is the natural number following 359 and preceding 361. In mathematics 360 is a highly composite number and one of only seven numbers such that no number less than twice as much has more divisors; the others are 1, 2, 6, 12, 60, and 2520. 360 is also a superior highly composite number, a colossally abundant number, a refactorable number, a 5-smooth number, and a Harshad number in decimal since the sum of its digits (9) is a divisor of 360. 360 is divisible by the number of its divisors (24), and it is the smallest number divisible by every natural number from 1 to 10, except for 7. Furthermore, one of the divisors of 360 is 72, which is the number of primes below it. 360 is the sum of twin primes (179 + 181) and the sum of four consecutive powers of 3 (9 + 27 + 81 + 243). The sum of Euler's totient function φ(x) over the first thirty-four integers is 360. 360 is a triangular matchstick number. A circle is divided into 360 degrees for angular measurement. An angle of 360° is also called a round angle. This unit choice divides round angles into equal sectors measured in integer rather than fractional degrees. Many angles commonly appearing in planimetrics have an integer number of degrees. For a simple non-self-intersecting quadrilateral, the sum of the internal angles always equals 360 degrees. Integers from 361 to 369 361 361 = 19², a centered triangular number, centered octagonal number, centered decagonal number, and member of the Mian–Chowla sequence; also the number of positions on a standard 19 × 19 Go board. 362 362 = 2 × 181 = σ₂(19): sum of squares of divisors of 19, Mertens function returns 0, nontotient, noncototient. 363 364 364 = 2² × 7 × 13, tetrahedral number, sum of twelve consecutive primes (11 + 13 + 17 + 19 + 23 + 29 + 31 + 37 + 41 + 43 + 47 + 53), Mertens function returns 0, nontotient. It is a repdigit in base 3 (111111), base 9 (444), base 25 (EE), base 27 (DD), base 51 (77) and base 90 (44), the sum of six conse
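Several of the divisor claims above are easy to verify directly; a quick brute-force check (naive trial division, which is fine at this size):

```python
# Brute-force verification of divisor facts about 360: it has 24 divisors,
# no smaller positive integer has more (highly composite), it is divisible
# by its own divisor count, and it is divisible by 1-10 except 7.

def num_divisors(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

divisors_of_360 = num_divisors(360)
record_below = max(num_divisors(m) for m in range(1, 360))
```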
https://en.wikipedia.org/wiki/Dedekind%20eta%20function
In mathematics, the Dedekind eta function, named after Richard Dedekind, is a modular form of weight 1/2 and is a function defined on the upper half-plane of complex numbers, where the imaginary part is positive. It also occurs in bosonic string theory. Definition For any complex number τ with Im(τ) > 0, let q = e^(2πiτ); then the eta function is defined by η(τ) = e^(πiτ/12) ∏_(n=1)^∞ (1 − q^n) = q^(1/24) ∏_(n=1)^∞ (1 − q^n). Raising the eta equation to the 24th power and multiplying by (2π)^12 gives Δ(τ) = (2π)^12 η(τ)^24, where Δ is the modular discriminant. The presence of 24 can be understood by connection with other occurrences, such as in the 24-dimensional Leech lattice. The eta function is holomorphic on the upper half-plane but cannot be continued analytically beyond it. The eta function satisfies the functional equations η(τ + 1) = e^(πi/12) η(τ) and η(−1/τ) = √(−iτ) η(τ). In the second equation the branch of the square root is chosen such that √(−iτ) = 1 when τ = i. More generally, suppose a, b, c, d are integers with ad − bc = 1, so that τ ↦ (aτ + b)/(cτ + d) is a transformation belonging to the modular group. We may assume that either c > 0, or c = 0 and d = 1. Then η((aτ + b)/(cτ + d)) = ε(a, b, c, d) (cτ + d)^(1/2) η(τ), where ε(a, b, c, d) = exp(πi((a + d)/(12c) − s(d, c) − 1/4)). Here s(d, c) is the Dedekind sum s(h, k) = Σ_(n=1)^(k−1) (n/k)(hn/k − ⌊hn/k⌋ − 1/2). Because of these functional equations the eta function is a modular form of weight 1/2 and level 1 for a certain character of order 24 of the metaplectic double cover of the modular group, and can be used to define other modular forms. In particular the modular discriminant of Weierstrass can be defined as Δ(τ) = (2π)^12 η(τ)^24 and is a modular form of weight 12. Some authors omit the factor of (2π)^12, so that the series expansion has integral coefficients. The Jacobi triple product implies that the eta is (up to a factor) a Jacobi theta function for special values of the arguments. Explicitly, η(τ) = Σ_(n=1)^∞ χ(n) e^(πin²τ/12), where χ is "the" Dirichlet character modulo 12 with χ(n) = 1 for n ≡ ±1 (mod 12) and χ(n) = −1 for n ≡ ±5 (mod 12). The Euler function ϕ(q) = ∏_(n=1)^∞ (1 − q^n) has a power series by the Euler identity: ϕ(q) = Σ_(n=−∞)^∞ (−1)^n q^((3n² − n)/2). Because the eta function is easy to compute numerically from either power series, it is often helpful in computation to express other functions in terms of it when possible, and products and quotients of eta functions, called eta quotients, can be used to express a great variety of modular forms. The pictur
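Since |q| = e^(−2π Im τ), the q-product converges extremely fast away from the real axis, so η is easy to evaluate numerically, e.g. to spot-check the functional equation η(−1/τ) = √(−iτ) η(τ) at τ = 2i. A sketch (50 product terms is far more than needed here; the function name is ours):

```python
import cmath

# Numerical Dedekind eta via its q-product:
#   eta(tau) = exp(pi*i*tau/12) * prod_{n>=1} (1 - q^n),  q = exp(2*pi*i*tau).
# Truncating the product is safe when Im(tau) is not tiny.

def dedekind_eta(tau, terms=50):
    q = cmath.exp(2j * cmath.pi * tau)
    prod = complex(1.0)
    for n in range(1, terms + 1):
        prod *= 1 - q ** n
    return cmath.exp(1j * cmath.pi * tau / 12) * prod
```

For example dedekind_eta(1j) is about 0.768225, matching the known closed form η(i) = Γ(1/4)/(2π^(3/4)).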
https://en.wikipedia.org/wiki/Chiclet%20keyboard
A chiclet keyboard is a computer keyboard whose keys form an array of small, flat, rectangular or lozenge-shaped rubber or plastic buttons that look like erasers or "Chiclets", a brand of chewing gum manufactured in the shape of small squares with rounded corners. It is an evolution of the membrane keyboard, using the same principle of a single rubber sheet with individual electrical switches underneath each key, but with an additional upper layer that provides superior tactile feedback through a buckling mechanism. The term "chiclet keyboard" is sometimes incorrectly used to refer to island keyboards. Since the mid-1980s, chiclet keyboards have been mainly restricted to lower-end electronics, such as small handheld calculators, cheap PDAs and many remote controls, though the name is also used to refer to scissor keyboards with superficially similar appearance. History The term first appeared during the home computer era of the late 1970s to mid-1980s. The TRS-80 Color Computer, TRS-80 MC-10, and Timex Sinclair 2068 were all described as having "chiclet keys". This style of keyboard has been met with a poor reception. John Dvorak wrote that it was "associated with $99 el cheapo computers". The keys on ZX Spectrum computers are "rubber dome keys" which were sometimes described as "dead flesh", while the feel of the IBM PCjr's chiclet keyboard was reportedly compared to "massaging fruit cake". Its quality was such that an amazed Tandy executive, whose company had previously released a computer with a similarly unpopular keyboard, asked "How could IBM have made that mistake with the PCjr?" Design Chiclet keyboards operate under essentially the same mechanism as the membrane keyboard. In both cases, a keypress is registered when the top layer is forced through a hole to touch the bottom layer. For every key, the conductive traces on the bottom layer are normally separated by a non-conductive gap. Electrical current cannot flow between them; t
https://en.wikipedia.org/wiki/Project%20Athena
Project Athena was a joint project of MIT, Digital Equipment Corporation, and IBM to produce a campus-wide distributed computing environment for educational use. It was launched in 1983, and research and development ran until June 30, 1991. Athena is still in production use at MIT. It works as software (currently a set of Debian packages) that makes a machine a thin client, which downloads educational applications from the MIT servers on demand. Project Athena was important in the early history of desktop and distributed computing. It created the X Window System, Kerberos, and Zephyr Notification Service. It influenced the development of thin computing, LDAP, Active Directory, and instant messaging. Description Leaders of the $50 million, five-year project at MIT included Michael Dertouzos, director of the Laboratory for Computer Science; Jerry Wilson, dean of the School of Engineering; and Joel Moses, head of the Electrical Engineering and Computer Science department. DEC agreed to contribute more than 300 terminals, 1600 microcomputers, 63 minicomputers, and five employees. IBM agreed to contribute 500 microcomputers, 500 workstations, software, five employees, and grant funding. History In 1979 Dertouzos proposed to university president Jerome Wiesner that the university network mainframe computers for student use. At that time MIT used computers throughout its research, but undergraduates did not use computers except in Course VI (computer science) classes. With no interest from the rest of the university, the School of Engineering in 1982 approached DEC for equipment for itself. President Paul E. Gray and the MIT Corporation wanted the project to benefit the rest of the university, and IBM agreed to donate equipment to MIT except to the engineering school. Project Athena began in May 1983. Its initial goals were to: Develop computer-based learning tools that are usable in multiple educational environments Establish a base of knowledge for future de
https://en.wikipedia.org/wiki/System%20X%20%28supercomputer%29
System X (pronounced "System Ten") was a supercomputer assembled by Virginia Tech's Advanced Research Computing facility in the summer of 2003. Costing US$5.2 million, it was originally composed of 1,100 Apple Power Mac G5 computers with dual 2.0 GHz processors. System X was decommissioned on May 21, 2012. System X ran at 12.25 teraflops (20.24 peak) and was ranked #3 on November 16, 2003 and #280 in the July 2008 edition of the TOP500 list of the world's most powerful supercomputers. The system used error-correcting (ECC) RAM, which is important for accuracy due to the rate of bits flipped by cosmic rays or other interference sources in its vast number of RAM chips. Background The supercomputer's name originates from the use of the Mac OS X operating system for each node, and because it was the first university computer to achieve 10 teraflops on the high performance LINPACK benchmark. The supercomputer is also known as Big Mac or Terascale Cluster. In 2003 it was also touted as "the world's most powerful and cheapest homebuilt supercomputer." System X was constructed with a relatively low budget of just $5.2 million, in the span of only three months, thanks in large part to using off-the-shelf Power Mac G5 computers. By comparison, the Earth Simulator, the fastest supercomputer at the time, cost approximately $400 million to build. Upgrade to Server-Grade Parts In 2004, Virginia Tech upgraded its computer to Apple's newly released Xserve G5 servers. The upgraded version ranked #7 in the 2004 TOP500 list and its server-grade error-correcting memory solved the problem of cosmic ray interference. In October 2004, Virginia Tech partially rebuilt System X at a cost of about $600,000. These improvements brought the computer's speed up to 12.25 teraflops, which placed System X #14 on the 2005 TOP500 list. Similar Projects Virginia Tech's system was the model for Xseed, a smaller system also made from Xserve servers and built by Bowie State University in Mar
https://en.wikipedia.org/wiki/Foraging
Foraging is searching for wild food resources. It affects an animal's fitness because it plays an important role in an animal's ability to survive and reproduce. Foraging theory is a branch of behavioral ecology that studies the foraging behavior of animals in response to the environment where the animal lives. Behavioral ecologists use economic models and categories to understand foraging; many of these models are a type of optimal model. Thus foraging theory is discussed in terms of optimizing a payoff from a foraging decision. The payoff for many of these models is the amount of energy an animal receives per unit time, more specifically, the highest ratio of energetic gain to cost while foraging. Foraging theory predicts that the decisions that maximize energy per unit time and thus deliver the highest payoff will be selected for and persist. Key words used to describe foraging behavior include: resources (the elements necessary for survival and reproduction which have a limited supply), predator (any organism that consumes others), prey (an organism that is eaten in part or whole by another), and patches (concentrations of resources). Behavioral ecologists first tackled this topic in the 1960s and 1970s. Their goal was to quantify and formalize a set of models to test their null hypothesis that animals forage randomly. Important contributions to foraging theory have been made by: Eric Charnov, who developed the marginal value theorem to predict the behavior of foragers using patches; Sir John Krebs, with work on the optimal diet model in relation to tits and chickadees; John Goss-Custard, who first tested the optimal diet model against behavior in the field, using redshank, and then proceeded to an extensive study of foraging in the common pied oystercatcher. Factors influencing foraging behavior Several factors affect an animal's ability to forage and acquire profitable resources. Learning Learning is defined as an adaptive change or modification of a behavior
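The rate-maximizing logic above can be made concrete with the classic prey-choice (optimal diet) model: rank prey types by profitability e/h (energy per unit handling time) and add types to the diet only while doing so raises the overall intake rate. A sketch with invented numbers, not field data:

```python
# Classic prey-choice model sketch: prey types ranked by profitability e/h;
# a type is added to the diet only if it raises the overall intake rate
#   R = sum(lam*e) / (1 + sum(lam*h))
# where lam is the encounter rate while searching.  Numbers are illustrative.

def optimal_diet(prey):
    """prey: list of (name, encounter_rate, energy, handling_time)."""
    ranked = sorted(prey, key=lambda p: p[2] / p[3], reverse=True)
    diet, best_rate = [], 0.0
    for p in ranked:
        candidate = diet + [p]
        e_sum = sum(lam * e for _, lam, e, _ in candidate)
        h_sum = sum(lam * h for _, lam, _, h in candidate)
        rate = e_sum / (1 + h_sum)
        if rate > best_rate:
            diet, best_rate = candidate, rate
        else:
            break  # every remaining type is even less profitable
    return [p[0] for p in diet], best_rate

prey = [("large", 0.2, 10.0, 1.0), ("small", 1.0, 1.0, 2.0)]
diet, rate = optimal_diet(prey)
```

With these numbers the forager specializes on the large prey: adding the small type would drop the intake rate from 2/1.2 to 3/3.2.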
https://en.wikipedia.org/wiki/Almost%20disjoint%20sets
In mathematics, two sets are almost disjoint if their intersection is small in some sense; different definitions of "small" will result in different definitions of "almost disjoint". Definition The most common choice is to take "small" to mean finite. In this case, two sets A and B are almost disjoint if their intersection is finite, i.e. if |A ∩ B| < ∞. (Here, '|X|' denotes the cardinality of X, and '< ∞' means 'finite'.) For example, the closed intervals [0, 1] and [1, 2] are almost disjoint, because their intersection is the finite set {1}. However, the unit interval [0, 1] and the set of rational numbers Q are not almost disjoint, because their intersection is infinite. This definition extends to any collection of sets. A collection of sets is pairwise almost disjoint or mutually almost disjoint if any two distinct sets in the collection are almost disjoint. Often the prefix "pairwise" is dropped, and a pairwise almost disjoint collection is simply called "almost disjoint". Formally, let I be an index set, and for each i in I, let Ai be a set. Then the collection of sets {Ai : i in I} is almost disjoint if for any distinct i and j in I, |Ai ∩ Aj| < ∞. For example, the collection of all lines through the origin in R2 is almost disjoint, because any two of them only meet at the origin. If {Ai} is an almost disjoint collection consisting of more than one set, then clearly its intersection is finite: |⋂_(i in I) Ai| < ∞. However, the converse is not true: the intersection of the collection {{1, 2, 3, ...}, {2, 3, 4, ...}, {3, 4, 5, ...}, ...} is empty, but the collection is not almost disjoint; in fact, the intersection of any two distinct sets in this collection is infinite. The possible cardinalities of a maximal almost disjoint family (commonly referred to as a MAD family) on the set of the natural numbers have been the object of intense study. The minimum infinite such cardinal is one of the classical cardinal characteristics of the continuum. Other meanings Sometimes "almost disjoint" is used in some other sense, or in the sense of measure theory or topological catego
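The two examples above can be illustrated on finite truncations: lines through the origin meet pairwise only at (0, 0), while the tail sets {n, n+1, ...} overlap heavily in pairs. A sketch (truncation bounds are ours, chosen only to make the contrast visible):

```python
from itertools import combinations

# Finite illustration of pairwise almost disjointness.  'Finite' degenerates
# on truncated sets, so we just inspect the pairwise intersections directly.

def pairwise_intersections(sets):
    return [a & b for a, b in combinations(sets, 2)]

# Integer points on lines y = k*x through the origin, truncated to |x| <= 10:
# any two distinct lines share only the origin.
lines = [{(x, k * x) for x in range(-10, 11)} for k in range(4)]

# Truncated tails A_n = {n, n+1, ..., 99}: pairwise overlaps are large,
# mirroring the infinite counterexample in the text.
tails = [set(range(n, 100)) for n in (0, 10, 20)]
```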
https://en.wikipedia.org/wiki/Heisenberg%20group
In mathematics, the Heisenberg group, named after Werner Heisenberg, is the group of 3×3 upper triangular matrices of the form [[1, a, c], [0, 1, b], [0, 0, 1]] under the operation of matrix multiplication. Elements a, b and c can be taken from any commutative ring with identity, often taken to be the ring of real numbers (resulting in the "continuous Heisenberg group") or the ring of integers (resulting in the "discrete Heisenberg group"). The continuous Heisenberg group arises in the description of one-dimensional quantum mechanical systems, especially in the context of the Stone–von Neumann theorem. More generally, one can consider Heisenberg groups associated to n-dimensional systems, and most generally, to any symplectic vector space. The three-dimensional case In the three-dimensional case, the product of two Heisenberg matrices is given by: [[1, a, c], [0, 1, b], [0, 0, 1]] · [[1, a′, c′], [0, 1, b′], [0, 0, 1]] = [[1, a + a′, c + c′ + ab′], [0, 1, b + b′], [0, 0, 1]]. As one can see from the term ab′, the group is non-abelian. The neutral element of the Heisenberg group is the identity matrix, and inverses are given by [[1, a, c], [0, 1, b], [0, 0, 1]]⁻¹ = [[1, −a, ab − c], [0, 1, −b], [0, 0, 1]]. The group is a subgroup of the 2-dimensional affine group Aff(2): the matrix above acting on the column vector (x, y, 1) corresponds to the affine transform (x, y) ↦ (x + ay + c, y + b). There are several prominent examples of the three-dimensional case. Continuous Heisenberg group If a, b, c are real numbers (in the ring R) then one has the continuous Heisenberg group H3(R). It is a nilpotent real Lie group of dimension 3. In addition to the representation as real 3×3 matrices, the continuous Heisenberg group also has several different representations in terms of function spaces. By the Stone–von Neumann theorem, there is, up to isomorphism, a unique irreducible unitary representation of H in which its centre acts by a given nontrivial character. This representation has several important realizations, or models. In the Schrödinger model, the Heisenberg group acts on the space of square integrable functions. In the theta representation, it acts on the space of holomorphic functions on the upper half-plane; it is so named for its connection with the theta functions.
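The non-commutativity is easy to exhibit with explicit integer matrices; a minimal sketch (helper names are ours):

```python
# Discrete Heisenberg group elements as 3x3 upper-triangular matrices
# [[1, a, c], [0, 1, b], [0, 0, 1]], multiplied directly to show that
# the product rule picks up the a*b' cross term and the group is non-abelian.

def heis(a, b, c):
    return [[1, a, c], [0, 1, b], [0, 0, 1]]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

xy = matmul(heis(1, 0, 0), heis(0, 1, 0))  # (a,b,c) = (1,1,1): cross term 1*1
yx = matmul(heis(0, 1, 0), heis(1, 0, 0))  # (a,b,c) = (1,1,0): cross term 0*0
```

The two products differ exactly in the top-right entry, matching the formula c + c′ + ab′.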
https://en.wikipedia.org/wiki/WordWeb
WordWeb is an international English dictionary and thesaurus program for Microsoft Windows, iOS, Android and Mac OS X. It is partly based on the WordNet database. Functions WordWeb usually resides in the Windows notification area. It can be activated by holding down CTRL and right-clicking a word in almost any program. This opens the WordWeb main window, with definitions. In addition to its dictionary and thesaurus features, it includes: Phrase guessing – for example, CTRL + right-clicking 'Princeton' in 'Princeton University' shows the meaning of the combined entity rather than only 'Princeton'. Words from pictures – CTRL + right-clicking a word within an image (for example, the 'Free' in the Wikipedia logo) asks WordWeb to guess the word. WordWeb is used primarily by international students to look up meanings, improve their vocabulary, and progress through academic life. Information The thesaurus is integrated into the dictionary. Under each definition, various related words are shown, including: Synonyms Antonyms Hyponyms ('play' lists several sub-types of play, including 'passion play') Hypernyms ('daisy' is listed as a type of 'flower') Constituents (under 'forest', listed parts include 'tree' and 'underbrush') Words for wholes that the entry word can help constitute Similar words (words that are not synonyms, but are semantically similar; 'big' is listed as similar to 'huge') Each shown word can be double-clicked to jump to its entry. WordWeb keeps a history of each session, allowing users to see their previously viewed entries. Users can also actively improve the dictionary and thesaurus by submitting errors (such as missing words, phrases, or more senses for existing entries) and enhancement requests. WordWeb is not a social platform in any way. Versions There are two WordWeb versions: the free version, which does not have the word list, search, anagram, or customization features; and the paid version, WordWeb Pro. WordWeb 5 added th
https://en.wikipedia.org/wiki/Aviion
Aviion (styled AViiON) was a series of computers from Data General that were the company's main product from the late 1980s until the company's server products were discontinued in 2001. Earlier Aviion models used the Motorola 88000 CPU, but later models moved to an all-Intel solution when Motorola stopped work on the 88000 in the early 1990s. Some versions of these later Intel-based machines ran Windows NT, while higher-end machines ran the company's flavor of Unix, DG/UX. History Data General had, for most of its history, essentially mirrored the strategy of DEC with a competitive (but, in the spirit of the time, incompatible) minicomputer with a better price/performance ratio. However, by the 1980s, Data General was clearly in a downward spiral relative to DEC. With the performance of custom-designed minicomputer CPUs dropping relative to commodity microprocessors, the cost of developing a custom solution no longer paid for itself. A better solution was to use these same commodity processors, but put them together in such a way as to offer better performance than a commodity machine could offer. With Aviion, DG shifted its sights from a purely proprietary minicomputer line to the burgeoning Unix server market. The new line was based around the Motorola 88000, a high performance RISC processor with some support for multiprocessing and a particularly clean architecture. The machines ran a System V Unix variant known as DG/UX, largely developed at the company's Research Triangle Park facility. DG/UX had previously run on the company's family of Eclipse MV 32-bit minicomputers (the successors to Nova and the 16-bit Eclipse minis) but only in a very secondary role to the Eclipse MV mainstay AOS/VS and AOS/VS II operating systems. Also, some Aviion servers from this era ran the proprietary Meditech MAGIC operating system. From February 1988 to October 1990, Robert E. Cousins was the Department Manager for workstation development. During this time they produced the Mave
https://en.wikipedia.org/wiki/Simon%20Singh
Simon Lehna Singh (born 19 September 1964) is a British popular science author, theoretical and particle physicist. His written works include Fermat's Last Theorem (in the United States titled Fermat's Enigma: The Epic Quest to Solve the World's Greatest Mathematical Problem), The Code Book (about cryptography and its history), Big Bang (about the Big Bang theory and the origins of the universe), Trick or Treatment? Alternative Medicine on Trial (about complementary and alternative medicine, co-written with Edzard Ernst) and The Simpsons and Their Mathematical Secrets (about mathematical ideas and theorems hidden in episodes of The Simpsons and Futurama). In 2012 Singh founded the Good Thinking Society, through which he created the website "Parallel" to help students learn mathematics. Singh has also produced documentaries and works for television to accompany his books, is a trustee of the National Museum of Science and Industry, a patron of Humanists UK, founder of the Good Thinking Society, and co-founder of the Undergraduate Ambassadors Scheme. Early life and education Singh was born in a Sikh family to parents who emigrated from Punjab, India to Britain in 1950. He is the youngest of three brothers, his eldest brother being Tom Singh, the founder of the UK New Look chain of stores. Singh grew up in Wellington, Somerset, attending Wellington School, and went on to Imperial College London, where he studied physics. He was active in the student union, becoming President of the Royal College of Science Union. Later he completed a PhD in particle physics at the University of Cambridge as a postgraduate student of Emmanuel College, Cambridge while working at CERN, Geneva. Career In 1983, he was part of the UA2 experiment at CERN. In 1987, Singh taught science at The Doon School, an independent all-boys' boarding school in India. In 1990 Singh returned to England and joined the BBC's Science and Features Department, where he was a producer and director working on
https://en.wikipedia.org/wiki/Run-length%20limited
Run-length limited or RLL coding is a line coding technique that is used to send arbitrary data over a communications channel with bandwidth limits. RLL codes are defined by four main parameters: m, n, d, k. The first two, m/n, refer to the rate of the code, while the remaining two specify the minimal d and maximal k number of zeroes between consecutive ones. This is used in both telecommunication and storage systems that move a medium past a fixed recording head. Specifically, RLL bounds the length of stretches (runs) of repeated bits during which the signal does not change. If the runs are too long, clock recovery is difficult; if they are too short, the high frequencies might be attenuated by the communications channel. By modulating the data, RLL reduces the timing uncertainty in decoding the stored data, which would lead to the possible erroneous insertion or removal of bits when reading the data back. This mechanism ensures that the boundaries between bits can always be accurately found (preventing bit slip), while efficiently using the media to reliably store the maximal amount of data in a given space. Early disk drives used very simple encoding schemes, such as RLL (0,1) FM code, followed by RLL (1,3) MFM code, which were widely used in hard disk drives until the mid-1980s and are still used in digital optical discs such as CD, DVD, MD, Hi-MD and Blu-ray. Higher-density RLL (2,7) and RLL (1,7) codes became the de facto industry standard for hard disks by the early 1990s. Need for RLL coding On a hard disk drive, information is represented by changes in the direction of the magnetic field on the disk, and on magnetic media, the playback output is proportional to the density of flux transition. In a computer, information is represented by the voltage on a wire. No voltage on the wire in relation to a defined ground level would be a binary zero, and a positive voltage on the wire in relation to ground represents a binary one. Magnetic media, on the other
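As a concrete illustration of the (d, k) constraint, here is a hedged sketch of MFM, the RLL (1,3) code mentioned above: each data bit is preceded by a clock bit that is 1 only when both the previous and the current data bit are 0, which guarantees runs of 1 to 3 zeros between consecutive ones (function names are ours):

```python
# MFM (RLL 1,3) encoding sketch: interleave a clock bit before every data
# bit; the clock bit is 1 only between two zero data bits.  The result
# never has two adjacent ones (d = 1) or more than three zeros in a row
# between ones (k = 3).

def mfm_encode(bits):
    out, prev = [], 0
    for b in bits:
        out.append(1 if (prev == 0 and b == 0) else 0)  # clock bit
        out.append(b)                                    # data bit
        prev = b
    return out

def zero_runs_between_ones(bits):
    """Lengths of the zero runs strictly between consecutive ones."""
    runs, run = [], None
    for b in bits:
        if b == 1:
            if run is not None:
                runs.append(run)
            run = 0
        elif run is not None:
            run += 1
    return runs

encoded = mfm_encode([0, 0, 1, 1, 0, 1, 0, 0])
```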
https://en.wikipedia.org/wiki/Out%20Run
Out Run (also stylized as OutRun) is an arcade driving video game released by Sega in September 1986. It is known for its pioneering hardware and graphics, nonlinear gameplay, a selectable soundtrack with music composed by Hiroshi Kawaguchi, and the hydraulic motion simulator deluxe arcade cabinet. The goal is to avoid traffic and reach one of five destinations. The game was designed by Yu Suzuki, who traveled to Europe to gain inspiration for the game's stages. Suzuki had a small team and only ten months to program the game, leaving him to do most of the work himself. The game was a critical and commercial success, becoming the highest-grossing arcade game of 1987 worldwide as well as Sega's most successful arcade cabinet of the 1980s. It was ported to numerous video game consoles and home computers, becoming one of the best-selling video games at the time and selling millions of copies worldwide, and it spawned a number of sequels. Out Run is considered one of the most influential racing games, cited as an influence upon numerous later video games, playing a role in the arcade video game industry's recovery, and providing the name for a popular music genre. Gameplay Out Run is a 3D driving video game in which the player controls a Ferrari Testarossa convertible from a third-person rear perspective. The camera is placed near the ground, simulating a Ferrari driver's position and limiting the player's view into the distance. The road curves, crests, and dips, which increases the challenge by obscuring upcoming obstacles such as traffic that the player must avoid. The object of the game is to reach the finish line against a timer. The game world is divided into multiple stages that each end in a checkpoint, and reaching the end of a stage provides more time. Near the end of each stage, the track forks to give the player a choice of routes leading to five final destinations. The destinations represent different difficulty levels and each conclude with their own ending scen
https://en.wikipedia.org/wiki/Partial-response%20maximum-likelihood
In computer data storage, partial-response maximum-likelihood (PRML) is a method for recovering the digital data from the weak analog read-back signal picked up by the head of a magnetic disk drive or tape drive. PRML was introduced to recover data more reliably or at a greater areal-density than earlier simpler schemes such as peak-detection. These advances are important because most of the digital data in the world is stored using magnetic storage on hard disk or tape drives. Ampex introduced PRML in a tape drive in 1984. IBM introduced PRML in a disk drive in 1990 and also coined the acronym PRML. Many advances have taken place since the initial introduction. Recent read/write channels operate at much higher data-rates, are fully adaptive, and, in particular, include the ability to handle nonlinear signal distortion and non-stationary, colored, data-dependent noise (PDNP or NPML). Partial response refers to the fact that part of the response to an individual bit may occur at one sample instant while other parts fall in other sample instants. Maximum-likelihood refers to the detector finding the bit-pattern most likely to have been responsible for the read-back waveform. Theoretical development Partial-response was first proposed by Adam Lender in 1963. The method was generalized by Kretzmer in 1966. Kretzmer also classified the several different possible responses, for example, PR1 is duobinary and PR4 is the response used in the classical PRML. In 1970, Kobayashi and Tang recognized the value of PR4 for the magnetic recording channel. Maximum-likelihood decoding using the eponymous Viterbi algorithm was proposed in 1967 by Andrew Viterbi as a means of decoding convolutional codes. By 1971, Hisashi Kobayashi at IBM had recognized that the Viterbi algorithm could be applied to analog channels with inter-symbol interference and particularly to the use of PR4 in the context of Magnetic Recording (later called PRML). (The wide range of applications of
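The PR4 target and maximum-likelihood detection described above can be sketched in a few lines. The following is an illustrative toy model, not a description of any drive's actual read channel: it assumes NRZ symbols ±1, a noiseless channel, and an all −1 preamble as channel history. The Viterbi detector then finds the bit pattern whose PR4 response y[n] = x[n] − x[n−2] is closest, in squared error, to the observed samples:

```python
def pr4_channel(bits):
    """PR4 read-back model: y[n] = x[n] - x[n-2], with NRZ symbols
    x in {-1, +1} and an assumed all -1 preamble as channel history."""
    x = [-1, -1] + [1 if b else -1 for b in bits]
    return [x[n] - x[n - 2] for n in range(2, len(x))]

def viterbi_pr4(y):
    """Maximum-likelihood detection: pick the bit pattern whose PR4
    response best matches the samples y (squared-error branch metric)."""
    INF = float("inf")
    states = [(-1, -1), (-1, 1), (1, -1), (1, 1)]   # (x[n-2], x[n-1])
    metric = {s: INF for s in states}
    metric[(-1, -1)] = 0.0                          # matches the preamble
    history = []
    for yn in y:
        new_metric = {s: INF for s in states}
        backptr = {}
        for (a, b), m in metric.items():
            if m == INF:
                continue
            for c in (-1, 1):                        # hypothesize next symbol
                cand = m + (yn - (c - a)) ** 2       # expected sample is c - a
                if cand < new_metric[(b, c)]:
                    new_metric[(b, c)] = cand
                    backptr[(b, c)] = (a, b)
        metric = new_metric
        history.append(backptr)
    state = min(metric, key=metric.get)              # best final state
    bits = []
    for backptr in reversed(history):                # traceback
        bits.append(1 if state[1] == 1 else 0)
        state = backptr[state]
    return bits[::-1]
```

In a real channel the samples are noisy and the detector still returns the most likely pattern; here, with noiseless samples, it recovers the input exactly.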
https://en.wikipedia.org/wiki/Cyclic%20permutation
In mathematics, and in particular in group theory, a cyclic permutation is a permutation consisting of a single cycle. In some cases, cyclic permutations are referred to as cycles; if a cyclic permutation has k elements, it may be called a k-cycle. Some authors widen this definition to include permutations with fixed points in addition to at most one non-trivial cycle. In cycle notation, cyclic permutations are denoted by the list of their elements enclosed with parentheses, in the order to which they are permuted. For example, the permutation (1 3 2 4) that sends 1 to 3, 3 to 2, 2 to 4 and 4 to 1 is a 4-cycle, and the permutation (1 3 2)(4) that sends 1 to 3, 3 to 2, 2 to 1 and 4 to 4 is considered a 3-cycle by some authors. On the other hand, the permutation (1 3)(2 4) that sends 1 to 3, 3 to 1, 2 to 4 and 4 to 2 is not a cyclic permutation because it separately permutes the pairs {1, 3} and {2, 4}. The set of elements that are not fixed by a cyclic permutation is called the orbit of the cyclic permutation. Every permutation on finitely many elements can be decomposed into cyclic permutations on disjoint orbits. The individual cyclic parts of a permutation are also called cycles, thus the second example is composed of a 3-cycle and a 1-cycle (or fixed point) and the third is composed of two 2-cycles. Definition There is not widespread consensus about the precise definition of a cyclic permutation. Some authors define a permutation of a set to be cyclic if "successive application would take each object of the permuted set successively through the positions of all the other objects", or, equivalently, if its representation in cycle notation consists of a single cycle. Others provide a more permissive definition which allows fixed points. A nonempty subset of is a cycle of if the restriction of to is a cyclic permutation of . If is finite, its cycles are disjoint, and their union is . That is, they form a partition, called the cycle decomposition of S
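The cycle decomposition described above is straightforward to compute: starting from any unvisited element, follow the permutation until the orbit closes, then move to the next unvisited element. A small sketch, representing a permutation as a dict mapping each element to its image (the representation is an assumption for illustration):

```python
def cycle_decomposition(perm):
    """Decompose a permutation (a dict mapping each element to its
    image) into disjoint cycles, each returned as a tuple."""
    seen, cycles = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:        # follow the orbit until it closes
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        cycles.append(tuple(cycle))
    return cycles
```

Applied to the article's three examples, this returns one 4-cycle, a 3-cycle plus a fixed point, and two 2-cycles, respectively.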
https://en.wikipedia.org/wiki/Horseshoe%20map
In the mathematics of chaos theory, a horseshoe map is any member of a class of chaotic maps of the square into itself. It is a core example in the study of dynamical systems. The map was introduced by Stephen Smale while studying the behavior of the orbits of the van der Pol oscillator. The action of the map is defined geometrically by squishing the square, then stretching the result into a long strip, and finally folding the strip into the shape of a horseshoe. Most points eventually leave the square under the action of the map. They go to the side caps where they will, under iteration, converge to a fixed point in one of the caps. The points that remain in the square under repeated iteration form a fractal set and are part of the invariant set of the map. The squishing, stretching and folding of the horseshoe map are typical of chaotic systems, but not necessary or even sufficient. In the horseshoe map, the squeezing and stretching are uniform. They compensate each other so that the area of the square does not change. The folding is done neatly, so that the orbits that remain forever in the square can be simply described. For a horseshoe map: there are an infinite number of periodic orbits; periodic orbits of arbitrarily long period exist; the number of periodic orbits grows exponentially with the period; and close to any point of the fractal invariant set there is a point of a periodic orbit. The horseshoe map The horseshoe map is a diffeomorphism defined from a region of the plane into itself. The region is a square capped by two semi-disks. The codomain of (the "horseshoe") is a proper subset of its domain . The action of is defined through the composition of three geometrically defined transformations. First the square is contracted along the vertical direction by a factor . The caps are contracted so as to remain semi-disks attached to the resulting rectangle. Contracting by a factor smaller than one half assures that there will be a
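The stretch-and-fold geometry can be illustrated with a piecewise-linear caricature of the map on the unit square: stretch horizontally by 3, contract vertically by 1/3, and fold the right branch back over the square. This is an illustrative model under assumed parameters, not Smale's original construction; points landing in the middle third are mapped out of the square (toward the caps), which the sketch models by returning None:

```python
def horseshoe(p):
    """One iteration of a piecewise-linear horseshoe-like map on the
    unit square. Points in the middle third escape (returned as None)."""
    x, y = p
    if x <= 1/3:
        return (3 * x, y / 3)           # left branch
    if x >= 2/3:
        return (3 - 3 * x, 1 - y / 3)   # right branch, folded back
    return None                          # escapes toward the caps

def survives(p, steps):
    """True if p stays in the square for the given number of iterations."""
    for _ in range(steps):
        p = horseshoe(p)
        if p is None:
            return False
    return True
```

Iterating shows the behavior described above: most points eventually fall into the escaping strip, while the points that survive forever (here, those whose x-coordinates avoid the middle third at every step) form a Cantor-like invariant set.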
https://en.wikipedia.org/wiki/List%20of%20ciphertexts
Some famous ciphertexts (or cryptograms), in chronological order by date, are: See also Undeciphered writing systems (cleartext, natural-language writing of unknown meaning) External links Elonka Dunin's list of famous unsolved codes and ciphers
https://en.wikipedia.org/wiki/Beale%20ciphers
The Beale ciphers are a set of three ciphertexts, one of which allegedly states the location of a buried treasure of gold, silver and jewels estimated to be worth over US$43 million. The first (unsolved) ciphertext describes the location, the second (solved) ciphertext describes the contents of the treasure, and the third (unsolved) lists the names of the treasure's owners and their next of kin. The story of the three ciphertexts originates from an 1885 pamphlet called The Beale Papers, detailing treasure being buried by a man named Thomas J. Beale in a secret location in Bedford County, Virginia, in about 1820. Beale entrusted a box containing the encrypted messages to a local innkeeper named Robert Morriss and then disappeared, never to be seen again. According to the story, the innkeeper opened the box 23 years later, and then decades after that gave the three encrypted ciphertexts to a friend before he died. The friend then spent the next twenty years of his life trying to decode the messages, and was able to solve only one of them, which gave details of the treasure buried and the general location of the treasure. The unnamed friend then published all three ciphertexts in a pamphlet which was advertised for sale in the 1880s. Since the publication of the pamphlet, a number of attempts have been made to decode the two remaining ciphertexts and to locate the treasure, but all efforts have resulted in failure. There are many arguments that the entire story is a hoax, including the 1980 article "A Dissenting Opinion" by cryptographer Jim Gillogly, and a 1982 scholarly analysis of the Beale Papers and their related story by Joe Nickell, using historical records that cast doubt on the existence of Thomas J. Beale. Nickell also presents linguistic evidence demonstrating that the documents could not have been written at the time alleged (words such as "stampeding", for instance, are of later vintage). His analysis of the writing style showed th
https://en.wikipedia.org/wiki/Bilinear%20form
In mathematics, a bilinear form is a bilinear map on a vector space (the elements of which are called vectors) over a field K (the elements of which are called scalars). In other words, a bilinear form is a function that is linear in each argument separately: and and The dot product on is an example of a bilinear form. The definition of a bilinear form can be extended to include modules over a ring, with linear maps replaced by module homomorphisms. When is the field of complex numbers , one is often more interested in sesquilinear forms, which are similar to bilinear forms but are conjugate linear in one argument. Coordinate representation Let be an -dimensional vector space with basis . The matrix A, defined by is called the matrix of the bilinear form on the basis . If the matrix represents a vector with respect to this basis, and similarly, the matrix represents another vector , then: A bilinear form has different matrices on different bases. However, the matrices of a bilinear form on different bases are all congruent. More precisely, if is another basis of , then where the form an invertible matrix . Then, the matrix of the bilinear form on the new basis is . Maps to the dual space Every bilinear form on defines a pair of linear maps from to its dual space . Define by This is often denoted as where the dot ( ⋅ ) indicates the slot into which the argument for the resulting linear functional is to be placed (see Currying). For a finite-dimensional vector space , if either of or is an isomorphism, then both are, and the bilinear form is said to be nondegenerate. More concretely, for a finite-dimensional vector space, non-degenerate means that every non-zero element pairs non-trivially with some other element: for all implies that and for all implies that . The corresponding notion for a module over a commutative ring is that a bilinear form is if is an isomorphism. Given a finitely generated module over a commu
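In coordinates, the statements above reduce to matrix arithmetic: the form evaluates as B(x, y) = xᵀAy, and a change of basis by an invertible matrix S replaces A with the congruent matrix SᵀAS. A small pure-Python sketch (matrix values chosen arbitrarily for illustration):

```python
def bilinear(A, x, y):
    """Evaluate B(x, y) = x^T A y for a matrix A given as nested lists."""
    n = len(A)
    return sum(x[i] * A[i][j] * y[j] for i in range(n) for j in range(n))

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def matvec(P, v):
    return [sum(P[i][j] * v[j] for j in range(len(v))) for i in range(len(P))]

def transpose(P):
    return [list(row) for row in zip(*P)]

I = [[1, 0], [0, 1]]                 # dot product = form with identity matrix
A = [[1, 2], [3, 4]]                 # an arbitrary bilinear form
S = [[1, 1], [0, 1]]                 # an invertible change-of-basis matrix
A_new = matmul(transpose(S), matmul(A, S))   # congruent matrix S^T A S
```

Evaluating A_new on coordinate vectors in the new basis agrees with evaluating A on the corresponding vectors Sx, Sy in the old basis, which is exactly the congruence statement above.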
https://en.wikipedia.org/wiki/Dorabella%20Cipher
The Dorabella Cipher is an enciphered letter written by composer Edward Elgar to Dora Penny, which was accompanied by another dated July 14, 1897. Penny never deciphered it and its meaning remains unknown. The cipher, consisting of 87 characters spread over 3 lines, appears to be made up from 24 symbols, each symbol consisting of 1, 2, or 3 approximate semicircles oriented in one of 8 directions (the orientation of several characters is ambiguous). A small dot appears after the fifth character on the third line. Background Dora Penny (1874–1964) was the daughter of the Reverend Alfred Penny (1845–1935) of Wolverhampton. Dora's mother died in February 1874, six days after giving birth to Dora, after which her father worked for many years as a missionary in Melanesia. In 1895 Dora's father remarried, and Dora's stepmother was a friend of Caroline Alice Elgar. In July 1897 the Penny family invited Edward and Alice Elgar to stay at the Wolverhampton Rectory for a few days. Edward Elgar was a forty-year-old music teacher who had yet to become a successful composer. Dora Penny was almost seventeen years his junior. Edward and Dora liked one another and remained friends for the rest of the composer's life: Elgar named Variation 10 of his 1899 Variations on an Original Theme (Enigma) Dorabella as a dedication to Dora Penny. On returning to Great Malvern on 14 July 1897 Alice wrote a letter of thanks to the Penny family. Edward Elgar inserted a note with cryptic writing: he pencilled the name 'Miss Penny' on the reverse. This note lay in a drawer for forty years and became generally known when Dora had it reproduced in her memoir Edward Elgar: Memories of a Variation, published by Methuen Publishing in 1937. Subsequently, the original note was lost. Dora claimed that she had never been able to read the note, which she assumed to be a cipher message. Composer and historian Kevin Jones advanced one view: Dora's father had just returned from Melanesia where he had b
https://en.wikipedia.org/wiki/%E2%88%921
In mathematics, −1 (negative one or minus one) is the additive inverse of 1, that is, the number that when added to 1 gives the additive identity element, 0. It is the negative integer greater than negative two (−2) and less than 0. Algebraic properties Multiplication Multiplying a number by −1 is equivalent to changing the sign of the number – that is, for any we have . This can be proved using the distributive law and the axiom that 1 is the multiplicative identity: . Here we have used the fact that any number times 0 equals 0, which follows by cancellation from the equation . In other words, , so is the additive inverse of , i.e. , as was to be shown. Square of −1 The square of −1, i.e. −1 multiplied by −1, equals 1. As a consequence, a product of two negative numbers is positive. For an algebraic proof of this result, start with the equation . The first equality follows from the above result, and the second follows from the definition of −1 as additive inverse of 1: it is precisely that number which when added to 1 gives 0. Now, using the distributive law, it can be seen that . The third equality follows from the fact that 1 is a multiplicative identity. But now adding 1 to both sides of this last equation implies . The above arguments hold in any ring, a concept of abstract algebra generalizing integers and real numbers. Square roots of −1 Although there are no real square roots of −1, the complex number satisfies , and as such can be considered as a square root of −1. The only other complex number whose square is −1 is − because there are exactly two square roots of any non‐zero complex number, which follows from the fundamental theorem of algebra. In the algebra of quaternions – where the fundamental theorem does not apply – which contains the complex numbers, the equation has infinitely many solutions. Exponentiation to negative integers Exponentiation of a non‐zero real number can be extended to negative integers. We make the defi
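The identities above are easy to check numerically; a few one-liners using Python's complex type, where 1j plays the role of the imaginary unit i:

```python
# (-1) * (-1) == 1; here checked for Python integers
assert (-1) * (-1) == 1

# multiplying by -1 yields the additive inverse
for n in (0, 7, -3):
    assert n + (-1) * n == 0

# the two complex square roots of -1
assert 1j * 1j == -1 and (-1j) * (-1j) == -1

# exponentiation to a negative integer: x**(-n) agrees with 1 / x**n
x = 2.5
assert abs(x ** -3 - 1 / x ** 3) < 1e-15
```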
https://en.wikipedia.org/wiki/The%20Coroner%27s%20Toolkit
The Coroner's Toolkit (or TCT) is a suite of free computer security programs by Dan Farmer and Wietse Venema for digital forensic analysis. The suite runs under several Unix-related operating systems: FreeBSD, OpenBSD, BSD/OS, SunOS/Solaris, Linux, and HP-UX. TCT is released under the terms of the IBM Public License. Parts of TCT can be used to aid analysis of and data recovery from computer disasters. TCT was superseded by The Sleuth Kit. Although TSK is only partially based on TCT, the authors of TCT have accepted it as the official successor to TCT. References External links Official home page Feature: The Coroner's Toolkit Frequently Asked Questions about The Coroner's Toolkit
https://en.wikipedia.org/wiki/400%20%28number%29
400 (four hundred) is the natural number following 399 and preceding 401. Mathematical properties 400 is the square of 20. 400 is the sum of the powers of 7 from 0 to 3, thus making it a repdigit in base 7 (1111). A circle is divided into 400 grads, which is equal to 360 degrees and 2π radians. (Degrees and radians are the SI accepted units). 400 is a self number in base 10, since there is no integer that added to the sum of its own digits results in 400. On the other hand, 400 is divisible by the sum of its own base 10 digits, making it a Harshad number. Other fields .400 (2 hits out of 5 at-bats) is a numerically significant annual batting average statistic in Major League Baseball, last accomplished by Ted Williams of the Boston Red Sox in 1941. The number of days in a Gregorian calendar year changes according to a cycle of exactly 400 years, of which 97 are leap years and 303 are common. The Sun is approximately 400 times the size of the Moon but also is approximately 400 times further away, creating the temporary illusion in which the Sun and Moon in Earth's sky appear as if of similar size. In gematria 400 is the largest single number that can be represented without using the Sophit forms (see Kaph, Mem, Nun, Pe, and Tzade). Integers from 401 to 499 400s 401 401 is a prime number, tetranacci number, Chen prime, prime index prime Eisenstein prime with no imaginary part Sum of seven consecutive primes (43 + 47 + 53 + 59 + 61 + 67 + 71) Sum of nine consecutive primes (29 + 31 + 37 + 41 + 43 + 47 + 53 + 59 + 61) Mertens function returns 0, Member of the Mian–Chowla sequence. 402 402 = 2 × 3 × 67, sphenic number, nontotient, Harshad number, number of graphs with 8 nodes and 9 edges HTTP status code for "Payment Required", area code for Nebraska 403 403 = 13 × 31, heptagonal number, Mertens function returns 0. First number that is the product of an emirp pair. HTTP 403, the status code for "Forbidden" Also in the name of
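The self-number and Harshad-number claims above can be verified by brute force; a quick sketch:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def is_self(n):
    """Base-10 self number: no m satisfies m + digit_sum(m) == n."""
    return all(m + digit_sum(m) != n for m in range(1, n))

def is_harshad(n):
    """Divisible by the sum of its own base-10 digits."""
    return n % digit_sum(n) == 0
```

For 400 both properties hold (400 / 4 = 100); by contrast 399 is not a self number, since 384 + (3 + 8 + 4) = 399.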
https://en.wikipedia.org/wiki/HP-35
The HP-35 was Hewlett-Packard's first pocket calculator and the world's first scientific pocket calculator: a calculator with trigonometric and exponential functions. It was introduced in 1972. History In about 1970 HP co-founder Bill Hewlett challenged his co-workers to create a "shirt-pocket sized HP-9100". At the time, slide rules were the only practical portable devices for performing trigonometric and exponential functions, as existing pocket calculators could only perform addition, subtraction, multiplication, and division. Introduced at , like HP's first scientific calculator, the desktop 9100A, it used reverse Polish notation (RPN) rather than what came to be called "algebraic" entry. The "35" in the calculator's name came from the number of keys. The original HP-35 was available from 1972 to 1975. In 2007 HP announced the release of the "retro"-look HP 35s to commemorate the 35th anniversary of the launch of the original HP-35. It was priced at . The HP-35 was named an IEEE Milestone in 2009. Description The calculator used a traditional floating decimal display for numbers that could be displayed in that format, but automatically switched to scientific notation for other numbers. The fifteen-digit LED display was capable of displaying a ten-digit mantissa plus its sign and a decimal point and a two-digit exponent plus its sign. The display used a unique form of multiplexing, illuminating a single LED segment at a time rather than a single LED digit, because HP research had shown that this method was perceived by the human eye as brighter for equivalent power. Light-emitting diodes were relatively new at the time and were much dimmer than high-efficiency diodes developed in subsequent decades. The calculator used three "AA"-sized NiCd batteries assembled into a removable proprietary battery pack. Replacement battery packs are no longer available, leaving existing HP-35 calculators to rely on AC power, or their users to rebuild the battery packs themse
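Reverse Polish notation, as used on the HP-35, evaluates expressions with a stack and no parentheses: operands are pushed, and each operator pops its two arguments and pushes the result. A minimal illustrative evaluator (not a model of the HP-35's actual four-register stack):

```python
def rpn_eval(tokens):
    """Evaluate a reverse-Polish expression given as a token list."""
    ops = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b, "/": lambda a, b: a / b}
    stack = []
    for t in tokens:
        if t in ops:
            b, a = stack.pop(), stack.pop()   # operator consumes top two
            stack.append(ops[t](a, b))
        else:
            stack.append(float(t))            # operand is pushed
    return stack[-1]
```

For example, the infix expression (3 + 4) × 2 is entered as "3 4 + 2 *".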
https://en.wikipedia.org/wiki/List%20of%20semiconductor%20materials
Semiconductor materials are nominally small band gap insulators. The defining property of a semiconductor material is that it can be doped with impurities that alter its electronic properties in a controllable way. Because of their application in the computer and photovoltaic industry—in devices such as transistors, lasers, and solar cells—the search for new semiconductor materials and the improvement of existing materials is an important field of study in materials science. Most commonly used semiconductor materials are crystalline inorganic solids. These materials are classified according to the periodic table groups of their constituent atoms. Different semiconductor materials differ in their properties. Thus, in comparison with silicon, compound semiconductors have both advantages and disadvantages. For example, gallium arsenide (GaAs) has six times higher electron mobility than silicon, which allows faster operation; a wider band gap, which allows operation of power devices at higher temperatures, and gives lower thermal noise to low power devices at room temperature; its direct band gap gives it more favorable optoelectronic properties than the indirect band gap of silicon; it can be alloyed to ternary and quaternary compositions, with adjustable band gap width, allowing light emission at chosen wavelengths, which makes possible matching to the wavelengths most efficiently transmitted through optical fibers. GaAs can also be grown in a semi-insulating form, which is suitable as a lattice-matching insulating substrate for GaAs devices. Conversely, silicon is robust, cheap, and easy to process, whereas GaAs is brittle and expensive, and insulation layers cannot be created by just growing an oxide layer; GaAs is therefore used only where silicon is not sufficient. By alloying multiple compounds, some semiconductor materials are tunable, e.g., in band gap or lattice constant. The result is ternary, quaternary, or even quinary compositions.
https://en.wikipedia.org/wiki/Fixed-point%20arithmetic
In computing, fixed-point is a method of representing fractional (non-integer) numbers by storing a fixed number of digits of their fractional part. Dollar amounts, for example, are often stored with exactly two fractional digits, representing the cents (1/100 of a dollar). More generally, the term may refer to representing fractional values as integer multiples of some fixed small unit, e.g. a fractional amount of hours as an integer multiple of ten-minute intervals. Fixed-point number representation is often contrasted to the more complicated and computationally demanding floating-point representation. In the fixed-point representation, the fraction is often expressed in the same number base as the integer part, but using negative powers of the base b. The most common variants are decimal (base 10) and binary (base 2). The latter is commonly known also as binary scaling. Thus, if n fraction digits are stored, the value will always be an integer multiple of b^−n. Fixed-point representation can also be used to omit the low-order digits of integer values, e.g. when representing large dollar values as multiples of $1000. When decimal fixed-point numbers are displayed for human reading, the fraction digits are usually separated from those of the integer part by a radix character (usually '.' in English, but ',' or some other symbol in many other languages). Internally, however, there is no separation, and the distinction between the two groups of digits is defined only by the programs that handle such numbers. Fixed-point representation was the norm in mechanical calculators. Since most modern processors have a fast floating-point unit (FPU), fixed-point representations are now used only in special situations, such as in low-cost embedded microprocessors and microcontrollers; in applications that demand high speed and/or low power consumption and/or small chip area, like image, video, and digital signal processing; or when their use is more natural for the problem.
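Binary scaling as described above stores a value v as the integer round(v · 2ⁿ); addition of two values at the same scale is then plain integer addition, while multiplication needs one rescaling shift. A sketch assuming a Q16.16 format (16 integer bits, 16 fraction bits):

```python
FRAC_BITS = 16
SCALE = 1 << FRAC_BITS          # 2**16: one unit of the fractional grid

def to_fix(x):
    """Encode a real value as a Q16.16 integer."""
    return int(round(x * SCALE))

def to_float(f):
    """Decode a Q16.16 integer back to a float."""
    return f / SCALE

def fix_add(a, b):
    return a + b                 # same scale: ordinary integer addition

def fix_mul(a, b):
    return (a * b) >> FRAC_BITS  # product is at scale 2**32; shift back
```

Values exactly representable on the 2⁻¹⁶ grid (such as 1.5 and 2.25 below) round-trip and multiply exactly; other values are rounded to the nearest multiple of 2⁻¹⁶.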
https://en.wikipedia.org/wiki/Fixed%20point%20%28mathematics%29
{{hatnote|1=Fixed points in mathematics are not to be confused with other uses of "fixed point", or stationary points where {{math|1=f(x) = 0}}.}} In mathematics, a fixed point (sometimes shortened to fixpoint), also known as an invariant point, is a value that does not change under a given transformation. Specifically, for functions, a fixed point is an element that is mapped to itself by the function. Fixed point of a function Formally, is a fixed point of a function if belongs to both the domain and the codomain of , and . For example, if is defined on the real numbers by then 2 is a fixed point of , because . Not all functions have fixed points: for example, , has no fixed points, since is never equal to for any real number. In graphical terms, a fixed-point means the point is on the line , or in other words the graph of has a point in common with that line. Fixed point iteration In numerical analysis, fixed-point iteration is a method of computing fixed points of a function. Specifically, given a function with the same domain and codomain, a point in the domain of , the fixed-point iteration is which gives rise to the sequence of iterated function applications which is hoped to converge to a point . If is continuous, then one can prove that the obtained is a fixed point of . The notions of attracting fixed points, repelling fixed points, and periodic points are defined with respect to fixed-point iteration. Fixed-point theorems A fixed-point theorem is a result saying that at least one fixed point exists, under some general condition. For example, the Banach fixed-point theorem (1922) gives a general criterion guaranteeing that, if it is satisfied, Fixed-point iteration will always converge to a fixed point. The Brouwer fixed-point theorem (1911) says that any continuous function from the closed unit ball in n-dimensional Euclidean space to itself must have a fixed point, but it doesn't describe how to find the fixed point. Th
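Fixed-point iteration is a one-liner to implement. As an example, iterating f = cos from x₀ = 1 converges to the unique real fixed point of cosine (the Dottie number, approximately 0.739085). The function name and stopping rule below are illustrative choices:

```python
import math

def fixed_point_iteration(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x -> f(x) until successive values differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")
```

Convergence here is guaranteed by the Banach fixed-point theorem mentioned above, since |cos′(x)| = |sin(x)| < 1 near the fixed point.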
https://en.wikipedia.org/wiki/Theta%20function
In mathematics, theta functions are special functions of several complex variables. They show up in many topics, including Abelian varieties, moduli spaces, quadratic forms, and solitons. As Grassmann algebras, they appear in quantum field theory. The most common form of theta function is that occurring in the theory of elliptic functions. With respect to one of the complex variables (conventionally called ), a theta function has a property expressing its behavior with respect to the addition of a period of the associated elliptic functions, making it a quasiperiodic function. In the abstract theory this quasiperiodicity comes from the cohomology class of a line bundle on a complex torus, a condition of descent. One interpretation of theta functions when dealing with the heat equation is that "a theta function is a special function that describes the evolution of temperature on a segment domain subject to certain boundary conditions". Throughout this article, should be interpreted as (in order to resolve issues of choice of branch). Jacobi theta function There are several closely related functions called Jacobi theta functions, and many different and incompatible systems of notation for them. One Jacobi theta function (named after Carl Gustav Jacob Jacobi) is a function defined for two complex variables and , where can be any complex number and is the half-period ratio, confined to the upper half-plane, which means it has positive imaginary part. It is given by the formula where is the nome and . It is a Jacobi form. The restriction ensures that it is an absolutely convergent series. At fixed , this is a Fourier series for a 1-periodic entire function of . Accordingly, the theta function is 1-periodic in : By completing the square, it is also -quasiperiodic in , with Thus, in general, for any integers and . For any fixed , the function is an entire function on the complex plane, so by Liouville's theorem, it cannot be doubly periodic in unless it
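Because of the n² in the exponent, the defining series converges extremely fast whenever Im τ > 0, so a short truncated sum suffices for numerical work. The sketch below also checks the 1-periodicity in z and the τ-quasiperiodicity stated above (the truncation length is an illustrative choice):

```python
import cmath

def theta(z, tau, n_terms=40):
    """Truncated Jacobi theta series:
    sum over n of exp(pi*i*n^2*tau + 2*pi*i*n*z), requires Im(tau) > 0."""
    return sum(cmath.exp(cmath.pi * 1j * (n * n * tau + 2 * n * z))
               for n in range(-n_terms, n_terms + 1))
```

With τ = i/2, for example, the term at |n| = 40 already has magnitude around exp(−800π), so 81 terms give full double precision.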
https://en.wikipedia.org/wiki/Epitaxy
Epitaxy (prefix epi- means "on top of”) refers to a type of crystal growth or material deposition in which new crystalline layers are formed with one or more well-defined orientations with respect to the crystalline seed layer. The deposited crystalline film is called an epitaxial film or epitaxial layer. The relative orientation(s) of the epitaxial layer to the seed layer is defined in terms of the orientation of the crystal lattice of each material. For most epitaxial growths, the new layer is usually crystalline and each crystallographic domain of the overlayer must have a well-defined orientation relative to the substrate crystal structure. Epitaxy can involve single-crystal structures, although grain-to-grain epitaxy has been observed in granular films. For most technological applications, single domain epitaxy, which is the growth of an overlayer crystal with one well-defined orientation with respect to the substrate crystal, is preferred. Epitaxy can also play an important role while growing superlattice structures. The term epitaxy comes from the Greek roots epi (ἐπί), meaning "above", and taxis (τάξις), meaning "an ordered manner". One of the main commercial applications of epitaxial growth is in the semiconductor industry, where semiconductor films are grown epitaxially on semiconductor substrate wafers. For the case of epitaxial growth of a planar film atop a substrate wafer, the epitaxial film's lattice will have a specific orientation relative to the substrate wafer's crystalline lattice such as the [001] Miller index of the film aligning with the [001] index of the substrate. In the simplest case, the epitaxial layer can be a continuation of the same exact semiconductor compound as the substrate; this is referred to as homoepitaxy. Otherwise, the epitaxial layer will be composed of a different compound; this is referred to as heteroepitaxy. Types Homoepitaxy is a kind of epitaxy performed with only one material, in which a crystalline film is gr
https://en.wikipedia.org/wiki/Key%20derivation%20function
In cryptography, a key derivation function (KDF) is a cryptographic algorithm that derives one or more secret keys from a secret value such as a master key, a password, or a passphrase using a pseudorandom function (which typically uses a cryptographic hash function or block cipher). KDFs can be used to stretch keys into longer keys or to obtain keys of a required format, such as converting a group element that is the result of a Diffie–Hellman key exchange into a symmetric key for use with AES. Keyed cryptographic hash functions are popular examples of pseudorandom functions used for key derivation. History The first deliberately slow (key stretching) password-based key derivation function was called "crypt" (or "crypt(3)" after its man page), and was invented by Robert Morris in 1978. It would encrypt a constant (zero), using the first 8 characters of the user's password as the key, by performing 25 iterations of a modified DES encryption algorithm (in which a 12-bit number read from the real-time computer clock is used to perturb the calculations). The resulting 64-bit number is encoded as 11 printable characters and then stored in the Unix password file. While it was a great advance at the time, increases in processor speeds since the PDP-11 era have made brute-force attacks against crypt feasible, and advances in storage have rendered the 12-bit salt inadequate. The crypt function's design also limits the user password to 8 characters, which limits the keyspace and makes strong passphrases impossible. Although high throughput is a desirable property in general-purpose hash functions, the opposite is true in password security applications in which defending against brute-force cracking is a primary concern. The growing use of massively-parallel hardware such as GPUs, FPGAs, and even ASICs for brute-force cracking has made the selection of a suitable algorithm even more critical, because a good algorithm should not only enforce a certain amount of computation
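Python's standard library ships PBKDF2, a widely used, deliberately slow password-based KDF in the key-stretching tradition described above. A minimal sketch; the function name and the iteration count are illustrative, and real deployments should follow current guidance on iteration counts:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes,
               iterations: int = 100_000, length: int = 32) -> bytes:
    """Derive a fixed-length symmetric key from a password using
    PBKDF2-HMAC-SHA256; the iteration count is the deliberate slowdown."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, iterations, dklen=length)

salt = os.urandom(16)   # per-password random salt, stored with the hash
key = derive_key("correct horse battery staple", salt)
```

The derivation is deterministic for a given password, salt, and iteration count, and changing any input changes the key, which is exactly what lets a verifier recompute and compare keys without storing the password.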
https://en.wikipedia.org/wiki/Triskelion
A triskelion or triskeles is an ancient motif consisting of a triple spiral exhibiting rotational symmetry or other patterns in triplicate that emanate from a common center. The spiral design can be based on interlocking Archimedean spirals, or represent three bent human legs. It is found in artifacts of the European Neolithic and Bronze Age with continuation into the Iron Age especially in the context of the La Tène culture and related Celtic traditions. The actual triskeles symbol of three human legs is found especially in Greek antiquity, beginning in archaic pottery and continued in coinage of the classical period. In the Hellenistic period, the symbol becomes associated with the island of Sicily, appearing on coins minted under Dionysius I of Syracuse beginning in BCE. It later appears in heraldry, and, other than in the flag of Sicily, came to be used in the flag of the Isle of Man (known as ny tree cassyn 'the three legs'). Greek (triskelḗs) means 'three-legged'. While the Greek adjective 'three-legged (e.g., of a table)' is ancient, use of the term for the symbol is modern, introduced in 1835 by Honoré Théodoric d'Albert de Luynes as French , and adopted in the spelling triskeles following Otto Olshausen (1886). The form triskelion (as it were Greek ) is a diminutive which entered English usage in numismatics in the late 19th century. The form consisting of three human legs (as opposed to the triple spiral) has also been called a "triquetra of legs", also triskelos or triskel. Use in European antiquity Neolithic to Iron Age The triple spiral symbol, or three spiral volute, appears in many early cultures, the first in Malta (4400–3600 BCE) and in the astronomical calendar at the famous megalithic tomb of Newgrange in Ireland built around 3200 BCE, as well as on Mycenaean vessels. The Neolithic era symbol of three conjoined spirals may have had triple significance similar to the imagery that lies behind the triskelion. It is carved into the rock
https://en.wikipedia.org/wiki/Jacobi%20elliptic%20functions
In mathematics, the Jacobi elliptic functions are a set of basic elliptic functions. They are found in the description of the motion of a pendulum (see also pendulum (mathematics)), as well as in the design of electronic elliptic filters. While trigonometric functions are defined with reference to a circle, the Jacobi elliptic functions are a generalization which refer to other conic sections, the ellipse in particular. The relation to trigonometric functions is contained in the notation, for example, by the matching notation sn for sin. The Jacobi elliptic functions are used more often in practical problems than the Weierstrass elliptic functions as they do not require notions of complex analysis to be defined and/or understood. They were introduced by Carl Gustav Jacob Jacobi (1829). Carl Friedrich Gauss had already studied special Jacobi elliptic functions in 1797, the lemniscate elliptic functions in particular, but his work was published much later. Overview There are twelve Jacobi elliptic functions denoted by pq(u, m), where p and q are any of the letters c, s, n, and d. (Functions of the form pp(u, m) are trivially set to unity for notational completeness.) u is the argument, and m is the parameter, both of which may be complex. In fact, the Jacobi elliptic functions are meromorphic in both u and m. The distribution of the zeros and poles in the u-plane is well known. However, questions of the distribution of the zeros and poles in the m-plane remain to be investigated. In the complex plane of the argument u, the twelve functions form a repeating lattice of simple poles and zeroes. Depending on the function, one repeating parallelogram, or unit cell, will have sides of length 2K or 4K on the real axis, and 2K′ or 4K′ on the imaginary axis, where K = K(m) and K′ = K(1 − m) are known as the quarter periods, with K(⋅) being the complete elliptic integral of the first kind. The nature of the unit cell can be determined by inspecting the "auxiliary rectangle" (generally a parallelogram), which is a rectangle formed by the origin (0, 0) at one corner, and (K, K′) as the diagonally opposite corner.
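The defining relation u = F(φ, m) with sn(u, m) = sin φ can be sketched numerically with stdlib-only Python. This is an illustrative sketch restricted to real 0 ≤ u ≤ K(m), not a production implementation; the helper names `ell_f` and `sn` are this example's own choices:

```python
import math

def ell_f(phi, m, steps=4000):
    """Incomplete elliptic integral of the first kind F(phi, m),
    approximated by the midpoint rule on its defining integral."""
    h = phi / steps
    return sum(h / math.sqrt(1.0 - m * math.sin((i + 0.5) * h) ** 2)
               for i in range(steps))

def sn(u, m):
    """Jacobi sn for real 0 <= u <= K(m): invert u = F(phi, m) by
    bisection (F is increasing in phi), then return sin(phi)."""
    lo, hi = 0.0, math.pi / 2
    for _ in range(50):
        mid = (lo + hi) / 2
        if ell_f(mid, m) < u:
            lo = mid
        else:
            hi = mid
    return math.sin((lo + hi) / 2)

# For m = 0 the functions degenerate to trigonometry: sn(u, 0) = sin(u).
print(sn(0.5, 0.0))            # ≈ 0.4794 (= sin 0.5)
# At the quarter period K(m) = F(pi/2, m), sn reaches 1.
K = ell_f(math.pi / 2, 0.5)
print(sn(K, 0.5))              # ≈ 1.0
```

For serious work one would use a library routine (e.g. an AGM-based implementation) rather than quadrature plus bisection, but the sketch makes the inversion structure of the definition concrete.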
https://en.wikipedia.org/wiki/Molecular-beam%20epitaxy
Molecular-beam epitaxy (MBE) is an epitaxy method for thin-film deposition of single crystals. MBE is widely used in the manufacture of semiconductor devices, including transistors, and it is considered one of the fundamental tools for the development of nanotechnologies. MBE is used to fabricate diodes and MOSFETs (MOS field-effect transistors) at microwave frequencies, and to manufacture the lasers used to read optical discs (such as CDs and DVDs). History The original ideas behind the MBE process were first established by K. G. Günther. The films he deposited were not epitaxial, but were deposited on glass substrates. With the development of vacuum technology, the MBE process was demonstrated by John Davey and Titus Pankey, who succeeded in growing GaAs epitaxial films on single-crystal GaAs substrates using Günther's method. Major subsequent development of MBE films was enabled by J. R. Arthur's investigations of the kinetic behavior of growth mechanisms and Alfred Y. Cho's in situ observation of the MBE process using reflection high-energy electron diffraction (RHEED) in the late 1960s. Method Molecular-beam epitaxy takes place in high vacuum or ultra-high vacuum (10⁻⁸–10⁻¹² Torr). The most important aspect of MBE is the slow deposition rate (typically less than 3,000 nm per hour), which allows the films to grow epitaxially. These slow deposition rates require proportionally better vacuum to achieve the same impurity levels as other deposition techniques. The absence of carrier gases, as well as the ultra-high-vacuum environment, results in the highest achievable purity of the grown films. In solid-source MBE, elements such as gallium and arsenic, in ultra-pure form, are heated in separate quasi-Knudsen effusion cells or electron-beam evaporators until they begin to slowly sublime. The gaseous elements then condense on the wafer, where they may react with each other. In the example of gallium and arsenic, single-crystal gallium arsenide is formed. When evaporation sources such as copper or gold ar
https://en.wikipedia.org/wiki/Dolby
Dolby Laboratories, Inc. (often shortened to Dolby Labs and known simply as Dolby) is a company specializing in audio noise reduction, audio encoding/compression, spatial audio, and HDR imaging. Dolby licenses its technologies to consumer electronics manufacturers. History Dolby Labs was founded by Ray Dolby (1933–2013) in London, England, in 1965. In the same year, he invented the Dolby Noise Reduction system, a form of audio signal processing for reducing the background hissing sound on cassette tape recordings. His first U.S. patent on the technology was filed in 1969, four years later. The method was first used by Decca Records in the UK. After this, other companies began purchasing Dolby’s A301 technology, which was the professional noise reduction system used in recording, motion picture, broadcasting stations and communications networks. These companies include BBC, Pye, IBC, CBS Studios, RCA, and Granada. He moved the company headquarters to the United States (San Francisco, California) in 1976. The first product Dolby Labs produced was the Dolby 301 unit which incorporated Type A Dolby Noise Reduction, a compander-based noise reduction system. These units were intended for use in professional recording studios. Dolby was persuaded by Henry Kloss of KLH to manufacture a consumer version of his noise reduction. Dolby worked more on companding systems and introduced Type B in 1968. Dolby also sought to improve film sound. As the corporation's history explains: Upon investigation, Dolby found that many of the limitations in optical sound stemmed directly from its significantly high background noise. To filter this noise, the high-frequency response of theatre playback systems was deliberately curtailed… To make matters worse, to increase dialogue intelligibility over such systems, sound mixers were recording soundtracks with so much high-frequency pre-emphasis that high distortion resulted. The first film with Dolby sound was A Clockwork Orange (1971). The
https://en.wikipedia.org/wiki/Personal%20firewall
A personal firewall is an application which controls network traffic to and from a computer, permitting or denying communications based on a security policy. Typically it works as an application layer firewall. A personal firewall differs from a conventional firewall in terms of scale. A personal firewall will usually protect only the computer on which it is installed, as compared to a conventional firewall which is normally installed on a designated interface between two or more networks, such as a router or proxy server. Hence, personal firewalls allow a security policy to be defined for individual computers, whereas a conventional firewall controls the policy between the networks that it connects. The per-computer scope of personal firewalls is useful to protect machines that are moved across different networks. For example, a laptop computer may be used on a trusted intranet at a workplace where minimal protection is needed as a conventional firewall is already in place, and services that require open ports such as file and printer sharing are useful. The same laptop could be used at public Wi-Fi hotspots, where it may be necessary to decide the level of trust and reconfigure firewall settings to limit traffic to and from the computer. A firewall can be configured to allow different security policies for each network. Unlike network firewalls, many personal firewalls are able to control network traffic allowed to programs on the secured computer. When an application attempts an outbound connection, the firewall may block it if blacklisted, or ask the user whether to blacklist it if it is not yet known. This protects against malware implemented as an executable program. Personal firewalls may also provide some level of intrusion detection, allowing the software to terminate or block connectivity where it suspects an intrusion is being attempted. Features Common personal firewall features: Block or alert the user about all unauthorized inbound or outbound c
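The per-program allow/deny/ask behavior described above can be sketched as a tiny first-match rule-evaluation loop. This is a hypothetical illustration: the rule fields and the `decide` function are inventions of this sketch, not any real firewall's API:

```python
# Each rule: (program, direction, action). First match wins; programs
# not covered by any rule fall through to "ask" (prompt the user),
# mirroring how personal firewalls handle not-yet-known executables.
RULES = [
    ("browser.exe", "outbound", "allow"),
    ("malware.exe", "outbound", "deny"),   # blacklisted executable
    ("*",           "inbound",  "deny"),   # default-deny unsolicited inbound
]

def decide(program, direction):
    for rule_prog, rule_dir, action in RULES:
        if rule_prog in (program, "*") and rule_dir == direction:
            return action
    return "ask"   # unknown program: ask the user whether to blacklist it

print(decide("browser.exe", "outbound"))   # allow
print(decide("malware.exe", "outbound"))   # deny
print(decide("unknown.exe", "outbound"))   # ask
```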
https://en.wikipedia.org/wiki/Elliptic%20Curve%20Digital%20Signature%20Algorithm
In cryptography, the Elliptic Curve Digital Signature Algorithm (ECDSA) offers a variant of the Digital Signature Algorithm (DSA) which uses elliptic-curve cryptography. Key and signature-size As with elliptic-curve cryptography in general, the bit size of the private key believed to be needed for ECDSA is about twice the size of the security level, in bits. For example, at a security level of 80 bits—meaning an attacker requires a maximum of about 2⁸⁰ operations to find the private key—the size of an ECDSA private key would be 160 bits. On the other hand, the signature size is the same for both DSA and ECDSA: approximately 4t bits, where t is the exponent in the formula 2ᵗ, that is, about 320 bits for a security level of 80 bits, which is equivalent to 2⁸⁰ operations. Signature generation algorithm Suppose Alice wants to send a signed message to Bob. Initially, they must agree on the curve parameters (CURVE, G, n). In addition to the field and equation of the curve, we need G, a base point of prime order on the curve; n is the multiplicative order of the point G. The order n of the base point must be prime. Indeed, we assume that every nonzero element of the ring Z/nZ is invertible, so that Z/nZ must be a field. It implies that n must be prime (cf. Bézout's identity). Alice creates a key pair, consisting of a private key integer d_A, randomly selected in the interval [1, n − 1]; and a public key curve point Q_A = d_A × G. We use × to denote elliptic curve point multiplication by a scalar. For Alice to sign a message m, she follows these steps: Calculate e = HASH(m). (Here HASH is a cryptographic hash function, such as SHA-2, with the output converted to an integer.) Let z be the L_n leftmost bits of e, where L_n is the bit length of the group order n. (Note that z can be greater than n but not longer.) Select a cryptographically secure random integer k from [1, n − 1]. Calculate the curve point (x₁, y₁) = k × G. Calculate r = x₁ mod n. If r = 0, go back to step 3. Calculate s = k⁻¹(z + r·d_A) mod n. If s = 0, go back to step 3. The signature is the pair (r, s). (And (r, −s mod n) is also a valid signature.) As the standa
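The signing steps above, together with the matching verification equation, can be exercised with a stdlib-only sketch over the secp256k1 curve (whose public constants are used below). This is a didactic implementation — no constant-time arithmetic or other hardening — not something to use for real signatures:

```python
import hashlib
import secrets

# secp256k1: y^2 = x^3 + 7 over F_p, base point G of prime order n
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def inv(a, m):
    return pow(a, -1, m)                 # modular inverse (Python >= 3.8)

def add(P, Q):
    """Affine point addition on y^2 = x^3 + 7 (curve coefficient a = 0)."""
    if P is None: return Q               # None encodes the point at infinity
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
    if P == Q:
        lam = 3 * P[0] * P[0] * inv(2 * P[1], p) % p
    else:
        lam = (Q[1] - P[1]) * inv(Q[0] - P[0], p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    """Double-and-add scalar multiplication k x P."""
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P); k >>= 1
    return R

def sign(msg, d):
    z = int.from_bytes(hashlib.sha256(msg).digest(), 'big')
    while True:
        k = secrets.randbelow(n - 1) + 1     # fresh random nonce each attempt
        x1, _ = mul(k, G)
        r = x1 % n
        if r == 0: continue                  # step 3 retry cases
        s = inv(k, n) * (z + r * d) % n
        if s: return (r, s)

def verify(msg, sig, Q):
    r, s = sig
    if not (0 < r < n and 0 < s < n): return False
    z = int.from_bytes(hashlib.sha256(msg).digest(), 'big')
    w = inv(s, n)
    pt = add(mul(z * w % n, G), mul(r * w % n, Q))
    return pt is not None and pt[0] % n == r

d = secrets.randbelow(n - 1) + 1     # private key d_A
Q = mul(d, G)                        # public key Q_A = d_A x G
sig = sign(b"hello", d)
print(verify(b"hello", sig, Q))      # True
print(verify(b"tampered", sig, Q))   # False
```

Note how verification recomputes the point u₁G + u₂Q with u₁ = zw, u₂ = rw, w = s⁻¹ mod n, and checks its x-coordinate against r — the algebraic inverse of the signing equation.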
https://en.wikipedia.org/wiki/Zero-knowledge%20proof
In cryptography, a zero-knowledge proof or zero-knowledge protocol is a method by which one party (the prover) can prove to another party (the verifier) that a given statement is true, while avoiding conveying to the verifier any information beyond the mere fact of the statement's truth. The intuition underlying zero-knowledge proofs is that it is trivial to prove possession of certain information by simply revealing it; the challenge is to prove this possession without revealing the information, or any aspect of it whatsoever. Because one should be able to generate a proof of some statement only when in possession of certain secret information connected to the statement, the verifier, even after having become convinced of the statement's truth, should nonetheless remain unable to prove the statement to third parties. In the plain model, nontrivial zero-knowledge proofs (i.e., those for languages outside of BPP) demand interaction between the prover and the verifier. This interaction usually entails the selection of one or more random challenges by the verifier; the random origin of these challenges, together with the prover's successful responses to them, jointly convince the verifier that the prover does possess the claimed knowledge. Without interaction, the verifier, having obtained the protocol's execution transcript—that is, the prover's one and only message—could replay that transcript to a third party, thereby convincing the third party that the verifier too possessed the secret information. In the common random string and random oracle models, non-interactive zero-knowledge proofs exist via the Fiat–Shamir heuristic. Such proofs, in practice, rely on computational assumptions (typically the collision-resistance of a cryptographic hash function). Abstract examples The Ali Baba cave There is a well-known story presenting the fundamental ideas of zero-knowledge proofs, first published in 19
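The commit–challenge–response structure described above can be illustrated with a toy Schnorr identification protocol, a classic honest-verifier zero-knowledge proof of knowledge of a discrete logarithm. The parameters below are deliberately tiny for readability and completely insecure in practice:

```python
import secrets

# Subgroup of prime order q = 11 inside Z_23*: g = 2 has order 11 mod 23.
p, q, g = 23, 11, 2
x = 7                      # prover's secret
y = pow(g, x, p)           # public value; prover claims to know log_g(y)

def round_of_protocol():
    """One interactive round: commit, random challenge, response, check."""
    r = secrets.randbelow(q)          # prover picks random r, commits t = g^r
    t = pow(g, r, p)
    c = secrets.randbelow(q)          # verifier's random challenge
    s = (r + c * x) % q               # prover's response blends r and x
    # verifier accepts iff g^s == t * y^c (mod p)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# A cheating prover passes each round with probability ~1/q, so soundness
# accumulates over repeated rounds; an honest prover always passes.
print(all(round_of_protocol() for _ in range(20)))   # True
```

The transcript (t, c, s) alone is simulatable without knowing x — which is exactly why, per the paragraph above, replaying it to a third party proves nothing.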
https://en.wikipedia.org/wiki/Trivial%20representation
In the mathematical field of representation theory, a trivial representation is a representation (V, φ) of a group G in which all elements of G act as the identity mapping of V. A trivial representation of an associative or Lie algebra is a (Lie) algebra representation for which all elements of the algebra act as the zero linear map (endomorphism), which sends every element of V to the zero vector. For any group or Lie algebra, an irreducible trivial representation always exists over any field; it is one-dimensional, hence unique up to isomorphism. The same is true for associative algebras unless one restricts attention to unital algebras and unital representations. Although the trivial representation is constructed in such a way as to make its properties seem tautologous, it is a fundamental object of the theory. A subrepresentation is equivalent to a trivial representation, for example, if it consists of invariant vectors; searching for such subrepresentations is thus the whole topic of invariant theory. The trivial character is the character that takes the value of one for all group elements.
https://en.wikipedia.org/wiki/Hellschreiber
The Hellschreiber, Feldhellschreiber or Typenbildfeldfernschreiber (also Hell-Schreiber named after its inventor Rudolf Hell) is a facsimile-based teleprinter invented by Rudolf Hell. Compared to contemporary teleprinters that were based on typewriter systems and were mechanically complex and expensive, the Hellschreiber was much simpler and more robust, with far fewer moving parts. It has the added advantage of being capable of providing intelligible communication even over very poor quality radio or cable links, where voice or other teledata would be unintelligible. The device was first developed in the late 1920s, and saw use starting in the 1930s, chiefly being used for landline press services. During World War II it was sometimes used by the German military in conjunction with the Enigma encryption system. In the post-war era, it became increasingly common among newswire services, and was used in this role well into the 1980s. Today, the Hellschreiber is used as a means of communication by amateur radio operators using computers and sound cards; the resulting mode is referred to as Hellschreiber, Feld-Hell, or simply Hell. Operation Hellschreiber sends a line of text as a series of vertical columns. Each column is broken down vertically into a series of pixels, normally using a 7 by 7 pixel grid to represent characters. The data for a line is then sent as a series of on-off signals to the receiver, using a variety of formats depending on the medium, but normally at a rate of 112.5 baud. At the receiver end, a paper tape is fed at a constant speed over a roller. Located above the roller is a spinning cylinder with small bumps in a helical pattern on the surface. The received signal is amplified and sent to a magnetic actuator that pulls the cylinder down onto the roller, hammering out a dot into the surface of the paper. A Hellschreiber will print each received column twice, one below the other. This is to compensate for slight timing errors that are often p
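The column-by-column serialization described above can be sketched in a few lines of Python. The 7×7 glyph, the scan order, and the helper names below are hypothetical stand-ins chosen for this sketch — the actual Feld-Hell font and column ordering differ:

```python
# A hypothetical 7x7 glyph for the letter "L" (1 = ink, 0 = blank).
GLYPH_L = [
    "1000000",
    "1000000",
    "1000000",
    "1000000",
    "1000000",
    "1000000",
    "1111111",
]

def columns(glyph):
    """Serialize a glyph as vertical columns of on/off pixels,
    left to right, each column read bottom to top (order chosen
    here purely for illustration)."""
    cols = []
    for x in range(7):
        cols.append([int(glyph[6 - y][x]) for y in range(7)])
    return cols

cols = columns(GLYPH_L)
print(len(cols))   # 7 columns per character
print(cols[0])     # leftmost column is the full vertical stroke: all ones
```

In the real system each on/off pixel in this stream keys the transmitter, and the receiver simply hammers a dot for every "on" interval — no character decoding is needed at all.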
https://en.wikipedia.org/wiki/Price%E2%80%93performance%20ratio
In economics, engineering, business management and marketing, the price–performance ratio (often written as cost–performance, cost–benefit or capability/price, C/P) refers to a product's ability to deliver performance, of any sort, for its price. Generally speaking, products with a lower price–performance ratio are more desirable, excluding other factors. Even though the term would seem to denote a straightforward ratio, when price–performance is said to be improved, better, or increased, it actually refers to the performance divided by the price, in other words exactly the opposite ratio (i.e. an inverse ratio); a product ranked as having increased price–performance delivers more performance per unit of price. Background of appearance Due to prolonged low growth and economic slumps, the proportion of income devoted to consumption inevitably decreases. However, consumers cannot completely give up consumption, so they find ways to maintain a similar level of consumption at minimum cost. Examples Consumer and medical products According to futurist Raymond Kurzweil, products start out as highly ineffective and highly expensive. Gradually, products become more effective and cheaper until they are highly effective and almost free to buy. Some of the products that have followed this pattern include AIDS medications (which are now affordable compared to initial pricing), text-to-speech programs, and digital cameras. However, products that rely primarily on paper (e.g., newspapers and toilet paper) and/or fossil fuels (e.g., electricity in most countries and petroleum gasoline for automobiles) have only increased in price. This directly contradicts the trend of electronic gadgets like netbooks, desktop computers, and laptop computers, which have been decreasing in price. However, the prevailing inflation rate of a country or province/state may negate the plummeting costs of software, AIDS medications, and/or digital cameras in certain regions, along with certain governmental policies. This
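The inverse-ratio point can be made concrete with hypothetical numbers: when a product is said to have "better price–performance", the ranking is by performance divided by price, so higher is better:

```python
# (name, price in dollars, performance in arbitrary benchmark units)
products = [("A", 200.0, 500.0), ("B", 300.0, 600.0)]

# "Better price-performance" in everyday usage = more performance
# per unit price, i.e. rank by performance / price, descending.
ranked = sorted(products, key=lambda t: t[2] / t[1], reverse=True)
print(ranked[0][0])   # A: 500/200 = 2.5 units/$ beats B's 600/300 = 2.0
```

Note that ranking by the literal price/performance quotient (price per unit of performance, lower is better) produces the same ordering; the terminology is simply inverted relative to the arithmetic.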
https://en.wikipedia.org/wiki/Display%20device
A display device is an output device for presentation of information in visual or tactile form (the latter used for example in tactile electronic displays for blind people). When the input information that is supplied has an electrical signal the display is called an electronic display. Common applications for electronic visual displays are television sets or computer monitors. Types of electronic displays In use These are the technologies used to create the various displays in use today. Liquid crystal display (LCD) Light-emitting diode (LED) backlit LCD Thin-film transistor (TFT) LCD Quantum dot (QLED) display Light-emitting diode (LED) display OLED display AMOLED display Super AMOLED display Segment displays Some displays can show only digits or alphanumeric characters. They are called segment displays, because they are composed of several segments that switch on and off to give appearance of desired glyph. The segments are usually single LEDs or liquid crystals. They are mostly used in digital watches and pocket calculators. Common types are seven-segment displays which are used for numerals only, and alphanumeric fourteen-segment displays and sixteen-segment displays which can display numerals and Roman alphabet letters. Other types Vacuum fluorescent display Electroluminescent (ELD) display Plasma (PDP) display Laser-powered phosphor display Cathode-ray tubes were also formerly widely used. Full-area 2-dimensional displays 2-dimensional displays that cover a full area (usually a rectangle) are also called video displays, since it is the main modality of presenting video. Applications of full-area 2-dimensional displays Full-area 2-dimensional displays are used in, for example: Television set Computer monitors Head-mounted displays, Heads-up displays and Virtual reality headsets Broadcast reference monitor Medical monitors Mobile displays (for mobile devices) Smartphone displays (for smartphones) Video walls Underlying techno
https://en.wikipedia.org/wiki/Flat%20memory%20model
Flat memory model or linear memory model refers to a memory addressing paradigm in which "memory appears to the program as a single contiguous address space." The CPU can directly (and linearly) address all of the available memory locations without having to resort to any sort of bank switching, memory segmentation or paging schemes. Memory management and address translation can still be implemented on top of a flat memory model in order to facilitate the operating system's functionality, resource protection, multitasking or to increase the memory capacity beyond the limits imposed by the processor's physical address space, but the key feature of a flat memory model is that the entire memory space is linear, sequential and contiguous. In a simple controller, or in a single tasking embedded application, where memory management is not needed nor desirable, the flat memory model is the most appropriate, because it provides the simplest interface from the programmer's point of view, with direct access to all memory locations and minimum design complexity. In a general purpose computer system, which requires multitasking, resource allocation, and protection, the flat memory system must be augmented by some memory management scheme, which is typically implemented through a combination of dedicated hardware (inside or outside the CPU) and software built into the operating system. The flat memory model (at the physical addressing level) still provides the greatest flexibility for implementing this type of memory management. Memory models Most modern memory models fall into one of three categories: Flat memory model Simple interface for programmers, clean design Greatest flexibility due to uniform access speed (segmented memory page switches usually incur varied latency due to longer accesses of other pages, either due to extra CPU logic in changing page, or hardware requirements) Minimum hardware and CPU real estate for simple controller applications Maximum execution
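Flat addressing can be sketched by laying a two-dimensional array over a single contiguous buffer: every element's location is one linear offset from the base, with no segment or bank arithmetic. The Python illustration below (names are this sketch's own) uses a `bytearray` as the "address space":

```python
import struct

ROWS, COLS, ITEM = 4, 8, 4               # a 4x8 grid of 4-byte integers
memory = bytearray(ROWS * COLS * ITEM)   # one contiguous linear address space

def addr(row, col):
    """Row-major flat address: offset = (row * COLS + col) * ITEM.
    A single multiply-add, no segment/page selection involved."""
    return (row * COLS + col) * ITEM

def store(row, col, value):
    memory[addr(row, col):addr(row, col) + ITEM] = struct.pack("<i", value)

def load(row, col):
    return struct.unpack_from("<i", memory, addr(row, col))[0]

store(2, 5, 1234)
print(addr(2, 5))   # (2*8 + 5) * 4 = 84
print(load(2, 5))   # 1234
```

Under a segmented or banked model the same access would first require selecting the correct segment or bank; the flat model reduces every access to this one offset computation.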
https://en.wikipedia.org/wiki/His-tag
A polyhistidine-tag, best known by the trademarked name His-tag, is an amino acid motif in proteins that typically consists of at least six histidine (His) residues, often at the N- or C-terminus of the protein. It is also known as a hexa histidine-tag, 6xHis-tag, or His6 tag. The tag was invented by Roche, although the use of histidines and its vectors are distributed by Qiagen. Various purification kits for histidine-tagged proteins are commercially available from multiple companies. The total number of histidine residues may vary in the tag from as low as two, to as high as 10 or more His residues. N- or C-terminal His-tags may also be followed or preceded, respectively, by a suitable amino acid sequence that facilitates removal of the polyhistidine-tag using endopeptidases. This extra sequence is not necessary if exopeptidases are used to remove N-terminal His-tags (e.g., Qiagen TAGZyme). Furthermore, exopeptidase cleavage may solve the unspecific cleavage observed when using endoprotease-based tag removal. Polyhistidine-tags are often used for affinity purification of genetically modified proteins. Principle Proteins can coordinate metal ions on their surface and it is possible to separate proteins using chromatography by making use of the difference in their affinity to metal ions. This is termed as immobilized metal ion affinity chromatography (IMAC), as originally introduced in 1975 under the name metal chelate affinity chromatography. Subsequent studies have revealed that among amino acids constituting proteins, histidine is strongly involved in the coordination complex with metal ions. Therefore, if a number of histidines are added to the end of the protein, the affinity of the protein for the metal ion is increased and this can be exploited to selectively isolate the protein of interest. When a protein with a His-tag is brought into contact with a carrier on which a metal ion such as nickel is immobilized, the histidine residue chelates the metal io
https://en.wikipedia.org/wiki/Soundtrack%20Pro
Soundtrack Pro is a discontinued music composing and audio editing application made by Apple Inc. It included a collection of just over 5,000 royalty free professional instrument loops and sound effects for use. The software was featured in the Logic Studio and Final Cut Studio software bundles; It was discontinued with the release of Final Cut Pro X, Motion 5, and Compressor 4. History An earlier incarnation of the package, Soundtrack, was first sold as part of Apple's Final Cut Pro 4 video editor, released in April 2003. It was then released as a stand-alone product, but due to low demand it was discontinued. The main concept of Soundtrack was to allow people who are not professional composers to produce original music for their videos or DVDs, royalty-free. The music could be easily scored to QuickTime movies and Final Cut Pro projects. Soundtrack was reinstated in January 2005 as part of Final Cut Express HD. Also see a release history in context with the rest of Final Cut Studio. Version 1 Soundtrack Pro was introduced on April 17, 2005 as a stand-alone product and as part of the Final Cut Studio suite, where it integrates with Final Cut Pro. Soundtrack (non-Pro) was removed from Final Cut Pro. There was also an upgrade package for users of Soundtrack. As of January 10, 2006, the stand-alone product has been dropped again. Soundtrack Pro adds features for professional audio engineers and sound designers, and focuses more on sound editing than the original Soundtrack. Since September 2007 Soundtrack Pro also comes as part of the Logic Studio bundle, Apple's own audio and midi sequencing software. But it was removed from Logic Studio when Logic Pro was moved to the App Store on December 8, 2011. Soundtrack Pro 2 Soundtrack Pro 2 is included in Final Cut Studio 2 (2008). Soundtrack Pro 3 Soundtrack Pro 3 is included in Final Cut Studio (2009). Features Soundtrack Pro has two main modes: multitrack mode and editing mode. Multitrack mode Multitrack mode is w
https://en.wikipedia.org/wiki/Competitive%20exclusion%20principle
In ecology, the competitive exclusion principle, sometimes referred to as Gause's law, is a proposition that two species which compete for the same limited resource cannot coexist at constant population values. When one species has even the slightest advantage over another, the one with the advantage will dominate in the long term. This leads either to the extinction of the weaker competitor or to an evolutionary or behavioral shift toward a different ecological niche. The principle has been paraphrased in the maxim "complete competitors cannot coexist". History The competitive exclusion principle is classically attributed to Georgy Gause, although he never actually formulated it. The principle is already present in Darwin's theory of natural selection. Throughout its history, the status of the principle has oscillated between a priori ('two species coexisting must have different niches') and experimental truth ('we find that species coexisting do have different niches'). Experimental basis Based on field observations, Joseph Grinnell formulated the principle of competitive exclusion in 1904: "Two species of approximately the same food habits are not likely to remain long evenly balanced in numbers in the same region. One will crowd out the other". Georgy Gause formulated the law of competitive exclusion based on laboratory competition experiments using two species of Paramecium, P. aurelia and P. caudatum. The conditions were to add fresh water every day and input a constant flow of food. Although P. caudatum initially dominated, P. aurelia recovered and subsequently drove P. caudatum extinct via exploitative resource competition. However, Gause was able to let the P. caudatum survive by varying the environmental parameters (food, water). Thus, Gause's law is valid only if the ecological factors are constant. Gause also studied competition between two species of yeast, finding that Saccharomyces cerevisiae consistently outcompeted Schizosaccharomyces kefir
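Gause-style exclusion is commonly modeled with the Lotka–Volterra competition equations, dN₁/dt = rN₁(1 − (N₁ + α₁₂N₂)/K) and symmetrically for N₂. The parameter values below are hypothetical, chosen so that species 2 excludes species 1, and the integration is a plain Euler scheme:

```python
# Lotka-Volterra competition: species 1 is hurt more by species 2
# (a12 > 1) than species 2 is hurt by species 1 (a21 < 1), so
# species 2 drives species 1 extinct and settles at its capacity K.
r, K = 1.0, 100.0
a12, a21 = 1.5, 0.7
n1, n2 = 50.0, 50.0      # equal starting populations
dt = 0.05

for _ in range(4000):    # Euler integration out to t = 200
    d1 = r * n1 * (1 - (n1 + a12 * n2) / K)
    d2 = r * n2 * (1 - (n2 + a21 * n1) / K)
    n1, n2 = n1 + d1 * dt, n2 + d2 * dt

print(round(n1, 3), round(n2, 3))   # species 1 near 0, species 2 near K
```

With these coefficients species 1 cannot invade the equilibrium (0, K) — its per-capita growth rate there is 1 − α₁₂ = −0.5 — which is exactly the exclusion condition the principle describes.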
https://en.wikipedia.org/wiki/Passive%20attack
A passive attack on a cryptosystem is one in which the cryptanalyst cannot interact with any of the parties involved, attempting to break the system solely based upon observed data (i.e. the ciphertext). This can also include known plaintext attacks where both the plaintext and its corresponding ciphertext are known. While active attackers can interact with the parties by sending data, a passive attacker is limited to intercepting communications (eavesdropping), and seeks to decrypt data by interpreting the transcripts of authentication sessions. Since passive attackers do not introduce data of their own, they can be difficult to detect. While most classical ciphers are vulnerable to this form of attack, most modern ciphers are designed to prevent this type of attack above all others. Attributes Traffic analysis Non-evasive eavesdropping and monitoring of transmissions Because the data is unaffected, the attack is tricky to detect Emphasis on prevention (encryption), not detection Sometimes referred to as "tapping" The main types of passive attacks are traffic analysis and release of message contents. During a traffic analysis attack, the eavesdropper analyzes the traffic, determines the location, identifies communicating hosts, and observes the frequency and length of exchanged messages. The attacker uses all this information to predict the nature of the communication. All incoming and outgoing traffic of the network is analyzed, but not altered. For a release of message contents, a telephonic conversation, an e-mail message or a transferred file may contain confidential data. A passive attack monitors the contents of the transmitted data. Passive attacks are very difficult to detect because they do not involve any alteration of the data. When messages are exchanged, neither the sender nor the receiver is aware that a third party may capture the messages. This can be prevented by encryption of data. See also Known plaintext attack Chosen plaintext attack Chosen ciphertext attack Adaptive ch
https://en.wikipedia.org/wiki/Undeniable%20signature
An undeniable signature is a digital signature scheme which allows the signer to be selective about whom they allow to verify signatures. The scheme adds explicit signature repudiation, preventing a signer from later refusing to verify a signature by omission; a situation that would devalue the signature in the eyes of the verifier. It was invented by David Chaum and Hans van Antwerpen in 1989. Overview In this scheme, a signer possessing a private key can publish a signature of a message. However, the signature reveals nothing to a recipient/verifier of the message and signature without taking part in either of two interactive protocols: Confirmation protocol, which confirms that a candidate is a valid signature of the message issued by the signer, identified by the public key. Disavowal protocol, which confirms that a candidate is not a valid signature of the message issued by the signer. The motivation for the scheme is to allow the signer to choose to whom signatures are verified. However, the possibility that the signer might claim the signature is invalid at any later point, by refusing to take part in verification, would devalue signatures to verifiers. The disavowal protocol distinguishes these cases, removing the signer's plausible deniability. It is important that the confirmation and disavowal exchanges are not transferable. They achieve this by having the property of zero-knowledge: both parties can create transcripts of both confirmation and disavowal that are indistinguishable, to a third party, from correct exchanges. The designated verifier signature scheme improves upon undeniable signatures by allowing, for each signature, the interactive portion of the scheme to be offloaded onto another party, a designated verifier, reducing the burden on the signer. Zero-knowledge protocol The following protocol was suggested by David Chaum. A group, G, is chosen in which the discrete logarithm problem is intractable, and all operations in the scheme take place in this group. Commo
https://en.wikipedia.org/wiki/Random%20oracle
In cryptography, a random oracle is an oracle (a theoretical black box) that responds to every unique query with a (truly) random response chosen uniformly from its output domain. If a query is repeated, it responds the same way every time that query is submitted. Stated differently, a random oracle is a mathematical function chosen uniformly at random, that is, a function mapping each possible query to a (fixed) random response from its output domain. Random oracles as a mathematical abstraction were first used in rigorous cryptographic proofs in the 1993 publication by Mihir Bellare and Phillip Rogaway (1993). They are typically used when the proof cannot be carried out using weaker assumptions on the cryptographic hash function. A system that is proven secure when every hash function is replaced by a random oracle is described as being secure in the random oracle model, as opposed to secure in the standard model of cryptography. Applications Random oracles are typically used as an idealised replacement for cryptographic hash functions in schemes where strong randomness assumptions are needed of the hash function's output. Such a proof often shows that a system or a protocol is secure by showing that an attacker must require impossible behavior from the oracle, or solve some mathematical problem believed hard in order to break it. However, it only proves such properties in the random oracle model, making sure no major design flaws are present. It is in general not true that such a proof implies the same properties in the standard model. Still, a proof in the random oracle model is considered better than no formal security proof at all. Not all uses of cryptographic hash functions require random oracles: schemes that require only one or more properties having a definition in the standard model (such as collision resistance, preimage resistance, second preimage resistance, etc.) can often be proven secure in the standard model (e.g., the Cramer–Shoup cryptosyst
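The "fresh uniform answer per new query, identical answer on repeats" behavior is exactly lazy sampling, and is often written as a table-backed function. This is a sketch; the class and method names are this example's own:

```python
import secrets

class RandomOracle:
    """Lazily sampled random oracle: each never-before-seen query draws
    a fresh uniform answer; repeated queries replay the stored answer."""

    def __init__(self, out_bytes=32):
        self._table = {}          # query -> previously sampled answer
        self._out_bytes = out_bytes

    def query(self, x: bytes) -> bytes:
        if x not in self._table:                          # first time asked:
            self._table[x] = secrets.token_bytes(self._out_bytes)
        return self._table[x]                             # consistent replay

oracle = RandomOracle()
print(oracle.query(b"alice") == oracle.query(b"alice"))   # True: consistent
print(len(oracle.query(b"bob")))                          # 32
```

This lazy-sampling view is also how security proofs "program" the oracle: the reduction controls the table and may choose answers to embed a challenge, as long as each query still looks uniform to the adversary.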
https://en.wikipedia.org/wiki/Cube%20%28algebra%29
In arithmetic and algebra, the cube of a number n is its third power, that is, the result of multiplying three instances of n together. The cube of a number n or of any other mathematical expression is denoted by a superscript 3, for example 2³ = 8 or (x + 1)³. The cube is also the number multiplied by its square: n³ = n × n² = n × n × n. The cube function is the function x ↦ x³ (often denoted y = x³) that maps a number to its cube. It is an odd function, as (−n)³ = −(n³). The volume of a geometric cube is the cube of its side length, giving rise to the name. The inverse operation that consists of finding a number whose cube is n is called extracting the cube root of n. It determines the side of the cube of a given volume. It is also n raised to the one-third power. The graph of the cube function is known as the cubic parabola. Because the cube function is an odd function, this curve has a center of symmetry at the origin, but no axis of symmetry. In integers A cube number, or a perfect cube, or sometimes just a cube, is a number which is the cube of an integer. The non-negative perfect cubes up to 60³ are : Geometrically speaking, a positive integer m is a perfect cube if and only if one can arrange m solid unit cubes into a larger, solid cube. For example, 27 small cubes can be arranged into one larger one with the appearance of a Rubik's Cube, since 3³ = 27. The difference between the cubes of consecutive integers can be expressed as follows: n³ − (n − 1)³ = 3(n − 1)n + 1, or (n + 1)³ − n³ = 3(n + 1)n + 1. There is no minimum perfect cube, since the cube of a negative integer is negative. For example, (−4)³ = −64. Base ten Unlike perfect squares, perfect cubes do not have a small number of possibilities for the last two digits. Except for cubes divisible by 5, where only 25, 75 and 00 can be the last two digits, any pair of digits with the last digit odd can occur as the last digits of a perfect cube. With even cubes, there is considerable restriction, for only 00, o2, e4, o6 and e8 can be the last two digits of a perfect cube (where o stands for any odd digit and e for any even digit). Some cube numbe
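A perfect-cube test and the consecutive-cubes difference can be checked in a few lines of Python (the function name is invented for illustration; the neighbour check guards against floating-point error in the cube root):

```python
def is_perfect_cube(n: int) -> bool:
    """Return True if integer n is the cube of some integer.
    Works for negative n too, since cubing preserves sign."""
    if n < 0:
        return is_perfect_cube(-n)
    r = round(n ** (1 / 3))
    # Floating-point cube roots can be off by one; check neighbours.
    return any(k ** 3 == n for k in (r - 1, r, r + 1))

# Difference of consecutive cubes: (n+1)^3 - n^3 = 3n^2 + 3n + 1
assert all((n + 1) ** 3 - n ** 3 == 3 * n * n + 3 * n + 1
           for n in range(100))
```

The identity in the final assertion is why consecutive cube differences (1, 7, 19, 37, ...) grow quadratically.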
https://en.wikipedia.org/wiki/Defect%20tracking
In engineering, defect tracking is the process of tracking the logged defects in a product from beginning to closure (by inspection, testing, or recording feedback from customers), and making new versions of the product that fix the defects. Defect tracking is important in software engineering as complex software systems typically have tens, hundreds, or thousands of defects; therefore, managing, evaluating and prioritizing these defects is a difficult task. When the number of defects gets quite large, and the defects need to be tracked over extended periods of time, use of a defect tracking system can make the management task much easier. See also Bug tracking Comparison of issue tracking systems References Bug and issue tracking software Product testing
https://en.wikipedia.org/wiki/Network-attached%20storage
Network-attached storage (NAS) is a file-level (as opposed to block-level storage) computer data storage server connected to a computer network providing data access to a heterogeneous group of clients. The term "NAS" can refer to either the technology and systems involved, or a specialized device built for such functionality (as, unlike tangentially related technologies such as local area networks, a NAS device is often a singular unit). Overview A NAS device is optimised for serving files either by its hardware, software, or configuration. It is often manufactured as a computer appliance, a purpose-built specialized computer. NAS systems are networked appliances that contain one or more storage drives, often arranged into logical, redundant storage containers or RAID. Network-attached storage typically provides access to files using network file sharing protocols such as NFS, SMB, or AFP. From the mid-1990s, NAS devices began gaining popularity as a convenient method of sharing files among multiple computers, as well as a way to remove the responsibility of file serving from other servers on the network; by doing so, a NAS can provide faster data access, easier administration, and simpler configuration as opposed to using a general-purpose server to serve files. Accompanying a NAS are purpose-built hard disk drives, which are functionally similar to non-NAS drives but may have different firmware, vibration tolerance, or power dissipation to make them more suitable for use in RAID arrays, a technology often used in NAS implementations. For example, some NAS versions of drives support a command extension to allow extended error recovery to be disabled. In a non-RAID application, it may be important for a disk drive to go to great lengths to successfully read a problematic storage block, even if it takes several seconds. In an appropriately configured RAID array, a single bad block on a single drive can be recovered completely via the redundancy encoded across the RAID set. If
https://en.wikipedia.org/wiki/Data%20definition%20language
In the context of SQL, data definition or data description language (DDL) is a syntax for creating and modifying database objects such as tables, indices, and users. DDL statements are similar to a computer programming language for defining data structures, especially database schemas. Common examples of DDL statements include CREATE, ALTER, and DROP. History The concept of the data definition language and its name was first introduced in relation to the Codasyl database model, where the schema of the database was written in a language syntax describing the records, fields, and sets of the user data model. Later it was used to refer to a subset of Structured Query Language (SQL) for declaring tables, columns, data types and constraints. SQL-92 introduced a schema manipulation language and schema information tables to query schemas. These information tables were specified as SQL/Schemata in SQL:2003. The term DDL is also used in a generic sense to refer to any formal language for describing data or information structures. Structured Query Language (SQL) Many data description languages use a declarative syntax to define columns and data types. Structured Query Language (SQL), however, uses a collection of imperative verbs whose effect is to modify the schema of the database by adding, changing, or deleting definitions of tables or other elements. These statements can be freely mixed with other SQL statements, making the DDL not a separate language. CREATE statement The create command is used to establish a new database, table, index, or stored procedure. The CREATE statement in SQL creates a component in a relational database management system (RDBMS). In the SQL 1992 specification, the types of components that can be created are schemas, tables, views, domains, character sets, collations, translations, and assertions. Many implementations extend the syntax to allow creation of additional elements, such as indexes and user profiles. Some systems, such as Postgr
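The three main DDL statement families can be exercised against SQLite, which ships with Python's standard library; this is a minimal sketch, and the table and column names are invented for illustration:

```python
import sqlite3

# In-memory database; no server or file needed.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# CREATE: define a new database object (here, a table).
cur.execute(
    "CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# ALTER: modify an existing definition.
cur.execute("ALTER TABLE employee ADD COLUMN hired DATE")

# DDL mixes freely with other SQL statements (DML), as the text notes.
cur.execute(
    "INSERT INTO employee (name, hired) VALUES ('Ada', '1843-01-01')")

# Inspect the schema that the DDL produced.
cols = [row[1] for row in cur.execute("PRAGMA table_info(employee)")]

# DROP: remove the object and its data.
cur.execute("DROP TABLE employee")
```

Note that `PRAGMA table_info` is SQLite-specific; SQL-92's information schema tables serve the same purpose in other systems.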
https://en.wikipedia.org/wiki/Portland%20Pattern%20Repository
The Portland Pattern Repository (PPR) is a repository for computer programming software design patterns. It was accompanied by a companion website, WikiWikiWeb, which was the world's first wiki. The repository has an emphasis on Extreme Programming, and it is hosted by Cunningham & Cunningham (C2) of Portland, Oregon. The PPR's motto is "People, Projects & Patterns". History On 17 September 1987, programmer Ward Cunningham, then with Tektronix, and Apple Computer's Kent Beck co-published the paper "Using Pattern Languages for Object-Oriented Programs". This paper, about software design patterns, was inspired by Christopher Alexander's architectural concept of "patterns". It was written for the 1987 OOPSLA programming conference organized by the Association for Computing Machinery. Cunningham and Beck's idea became popular among programmers because it helped them exchange programming ideas in a format that is easy to understand. Cunningham & Cunningham, the programming consultancy that would eventually host the PPR on its Internet domain, was incorporated in Salem, Oregon on 1 November 1991, and is named after Ward and his wife, Karen R. Cunningham, a mathematician, school teacher, and school director. Cunningham & Cunningham registered their Internet domain, c2.com, on 23 October 1994. Ward created the Portland Pattern Repository on c2.com as a means to help object-oriented programmers publish their computer programming patterns by submitting them to him. Some of those programmers attended the OOPSLA and PLoP conferences about object-oriented programming, and posted their ideas on the PPR. The PPR is accompanied, on c2.com, by the first ever wiki—a collection of reader-modifiable Web pages—which is called WikiWikiWeb. References External links OOPSLA Software design patterns Computing websites
https://en.wikipedia.org/wiki/Communication%20Theory%20of%20Secrecy%20Systems
"Communication Theory of Secrecy Systems" is a paper published in 1949 by Claude Shannon discussing cryptography from the viewpoint of information theory. It is one of the foundational treatments (arguably the foundational treatment) of modern cryptography. It is also a proof that all theoretically unbreakable ciphers must have the same requirements as the one-time pad. Shannon published an earlier version of this research in the formerly classified report A Mathematical Theory of Cryptography, Memorandum MM 45-110-02, Sept. 1, 1945, Bell Laboratories. This report also precedes the publication of his "A Mathematical Theory of Communication", which appeared in 1948. See also Confusion and diffusion Product cipher One-time pad Unicity distance References Shannon, Claude. "Communication Theory of Secrecy Systems", Bell System Technical Journal, vol. 28(4), page 656–715, 1949. Shannon, Claude. "A Mathematical Theory of Cryptography", Memorandum MM 45-110-02, Sept. 1, 1945, Bell Laboratories. Notes https://www.itsoc.org/about/shannon External links Online retyped copy of the paper Scanned version of the published BSTJ paper History of cryptography Cryptography publications 1945 in science 1949 documents 1949 in science 1945 documents Mathematics papers
https://en.wikipedia.org/wiki/Masking%20%28art%29
In art, craft, and engineering, masking is the use of materials to protect areas from change, or to focus change on other areas. This can describe either the techniques and materials used to control the development of a work of art by protecting a desired area from change; or a phenomenon that (either intentionally or unintentionally) causes a sensation to be concealed from conscious attention. The term is derived from the word mask, in the sense that it hides the face from view. In painting Masking materials supplement a painter's dexterity and choice of applicator to control where paint is laid. Examples include the use of a stencil or masking tape to protect areas which are not to be painted. Solid masks Most solid masks require an adhesive to hold the mask in place while work is performed. Some, such as masking tape and frisket, come with adhesive pre-applied. Solid masks are readily available in bulk, and are used in large painting jobs. Paper products Kraft paper Butcher paper Masking tape Plastic film Frisket Polyester tape Stencils Silk screen Liquid masks Liquid masks are preferred where precision is needed; they prevent paint from seeping underneath, resulting in clean edges. Care must be taken to remove them without damaging the work underneath. Latex or other polymers Molten wax Gesso, typically a substrate for painting, but can also be applied to achieve masking effects In photography Masks used for photography are used to enhance the quality of an image. Representations of a scene—whether film, video display, or printed—do not have the dynamic contrast range available to the human eye looking directly at the same scene. Adjusting the contrast in an image helps restore some of the perceived qualities of the original scene. These adjustments are typically performed on "blown-out" highlights, and "crushed" or "muddy" shadow areas, where clipping has occurred; or on desaturated colors. Photographic masks are peculiar in that they are produced from
https://en.wikipedia.org/wiki/Memory%20corruption
Memory corruption occurs in a computer program when the contents of a memory location are modified due to programmatic behavior that exceeds the intention of the original programmer or program/language constructs; this is termed a violation of memory safety. The most likely causes of memory corruption are programming errors (software bugs). When the corrupted memory contents are used later in that program, it leads either to a program crash or to strange and bizarre program behavior. Nearly 10% of application crashes on Windows systems are due to heap corruption. Programming languages like C and C++ have powerful features of explicit memory management and pointer arithmetic. These features are designed for developing efficient applications and system software. However, using these features incorrectly may lead to memory corruption errors. Memory corruption is one of the most intractable classes of programming errors, for two reasons: The source of the memory corruption and its manifestation may be far apart, making it hard to correlate the cause and the effect. Symptoms appear under unusual conditions, making it hard to consistently reproduce the error. Memory corruption errors can be broadly classified into four categories: Using uninitialized memory: Contents of uninitialized memory are treated as garbage values. Using such values can lead to unpredictable program behavior. Using non-owned memory: It is common to use pointers to access and modify memory. If such a pointer is a null pointer, a dangling pointer (pointing to memory that has already been freed), or points to a memory location outside of the current stack or heap bounds, it refers to memory that is not then owned by the program. Using such pointers is a serious programming flaw. Accessing such memory usually causes operating system exceptions, which most commonly lead to a program crash (unless suitable memory protection software is being used). Using memory beyond the memory that was allocated (b
https://en.wikipedia.org/wiki/Stage%20box
A stage box is an interface device used in sound reinforcement and recording studios to connect equipment to a mixing console. It provides a central location to connect microphones, instruments, and speakers to a multicore cable (snake), which allows the sound desk to be further from the stage and simplifies setup. Stage boxes typically consist of a rugged metal enclosure, with XLR connectors on the front whose signals are routed through a snake. In the traditional sense, a stage box is effectively a simple termination box at the end of an analog multicore cable. However, many modern stage boxes convert between analog and digital, using a single twisted pair cable instead of an analog multicore. Design Stage boxes typically house 16–32 female XLR connectors and 4–8 male connectors, but occasionally phone connectors are used instead. These connections to the mixer are often called sends (inputs) and returns (outputs). The connector configuration depends on the number of conductors in the multicore cable (for analog signals), or the bandwidth (for digital signals). Some stage boxes are rack-mountable which allows them to be mounted in either a road case or equipment rack. Smaller stage boxes use compact metal cases which may sit on a stage inconspicuously. Labels on the stage box make it easier to identify cables for troubleshooting and setup. Many stage boxes, especially digital ones, include microphone preamps. This is intended to amplify the signal as early as possible in the signal flow in order to minimize interference in the cabling. However, this is unfavorable for some sound engineers who prefer the tone of a specific preamp. Another feature of some stage boxes is integrated DI interfaces for direct connection of instruments. Digital stage boxes Digital mixing consoles inherently introduce conversion between analog and digital signals. Since digital signals are practically immune to noise, it is preferable to use them for long cable runs. As a result, m
https://en.wikipedia.org/wiki/Audio%20multicore%20cable
An audio multicore cable (often colloquially referred to as a multicore, snake cable or snake) is a thick cable which usually contains 4–64 individual audio cables inside a common, sturdy outer jacket. Audio multicore cables are used to convey many audio signals between two locations, such as in audio recording, sound reinforcement, PA systems and broadcasting. Multicores often route many signals from microphones or musical instruments to a mixing console, and can also carry signals from a mixing console back to speakers. In audio engineering, the term multicore may refer to several things: an unterminated length of multicore cable intended for analog audio signals (a type of cable harness); a terminated cable, with a multipin connector or many individual connectors; or the entire assembly of a terminated multicore cable and stage box. Applications Multicores usually create a link between the stage and sound desk, or live room and control room. When used in sound reinforcement, the multicore cable runs from the stage box or microphone splitter to the front-of-house sound desk, where it connects to a mixing console. Portable multicore cables, stored loose or on a drum, enable sound systems to be set up at temporary outdoor locations such as music festivals. Permanent installations, especially recording studios, use stage boxes mounted in the floor or walls, with the multicore cable running through the ceiling or false floor. Without a snake, a rock band performing onstage, for example, would require 20 or more individual microphone cables running from the stage to the mixing console (typically located at the rear of a venue). This would be harder to set up, would cause tangled cables, and would make it difficult to identify each cable. Varieties Terminations Different termination methods can be used on each end to suit the application. When individual connectors are used, three-pin XLR connectors are most common, although phone connectors are occasionally used. An
https://en.wikipedia.org/wiki/Arimaa
Arimaa is a two-player strategy board game that was designed to be playable with a standard chess set and difficult for computers while still being easy to learn and fun to play for humans. It was invented between 1997 and 2002 by Omar Syed, an Indian-American computer engineer trained in artificial intelligence. Syed was inspired by Garry Kasparov's defeat at the hands of the chess computer Deep Blue to design a new game which could be played with a standard chess set, would be difficult for computers to play well, but would have rules simple enough for his then four-year-old son Aamir to understand. ("Arimaa" is "Aamir" spelled backwards plus an initial "a".) Beginning in 2004, the Arimaa community held three annual tournaments: a World Championship (humans only), a Computer Championship (computers only), and the Arimaa Challenge (human vs. computer). After eleven years of human dominance, the 2015 challenge was won decisively by the computer (Sharp by David Wu). Arimaa has won several awards including GAMES Magazine 2011 Best Abstract Strategy Game, Creative Child Magazine 2010 Strategy Game of the Year, and the 2010 Parents' Choice Approved Award. It has also been the subject of several research papers. Rules Arimaa is played on an 8×8 board with four trap squares. There are six kinds of pieces, ranging from elephant (strongest) to rabbit (weakest). Stronger pieces can push or pull weaker pieces, and stronger pieces freeze weaker pieces. Pieces are captured by being dislodged onto a trap square when they have no orthogonally adjacent friendly pieces. The two players, Gold and Silver, each control sixteen pieces. These are, in order from strongest to weakest: one elephant, one camel, two horses, two dogs, two cats, and eight rabbits. These may be represented by the king, queen, rooks, bishops, knights, and pawns respectively when one plays using a chess set. Objective The main object of the game is to move a rabbit of one's own color onto t
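The trap-capture rule stated above is simple enough to express directly in code. A sketch in Python, where the board encoding (a dict mapping 0-indexed (row, col) squares to an owner tag) and the trap coordinates are illustrative assumptions:

```python
# The four trap squares of the 8x8 board, in a 0-indexed encoding.
TRAPS = {(2, 2), (2, 5), (5, 2), (5, 5)}

def is_captured(board, square):
    """A piece on a trap square is captured unless a friendly piece
    stands on an orthogonally adjacent square. `board` maps (row, col)
    to an owner tag, here 'G' for Gold or 'S' for Silver."""
    if square not in TRAPS or square not in board:
        return False
    r, c = square
    owner = board[square]
    neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return not any(board.get(n) == owner for n in neighbours)

# A silver piece pushed onto a trap with only an enemy neighbour is lost:
board = {(2, 2): 'S', (2, 3): 'G'}
lost = is_captured(board, (2, 2))
board[(1, 2)] = 'S'                 # a friendly neighbour saves it
saved = not is_captured(board, (2, 2))
```

This is why pushing an opponent's piece onto a trap while capturing its defenders is a central tactical motif of the game.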
https://en.wikipedia.org/wiki/Perlin%20noise
Perlin noise is a type of gradient noise developed by Ken Perlin in 1983. It has many uses, including but not limited to: procedurally generating terrain, applying pseudo-random changes to a variable, and assisting in the creation of image textures. It is most commonly implemented in two, three, or four dimensions, but can be defined for any number of dimensions. History Ken Perlin developed Perlin noise in 1983 as a result of his frustration with the "machine-like" look of computer-generated imagery (CGI) at the time. He formally described his findings in a SIGGRAPH paper in 1985 called "An Image Synthesizer". He developed it after working on Disney's computer animated sci-fi motion picture Tron (1982) for the animation company Mathematical Applications Group (MAGI). In 1997, Perlin was awarded an Academy Award for Technical Achievement for creating the algorithm. Perlin did not apply for any patents on the algorithm, but in 2001 he was granted a patent for the use of 3D+ implementations of simplex noise for texture synthesis. Simplex noise has the same purpose, but uses a simpler space-filling grid. Simplex noise alleviates some of the problems with Perlin's "classic noise", among them computational complexity and visually-significant directional artifacts. Uses Perlin noise is a procedural texture primitive, a type of gradient noise used by visual effects artists to increase the appearance of realism in computer graphics. The function has a pseudo-random appearance, yet all of its visual details are the same size. This property allows it to be readily controllable; multiple scaled copies of Perlin noise can be inserted into mathematical expressions to create a great variety of procedural textures. Synthetic textures using Perlin noise are often used in CGI to make computer-generated visual elements, such as object surfaces, fire, smoke, or clouds, appear more natural, by imitating the controlled random appearance of textures in natu
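A one-dimensional analogue of classic gradient noise (pseudo-random gradients at integer lattice points, Perlin's quintic fade curve, and interpolation between the two neighbouring gradient contributions) can be sketched as follows; this is an illustrative reduction, not Perlin's reference implementation:

```python
import math
import random

random.seed(0)
# One pseudo-random gradient (here just a slope) per lattice point.
GRADIENTS = [random.uniform(-1, 1) for _ in range(256)]

def fade(t):
    # Perlin's quintic smoothing curve: 6t^5 - 15t^4 + 10t^3.
    return t * t * t * (t * (t * 6 - 15) + 10)

def noise1d(x):
    i = math.floor(x)
    t = x - i                      # position within the lattice cell
    g0 = GRADIENTS[i % 256]        # gradient at the left lattice point
    g1 = GRADIENTS[(i + 1) % 256]  # gradient at the right lattice point
    # Each gradient contributes its dot product with the offset vector.
    n0 = g0 * t
    n1 = g1 * (t - 1)
    # Smoothly interpolate between the two contributions.
    return n0 + fade(t) * (n1 - n0)
```

One property carries over from the full algorithm: the noise is exactly zero at every lattice point, since the offset vector there is zero, so all visual detail lives between lattice points and has the same characteristic size.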
https://en.wikipedia.org/wiki/Limit%20cycle
In mathematics, in the study of dynamical systems with two-dimensional phase space, a limit cycle is a closed trajectory in phase space having the property that at least one other trajectory spirals into it either as time approaches infinity or as time approaches negative infinity. Such behavior is exhibited in some nonlinear systems. Limit cycles have been used to model the behavior of many real-world oscillatory systems. The study of limit cycles was initiated by Henri Poincaré (1854–1912). Definition We consider a two-dimensional dynamical system of the form x′(t) = V(x(t)), where V : R² → R² is a smooth function. A trajectory of this system is some smooth function x(t) with values in R² which satisfies this differential equation. Such a trajectory is called closed (or periodic) if it is not constant but returns to its starting point, i.e. if there exists some T > 0 such that x(t + T) = x(t) for all t. An orbit is the image of a trajectory, a subset of R². A closed orbit, or cycle, is the image of a closed trajectory. A limit cycle is a cycle which is the limit set of some other trajectory. Properties By the Jordan curve theorem, every closed trajectory divides the plane into two regions, the interior and the exterior of the curve. Given a limit cycle and a trajectory in its interior that approaches the limit cycle for time approaching +∞, then there is a neighborhood around the limit cycle such that all trajectories in the interior that start in the neighborhood approach the limit cycle for time approaching +∞. The corresponding statement holds for a trajectory in the interior that approaches the limit cycle for time approaching −∞, and also for trajectories in the exterior approaching the limit cycle. Stable, unstable and semi-stable limit cycles In the case where all the neighboring trajectories approach the limit cycle as time approaches infinity, it is called a stable or attractive limit cycle (ω-limit cycle). If instead, all neighboring trajectories approach it as time approaches negative infinity, then it is an
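A stable limit cycle can be illustrated numerically with the van der Pol oscillator, a standard textbook example (not discussed above): trajectories starting inside the cycle spiral outward, and those starting outside spiral inward, toward the same closed orbit of amplitude roughly 2 for μ = 1. This is a rough forward-Euler sketch, with step size and iteration counts chosen ad hoc:

```python
def van_der_pol_step(x, y, mu=1.0, dt=0.001):
    # First-order system: x' = y,  y' = mu*(1 - x^2)*y - x.
    return x + dt * y, y + dt * (mu * (1 - x * x) * y - x)

def amplitude_after(x0, y0, steps=200_000, settle=150_000):
    """Integrate from (x0, y0) and return the peak |x| observed after
    the transient has had time to decay."""
    x, y = x0, y0
    peak = 0.0
    for n in range(steps):
        x, y = van_der_pol_step(x, y)
        if n >= settle:
            peak = max(peak, abs(x))
    return peak

inner = amplitude_after(0.1, 0.0)  # starts inside the cycle, spirals out
outer = amplitude_after(4.0, 0.0)  # starts outside, spirals in
```

Both starting points settle onto nearly the same amplitude, which is the numerical signature of an attracting (ω-) limit cycle.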
https://en.wikipedia.org/wiki/Enhanced-definition%20television
Enhanced-definition television, or extended-definition television (EDTV) is a Consumer Electronics Association (CEA) marketing shorthand term for certain digital television (DTV) formats and devices. Specifically, this term defines an extension of the standard-definition television (SDTV) format that enables a clearer picture during high-motion scenes compared to previous iterations of SDTV, but does not produce images as detailed as high-definition television (HDTV). The term refers to devices capable of displaying 480-line or 576-line signals in progressive scan, commonly referred to as 480p (NTSC-HQ) and 576p (PAL/SECAM) respectively, as opposed to interlaced scanning, commonly referred to as 480i (NTSC) or 576i (PAL, SECAM). High-motion is optional for EDTV. In Australia, the 576p resolution standard was used by the Special Broadcasting Service (SBS TV) and Seven Network, being technically considered high-definition. In Japan, the term is associated with improvements to analog NTSC called EDTV-I (or "Clear-vision") and EDTV-II (or "Wide-aspect Clear-vision") including ghost cancellation, digital sound or widescreen broadcasts, using methods vaguely similar to PALPlus. In Europe, it can be applied to analog PALPlus or MAC broadcasts. In other countries definitions may vary. Connectivity As EDTV signals require more bandwidth (due to frame doubling) than is feasible with SDTV connection standards (such as composite video, SCART or S-Video), higher-bandwidth media must be used to accommodate the additional data transfer. To achieve EDTV, consumer electronic devices such as a progressive-scan DVD player or modern video game consoles must be connected through at least a component video cable (typically using 3 RCA cables for video), a VGA connector, or a DVI or HDMI connector. For over-the-air television broadcasts, EDTV content uses the same connectors as HDTV. Broadcast and displays EDTV broadcasts use less digital bandwidth than HDTV, so TV stations can br