https://en.wikipedia.org/wiki/Popularity
In sociology, popularity is how much a person, idea, place, item or other concept is either liked or accorded status by other people. Liking can be due to reciprocal liking, interpersonal attraction, and similar factors. Social status can be due to dominance, superiority, and similar factors. For example, a kind person may be considered likable and therefore more popular than another person, and a wealthy person may be considered superior and therefore more popular than another person. There are two primary types of interpersonal popularity: perceived and sociometric. Perceived popularity is measured by asking people who the most popular or socially important people in their social group are. Sociometric popularity is measured by objectively measuring the number of connections a person has to others in the group. A person can have high perceived popularity without having high sociometric popularity, and vice versa. According to psychologist Tessa Lansu at the Radboud University Nijmegen, "Popularity [has] to do with being the middle point of a group and having influence on it."

Introduction

The term popularity is borrowed from the Latin term popularis, which originally meant "common." The current definition of the word popular, the "fact or condition of being well liked by the people", was first seen in 1601. While popularity is a trait often ascribed to an individual, it is an inherently social phenomenon and thus can only be understood in the context of groups of people. Popularity is a collective perception, and individuals report the consensus of a group's feelings towards an individual or object when rating popularity. It takes a group of people to like something, so the more that people advocate for something or claim that someone is best liked, the more attention it will get, and the more popular it will be deemed. Notwithstanding the above, popularity as a concept can be applied, assigned, or directed towards objects such as songs, movies, websites, a
https://en.wikipedia.org/wiki/Fractional%20calculus
Fractional calculus is a branch of mathematical analysis that studies the several different possibilities of defining real number powers or complex number powers of the differentiation operator D and of the integration operator J, and developing a calculus for such operators generalizing the classical one. In this context, the term powers refers to iterative application of a linear operator D to a function f, that is, repeatedly composing D with itself, as in D^n(f) = (D ∘ D ∘ ... ∘ D)(f). For example, one may ask for a meaningful interpretation of D^(1/2) as an analogue of the functional square root for the differentiation operator, that is, an expression for some linear operator that, when applied twice to any function, will have the same effect as differentiation. More generally, one can look at the question of defining a linear operator D^a for every real number a in such a way that, when a takes an integer value n, it coincides with the usual n-fold differentiation D^n if n > 0, and with the (−n)-th power of J when n < 0. One of the motivations behind the introduction and study of these sorts of extensions of the differentiation operator is that the sets of operator powers D^a defined in this way are continuous semigroups with parameter a, of which the original discrete semigroup of D^n for integer n is a denumerable subgroup: since continuous semigroups have a well developed mathematical theory, they can be applied to other branches of mathematics. Fractional differential equations, also known as extraordinary differential equations, are a generalization of differential equations through the application of fractional calculus.

Historical notes

In applied mathematics and mathematical analysis, a fractional derivative is a derivative of arbitrary order, real or complex. Its first appearance is in a letter written to Guillaume de l'Hôpital by Gottfried Wilhelm Leibniz in 1695. Around the same time, Leibniz wrote to one of the Bernoulli brothers describing the similarity between the binomial theorem and the Leibniz rule for
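As a concrete illustration of a fractional power of the differentiation operator (a sketch under the Riemann–Liouville convention, one of several inequivalent definitions), the half-derivative of f(x) = x follows from the fractional power rule, and applying the half-derivative twice recovers the ordinary derivative:

```latex
\frac{d^{a}}{dx^{a}}\, x^{k} \;=\; \frac{\Gamma(k+1)}{\Gamma(k+1-a)}\, x^{\,k-a}
\qquad\text{so, with } a = \tfrac12,\ k = 1:
\qquad
\frac{d^{1/2}}{dx^{1/2}}\, x \;=\; \frac{\Gamma(2)}{\Gamma(3/2)}\, x^{1/2} \;=\; \frac{2\sqrt{x}}{\sqrt{\pi}},
\qquad
\frac{d^{1/2}}{dx^{1/2}}\!\left(\frac{2\sqrt{x}}{\sqrt{\pi}}\right)
\;=\; \frac{2}{\sqrt{\pi}}\cdot\frac{\Gamma(3/2)}{\Gamma(1)}\, x^{0} \;=\; 1 \;=\; \frac{d}{dx}\, x .
```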
https://en.wikipedia.org/wiki/Multiplicative%20inverse
In mathematics, a multiplicative inverse or reciprocal for a number x, denoted by 1/x or x⁻¹, is a number which when multiplied by x yields the multiplicative identity, 1. The multiplicative inverse of a fraction a/b is b/a. For the multiplicative inverse of a real number, divide 1 by the number. For example, the reciprocal of 5 is one fifth (1/5 or 0.2), and the reciprocal of 0.25 is 1 divided by 0.25, or 4. The reciprocal function, the function f(x) that maps x to 1/x, is one of the simplest examples of a function which is its own inverse (an involution). Multiplying by a number is the same as dividing by its reciprocal, and vice versa. For example, multiplication by 4/5 (or 0.8) will give the same result as division by 5/4 (or 1.25). Therefore, multiplication by a number followed by multiplication by its reciprocal yields the original number (since the product of the number and its reciprocal is 1). The term reciprocal was in common use at least as far back as the third edition of Encyclopædia Britannica (1797) to describe two numbers whose product is 1; geometrical quantities in inverse proportion are described as reciprocall in a 1570 translation of Euclid's Elements. In the phrase multiplicative inverse, the qualifier multiplicative is often omitted and then tacitly understood (in contrast to the additive inverse). Multiplicative inverses can be defined over many mathematical domains as well as numbers. In these cases it can happen that ab ≠ ba; then "inverse" typically implies that an element is both a left and right inverse. The notation f⁻¹ is sometimes also used for the inverse function of the function f, which is for most functions not equal to the multiplicative inverse. For example, the multiplicative inverse 1/(sin x) = (sin x)⁻¹ is the cosecant of x, and not the inverse sine of x denoted by sin⁻¹ x or arcsin x. The terminology difference reciprocal versus inverse is not sufficient to make this distinction, since many authors prefer the opposite naming convention, probably for historical reasons (for
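A reciprocal can also be approximated without ever performing a division, a classic trick used by numerical libraries and hardware divide units. The sketch below uses Newton's method on f(y) = 1/y − a; the function name, the fixed iteration count, and the restriction to positive inputs are illustrative choices, not from the article:

```python
def reciprocal(a, iterations=30):
    """Approximate 1/a for a > 0 using Newton's method.

    The update y <- y * (2 - a * y) uses only multiplication and
    subtraction, which is why division-free schemes like this are
    popular in hardware reciprocal units.
    """
    if a <= 0:
        raise ValueError("this sketch assumes a > 0")
    y = 0.1 if a > 1 else 1.0  # crude initial guess inside (0, 2/a)
    for _ in range(iterations):
        y = y * (2 - a * y)
    return y

# Multiplying a number by its reciprocal recovers the identity, 1.
print(reciprocal(4.0))        # ~0.25
print(4.0 * reciprocal(4.0))  # ~1.0
```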
https://en.wikipedia.org/wiki/Scratching
Scratching, sometimes referred to as scrubbing, is a DJ and turntablist technique of moving a vinyl record back and forth on a turntable to produce percussive or rhythmic sounds. A crossfader on a DJ mixer may be used to fade between two records simultaneously. While scratching is most associated with hip hop music, where it emerged in the mid-1970s, from the 1990s it has been used in some styles of electronic dance music (EDM), such as techno and house, and in rock styles such as rap rock, rap metal, rapcore, and nu metal. In hip hop culture, scratching is one of the measures of a DJ's skills. DJs compete in scratching competitions at the DMC World DJ Championships and IDA (International DJ Association), formerly known as the ITF (International Turntablist Federation). At scratching competitions, DJs can use only scratch-oriented gear (turntables, a DJ mixer, digital vinyl systems, or vinyl records only). In recorded hip hop songs, scratched "hooks" often use portions of other songs.

History

Precursors

A rudimentary form of turntable manipulation related to scratching was developed in the late 1940s by radio music program hosts, disc jockeys (DJs), and the radio program producers who did their own technical operation as audio console operators. Known as back-cueing, it was used to find the very beginning of a song (i.e., the cue point) in a vinyl record groove. This was done so that the operator could back the disc up (rotate the record or the turntable platter itself counter-clockwise), allowing the turntable to be switched on and come up to full speed without ruining the first few bars of music with the "wow" of incorrect, unnaturally slow-speed playing. This permitted the announcer to time their remarks and start the turntable in time for when they wanted the music on the record to begin. Back-cueing was a basic skill that all radio production staff needed to learn, and the dynamics of it were unique to the brand of professional turntable in use at a giv
https://en.wikipedia.org/wiki/Object-oriented%20operating%20system
An object-oriented operating system is an operating system that is designed, structured, and operated using object-oriented programming principles. An object-oriented operating system is in contrast to an object-oriented user interface or programming framework, which can be run on a non-object-oriented operating system like DOS or Unix. There are already object-based language concepts involved in the design of a more typical operating system such as Unix. While a more traditional language like C does not support object-orientation as fluidly as more recent languages, the notion of, for example, a file, stream, or device driver (in Unix, each represented as a file descriptor) can be considered a good example of objects. They are, after all, abstract data types, with various methods in the form of system calls whose behavior varies based on the type of object and whose implementation details are hidden from the caller. Object-orientation has been defined as objects + inheritance, and inheritance is only one approach to the more general problem of delegation that occurs in every operating system. Object-orientation has been more widely used in the user interfaces of operating systems than in their kernels.

Background

An object is an instance of a class, which provides a certain set of functionalities. Two objects can be differentiated based on the functionalities (or methods) they support. In an operating system context, objects are associated with a resource. Historically, object-oriented design principles were used in operating systems to provide several protection mechanisms. Protection mechanisms in an operating system help in providing a clear separation between different user programs. They also protect the operating system from any malicious user program behavior. For example, consider the case of user profiles in an operating system. A user should not have access to the resources of another user. The object model deals with these protection issues with e
https://en.wikipedia.org/wiki/Falling%20and%20rising%20factorials
In mathematics, the falling factorial (sometimes called the descending factorial, falling sequential product, or lower factorial) is defined as the polynomial (x)_n = x(x − 1)(x − 2) ⋯ (x − n + 1). The rising factorial (sometimes called the Pochhammer function, Pochhammer polynomial, ascending factorial, rising sequential product, or upper factorial) is defined as x^(n) = x(x + 1)(x + 2) ⋯ (x + n − 1). The value of each is taken to be 1 (an empty product) when n = 0. These symbols are collectively called factorial powers. The Pochhammer symbol, introduced by Leo August Pochhammer, is the notation (x)_n, where n is a non-negative integer. It may represent either the rising or the falling factorial, with different articles and authors using different conventions. Pochhammer himself actually used (x)_n with yet another meaning, namely to denote the binomial coefficient "x choose n". In this article, the symbol (x)_n is used to represent the falling factorial, and the symbol x^(n) is used for the rising factorial. These conventions are used in combinatorics, although Knuth's underline and overline notations, x with an underlined or overlined exponent n, are increasingly popular. In the theory of special functions (in particular the hypergeometric function) and in the standard reference work Abramowitz and Stegun, the Pochhammer symbol (x)_n is used to represent the rising factorial. When n is a positive integer, (x)_n gives the number of n-permutations (sequences of n distinct elements) from an x-element set, or equivalently the number of injective functions from a set of size n to a set of size x. The rising factorial x^(n) gives the number of partitions of an n-element set into x ordered sequences (possibly empty).

Examples and combinatorial interpretation

The first few falling factorials are as follows: (x)_0 = 1, (x)_1 = x, (x)_2 = x(x − 1) = x² − x, (x)_3 = x(x − 1)(x − 2) = x³ − 3x² + 2x. The first few rising factorials are as follows: x^(0) = 1, x^(1) = x, x^(2) = x(x + 1) = x² + x, x^(3) = x(x + 1)(x + 2) = x³ + 3x² + 2x. The coefficients that appear in the expansions are Stirling numbers of the first kind (see below). When the variable x is a positive integer, the number (x)_n is equal to the number of n-permutations from a set of x items, that is, the number of ways of choosing an ordered list of length n consisti
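The two products above translate directly into code; a minimal sketch (the function names are illustrative):

```python
def falling_factorial(x, n):
    """x * (x - 1) * ... * (x - n + 1); the empty product (1) when n == 0."""
    result = 1
    for k in range(n):
        result *= x - k
    return result

def rising_factorial(x, n):
    """x * (x + 1) * ... * (x + n - 1); the empty product (1) when n == 0."""
    result = 1
    for k in range(n):
        result *= x + k
    return result

# (8)_3 = 8 * 7 * 6 counts ordered 3-permutations of an 8-element set.
print(falling_factorial(8, 3))  # 336
# Rising factorial 2^(3) = 2 * 3 * 4.
print(rising_factorial(2, 3))   # 24
```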
https://en.wikipedia.org/wiki/Wire%20bonding
Wire bonding is the method of making interconnections between an integrated circuit (IC) or other semiconductor device and its packaging during semiconductor device fabrication. Although less common, wire bonding can be used to connect an IC to other electronics or to connect from one printed circuit board (PCB) to another. Wire bonding is generally considered the most cost-effective and flexible interconnect technology and is used to assemble the vast majority of semiconductor packages. Wire bonding can be used at frequencies above 100 GHz.

Materials

Bondwires usually consist of one of the following materials:

Aluminium
Copper
Silver
Gold

Wire diameters start from under 10 μm and can be up to several hundred micrometres for high-powered applications. The wire bonding industry is transitioning from gold to copper. This change has been instigated by the rising cost of gold and the comparatively stable, and much lower, cost of copper. While possessing higher thermal and electrical conductivity than gold, copper had previously been seen as less reliable due to its hardness and susceptibility to corrosion. By 2015, it is expected that more than a third of all wire bonding machines in use will be set up for copper. Copper wire has become one of the preferred materials for wire bonding interconnects in many semiconductor and microelectronic applications. Copper is used for fine wire ball bonding in sizes from up to . Copper wire has the ability of being used at smaller diameters providing the same performance as gold without the high material cost. Copper wire up to can be successfully wedge bonded. Large diameter copper wire can and does replace aluminium wire where high current carrying capacity is needed or where there are problems with complex geometry. Annealing and process steps used by manufacturers enhance the ability to use large diameter copper wire to wedge bond to silicon without damage occurring to the die. Copper wire does pose some challenges in
https://en.wikipedia.org/wiki/WinMX
WinMX (Windows Music Exchange) is a freeware peer-to-peer file sharing program authored in 2000 by Kevin Hearn (president of Frontcode Technologies) in Windsor, Ontario (Canada). According to one study, it was the number one source for online music in 2005, with an estimated 2.1 million users. Frontcode Technologies itself abandoned development of WinMX in September 2005, but developers brought the service back online within a few days by releasing patches. WinMX continues to be used by a community of enthusiasts. Kevin Hearn released Tixati in 2009 and Fopnu in 2017. Fopnu is a client and a network with some similarities to WinMX. In 2021, he released DarkMX, a serverless file sharing client with built-in privacy-preserving features and a built-in Tor client, as well as the ability to host a .onion file-sharing service that is reachable via the Tor Browser.

History

Beginnings

WinMX began its life as an OpenNap client capable of connecting to several servers simultaneously. Frontcode Technologies later created a proprietary protocol, termed WinMX Peer Network Protocol (WPNP), which was used starting with WinMX 2 in May 2001. Frontcode Technologies operated several peer cache servers to aid WPNP network operation. Downloads can be very fast for popular songs, since the user can run a "multi-point download" that simultaneously downloads the same file in small pieces from several users. The WinMX program includes a few built-in features such as bandwidth monitoring, short messaging, and hosting chatrooms, and functions as an OpenNap client. Users could negotiate an exchange of their files with the help of the short messaging system or chat. After the transfers start, each user has the option of selecting bandwidth for the other to make sure both transfers end more or less at the same time.

Closure of Frontcode Technologies

On September 13, 2005, Frontcode Technologies received a cease and desist letter from the Recording Industry Association of America demanding that they
https://en.wikipedia.org/wiki/Ball%20bonding
Ball bonding is a type of wire bonding, and is the most common way to make the electrical interconnections between a bare silicon die and the lead frame of the package it is placed in during semiconductor device fabrication. Gold or copper wire can be used, though gold is more common because its oxide is not as problematic in making a weld. If copper wire is used, nitrogen must be used as a cover gas to prevent the copper oxides from forming during the wire bonding process. Copper is also harder than gold, which makes damage to the surface of the chip more likely. However copper is cheaper than gold and has superior electrical properties, and so remains a compelling choice. Almost all modern ball bonding processes use a combination of heat, pressure, and ultrasonic energy to make a weld at each end of the wire. The wire used can be as small as 15 µm in diameter—such that several welds could fit across the width of a human hair. A person upon first seeing a ball bonder will usually compare its operation to that of a sewing machine. In fact there is a needle-like disposable tool called the capillary, through which the wire is fed. A high-voltage electric charge is applied to the wire. This melts the wire at the tip of the capillary. The tip of the wire forms into a ball because of the surface tension of the molten metal. The ball quickly solidifies, and the capillary is lowered to the surface of the chip, which is typically heated to at least 125°C. The machine then pushes down on the capillary and applies ultrasonic energy with an attached transducer. The combined heat, pressure, and ultrasonic energy create a weld between the copper or gold ball and the surface of the chip—which is usually copper or aluminum. This is the so-called ball bond that gives the process its name. (All-aluminum systems in semiconductor fabrication eliminate the "purple plague"—a brittle gold-aluminum intermetallic compound—sometimes associated with pure gold bonding wire. This property
https://en.wikipedia.org/wiki/Keystone%20species
A keystone species is a species that has a disproportionately large effect on its natural environment relative to its abundance, a concept introduced in 1969 by the zoologist Robert T. Paine. Keystone species play a critical role in maintaining the structure of an ecological community, affecting many other organisms in an ecosystem and helping to determine the types and numbers of various other species in the community. Without keystone species, the ecosystem would be dramatically different or cease to exist altogether. Some keystone species, such as the wolf, are also apex predators. The role that a keystone species plays in its ecosystem is analogous to the role of a keystone in an arch. While the keystone is under the least pressure of any of the stones in an arch, the arch still collapses without it. Similarly, an ecosystem may experience a dramatic shift if a keystone species is removed, even though that species was a small part of the ecosystem by measures of biomass or productivity. The keystone species concept became popular in conservation biology, alongside flagship and umbrella species. Although the concept is valued as a descriptor for particularly strong inter-species interactions, and has allowed easier communication between ecologists and conservation policy-makers, it has been criticized for oversimplifying complex ecological systems.

History

The concept of the keystone species was introduced in 1969 by zoologist Robert T. Paine. Paine developed the concept to explain his observations and experiments on the relationships between marine invertebrates of the intertidal zone (between the high and low tide lines), including starfish and mussels. He removed the starfish from an area, and documented the effects on the ecosystem. In his 1966 paper, Food Web Complexity and Species Diversity, Paine had described such a system in Makah Bay in Washington. In his 1969 paper, Paine proposed the keystone species concept, using Pisaster ochraceus, a species of starfish generall
https://en.wikipedia.org/wiki/Embedded%20operating%20system
An embedded operating system is an operating system for embedded computer systems. Embedded systems are computer systems designed to perform a specific task, with an emphasis on functionality and reliability. Depending on the method used for multitasking, this type of operating system might be considered a real-time operating system (RTOS). All embedded systems contain a processor and software. There must be a place for the embedded software to store the executable, and temporary storage for run-time data processing. The main memory of an embedded system can be ROM and RAM. All embedded systems must also contain some form of input interface and output interface to function. The embedded hardware is usually unique and varies from application to application. Because the hardware running the embedded operating system can be very limited in resources, the design of these operating systems may be narrow in scope, tailored to a specific application, so as to achieve the desired operation under the given hardware constraints. The embedded operating system that organizes and controls the hardware usually determines which other embedded hardware is needed. To take better advantage of the processing power of the central processing unit (CPU), software developers may write critical code directly in assembly language. This machine-efficient language can potentially result in gains in speed on deterministic systems, at the cost of portability and maintainability. Often, however, embedded operating systems are written entirely in portable languages, such as C.

Operating systems on typical embedded systems

Embedded operating systems have been developed for consumer electronics, including cameras and mobile phones. Embedded operating systems also run on automotive electronics, helping the driver with cruise control as well as navigation. Furthermore, factory automation infrastructure requires embedded operating systems. In everyday life, embedded operating systems can
https://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall%20algorithm
In computer science, the Floyd–Warshall algorithm (also known as Floyd's algorithm, the Roy–Warshall algorithm, the Roy–Floyd algorithm, or the WFI algorithm) is an algorithm for finding shortest paths in a directed weighted graph with positive or negative edge weights (but with no negative cycles). A single execution of the algorithm will find the lengths (summed weights) of shortest paths between all pairs of vertices. Although it does not return details of the paths themselves, it is possible to reconstruct the paths with simple modifications to the algorithm. Versions of the algorithm can also be used for finding the transitive closure of a relation R, or (in connection with the Schulze voting system) widest paths between all pairs of vertices in a weighted graph.

History and naming

The Floyd–Warshall algorithm is an example of dynamic programming, and was published in its currently recognized form by Robert Floyd in 1962. However, it is essentially the same as algorithms previously published by Bernard Roy in 1959 and also by Stephen Warshall in 1962 for finding the transitive closure of a graph, and is closely related to Kleene's algorithm (published in 1956) for converting a deterministic finite automaton into a regular expression. The modern formulation of the algorithm as three nested for-loops was first described by Peter Ingerman, also in 1962.

Algorithm

The Floyd–Warshall algorithm compares many possible paths through the graph between each pair of vertices. It is guaranteed to find all shortest paths and is able to do this with Θ(|V|³) comparisons in a graph with |V| vertices, even though there may be up to |V|² edges in the graph. It does so by incrementally improving an estimate on the shortest path between two vertices, until the estimate is optimal. Consider a graph G with vertices numbered 1 through N. Further consider a function shortestPath(i, j, k) that returns the length of the shortest possible path (if one exists) from i to j using vertices only from the set {1, 2, ..., k} as intermediate points along the way.
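The incremental-improvement idea above, allowing one more vertex as an intermediate point on each pass, can be sketched directly in code; a minimal implementation (the dict-of-edges encoding is an illustrative choice):

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest path lengths for vertices 0..n-1.

    edges: dict mapping (u, v) -> weight of the directed edge u -> v.
    Assumes the graph has no negative cycles, as the algorithm requires.
    """
    # dist[i][j] starts as the direct edge weight (INF when absent).
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (u, v), w in edges.items():
        dist[u][v] = min(dist[u][v], w)
    # Pass k allows vertex k as an intermediate point, improving estimates.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

d = floyd_warshall(4, {(0, 1): 5, (1, 2): 3, (2, 3): 1, (0, 3): 10})
print(d[0][3])  # 9: the path 0 -> 1 -> 2 -> 3 beats the direct edge of 10
```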
https://en.wikipedia.org/wiki/Trade%20name
A trade name, trading name, or business name is a pseudonym used by companies that do not operate under their registered company name. The term for this type of alternative name is a "fictitious" business name. Registering the fictitious name with a relevant government body is often required. In a number of countries, the phrase "trading as" (abbreviated to t/a) is used to designate a trade name. In the United States, the phrase "doing business as" (abbreviated to DBA, dba, d.b.a., or d/b/a) is used, among others, such as assumed business name or fictitious business name. In Canada, "operating as" (abbreviated to o/a) and "trading as" are used, although "doing business as" is also sometimes used. A company typically uses a trade name to conduct business under a simpler name rather than its formal and often lengthier name. Trade names are also used when a preferred name cannot be registered, often because it may already be registered or is too similar to a name that is already registered.

Legal aspects

Using one (or more) fictitious business name(s) does not create one (or more) separate legal entities. The distinction between a registered legal name and a fictitious business name, or trade name, is important because fictitious business names do not always identify the entity that is legally responsible. Legal agreements (such as contracts) are normally made using the registered legal name of the business. If a corporation fails to consistently adhere to such important legal formalities as using its registered legal name in contracts, it may be subject to piercing of the corporate veil. In English, trade names are generally treated as proper nouns.

By country

Argentina

In Argentina, a trade name is known as a nombre de fantasía ('fantasy' or 'fiction' name), and the legal name of a business is called a razón social (social name).

Brazil

In Brazil, a trade name is known as a nome fantasia ('fantasy' or 'fiction' name), and the legal name of business
https://en.wikipedia.org/wiki/Opcode
In computing, an opcode (abbreviated from operation code; also known as instruction machine code, instruction code, instruction syllable, instruction parcel, or opstring) is the portion of a machine language instruction that specifies the operation to be performed. Besides the opcode itself, most instructions also specify the data they will process, in the form of operands. In addition to opcodes used in the instruction set architectures of various CPUs, which are hardware devices, they can also be used in abstract computing machines as part of their byte code specifications.

Overview

Specifications and format of the opcodes are laid out in the instruction set architecture (ISA) of the processor in question, which may be a general CPU or a more specialized processing unit. Opcodes for a given instruction set can be described through the use of an opcode table detailing all possible opcodes. Apart from the opcode itself, an instruction normally also has one or more specifiers for operands (i.e. data) on which the operation should act, although some operations may have implicit operands, or none at all. There are instruction sets with nearly uniform fields for opcode and operand specifiers, as well as others (the x86 architecture, for instance) with a more complicated, variable-length structure. Instruction sets can be extended through the use of opcode prefixes, which add a subset of new instructions made up of existing opcodes following reserved byte sequences.

Operands

Depending on the architecture, the operands may be register values, values in the stack, other memory values, I/O ports (which may also be memory-mapped), etc., specified and accessed using more or less complex addressing modes. The types of operations include arithmetic, data copying, logical operations, and program control, as well as special instructions (such as CPUID and others). Assembly language, or just assembly, is a low-level programming language, which uses mnemonic instructions and operands to
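To illustrate how an opcode selects the operation while the bytes that follow supply its operands, here is a toy stack-machine interpreter in the spirit of the abstract byte-code machines mentioned above; the opcode values and mnemonics are invented for this example and do not belong to any real ISA:

```python
# Invented one-byte opcodes for a toy stack machine.
PUSH, ADD, MUL, HALT = 0x01, 0x02, 0x03, 0xFF

def run(bytecode):
    """Fetch each opcode and dispatch on it; PUSH has one immediate operand."""
    stack, pc = [], 0
    while True:
        op = bytecode[pc]
        pc += 1
        if op == PUSH:            # explicit operand byte follows the opcode
            stack.append(bytecode[pc])
            pc += 1
        elif op == ADD:           # implicit operands: the top two stack values
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            return stack.pop()
        else:
            raise ValueError(f"unknown opcode {op:#04x}")

# Encodes (2 + 3) * 4.
program = bytes([PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT])
print(run(program))  # 20
```

Note how ADD and MUL take their operands implicitly from the stack, while PUSH carries an explicit immediate operand, mirroring the explicit/implicit operand distinction described above.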
https://en.wikipedia.org/wiki/Poincar%C3%A9%20group
The Poincaré group, named after Henri Poincaré (1906), was first defined by Hermann Minkowski (1908) as the group of Minkowski spacetime isometries. It is a ten-dimensional non-abelian Lie group that is of importance as a model in our understanding of the most basic fundamentals of physics.

Overview

A Minkowski spacetime isometry has the property that the interval between events is left invariant. For example, if everything were postponed by two hours, including the two events and the path you took to go from one to the other, then the time interval between the events recorded by a stop-watch you carried with you would be the same. Or if everything were shifted five kilometres to the west, or turned 60 degrees to the right, you would also see no change in the interval. It turns out that the proper length of an object is also unaffected by such a shift. A time or space reversal (a reflection) is also an isometry of this group. In Minkowski space (i.e. ignoring the effects of gravity), there are ten degrees of freedom of the isometries, which may be thought of as translation through time or space (four degrees, one per dimension); reflection through a plane (three degrees, the freedom in orientation of this plane); or a "boost" in any of the three spatial directions (three degrees). Composition of transformations is the operation of the Poincaré group, with proper rotations being produced as the composition of an even number of reflections. In classical physics, the Galilean group is a comparable ten-parameter group that acts on absolute time and space. Instead of boosts, it features shear mappings to relate co-moving frames of reference.

Poincaré symmetry

Poincaré symmetry is the full symmetry of special relativity. It includes: translations (displacements) in time and space (P), forming the abelian Lie group of translations on space-time; rotations in space, forming the non-abelian Lie group of three-dimensional rotations (J); boosts, transformations conne
https://en.wikipedia.org/wiki/Minkowski%20space
In mathematical physics, Minkowski space (or Minkowski spacetime) is a four-dimensional manifold that combines three-dimensional Euclidean space and time into a single model of spacetime. A point in Minkowski space is an event, specified by a four-vector (x, y, z, t) of three spatial coordinates and one time coordinate. The model shows how the spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it "was grown on experimental physical grounds." Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalized. While the individual components in Euclidean space and time might differ due to length contraction and time dilation, in Minkowski spacetime all frames of reference will agree on the total interval in spacetime between events. Minkowski space differs from four-dimensional Euclidean space insofar as it treats time differently than the three spatial dimensions. In 3-dimensional Euclidean space, the isometry group (the maps preserving the regular Euclidean distance) is the Euclidean group. It is generated by rotations, reflections and translations. When time is appended as a fourth dimension, the further transformations of translations in time and Lorentz boosts are added, and the group of all these transformations is called the Poincaré group. Minkowski's model follows special relativity, in which motion relative to an observer causes time dilation, changes the length scale applied to the moving frame, and shifts the phase of light. Spacetime is equipped with an indefinite non-degenerate bilinear form, called
https://en.wikipedia.org/wiki/Lorentz%20group
In physics and mathematics, the Lorentz group is the group of all Lorentz transformations of Minkowski spacetime, the classical and quantum setting for all (non-gravitational) physical phenomena. The Lorentz group is named for the Dutch physicist Hendrik Lorentz. For example, the following laws, equations, and theories respect Lorentz symmetry: The kinematical laws of special relativity Maxwell's field equations in the theory of electromagnetism The Dirac equation in the theory of the electron The Standard Model of particle physics The Lorentz group expresses the fundamental symmetry of space and time of all known fundamental laws of nature. In small enough regions of spacetime where gravitational variances are negligible, physical laws are Lorentz invariant in the same manner as special relativity. Basic properties The Lorentz group is a subgroup of the Poincaré group—the group of all isometries of Minkowski spacetime. Lorentz transformations are, precisely, isometries that leave the origin fixed. Thus, the Lorentz group is the isotropy subgroup with respect to the origin of the isometry group of Minkowski spacetime. For this reason, the Lorentz group is sometimes called the homogeneous Lorentz group while the Poincaré group is sometimes called the inhomogeneous Lorentz group. Lorentz transformations are examples of linear transformations; general isometries of Minkowski spacetime are affine transformations. Mathematically, the Lorentz group may be described as the indefinite orthogonal group O(1,3), the matrix Lie group that preserves the quadratic form (t, x, y, z) ↦ t² − x² − y² − z² on R⁴ (the vector space equipped with this quadratic form is sometimes written R1,3). This quadratic form is, when put in matrix form (see classical orthogonal group), interpreted in physics as the metric tensor of Minkowski spacetime. The Lorentz group is a six-dimensional noncompact non-abelian real Lie group that is not connected. The four connected components are not simply connected. The identity compone
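The defining property of O(1,3) — preservation of the quadratic form, i.e. Λᵀ η Λ = η for the Minkowski metric η — can be verified directly for boost matrices. The velocities 0.5c and 0.3c below are illustrative; the closure check uses the standard relativistic velocity-addition formula for collinear boosts:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

def boost_x(beta):
    """Lorentz boost along the x-axis with velocity beta = v/c."""
    g = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * beta
    return L

L1, L2 = boost_x(0.5), boost_x(0.3)

# Defining property of O(1,3): the transformation preserves the quadratic form.
assert np.allclose(L1.T @ eta @ L1, eta)

# Group closure: two collinear boosts compose to the boost for the
# relativistically added velocity (0.5 + 0.3) / (1 + 0.5 * 0.3).
beta12 = (0.5 + 0.3) / (1 + 0.5 * 0.3)
assert np.allclose(L2 @ L1, boost_x(beta12))
```

The first assertion is exactly the membership condition for O(1,3); the second illustrates that boosts along a fixed axis form a one-parameter subgroup.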
https://en.wikipedia.org/wiki/Elbrus%20%28computer%29
The Elbrus is a line of Soviet and Russian computer systems developed by the Lebedev Institute of Precision Mechanics and Computer Engineering. These computers are used in the space program, nuclear weapons research, and defense systems, as well as for theoretical and research purposes, such as experimental Refal and CLU translators. History Historically, computers under the Elbrus brand have comprised several different instruction set architectures (ISAs). The first of them was the line of large fourth-generation computers developed by Vsevolod Burtsev. These were heavily influenced by the Burroughs large systems and, like them, implemented a tagged architecture and a variant of ALGOL-68 as the system programming language. After Burtsev retired, Lebedev's new chief developer, Boris Babayan, introduced a completely new system architecture. Differing completely from the architecture of both the Elbrus 1 and Elbrus 2, it employed a very long instruction word (VLIW) approach. In 1992, a spin-off company, the Moscow Center of SPARC Technologies (MCST), was created and continued development, using "Elbrus" as a brand for all computer systems developed by the company. In the late 1990s, a series of SPARC-based central processing units (CPUs) were developed at MCST as a way to raise funds for in-house semiconductor intellectual property core development and to fill the niche of domestically developed CPUs for the backdoor-wary military. Models Elbrus 1 (1979) was the first in the line: a 10-processor computer with superscalar, out-of-order execution and reduced instruction set computer (RISC) processors. A side development was the Elbrus-1K2, an update of the 1965 BESM-6. Elbrus 2 (1984) was a re-implementation of the Elbrus 1 architecture with faster emitter-coupled logic (ECL) chips. Elbrus 3 (1990) was a 16-processor computer developed by Babayan's team, and one of the first VLIW computers in the world. Elbrus 2000 (2001) was a micropro
https://en.wikipedia.org/wiki/Index%20of%20a%20subgroup
In mathematics, specifically group theory, the index of a subgroup H in a group G is the number of left cosets of H in G, or equivalently, the number of right cosets of H in G. The index is denoted |G : H| or [G : H] or (G : H). Because G is the disjoint union of the left cosets and because each left coset has the same size as H, the index is related to the orders of the two groups by the formula |G| = |G : H| |H| (interpret the quantities as cardinal numbers if some of them are infinite). Thus the index |G : H| measures the "relative sizes" of G and H. For example, let G = Z be the group of integers under addition, and let H = 2Z be the subgroup consisting of the even integers. Then 2Z has two cosets in Z, namely the set of even integers and the set of odd integers, so the index |Z : 2Z| is 2. More generally, |Z : nZ| = n for any positive integer n. When G is finite, the formula may be written as |G : H| = |G|/|H|, and it implies Lagrange's theorem that |H| divides |G|. When G is infinite, |G : H| is a nonzero cardinal number that may be finite or infinite. For example, |Z : 2Z| = 2, but |R : Z| is infinite. If N is a normal subgroup of G, then |G : N| is equal to the order of the quotient group G/N, since the underlying set of G/N is the set of cosets of N in G. Properties If H is a subgroup of G and K is a subgroup of H, then |G : K| = |G : H| |H : K|. If H and K are subgroups of G, then |G : H ∩ K| ≤ |G : H| |G : K|, with equality if HK = G. (If |G : H ∩ K| is finite, then equality holds if and only if HK = G.) Equivalently, if H and K are subgroups of G, then |H : H ∩ K| ≤ |G : K|, with equality if HK = G. (If |H : H ∩ K| is finite, then equality holds if and only if HK = G.) If G and H are groups and φ : G → H is a homomorphism, then the index of the kernel of φ in G is equal to the order of the image: |G : ker φ| = |im φ|. Let G be a group acting on a set X, and let x ∈ X. Then the cardinality of the orbit of x under G is equal to the index of the stabilizer of x: |G·x| = |G : Gx|. This is known as the orbit-stabilizer theorem. As a special case of the orbit-stabilizer theorem, the number of conjugates of an element x is equal to the index of the centralizer of x in G. Similarly, the number of conjugates of a subgroup H in G is equal to the index
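The coset-counting definition is easy to verify computationally. A minimal sketch (the cyclic group Z_12 and the subgroup {0, 4, 8} are illustrative choices):

```python
# Model the cyclic group Z_12 under addition mod 12, with subgroup H = {0, 4, 8}.
n = 12
G = set(range(n))
H = {0, 4, 8}

# Left cosets g + H; in an abelian group, left and right cosets coincide.
cosets = {frozenset((g + h) % n for h in H) for g in G}

index = len(cosets)
assert index == len(G) // len(H)   # |G : H| = |G| / |H| for finite G
print(index)                        # → 4
```

The four cosets here are {0,4,8}, {1,5,9}, {2,6,10}, and {3,7,11}, which partition G as the text describes.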
https://en.wikipedia.org/wiki/Copper%20interconnects
In semiconductor technology, copper interconnects are interconnects made of copper. They are used in silicon integrated circuits (ICs) to reduce propagation delays and power consumption. Since copper is a better conductor than aluminium, ICs using copper for their interconnects can have interconnects with narrower dimensions, and use less energy to pass electricity through them. Together, these effects lead to ICs with better performance. They were first introduced by IBM, with assistance from Motorola, in 1997. The transition from aluminium to copper required significant developments in fabrication techniques, including radically different methods for patterning the metal as well as the introduction of barrier metal layers to isolate the silicon from potentially damaging copper atoms. Although methods of superconformal copper electrodeposition had been known since the late 1960s, their application at the (sub)micron via scale (e.g. in microchips) started only in 1988–1995 (see figure). By 2002 it had become a mature technology, and research and development efforts in this field started to decline. Patterning Although some form of volatile copper compound has been known to exist since 1947, with more discovered as the century progressed, none were in industrial use, so copper could not be patterned by the previous techniques of photoresist masking and plasma etching that had been used with great success with aluminium. The inability to plasma etch copper called for a drastic rethinking of the metal patterning process, and the result of this rethinking was a process referred to as additive patterning, also known as a "Damascene" or "dual-Damascene" process by analogy to a traditional technique of metal inlaying. In this process, the underlying silicon oxide insulating layer is patterned with open trenches where the conductor should be. A thick coating of copper that significantly overfills the trenches is deposited on the insulator, and chemical-mechanical planar
https://en.wikipedia.org/wiki/DigiCipher%202
DigiCipher 2, or simply DCII, is a proprietary standard format of digital signal transmission and encryption, used with MPEG-2/MPEG-4 video compression on many communications satellite television and audio signals. The DCII standard was originally developed in 1997 by General Instrument, which became the Home and Network Mobility division of Motorola, was bought by Google in August 2011, with the Home portion of the division later sold to Arris. The original attempt for a North American digital signal encryption and compression standard was DigiCipher 1, which was used most notably in the now-defunct PrimeStar medium-power direct broadcast satellite (DBS) system during the early 1990s. The DCII standard predates wide acceptance of DVB-based digital terrestrial television compression (although not cable or satellite DVB) and is therefore incompatible with the DVB standard. Approximately 70% of newer first-generation digital cable networks in North America use the 4DTV/DigiCipher 2 format. The use of DCII is most prevalent in North American digital cable television set-top boxes. DCII is also used on Motorola's 4DTV digital satellite television tuner and Shaw Direct's DBS receiver. The DigiCipher 2 encryption standard was reverse engineered in 2016. Technical specifications DigiCipher II uses QPSK and BPSK at the same time. The primary difference between DigiCipher 2 and DVB lies in how each standard handles SI metadata, or System Information: whereas DVB reserves packet identifiers from 16 to 31 for metadata, DigiCipher reserves only packet identifier 8187 for its master guide table, which acts as a look-up table for all other metadata tables. DigiCipher 2 also extends the MPEG program number that is assigned to each service in a transport stream with the concept of a virtual channel number, whereas the DVB system never defined this type of remapping, preferring to use a registry of network identifiers to further differentiate p
https://en.wikipedia.org/wiki/Security%20protocol%20notation
In cryptography, security (engineering) protocol notation, also known as protocol narrations and Alice & Bob notation, is a way of expressing a protocol of correspondence between entities of a dynamic system, such as a computer network. In the context of a formal model, it allows reasoning about the properties of such a system. The standard notation consists of a set of principals (traditionally named Alice, Bob, Charlie, and so on) who wish to communicate. They may have access to a server S, shared keys K, timestamps T, and can generate nonces N for authentication purposes. A simple example might be the following: A → B : {X}KA,B This states that Alice intends a message for Bob consisting of a plaintext X encrypted under the shared key KA,B. Another example might be the following: B → A : {NB}KA This states that Bob intends a message for Alice consisting of a nonce NB encrypted using Alice's public key. A key with two subscripts, KA,B, is a symmetric key shared by the two corresponding individuals. A key with one subscript, KA, is the public key of the corresponding individual. A private key is represented as the inverse of the public key. The notation specifies only the operation and not its semantics — for instance, private key encryption and signature are represented identically. We can express more complicated protocols in such a fashion. See Kerberos as an example. Some sources refer to this notation as Kerberos Notation. Some authors consider the notation used by Steiner, Neuman, & Schiller a notable reference. Several models exist to reason about security protocols in this way, one of which is BAN logic. Security protocol notation inspired many of the programming languages used in choreographic programming.
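As an illustration of how such narrations can be made machine-readable for formal analysis, here is a minimal, hypothetical Python representation — the `Message` class and its field names are inventions for this sketch, not part of any standard tool:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str      # principal sending the message
    receiver: str    # intended recipient
    payload: str     # plaintext or nonce being sent
    key: str         # key under which the payload is encrypted

    def narration(self) -> str:
        """Render the step in Alice & Bob notation."""
        return f"{self.sender} → {self.receiver} : {{{self.payload}}}{self.key}"

# The two examples from the text, rendered as protocol-narration steps.
m1 = Message("A", "B", "X", "KA,B")    # plaintext under the shared symmetric key
m2 = Message("B", "A", "NB", "KA")     # nonce under Alice's public key
print(m1.narration())   # → A → B : {X}KA,B
print(m2.narration())   # → B → A : {NB}KA
```

A full protocol is then just an ordered list of such steps, which is essentially what tools based on BAN logic and similar models consume.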
https://en.wikipedia.org/wiki/CAPTCHA
A CAPTCHA is a type of challenge–response test used in computing to determine whether the user is human in order to deter bot attacks and spam. The term was coined in 2003 by Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford. It is a contrived acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart." A historically common type of CAPTCHA (displayed as Version 1.0) was first invented in 1997 by two groups working in parallel. This form of CAPTCHA requires entering a sequence of letters or numbers from a distorted image. Because the test is administered by a computer, in contrast to the standard Turing test that is administered by a human, CAPTCHAs are sometimes described as reverse Turing tests. Two widely used CAPTCHA services are hCaptcha, an independent company, and reCAPTCHA, offered by Google. It takes the average person approximately 10 seconds to solve a typical CAPTCHA. Purpose CAPTCHAs' purpose is to prevent spam on websites, such as promotion spam, registration spam, and data scraping; bots are less likely to abuse websites with spamming if those websites use CAPTCHA. Many websites use CAPTCHA effectively to prevent bot raiding. CAPTCHAs are designed so that humans can complete them, while most robots cannot. Newer CAPTCHAs look at the user's behaviour on the internet to prove that they are a human. A normal CAPTCHA test only appears if the user acts like a bot, such as when they request webpages or click links too fast. History Since the 1980s–1990s, users have wanted to make text illegible to computers. The first such people were hackers, posting about sensitive topics to Internet forums they thought were being automatically monitored on keywords. To circumvent such filters, they replaced a word with look-alike characters: HELLO could be rendered with digits and punctuation resembling its letters in many different ways, such that a filter could not detect all of them. This later became known as leetspeak. One of the earliest commercial uses of CAPTCHAs wa
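The filter-evasion idea behind leetspeak can be made concrete with a short sketch; the substitution table below is a hypothetical example of look-alike mappings, not a documented historical list:

```python
import itertools

# Hypothetical look-alike substitutions of the kind early forum posters used.
LOOKALIKES = {"E": ["E", "3"], "L": ["L", "1", "|_"], "O": ["O", "0", "()"]}

def variants(word: str):
    """Enumerate every look-alike rendering of a word."""
    pools = [LOOKALIKES.get(ch, [ch]) for ch in word]
    return {"".join(combo) for combo in itertools.product(*pools)}

vs = variants("HELLO")
# A single keyword-filter entry cannot match all renderings at once:
assert "HELLO" in vs and "H3110" in vs
print(len(vs))   # → 1 * 2 * 3 * 3 * 3 = 54 variants
```

Even this tiny three-letter table multiplies one keyword into 54 spellings, which is why naive keyword filters were easy to defeat.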
https://en.wikipedia.org/wiki/Akra%E2%80%93Bazzi%20method
In computer science, the Akra–Bazzi method, or Akra–Bazzi theorem, is used to analyze the asymptotic behavior of the mathematical recurrences that appear in the analysis of divide and conquer algorithms where the sub-problems have substantially different sizes. It is a generalization of the master theorem for divide-and-conquer recurrences, which assumes that the sub-problems have equal size. It is named after mathematicians Mohamad Akra and Louay Bazzi. Formulation The Akra–Bazzi method applies to recurrence formulas of the form: T(x) = g(x) + Σi=1..k ai T(bi x + hi(x)) for x ≥ x0. The conditions for usage are: sufficient base cases are provided; ai and bi are constants for all i; ai > 0 for all i; 0 < bi < 1 for all i; |g′(x)| ∈ O(x^c), where c is a constant and O notates Big O notation; |hi(x)| ∈ O(x/(log x)²) for all i; x0 is a constant. The asymptotic behavior of T(x) is found by determining the value of p for which Σi=1..k ai bi^p = 1 and plugging that value into the equation: T(x) ∈ Θ( x^p ( 1 + ∫₁ˣ g(u)/u^(p+1) du ) ) (see Θ). Intuitively, hi(x) represents a small perturbation in the index of T. By noting that ⌊x⌋ = x + (⌊x⌋ − x) and that the absolute value of ⌊x⌋ − x is always between 0 and 1, hi(x) can be used to ignore the floor function in the index. Similarly, one can also ignore the ceiling function. For example, T(⌊x/2⌋) and T(x/2) will, as per the Akra–Bazzi theorem, have the same asymptotic behavior. Example Suppose T(n) is defined as 1 for integers 1 ≤ n ≤ 3 and as n² + (7/4) T(⌊n/2⌋) + T(⌈3n/4⌉) for integers n > 3. In applying the Akra–Bazzi method, the first step is to find the value of p for which (7/4)(1/2)^p + (3/4)^p = 1. In this example, p = 2. Then, using the formula, the asymptotic behavior can be determined as follows: T(x) ∈ Θ( x² ( 1 + ∫₁ˣ u²/u³ du ) ) = Θ( x² (1 + ln x) ) = Θ( x² log x ). Significance The Akra–Bazzi method is more useful than most other techniques for determining asymptotic behavior because it covers such a wide variety of cases. Its primary application is the approximation of the running time of many divide-and-conquer algorithms. For example, in merge sort, the number of comparisons required in the worst case, which is roughly proportional to its runtime, is given recursively as T(1) = 0 and T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + n − 1 for integers n > 1, and can thus be computed using the Akra–Bazzi method to be Θ(n log n). See also Master theo
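The worked example can be sanity-checked numerically: if T(n) ∈ Θ(n² log n), then T(n)/(n² ln n) should settle toward a constant as n grows. A quick sketch (the sample points 2¹⁰, 2¹⁴, 2¹⁸ are arbitrary):

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def T(n: int) -> float:
    """The example recurrence: T(n) = n^2 + (7/4) T(floor(n/2)) + T(ceil(3n/4))."""
    if n <= 3:
        return 1.0
    return n**2 + 1.75 * T(n // 2) + T(-(-3 * n // 4))   # -(-a // b) is ceil(a/b)

# Akra-Bazzi predicts Theta(n^2 log n): the normalized ratio should flatten out,
# with successive differences shrinking as n grows.
ratios = [T(2**k) / ((2**k) ** 2 * math.log(2**k)) for k in (10, 14, 18)]
assert all(r > 0 for r in ratios)
assert abs(ratios[2] - ratios[1]) < abs(ratios[1] - ratios[0])
print([round(r, 3) for r in ratios])
```

Dividing by n² alone would give a sequence that keeps growing, and dividing by n² log² n would drive it to zero; only the Θ(n² log n) normalization levels off.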
https://en.wikipedia.org/wiki/Terrestrial%20television
Terrestrial television or over-the-air television (OTA) is a type of television broadcasting in which the signal transmission occurs via radio waves from the terrestrial (Earth-based) transmitter of a TV station to a TV receiver having an antenna. The term terrestrial is more common in Europe and Latin America, while in Canada and the United States it is called over-the-air or simply broadcast. This type of TV broadcast is distinguished from newer technologies, such as satellite television (direct broadcast satellite or DBS television), in which the signal is transmitted to the receiver from an overhead satellite; cable television, in which the signal is carried to the receiver through a cable; and Internet Protocol television, in which the signal is received over an Internet stream or on a network utilizing the Internet Protocol. Terrestrial television stations broadcast on television channels with frequencies between about 52 and 600 MHz in the VHF and UHF bands. Since radio waves in these bands travel by line of sight, reception is generally limited by the visual horizon to distances of , although under better conditions and with tropospheric ducting, signals can sometimes be received hundreds of kilometers distant. Terrestrial television was the first technology used for television broadcasting. The BBC began broadcasting in 1929 and by 1930 many radio stations had a regular schedule of experimental television programmes. However, these early experimental systems had insufficient picture quality to attract the public, due to their mechanical scan technology, and television did not become widespread until after World War II with the advent of electronic scan television technology. The television broadcasting business followed the model of radio networks, with local television stations in cities and towns affiliated with television networks, either commercial (in the US) or government-controlled (in Europe), which provided content. Television broadcasts were in g
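The line-of-sight limit mentioned above can be estimated with the textbook horizon approximation d ≈ 3.57·√h (d in km, antenna height h in metres); for radio waves the constant is usually taken as about 4.12 because refraction bends signals slightly around the Earth's curvature. The mast and antenna heights below are illustrative:

```python
import math

def horizon_km(height_m: float, k: float = 3.57) -> float:
    """Approximate distance to the horizon (km) for an antenna height in metres.

    k = 3.57 gives the geometric (visual) horizon; k ≈ 4.12 approximates the
    radio horizon, accounting for typical atmospheric refraction.
    """
    return k * math.sqrt(height_m)

# Line-of-sight range between a 300 m transmitter mast and a 10 m rooftop
# antenna is roughly the sum of the two radio-horizon distances.
d = horizon_km(300, k=4.12) + horizon_km(10, k=4.12)
print(round(d, 1))   # roughly 84 km
```

This is why typical terrestrial coverage is limited to several tens of kilometres, while tropospheric ducting can occasionally carry signals far beyond this geometric estimate.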
https://en.wikipedia.org/wiki/Multichannel%20multipoint%20distribution%20service
Multichannel multipoint distribution service (MMDS), formerly known as broadband radio service (BRS) and also known as wireless cable, is a wireless telecommunications technology, used for general-purpose broadband networking or, more commonly, as an alternative method of cable television programming reception. MMDS is used in Australia, Barbados, Belarus, Bolivia, Brazil, Cambodia, Canada, Czech Republic, Dominican Republic, Iceland, India, Kazakhstan, Kyrgyzstan, Lebanon, Mexico, Nepal, Nigeria, Pakistan, Panama, Portugal (including Madeira), Russia, Slovakia, Sri Lanka, Sudan, Thailand, Ukraine, United States, Uruguay and Vietnam. It is most commonly used in sparsely populated rural areas, where laying cables is not economically viable, although some companies have also offered MMDS services in urban areas, most notably in Ireland, until they were phased out in 2016. Technology The BRS band uses microwave frequencies from 2.3 to 2.5 GHz. Reception of BRS-delivered television and data signals is done with a rooftop microwave antenna. The antenna is attached to a down-converter or transceiver that receives and transmits the microwave signals and converts them to frequencies compatible with standard TV tuners (much like on satellite dishes, where the signals are converted down to frequencies more compatible with standard TV coaxial cabling); some antennas use an integrated down-converter or transceiver. Digital TV channels can then be decoded with a standard cable set-top box, or directly by TVs with integrated digital tuners. Internet data can be received with a standard DOCSIS cable modem connected to the same antenna and transceiver. The MMDS band is separated into 33 (31 in the USA) 6 MHz "channels", which may be licensed to cable companies offering service in different areas of a country. The concept was to allow entities to own several channels and multiplex several television and radio services, and later Internet data, onto each channel using digital technology. Just like with d
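A quick arithmetic check of the channel plan described above (treating the 2.3–2.5 GHz figures as nominal band edges):

```python
# 33 channels of 6 MHz occupy 198 MHz, which fits within the nominal
# 200 MHz span between 2300 and 2500 MHz.
band_mhz = 2500 - 2300            # nominal BRS band span in MHz
channel_mhz = 6
channels = band_mhz // channel_mhz
print(channels)                   # → 33
assert channels * channel_mhz <= band_mhz
```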
https://en.wikipedia.org/wiki/Four%20fours
Four fours is a mathematical puzzle, the goal of which is to find the simplest mathematical expression for every whole number from 0 to some maximum, using only common mathematical symbols and the digit four. No other digit is allowed. Most versions of the puzzle require that each expression have exactly four fours, but some variations require that each expression have some minimum number of fours. The puzzle requires skill and mathematical reasoning. The first printed occurrence of the specific problem of four fours is in Knowledge: An Illustrated Magazine of Science in 1881. A similar problem involving arranging four identical digits to equal a certain amount was given in Thomas Dilworth's popular 1734 textbook The Schoolmaster's Assistant, Being a Compendium of Arithmetic Both Practical and Theoretical. W. W. Rouse Ball described it in the 6th edition (1914) of his Mathematical Recreations and Essays. In this book it is described as a "traditional recreation". Rules There are many variations of four fours; their primary difference is which mathematical symbols are allowed. Essentially all variations at least allow addition ("+"), subtraction ("−"), multiplication ("×"), division ("÷"), and parentheses, as well as concatenation (e.g., "44" is allowed). Most also allow the factorial ("!"), exponentiation (e.g. "4^44"), the decimal point (".") and the square root ("√") operation. Other operations allowed by some variations include the reciprocal function ("1/x"), subfactorial ("!" before the number: !4 equals 9), overline (an infinitely repeated digit), an arbitrary root, the square function ("sqr"), the cube function ("cube"), the cube root, the gamma function (Γ(), where Γ(x) = (x − 1)!), and percent ("%"). Thus, for example, !4 = 9, Γ(4) = 3! = 6, and 4% = 0.04. A common use of the overline in this problem is the repeating decimal .4 = .444... = 4/9 (the overline marking the repeated 4). Typically the successor function is not allowed, since any integer above 4 is trivially reachable with it. Similarly, "log" operators are usually not allowed, as they allow a general m
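A brute-force search makes the puzzle concrete. The sketch below is deliberately restricted to the four basic operations and parentheses (no concatenation, factorial, or roots), so it reaches fewer targets than the full puzzle allows:

```python
# Combine four fours with +, -, *, / over every parenthesization (all binary
# tree shapes) and collect the whole numbers that can be reached.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else None,
}

def combine(values):
    """Yield all results obtainable by fully parenthesizing the list of values."""
    if len(values) == 1:
        yield values[0]
        return
    for i in range(1, len(values)):
        for a in combine(values[:i]):
            for b in combine(values[i:]):
                for op in OPS.values():
                    r = op(a, b)
                    if r is not None:
                        yield r

reachable = set()
for r in combine([4.0, 4.0, 4.0, 4.0]):
    if abs(r - round(r)) < 1e-9:
        reachable.add(int(round(r)))

print(sorted(v for v in reachable if 0 <= v <= 10))
```

Every integer from 0 through 9 is reachable even in this restricted setting (e.g. 7 = 4 + 4 − 4/4 and 9 = 4 + 4 + 4/4); the richer operator sets in the variations above exist precisely to reach the targets that elementary arithmetic cannot.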
https://en.wikipedia.org/wiki/Channel%20capacity
Channel capacity, in electrical engineering, computer science, and information theory, is the tight upper bound on the rate at which information can be reliably transmitted over a communication channel. Following the terms of the noisy-channel coding theorem, the channel capacity of a given channel is the highest information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability. Information theory, developed by Claude E. Shannon in 1948, defines the notion of channel capacity and provides a mathematical model by which it may be computed. The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution (in symbols, C = sup over p_X of I(X; Y)). The notion of channel capacity has been central to the development of modern wireline and wireless communication systems, with the advent of novel error correction coding mechanisms that have resulted in achieving performance very close to the limits promised by channel capacity. Formal definition The basic mathematical model for a communication system is the following: W → (encoder f_n) → X^n → (channel p(y|x)) → Y^n → (decoder g_n) → Ŵ, where: W is the message to be transmitted; X is the channel input symbol (X^n is a sequence of n symbols) taken in an alphabet 𝒳; Y is the channel output symbol (Y^n is a sequence of n symbols) taken in an alphabet 𝒴; Ŵ is the estimate of the transmitted message; f_n is the encoding function for a block of length n; p(y|x) is the noisy channel, which is modeled by a conditional probability distribution; and, g_n is the decoding function for a block of length n. Let X and Y be modeled as random variables. Furthermore, let p_{Y|X}(y|x) be the conditional probability distribution function of Y given X, which is an inherent fixed property of the communication channel. Then the choice of the marginal distribution p_X(x) completely determines the joint distribution p_{X,Y}(x,y) due to the identity p_{X,Y}(x,y) = p_{Y|X}(y|x) p_X(x), which, in turn, induces a mutual infor
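As a concrete instance of maximizing mutual information over input distributions, the binary symmetric channel with crossover probability p has the closed-form capacity C = 1 − H(p), achieved by a uniform input. A small sketch:

```python
import math

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p: float) -> float:
    """Capacity of the binary symmetric channel with crossover probability p.

    The maximizing input distribution is uniform, giving C = 1 - H(p).
    """
    return 1.0 - h2(p)

print(bsc_capacity(0.0))              # → 1.0  (noiseless: one bit per channel use)
print(bsc_capacity(0.5))              # → 0.0  (output independent of input)
print(round(bsc_capacity(0.11), 3))   # → 0.5  (about half a bit per use)
```

The endpoints illustrate the two extremes the definition captures: a noiseless channel carries one full bit per use, while a channel whose output is independent of its input carries none.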
https://en.wikipedia.org/wiki/Paper%20engineering
Paper engineering is a branch of engineering that deals with the usage of physical science (e.g. chemistry and physics) and life sciences (e.g. biology and biochemistry) in conjunction with mathematics as applied to the converting of raw materials into useful paper products and co-products. The field applies various principles in process engineering and unit operations to the manufacture of paper, chemicals, energy and related materials. The following timeline shows some of the key steps in the development of the science of chemical and bioprocess engineering: From a heritage perspective, the field encompasses the design and analysis of a wide variety of thermal, chemical and biochemical unit operations employed in the manufacture of pulp and paper, and addresses the preparation of its raw materials from trees or other natural resources via a pulping process, chemical and mechanical pretreatment of these recovered biopolymer (e.g. principally, although not solely, cellulose-based) fibers in a fluid suspension, the high-speed forming and initial dewatering of a non-woven web, the development of bulk sheet properties via control of energy and mass transfer operations, as well as post-treatment of the sheet with coating, calendering, and other chemical and mechanical processes. Applications Today, the field of paper and chemical engineering is applied to the manufacture of a wide variety of products. The forestry and biology, chemical science, (bio)chemical industry scope manufactures organic and agrochemicals (fertilizers, insecticides, herbicides), oleochemicals, fragrances and flavors, food, feed, pharmaceuticals, nutraceuticals, chemicals, polymers and power from biological materials. The resulting products of paper engineering including paper, cardboard, and various paper derivatives are widely used in everyday life. In addition to being a subset of chemical engineering, the field of paper engineering is closely linked to forest management, product recycling, a
https://en.wikipedia.org/wiki/Neo%20%28The%20Matrix%29
Neo (born Thomas A. Anderson, also known as The One) is a fictional character and the protagonist of The Matrix franchise, created by the Wachowskis. He was portrayed by Keanu Reeves in the films as a cybercriminal and computer programmer, and also has a cameo in The Animatrix short film Kid's Story. Andrew Bowen provided Neo's voice in The Matrix: Path of Neo. In 2021, Reeves reprised his role in The Matrix Resurrections with what Vulture calls "his signature John Wick look". In 2008, Neo was selected by Empire as the 68th Greatest Movie Character of All Time. "Neo" is an anagram of "one", a reference to his destiny as The One who would bring peace. There are claims that a nightclub in Chicago inspired the name of the character. Neo is considered to be a superhero. Fictional character biography Thomas A. Anderson was born in Lower Downtown, Capital City, USA on March 11, 1962, according to his criminal record, or September 13, 1971, according to his passport (both seen in the film). His mother was Michelle McGahey (the name of the first film's art director) and his father was John Anderson. He attended Central West Junior High and Owen Patterson High (named after the film's production designer). In high school, he excelled at science, math and computer courses, and displayed an aptitude for literature and history. Although he had disciplinary troubles when he was thirteen to fourteen years old, Anderson went on to become a respected member of the school community through his involvement in football and hockey. At the start of the series, Neo is one of billions of humans neurally connected to the Matrix, unaware that the world he lives in is a simulated reality. The Matrix In his normal life, he is a quiet programmer for the "respectable software company" Meta Cortex, while in private he is a computer hacker who penetrates computer systems illicitly and steals information under his hacker alias "Neo". He also sells illega
https://en.wikipedia.org/wiki/CAN%20bus
A Controller Area Network (CAN bus) is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other's applications without a host computer. It is a message-based protocol, designed originally for multiplex electrical wiring within automobiles to save on copper, but it can also be used in many other contexts. For each device, the data in a frame is transmitted serially but in such a way that if more than one device transmits at the same time, the highest priority device can continue while the others back off. Frames are received by all devices, including by the transmitting device. History Development of the CAN bus started in 1983 at Robert Bosch GmbH. The protocol was officially released in 1986 at the Society of Automotive Engineers (SAE) conference in Detroit, Michigan. The first CAN controller chips were introduced by Intel in 1987, and shortly thereafter by Philips. Released in 1991, the Mercedes-Benz W140 was the first production vehicle to feature a CAN-based multiplex wiring system. Bosch published several versions of the CAN specification. The latest is CAN 2.0, published in 1991. This specification has two parts. Part A is for the standard format with an 11-bit identifier, and part B is for the extended format with a 29-bit identifier. A CAN device that uses 11-bit identifiers is commonly called CAN 2.0A, and a CAN device that uses 29-bit identifiers is commonly called CAN 2.0B. These standards are freely available from Bosch along with other specifications and white papers. In 1993, the International Organization for Standardization (ISO) released CAN standard ISO 11898, which was later restructured into two parts: ISO 11898-1 which covers the data link layer, and ISO 11898-2 which covers the CAN physical layer for high-speed CAN. ISO 11898-3 was released later and covers the CAN physical layer for low-speed, fault-tolerant CAN. The physical layer standards ISO 11898-2 and ISO 11898-3 are not part of the Bosch C
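The priority mechanism described above — lossless bitwise arbitration over a wired-AND bus, where a dominant 0 overwrites a recessive 1 — can be sketched as follows (the identifiers are illustrative):

```python
# Nodes transmit the identifier MSB-first; a 0 (dominant) overwrites a 1
# (recessive) on the bus, so the lowest numeric identifier wins arbitration
# without its frame being destroyed.

def arbitrate(ids, bits=11):
    """Return the identifier that wins arbitration among simultaneous senders."""
    contenders = list(ids)
    for bit in range(bits - 1, -1, -1):
        levels = [(i >> bit) & 1 for i in contenders]
        bus = min(levels)                 # dominant 0 wins on the wired-AND bus
        # Nodes that sent recessive 1 but read back dominant 0 back off.
        contenders = [i for i, lv in zip(contenders, levels) if lv == bus]
    assert len(contenders) == 1
    return contenders[0]

print(hex(arbitrate([0x65A, 0x123, 0x654])))   # → 0x123 (lowest ID, highest priority)
```

This is why CAN identifiers double as priorities, and why the winning frame is delivered intact even when several nodes start transmitting in the same bit time.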
https://en.wikipedia.org/wiki/Low-noise%20block%20downconverter
A low-noise block downconverter (LNB) is the receiving device mounted on satellite dishes used for satellite TV reception, which collects the radio waves from the dish and converts them to a signal which is sent through a cable to the receiver inside the building. Also called a low-noise block, low-noise converter (LNC), or even low-noise downconverter (LND), the device is sometimes inaccurately called a low-noise amplifier (LNA). The LNB is a combination of low-noise amplifier, frequency mixer, local oscillator and intermediate frequency (IF) amplifier. It serves as the RF front end of the satellite receiver, receiving the microwave signal from the satellite collected by the dish, amplifying it, and downconverting the block of frequencies to a lower block of intermediate frequencies (IF). This downconversion allows the signal to be carried to the indoor satellite TV receiver using relatively cheap coaxial cable; if the signal remained at its original microwave frequency, it would require an expensive and impractical waveguide line. The LNB is usually a small box suspended on one or more short booms, or feed arms, in front of the dish reflector, at its focus (although some dish designs have the LNB on or behind the reflector). The microwave signal from the dish is picked up by a feedhorn on the LNB and is fed to a section of waveguide. One or more metal pins, or probes, protrude into the waveguide at right angles to the axis and act as antennas, feeding the signal to a printed circuit board inside the LNB's shielded box for processing. The lower frequency IF output signal emerges from a socket on the box to which the coaxial cable connects. The LNB gets its power from the receiver or set-top box, using the same coaxial cable that carries signals from the LNB to the receiver. This phantom power travels up the cable to the LNB, in the direction opposite to the signals coming from the LNB. A corresponding component, called a block upconverter (BUC), is used at the satellite earth station (uplink) dish
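To see the downconversion step concretely, here is a quick sketch using the common figures for a European "universal" Ku-band LNB (local oscillators at 9.75 GHz for the low band and 10.6 GHz for the high band, with a 950–2150 MHz IF range on the coax); the 11.70 GHz transponder frequency is an illustrative example:

```python
def downconvert_mhz(rf_ghz: float, lo_ghz: float) -> float:
    """IF = RF - LO for a low-side local oscillator, returned in MHz."""
    return (rf_ghz - lo_ghz) * 1000.0

# An 11.70 GHz low-band transponder mixed with the 9.75 GHz local oscillator:
if_mhz = downconvert_mhz(11.70, 9.75)
print(round(if_mhz))               # → 1950
assert 950.0 <= if_mhz <= 2150.0   # low enough for ordinary coaxial cable
```

The whole received block shifts down by the same LO offset, which is what lets a cheap coax run replace a waveguide.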
https://en.wikipedia.org/wiki/Modified%20discrete%20cosine%20transform
The modified discrete cosine transform (MDCT) is a transform based on the type-IV discrete cosine transform (DCT-IV), with the additional property of being lapped: it is designed to be performed on consecutive blocks of a larger dataset, where subsequent blocks are overlapped so that the last half of one block coincides with the first half of the next block. This overlapping, in addition to the energy-compaction qualities of the DCT, makes the MDCT especially attractive for signal compression applications, since it helps to avoid artifacts stemming from the block boundaries. As a result of these advantages, the MDCT is the most widely used lossy compression technique in audio data compression. It is employed in most modern audio coding standards, including MP3, Dolby Digital (AC-3), Vorbis (Ogg), Windows Media Audio (WMA), ATRAC, Cook, Advanced Audio Coding (AAC), High-Definition Coding (HDC), LDAC, Dolby AC-4, and MPEG-H 3D Audio, as well as speech coding standards such as AAC-LD (LD-MDCT), G.722.1, G.729.1, CELT, and Opus. The discrete cosine transform (DCT) was first proposed by Nasir Ahmed in 1972, and demonstrated by Ahmed with T. Natarajan and K. R. Rao in 1974. The MDCT was later proposed by John P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987, following earlier work by Princen and Bradley (1986) to develop the MDCT's underlying principle of time-domain aliasing cancellation (TDAC), described below. (There also exists an analogous transform, the MDST, based on the discrete sine transform, as well as other, rarely used, forms of the MDCT based on different types of DCT or DCT/DST combinations.) In MP3, the MDCT is not applied to the audio signal directly, but rather to the output of a 32-band polyphase quadrature filter (PQF) bank. The output of this MDCT is postprocessed by an alias reduction formula to reduce the typical aliasing of the PQF filter bank. Such a combination of a filter bank with an MDCT is called a hybrid fil
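The lapping and time-domain aliasing cancellation (TDAC) described above can be made concrete with a naive reference implementation of the MDCT definition. This is an illustrative O(N·2N) sketch with a rectangular window, not the windowed, FFT-based form real codecs use:

```python
import math

def mdct(x):
    """Forward MDCT: 2N real samples -> N coefficients (lapped DCT-IV form)."""
    N = len(x) // 2
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N))
            for k in range(N)]

def imdct(X):
    """Inverse MDCT: N coefficients -> 2N time-aliased samples."""
    N = len(X)
    return [sum(X[k] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for k in range(N)) / N
            for n in range(2 * N)]

# TDAC demo: each inverse block alone is aliased, but overlap-adding the
# halves of consecutive 50%-overlapped blocks reconstructs the shared half.
a, b, c = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
y1 = imdct(mdct(a + b))                       # block 1 covers a,b
y2 = imdct(mdct(b + c))                       # block 2 covers b,c
recovered = [y1[2 + i] + y2[i] for i in range(2)]
assert all(abs(r - v) < 1e-9 for r, v in zip(recovered, b))
```

Each block of 2N inputs yields only N outputs, yet the 50% overlap lets the aliasing introduced by this 2-to-1 reduction cancel exactly between neighboring blocks.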
https://en.wikipedia.org/wiki/Dual-modulus%20prescaler
A dual-modulus prescaler is an electronic circuit used in high-frequency synthesizer designs to overcome the problem of generating narrowly spaced frequencies that are nevertheless too high to be passed directly through the feedback loop of the system. The modulus of a prescaler is its frequency divisor. A dual-modulus prescaler has two separate frequency divisors, usually M and M+1. The problem A frequency synthesizer produces an output frequency, fo, which, divided by the modulus, N, is the reference frequency, fr: fo / N = fr, or equivalently fo = N × fr. The modulus is generally restricted to integer values, as the comparator will match when the waveform is in phase. Typically, the possible frequency multiples will be the channels for which the radio equipment is designed, so fr will usually be equal to the channel spacing. For example, on narrow-band radiotelephones, a channel spacing of 12.5 kHz is typical. Suppose that the programmable divider, using N, is only able to operate at a maximum clock frequency of 10 MHz, but the output fo is required to be in the hundreds of MHz range. Interposing a fixed prescaler that can operate at this frequency range with a division ratio M of, say, 40 drops the output frequency into the operating range of the programmable divider. However, a factor of 40 has been introduced into the equation, so the output frequency is now: fo = 40 × N × fr. If fr remains at 12.5 kHz, only every 40th channel can be obtained. Alternatively, if fr is reduced by a factor of 40 to compensate, it becomes 312.5 Hz, which is much too low to give good filtering and lock performance characteristics. It also means that programming the divider becomes more complex, as the modulus needs to be verified so that only those values that give true channels are used, not every 1/40th of a channel that is available. The solution The solution is the dual-modulus prescaler. The main divider is split into two parts, the main part N and an additional divider A, which is strictly less than N. Both dividers are clocked from th
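The arithmetic behind the dual-modulus scheme can be sketched as follows. The prescaler divides by M+1 for A cycles of the main divider and by M for the remaining N − A cycles, so the total division ratio is M·N + A, which can take any integer value (provided A < N). The helper name and figures below are hypothetical, chosen to match the article's 12.5 kHz / divide-by-40 example:

```python
def prescaler_settings(f_out_hz, f_ref_hz, M=40):
    """Split a total division ratio for a dual-modulus (M / M+1) prescaler.

    Divide by M+1 for A cycles, then by M for N - A cycles: total = M*N + A.
    Hypothetical helper, illustrating the arithmetic only.
    """
    total = round(f_out_hz / f_ref_hz)   # overall division ratio fo / fr
    N, A = divmod(total, M)
    if A >= N:
        raise ValueError("unreachable ratio: A must be strictly less than N")
    return N, A

# 12.5 kHz channel spacing with a divide-by-40/41 prescaler:
N, A = prescaler_settings(145_012_500, 12_500)   # total ratio 11601
assert (N, A) == (290, 1)
assert 40 * N + A == 11601
```

Because A can step the total ratio by one, every 12.5 kHz channel is reachable even though the programmable dividers only see the signal after the fast prescaler.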
https://en.wikipedia.org/wiki/Selectivity%20%28radio%29
Selectivity is a measure of the ability of a radio receiver to respond only to the radio signal it is tuned to (such as a radio station) and reject other signals nearby in frequency, such as another broadcast on an adjacent channel. Selectivity is usually measured as a ratio in decibels (dB), comparing the signal strength received against that of a similar signal on another frequency. If the signal is at the adjacent channel of the selected signal, this measurement is also known as adjacent-channel rejection ratio (ACRR). Selectivity also provides some immunity to blanketing interference. LC circuits are often used as filters; the Q ("Quality" factor) determines the bandwidth of each LC tuned circuit in the radio. The L/C ratio, in turn, determines their Q and so their selectivity, because the rest of the circuit - the aerial or amplifier feeding the tuned circuit for example - will present resistance. For a series resonant circuit, the higher the inductance and the lower the capacitance, the narrower the filter bandwidth (meaning the reactance of the inductance, L, and the capacitance, C, at resonant frequency will be relatively high compared with the series source/load resistances). For a parallel resonant circuit the opposite applies; small inductances reduce the damping of external circuitry (see electronic oscillator). There are practical limits to the increase in selectivity with changing L/C ratio: tuning capacitors of large values can be difficult to construct; stray capacitance, and capacitance within the transistors or valves of associated circuitry, may become significant (and vary with time); the series resistance internal to the wire in the coil may be significant (for parallel tuned circuits especially); and large inductances imply physically large (and expensive) coils and/or thinner wire (hence worse internal resistance). Therefore other methods may be used to increase selectivity, such as Q multiplier circuits and regenerative receivers.
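The effect of the L/C ratio on selectivity can be illustrated with the standard series-RLC formulas (textbook results, not given in the excerpt): f0 = 1/(2π√(LC)), Q = √(L/C)/R, and 3 dB bandwidth = f0/Q. Raising L (and lowering C to keep the same resonant frequency) raises the reactance relative to the fixed circuit resistance, so Q rises and the bandwidth narrows:

```python
import math

def series_rlc(L, C, R):
    """Resonant frequency, Q, and 3 dB bandwidth of a series RLC circuit."""
    f0 = 1 / (2 * math.pi * math.sqrt(L * C))
    Q = math.sqrt(L / C) / R            # Q = inductive reactance / R at resonance
    return f0, Q, f0 / Q

def cap_for(f0, L):
    """Capacitance that resonates with L at f0 (helper for this demo)."""
    return 1 / ((2 * math.pi * f0) ** 2 * L)

R = 10.0                                # assumed series source/load resistance
for L in (100e-6, 400e-6):              # raise L/C at the same 1 MHz resonance
    C = cap_for(1e6, L)
    f0, Q, bw = series_rlc(L, C, R)
    print(f"L={L * 1e6:.0f} uH  Q={Q:.0f}  bandwidth={bw / 1e3:.2f} kHz")
```

Quadrupling L at constant f0 quadruples Q, cutting the bandwidth to a quarter; the practical limits listed above are what stop this from being taken arbitrarily far.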
https://en.wikipedia.org/wiki/Plasmon
In physics, a plasmon is a quantum of plasma oscillation. Just as light (an optical oscillation) consists of photons, the plasma oscillation consists of plasmons. The plasmon can be considered as a quasiparticle since it arises from the quantization of plasma oscillations, just like phonons are quantizations of mechanical vibrations. Thus, plasmons are collective (a discrete number) oscillations of the free electron gas density. For example, at optical frequencies, plasmons can couple with a photon to create another quasiparticle called a plasmon polariton. Derivation The plasmon was initially proposed in 1952 by David Pines and David Bohm and was shown to arise from a Hamiltonian for the long-range electron-electron correlations. Since plasmons are the quantization of classical plasma oscillations, most of their properties can be derived directly from Maxwell's equations. Explanation Plasmons can be described in the classical picture as an oscillation of electron density with respect to the fixed positive ions in a metal. To visualize a plasma oscillation, imagine a cube of metal placed in an external electric field pointing to the right. Electrons will move to the left side (uncovering positive ions on the right side) until they cancel the field inside the metal. If the electric field is removed, the electrons move to the right, repelled by each other and attracted to the positive ions left bare on the right side. They oscillate back and forth at the plasma frequency until the energy is lost in some kind of resistance or damping. Plasmons are a quantization of this kind of oscillation. Role Plasmons play a huge role in the optical properties of metals and semiconductors. Frequencies of light below the plasma frequency are reflected by a material because the electrons in the material screen the electric field of the light. Light of frequencies above the plasma frequency is transmitted by a material because the electrons in the material cannot respond fast
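The oscillation described above has a characteristic frequency, the plasma frequency, which follows directly from the classical picture (a standard textbook result, not derived in the excerpt): displacing the electron gas by x exposes surface charge whose field pulls the electrons back, giving simple harmonic motion.

```latex
% Displaced electron slab of density n: restoring field E = n e x / \varepsilon_0,
% so Newton's second law gives simple harmonic motion at the plasma frequency.
m\,\ddot{x} = -eE = -\frac{n e^{2}}{\varepsilon_{0}}\,x
\qquad\Longrightarrow\qquad
\omega_{\mathrm{p}} = \sqrt{\frac{n e^{2}}{m\,\varepsilon_{0}}}
```

Here n is the free-electron density, e the elementary charge, and m the electron (effective) mass. This is the threshold the article refers to: light below the plasma frequency is reflected, light above it is transmitted.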
https://en.wikipedia.org/wiki/Communications%20Security%20Establishment
The Communications Security Establishment (CSE; , CST), formerly (from 2008 to 2014) called the Communications Security Establishment Canada (CSEC), is the Government of Canada's national cryptologic agency. It is responsible for foreign signals intelligence (SIGINT) and communications security (COMSEC), protecting federal government electronic information and communication networks, and is the technical authority for cyber security and information assurance. Formally administered under the Department of National Defence (DND), the CSE is now a separate agency under the National Defence portfolio. The CSE is accountable to the Minister of National Defence through its deputy head, the Chief of CSE. The National Defence Minister is in turn accountable to the Cabinet and Parliament. The current Chief of the CSE is Caroline Xavier, who assumed the office on 31 August 2022. In 2015, the agency built a new headquarters and campus encompassing . The facility totals a little over and is adjacent to CSIS. History CSE originates from Canada's joint military and civilian code-breaking and intelligence efforts during the Second World War. Examination Unit The Examination Unit (XU) was established during the Second World War, in June 1941, as a branch of the National Research Council. It was the first civilian office in Canada solely dedicated to decryption of communications signals; until then, SIGINT was entirely within the purview of the Canadian military, and mostly limited to intercepts. In March 1942, XU moved next door to Laurier House in Sandy Hill, Ottawa; this location was chosen because it was felt that it would not arouse the enemy's suspicion. In September, the Department of External Affairs established its Special Intelligence Section at XU with the purpose of reviewing decoded SIGINT with other collateral information to produce intelligence summaries. The original mandate of the Examination Unit was to intercept the communications of Vichy France and Germany. Its ma
https://en.wikipedia.org/wiki/Seismometer
A seismometer is an instrument that responds to ground noises and shaking such as those caused by earthquakes, volcanic eruptions, and explosions. They are usually combined with a timing device and a recording device to form a seismograph. The output of such a device—formerly recorded on paper (see picture) or film, now recorded and processed digitally—is a seismogram. Such data is used to locate and characterize earthquakes, and to study the Earth's internal structure. Basic principles A simple seismometer, sensitive to up-down motions of the Earth, is like a weight hanging from a spring, both suspended from a frame that moves along with the ground. The relative motion between the weight (called the mass) and the frame provides a measurement of the vertical ground motion. A rotating drum is attached to the frame and a pen is attached to the weight, thus recording any ground motion in a seismogram. Any movement from the ground moves the frame. The mass tends not to move because of its inertia, and by measuring the movement between the frame and the mass, the motion of the ground can be determined. Early seismometers used optical levers or mechanical linkages to amplify the small motions involved, recording on soot-covered paper or photographic paper. Modern instruments use electronics. In some systems, the mass is held nearly motionless relative to the frame by an electronic negative feedback loop. The motion of the mass relative to the frame is measured, and the feedback loop applies a magnetic or electrostatic force to keep the mass nearly motionless. The voltage needed to produce this force is the output of the seismometer, which is recorded digitally. In other systems the weight is allowed to move, and its motion induces a voltage in a coil attached to the mass as the coil moves through the magnetic field of a magnet attached to the frame. This design is often used in a geophone, which is used in exploration for oil and gas. Seismic observa
https://en.wikipedia.org/wiki/List%20of%20emulators
This article lists software emulators.

Central processing units

ARM
ARMulator
Aemulor
QEMU

MIPS
SPIM
OVPsim: a MIPS32 emulator that can be used to develop software using virtual platforms, with emulated MIPS32 processors running at up to 500 MIPS under many OSes, including Linux. OVP is used to build emulators of single MIPS processors or multiple processors - homogeneous MP or heterogeneous MP.

x86 architecture
Bochs
DOSBox
FX!32
PCem
QEMU - an open-source emulator that emulates 7 architectures including ARM, x86, MIPS, and others
box86
Rosetta 2: Apple's emulator for macOS, allowing x86_64 applications to run on the arm64 platform

Motorola 680x0
Mac 68K emulator: for PowerPC classic Mac OS

PowerPC
PearPC
Rosetta: Apple's emulator for PowerPC processors, built into Mac OS X
WarpUP: Amiga system for PowerPC expansion cards, built into MorphOS and available for AmigaOS
SheepShaver: emulates the PowerPC processor. Can run Mac OS 7 to Mac OS 9.

Computer system emulators

Full system simulators
Simics
CPU Sim: a Java application that allows the user to design and create a simple architecture and instruction set and then run programs of instructions from the set through simulation
GXemul: framework for full-system computer architecture emulation

Mobile phones and PDAs
Palm OS Emulator
Adobe Device Central
BlueStacks
Windows Subsystem for Android
Blisk (browser)
touchHLE

Multi-system emulators
blueMSX: emulates Z80-based computers and consoles
MAME: emulates multiple arcade machines, video game consoles and computers
DAPHNE is an arcade emulator application that emulates a variety of laserdisc video games with the intent of preserving these games and making the play experience as faithful to the originals as possible. The developer calls DAPHNE the "First Ever Multiple Arcade Laserdisc Emulator" ("FEMALE"). It derives its name from Princess Daphne, the heroine of Dragon's Lair.
HYPSEUS is a modern SDL2 update to the DAPH
https://en.wikipedia.org/wiki/Scheduling%20%28computing%29
In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows. The scheduling activity is carried out by a process called a scheduler. Schedulers are often designed so as to keep all computer resources busy (as in load balancing), allow multiple users to share system resources effectively, or to achieve a target quality-of-service. Scheduling is fundamental to computation itself, and an intrinsic part of the execution model of a computer system; the concept of scheduling makes it possible to have computer multitasking with a single central processing unit (CPU). Goals A scheduler may aim at one or more goals, for example: maximizing throughput (the total amount of work completed per time unit); minimizing wait time (time from work becoming ready until the first point it begins execution); minimizing latency or response time (time from work becoming ready until it is finished, in the case of batch activity, or until the system responds and hands the first output to the user, in the case of interactive activity); and maximizing fairness (equal CPU time to each process, or more generally appropriate times according to the priority and workload of each process). In practice, these goals often conflict (e.g. throughput versus latency), thus a scheduler will implement a suitable compromise. Preference is given to any one of the concerns mentioned above, depending upon the user's needs and objectives. In real-time environments, such as embedded systems for automatic control in industry (for example robotics), the scheduler also must ensure that processes can meet deadlines; this is crucial for keeping the system stable. Scheduled tasks can also be distributed to remote devices across a network and managed through an administrative back end. Types of operating system schedulers The scheduler is an operating system module that selects the
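The trade-off among the goals above can be illustrated with a minimal round-robin simulation. This is a toy model (not any particular operating system's scheduler): every ready task runs for at most one time quantum, then rejoins the back of the queue, which trades some throughput for fairness and bounded response time.

```python
from collections import deque

def round_robin(tasks, quantum):
    """Simulate round-robin scheduling of CPU bursts.

    tasks maps a task name to its total burst time; each task runs for at
    most one quantum before being requeued. Returns {name: completion_time}.
    """
    queue = deque(tasks.items())
    t, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)      # run one time slice at most
        t += run
        if remaining - run == 0:
            done[name] = t                 # task finished at time t
        else:
            queue.append((name, remaining - run))
    return done

# Three CPU-bound tasks sharing one CPU with a 2-unit time slice:
print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))
```

With a very large quantum the same function degenerates into first-come-first-served, illustrating how the quantum is the knob between fairness and context-switch overhead.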
https://en.wikipedia.org/wiki/PCI%20Mezzanine%20Card
A PCI Mezzanine Card or PMC is a printed circuit board assembly manufactured to the IEEE P1386.1 standard. This standard combines the electrical characteristics of the PCI bus with the mechanical dimensions of the Common Mezzanine Card or CMC format (IEEE 1386 standard). A mezzanine connector connects two parallel printed circuit boards in a stacking configuration. Many mezzanine connector styles are commercially available for this purpose, however PMC mezzanine applications usually use the 1.0 mm pitch 64 pin connector described in IEEE 1386. A PMC can have up to four 64-pin bus connectors. The first two ("P1" and "P2") are used for 32 bit PCI signals, a third ("P3") is needed for 64 bit PCI signals. An additional bus connector ("P4") can be used for non-specified I/O signals. In addition, arbitrary connectors can be supplied on the front panel of the chassis or case; also known as a "bezel". The PMC standard defines which connector pins are used for which PCI signals; in addition it defines the optional 64 "P4" connector pins for use of arbitrary I/O signals. It enables manufacturers to offer products that are compatible with the well-established PCI bus, but in a smaller and more robust package than standard PCI plug-in cards. The word mezzanine, derived from the Italian mezzanino and also commonly used to refer to a platform inserted between two floors of a building, describes the way in which a PMC fits between two adjacent host cards in a standard card rack, attached to one of the cards by connectors and mounting pillars. A single PMC measures 74 mm x 149 mm. The standard also defines a double-sized card, but this is rarely used. Carrier cards that accept PMCs are usually made in the Eurocard format, which includes single, double and triple-height VMEbus cards, CompactPCI (cPCI) cards and more recently, VPX cards. One PMC fits on a standard 3U carrier card while 6U models (typical for VMEbus cards) can carry up to two PMCs. PMCs were also used in early
https://en.wikipedia.org/wiki/Crystal%20radio
A crystal radio receiver, also called a crystal set, is a simple radio receiver, popular in the early days of radio. It uses only the power of the received radio signal to produce sound, needing no external power. It is named for its most important component, a crystal detector, originally made from a piece of crystalline mineral such as galena. This component is now called a diode. Crystal radios are the simplest type of radio receiver and can be made with a few inexpensive parts, such as a wire for an antenna, a coil of wire, a capacitor, a crystal detector, and earphones (because a crystal set has insufficient power for a loudspeaker). However they are passive receivers, while other radios use an amplifier powered by current from a battery or wall outlet to make the radio signal louder. Thus, crystal sets produce rather weak sound and must be listened to with sensitive earphones, and can receive stations only within a limited range of the transmitter. The rectifying property of a contact between a mineral and a metal was discovered in 1874 by Karl Ferdinand Braun. Crystals were first used as a detector of radio waves in 1894 by Jagadish Chandra Bose, in his microwave optics experiments. They were first used as a demodulator for radio communication reception in 1902 by G. W. Pickard. Crystal radios were the first widely used type of radio receiver, and the main type used during the wireless telegraphy era. Sold and homemade by the millions, the inexpensive and reliable crystal radio was a major driving force in the introduction of radio to the public, contributing to the development of radio as an entertainment medium with the beginning of radio broadcasting around 1920. Around 1920, crystal sets were superseded by the first amplifying receivers, which used vacuum tubes. With this technological advance, crystal sets became obsolete for commercial use but continued to be built by hobbyists, youth groups, and the Boy Scouts mainly as a way of learning about t
https://en.wikipedia.org/wiki/Numerically%20controlled%20oscillator
A numerically controlled oscillator (NCO) is a digital signal generator which creates a synchronous (i.e., clocked), discrete-time, discrete-valued representation of a waveform, usually sinusoidal. NCOs are often used in conjunction with a digital-to-analog converter (DAC) at the output to create a direct digital synthesizer (DDS). Numerically controlled oscillators offer several advantages over other types of oscillators in terms of agility, accuracy, stability and reliability. NCOs are used in many communications systems including digital up/down converters used in 3G wireless and software radio systems, digital phase-locked loops, radar systems, drivers for optical or acoustic transmissions, and multilevel FSK/PSK modulators/demodulators. Operation An NCO generally consists of two parts: A phase accumulator (PA), which adds to the value held at its output a frequency control value at each clock sample. A phase-to-amplitude converter (PAC), which uses the phase accumulator output word (phase word) usually as an index into a waveform look-up table (LUT) to provide a corresponding amplitude sample. Sometimes interpolation is used with the look-up table to provide better accuracy and reduce phase error noise. Other methods of converting phase to amplitude, including mathematical algorithms such as power series, can be used, particularly in a software NCO. When clocked, the phase accumulator (PA) creates a modulo-2^N sawtooth waveform which is then converted by the phase-to-amplitude converter (PAC) to a sampled sinusoid, where N is the number of bits carried in the phase accumulator. N sets the NCO frequency resolution and is normally much larger than the number of bits defining the memory space of the PAC look-up table. If the PAC capacity is 2^M, the PA output word must be truncated to M bits as shown in Figure 1. However, the truncated bits can be used for interpolation. The truncation of the phase output word does not affect the frequency accuracy but produces
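The two-part structure described above (phase accumulator plus phase-to-amplitude LUT) can be sketched directly in software. The bit widths below are arbitrary choices for illustration; the output frequency is fcw / 2^N times the clock rate:

```python
import math

N_BITS = 24      # phase accumulator width (N in the text)
LUT_BITS = 8     # LUT index width (M in the text), so the phase is truncated
LUT = [math.sin(2 * math.pi * i / (1 << LUT_BITS)) for i in range(1 << LUT_BITS)]

def nco(fcw, n_samples):
    """Numerically controlled oscillator: accumulate the frequency control
    word modulo 2**N_BITS, truncate the phase word to index a sine LUT."""
    phase, out = 0, []
    for _ in range(n_samples):
        out.append(LUT[phase >> (N_BITS - LUT_BITS)])   # keep top M bits
        phase = (phase + fcw) & ((1 << N_BITS) - 1)     # modulo-2^N sawtooth
    return out

# fcw = 2^N / 16 gives exactly one sine period every 16 clock samples:
samples = nco((1 << N_BITS) // 16, 16)
```

Changing `fcw` retunes the oscillator instantly and with 2^-N relative resolution, which is the agility and accuracy advantage the article mentions; the truncation to M bits is what introduces phase-error spurs rather than frequency error.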
https://en.wikipedia.org/wiki/Digitally%20controlled%20oscillator
A digitally controlled oscillator or DCO is used in synthesizers, microcontrollers, and software-defined radios. The name is analogous with "voltage-controlled oscillator." DCOs were designed to overcome the tuning stability limitations of early VCO designs. Confusion over terminology The term "digitally controlled oscillator" has been used to describe the combination of a voltage-controlled oscillator driven by a control signal from a digital-to-analog converter, and is also sometimes used to describe numerically controlled oscillators. This article refers specifically to the DCOs used in many synthesizers of the 1980s . These include the Roland Juno-6, Juno-60, Juno-106, JX-3P, JX-8P, and JX-10, the Elka Synthex, the Korg Poly-61, the Oberheim Matrix-6, some instruments by Akai and Kawai, and the recent Prophet '08 and its successor Rev2 by Dave Smith Instruments. Relation to earlier VCO designs Many voltage-controlled oscillators for electronic music are based on a capacitor charging linearly in an op-amp integrator configuration. When the capacitor charge reaches a certain level, a comparator generates a reset pulse, which discharges the capacitor and the cycle begins again. This produces a rising ramp (or sawtooth) waveform, and this type of oscillator core is known as a ramp core. A common DCO design uses a programmable counter IC such as the 8253 instead of a comparator. This provides stable digital pitch generation by using the leading edge of a square wave to derive a reset pulse to discharge the capacitor in the oscillator's ramp core. Historical context In the early 1980s, many manufacturers were beginning to produce polyphonic synthesizers. The VCO designs of the time still left something to be desired in terms of tuning stability. Whilst this was an issue for monophonic synthesizers, the limited number of oscillators (typically 3 or fewer) meant that keeping instruments tuned was a manageable task, often performed using dedicated front panel
https://en.wikipedia.org/wiki/Surface-mount%20technology
Surface-mount technology (SMT), originally called planar mounting, is a method in which the electrical components are mounted directly onto the surface of a printed circuit board (PCB). An electrical component mounted in this manner is referred to as a surface-mount device (SMD). In industry, this approach has largely replaced the through-hole technology construction method of fitting components, in large part because SMT allows for increased manufacturing automation, which reduces cost and improves quality. It also allows for more components to fit on a given area of substrate. Both technologies can be used on the same board, with the through-hole technology often used for components not suitable for surface mounting such as large transformers and heat-sinked power semiconductors. An SMT component is usually smaller than its through-hole counterpart because it has either smaller leads or no leads at all. It may have short pins or leads of various styles, flat contacts, a matrix of solder balls (BGAs), or terminations on the body of the component. History Surface-mount technology was developed in the 1960s. By 1986, surface-mounted components accounted for at most 10% of the market, but they were rapidly gaining popularity. By the late 1990s, the great majority of high-tech electronic printed circuit assemblies were dominated by surface-mount devices. Much of the pioneering work in this technology was done by IBM. The design approach first demonstrated by IBM in 1960 in a small-scale computer was later applied in the Launch Vehicle Digital Computer used in the Instrument Unit that guided all Saturn IB and Saturn V vehicles. Components were mechanically redesigned to have small metal tabs or end caps that could be directly soldered to the surface of the PCB. Components became much smaller and component placement on both sides of a board became far more common with surface mounting than through-hole mounting, allowing much higher circuit densities and smaller circuit bo
https://en.wikipedia.org/wiki/Butyric%20acid
Butyric acid (from the Greek word for "butter"), also known under the systematic name butanoic acid, is a straight-chain alkyl carboxylic acid with the chemical formula . It is an oily, colorless liquid with an unpleasant odor. Isobutyric acid (2-methylpropanoic acid) is an isomer. Salts and esters of butyric acid are known as butyrates or butanoates. The acid does not occur widely in nature, but its esters are widespread. It is a common industrial chemical and an important component in the mammalian gut. History Butyric acid was first observed in an impure form in 1814 by the French chemist Michel Eugène Chevreul. By 1818, he had purified it sufficiently to characterize it. However, Chevreul did not publish his early research on butyric acid; instead, he deposited his findings in manuscript form with the secretary of the Academy of Sciences in Paris, France. Henri Braconnot, a French chemist, was also researching the composition of butter and was publishing his findings, and this led to disputes about priority. As early as 1815, Chevreul claimed that he had found the substance responsible for the smell of butter. By 1817, he published some of his findings regarding the properties of butyric acid and named it. However, it was not until 1823 that he presented the properties of butyric acid in detail. The name butyric acid comes from the Greek word for "butter", the substance in which it was first found. The Latin name butyrum (or buturum) is similar. Occurrence Triglycerides of butyric acid compose 3–4% of butter. When butter goes rancid, butyric acid is liberated from the glyceride by hydrolysis. It is one of the fatty acid subgroup called short-chain fatty acids. Butyric acid is a typical carboxylic acid that reacts with bases and affects many metals. It is found in animal fat and plant oils, bovine milk, breast milk, butter, parmesan cheese, body odor, vomit, and as a product of anaerobic fermentation (including in the colon). It has a taste somewhat like butter and an unpleas
https://en.wikipedia.org/wiki/Columns%20%28video%20game%29
is a match-three puzzle video game released by Sega in 1990. Designed by Jay Geertsen, it was released by Sega for arcades and then ported to several Sega consoles. The game was subsequently ported to home computer platforms, including the Atari ST. Gameplay Columns was one of the many tile-matching puzzle games to appear after the great success of Tetris in the late 1980s. The area of play is enclosed within a tall, rectangular playing area. Columns of three different symbols (such as differently-colored jewels) appear, one at a time, at the top of the well and fall to the bottom, landing either on the floor or on top of previously-fallen "columns". While a column is falling, the player can move it left and right, and can also cycle the positions of the symbols within it. After a column lands, if three or more of the same symbols are connected in a horizontal, vertical, or diagonal line, those symbols disappear. The pile of columns then settles under gravity. If this resettlement causes three or more other symbols to align, they too disappear and the cycle repeats. Occasionally, a special column with a multicolor Magic Jewel appears. It destroys all the jewels with the same color as the one underneath it. The columns fall at a faster rate as the player progresses. The goal of the game is to play for as long as possible before the well fills up with jewels, which ends the game. Players can score up to 99,999,999 points. Some ports of the game offer alternate game modes as well. "Flash columns" involves mining one's way through a set number of lines to get to a flashing jewel at the bottom. "Doubles" allows two players to work together in the same well. "Time trial" involves racking up as many points as possible within the time limit. Ports Sega ported the arcade game to the Mega Drive/Genesis console. This version of the game was nearly identical to the original arcade game. Columns was the first pack-in game for the Game Gear. This version was slightly different f
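The matching rule described above (three or more equal symbols in a horizontal, vertical, or diagonal line) can be sketched as a simple grid scan. This is a hypothetical helper illustrating the rule, not Sega's implementation:

```python
def find_matches(well):
    """Return the set of (row, col) cells belonging to a line of three or
    more equal, non-empty symbols (horizontal, vertical, or diagonal)."""
    rows, cols = len(well), len(well[0])
    matched = set()
    for r in range(rows):
        for c in range(cols):
            sym = well[r][c]
            if sym is None:
                continue
            # Scan right, down, down-right and down-left from each cell.
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                run = [(r, c)]
                rr, cc = r + dr, c + dc
                while 0 <= rr < rows and 0 <= cc < cols and well[rr][cc] == sym:
                    run.append((rr, cc))
                    rr, cc = rr + dr, cc + dc
                if len(run) >= 3:
                    matched.update(run)
    return matched

well = [
    [None, "R", None],
    ["G",  "R", None],
    ["G",  "R", "B"],
]
assert find_matches(well) == {(0, 1), (1, 1), (2, 1)}   # vertical run of three
```

In the full game loop, the matched cells would be cleared, the remaining jewels would settle under gravity, and the scan would repeat until no matches remain, producing the chain reactions the article describes.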
https://en.wikipedia.org/wiki/Motor%20system
The motor system is the set of central and peripheral structures in the nervous system that support motor functions, i.e. movement. Peripheral structures may include skeletal muscles and neural connections with muscle tissues. Central structures include the cerebral cortex, brainstem, spinal cord, pyramidal system including the upper motor neurons, extrapyramidal system, cerebellum, and the lower motor neurons in the brainstem and the spinal cord. The motor system is a biological system with close ties to the muscular system and the circulatory system. To achieve motor skill, the motor system must accommodate the working state of the muscles, whether hot or cold, stiff or loose, as well as physiological fatigue. Pyramidal motor system The pyramidal motor system, also called the pyramidal tract or the corticospinal tract, starts in the motor center of the cerebral cortex. There are upper and lower motor neurons in the corticospinal tract. The motor impulses originate in the giant pyramidal cells or Betz cells of the motor area; i.e., the precentral gyrus of the cerebral cortex. These are the upper motor neurons (UMN) of the corticospinal tract. The axons of these cells pass in the depth of the cerebral cortex to the corona radiata and then to the internal capsule, passing through the posterior branch of the internal capsule and continuing to descend in the midbrain and the medulla oblongata. In the lower part of the medulla oblongata, 90–95% of these fibers decussate (pass to the opposite side) and descend in the white matter of the lateral funiculus of the spinal cord on the opposite side. The remaining 5–10% pass to the same side. Fibers for the extremities (limbs) pass 100% to the opposite side. The fibers of the corticospinal tract terminate at different levels in the anterior horn of the grey matter of the spinal cord. Here, the lower motor neurons (LMN) of the corticospinal tract are located. Peripheral motor nerves carry the motor impulses from the anterior horn to the volun
https://en.wikipedia.org/wiki/Sneeze
A sneeze (also known as sternutation) is a semi-autonomous, convulsive expulsion of air from the lungs through the nose and mouth, usually caused by foreign particles irritating the nasal mucosa. A sneeze expels air forcibly from the mouth and nose in an explosive, spasmodic involuntary action. This action allows for mucus to escape through the nasal cavity. Sneezing is possibly linked to sudden exposure to bright light, sudden change (drop) in temperature, breeze of cold air, a particularly full stomach, exposure to allergens, or viral infection. Because sneezes can spread disease through infectious aerosol droplets, it is recommended to cover one's mouth and nose with the forearm, the inside of the elbow, a tissue or a handkerchief while sneezing. In addition to covering the mouth, looking down is also recommended in order to change the direction of the droplets spread and avoid high concentration in the human breathing heights. The function of sneezing is to expel mucus containing foreign particles or irritants and cleanse the nasal cavity. During a sneeze, the soft palate and palatine uvula depress while the back of the tongue elevates to partially close the passage to the mouth, creating a venturi (similar to a carburetor) due to Bernoulli's principle so that air ejected from the lungs is accelerated through the mouth and thus creating a low pressure point at the back of the nose. This way air is forced in through the front of the nose and the expelled mucus and contaminants are launched out the mouth. Sneezing with the mouth closed does expel mucus through the nose but is not recommended because it creates a very high pressure in the head and is potentially harmful. Sneezing cannot occur during sleep due to REM atonia – a bodily state where motor neurons are not stimulated and reflex signals are not relayed to the brain. Sufficient external stimulants, however, may cause a person to wake from sleep to sneeze, but any sneezing occurring afterwards would take
https://en.wikipedia.org/wiki/Putrefaction
Putrefaction is the fifth stage of death, following pallor mortis, livor mortis, algor mortis, and rigor mortis. The term refers to the breakdown of the body of an animal post-mortem. In broad terms, it can be viewed as the decomposition of proteins, the eventual breakdown of the cohesiveness between tissues, and the liquefaction of most organs. It is caused by the decomposition of organic matter by bacterial or fungal digestion, which causes the release of gases that infiltrate the body's tissues and leads to the deterioration of the tissues and organs. The approximate time it takes putrefaction to occur depends on various factors. Internal factors that affect the rate of putrefaction include the age at which death occurred, the overall structure and condition of the body, the cause of death, and external injuries arising before or after death. External factors include environmental temperature, moisture and air exposure, clothing, burial factors, and light exposure. Body farms are facilities that study the way various factors affect the putrefaction process. The first signs of putrefaction are a greenish discoloration on the outside of the skin of the abdominal wall, corresponding to where the large intestine begins, as well as under the surface of the liver. Certain substances, such as carbolic acid, arsenic, strychnine, and zinc chloride, can be used to delay putrefaction in various ways based on their chemical makeup. Description In thermodynamic terms, all organic tissues store chemical energy; when this energy is not maintained by the constant biochemical maintenance of the living organism, the tissues begin to break down chemically through reaction with water into amino acids, a process known as hydrolysis. The breakdown of the proteins of a decomposing body is a spontaneous process. Protein hydrolysis is accelerated as the anaerobic bacteria of the digestive tract consume, digest, and excrete the cellular proteins of th
https://en.wikipedia.org/wiki/Reification%20%28computer%20science%29
Reification is the process by which an abstract idea about a computer program is turned into an explicit data model or other object created in a programming language. A computable/addressable object—a resource—is created in a system as a proxy for a non-computable/addressable object. By means of reification, something that was previously implicit, unexpressed, and possibly inexpressible is explicitly formulated and made available to conceptual (logical or computational) manipulation. Informally, reification is often referred to as "making something a first-class citizen" within the scope of a particular system. Some aspect of a system can be reified at language design time, which is related to reflection in programming languages. It can be applied as a stepwise refinement at system design time. Reification is one of the most frequently used techniques of conceptual analysis and knowledge representation. Reflective programming languages In the context of programming languages, reification is the process by which a user program, or any aspect of a programming language that was implicit in the translated program and the run-time system, is expressed in the language itself. This process makes it available to the program, which can inspect all these aspects as ordinary data. In reflective languages, reification data is causally connected to the related reified aspect, such that a modification to one of them affects the other. Therefore, the reification data is always a faithful representation of the related reified aspect. Reification data is often said to be made a first-class object. Reification, at least partially, has been implemented in many languages to date: in early Lisp dialects and in current Prolog dialects, programs have been treated as data, although the causal connection has often been left to the responsibility of the programmer. In Smalltalk-80, the compiler from the source text to bytecode has been part of the run-time system since the very first imp
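A concrete illustration of reification as "making something a first-class citizen": in Python, the three-argument form of `type()` lets a program construct a class (an implicit, compile-time notion in many languages) as ordinary runtime data. This is a minimal sketch, not drawn from any particular reflective system; the `make_record_type` helper is a name invented for the example.

```python
# Reifying a class at runtime: type(name, bases, namespace) builds a
# class object the program can then inspect and manipulate as data.

def make_record_type(name, fields):
    """Construct a simple record class from a name and field names."""
    def __init__(self, **kwargs):
        for f in fields:
            setattr(self, f, kwargs.get(f))
    return type(name, (object,), {"__init__": __init__, "fields": fields})

Point = make_record_type("Point", ["x", "y"])
p = Point(x=1, y=2)
print(Point.__name__, Point.fields, p.x, p.y)  # Point ['x', 'y'] 1 2
```

Because the class itself is an ordinary object, the program can inspect or even rebuild it, which is the "available to computational manipulation" property described above.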
https://en.wikipedia.org/wiki/Motivation
Motivation is an internal state that propels individuals to engage in goal-directed behavior. It is often understood as a force that explains why people or animals initiate, continue, or terminate a certain behavior at a particular time. It is a complex phenomenon and its precise definition is disputed. It contrasts with amotivation, which is a state of apathy or listlessness. Motivation is studied in fields like psychology, motivation science, and philosophy. Motivational states are characterized by their direction, intensity, and persistence. The direction of a motivational state is shaped by the goal it aims to achieve. Intensity is the strength of the state and affects whether the state is translated into action and how much effort is employed. Persistence refers to how long an individual is willing to engage in an activity. Motivation is often divided into two phases: in the first phase, the individual establishes a goal, while in the second phase, they attempt to reach their desired goal or outcome. Many types of motivation are discussed in the academic literature. Intrinsic motivation comes from internal factors like enjoyment and curiosity. It contrasts with extrinsic motivation, which is driven by external factors like obtaining rewards and avoiding punishment. For conscious motivation, the individual is aware of the motive driving the behavior, which is not the case for unconscious motivation. Other types include rational and irrational motivation, biological and cognitive motivation, short-term and long-term motivation, and egoistic and altruistic motivation. Theories of motivation are conceptual frameworks that seek to explain motivational phenomena. Content theories aim to describe which internal factors motivate people and which goals they commonly follow. Examples are the hierarchy of needs, the two-factor theory, and the learned needs theory. They contrast with process theories, which discuss the cognitive, emotional, and decision-making process
https://en.wikipedia.org/wiki/Bertrand%27s%20postulate
In number theory, Bertrand's postulate is a theorem stating that for any integer n > 3, there always exists at least one prime number p with n < p < 2n − 2. A less restrictive formulation is: for every n > 1, there is always at least one prime p such that n < p < 2n. Another formulation, where p_n is the n-th prime, is: p_(n+1) < 2p_n for n ≥ 1. This statement was first conjectured in 1845 by Joseph Bertrand (1822–1900). Bertrand himself verified his statement for all integers in the range 2 ≤ n ≤ 3,000,000. His conjecture was completely proved by Chebyshev (1821–1894) in 1852, and so the postulate is also called the Bertrand–Chebyshev theorem or Chebyshev's theorem. Chebyshev's theorem can also be stated as a relationship with π(x), the prime-counting function (number of primes less than or equal to x): π(x) − π(x/2) ≥ 1, for all x ≥ 2. Prime number theorem The prime number theorem (PNT) implies that the number of primes up to x is roughly x/ln(x), so if we replace x with 2x then we see the number of primes up to 2x is asymptotically twice the number of primes up to x (the terms ln(2x) and ln(x) are asymptotically equivalent). Therefore, the number of primes between n and 2n is roughly n/ln(n) when n is large, and so in particular there are many more primes in this interval than are guaranteed by Bertrand's postulate. So Bertrand's postulate is comparatively weaker than the PNT. But the PNT is a deep theorem, while Bertrand's postulate can be stated more memorably and proved more easily, and also makes precise claims about what happens for small values of n. (In addition, Chebyshev's theorem was proved before the PNT and so has historical interest.) The similar and still unsolved Legendre's conjecture asks whether for every n ≥ 1, there is a prime p such that n² < p < (n + 1)². Again we expect that there will be not just one but many primes between n² and (n + 1)², but in this case the PNT doesn't help: the number of primes up to x² is asymptotic to x²/ln(x²) while the number of primes up to (x + 1)² is asymptotic to (x + 1)²/ln((x + 1)²), which is asymptotic to the estimate
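The weaker formulation (a prime p with n < p < 2n for every n > 1) is easy to check empirically for small n; a short sketch using trial-division primality testing, which is fine at this scale:

```python
# Empirical check of Bertrand's postulate: for every integer n > 1
# there is at least one prime p with n < p < 2n.

def is_prime(k):
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def bertrand_prime(n):
    """Return the smallest prime p with n < p < 2n, or None if absent."""
    for p in range(n + 1, 2 * n):
        if is_prime(p):
            return p
    return None

assert all(bertrand_prime(n) is not None for n in range(2, 1000))
print(bertrand_prime(10))  # 11
```

This is of course only verification for a finite range, not a proof; the theorem guarantees the assertion holds for all n > 1.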
https://en.wikipedia.org/wiki/Hypergraph
In mathematics, a hypergraph is a generalization of a graph in which an edge can join any number of vertices. In contrast, in an ordinary graph, an edge connects exactly two vertices. Formally, a directed hypergraph is a pair (X, E), where X is a set of elements called nodes, vertices, points, or elements and E is a set of pairs of subsets of X. Each of these pairs (D(e), C(e)) is called an edge or hyperedge; the vertex subset D(e) is known as its tail or domain, and C(e) as its head or codomain. The order of a hypergraph is the number of vertices in X. The size of the hypergraph is the number of edges in E. The order of an edge e = (D(e), C(e)) in a directed hypergraph is (|D(e)|, |C(e)|): that is, the number of vertices in its tail followed by the number of vertices in its head. The definition above generalizes from a directed graph to a directed hypergraph by defining the head or tail of each edge as a set of vertices (C(e) or D(e)) rather than as a single vertex. A graph is then the special case where each of these sets contains only one element. Hence any standard graph-theoretic concept that is independent of the edge orders will generalize to hypergraph theory. Under one definition, an undirected hypergraph is a directed hypergraph which has a symmetric edge set: if (D, C) ∈ E then (C, D) ∈ E. For notational simplicity one can remove the "duplicate" hyperedges, since the modifier "undirected" is precisely informing us that they exist: if (D, C) ∈ E then (C, D) is implicitly in E. While graph edges connect only 2 nodes, hyperedges connect an arbitrary number of nodes. However, it is often desirable to study hypergraphs where all hyperedges have the same cardinality; a k-uniform hypergraph is a hypergraph such that all its hyperedges have size k. (In other words, one such hypergraph is a collection of sets, each such set a hyperedge connecting k nodes.) So a 2-uniform hypergraph is a graph, a 3-uniform hypergraph is a collection of unordered triples, and so on. An undirected hypergraph is also called a set system or a family of sets drawn from the
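The tail/head definition translates directly into a data structure; a minimal sketch (the `Hypergraph` class and its method names are invented for illustration) in which each hyperedge is a (tail, head) pair of vertex sets:

```python
# A minimal directed hypergraph: each hyperedge is a (tail, head) pair
# of frozensets, matching the (D(e), C(e)) definition in the text.

class Hypergraph:
    def __init__(self):
        self.vertices = set()
        self.edges = []

    def add_edge(self, tail, head):
        tail, head = frozenset(tail), frozenset(head)
        self.vertices |= tail | head
        self.edges.append((tail, head))

    def order(self):          # number of vertices in X
        return len(self.vertices)

    def size(self):           # number of hyperedges in E
        return len(self.edges)

    def edge_order(self, i):  # (|D(e)|, |C(e)|) for edge i
        t, h = self.edges[i]
        return (len(t), len(h))

h = Hypergraph()
h.add_edge({1, 2}, {3})       # a hyperedge with a 2-vertex tail
h.add_edge({3}, {1, 4, 5})
print(h.order(), h.size(), h.edge_order(0))  # 5 2 (2, 1)
```

An ordinary directed graph is the special case where every `edge_order` is (1, 1).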
https://en.wikipedia.org/wiki/Allele%20frequency
Allele frequency, or gene frequency, is the relative frequency of an allele (variant of a gene) at a particular locus in a population, expressed as a fraction or percentage. Specifically, it is the fraction of all chromosomes in the population that carry that allele over the total population or sample size. Microevolution is the change in allele frequencies that occurs over time within a population. Given the following: A particular locus on a chromosome and a given allele at that locus A population of N individuals with ploidy n, i.e. an individual carries n copies of each chromosome in their somatic cells (e.g. two chromosomes in the cells of diploid species) The allele exists in i chromosomes in the population then the allele frequency is the fraction of all the occurrences i of that allele over the total number of chromosome copies across the population, i/(nN). The allele frequency is distinct from the genotype frequency, although they are related, and allele frequencies can be calculated from genotype frequencies. In population genetics, allele frequencies are used to describe the amount of variation at a particular locus or across multiple loci. When considering the ensemble of allele frequencies for many distinct loci, their distribution is called the allele frequency spectrum. Calculation of allele frequencies from genotype frequencies The actual frequency calculations depend on the ploidy of the species for autosomal genes. Monoploids The frequency (p) of an allele A is the fraction of the number of copies (i) of the A allele over the population or sample size (N), so p = i/N. Diploids If f(AA), f(AB), and f(BB) are the frequencies of the three genotypes at a locus with two alleles, then the frequency p of the A-allele and the frequency q of the B-allele in the population are obtained by counting alleles: p = f(AA) + ½ f(AB) and q = f(BB) + ½ f(AB). Because p and q are the frequencies of the only two alleles present at that locus, they must sum to 1. To check this: p + q = f(AA) + f(AB) + f(BB) = 1 and q = 1 − p. If there are more than two different alle
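The diploid allele-counting rule above can be sketched in a few lines; the genotype counts here are made-up numbers chosen so the arithmetic is easy to follow:

```python
# Allele frequencies from diploid genotype counts: each AA individual
# carries two A alleles, each AB carries one, and a population of N
# diploid individuals holds 2N allele copies in total.

def allele_freqs(n_AA, n_AB, n_BB):
    N = n_AA + n_AB + n_BB            # individuals
    copies = 2 * N                    # chromosome copies (ploidy 2)
    p = (2 * n_AA + n_AB) / copies    # frequency of allele A
    q = (2 * n_BB + n_AB) / copies    # frequency of allele B
    return p, q

p, q = allele_freqs(n_AA=36, n_AB=48, n_BB=16)
print(p, q)                # 0.6 0.4
assert abs(p + q - 1) < 1e-12  # the two frequencies must sum to 1
```

Equivalently, dividing each count by N first gives f(AA) = 0.36, f(AB) = 0.48, f(BB) = 0.16, and p = 0.36 + 0.48/2 = 0.6, matching the formula in the text.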
https://en.wikipedia.org/wiki/Noble%20rot
Noble rot (; ; ; ) is the beneficial form of a grey fungus, Botrytis cinerea, affecting wine grapes. Infestation by Botrytis requires moist conditions, but if the weather stays wet, the damaging form, "grey rot", can destroy crops of grapes. Grapes typically become infected with Botrytis when they are ripe. If they are then exposed to drier conditions and become partially raisined, this form of infection is known as noble rot. Grapes picked at a certain point during infestation can produce particularly fine and concentrated sweet wine. Wines produced by this method are known as botrytized wines. Origins According to Hungarian legend, the first aszú (a wine using botrytised grapes) was made by Laczkó Máté Szepsi in 1630. However, mention of wine made from botrytised grapes appears before this in the Nomenklatura of Fabricius Balázs Sziksai, which was completed in 1576. A recently discovered inventory of aszú predates this reference by five years. When vineyard classification began in 1730 in the Tokaj region, one of the gradings given to the various terroirs centered on their potential to develop Botrytis cinerea. There is a popular story that the practice originated independently in Germany in 1775, where the Riesling producers at Schloss Johannisberg (Geisenheim, in the Rheingau region) traditionally awaited the say-so of the estate owner, Heinrich von Bibra, Bishop of Fulda, before cutting their grapes. In this year (so the legend goes), the abbey messenger was robbed en route to delivering the order to harvest and the cutting was delayed for three weeks, time enough for the botrytis to take hold. The grapes were presumed worthless and given to local peasants, who produced a surprisingly good, sweet wine which subsequently became known as Spätlese, or late harvest wine. In the following few years, several different classes of increasing must weight were introduced, and the original Spätlese was further elaborated, first into Auslese in 1787 and later Eiswein in
https://en.wikipedia.org/wiki/Botrytis%20cinerea
Botrytis cinerea is a necrotrophic fungus that affects many plant species, although its most notable hosts may be wine grapes. In viticulture, it is commonly known as "botrytis bunch rot"; in horticulture, it is usually called "grey mould" or "gray mold". The fungus gives rise to two different kinds of infections on grapes. The first, grey rot, is the result of consistently wet or humid conditions, and typically results in the loss of the affected bunches. The second, noble rot, occurs when drier conditions follow wetter, and can result in distinctive sweet dessert wines, such as Sauternes, the Aszú of Tokaji, or Grasă de Cotnari. The species name Botrytis cinerea is derived from the Latin for "grapes like ashes"; although poetic, the "grapes" refers to the bunching of the fungal spores on their conidiophores, and "ashes" just refers to the greyish colour of the spores en masse. The fungus is usually referred to by its anamorph (asexual form) name, because the sexual phase is rarely observed. The teleomorph (sexual form) is an ascomycete, Botryotinia fuckeliana, also known as Botryotinia cinerea. Etymology "Botrytis" is derived from the Ancient Greek botrys (βότρυς) meaning "grapes", combined with the Neo-Latin suffix -itis for disease. Botryotinia fuckeliana was named by mycologist Heinrich Anton de Bary in honor of another mycologist, Karl Wilhelm Gottlieb Leopold Fuckel. Synonyms for the sexual stage are: Botrytis fuckeliana N.F. Buchw., (1949) Botrytis gemella (Bonord.) Sacc., (1881) Botrytis grisea (Schwein.) Fr., (1832) Botrytis vulgaris (Pers.) Fr., (1832) Haplaria grisea Link, (1809) fuckeliana de Bary Phymatotrichum gemellum Bonord., (1851) Polyactis vulgaris Pers., (1809) Sclerotinia fuckeliana (de Bary) Fuckel, (1870) Hosts and symptoms Hosts The disease, gray mold, affects more than 200 dicotyledonous plant species and a few monocotyledonous plants found in temperate and subtropical regions, and potentially over a thousan
https://en.wikipedia.org/wiki/Maximum%20life%20span
Maximum life span (or, for humans, maximum reported age at death) is a measure of the maximum amount of time one or more members of a population have been observed to survive between birth and death. The term can also denote an estimate of the maximum amount of time that a member of a given species could survive between birth and death, provided circumstances that are optimal to that member's longevity. Most living species have an upper limit on the number of times somatic cells not expressing telomerase can divide. This is called the Hayflick limit, although this number of cell divisions does not strictly control lifespan. Definition In animal studies, maximum life span is often taken to be the mean life span of the most long-lived 10% of a given cohort. By another definition, however, maximum life span corresponds to the age at which the oldest known member of a species or experimental group has died. Calculation of the maximum life span in the latter sense depends upon the initial sample size. Maximum life span contrasts with mean life span (average life span, life expectancy) and longevity. Mean life span varies with susceptibility to disease, accident, suicide, and homicide, whereas maximum life span is determined by "rate of aging". Longevity refers only to the characteristics of the especially long-lived members of a population, such as infirmities as they age or compression of morbidity, and not the specific life span of an individual. In humans Demographic evidence The longest-living person whose dates of birth and death were verified according to the modern norms of Guinness World Records and the Gerontology Research Group was Jeanne Calment (1875–1997), a French woman who is verified to have lived to 122. The oldest male lifespan has only been verified as 116, by the Japanese man Jiroemon Kimura. Reduction of infant mortality has accounted for most of the increase in average life span, but since the 1960s mortality rates among those over 80 years hav
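The animal-study definition (mean life span of the longest-lived 10% of a cohort) is simple to compute; a sketch with a hypothetical, made-up cohort of ages at death:

```python
# Maximum life span under the "top 10% of the cohort" definition:
# sort ages, take the longest-lived decile, and average it.

def max_life_span(ages, top_fraction=0.10):
    ranked = sorted(ages, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))  # at least one member
    top = ranked[:k]
    return sum(top) / len(top)

cohort = [71, 74, 78, 80, 82, 83, 85, 88, 90, 101]  # hypothetical ages
print(max_life_span(cohort))  # 101.0 (the single longest-lived member)
```

With the alternative definition in the text (age of the oldest known member), the answer for the same cohort would simply be `max(cohort)`; the two definitions coincide here only because the cohort is so small that its top decile is one individual.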
https://en.wikipedia.org/wiki/Method%20%28computer%20programming%29
A method in object-oriented programming (OOP) is a procedure associated with an object, and generally also a message. An object consists of state data and behavior; these compose an interface, which specifies how the object may be used. A method is a behavior of an object, parametrized by a user. Data is represented as properties of the object, and behaviors are represented as methods. For example, a Window object could have methods such as open and close, while its state (whether it is open or closed at any given point in time) would be a property. In class-based programming, methods are defined within a class, and objects are instances of a given class. One of the most important capabilities that a method provides is method overriding: the same name (e.g., area) can be used for multiple different kinds of classes. This allows the sending objects to invoke behaviors and to delegate the implementation of those behaviors to the receiving object. A method in Java programming sets the behavior of a class object. For example, an object can send an area message to another object and the appropriate formula is invoked whether the receiving object is a rectangle, circle, triangle, etc. Methods also provide the interface that other classes use to access and modify the properties of an object; this is known as encapsulation. Encapsulation and overriding are the two primary distinguishing features between methods and procedure calls. Overriding and overloading Method overriding and overloading are two of the most significant ways that a method differs from a conventional procedure or function call. Overriding refers to a subclass redefining the implementation of a method of its superclass. For example, findArea may be a method defined on a shape class; triangle, circle, etc. would each define the appropriate formula to calculate their area. The idea is to look at objects as "black boxes" so that changes to the internals of the object can be made with minimal impact on the other o
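The overriding idea described above, one `area` message whose implementation is chosen by the receiving object, can be sketched briefly (shown here in Python rather than Java; class names are illustrative):

```python
# Overriding: the same message name, area, with a per-class
# implementation; the sender need not know which shape it holds.
import math

class Shape:
    def area(self):
        raise NotImplementedError

class Rectangle(Shape):
    def __init__(self, w, h):
        self.w, self.h = w, h
    def area(self):               # overrides Shape.area
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):               # same message, different formula
        return math.pi * self.r ** 2

shapes = [Rectangle(3, 4), Circle(1)]
print([round(s.area(), 2) for s in shapes])  # [12, 3.14]
```

The sending code treats every element of `shapes` as a black box: it invokes the behavior and delegates the choice of formula to the receiver, exactly the delegation the text describes.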
https://en.wikipedia.org/wiki/Mathematics%20of%20paper%20folding
The discipline of origami or paper folding has received a considerable amount of mathematical study. Fields of interest include a given paper model's flat-foldability (whether the model can be flattened without damaging it), and the use of paper folds to solve mathematical equations of degree up to cubic. Computational origami is a recent branch of computer science that is concerned with studying algorithms that solve paper-folding problems. The field of computational origami has also grown significantly since its inception in the 1990s with Robert Lang's TreeMaker algorithm to assist in the precise folding of bases. Computational origami results either address origami design or origami foldability. In origami design problems, the goal is to design an object that can be folded out of paper given a specific target configuration. In origami foldability problems, the goal is to fold something using the creases of an initial configuration. Results in origami design problems have been more accessible than in origami foldability problems. History In 1893, the Indian civil servant T. Sundara Row published Geometric Exercises in Paper Folding, which used paper folding to demonstrate proofs of geometrical constructions. This work was inspired by the use of origami in the kindergarten system. Row demonstrated an approximate trisection of angles and implied that construction of a cube root was impossible. In 1922, Harry Houdini published "Houdini's Paper Magic", which described origami techniques that drew informally from mathematical approaches that were later formalized. In 1936, Margherita P. Beloch showed that use of the 'Beloch fold', later used in the sixth of the Huzita–Hatori axioms, allowed the general cubic equation to be solved using origami. In 1949, R. C. Yeates' book "Geometric Methods" described three allowed constructions corresponding to the first, second, and fifth of the Huzita–Hatori axioms. The Yoshizawa–Randlett system of instruction by diagram was introduced in 1961. I
https://en.wikipedia.org/wiki/Huzita%E2%80%93Hatori%20axioms
The Huzita–Justin axioms or Huzita–Hatori axioms are a set of rules related to the mathematical principles of origami, describing the operations that can be made when folding a piece of paper. The axioms assume that the operations are completed on a plane (i.e. a perfect piece of paper), and that all folds are linear. These are not a minimal set of axioms but rather the complete set of possible single folds. The first seven axioms were first discovered by French folder and mathematician Jacques Justin in 1986. Axioms 1 through 6 were rediscovered by Japanese-Italian mathematician Humiaki Huzita and reported at the First International Conference on Origami in Education and Therapy in 1991. Axioms 1 through 5 were rediscovered by Auckly and Cleveland in 1995. Axiom 7 was rediscovered by Koshiro Hatori in 2001; Robert J. Lang also found axiom 7. The seven axioms The first 6 axioms are known as Justin's axioms or Huzita's axioms. Axiom 7 was discovered by Jacques Justin. Koshiro Hatori and Robert J. Lang also found axiom 7. The axioms are as follows: Given two distinct points p1 and p2, there is a unique fold that passes through both of them. Given two distinct points p1 and p2, there is a unique fold that places p1 onto p2. Given two lines l1 and l2, there is a fold that places l1 onto l2. Given a point p1 and a line l1, there is a unique fold perpendicular to l1 that passes through point p1. Given two points p1 and p2 and a line l1, there is a fold that places p1 onto l1 and passes through p2. Given two points p1 and p2 and two lines l1 and l2, there is a fold that places p1 onto l1 and p2 onto l2. Given one point p and two lines l1 and l2, there is a fold that places p onto l1 and is perpendicular to l2. Axiom 5 may have 0, 1, or 2 solutions, while Axiom 6 may have 0, 1, 2, or 3 solutions. In this way, the resulting geometries of origami are stronger than the geometries of compass and straightedge, where the maximum number of solutions an axiom has is 2. Th
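Axiom 2 has a simple analytic form: the unique fold placing p1 onto p2 is the perpendicular bisector of the segment p1p2. A short sketch (function names are invented for the example) computes that fold line and verifies it by reflection:

```python
# Axiom 2: the fold placing p1 onto p2 is the perpendicular bisector
# of segment p1p2. Reflecting p1 across the fold must yield p2.

def axiom2_fold(p1, p2):
    """Return the fold line as (a, b, c) with a*x + b*y = c."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = x2 - x1, y2 - y1                  # normal points along p1->p2
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2    # midpoint lies on the fold
    return a, b, a * mx + b * my

def reflect(p, line):
    """Reflect point p across the line a*x + b*y = c."""
    a, b, c = line
    x, y = p
    d = (a * x + b * y - c) / (a * a + b * b)
    return (x - 2 * a * d, y - 2 * b * d)

fold = axiom2_fold((0, 0), (4, 2))
print(reflect((0, 0), fold))  # (4.0, 2.0): p1 lands exactly on p2
```

Axiom 1 (the fold through both points) is the line p1p2 itself; axioms 5–7 are where origami exceeds compass-and-straightedge, since axiom 6 amounts to solving a cubic.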
https://en.wikipedia.org/wiki/Law%20of%20Demeter
The Law of Demeter (LoD) or principle of least knowledge is a design guideline for developing software, particularly object-oriented programs. In its general form, the LoD is a specific case of loose coupling. The guideline was proposed by Ian Holland at Northeastern University towards the end of 1987, and the following three recommendations serve as a succinct summary: Each unit should have only limited knowledge about other units: only units "closely" related to the current unit. Each unit should only talk to its friends; don't talk to strangers. Only talk to your immediate friends. The fundamental notion is that a given object should assume as little as possible about the structure or properties of anything else (including its subcomponents), in accordance with the principle of "information hiding". It may be viewed as a corollary to the principle of least privilege, which dictates that a module possess only the information and resources necessary for its legitimate purpose. It is so named for its origin in the Demeter Project, an adaptive programming and aspect-oriented programming effort. The project was named in honor of Demeter, “distribution-mother” and the Greek goddess of agriculture, to signify a bottom-up philosophy of programming which is also embodied in the law itself. History The law dates back to 1987 when it was first proposed by Ian Holland, who was working on the Demeter Project. The Demeter Project was the birthplace of many principles of aspect-oriented programming (AOP). A quote in one of the remainders of the project seems to clarify the origins of the name: In object-oriented programming An object a can request a service (call a method) of an object instance b, but object a should not "reach through" object b to access yet another object, c, to request its services. Doing so would mean that object a implicitly requires greater knowledge of object b's internal structure. Instead, b's interface should be modified if necessary so it
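The a/b/c rule can be made concrete with the classic paperboy example, a common illustration of the LoD (the `Customer` and `Wallet` classes here are hypothetical, not from the Demeter Project): instead of reaching through the customer to the wallet, the caller asks the customer to pay.

```python
# Law of Demeter sketch: only talk to your immediate friends.

class Wallet:
    def __init__(self, cash):
        self.cash = cash

class Customer:
    def __init__(self, cash):
        self._wallet = Wallet(cash)   # internal structure, not exposed

    def pay(self, amount):            # interface added on b so a need
        if self._wallet.cash < amount:  # not know about c (the wallet)
            raise ValueError("insufficient funds")
        self._wallet.cash -= amount
        return amount

# Violation would be: customer._wallet.cash -= 2  (a reaches through b to c)
# Compliant: a asks its immediate friend b for the service.
customer = Customer(cash=10)
collected = customer.pay(2)
print(collected)  # 2
```

If `Customer` later stores money in a bank account instead of a wallet, only `Customer.pay` changes; callers that obeyed the guideline are untouched, which is exactly the reduced coupling the law aims for.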
https://en.wikipedia.org/wiki/Degeneracy%20%28mathematics%29
In mathematics, a degenerate case is a limiting case of a class of objects which appears to be qualitatively different from (and usually simpler than) the rest of the class, and the term degeneracy is the condition of being a degenerate case. The definitions of many classes of composite or structured objects often implicitly include inequalities. For example, the angles and the side lengths of a triangle are supposed to be positive. The limiting cases, where one or several of these inequalities become equalities, are degeneracies. In the case of triangles, one has a degenerate triangle if at least one side length or angle is zero. Equivalently, it becomes a "line segment". Often, the degenerate cases are the exceptional cases where changes to the usual dimension or the cardinality of the object (or of some part of it) occur. For example, a triangle is an object of dimension two, and a degenerate triangle is contained in a line, which makes its dimension one. This is similar to the case of a circle, whose dimension shrinks from two to zero as it degenerates into a point. As another example, the solution set of a system of equations that depends on parameters generally has a fixed cardinality and dimension, but cardinality and/or dimension may be different for some exceptional values, called degenerate cases. In such a degenerate case, the solution set is said to be degenerate. For some classes of composite objects, the degenerate cases depend on the properties that are specifically studied. In particular, the class of objects may often be defined or characterized by systems of equations. In most scenarios, a given class of objects may be defined by several different systems of equations, and these different systems of equations may lead to different degenerate cases, while characterizing the same non-degenerate cases. This may be the reason for which there is no general definition of degeneracy, despite the fact that the concept is widely used and defined (if need
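The degenerate-triangle case, three vertices contained in a line, has a direct computational test: the triangle is degenerate exactly when its (signed) area is zero. A minimal sketch, with a tolerance for floating-point input:

```python
# A triangle degenerates to a line segment exactly when its three
# vertices are collinear, i.e. its signed area is zero.

def signed_area(p, q, r):
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    return 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

def is_degenerate(p, q, r, eps=1e-12):
    return abs(signed_area(p, q, r)) < eps

print(is_degenerate((0, 0), (1, 1), (2, 2)))  # True: contained in a line
print(is_degenerate((0, 0), (1, 0), (0, 1)))  # False: a proper triangle
```

This mirrors the dimension drop described above: the non-degenerate case has dimension two (positive area), while the degenerate case is contained in a one-dimensional line.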
https://en.wikipedia.org/wiki/Cubic%20foot
The cubic foot (symbol ft³ or cu ft) is an imperial and US customary (non-metric) unit of volume, used in the United States and the United Kingdom. It is defined as the volume of a cube with sides of one foot (0.3048 m) in length. Its volume is 28.3168 litres (about 1/35 of a cubic metre). Conversions Symbols and abbreviations The IEEE symbol for the cubic foot is ft³. The following abbreviations are used: cubic feet, cubic foot, cubic ft, cu feet, cu foot, cu ft, cu.ft, cuft, cb ft, cb.ft, cbft, cbf, feet³, foot³, ft³. Larger multiples are in common usage in commerce and industry in the United States: CCF or HCF: Centum (Latin hundred) cubic feet; i.e., 100 ft³. Used in the billing of natural gas and water delivered to households. MCF: Mille (Latin thousand) cubic feet; i.e., 1,000 ft³. MMCF: Mille mille cubic feet; i.e., 1,000,000 ft³. MMCFD: MMCF per day; i.e., 1,000,000 ft³/d. Used in the oil and gas industry. BCF or TMC: Billion or thousand million cubic feet; i.e., 1,000,000,000 ft³. TMC is usually used for referring to storage capacity and actual storage volume of storage dams. TCF: Trillion cubic feet; i.e., 1,000,000,000,000 ft³. Used in the oil and gas industry. Cubic foot per second and related flow rates The IEEE symbol for the cubic foot per second is ft³/s. The following other abbreviations are also sometimes used: ft3/sec cu ft/s cfs or CFS cusec second-feet The flow or discharge of rivers, i.e., the volume of water passing a location per unit of time, is commonly expressed in units of cubic feet per second or cubic metres per second. Cusec is a unit of flow rate, used mostly in the United States in the context of water flow, particularly of rivers and canals. Conversions: 1 ft³/s = 0.028316846592 m³/s ≈ 28.317 litres per second ≈ 1.699 m³/min. Cubic foot per minute The IEEE symbol for the cubic foot per minute is ft³/min. The following abbreviations are used: cu ft/min cfm or CFM cfpm or CFPM Cubic feet per minute is used to measure the amount of air that is being delivered, and is a common metric used for carburettors, pneumatic tools, and air-compressor system
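The metric equivalents follow directly from the definition (a cube 1 ft = 0.3048 m on a side); a short sketch deriving the conversion factor and the CCF/MCF multiples (the helper function name is invented for the example):

```python
# The cubic foot from its definition: (0.3048 m)^3.

FOOT_M = 0.3048                      # metres per foot, exact by definition
CUBIC_FOOT_M3 = FOOT_M ** 3          # 0.028316846592 m^3, exact

CCF = 100                            # centum (hundred) cubic feet
MCF = 1_000                          # mille (thousand) cubic feet
MMCF = 1_000_000                     # mille mille cubic feet

def cfm_to_m3_per_hour(cfm):
    """Convert a flow in cubic feet per minute to cubic metres per hour."""
    return cfm * CUBIC_FOOT_M3 * 60

print(round(CUBIC_FOOT_M3, 6))          # 0.028317
print(round(cfm_to_m3_per_hour(100), 1))  # 169.9
```

The same factor gives the flow-rate conversion quoted above: 1 ft³/s is 0.028316846592 m³/s.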
https://en.wikipedia.org/wiki/Blight
Blight refers to a specific symptom affecting plants in response to infection by a pathogenic organism. Description Blight is a rapid and complete chlorosis, browning, then death of plant tissues such as leaves, branches, twigs, or floral organs. Accordingly, many diseases that primarily exhibit this symptom are called blights. Several notable examples are: Late blight of potato, caused by the water mold Phytophthora infestans (Mont.) de Bary, the disease which led to the Great Irish Famine Southern corn leaf blight, caused by the fungus Cochliobolus heterostrophus (Drechs.) Drechs, anamorph Bipolaris maydis (Nisikado & Miyake) Shoemaker, incited a severe loss of corn in the United States in 1970. Chestnut blight, caused by the fungus Cryphonectria parasitica (Murrill) Barr, has nearly completely eradicated mature American chestnuts in North America. Citrus blight, caused by an unknown agent, infects all citrus scions. Fire blight of pome fruits, caused by the bacterium Erwinia amylovora (Burrill) Winslow et al., is the most severe disease of pear and also is found in apple and raspberry, among others. Bacterial leaf blight of rice, caused by the bacterium Xanthomonas oryzae (Uyeda & Ishiyama) Dowson. Bacterial seedling blight of rice (Oryza sativa), caused by pathogen Burkholderia plantarii Early blight of potato and tomato, caused by species of the ubiquitous fungal genus Alternaria Leaf blight of the grasses e.g. Ascochyta species and Alternaria triticina that causes blight in wheat Bur oak blight, caused by the fungal pathogen Tubakia iowensis. South American leaf blight, caused by the ascomycete Pseudocercospora ulei, also called Microcyclus ulei, ended the cultivation of the rubber tree (Hevea brasiliensis) in South America. On leaf tissue, symptoms of blight are the initial appearance of lesions which rapidly engulf surrounding tissue. However, leaf spots may, in advanced stages, expand to kill entire areas of leaf tissue and thus exhibit blight
https://en.wikipedia.org/wiki/Round-robin%20scheduling
Round-robin (RR) is one of the algorithms employed by process and network schedulers in computing. As the term is generally used, time slices (also known as time quanta) are assigned to each process in equal portions and in circular order, handling all processes without priority (also known as cyclic executive). Round-robin scheduling is simple, easy to implement, and starvation-free. Round-robin scheduling can be applied to other scheduling problems, such as data packet scheduling in computer networks. It is an operating system concept. The name of the algorithm comes from the round-robin principle known from other fields, where each person takes an equal share of something in turn. Process scheduling To schedule processes fairly, a round-robin scheduler generally employs time-sharing, giving each job a time slot or quantum (its allowance of CPU time), and interrupting the job if it is not completed by then. The job is resumed next time a time slot is assigned to that process. If the process terminates or changes its state to waiting during its attributed time quantum, the scheduler selects the first process in the ready queue to execute. In the absence of time-sharing, or if the quanta were large relative to the sizes of the jobs, a process that produced large jobs would be favored over other processes. Round-robin algorithm is a pre-emptive algorithm as the scheduler forces the process out of the CPU once the time quota expires. For example, if the time slot is 100 milliseconds, and job1 takes a total time of 250 ms to complete, the round-robin scheduler will suspend the job after 100 ms and give other jobs their time on the CPU. Once the other jobs have had their equal share (100 ms each), job1 will get another allocation of CPU time and the cycle will repeat. This process continues until the job finishes and needs no more time on the CPU. Job1 = Total time to complete 250 ms (quantum 100 ms). First allocation = 100 ms. Second allocation = 100 ms. Th
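The job1 example above can be reproduced with a small simulation. This is an illustrative sketch (the function name and data layout are this example's own), using a FIFO ready queue and a fixed quantum:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Simulate round-robin scheduling.

    `jobs` maps job names to their total CPU time; the return value is the
    sequence of (job, slice) allocations until every job has finished.
    """
    queue = deque(jobs.items())  # FIFO ready queue
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)  # pre-empted when the quantum expires
        timeline.append((name, run))
        if remaining > run:  # not finished yet: requeue at the back
            queue.append((name, remaining - run))
    return timeline

# The article's example: a 250 ms job with a 100 ms quantum is suspended
# twice and receives 100 + 100 + 50 ms of CPU time.
```

With a second 100 ms job, `round_robin({"job1": 250, "job2": 100}, 100)` interleaves the allocations as `[("job1", 100), ("job2", 100), ("job1", 100), ("job1", 50)]`, showing both pre-emption and the equal sharing described above.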
https://en.wikipedia.org/wiki/Wireless%20transaction%20protocol
Wireless transaction protocol (WTP) is a standard used in mobile telephony. It is a layer of the Wireless Application Protocol (WAP) that is intended to bring Internet access to mobile phones. WTP provides functions similar to TCP, except that WTP requires a reduced amount of information for each transaction (e.g. it does not include a provision for rearranging out-of-order packets). WTP runs on top of UDP and performs many of the same tasks as TCP, but in a way optimized for wireless devices, which saves processing and memory cost compared to TCP. It supports three types of transactions: Unreliable One-Way Request Reliable One-Way Request Reliable Two-Way Request External links Open Mobile Alliance References Open Mobile Alliance standards Mobile telecommunications standards Transport layer protocols Wireless Application Protocol
https://en.wikipedia.org/wiki/Umbilical%20cord
In placental mammals, the umbilical cord (also called the navel string, birth cord or funiculus umbilicalis) is a conduit between the developing embryo or fetus and the placenta. During prenatal development, the umbilical cord is physiologically and genetically part of the fetus and (in humans) normally contains two arteries (the umbilical arteries) and one vein (the umbilical vein), buried within Wharton's jelly. The umbilical vein supplies the fetus with oxygenated, nutrient-rich blood from the placenta. Conversely, the fetal heart pumps low-oxygen, nutrient-depleted blood through the umbilical arteries back to the placenta. Structure and development The umbilical cord develops from and contains remnants of the yolk sac and allantois. It forms by the fifth week of development, replacing the yolk sac as the source of nutrients for the embryo. The cord is not directly connected to the mother's circulatory system, but instead joins the placenta, which transfers materials to and from the maternal blood without allowing direct mixing. The length of the umbilical cord is approximately equal to the crown-rump length of the fetus throughout pregnancy. The umbilical cord in a full term neonate is usually about 50 centimeters (20 in) long and about 2 centimeters (0.75 in) in diameter. This diameter decreases rapidly within the placenta. The fully patent umbilical artery has two main layers: an outer layer consisting of circularly arranged smooth muscle cells and an inner layer which shows rather irregularly and loosely arranged cells embedded in abundant ground substance staining metachromatic. The smooth muscle cells of the layer are rather poorly differentiated, contain only a few tiny myofilaments and are thereby unlikely to contribute actively to the process of post-natal closure. Umbilical cord can be detected on ultrasound by 6 weeks of gestation and well-visualised by 8 to 9 weeks of gestation. The umbilical cord lining is a good source of mesenchymal and epith
https://en.wikipedia.org/wiki/Borwein%27s%20algorithm
In mathematics, Borwein's algorithm is an algorithm devised by Jonathan and Peter Borwein to calculate the value of . They devised several other algorithms. They published the book Pi and the AGM – A Study in Analytic Number Theory and Computational Complexity. Ramanujan–Sato series These two are examples of a Ramanujan–Sato series. The related Chudnovsky algorithm uses a discriminant with class number 1. Class number 2 (1989) Start by setting Then Each additional term of the partial sum yields approximately 25 digits. Class number 4 (1993) Start by setting Then Each additional term of the series yields approximately 50 digits. Iterative algorithms Quadratic convergence (1984) Start by setting Then iterate Then pk converges quadratically to ; that is, each iteration approximately doubles the number of correct digits. The algorithm is not self-correcting; each iteration must be performed with the desired number of correct digits for 's final result. Cubic convergence (1991) Start by setting Then iterate Then ak converges cubically to ; that is, each iteration approximately triples the number of correct digits. Quartic convergence (1985) Start by setting Then iterate Then ak converges quartically against ; that is, each iteration approximately quadruples the number of correct digits. The algorithm is not self-correcting; each iteration must be performed with the desired number of correct digits for 's final result. One iteration of this algorithm is equivalent to two iterations of the Gauss–Legendre algorithm. A proof of these algorithms can be found here: Quintic convergence Start by setting where is the golden ratio. Then iterate Then ak converges quintically to (that is, each iteration approximately quintuples the number of correct digits), and the following condition holds: Nonic convergence Start by setting Then iterate Then ak converges nonically to ; that is, each iteration approximately multiplies the number of correc
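The iteration formulas themselves did not survive extraction above (each "Start by setting ... Then iterate" is missing its equations). As one concrete illustration, the quartic (1985) iteration is commonly stated with y0 = sqrt(2) - 1, a0 = 6 - 4*sqrt(2), and a_k converging quartically to 1/pi; the sketch below follows that standard published statement (the formulas are quoted from the published algorithm, not recovered from this excerpt) using Python's decimal module for arbitrary precision.

```python
from decimal import Decimal, getcontext

def borwein_quartic(iterations, digits=60):
    """Quartically convergent Borwein iteration; a_k converges to 1/pi."""
    getcontext().prec = digits + 10  # working precision with guard digits
    sqrt2 = Decimal(2).sqrt()
    y = sqrt2 - 1        # y_0 = sqrt(2) - 1
    a = 6 - 4 * sqrt2    # a_0 = 6 - 4*sqrt(2)
    for k in range(iterations):
        r = (1 - y ** 4).sqrt().sqrt()  # fourth root of (1 - y^4)
        y = (1 - r) / (1 + r)
        a = a * (1 + y) ** 4 - Decimal(2) ** (2 * k + 3) * y * (1 + y + y * y)
    return 1 / a  # each iteration roughly quadruples the correct digits

# Three iterations already agree with pi to more than 50 decimal places.
```

As the text notes, the algorithm is not self-correcting: the working precision must be at least the number of digits wanted in the final result.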
https://en.wikipedia.org/wiki/Local%20food
Local food is food that is produced within a short distance of where it is consumed, often accompanied by a social structure and supply chain different from the large-scale supermarket system. Local food (or locavore) movements aim to connect food producers and consumers in the same geographic region, to develop more self-reliant and resilient food networks; improve local economies; or to affect the health, environment, community, or society of a particular place. The term has also been extended beyond the geographic location of supplier and consumer: local food can also be "defined in terms of social and supply chain characteristics." For example, local food initiatives often promote sustainable and organic farming practices, although these are not explicitly related to the geographic proximity of producer and consumer. Local food represents an alternative to the global food model, which often sees food traveling long distances before it reaches the consumer. History In the US, the local food movement has been traced to the Agricultural Adjustment Act of 1933, which spawned agricultural subsidies and price supports. The contemporary American movement can be traced back to proposed resolutions to the Society for Nutrition Education's 1981 guidelines. These largely unsuccessful resolutions encouraged increased local production to slow farmland loss. The program described "sustainable diets" - a term then new to the American public. At the time, the resolutions were met with strong criticism from pro-business institutions, but have had a strong resurgence of backing since 2000. In 1994, Chicago pop culture made local food a trend in the Midwest. In 2008, the United States farm bill was revised to emphasise nutrition: "it provides low-income seniors with vouchers for use at local produce markets, and it added more than $1 billion to the fresh fruit and vegetable program, which serves healthy snacks to 3 million low-income children in schools". Definitions
https://en.wikipedia.org/wiki/Zero-player%20game
A zero-player game or no-player game is a simulation game that has no sentient players. Types Several different types of games can be considered "zero-player". Determined by initial state A game that evolves as determined by its initial state, requiring no further input from humans, is considered a zero-player game. Cellular automaton games that are determined by their initial conditions, including Conway's Game of Life, are examples of this. Progress Quest is another example: the player sets up an artificial character, after which the game plays itself with no further input from the player. Godville is a similar game that took inspiration from Progress Quest; the player is a god who can communicate with a non-player character hero, but the game can progress with no interaction from the player. Incremental games, sometimes called idle games, do require some player intervention near the beginning but may become zero-player at higher levels. For example, Cookie Clicker requires players to click cookies manually before they can purchase assets that click cookies independently on the player's behalf. AI vs AI games In computer games, the term refers to programs that use artificial intelligence rather than human players; for example, some fighting and real-time strategy games can be put into a zero-player mode in which multiple AIs play against each other. Humans may face a challenge in designing the AI and giving it sufficient skill to play the game well, but the actual evolution of the game has no human intervention. See also :Category:Video games with AI-versus-AI modes Single-player game Two-player game Multiplayer video game Incremental game References Game theory game classes Game artificial intelligence
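Conway's Game of Life, mentioned above, is the canonical example: once the initial state is chosen, every later generation is fully determined with no further input. A minimal sketch using a set-of-live-cells representation (the names are this example's own):

```python
from collections import Counter

def life_step(live):
    """Compute one generation of Conway's Game of Life.

    `live` is a set of (x, y) coordinates of live cells. No input is
    needed after the initial state: the rules alone drive the game.
    """
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 live neighbours,
    # or with 2 live neighbours if it is already alive.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates forever between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Repeatedly calling `life_step` on `blinker` plays the game with zero player intervention, which is exactly what makes it a zero-player game.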
https://en.wikipedia.org/wiki/Tachometer
A tachometer (revolution-counter, tach, rev-counter, RPM gauge) is an instrument measuring the rotation speed of a shaft or disk, as in a motor or other machine. The device usually displays the revolutions per minute (RPM) on a calibrated analogue dial, but digital displays are increasingly common. The word comes from Greek ( "speed") and ( "measure"). Essentially the words tachometer and speedometer have identical meaning: a device that measures speed. It is by arbitrary convention that in the automotive world one is used for engine revolutions and the other for vehicle speed. In formal engineering nomenclature, more precise terms are used to distinguish the two. History The first tachometer was described by Bryan Donkin in a paper to the Royal Society of Arts in 1810 for which he was awarded the Gold medal of the society. This consisted of a bowl of mercury constructed in such a way that centrifugal force caused the level in a central tube to fall when it rotated and brought down the level in a narrower tube above filled with coloured spirit. The bowl was connected to the machinery to be measured by pulleys. The first mechanical tachometers were based on measuring the centrifugal force, similar to the operation of a centrifugal governor. The inventor is assumed to be the German engineer Dietrich Uhlhorn; he used it for measuring the speed of machines in 1817. Since 1840, it has been used to measure the speed of locomotives. In automobiles, trucks, tractors and aircraft Tachometers or revolution counters on cars, aircraft, and other vehicles show the rate of rotation of the engine's crankshaft, and typically have markings indicating a safe range of rotation speeds. This can assist the driver in selecting appropriate throttle and gear settings for the driving conditions. Prolonged use at high speeds may cause inadequate lubrication, overheating (exceeding capability of the cooling system), exceeding speed capability of sub-parts of the engine (for example spr
https://en.wikipedia.org/wiki/Machine%20learning
Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can effectively generalize and thus perform tasks without explicit instructions. Recently, generative artificial neural networks have been able to surpass many previous approaches in performance. Machine learning approaches have been applied to large language models, computer vision, speech recognition, email filtering, agriculture and medicine, where it is too costly to develop algorithms to perform the needed tasks. The mathematical foundations of ML are provided by mathematical optimization (mathematical programming) methods. Data mining is a related (parallel) field of study, focusing on exploratory data analysis through unsupervised learning. ML is known in its application across business problems under the name predictive analytics. Although not all machine learning is statistically based, computational statistics is an important source of the field's methods. History and relationships to other fields The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and pioneer in the field of computer gaming and artificial intelligence. The synonym self-teaching computers was also used in this time period. By the early 1960s an experimental "learning machine" with punched tape memory, called Cybertron, had been developed by Raytheon Company to analyze sonar signals, electrocardiograms, and speech patterns using rudimentary reinforcement learning. It was repetitively "trained" by a human operator/teacher to recognize patterns and equipped with a "goof" button to cause it to re-evaluate incorrect decisions. A representative book on research into machine learning during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification. Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981 a report
https://en.wikipedia.org/wiki/Pilottone
Pilottone (or Pilotone) and the related neo-pilotone are special synchronization signals recorded by analog audio recorders designed for use in motion picture production, to keep sound and film recorded on separate media (otherwise known as double system recording) synchronised. Before the adoption of timecode by the motion picture industry, pilotone sync was used in almost all 1/4-inch magnetic double system motion picture sound recording from the late 50s until the late 1980s. Previous to the introduction of 1/4-inch audio tape recordings were made on 35mm optical cameras and then later, with the introduction of magnetic recording, 16mm or 35mm magnetic stock. The first 1/4-inch recorder capable of recording a synch track to regulate the playback speed of the recording was made by Rangertone and was a variation on the soon to come pilotone system. History According to Carsten Diercks, camera operator and filmmaker at West-German Nordwestdeutscher Rundfunk (NWDR) during the 1950s, pilottone was invented at the NWDR studio in Hamburg-Lokstedt, West Germany by NWDR technical engineer Adalbert Lohmann and his assistant Udo Stepputat in the early 1950s for single-camera 16mm TV news gathering and documentaries. The first program featuring the use of pilottone was the documentary Musuri - Es geht aufwärts am Kongo ("Musuri: Upstream/progress at the Congo"), shot in early 1954 in Africa and first broadcast on ARD on March 31, 1954. The new technology required new editing suites, and Musuri camera operator Diercks turned to a small nearby 6-man workshop named Steenbeck. The subsequent success of priorly shunned 16mm for TV program gathering facilitated by the pilotone system turned Steenbeck into a multinational corporation. Neo-pilottone was invented in 1957 by Stefan Kudelski with the Nagra III tape recorder. The new technology of pilottone was brought to international attention by its use by Richard Leacock, former cameraman of filmmaker Robert Flaherty, in his do
https://en.wikipedia.org/wiki/Dormancy
Dormancy is a period in an organism's life cycle when growth, development, and (in animals) physical activity are temporarily stopped. This minimizes metabolic activity and therefore helps an organism to conserve energy. Dormancy tends to be closely associated with environmental conditions. Organisms can synchronize entry to a dormant phase with their environment through predictive or consequential means. Predictive dormancy occurs when an organism enters a dormant phase before the onset of adverse conditions. For example, photoperiod and decreasing temperature are used by many plants to predict the onset of winter. Consequential dormancy occurs when organisms enter a dormant phase after adverse conditions have arisen. This is commonly found in areas with an unpredictable climate. While very sudden changes in conditions may lead to a high mortality rate among animals relying on consequential dormancy, its use can be advantageous, as organisms remain active longer and are therefore able to make greater use of available resources. Animals Hibernation Hibernation is a mechanism used by many mammals to reduce energy expenditure and survive food shortages over the winter. Hibernation may be predictive or consequential. An animal prepares for hibernation by building up a thick layer of body fat during late summer and autumn that will provide it with energy during the dormant period. During hibernation, the animal undergoes many physiological changes, including decreased heart rate (by as much as 95%) and decreased body temperature. In addition to shivering, some hibernating animals also produce body heat by non-shivering thermogenesis to avoid freezing. Non-shivering thermogenesis is a regulated process in which the proton gradient generated by electron transport in mitochondria is used to produce heat instead of ATP in brown adipose tissue. Animals that hibernate include bats, ground squirrels and other rodents, mouse lemurs, the European hedgehog and other insectivo
https://en.wikipedia.org/wiki/Rational%20unified%20process
The rational unified process (RUP) is an iterative software development process framework created by the Rational Software Corporation, a division of IBM since 2003. RUP is not a single concrete prescriptive process, but rather an adaptable process framework, intended to be tailored by the development organizations and software project teams that will select the elements of the process that are appropriate for their needs. RUP is a specific implementation of the Unified Process. History Rational Software originally developed the rational unified process as a software process product. The product includes a hyperlinked knowledge-base with sample artifacts and detailed descriptions for many different types of activities. RUP is included in the IBM Rational Method Composer (RMC) product, which allows customization of the process. Philippe Kruchten, an experienced Rational technical representative, was tasked with heading up the original RUP team. These initial versions combined the Rational Software organisation's extensive field experience building object-oriented systems (referred to by Rational field staff as the Rational Approach) with Objectory's guidance on practices such as use cases, and incorporated extensive content from Jim Rumbaugh's Object-Modeling Technique (OMT) approach to modeling, Grady Booch's Booch method, and the newly released UML 0.8. To help make this growing knowledge base more accessible, Philippe Kruchten was tasked with the assembly of an explicit process framework for modern software engineering. This effort employed the HTML-based process delivery mechanism developed by Objectory. The resulting "Rational Unified Process" (RUP) completed a strategic tripod for Rational: a tailorable process that guided development; tools that automated the application of that process; and services that accelerated adoption of both the process and the tools. This guidance was augmented in subsequent versions with knowledge based on the experience of compa
https://en.wikipedia.org/wiki/Spherical%20Earth
Spherical Earth or Earth's curvature refers to the approximation of the figure of the Earth as a sphere. The earliest documented mention of the concept dates from around the 5th century BC, when it appears in the writings of Greek philosophers. In the 3rd century BC, Hellenistic astronomy established the roughly spherical shape of Earth as a physical fact and calculated the Earth's circumference. This knowledge was gradually adopted throughout the Old World during Late Antiquity and the Middle Ages. A practical demonstration of Earth's sphericity was achieved by Ferdinand Magellan and Juan Sebastián Elcano's circumnavigation (1519–1522). The concept of a spherical Earth displaced earlier beliefs in a flat Earth: In early Mesopotamian mythology, the world was portrayed as a disk floating in the ocean with a hemispherical sky-dome above, and this forms the premise for early world maps like those of Anaximander and Hecataeus of Miletus. Other speculations on the shape of Earth include a seven-layered ziggurat or cosmic mountain, alluded to in the Avesta and ancient Persian writings (see seven climes). The realization that the figure of the Earth is more accurately described as an ellipsoid dates to the 17th century, as described by Isaac Newton in Principia. In the early 19th century, the flattening of the earth ellipsoid was determined to be of the order of 1/300 (Delambre, Everest). The modern value as determined by the US DoD World Geodetic System since the 1960s is close to 1/298.25. Cause Earth is massive enough that the pull of gravity maintains its roughly spherical shape. Most of its deviation from spherical stems from the centrifugal force caused by rotation around its north-south axis. This force deforms the sphere into an oblate ellipsoid. Formation The Solar System formed from a dust cloud that was at least partially the remnant of one or more supernovas that produced heavy elements by nucleosynthesis. Grains of matter accreted through electrostatic i
https://en.wikipedia.org/wiki/World%20Geodetic%20System
The World Geodetic System (WGS) is a standard used in cartography, geodesy, and satellite navigation including GPS. The current version, WGS 84, defines an Earth-centered, Earth-fixed coordinate system and a geodetic datum, and also describes the associated Earth Gravitational Model (EGM) and World Magnetic Model (WMM). The standard is published and maintained by the United States National Geospatial-Intelligence Agency. History Efforts to supplement the various national surveying systems began in the 19th century with F.R. Helmert's famous book (Mathematical and Physical Theories of Physical Geodesy). Austria and Germany founded the Zentralbüro für die Internationale Erdmessung (Central Bureau of International Geodesy), and a series of global ellipsoids of the Earth were derived (e.g., Helmert 1906, Hayford 1910/ 1924). A unified geodetic system for the whole world became essential in the 1950s for several reasons: International space science and the beginning of astronautics. The lack of inter-continental geodetic information. The inability of the large geodetic systems, such as European Datum (ED50), North American Datum (NAD), and Tokyo Datum (TD), to provide a worldwide geo-data basis Need for global maps for navigation, aviation, and geography. Western Cold War preparedness necessitated a standardised, NATO-wide geospatial reference system, in accordance with the NATO Standardisation Agreement WGS 60 In the late 1950s, the United States Department of Defense, together with scientists of other institutions and countries, began to develop the needed world system to which geodetic data could be referred and compatibility established between the coordinates of widely separated sites of interest. Efforts of the U.S. Army, Navy and Air Force were combined leading to the DoD World Geodetic System 1960 (WGS 60). The term datum as used here refers to a smooth surface somewhat arbitrarily defined as zero elevation, consistent with a set of surveyor's measures o
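The WGS 84 ellipsoid is specified by a handful of constants. The semi-major axis (6 378 137 m) and inverse flattening (298.257223563) used below are the published defining values, supplied here because they do not appear in the excerpt above; the other shape parameters follow from them:

```python
# WGS 84 defining parameters (published standard values).
A = 6_378_137.0          # semi-major axis, metres
INV_F = 298.257_223_563  # inverse flattening, 1/f

def wgs84_derived():
    """Derive the semi-minor axis b and first eccentricity squared e^2."""
    f = 1 / INV_F
    b = A * (1 - f)      # semi-minor (polar) axis, about 6,356,752.314 m
    e2 = f * (2 - f)     # first eccentricity squared
    return b, e2
```

The roughly 21 km difference between `A` and `b` is the oblateness that distinguishes the WGS 84 ellipsoid from a simple sphere.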
https://en.wikipedia.org/wiki/History%20of%20geodesy
The history of geodesy (/dʒiːˈɒdɪsi/), concerning developments in measuring and representing the planet Earth, began during antiquity and ultimately blossomed during the Age of Enlightenment. Many early conceptions of the Earth held it to be flat, with the heavens being a physical dome spanning over it. Early arguments for a spherical Earth pointed to various more subtle empirical observations, including how lunar eclipses were seen as circular shadows, as well as the fact that Polaris is seen lower in the sky as one travels southward. Hellenic world Initial developments Though the earliest written mention of a spherical Earth comes from ancient Greek sources, there is no account of how the sphericity of Earth was discovered, or if it was initially simply a guess. A plausible explanation given by the historian Otto E. Neugebauer is that it was "the experience of travellers that suggested such an explanation for the variation in the observable altitude of the pole and the change in the area of circumpolar stars, a change that was quite drastic between Greek settlements" around the eastern Mediterranean Sea, particularly those between the Nile Delta and Crimea. Another possible explanation can be traced back to earlier Phoenician sailors. The first circumnavigation of Africa is described as being undertaken by Phoenician explorers employed by Egyptian pharaoh Necho II c. 610–595 BC. In The Histories, written 431–425 BC, Herodotus cast doubt on a report of the Sun observed shining from the north. He stated that the phenomenon was observed by Phoenician explorers during their circumnavigation of Africa (The Histories, 4.42) who claimed to have had the Sun on their right when circumnavigating in a clockwise direction. To modern historians, these details confirm the truth of the Phoenicians' report. The historian Dmitri Panchenko hypothesizes that it was the Phoenician circumnavigation of Africa that inspired the theory of a spherical Earth, the earliest mention of which wa
https://en.wikipedia.org/wiki/Figure%20of%20the%20Earth
In geodesy, the figure of the Earth is the size and shape used to model planet Earth. The kind of figure depends on application, including the precision needed for the model. A spherical Earth is a well-known historical approximation that is satisfactory for geography, astronomy and many other purposes. Several models with greater accuracy (including ellipsoid) have been developed so that coordinate systems can serve the precise needs of navigation, surveying, cadastre, land use, and various other concerns. Motivation Earth's topographic surface is apparent with its variety of land forms and water areas. This topographic surface is generally the concern of topographers, hydrographers, and geophysicists. While it is the surface on which Earth measurements are made, mathematically modeling it while taking the irregularities into account would be extremely complicated. The Pythagorean concept of a spherical Earth offers a simple surface that is easy to deal with mathematically. Many astronomical and navigational computations use a sphere to model the Earth as a close approximation. However, a more accurate figure is needed for measuring distances and areas on the scale beyond the purely local. Better approximations can be made by modeling the entire surface as an oblate spheroid, using spherical harmonics to approximate the geoid, or modeling a region with a best-fit reference ellipsoid. For surveys of small areas, a planar (flat) model of Earth's surface suffices because the local topography overwhelms the curvature. Plane-table surveys are made for relatively small areas without considering the size and shape of the entire Earth. A survey of a city, for example, might be conducted this way. By the late 1600s, serious effort was devoted to modeling the Earth as an ellipsoid, beginning with Jean Picard's measurement of a degree of arc along the Paris meridian. Improved maps and better measurement of distances and areas of national territories motivated these early
https://en.wikipedia.org/wiki/Pontifex%20%28project%29
PONTIFEX (Planning Of Non-specific Transportation by an Intelligent Fleet EXpert) was a mid-1980s project that introduced a novel approach to complex aircraft fleet scheduling, partially funded by the European Commission's Strategic Programme for R&D in Information Technology. Since the mathematical problems stemming from nontrivial fleet scheduling easily become computationally intractable, the PONTIFEX idea consisted of a seamless merge of algorithms and heuristic knowledge embedded in rules. The system, based on domain knowledge collected from the airlines Alitalia, KLM, Swissair, and TAP Portugal, was first adopted by Swissair and Alitalia in the late 1980s, then also by the Italian national railroad operator for their cargo division. It was still in use . References Mathematical modeling
https://en.wikipedia.org/wiki/Ultra-high-temperature%20processing
Ultra-high temperature processing (UHT), ultra-heat treatment, or ultra-pasteurization is a food processing technology that sterilizes liquid food by heating it above 135 °C (275 °F) – the temperature required to kill bacterial endospores – for 2 to 5 seconds. UHT is most commonly used in milk production, but the process is also used for fruit juices, cream, soy milk, yogurt, wine, soups, honey, and stews. UHT milk was first developed in the 1960s and became generally available for consumption in the 1970s. The heat used during the UHT process can cause Maillard browning and change the taste and smell of dairy products. An alternative process is flash pasteurization, in which the milk is heated to around 72 °C (161 °F) for at least 15 seconds. UHT milk packaged in a sterile container has a typical unrefrigerated shelf life of six to nine months. In contrast, flash pasteurized milk has a shelf life of about two weeks from processing, or about one week from being put on sale. History The most commonly applied technique to provide a safe and shelf-stable milk is heat treatment. The first system involving indirect heating with continuous flow (125 °C [257 °F] for 6 minutes) was manufactured in 1893. In 1912, a continuous-flow, direct-heating method of mixing steam with milk at temperatures of 130 to 140 °C (266 to 284 °F) was patented. However, without commercially available aseptic packaging systems to pack and store the product, such technology was not very useful in itself, and further development was stalled until the 1950s. In 1953, APV pioneered a steam injection technology, involving direct injection of steam through a specially designed nozzle that raises the product temperature instantly, under the brand name Uperiser; milk was packaged in sterile cans. In the 1960s, APV launched the first commercial steam infusion system under the Palarisator brand name. In Sweden, Tetra Pak launched tetrahedral paperboard cartons in 1952. They made a commercial breakthrough in the 1960s, after technological advances, combining carton assembling and aseptic p
https://en.wikipedia.org/wiki/HiperLAN
HiperLAN (High Performance Radio LAN) is a wireless LAN standard. It is a European alternative to the IEEE 802.11 standards. It is defined by the European Telecommunications Standards Institute (ETSI). Within ETSI the standards are defined by the BRAN (Broadband Radio Access Networks) project. The HiperLAN standard family has four different versions. HiperLAN/1 Planning for the first version of the standard, called HiperLAN/1, started in 1992, when planning of 802.11 was already under way. The goal of HiperLAN was a data rate higher than that of 802.11. The standard was approved in 1997. The functional specification is EN300652; the rest is in ETS300836. The standard covers the physical layer and the media access control part of the data link layer, like 802.11. There is a new sublayer called the Channel Access and Control (CAC) sublayer, which deals with access requests to the channels. The granting of a request depends on the usage of the channel and the priority of the request. The CAC layer provides hierarchical independence with the Elimination-Yield Non-Preemptive Multiple Access (EY-NPMA) mechanism. EY-NPMA codes priority choices and other functions into one variable-length radio pulse preceding the packet data. EY-NPMA enables the network to function with few collisions even with a large number of users. Multimedia applications work in HiperLAN because of the EY-NPMA priority mechanism. The MAC layer defines protocols for routing, security and power saving, and naturally provides data transfer to the upper layers. On the physical layer, FSK and GMSK modulations are used in HiperLAN/1. HiperLAN features: range 100 m; slow mobility (1.4 m/s); support for asynchronous and synchronous traffic; bit rate 23.59 Mbit/s; description: wireless Ethernet; frequency range: 5 GHz. HiperLAN does not conflict with microwave ovens and other kitchen appliances, which operate at 2.4 GHz. An innovative feature of HiperLAN/1, which other wireless networks do not offer, is
https://en.wikipedia.org/wiki/Anti-pattern
An anti-pattern in software engineering, project management, and business processes is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive. The term, coined in 1995 by computer programmer Andrew Koenig, was inspired by the book Design Patterns (which highlights a number of design patterns in software development that its authors considered to be highly reliable and effective) and was first published in his article in the Journal of Object-Oriented Programming. A further paper presented in 1996 by Michael Ackroyd at the Object World West Conference also documented anti-patterns. It was, however, the 1998 book AntiPatterns that both popularized the idea and extended its scope beyond the field of software design to include software architecture and project management. Other authors have since extended it further to encompass environmental, organizational, and cultural anti-patterns. Definition According to the authors of Design Patterns, there are two key elements to an anti-pattern that distinguish it from a bad habit, bad practice, or bad idea: The anti-pattern is a commonly used process, structure or pattern of action that, despite initially appearing to be an appropriate and effective response to a problem, has more bad consequences than good ones. Another solution exists to the problem the anti-pattern is attempting to address. This solution is documented, repeatable, and proven to be effective where the anti-pattern is not. A guide to what is commonly used is a "rule of three" similar to that for patterns: to be an anti-pattern it must have been witnessed occurring at least three times. Uses Documenting anti-patterns can be an effective way to analyze a problem space and to capture expert knowledge. While some anti-pattern descriptions merely document the adverse consequences of the pattern, good anti-pattern documentation also provides an alternative, or a means to ameliorate the anti-pattern. Software e
https://en.wikipedia.org/wiki/Aggregate%20pattern
An Aggregate pattern can refer to concepts in either statistics or computer programming. Both uses deal with considering a large case as composed of smaller, simpler pieces. Statistics An aggregate pattern is an important statistical concept in many fields that rely on statistics to predict the behavior of large groups, based on the tendencies of subgroups to consistently behave in a certain way. It is particularly useful in sociology, economics, psychology, and criminology. Computer programming In Design Patterns, an aggregate is not a design pattern but rather refers to an object such as a list, vector, or generator which provides an interface for creating iterators. The following example code is in Python.

def fibonacci(n: int):
    a, b = 0, 1
    count = 0
    while count < n:
        count += 1
        a, b = b, a + b
        yield a

for x in fibonacci(10):
    print(x)

def fibsum(n: int) -> int:
    total = 0
    for x in fibonacci(n):
        total += x
    return total

def fibsum_alt(n: int) -> int:
    """
    Alternate implementation, demonstrating that Python's built-in
    function sum() works with arbitrary iterators.
    """
    return sum(fibonacci(n))

myNumbers = [1, 7, 4, 3, 22]

def average(g) -> float:
    return float(sum(g)) / len(g)  # In Python 3 the cast to float is no longer necessary

Python hides essentially all of the details using the iterator protocol. Confusingly, Design Patterns uses "aggregate" to refer to the blank in the code for x in ___: which is unrelated to the term "aggregation". Neither of these terms refers to the statistical aggregation of data, such as the act of adding up the Fibonacci sequence or taking the average of a list of numbers. See also Visitor pattern Template class Facade pattern Type safety Functional programming
https://en.wikipedia.org/wiki/Assertion%20%28software%20development%29
In computer programming, specifically when using the imperative programming paradigm, an assertion is a predicate (a Boolean-valued function over the state space, usually expressed as a logical proposition using the variables of a program) connected to a point in the program, that always should evaluate to true at that point in code execution. Assertions can help a programmer read the code, help a compiler compile it, or help the program detect its own defects. For the latter, some programs check assertions by actually evaluating the predicate as they run. Then, if it is not in fact true – an assertion failure – the program considers itself to be broken and typically deliberately crashes or throws an assertion failure exception. Details The following code contains two assertions, x > 0 and x > 1, and they are indeed true at the indicated points during execution:

x = 1;
assert x > 0;
x++;
assert x > 1;

Programmers can use assertions to help specify programs and to reason about program correctness. For example, a precondition – an assertion placed at the beginning of a section of code – determines the set of states under which the programmer expects the code to execute. A postcondition – placed at the end – describes the expected state at the end of execution. For example: x > 0 { x++ } x > 1. The example above uses the notation for including assertions used by C. A. R. Hoare in his 1969 article. That notation cannot be used in existing mainstream programming languages. However, programmers can include unchecked assertions using the comment feature of their programming language. For example, in C++:

x = 5;
x = x + 1; // {x > 1}

The braces included in the comment help distinguish this use of a comment from other uses. Libraries may provide assertion features as well. For example, in C using glibc with C99 support:

#include <assert.h>

int f(void)
{
    int x = 5;
    x = x + 1;
    assert(x > 1);
    return x;
}

Several modern programming languages include checked assertions – st
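In languages with checked assertions, the Hoare-style precondition/postcondition pair described above can be written directly as executable checks. A minimal Python sketch (function and message names are invented for the example):

```python
def increment_positive(x: int) -> int:
    # precondition: the caller must supply a positive value
    assert x > 0, "precondition violated: x must be > 0"
    result = x + 1
    # postcondition: mirrors the Hoare triple  x > 0 { x := x + 1 } x > 1
    assert result > 1, "postcondition violated"
    return result

print(increment_positive(1))  # → 2, with both assertions holding

# a call that violates the precondition raises AssertionError,
# making the defect visible at the point of the broken assumption
try:
    increment_positive(0)
except AssertionError as e:
    print("caught:", e)
```

Note that Python, like C with NDEBUG, can disable these checks (the -O interpreter flag), so assertions document assumptions rather than replace input validation.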
https://en.wikipedia.org/wiki/Boat%20anchor%20%28metaphor%29
In amateur radio and computing, a boat anchor or boatanchor is something obsolete, useless, and cumbersome – so-called because metaphorically its only productive use is to be thrown into the water as a boat mooring. Terms such as brick, doorstop, and paperweight are similar. Amateur radio In amateur radio, a boat anchor or boatanchor is an old piece of radio equipment. It is usually used in reference to large, heavy radio equipment of earlier decades that used tubes. In this context boat anchors are often prized by their owners and their strengths (e.g. immunity to EMP) emphasised, even if newer equipment is more capable. An early use of the term appeared in a 1956 issue of CQ Amateur Radio Magazine. The magazine published a letter from a reader seeking "schematics or conversion data" for a war surplus Wireless Set No. 19 MK II transceiver in order to modify it for use on the amateur bands, to which the editor added a reply. The editor's use of the term generated some reader interest, and in February 1957, CQ published a follow-up story that included photos. Computers The metaphor transfers directly from old radios to old computers. It also has been extended to refer to relic software. Hardware Early computers were physically large and heavy devices. As computers became more compact, the term boat anchor became popular among users to signify that the earlier, larger computer gear was obsolete and no longer useful. Software The term boat anchor has been extended to software code that is left in a system's codebase, typically in case it is needed later. This is an example of an anti-pattern and can therefore cause many problems for people attempting to maintain the program that contains the obsolete code. The key problem is that programmers have a hard time differentiating between obsolete code which doesn't do anything and working code which does. For example, a programmer may be looking into a bug with the program's input handling system, so
https://en.wikipedia.org/wiki/Negative%20cache
In computer programming, a negative cache is a cache that also stores "negative" responses, i.e. failures. This means that a program remembers the result indicating a failure even after the cause has been corrected. Usually negative caching is a design choice, but it can also be a software bug. Examples Consider a web browser which attempts to load a page while the network is unavailable. The browser will receive an error code indicating the problem, and may display this error message to the user in place of the requested page. However, it is incorrect for the browser to place the error message in the page cache, as this would lead it to display the error again when the user tries to load the same page – even after the network is back up. The error message must not be cached under the page's URL; until the browser is able to successfully load the page, whenever the user tries to load the page, the browser must make a new attempt. A frustrating aspect of negative caches is that the user may put great effort into troubleshooting the problem, and then, after determining and removing the root cause, the error still does not vanish. There are cases where failure-like states must be cached. For instance, DNS requires that caching nameservers remember negative responses as well as positive ones. If an authoritative nameserver returns a negative response, indicating that a name does not exist, this is cached. The negative response may be perceived as a failure at the application level; however, to the nameserver caching it, it is not a failure. The cache times for negative and positive caching may be tuned independently. Description A negative cache is normally only a problem if failure is very expensive and the error condition arises automatically without user action. It creates a situation where the user is unable to isolate the cause of the failure: despite fixing everything they can think of, the program still refuses to work. When a failure is cached, the program shou
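The browser example above reduces to a small design rule: store successes, never store failures. A sketch of a cache that deliberately avoids negative caching (the API and the flaky network are hypothetical, invented for illustration):

```python
class PageCache:
    """Cache that avoids negative caching: successful fetches are stored,
    failures propagate to the caller and are never remembered, so a later
    retry re-attempts the real fetch."""

    def __init__(self, fetch):
        self._fetch = fetch   # function url -> content; may raise OSError
        self._store = {}

    def get(self, url):
        if url in self._store:
            return self._store[url]
        content = self._fetch(url)   # on failure the exception propagates
        self._store[url] = content   # ...and nothing is cached for this url
        return content

# simulate a network that fails once, then recovers
calls = {"n": 0}
def flaky_fetch(url):
    calls["n"] += 1
    if calls["n"] == 1:
        raise OSError("network unavailable")
    return "<html>ok</html>"

cache = PageCache(flaky_fetch)
try:
    cache.get("http://example.invalid/")
except OSError:
    pass                                    # error shown to user, NOT cached
print(cache.get("http://example.invalid/")) # retry succeeds once the fault is fixed
```

Had `get` stored the `OSError` result under the URL, the second call would keep reporting "network unavailable" even after the network came back – exactly the pathology the article describes.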
https://en.wikipedia.org/wiki/Code%20smell
In computer programming, a code smell is any characteristic in the source code of a program that possibly indicates a deeper problem. Determining what is and is not a code smell is subjective, and varies by language, developer, and development methodology. The term was popularised by Kent Beck on WardsWiki in the late 1990s. Usage of the term increased after it was featured in the 1999 book Refactoring: Improving the Design of Existing Code by Martin Fowler. It is also a term used by agile programmers. Definition One way to look at smells is with respect to principles and quality: "Smells are certain structures in the code that indicate violation of fundamental design principles and negatively impact design quality". Code smells are usually not bugs; they are not technically incorrect and do not prevent the program from functioning. Instead, they indicate weaknesses in design that may slow down development or increase the risk of bugs or failures in the future. Bad code smells can be an indicator of factors that contribute to technical debt. Robert C. Martin calls a list of code smells a "value system" for software craftsmanship. Often the deeper problem hinted at by a code smell can be uncovered when the code is subjected to a short feedback cycle, where it is refactored in small, controlled steps, and the resulting design is examined to see if there are any further code smells that in turn indicate the need for more refactoring. From the point of view of a programmer charged with performing refactoring, code smells are heuristics to indicate when to refactor, and what specific refactoring techniques to use. Thus, a code smell is a driver for refactoring. A 2015 study utilizing automated analysis for half a million source code commits and the manual examination of 9,164 commits determined to exhibit "code smells" found that: There exists empirical evidence for the consequences of "technical debt", but there exists only anecdotal evidence as to how, when, or w
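As a concrete instance of the refactoring loop described above, here is a hypothetical example of one classic smell, duplicated code, and the small "extract function" step that removes it (all names are invented for the example):

```python
# Before: the same validation rule is duplicated in two places --
# the classic "duplicated code" smell. Nothing is broken, but a future
# change to the rule must be made twice, inviting divergence.
def register_user(name):
    if not name or len(name) > 30:
        raise ValueError("bad name")
    return {"user": name}

def rename_user(user, name):
    if not name or len(name) > 30:
        raise ValueError("bad name")
    user["user"] = name
    return user

# After: the "extract function" refactoring gives the rule a single home
# and a name; behavior is unchanged, the smell is gone.
def _validate_name(name):
    if not name or len(name) > 30:
        raise ValueError("bad name")

def register_user_2(name):
    _validate_name(name)
    return {"user": name}

def rename_user_2(user, name):
    _validate_name(name)
    user["user"] = name
    return user
```

This illustrates the article's point: the duplicated version is not a bug, only a hint of a design weakness that a short, controlled refactoring step resolves.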
https://en.wikipedia.org/wiki/Image%20response
Image response (or more correctly, image response rejection ratio, or IMRR) is a measure of performance of a radio receiver that operates on the superheterodyne principle. In such a radio receiver, a local oscillator (LO) is used to heterodyne or "beat" against the incoming radio frequency (RF), generating sum and difference frequencies. One of these will be at the intermediate frequency (IF), and will be selected and amplified. The radio receiver is responsive to any signal at its designed IF, including unwanted signals. For example, with an LO tuned to 110 MHz, there are two incoming signal frequencies that can generate a 10 MHz IF. A signal broadcast at 100 MHz (the wanted signal), mixed with the 110 MHz LO, will create the sum frequency of 210 MHz (ignored by the receiver) and the difference frequency at the desired 10 MHz. However, a signal broadcast at 120 MHz (the unwanted signal), mixed with the 110 MHz LO, will create a sum frequency of 230 MHz (ignored by the receiver) and a difference frequency also at 10 MHz. The signal at 120 MHz is called the image of the wanted signal at 100 MHz. The ability of the receiver to reject this image gives the image rejection ratio (IMRR) of the system. Image rejection ratio The image rejection ratio, or image frequency rejection ratio, is the ratio of the intermediate-frequency (IF) signal level produced by the desired input frequency to that produced by the image frequency. The image rejection ratio is usually expressed in dB. When the image rejection ratio is measured, the input signal levels of the desired and image frequencies must be equal for the measurement to be meaningful. IMRR is measured in dB, giving the ratio of the wanted to the unwanted signal required to yield the same output from the receiver. In a good design, ratios of >60 dB are achievable. Note that IMRR is not a measurement of the performance of the IF stages or IF filtering (selectivity); the signal yields a perfectly
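The 100/110/120 MHz worked example above can be verified in a few lines; an ideal-mixer sketch (function names are illustrative):

```python
def mixer_products(f_rf, f_lo):
    """Sum and difference frequencies produced by an ideal mixer (MHz)."""
    return f_rf + f_lo, abs(f_rf - f_lo)

def image_frequency(f_wanted, f_lo):
    """The image lies on the far side of the LO, one IF away:
    f_image = 2*f_lo - f_wanted (for an LO above the wanted signal)."""
    return 2 * f_lo - f_wanted

f_lo, f_wanted = 110.0, 100.0
f_if = abs(f_wanted - f_lo)              # 10 MHz intermediate frequency
f_img = image_frequency(f_wanted, f_lo)  # 120 MHz

print(mixer_products(f_wanted, f_lo))  # (210.0, 10.0) – difference lands on the IF
print(mixer_products(f_img, f_lo))     # (230.0, 10.0) – the image also lands on the IF
```

Both the wanted signal and its image produce the same 10 MHz difference frequency, which is why image rejection must happen before the mixer (in the RF filtering), not in the IF stages.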
https://en.wikipedia.org/wiki/Automatic%20gain%20control
Automatic gain control (AGC) is a closed-loop feedback regulating circuit in an amplifier or chain of amplifiers, the purpose of which is to maintain a suitable signal amplitude at its output, despite variation of the signal amplitude at the input. The average or peak output signal level is used to dynamically adjust the gain of the amplifiers, enabling the circuit to work satisfactorily with a greater range of input signal levels. It is used in most radio receivers to equalize the average volume (loudness) of different radio stations due to differences in received signal strength, as well as variations in a single station's radio signal due to fading. Without AGC the sound emitted from an AM radio receiver would vary to an extreme extent from a weak to a strong signal; the AGC effectively reduces the volume if the signal is strong and raises it when it is weaker. In a typical receiver the AGC feedback control signal is usually taken from the detector stage and applied to control the gain of the IF or RF amplifier stages. How it works The signal to be gain controlled (the detector output in a radio) goes to a diode & capacitor, which produce a peak-following DC voltage. This is fed to the RF gain blocks to alter their bias, thus altering their gain. Traditionally all the gain-controlled stages came before the signal detection, but it is also possible to improve gain control by adding a gain-controlled stage after signal detection. Example use cases AM radio receivers In 1925, Harold Alden Wheeler invented automatic volume control (AVC) and obtained a patent. Karl Küpfmüller published an analysis of AGC systems in 1928. By the early 1930s most new commercial broadcast receivers included automatic volume control. AGC is a departure from linearity in AM radio receivers. Without AGC, an AM radio would have a linear relationship between the signal amplitude and the sound waveform – the sound amplitude, which correlates with loudness, is proportional to the ra
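The diode-and-capacitor peak follower described above can be sketched as a toy digital feedback loop; parameter names and values are illustrative only, not from any real receiver design:

```python
def agc(samples, target=1.0, attack=0.05):
    """Toy AGC loop: track the input peak with a leaky peak detector
    (standing in for the diode-and-capacitor) and divide the signal by
    it so the output level stays near `target`."""
    peak = 1e-6      # "capacitor voltage" of the peak follower
    out = []
    for s in samples:
        mag = abs(s)
        if mag > peak:
            peak = mag                      # fast charge (attack)
        else:
            peak += attack * (mag - peak)   # slow discharge (decay)
        out.append(s * target / peak)
    return out

# a quiet station and a loud station, differing in level by 500x
weak = [0.01 * ((-1) ** i) for i in range(200)]
strong = [5.0 * ((-1) ** i) for i in range(200)]

# after settling, both emerge from the AGC at roughly the same level
print(max(abs(x) for x in agc(weak)[50:]),
      max(abs(x) for x in agc(strong)[50:]))
```

This mirrors the article's point: without the loop, the two stations would differ in output by the full 500x; with it, the gain adapts so the average level is equalized.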
https://en.wikipedia.org/wiki/Copy-and-paste%20programming
Copy-and-paste programming, sometimes referred to as just pasting, is the production of highly repetitive computer programming code, as produced by copy and paste operations. It is primarily a pejorative term; those who use the term are often implying a lack of programming competence. It may also be the result of technology limitations (e.g., an insufficiently expressive development environment) as subroutines or libraries would normally be used instead. However, there are occasions when copy-and-paste programming is considered acceptable or necessary, such as for boilerplate, loop unrolling (when not supported automatically by the compiler), or certain programming idioms, and it is supported by some source code editors in the form of snippets. Origins Copy-and-paste programming is often done by inexperienced or student programmers, who find the act of writing code from scratch difficult or irritating and prefer to search for a pre-written solution or partial solution they can use as a basis for their own problem solving. (See also Cargo cult programming) Inexperienced programmers who copy code often do not fully understand the pre-written code they are taking. As such, the problem arises more from their inexperience and lack of courage in programming than from the act of copying and pasting, per se. The code often comes from disparate sources such as friends' or co-workers' code, Internet forums, code provided by the student's professors/TAs, or computer science textbooks. The result risks being a disjointed clash of styles, and may have superfluous code that tackles problems for which new solutions are no longer required. A further problem is that bugs can easily be introduced by assumptions and design choices made in the separate sources that no longer apply when placed in a new environment. Such code may also, in effect, be unintentionally obfuscated, as the names of variables, classes, functions and the like are typically left unchanged, even though their
https://en.wikipedia.org/wiki/Crystal%20filter
A crystal filter allows some frequencies to 'pass' through an electrical circuit while attenuating undesired frequencies. An electronic filter can use quartz crystals as resonator components of a filter circuit. Quartz crystals are piezoelectric, so their mechanical characteristics can affect electronic circuits (see mechanical filter). In particular, quartz crystals can exhibit mechanical resonances with a very high quality factor, or Q (from 10,000 to 100,000 and greater – far higher than conventional resonators built from inductors and capacitors). The crystal's stability and its high Q factor allow crystal filters to have precise center frequencies and steep band-pass characteristics. Typical crystal filter attenuation in the band-pass is approximately 2–3 dB. Crystal filters are commonly used in communication devices such as radio receivers. Crystal filters are used in the intermediate frequency (IF) stages of high-quality radio receivers. They are preferred because they are very stable mechanically and thus have little change in resonant frequency with changes in operating temperature. For the highest-stability applications, crystals are placed in ovens with controlled temperature, making the operating temperature independent of ambient temperature. Cheaper sets may use ceramic filters built from ceramic resonators (which also exploit the piezoelectric effect) or tuned LC circuits. Very high quality "crystal ladder" filters can be constructed of serial arrays of crystals. The most common use of crystal filters is at frequencies of 9 MHz or 10.7 MHz to provide selectivity in communications receivers, or at higher frequencies as a roofing filter in receivers using up-conversion. The vibrating frequencies of the crystal are determined by its "cut" (physical shape), such as the common AT cut used for crystal filters designed for radio communications. The cut also determines some temperature characteristics, which affect the stability of the resonant frequency. However
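The Q values quoted above translate directly into resonator bandwidth via the standard relation BW = f0/Q; a quick comparison (the specific numbers are illustrative):

```python
def bandwidth_hz(center_hz, q):
    """-3 dB bandwidth of a single resonator: BW = f0 / Q."""
    return center_hz / q

# a 9 MHz crystal with Q = 50,000 versus an LC tank with Q = 100
print(bandwidth_hz(9e6, 50_000))   # 180.0 Hz
print(bandwidth_hz(9e6, 100))      # 90000.0 Hz
```

A single crystal resonator at a common 9 MHz IF is hundreds of times narrower than an LC tank at the same frequency, which is why crystal (and crystal-ladder) filters dominate where steep, narrow IF selectivity is needed.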
https://en.wikipedia.org/wiki/Concern%20%28computer%20science%29
In computer science, a concern is a particular set of information that has an effect on the code of a computer program. A concern can be as general as the details of database interaction or as specific as performing a primitive calculation, depending on the level of conversation between developers and the program being discussed. IBM uses the term concern space to describe the sectioning of conceptual information. Overview Usually the code can be separated into logical sections, each addressing separate concerns, and so it hides the need for a given section to know particular information addressed by a different section. This leads to a modular program. Edsger W. Dijkstra coined the term "separation of concerns" to describe the mentality behind this modularization, which allows the programmer to reduce the complexity of the system being designed. Two different concerns that intermingle in the same section of code are called "highly coupled". Sometimes the chosen module divisions do not allow for one concern to be completely separated from another, resulting in cross-cutting concerns. The various programming paradigms address the issue of cross-cutting concerns to different degrees. Data logging is a common cross-cutting concern, being used in many other parts of the program other than the particular module(s) that actually log the data. Since changes to the logging code can affect other sections, it could introduce bugs in the operation of the program. Paradigms that specifically address the issue of concern separation: object-oriented programming, which describes concerns as objects; functional programming, which describes concerns as functions; and aspect-oriented software development, which treats concerns and their interaction as constructs of their own standing. See also Cross-cutting concern Separation of concerns Issue (computers), a unit of work to accomplish an improvement in a data system References External links Concerns in Rails, by DHH, the Rails creator Soft
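Logging, the cross-cutting concern mentioned above, can be partly separated in plain Python with a decorator – a lightweight stand-in for the aspect-oriented approach (all names are hypothetical):

```python
import functools

log = []  # stand-in for a real logging sink

def logged(fn):
    """The logging concern lives entirely here; the business functions
    below stay single-purpose and never mention logging."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        log.append(f"{fn.__name__}{args} -> {result}")
        return result
    return wrapper

@logged
def add(a, b):
    return a + b

@logged
def scale(x, factor=2):
    return x * factor

add(2, 3)
scale(4, factor=10)
print(log)
```

A change to how logging works (format, destination, filtering) now touches only the `logged` decorator, not every function that happens to be logged – the separation that the article says tangled cross-cutting code lacks.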
https://en.wikipedia.org/wiki/Product%20bundling
In marketing, product bundling is offering several products or services for sale as one combined product or service package. It is a common feature in many imperfectly competitive product and service markets. Industries engaged in the practice include telecommunications services, financial services, health care, information, and consumer electronics. A software bundle might include a word processor, spreadsheet, and presentation program into a single office suite. The cable television industry often bundles many TV and movie channels into a single tier or package. The fast food industry combines separate food items into a "meal deal" or "value meal". A bundle of products may be called a package deal; in recorded music or video games, a compilation or box set; or in publishing, an anthology. Most firms are multi-product or multi-service companies faced with the decision whether to sell products or services separately at individual prices or whether combinations of products should be marketed in the form of "bundles" for which a "bundle price" is asked. Price bundling plays an increasingly important role in many industries (e.g. banking, insurance, software, automotive) and some companies even build their business strategies on bundling. In bundle pricing, companies sell a package or set of goods or services for a lower price than they would charge if the customer bought all of them separately. Pursuing a bundle pricing strategy allows a business to increase its profit by using a discount to induce customers to buy more than they otherwise would have. Rationale Bundling is most successful when: There are economies of scale in production. There are economies of scope in distribution. This can be seen in consumer electronics bundles where a big box electronics store offers all of the components for a home theatre setup (DVD player, flatscreen TV, surround sound speakers, receiver, subwoofer) for a lower price than if each component were to be purchased separately. T
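The pricing rationale above can be made concrete with a toy bundling calculation; the customers, products, and reservation prices are invented for illustration:

```python
# Two customers with opposite valuations for a word processor (WP)
# and a spreadsheet (SS) -- the classic case where bundling pays.
reservations = {
    "alice": {"WP": 100, "SS": 40},
    "bob":   {"WP": 40,  "SS": 100},
}

def revenue_separate(p_wp, p_ss):
    """Revenue at uniform per-product prices: a customer buys a product
    only if their reservation price meets the asking price."""
    total = 0
    for r in reservations.values():
        if r["WP"] >= p_wp:
            total += p_wp
        if r["SS"] >= p_ss:
            total += p_ss
    return total

def revenue_bundle(p_bundle):
    """Revenue when the two products are sold only as a bundle."""
    return sum(p_bundle for r in reservations.values()
               if r["WP"] + r["SS"] >= p_bundle)

# Selling separately at the best uniform prices, each customer buys only
# the product they value highly:
print(revenue_separate(100, 100))   # 200
# A bundle at 140 extracts each customer's full combined valuation:
print(revenue_bundle(140))          # 280
```

The bundle wins here because the valuations are negatively correlated across customers, which flattens the demand for the package and lets one price capture more of the total willingness to pay.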
https://en.wikipedia.org/wiki/Graham%27s%20number
Graham's number is an immense number that arose as an upper bound on the answer of a problem in the mathematical field of Ramsey theory. It is much larger than many other large numbers such as Skewes's number and Moser's number, both of which are in turn much larger than a googolplex. As with these, it is so large that the observable universe is far too small to contain an ordinary digital representation of Graham's number, assuming that each digit occupies one Planck volume, possibly the smallest measurable space. But even the number of digits in this digital representation of Graham's number would itself be a number so large that its digital representation cannot be represented in the observable universe. Nor even can the number of digits of that number – and so forth, for a number of times far exceeding the total number of Planck volumes in the observable universe. Thus Graham's number cannot be expressed even by physical universe-scale power towers of the form 3^3^3^···^3. However, Graham's number can be explicitly given by computable recursive formulas using Knuth's up-arrow notation or equivalent, as was done by Ronald Graham, the number's namesake. As there is a recursive formula to define it, it is much smaller than typical busy beaver numbers. Though too large to ever be computed in full, the sequence of digits of Graham's number can be computed explicitly via simple algorithms; the last 13 digits are ...7262464195387. Using Knuth's up-arrow notation, Graham's number is g64, where g1 = 3↑↑↑↑3 and, for 2 ≤ n ≤ 64, gn = 3↑↑···↑↑3 with g(n−1) up-arrows. Graham's number was used by Graham in conversations with popular science writer Martin Gardner as a simplified explanation of the upper bounds of the problem he was working on. In 1977, Gardner described the number in Scientific American, introducing it to the general public. At the time of its introduction, it was the largest specific positive integer ever to have been used in a published mathematical proof. The number was described in the 1980 Guinness Book of World Records, adding to its
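The "simple algorithms" for the last digits mentioned above work because the last d digits of a tower 3↑↑k stabilize once the tower is tall enough. A sketch using iterated modular exponentiation; it relies only on Euler's theorem, which applies here because every modulus in the φ-chain of 10^13 is of the form 2^a·5^b and hence coprime to 3:

```python
def phi(n):
    """Euler's totient by trial division (fine for n up to ~10**13
    whose factors are small, as in the chain used below)."""
    out, nn, p = n, n, 2
    while p * p <= nn:
        if nn % p == 0:
            out -= out // p
            while nn % p == 0:
                nn //= p
        p += 1
    if nn > 1:
        out -= out // nn
    return out

def tower_mod(b, height, m):
    """b↑↑height = b^b^...^b (height copies) modulo m, computed by
    recursively reducing the exponent modulo phi(m)."""
    if m == 1:
        return 0
    if height == 1:
        return b % m
    t = phi(m)
    e = tower_mod(b, height - 1, t)
    # the true exponent is astronomically large; adding t keeps the
    # reduction valid (and is exact here since gcd(b, m) == 1 throughout)
    return pow(b, e + t, m)

# The last 13 digits of 3↑↑k are already stable by k = 30, and Graham's
# number is a far taller tower of 3s, so this matches its last 13 digits:
print(tower_mod(3, 30, 10**13))  # 7262464195387
```

The same recursion with a larger modulus yields as many trailing digits as desired, even though the number itself can never be written out.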
https://en.wikipedia.org/wiki/Separation%20of%20concerns
In computer science, separation of concerns is a design principle for separating a computer program into distinct sections. Each section addresses a separate concern, a set of information that affects the code of a computer program. A concern can be as general as "the details of the hardware for an application", or as specific as "the name of which class to instantiate". A program that embodies SoC well is called a modular program. Modularity, and hence separation of concerns, is achieved by encapsulating information inside a section of code that has a well-defined interface. Encapsulation is a means of information hiding. Layered designs in information systems are another embodiment of separation of concerns (e.g., presentation layer, business logic layer, data access layer, persistence layer). Separation of concerns results in more degrees of freedom for some aspect of the program's design, deployment, or usage. Common among these is increased freedom for simplification and maintenance of code. When concerns are well-separated, there are more opportunities for module upgrade, reuse, and independent development. Hiding the implementation details of modules behind an interface enables improving or modifying a single concern's section of code without having to know the details of other sections and without having to make corresponding changes to those other sections. Modules can also expose different versions of an interface, which increases the freedom to upgrade a complex system in piecemeal fashion without interim loss of functionality. Separation of concerns is a form of abstraction. As with most abstractions, separating concerns means adding additional code interfaces, generally creating more code to be executed. So despite the many benefits of well-separated concerns, there is often an associated execution penalty. Implementation The mechanisms for modular or object-oriented programming that are provided by a programming language are mechanisms that allow d
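The layered designs mentioned above can be sketched in miniature; the three "layers" and all names here are hypothetical, invented for illustration:

```python
class MemoryStore:
    """Data-access layer: the only code that knows how rows are kept."""
    def __init__(self):
        self._rows = {}
    def get(self, key, default=0):
        return self._rows.get(key, default)
    def put(self, key, value):
        self._rows[key] = value

class Accounts:
    """Business-logic layer: depends only on the store's get/put
    interface, never on how storage is implemented."""
    def __init__(self, store):
        self._store = store
    def deposit(self, who, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._store.put(who, self._store.get(who) + amount)
    def balance(self, who):
        return self._store.get(who)

def render(accounts, who):
    """Presentation layer: formatting only, no business rules."""
    return f"{who}: {accounts.balance(who)} credits"

accounts = Accounts(MemoryStore())
accounts.deposit("ada", 30)
accounts.deposit("ada", 12)
print(render(accounts, "ada"))  # ada: 42 credits
```

Because each layer touches only the interface of the one below, `MemoryStore` could be swapped for a database-backed store, or `render` for an HTML view, without changes rippling into the other sections – the "more degrees of freedom" the article describes.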
https://en.wikipedia.org/wiki/Automorphic%20number
In mathematics, an automorphic number (sometimes referred to as a circular number) is a natural number in a given number base b whose square "ends" in the same digits as the number itself.

Definition and properties

Given a number base b, a natural number n with k digits is an automorphic number if n is a fixed point of the polynomial function f(x) = x^2 over Z/b^k Z, the ring of integers modulo b^k; equivalently, n^2 ≡ n (mod b^k). As the inverse limit of the rings Z/b^k Z is Z_b, the ring of b-adic integers, automorphic numbers are used to find the numerical representations of the fixed points of f(x) = x^2 over Z_b. For example, with b = 10 there are four 10-adic fixed points of f(x) = x^2, the last 10 digits of which are one of these:

...0000000000
...0000000001
...8212890625
...1787109376

Thus, the automorphic numbers in base 10 are 0, 1, 5, 6, 25, 76, 376, 625, 9376, 90625, 109376, 890625, 2890625, 7109376, 12890625, 87109376, 212890625, 787109376, 1787109376, 8212890625, 18212890625, 81787109376, 918212890625, 9918212890625, 40081787109376, 59918212890625, ... .

A fixed point of f(x) is a zero of the function g(x) = f(x) − x = x^2 − x. In the ring of integers modulo b, there are 2^ω(b) zeroes of g(x), where the prime omega function ω(b) is the number of distinct prime factors of b. An element x in Z/bZ is a zero of g(x) if and only if x ≡ 0 (mod p^(v_p(b))) or x ≡ 1 (mod p^(v_p(b))) for every prime p dividing b, where v_p(b) is the p-adic valuation of b. Since either of the two values is possible for each of the ω(b) primes, there are 2^ω(b) zeroes of g(x), and thus 2^ω(b) fixed points of f(x). According to Hensel's lemma, if there are k zeroes or fixed points of a polynomial function modulo b, then there are k corresponding zeroes or fixed points of the same function modulo any power of b, and this remains true in the inverse limit. Thus, in any given base b there are 2^ω(b) b-adic fixed points of f(x) = x^2.

As 0 is always a zero-divisor, 0 and 1 are always fixed points of f(x), and 0 and 1 are automorphic numbers in every base. These solutions are called trivial automorphic numbers. If b is a prime power, then the ring of b-adic numbers has no zero-divisors other than 0, so the only fixed points of f(x) are 0 and 1. As a result, nontrivial automorphic numbers, those other than 0 and 1, only exist when the base
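The defining congruence n^2 ≡ n (mod b^k) is easy to check directly. The following sketch enumerates base-10 automorphic numbers by that test (the function name `is_automorphic` is invented for the example):

```python
def is_automorphic(n):
    """True if n^2 ends in the base-10 digits of n, i.e. n^2 ≡ n (mod 10^k)."""
    k = len(str(n))          # number of digits of n in base 10
    return n * n % 10 ** k == n

# Enumerate the automorphic numbers below 100000.
automorphics = [n for n in range(100000) if is_automorphic(n)]
print(automorphics)  # [0, 1, 5, 6, 25, 76, 376, 625, 9376, 90625]
```

The output matches the start of the base-10 sequence given above; since 10 = 2 · 5 has ω(10) = 2 distinct prime factors, there are 2^2 = 4 ten-adic fixed points, which is why at most a handful of new automorphic numbers appear per digit length.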
https://en.wikipedia.org/wiki/Leptocephalus
Leptocephalus (meaning "slim head") is the flat and transparent larva of the eel, marine eels, and other members of the superorder Elopomorpha. This is one of the most diverse groups of teleosts, containing 801 species in 4 orders, 24 families, and 156 genera. This group is thought to have arisen in the Cretaceous period over 140 million years ago.

Fishes with a leptocephalus larval stage include the most familiar eels, such as the conger, moray eel, and garden eel, as well as members of the family Anguillidae, plus more than 10 other families of lesser-known types of marine eels. These are all true eels of the order Anguilliformes. Leptocephali of eight species of eels from the South Atlantic Ocean were described by Meyer-Rochow. The fishes of the other four traditional orders of elopomorph fishes that have this type of larva are more diverse in their body forms and include the tarpon, bonefish, spiny eel, and pelican eel, and deep-sea species like Cyema atrum and Notacanthidae species, the latter with giant leptocephalus-like larvae.

Description

Leptocephali (singular: leptocephalus) all have laterally compressed bodies that contain transparent jelly-like substances on the inside and a thin layer of muscle with visible myomeres on the outside. Their body organs are small, and they have only a simple tube for a gut. This combination of features makes them very transparent when alive. Leptocephali have dorsal and anal fins confluent with their caudal fins, but lack pelvic fins. They also lack red blood cells until they begin to metamorphose into the juvenile glass-eel stage, when they start to look like eels. Leptocephali are also characterized by fang-like teeth that are present until metamorphosis, when they are lost.

Leptocephali differ from most fish larvae because they grow to much larger sizes and have long larval periods of about three months to more than a year. Another distinguishing feature of these organisms is their mucinou
https://en.wikipedia.org/wiki/XM%20%28file%20format%29
XM, standing for "extended module", is an audio file type introduced by Triton's FastTracker 2. XM introduced multisampling-capable instruments with volume and panning envelopes, sample looping, and basic pattern compression. It also expanded the available effect commands and channels, added 16-bit sample support, and offered an alternative frequency table for portamentos. XM is a common format for many module files.

The file format was initially documented by its creator in the file XM.TXT, which accompanied the 2.08 release of FastTracker 2, as well as its latest known beta version, 2.09b. The file, written in 1994 and attributed to Mr.H of Triton (Fredrik Huss), bears the header "The XM module format description for XM files version $0104." The contents of the file have been posted on this article's Talk subpage for reference. This documentation is, however, said to be incomplete and insufficient to properly recreate the behaviour of the original program. The MilkyTracker project has expanded the documentation of the XM file format, in an attempt to replicate not only the behaviour of the original software but also its quirks. Their documentation of the XM file format is available on the project's GitHub repository.

OXM (oggmod) is a subformat which compresses the XM samples using Vorbis.

Supporting Media Players

Windows Media Player – supports .XM files as long as the player version is x86 (32-bit)
Cowon jetAudio – a freeware audio player for Windows which supports .XM files
XMPlay – a freeware audio player for Windows which supports .XM files
foobar2000 – a freeware audio player for Windows that supports .XM files through a plugin
VLC media player – an open-source media player for Windows, Linux, and macOS which supports .XM files
MusicBee – a freeware audio player for Windows which supports .XM files

See also

Module file
MOD (file format)
S3M (file format)
IT (file format)
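As a rough illustration of what the XM.TXT documentation describes, the fixed 60-byte file header can be parsed as below. This is a hedged sketch: the field offsets (17-byte "Extended Module: " signature, 20-byte module name, a 0x1A byte, 20-byte tracker name, then a little-endian version word, usually $0104) follow the commonly described layout, and as the text above notes, the original spec is incomplete, so real files may deviate. The function and variable names are invented for the example.

```python
import struct

def parse_xm_header(data: bytes) -> dict:
    """Parse the fixed 60-byte XM header (layout per the XM.TXT description)."""
    if data[:17] != b"Extended Module: ":
        raise ValueError("not an XM file")
    module_name = data[17:37].rstrip(b"\x00 ").decode("ascii", "replace")
    tracker_name = data[38:58].rstrip(b"\x00 ").decode("ascii", "replace")
    (version,) = struct.unpack_from("<H", data, 58)  # little-endian word
    return {"module": module_name, "tracker": tracker_name,
            "version": f"${version:04X}"}

# Synthetic 60-byte header built for demonstration (not a real file):
demo = (b"Extended Module: " + b"demo song".ljust(20) + b"\x1a"
        + b"FastTracker v2.00".ljust(20) + struct.pack("<H", 0x0104))
print(parse_xm_header(demo))
# {'module': 'demo song', 'tracker': 'FastTracker v2.00', 'version': '$0104'}
```

Beyond this fixed header lie the variable-length song header, patterns, and instruments, which is where the incompleteness of the original documentation matters most.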