https://en.wikipedia.org/wiki/Hilbert%20matrix
In linear algebra, a Hilbert matrix, introduced by Hilbert (1894), is a square matrix with entries being the unit fractions $H_{ij} = \frac{1}{i+j-1}$. For example, this is the 5 × 5 Hilbert matrix: $H = \begin{pmatrix} 1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6} \\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7} \\ \frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8} \\ \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8} & \frac{1}{9} \end{pmatrix}$. The entries can also be defined by the integral $H_{ij} = \int_0^1 x^{i+j-2}\,dx$, that is, as a Gramian matrix for powers of x. It arises in the least squares approximation of arbitrary functions by polynomials. The Hilbert matrices are canonical examples of ill-conditioned matrices, being notoriously difficult to use in numerical computation. For example, the 2-norm condition number of the matrix above is about $4.8 \times 10^5$. Historical note Hilbert (1894) introduced the Hilbert matrix to study the following question in approximation theory: "Assume that $I = [a, b]$ is a real interval. Is it then possible to find a non-zero polynomial P with integer coefficients, such that the integral $\int_a^b P(x)^2\,dx$ is smaller than any given bound ε > 0, taken arbitrarily small?" To answer this question, Hilbert derives an exact formula for the determinant of the Hilbert matrices and investigates their asymptotics. He concludes that the answer to his question is positive if the length $b - a$ of the interval is smaller than 4. Properties The Hilbert matrix is symmetric and positive definite. The Hilbert matrix is also totally positive (meaning that the determinant of every submatrix is positive). The Hilbert matrix is an example of a Hankel matrix. It is also a specific example of a Cauchy matrix. The determinant can be expressed in closed form, as a special case of the Cauchy determinant. The determinant of the n × n Hilbert matrix is $\det(H_n) = \frac{c_n^4}{c_{2n}}$, where $c_n = \prod_{i=1}^{n-1} i^{n-i} = \prod_{i=1}^{n-1} i!$. Hilbert already mentioned the curious fact that the determinant of the Hilbert matrix is the reciprocal of an integer (a sequence recorded in the OEIS), which also follows from the identity $\frac{1}{\det(H_n)} = \prod_{i=1}^{n-1} (2i+1)\binom{2i}{i}^2$. Using Stirling's approximation of the factorial, one can establish the following asymptotic result: $\det(H_n) \sim a_n\, n^{-1/4} (2\pi)^n 4^{-n^2}$, where $a_n$ converges to the constant $e^{1/4}\, 2^{1/12}\, A^{-3} \approx 0.6450$ as $n \to \infty$, where A is the Glaisher–Kinkelin constant. The inverse of the Hilbert matrix can be expressed in closed form using
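A quick way to see the ill-conditioning in practice is to build a few Hilbert matrices numerically; the sketch below (Python with NumPy/SciPy, printed values approximate) reports the 2-norm condition number and determinant for several sizes.

```python
# Minimal sketch: Hilbert matrices become ill-conditioned very quickly.
import numpy as np
from scipy.linalg import hilbert  # builds H with H[i, j] = 1 / (i + j + 1) for 0-based i, j

for n in (3, 5, 8, 12):
    H = hilbert(n)
    print(f"n={n:2d}  cond_2 ~ {np.linalg.cond(H, 2):.3e}  det ~ {np.linalg.det(H):.3e}")
# For n = 5 the 2-norm condition number is about 4.8e5, matching the figure above.
```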
https://en.wikipedia.org/wiki/Syntax%20highlighting
Syntax highlighting is a feature of text editors that is used for programming, scripting, or markup languages, such as HTML. The feature displays text, especially source code, in different colours and fonts according to the category of terms. This feature facilitates writing in a structured language such as a programming language or a markup language as both structures and syntax errors are visually distinct. This feature is also employed in many programming related contexts (such as programming manuals), either in the form of colorful books or online websites to make understanding code snippets easier for readers. Highlighting does not affect the meaning of the text itself; it is intended only for human readers. Syntax highlighting is a form of secondary notation, since the highlights are not part of the text meaning, but serve to reinforce it. Some editors also integrate syntax highlighting with other features, such as spell checking or code folding, as aids to editing which are external to the language. Practical benefits Syntax highlighting is one strategy to improve the readability and context of the text; especially for code that spans several pages. The reader can easily ignore large sections of comments or code, depending on what they are looking for. Syntax highlighting also helps programmers find errors in their program. For example, most editors highlight string literals in a different color. Consequently, spotting a missing delimiter is much easier because of the contrasting color of the text. Brace matching is another important feature with many popular editors. This makes it simple to see if a brace has been left out or locate the match of the brace the cursor is on by highlighting the pair in a different color. A study published in the conference PPIG evaluated the effects of syntax highlighting on the comprehension of short programs, finding that the presence of syntax highlighting significantly reduces the time taken for a programmer to internal
https://en.wikipedia.org/wiki/Health%20claim
A health claim on a food label and in food marketing is a claim by a manufacturer of food products that their food will reduce the risk of developing a disease or condition. For example, it is claimed by the manufacturers of oat cereals that oat bran can reduce cholesterol, which will lower the chances of developing serious heart conditions. Vague health claims include that the food inside is "healthy," "organic," "low fat," "non-GMO," "no sugar added," or "natural". Health claims are also made for over-the-counter drugs and prescription drugs, medical procedures, and medical devices, but these generally have a separate, much more stringent set of regulations. Health claims in the United States In the United States, health claims on nutrition facts labels are regulated by the U.S. Food and Drug Administration (FDA), while advertising is regulated by the Federal Trade Commission. Dietary supplements are regulated as a separate type of consumer item from food or over-the-counter drugs. Food FDA guidelines According to the FDA, "Authorized health claims in food labeling are claims that have been reviewed by FDA and are allowed on food products or dietary supplements to show that a food or food component may reduce the risk of a disease or a health-related condition." An authorized health claim is limited to evidence for reducing the risk of a disease, and does not apply to the diagnosis, cure, mitigation, or treatment of disease. It must be reviewed, evaluated, and publicly-announced by the FDA prior to use. Approval of a health claim by the FDA requires significant scientific agreement (SSA) among reputable scientists that the claim is based on publicly-available evidence that a relationship exists between an element and a disease. The SSA standard provides a high degree of confidence that the relationship between the element and the disease is valid. Based on scientific evidence, such claims may be used for marketing on foods or dietary supplements. The au
https://en.wikipedia.org/wiki/Isinglass
Isinglass ( ) is a substance obtained from the dried swim bladders of fish. It is a form of collagen used mainly for the clarification or fining of some beer and wine. It can also be cooked into a paste for specialised gluing purposes. The English word origin is from the obsolete Dutch huizenblaas – huizen is a kind of sturgeon, and blaas is a bladder, or German Hausenblase, meaning essentially the same. Although originally made exclusively from sturgeon, especially beluga, in 1795 an invention by William Murdoch facilitated a cheap substitute using cod. This was extensively used in Britain in place of Russian isinglass, and in the US hake was important. In modern British brewing all commercial isinglass products are blends of material from a limited range of tropical fish. The bladders, once removed from the fish, processed, and dried, are formed into various shapes for use. Foods and drinks Before the inexpensive production of gelatin and other competing products, isinglass was used in confectionery and desserts such as fruit jelly and blancmange. Isinglass finings are widely used as a processing aid in the British brewing industry to accelerate the fining, or clarification, of beer. It is used particularly in the production of cask-conditioned beers, although many cask ales are available which are not fined using isinglass. The finings flocculate the live yeast in the beer into a jelly-like mass, which settles to the bottom of the cask. Left undisturbed, beer will clear naturally; the use of isinglass finings accelerates the process. Isinglass is sometimes used with an auxiliary fining, which further accelerates the process of sedimentation. Non-cask beers that are destined for kegs, cans, or bottles are often pasteurised and filtered. The yeast in these beers tends to settle to the bottom of the storage tank naturally, so the sediment from these beers can often be filtered without using isinglass. However, some breweries still use isinglass finings for n
https://en.wikipedia.org/wiki/Inverted%20pendulum
An inverted pendulum is a pendulum that has its center of mass above its pivot point. It is unstable and without additional help will fall over. It can be suspended stably in this inverted position by using a control system to monitor the angle of the pole and move the pivot point horizontally back under the center of mass when it starts to fall over, keeping it balanced. The inverted pendulum is a classic problem in dynamics and control theory and is used as a benchmark for testing control strategies. It is often implemented with the pivot point mounted on a cart that can move horizontally under control of an electronic servo system as shown in the photo; this is called a cart and pole apparatus. Most applications limit the pendulum to 1 degree of freedom by affixing the pole to an axis of rotation. Whereas a normal pendulum is stable when hanging downwards, an inverted pendulum is inherently unstable, and must be actively balanced in order to remain upright; this can be done either by applying a torque at the pivot point, by moving the pivot point horizontally as part of a feedback system, changing the rate of rotation of a mass mounted on the pendulum on an axis parallel to the pivot axis and thereby generating a net torque on the pendulum, or by oscillating the pivot point vertically. A simple demonstration of moving the pivot point in a feedback system is achieved by balancing an upturned broomstick on the end of one's finger. A second type of inverted pendulum is a tiltmeter for tall structures, which consists of a wire anchored to the bottom of the foundation and attached to a float in a pool of oil at the top of the structure that has devices for measuring movement of the neutral position of the float away from its original position. Overview A pendulum with its bob hanging directly below the support pivot is at a stable equilibrium point; there is no torque on the pendulum so it will remain motionless, and if displaced from this position will experien
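To make the balancing idea concrete, here is a minimal simulation sketch in Python. It assumes a deliberately simplified model in which the cart's mass and friction are ignored and the pivot is simply commanded a horizontal acceleration a, so that theta'' = (g*sin(theta) - a*cos(theta)) / L with theta measured from the upright position; the PD gains are illustrative values, not tuned for any real apparatus.

```python
# Minimal sketch: stabilizing an inverted pendulum by accelerating the pivot
# horizontally with PD feedback (simplified point-mass-on-a-rod model).
import math

g, L = 9.81, 1.0            # gravity [m/s^2], pendulum length [m]
kp, kd = 40.0, 8.0          # hypothetical PD gains; kp > g and kd > 0 stabilize the linearized model
theta, omega = 0.2, 0.0     # initial tilt [rad] and angular velocity [rad/s]
dt = 0.001

for _ in range(5000):       # 5 s of simulated time, explicit Euler integration
    a = kp * theta + kd * omega                          # control: pivot acceleration
    alpha = (g * math.sin(theta) - a * math.cos(theta)) / L
    omega += alpha * dt
    theta += omega * dt

print(f"tilt after 5 s: {theta:+.4f} rad")   # decays toward 0 with feedback;
                                             # set kp = kd = 0 to watch it fall over instead
```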
https://en.wikipedia.org/wiki/Dactyly
In biology, dactyly is the arrangement of digits (fingers and toes) on the hands, feet, or sometimes wings of a tetrapod animal. It comes from the Greek word () = "finger". Sometimes the ending "-dactylia" is used. The derived adjectives end with "-dactyl" or "-dactylous". As a normal feature Pentadactyly Pentadactyly (from Greek "five") is the condition of having five digits on each limb. It is traditionally believed that all living tetrapods are descended from an ancestor with a pentadactyl limb, although many species have now lost or transformed some or all of their digits by the process of evolution. However, this viewpoint was challenged by Stephen Jay Gould in his 1991 essay "Eight (Or Fewer) Little Piggies", where he pointed out polydactyly in early tetrapods and described the specializations of digit reduction. Despite the individual variations listed below, the relationship is to the original five-digit model. In reptiles, the limbs are pentadactylous. Dogs and cats have tetradactylous paws but the dewclaw makes them pentadactyls. Tetradactyly Tetradactyly (from Greek "four") is the condition of having four digits on a limb, as in many birds, amphibians, and theropod dinosaurs. Tridactyly Tridactyly (from Greek "three") is the condition of having three digits on a limb, as in the rhinoceros and ancestors of the horse such as Protohippus and Hipparion. These all belong to the Perissodactyla. Some birds also have three toes, including emus, bustards, and quail. Didactyly Didactyly (from Greek "two") or bidactyly is the condition of having two digits on each limb, as in the Hypertragulidae and two-toed sloth, Choloepus didactylus. In humans this name is used for an abnormality in which the middle digits are missing, leaving only the thumb and fifth finger, or big and little toes. Cloven-hoofed mammals (such as deer, sheep and cattle – Artiodactyla) have only two digits, as do ostriches. Monodactyly Monodactyly (from Greek "one") is the co
https://en.wikipedia.org/wiki/Decision-making
In psychology, decision-making (also spelled decision making and decisionmaking) is regarded as the cognitive process resulting in the selection of a belief or a course of action among several possible alternative options. It could be either rational or irrational. The decision-making process is a reasoning process based on assumptions of values, preferences and beliefs of the decision-maker. Every decision-making process produces a final choice, which may or may not prompt action. Research about decision-making is also published under the label problem solving, particularly in European psychological research. Overview Decision-making can be regarded as a problem-solving activity yielding a solution deemed to be optimal, or at least satisfactory. It is therefore a process which can be more or less rational or irrational and can be based on explicit or tacit knowledge and beliefs. Tacit knowledge is often used to fill the gaps in complex decision-making processes. Usually, both of these types of knowledge, tacit and explicit, are used together in the decision-making process. Human performance has been the subject of active research from several perspectives: Psychological: examining individual decisions in the context of a set of needs, preferences and values the individual has or seeks. Cognitive: the decision-making process is regarded as a continuous process integrated in the interaction with the environment. Normative: the analysis of individual decisions concerned with the logic of decision-making, or communicative rationality, and the invariant choice it leads to. A major part of decision-making involves the analysis of a finite set of alternatives described in terms of evaluative criteria. Then the task might be to rank these alternatives in terms of how attractive they are to the decision-maker(s) when all the criteria are considered simultaneously. Another task might be to find the best alternative or to determine the relative total priority of each a
https://en.wikipedia.org/wiki/Open%20system%20%28systems%20theory%29
An open system is a system that has external interactions. Such interactions can take the form of information, energy, or material transfers into or out of the system boundary, depending on the discipline which defines the concept. An open system is contrasted with the concept of an isolated system which exchanges neither energy, matter, nor information with its environment. An open system is also known as a flow system. The concept of an open system was formalized within a framework that enabled one to interrelate the theory of the organism, thermodynamics, and evolutionary theory. This concept was expanded upon with the advent of information theory and subsequently systems theory. Today the concept has its applications in the natural and social sciences. In the natural sciences an open system is one whose border is permeable to both energy and mass. By contrast, a closed system is permeable to energy but not to matter. The definition of an open system assumes that there are supplies of energy that cannot be depleted; in practice, this energy is supplied from some source in the surrounding environment, which can be treated as infinite for the purposes of study. One type of open system is the radiant energy system, which receives its energy from solar radiation – an energy source that can be regarded as inexhaustible for all practical purposes. Social sciences In the social sciences an open system is a process that exchanges material, energy, people, capital and information with its environment. French/Greek philosopher Kostas Axelos argued that seeing the "world system" as inherently open (though unified) would solve many of the problems in the social sciences, including that of praxis (the relation of knowledge to practice), so that various social scientific disciplines would work together rather than create monopolies whereby the world appears only sociological, political, historical, or psychological. Axelos argues that theorizing a closed system contribut
https://en.wikipedia.org/wiki/Thermodynamic%20equilibrium
Thermodynamic equilibrium is an axiomatic concept of thermodynamics. It is an internal state of a single thermodynamic system, or a relation between several thermodynamic systems connected by more or less permeable or impermeable walls. In thermodynamic equilibrium, there are no net macroscopic flows of matter nor of energy within a system or between systems. In a system that is in its own state of internal thermodynamic equilibrium, no macroscopic change occurs. Systems in mutual thermodynamic equilibrium are simultaneously in mutual thermal, mechanical, chemical, and radiative equilibria. Systems can be in one kind of mutual equilibrium, while not in others. In thermodynamic equilibrium, all kinds of equilibrium hold at once and indefinitely, until disturbed by a thermodynamic operation. In a macroscopic equilibrium, perfectly or almost perfectly balanced microscopic exchanges occur; this is the physical explanation of the notion of macroscopic equilibrium. A thermodynamic system in a state of internal thermodynamic equilibrium has a spatially uniform temperature. Its intensive properties, other than temperature, may be driven to spatial inhomogeneity by an unchanging long-range force field imposed on it by its surroundings. In systems that are at a state of non-equilibrium there are, by contrast, net flows of matter or energy. If such changes can be triggered to occur in a system in which they are not already occurring, the system is said to be in a meta-stable equilibrium. Though not a widely named "law," it is an axiom of thermodynamics that there exist states of thermodynamic equilibrium. The second law of thermodynamics states that when an isolated body of material starts from an equilibrium state, in which portions of it are held at different states by more or less permeable or impermeable partitions, and a thermodynamic operation removes or makes the partitions more permeable, then it spontaneously reaches its own new state of internal thermodynamic equ
https://en.wikipedia.org/wiki/Josef%20%C4%8Capek
Josef Čapek (; 23 March 1887 – April 1945) was a Czech artist who was best known as a painter, but who was also noted as a writer and a poet. He invented the word "robot", which was introduced into literature by his brother, Karel Čapek. Life Čapek was born in Hronov, Bohemia (Austria-Hungary, later Czechoslovakia, now the Czech Republic) in 1887. First a painter of the Cubist school, he later developed his own playful, minimalist style. He collaborated with his brother Karel on a number of plays and short stories; on his own, he wrote the utopian play Land of Many Names and several novels, as well as critical essays in which he argued for the art of the unconscious, of children, and of 'savages'. He was named by his brother as the true inventor of the term robot. As a cartoonist, he worked for Lidové Noviny, a newspaper based in Prague. His illustrated stories Povídání o Pejskovi a Kočičce (English translation as The Adventures of Puss and Pup) are considered classics of Czech children's literature. Death Due to his critical attitude towards national socialism and Adolf Hitler, he was arrested after the German invasion of Czechoslovakia in 1939. He wrote Poems from a Concentration Camp in the Bergen-Belsen concentration camp, where he died in 1945. In June 1945 Rudolf Margolius, accompanied by Čapek's wife Jarmila Čapková, went to Bergen-Belsen to search for him. His remains were never found. In 1948 the court officially set the date of his death as 30 April 1947. Selected literary works Lelio, 1917 Ze života hmyzu (Pictures from the Insects' Life), 1921 – with Karel Čapek Povídání o pejskovi a kočičce (The Adventures of Puss and Pup), 1929 Stín kapradiny, 1930, novel Kulhavý poutník, essays, 1936 Land of Many Names Básně z koncentračního tabora (Poems from a Concentration Camp), published posthumously 1946 Adam Stvořitel (Adam the Creator) – with Karel Čapek Dášeňka, čili život štěněte (Dashenka, consequently the life of a Puppy) – with Karel Čapek, illustra
https://en.wikipedia.org/wiki/Direct%20Client-to-Client
Direct Client-to-Client (DCC) (originally Direct Client Connection) is an IRC-related sub-protocol enabling peers to interconnect using an IRC server for handshaking in order to exchange files or perform non-relayed chats. Once established, a typical DCC session runs independently from the IRC server. Originally designed to be used with ircII it is now supported by many IRC clients. Some peer-to-peer clients on napster-protocol servers also have DCC send/get capability, including TekNap, SunshineUN and Lopster. A variation of the DCC protocol called SDCC (Secure Direct Client-to-Client), also known as DCC SCHAT supports encrypted connections. An RFC specification on the use of DCC does not exist. DCC connections can be initiated in two different ways: The most common way is to use CTCP to initiate a DCC session. The CTCP is sent from one user, over the IRC network, to another user. Another way to initiate a DCC session is for the client to connect directly to the DCC server. Using this method, no traffic will go across the IRC network (the parties involved do not need to be connected to an IRC network in order to initiate the DCC connection). History ircII was the first IRC client to implement the CTCP and DCC protocols. The CTCP protocol was implemented by Michael Sandrof in 1990 for ircII version 2.1. The DCC protocol was implemented by Troy Rollo in 1991 for version 2.1.2, but was never intended to be portable to other IRC clients. Common DCC applications DCC CHAT The CHAT service enables users to chat with each other over a DCC connection. The traffic will go directly between the users, and not over the IRC network. When compared to sending messages normally, this reduces IRC network load, allows sending of larger amounts of text at once, due to the lack of flood control, and makes the communication more secure by not exposing the message to the IRC servers (however, the message is still in plaintext). DCC CHAT is normally initiated using a CTCP handshake
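As a rough illustration of how a client initiates one of these sessions, the sketch below (Python) composes the CTCP message conventionally used for DCC CHAT. The payload format assumed here ("DCC CHAT chat <ip> <port>", with the IPv4 address packed as an unsigned 32-bit integer) reflects common client behaviour rather than a formal specification, since, as noted above, no RFC for DCC exists.

```python
# Hedged sketch of composing a CTCP DCC CHAT offer; details vary between clients.
import socket
import struct

def dcc_chat_ctcp(nick: str, ip: str, port: int) -> bytes:
    ip_as_int = struct.unpack("!I", socket.inet_aton(ip))[0]   # e.g. 192.0.2.1 -> 3221225985
    payload = f"DCC CHAT chat {ip_as_int} {port}"
    # CTCP payloads are delimited with 0x01 bytes inside an ordinary PRIVMSG.
    return f"PRIVMSG {nick} :\x01{payload}\x01\r\n".encode()

print(dcc_chat_ctcp("friend", "192.0.2.1", 5000))
```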
https://en.wikipedia.org/wiki/Metalworking
Metalworking is the process of shaping and reshaping metals to create useful objects, parts, assemblies, and large scale structures. As a term it covers a wide and diverse range of processes, skills, and tools for producing objects on every scale: from huge ships, buildings, and bridges down to precise engine parts and delicate jewelry. The historical roots of metalworking predate recorded history; its use spans cultures, civilizations and millennia. It has evolved from shaping soft, native metals like gold with simple hand tools, through the smelting of ores and hot forging of harder metals like iron, up to highly technical modern processes such as machining and welding. It has been used as an industry, a driver of trade, individual hobbies, and in the creation of art; it can be regarded as both a science and a craft. Modern metalworking processes, though diverse and specialized, can be categorized into one of three broad areas known as forming, cutting, or joining processes. Modern metalworking workshops, typically known as machine shops, hold a wide variety of specialized or general-use machine tools capable of creating highly precise, useful products. Many simpler metalworking techniques, such as blacksmithing, are no longer economically competitive on a large scale in developed countries; some of them are still in use in less developed countries, for artisanal or hobby work, or for historical reenactment. Prehistory The oldest archaeological evidence of copper mining and working was the discovery of a copper pendant in northern Iraq from 8,700 BCE. The earliest substantiated and dated evidence of metalworking in the Americas was the processing of copper in Wisconsin, near Lake Michigan. Copper was hammered until it became brittle, then heated so it could be worked further. In America, this technology is dated to about 4000–5000 BCE. The oldest gold artifacts in the world come from the Bulgarian Varna Necropolis and date from 4450 BCE. Not all metal required
https://en.wikipedia.org/wiki/Centericq
Centericq is a text-mode menu- and window-driven instant messaging interface that supports the ICQ, Yahoo!, AIM, MSN, IRC, XMPP, LiveJournal, and Gadu-Gadu protocols. Overview Centericq allows you to send, receive, and forward messages, URLs, SMSes (both through the ICQ server and email gateways supported by Mirabilis), contacts, and email express messages, and it has many other useful features. It is known to work on the Linux, FreeBSD, NetBSD, OpenBSD, Solaris, Windows and macOS/Darwin operating systems. Its heyday was in the first half-decade of the 2000s, with reviews appearing in Softpedia and the Czech online magazines ABC Linux and Linux.cz. It was recommended in a 2004 OSNews article on console applications, and in a similar article in the Russian magazine Computerra. Two tutorial articles appeared in the German magazine LinuxUser in 2001 and 2004; the latter article appeared in the English version of Linux Magazine. It was included in a 2002 round-up of ICQ clients in the Russian XAKEP magazine, and in a 2005 round-up review of IRC clients in Free Software Magazine. The FSM reviewer singled out Centericq for its window-based interface built on top of the usual curses library, which presents a great deal of information but can look cluttered on smaller terminal windows, including the standard 80 by 25 terminal. It found that IRC support was "excellent" due to support for multiple servers and channels and the ease of switching between them in the “windowed” interface. In September 2002, Steven J. Vaughan-Nichols found it "the greatest of all console IM clients" due to its "excellent interface and a huge number of features and configuration options" in a category review on Freshmeat. In 2005 it was reviewed in The Unofficial Apple Weblog; despite support for .mac accounts, the reviewer noted "annoying key combos" required to access the menus, because on Mac OS X the usual function key assignments of Centericq could not be used. Though he could not access the MSN network, he concluded: "A
https://en.wikipedia.org/wiki/Has-a
In database design, object-oriented programming and design, has-a (has_a or has a) is a composition relationship where one object (often called the constituted object, or part/constituent/member object) "belongs to" (is part or member of) another object (called the composite type), and behaves according to the rules of ownership. In simple terms, a has-a relationship means that one object holds another object as a member field. Multiple has-a relationships combine to form a possessive hierarchy. Related concepts "Has-a" is to be contrasted with an is-a (is_a or is a) relationship, which constitutes a taxonomic hierarchy (subtyping). The most logical relationship between an object and its subordinate is not always clearly has-a or is-a; confusion over such decisions has necessitated the creation of these metalinguistic terms. A good example of the has-a relationship is containers in the C++ STL. To summarize the relations, we have hypernym-hyponym (supertype-subtype) relations between types (classes) defining a taxonomic hierarchy, where for an inheritance relation: a hyponym (subtype, subclass) has a type-of (is-a) relationship with its hypernym (supertype, superclass); holonym-meronym (whole/entity/container-part/constituent/member) relations between types (classes) defining a possessive hierarchy, where for an aggregation (i.e. without ownership) relation: a holonym (whole) has a has-a relationship with its meronym (part), for a composition (i.e. with ownership) relation: a meronym (constituent) has a part-of relationship with its holonym (entity), for a containment relation: a meronym (member) has a member-of relationship with its holonym (container); concept-object (type-token) relations between types (classes) and objects (instances), where a token (object) has an instance-of relationship with its type (class). Examples Entity–relationship model In databases, has-a relationships are usually represented in an Entity–relationship
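A minimal sketch of the distinction, using hypothetical Python classes: Car has an Engine (composition, a member field), while ElectricCar is a Car (inheritance, a subtype).

```python
# Illustrative names only: has-a (composition) versus is-a (inheritance).
class Engine:
    def start(self) -> str:
        return "engine started"

class Car:                         # Car *has an* Engine (member field / constituent object)
    def __init__(self) -> None:
        self.engine = Engine()     # composition: the Car owns its Engine

    def start(self) -> str:
        return self.engine.start() # work is delegated to the member object

class ElectricCar(Car):            # ElectricCar *is a* Car (subtype in the taxonomic hierarchy)
    pass

print(Car().start())                    # has-a in action
print(isinstance(ElectricCar(), Car))   # is-a: True
```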
https://en.wikipedia.org/wiki/FileMaker
FileMaker is a cross-platform relational database application from Claris International, a subsidiary of Apple Inc. It integrates a database engine with a graphical user interface (GUI) and security features, allowing users to modify a database by dragging new elements into layouts, screens, or forms. It is available in desktop, server, iOS and web-delivery configurations. FileMaker Pro, the desktop app, evolved from a DOS application, originally called simply FileMaker, but was then developed primarily for the Apple Macintosh and released in April 1985. It was rebranded as FileMaker Pro in 1990. Since 1992 it has been available for Microsoft Windows and for the classic Mac OS and macOS, and can be used in a cross-platform environment. FileMaker Go, the mobile app, was released for iOS devices in July 2010. FileMaker Server allows centralized hosting of apps which can be used by clients running the desktop or mobile apps. It is also available hosted by Claris, called FileMaker Cloud. History FileMaker began as an MS-DOS-based computer program named Nutshell – developed by Nashoba Systems of Concord, Massachusetts, in the early 1980s. Nutshell was distributed by Leading Edge, an electronics marketing company that had recently started selling IBM PC-compatible computers. With the introduction of the Macintosh, Nashoba combined the basic data engine with a new forms-based graphical user interface (GUI). Leading Edge was not interested in newer versions, preferring to remain a DOS-only vendor, and kept the Nutshell name. Nashoba found another distributor, Forethought Inc., and introduced the program on the Macintosh platform as FileMaker in April 1985. When Apple introduced the Macintosh Plus in 1986 the next version of FileMaker was named FileMaker Plus to reflect the new model's name. Forethought was purchased by Microsoft, which was then introducing their PowerPoint product that became part of Microsoft Office. Microsoft had introduced its own database applica
https://en.wikipedia.org/wiki/Finite%20geometry
A finite geometry is any geometric system that has only a finite number of points. The familiar Euclidean geometry is not finite, because a Euclidean line contains infinitely many points. A geometry based on the graphics displayed on a computer screen, where the pixels are considered to be the points, would be a finite geometry. While there are many systems that could be called finite geometries, attention is mostly paid to the finite projective and affine spaces because of their regularity and simplicity. Other significant types of finite geometry are finite Möbius or inversive planes and Laguerre planes, which are examples of a general type called Benz planes, and their higher-dimensional analogs such as higher finite inversive geometries. Finite geometries may be constructed via linear algebra, starting from vector spaces over a finite field; the affine and projective planes so constructed are called Galois geometries. Finite geometries can also be defined purely axiomatically. Most common finite geometries are Galois geometries, since any finite projective space of dimension three or greater is isomorphic to a projective space over a finite field (that is, the projectivization of a vector space over a finite field). However, dimension two has affine and projective planes that are not isomorphic to Galois geometries, namely the non-Desarguesian planes. Similar results hold for other kinds of finite geometries. Finite planes The following remarks apply only to finite planes. There are two main kinds of finite plane geometry: affine and projective. In an affine plane, the normal sense of parallel lines applies. In a projective plane, by contrast, any two lines intersect at a unique point, so parallel lines do not exist. Both finite affine plane geometry and finite projective plane geometry may be described by fairly simple axioms. Finite affine planes An affine plane geometry is a nonempty set X (whose elements are called "points"), along with a nonempty
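As a concrete illustration of the smallest case, the sketch below (Python, names illustrative) enumerates the affine plane of order 2 built from the vector space F_2^2: four points, six two-point lines, and three parallel classes.

```python
# Minimal sketch: the affine plane AG(2, 2) over the field with two elements.
from itertools import product, combinations

points = list(product((0, 1), repeat=2))            # the 4 points of F_2^2

# In AG(2, 2) every 2-subset of points is a line (each line is a coset of a
# 1-dimensional subspace), giving 6 lines of 2 points each.
lines = [frozenset(pair) for pair in combinations(points, 2)]
print(len(points), "points,", len(lines), "lines")  # 4 points, 6 lines

# Parallel lines are disjoint; here each line has exactly one line parallel to it,
# so the 6 lines fall into 3 parallel classes of 2 lines each.
for line in lines:
    parallel = [m for m in lines if m != line and not (m & line)]
    assert len(parallel) == 1
print("each line has exactly one parallel line -> 3 parallel classes")
```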
https://en.wikipedia.org/wiki/Game%20tree
In the context of combinatorial game theory, which typically studies sequential games with perfect information, a game tree is a graph representing all possible game states within such a game. Such games include well-known ones such as chess, checkers, Go, and tic-tac-toe. The game tree can be used to measure the complexity of a game, as it represents all the possible ways a game can pan out. Because the game trees of complex games such as chess are enormous, algorithms designed to play this class of games use partial game trees, which makes computation feasible on modern computers. Various methods exist to solve game trees. If a complete game tree can be generated, a deterministic algorithm, such as backward induction or retrograde analysis, can be used. Randomized methods such as Monte Carlo tree search (MCTS) and depth-limited minimax algorithms can be used in cases where generating a complete game tree is not feasible. Understanding the game tree A game tree can be thought of as a tool for analyzing adversarial games: it determines the actions a player should take in order to win the game. In game theory, a game tree is a directed graph whose nodes are positions in a game (e.g., the arrangement of the pieces in a board game) and whose edges are moves (e.g., moving pieces from one position on the board to another). The complete game tree for a game is the game tree starting at the initial position and containing all possible moves from each position; the complete tree is the same tree as that obtained from the extensive-form game representation. More specifically, the complete game tree serves as a canonical description of the game in game theory, and it clearly expresses many important aspects: the sequence of actions the players may take, their choices at each decision point, the information each player has about the other players' actions when making a decision, and the payoffs of all possible game outcomes. The diagram shows the first two levels, or plies, in the game tree for tic-tac-to
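To ground the idea of searching such a tree, here is a minimal minimax sketch in Python over a toy, explicitly listed two-ply tree; the tree shape and payoff values are made up purely for illustration.

```python
# Minimal sketch of minimax search over an explicit game tree; leaves hold
# payoffs for the maximizing player, inner nodes are lists of child subtrees.
def minimax(node, maximizing: bool) -> int:
    if isinstance(node, int):                 # leaf: terminal game value
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A toy 2-ply tree: the root player picks a branch, the opponent then replies.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree, maximizing=True))         # -> 3: the best guaranteed outcome
```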
https://en.wikipedia.org/wiki/Thales%27s%20theorem
In geometry, Thales's theorem states that if A, B, and C are distinct points on a circle where the line AC is a diameter, the angle ∠ABC is a right angle. Thales's theorem is a special case of the inscribed angle theorem and is mentioned and proved as part of the 31st proposition in the third book of Euclid's Elements. It is generally attributed to Thales of Miletus, but it is sometimes attributed to Pythagoras. History Babylonian mathematicians knew this for special cases before Greek mathematicians proved it. Thales of Miletus (early 6th century BC) is traditionally credited with proving the theorem; however, even by the 5th century BC there was nothing extant of Thales' writing, and inventions and ideas were attributed to men of wisdom such as Thales and Pythagoras by later doxographers based on hearsay and speculation. Reference to Thales was made by Proclus (5th century AD), and by Diogenes Laërtius (3rd century AD) documenting Pamphila's (1st century AD) statement that Thales "was the first to inscribe in a circle a right-angle triangle". Thales was claimed to have traveled to Egypt and Babylonia, where he is supposed to have learned about geometry and astronomy and thence brought their knowledge to the Greeks, along the way inventing the concept of geometric proof and proving various geometric theorems. However, there is no direct evidence for any of these claims, and they were most likely invented as speculative rationalizations. Modern scholars believe that Greek deductive geometry as found in Euclid's Elements was not developed until the 4th century BC, and any geometric knowledge Thales may have had would have been observational. The theorem appears in Book III of Euclid's Elements (c. 300 BC) as proposition 31: "In a circle the angle in the semicircle is right, that in a greater segment less than a right angle, and that in a less segment greater than a right angle; further the angle of the greater segment is greater than a right angle, and the angle of the less segment i
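For completeness, the classical argument can be written out compactly; the LaTeX sketch below follows the standard proof via the isosceles triangles formed by the radii, with O the centre of the circle.

```latex
% Sketch of the standard proof of Thales's theorem.
% O is the centre, AC a diameter, B a third point on the circle, so OA = OB = OC.
\begin{align*}
  \angle OAB = \angle OBA = \alpha,\quad \angle OCB = \angle OBC = \beta
    &\qquad \text{(isosceles triangles } OAB \text{ and } OBC\text{)}\\
  \alpha + (\alpha + \beta) + \beta = 180^{\circ}
    &\qquad \text{(angle sum in } \triangle ABC\text{)}\\
  \Rightarrow\ \angle ABC = \alpha + \beta = 90^{\circ}. &
\end{align*}
```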
https://en.wikipedia.org/wiki/Synthetic%20geometry
Synthetic geometry (sometimes referred to as axiomatic geometry or even pure geometry) is geometry without the use of coordinates. It relies on the axiomatic method for proving all results from a few basic properties, initially called postulates and at present called axioms. The term "synthetic geometry" was coined only after the 17th century, following the introduction by René Descartes of the coordinate method, which was called analytic geometry. So the term "synthetic geometry" was introduced to refer to the older methods that were, before Descartes, the only known ones. According to Felix Klein: "Synthetic geometry is that which studies figures as such, without recourse to formulae, whereas analytic geometry consistently makes use of such formulae as can be written down after the adoption of an appropriate system of coordinates." The first systematic approach for synthetic geometry is Euclid's Elements. However, it appeared at the end of the 19th century that Euclid's postulates were not sufficient for characterizing geometry. The first complete axiom system for geometry was given only at the end of the 19th century by David Hilbert. At the same time, it appeared that both synthetic methods and analytic methods can be used to build geometry. The fact that the two approaches are equivalent was proved by Emil Artin in his book Geometric Algebra. Because of this equivalence, the distinction between synthetic and analytic geometry is no longer in use, except at an elementary level, or for geometries that are not related to any sort of numbers, such as some finite geometries and non-Desarguesian geometry. Logical synthesis The process of logical synthesis begins with some arbitrary but definite starting point. This starting point is the introduction of primitive notions or primitives and axioms about these primitives: Primitives are the most basic ideas. Typically they include both objects and relationships. In geometry, the objects are things such as points, lines and
https://en.wikipedia.org/wiki/MESI%20protocol
The MESI protocol is an invalidate-based cache coherence protocol, and is one of the most common protocols that support write-back caches. It is also known as the Illinois protocol due to its development at the University of Illinois at Urbana-Champaign. Write-back caches can save considerable bandwidth that is generally wasted with a write-through cache. There is always a dirty state present in write-back caches that indicates that the data in the cache is different from that in main memory. The Illinois protocol requires a cache-to-cache transfer on a miss if the block resides in another cache. This protocol reduces the number of main memory transactions with respect to the MSI protocol. This marks a significant improvement in performance. States The letters in the acronym MESI represent four exclusive states that a cache line can be marked with (encoded using two additional bits): Modified (M) The cache line is present only in the current cache, and is dirty - it has been modified (M state) from the value in main memory. The cache is required to write the data back to main memory at some time in the future, before permitting any other read of the (no longer valid) main memory state. The write-back changes the line to the Shared state (S). Exclusive (E) The cache line is present only in the current cache, but is clean - it matches main memory. It may be changed to the Shared state at any time, in response to a read request. Alternatively, it may be changed to the Modified state when writing to it. Shared (S) Indicates that this cache line may be stored in other caches of the machine and is clean - it matches the main memory. The line may be discarded (changed to the Invalid state) at any time. Invalid (I) Indicates that this cache line is invalid (unused). For any given pair of caches, the permitted states of a given cache line are as follows: When the block is marked M (modified) or E (exclusive), the copies of the block in other caches are marked as I (Invalid).
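The pairwise rule in the last sentence can be captured in a few lines; the Python sketch below is only an illustrative encoding of which state combinations two caches may simultaneously hold for the same line, not part of any real coherence implementation.

```python
# Minimal sketch of MESI pairwise compatibility: a line held in M or E by one
# cache must be Invalid everywhere else; S may coexist with S or I.
COMPATIBLE = {("M", "I"), ("E", "I"), ("S", "S"), ("S", "I"), ("I", "I")}

def allowed(state_a: str, state_b: str) -> bool:
    return (state_a, state_b) in COMPATIBLE or (state_b, state_a) in COMPATIBLE

assert allowed("M", "I") and allowed("S", "S")
assert not allowed("M", "S") and not allowed("E", "E")
print("pairwise MESI compatibility checks passed")
```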
https://en.wikipedia.org/wiki/Procmail
procmail is an email server software component, specifically a message delivery agent (MDA). It was one of the earliest mail filter programs. It is typically used in Unix-like mail systems, using the mbox and Maildir storage formats. procmail was first developed in 1990, by Stephen R. van den Berg. Philip Guenther took over maintainership for a number of years, but relinquished the role in 2014. The software remained unmaintained for several years, and was believed to be defunct. In May 2020, Stephen van den Berg resumed maintenance. The program has since seen multiple releases and bug fixes. Uses The most common use case for procmail is to filter mail into different mailboxes, based on criteria such as sender address, subject keywords, and/or mailing list address. Another use is to let procmail call an external spam filter program, such as SpamAssassin, which allows spam to be filtered or deleted. The procmail developers have built a mailing list manager called SmartList on top of procmail. Procmail is an early example of a mail filtering tool and language. It is a data-driven programming language, similar to earlier line-oriented languages such as sed and AWK. Operation procmail reads mail messages given to it on standard input, delivering or otherwise disposing of each one. procmail is typically not invoked directly by the user. Rather, some other mail program will call upon procmail to deliver a message according to the user's wishes. Message transfer agents (MTAs), such as Sendmail or Postfix, can be configured to use procmail to deliver all mail. A mail retrieval agent such as fetchmail can invoke procmail as needed. The companion tool formail allows procmail to be applied to mail already in a mailbox. procmail's behavior is controlled by a config file (by default .procmailrc in the user's home directory) containing one or more recipes, read in order. Each recipe consists of a mode, zero or more conditions, and an action.
https://en.wikipedia.org/wiki/Sinc%20filter
In signal processing, a sinc filter can refer to either a sinc-in-time filter whose impulse response is a sinc function and whose frequency response is rectangular, or to a sinc-in-frequency filter whose impulse response is rectangular and whose frequency response is a sinc function. Naming the filter according to the domain in which it resembles a sinc avoids confusion. If the domain is unspecified, sinc-in-time is often assumed, or the correct domain can usually be inferred from context. Sinc-in-time Sinc-in-time is an ideal filter that removes all frequency components above a given cutoff frequency, without attenuating lower frequencies, and has linear phase response. It may thus be considered a brick-wall filter or rectangular filter. Its impulse response is a sinc function in the time domain, $h(t) = 2B\,\operatorname{sinc}(2Bt)$, while its frequency response is a rectangular function, $H(f) = \operatorname{rect}\!\left(\frac{f}{2B}\right)$, where $B$ (representing its bandwidth) is an arbitrary cutoff frequency. Its impulse response is given by the inverse Fourier transform of its frequency response, $h(t) = \int_{-\infty}^{\infty} H(f)\, e^{i 2\pi f t}\, df = 2B\,\operatorname{sinc}(2Bt)$, where sinc is the normalized sinc function. Brick-wall filters An idealized electronic filter with full transmission in the pass band, complete attenuation in the stop band, and abrupt transitions is known colloquially as a "brick-wall filter" (in reference to the shape of the transfer function). The sinc-in-time filter is a brick-wall low-pass filter, from which brick-wall band-pass filters and high-pass filters are easily constructed. The lowpass filter with brick-wall cutoff at frequency $B_L$ has impulse response and transfer function given by $h_{lp}(t) = 2B_L\,\operatorname{sinc}(2B_L t)$ and $H_{lp}(f) = \operatorname{rect}\!\left(\frac{f}{2B_L}\right)$. The band-pass filter with lower band edge $B_L$ and upper band edge $B_H$ is just the difference of two such sinc-in-time filters (since the filters are zero phase, their magnitude responses subtract directly): $h_{bp}(t) = 2B_H\,\operatorname{sinc}(2B_H t) - 2B_L\,\operatorname{sinc}(2B_L t)$. The high-pass filter with lower band edge $B_H$ is just a transparent filter minus a sinc-in-time filter, which makes it clear that the Dirac delta function is the limit of a narrow-in-time sinc-in-time filter: $h_{hp}(t) = \delta(t) - 2B_H\,\operatorname{sinc}(2B_H t)$. U
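A small numerical sketch (Python/NumPy) of the sinc-in-time idea: a truncated sinc impulse response whose magnitude response is approximately 1 in the pass band and approximately 0 in the stop band. The sample rate, cutoff, and truncation length are arbitrary illustrative choices; truncation makes this only an approximation of the ideal brick-wall filter.

```python
# Minimal sketch: truncated sinc-in-time impulse response and its near-rectangular
# magnitude response, using NumPy's normalized sinc.
import numpy as np

fs, B = 1000.0, 100.0                  # sample rate and cutoff frequency [Hz]
t = np.arange(-200, 201) / fs          # symmetric time axis (finite window)
h = (2 * B / fs) * np.sinc(2 * B * t)  # ideal low-pass impulse response, sampled

H = np.abs(np.fft.rfft(h))
f = np.fft.rfftfreq(len(h), d=1 / fs)
print(f"gain near 10 Hz (pass band): {H[np.argmin(np.abs(f - 10))]:.3f}")   # ~1
print(f"gain near 200 Hz (stop band): {H[np.argmin(np.abs(f - 200))]:.3f}") # ~0
```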
https://en.wikipedia.org/wiki/RoboCup
RoboCup is an annual international robotics competition founded in 1996 by a group of university professors (including Hiroaki Kitano, Manuela M. Veloso, and Minoru Asada). The aim of the competition is to promote robotics and AI research by offering a publicly appealing – but formidable – challenge. The name RoboCup is a contraction of the competition's full name, "Robot World Cup Initiative" (based on the FIFA World Cup), but there are many other areas of competition such as "RoboCupRescue", "RoboCup@Home" and "RoboCupJunior". Peter Stone is the current president of RoboCup, and has been since 2019. The official goal of the project is: "By the middle of the 21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup." RoboCup leagues The contest currently has six major domains of competition, each with a number of leagues and sub-leagues. These include: RoboCup Soccer Standard Platform League (formerly Four Legged League) (Standard Platform League Homepage) Small Size League Middle Size League (Middle Size League Homepage) Simulation League 2D Soccer Simulation 3D Soccer Simulation Humanoid League RoboCup Rescue League Rescue Robot League Rescue Simulation League Rapidly Manufactured Robot Challenge RoboCup@Home, which debuted in 2006, focuses on the introduction of autonomous robots to human society RoboCup@Home Open Platform League (formerly just RoboCup@Home) Robocup@Home Domestic Standard Platform League RoboCup@Home Social Standard Platform League RoboCup Logistics League, which debuted in 2012, is an application-driven league inspired by the industrial scenario of a smart factory RoboCup@Work, which debuted in 2016, "targets the use of robots in work-related scenarios" RoboCupJunior Soccer League OnStage (formerly Dance) League Rescue League Rescue CoSpace League Each team is fully autonomous in all RoboCup leagues. Once the g
https://en.wikipedia.org/wiki/Tit%20for%20tat
Tit for tat is an English saying meaning "equivalent retaliation". It developed from "tip for tap", first recorded in 1558. It is also a highly effective strategy in game theory. An agent using this strategy will first cooperate, then subsequently replicate an opponent's previous action. If the opponent previously was cooperative, the agent is cooperative. If not, the agent is not. Game theory Tit-for-tat has been very successfully used as a strategy for the iterated prisoner's dilemma. The strategy was first introduced by Anatol Rapoport in Robert Axelrod's two tournaments, held around 1980. Notably, it was (on both occasions) both the simplest strategy and the most successful in direct competition. An agent using this strategy will first cooperate, then subsequently replicate an opponent's previous action. If the opponent previously was cooperative, the agent is cooperative. If not, the agent is not. This is similar to reciprocal altruism in biology. History The term developed most concretely in Northern Ireland, to describe an escalating eye-for-an-eye mentality amongst the Irish Republicans and Ulster Unionists. This can be seen in the Red Lion Pub bombing by the IRA, which was followed by the McGurk's Bar bombing; both targeted civilians. Specifically, attacks and massacres came to be structured around reciprocal killings in the Protestant and Catholic communities, even though both communities were generally uninterested in the violence. This sectarian mentality led the term "tit-for-tat bombings" to enter the common lexicon of Northern Irish society. Implications The success of the tit-for-tat strategy, which is largely cooperative despite its name emphasizing an adversarial nature, took many by surprise. Arrayed against strategies produced by various teams, it won both competitions. After the first competition, new strategies formulated specifically to combat tit-for-tat failed due to their negative interactions with each other; a successful strategy other than ti
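A minimal sketch of the strategy in code (Python), playing an iterated prisoner's dilemma against an always-defect opponent; the payoff numbers are the conventional T=5, R=3, P=1, S=0 values used in Axelrod-style tournaments, and the round count is arbitrary.

```python
# Minimal sketch: tit-for-tat versus always-defect in an iterated prisoner's dilemma.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):            # cooperate first, then copy the opponent's last move
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history, score_a, score_b = [], 0, 0
    for _ in range(rounds):
        a = strategy_a(history)
        b = strategy_b([(y, x) for x, y in history])   # opponent sees the mirrored history
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history.append((a, b))
    return score_a, score_b

print(play(tit_for_tat, always_defect))   # (9, 14): TFT concedes only the first round,
                                          # then matches defection with defection
```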
https://en.wikipedia.org/wiki/Prostate-specific%20antigen
Prostate-specific antigen (PSA), also known as gamma-seminoprotein or kallikrein-3 (KLK3), P-30 antigen, is a glycoprotein enzyme encoded in humans by the KLK3 gene. PSA is a member of the kallikrein-related peptidase family and is secreted by the epithelial cells of the prostate gland. PSA is produced for the ejaculate, where it liquefies semen in the seminal coagulum and allows sperm to swim freely. It is also believed to be instrumental in dissolving cervical mucus, allowing the entry of sperm into the uterus. PSA is present in small quantities in the serum of men with healthy prostates, but is often elevated in the presence of prostate cancer or other prostate disorders. PSA is not uniquely an indicator of prostate cancer, but may also detect prostatitis or benign prostatic hyperplasia. Medical diagnostic uses Prostate cancer Screening Clinical practice guidelines for prostate cancer screening vary and are controversial, in part due to uncertainty as to whether the benefits of screening ultimately outweigh the risks of overdiagnosis and overtreatment. In the United States, the Food and Drug Administration (FDA) has approved the PSA test for annual screening of prostate cancer in men of age 50 and older. The patient is required to be informed of the risks and benefits of PSA testing prior to performing the test. In the United Kingdom, the National Health Service (NHS) does not mandate, nor advise for PSA test, but allows patients to decide based on their doctor's advice. The NHS does not offer general PSA screening, for similar reasons. PSA levels between 4 and 10ng/mL (nanograms per milliliter) are considered to be suspicious, and consideration should be given to confirming the abnormal PSA with a repeat test. If indicated, prostate biopsy is performed to obtain a tissue sample for histopathological analysis. While PSA testing may help 1 in 1,000 avoid death due to prostate cancer, 4 to 5 in 1,000 would die from prostate cancer after 10 years even wit
https://en.wikipedia.org/wiki/Y-%CE%94%20transform
In electrical engineering, the Y-Δ transform, also written wye-delta and also known by many other names, is a mathematical technique to simplify the analysis of an electrical network. The name derives from the shapes of the circuit diagrams, which look respectively like the letter Y and the Greek capital letter Δ. This circuit transformation theory was published by Arthur Edwin Kennelly in 1899. It is widely used in the analysis of three-phase electric power circuits. The Y-Δ transform can be considered a special case of the star-mesh transform for three resistors. In mathematics, the Y-Δ transform plays an important role in the theory of circular planar graphs. Names The Y-Δ transform is known by a variety of other names, mostly based upon the two shapes involved, listed in either order. The Y, spelled out as wye, can also be called T or star; the Δ, spelled out as delta, can also be called triangle, Π (spelled out as pi), or mesh. Thus, common names for the transformation include wye-delta or delta-wye, star-delta, star-mesh, or T-Π. Basic Y-Δ transformation The transformation is used to establish equivalence for networks with three terminals. Where three elements terminate at a common node and none are sources, the node is eliminated by transforming the impedances. For equivalence, the impedance between any pair of terminals must be the same for both networks. The equations given here are valid for complex as well as real impedances. Complex impedance is a quantity measured in ohms which represents resistance as positive real numbers in the usual manner, and also represents reactance as positive and negative imaginary values. Equations for the transformation from Δ to Y The general idea is to compute the impedance $R_Y$ at a terminal node of the Y circuit with impedances $R'$, $R''$ to adjacent nodes in the Δ circuit by $R_Y = \frac{R' R''}{\sum R_\Delta}$, where $\sum R_\Delta$ is the sum of all impedances in the Δ circuit. This yields the specific formulas $R_1 = \frac{R_b R_c}{R_a + R_b + R_c}$, $R_2 = \frac{R_a R_c}{R_a + R_b + R_c}$, and $R_3 = \frac{R_a R_b}{R_a + R_b + R_c}$, where $R_a$, $R_b$, $R_c$ are the Δ branches opposite terminals 1, 2, 3 respectively. Equations for the transformation from Y to Δ The general idea is to compute an i
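A small helper makes the Δ → Y formulas above concrete; this Python sketch assumes the common labelling in which each Δ branch R_a, R_b, R_c lies opposite Y terminal 1, 2, 3 respectively, and works equally for complex impedances.

```python
# Minimal sketch of the Delta -> Y conversion for impedances.
def delta_to_wye(ra: complex, rb: complex, rc: complex):
    total = ra + rb + rc
    r1 = rb * rc / total    # Y impedance attached to terminal 1 (opposite branch R_a)
    r2 = ra * rc / total    # Y impedance attached to terminal 2 (opposite branch R_b)
    r3 = ra * rb / total    # Y impedance attached to terminal 3 (opposite branch R_c)
    return r1, r2, r3

# A symmetric Delta of 30-ohm branches becomes a symmetric Y of 10-ohm branches.
print(delta_to_wye(30, 30, 30))    # (10.0, 10.0, 10.0)
```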
https://en.wikipedia.org/wiki/Clifford%20A.%20Pickover
Clifford Alan Pickover (born August 15, 1957) is an American author, editor, and columnist in the fields of science, mathematics, science fiction, innovation, and creativity. For many years, he was employed at the IBM Thomas J. Watson Research Center in Yorktown, New York, where he was editor-in-chief of the IBM Journal of Research and Development. He has been granted more than 700 U.S. patents, is an elected Fellow for the Committee for Skeptical Inquiry, and is author of more than 50 books, translated into more than a dozen languages. Life, education and career He received his PhD in 1982 from Yale University's Department of Molecular Biophysics and Biochemistry, where he conducted research on X-ray scattering and protein structure. Pickover graduated first in his class from Franklin and Marshall College, after completing the four-year undergraduate program in three years. Pickover was elected as a Fellow for the Committee for Skeptical Inquiry for his "significant contributions to the general public's understanding of science, reason, and critical inquiry through their scholarship, writing, and work in the media." Other Fellows have included Carl Sagan and Isaac Asimov. He has been awarded almost 700 United States patents, and his The Math Book was winner of the 2011 Neumann Prize. He joined IBM at the Thomas J. Watson Research Center in 1982, as a member of the speech synthesis group and later worked on the design-automation workstations. For much of his career, Pickover has published technical articles in the areas of scientific visualization, computer art, and recreational mathematics. He is currently an associate editor for the scientific journal Computers and Graphics and is an editorial board member for Odyssey and Leonardo. He is also the Brain-Strain columnist for Odyssey magazine, and, for many years, he was the Brain-Boggler columnist for Discover magazine. Pickover has received more than 100 IBM invention achievement awards, three research divi
https://en.wikipedia.org/wiki/Efficiency
Efficiency is the often measurable ability to avoid wasting materials, energy, efforts, money, and time while performing a task. In a more general sense, it is the ability to do things well, successfully, and without waste. In more mathematical or scientific terms, it signifies the level of performance that uses the least amount of inputs to achieve the highest amount of output. It often specifically comprises the capability of a specific application of effort to produce a specific outcome with a minimum amount or quantity of waste, expense, or unnecessary effort. Efficiency refers to very different inputs and outputs in different fields and industries. In 2019, the European Commission said: "Resource efficiency means using the Earth's limited resources in a sustainable manner while minimising impacts on the environment. It allows us to create more with less and to deliver greater value with less input." Writer Deborah Stone notes that efficiency is "not a goal in itself. It is not something we want for its own sake, but rather because it helps us attain more of the things we value." Efficiency and effectiveness Efficiency is very often confused with effectiveness. In general, efficiency is a measurable concept, quantitatively determined by the ratio of useful output to total useful input. Effectiveness is the simpler concept of being able to achieve a desired result, which can be expressed quantitatively but does not usually require more complicated mathematics than addition. Efficiency can often be expressed as a percentage of the result that could ideally be expected, for example if no energy were lost due to friction or other causes, in which case 100% of fuel or other input would be used to produce the desired result. In some cases efficiency can be indirectly quantified with a non-percentage value, e.g. specific impulse. A common but confusing way of distinguishing between efficiency and effectiveness is the saying "Efficiency is doing things right, while
https://en.wikipedia.org/wiki/Two%20New%20Sciences
The Discourses and Mathematical Demonstrations Relating to Two New Sciences ( ) published in 1638 was Galileo Galilei's final book and a scientific testament covering much of his work in physics over the preceding thirty years. It was written partly in Italian and partly in Latin. After his Dialogue Concerning the Two Chief World Systems, the Roman Inquisition had banned the publication of any of Galileo's works, including any he might write in the future. After the failure of his initial attempts to publish Two New Sciences in France, Germany, and Poland, it was published by Lodewijk Elzevir who was working in Leiden, South Holland, where the writ of the Inquisition was of less consequence (see House of Elzevir). Fra Fulgenzio Micanzio, the official theologian of the Republic of Venice, had initially offered to help Galileo publish in Venice the new work, but he pointed out that publishing the Two New Sciences in Venice might cause Galileo unnecessary trouble; thus, the book was eventually published in Holland. Galileo did not seem to suffer any harm from the Inquisition for publishing this book since in January 1639, the book reached Rome's bookstores, and all available copies (about fifty) were quickly sold. Discourses was written in a style similar to Dialogues, in which three men (Simplicio, Sagredo, and Salviati) discuss and debate the various questions Galileo is seeking to answer. There is a notable change in the men, however; Simplicio, in particular, is no longer quite as simple-minded, stubborn and Aristotelian as his name implies. His arguments are representative of Galileo's own early beliefs, as Sagredo represents his middle period, and Salviati proposes Galileo's newest models. Introduction The book is divided into four days, each addressing different areas of physics. Galileo dedicates Two New Sciences to Lord Count of Noailles. In the First Day, Galileo addressed topics that were discussed in Aristotle's Physics and also the Aristotelian school
https://en.wikipedia.org/wiki/Antivirus%20software
Antivirus software (abbreviated to AV software), also known as anti-malware, is a computer program used to prevent, detect, and remove malware. Antivirus software was originally developed to detect and remove computer viruses, hence the name. However, with the proliferation of other malware, antivirus software started to protect against other computer threats. Some products also include protection from malicious URLs, spam, and phishing. History 1949–1980 period (pre-antivirus days) Although the roots of the computer virus date back as early as 1949, when the Hungarian scientist John von Neumann published the "Theory of self-reproducing automata", the first known computer virus appeared in 1971 and was dubbed the "Creeper virus". This computer virus infected Digital Equipment Corporation's (DEC) PDP-10 mainframe computers running the TENEX operating system. The Creeper virus was eventually deleted by a program created by Ray Tomlinson and known as "The Reaper". Some people consider "The Reaper" the first antivirus software ever written – it may be the case, but it is important to note that the Reaper was actually a virus itself specifically designed to remove the Creeper virus. The Creeper virus was followed by several other viruses. The first known that appeared "in the wild" was "Elk Cloner", in 1981, which infected Apple II computers. In 1983, the term "computer virus" was coined by Fred Cohen in one of the first ever published academic papers on computer viruses. Cohen used the term "computer virus" to describe programs that: "affect other computer programs by modifying them in such a way as to include a (possibly evolved) copy of itself." (note that a more recent definition of computer virus has been given by the Hungarian security researcher Péter Szőr: "a code that recursively replicates a possibly evolved copy of itself"). The first IBM PC compatible "in the wild" computer virus, and one of the first real widespread infections, was "Brain" in 1986. F
https://en.wikipedia.org/wiki/Mattel%20Aquarius
Aquarius is a home computer designed by Radofin and released by Mattel Electronics in 1983. Based on the Zilog Z80 microprocessor, the system has a rubber chiclet keyboard, 4K of RAM, and a subset of Microsoft BASIC in ROM. It connects to a television set for audiovisual output, and uses a cassette tape recorder for secondary data storage. A limited number of peripherals, such as a 40-column thermal printer, a 4-color printer/plotter, and a 300 baud modem, were released. The Aquarius was discontinued in October 1983, only a few months after it was launched. Development Looking to compete in the home computer market, Mattel Electronics turned to Radofin, the Hong Kong based manufacturer of their Intellivision consoles. Radofin had designed two computer systems. Internally they were known as "Checkers" and the more sophisticated "Chess". Mattel contracted for these to become the Aquarius and Aquarius II, respectively. Aquarius was announced in 1982 and finally released in June 1983, at a price of $160. Production ceased four months later because of poor sales. Mattel paid Radofin to take back the marketing rights. Four other companies: CEZAR Industries, CRIMAC, New Era Incentives, and Bentley Industries also marketed the unit and accessories. The Aquarius was often bundled with the Mini-Expander peripheral, which added game pads, an additional cartridge port for memory expansion, and the General Instrument AY-3-8910 sound chip. Other peripherals were the Data recorder, 40 column thermal printer, 4K and 16K RAM carts. Less common first party peripherals include a 300 baud cartridge modem, 32k RAM cart, 4 color plotter, and Quick Disk drive. Reception Although less expensive than the TI-99/4A and VIC-20, the Aquarius had comparatively weak graphics and limited memory. Internally, Mattel programmers adopted Bob Del Principe's mock slogan, "Aquarius -a system for the seventies". Of the 32 software titles Mattel announced for the unit, only 21 were released, most of
https://en.wikipedia.org/wiki/Autopoiesis
The term autopoiesis () refers to a system capable of producing and maintaining itself by creating its own parts. The term was introduced in the 1972 publication Autopoiesis and Cognition: The Realization of the Living by Chilean biologists Humberto Maturana and Francisco Varela to define the self-maintaining chemistry of living cells. The concept has since been applied to the fields of cognition, systems theory, architecture and sociology. Niklas Luhmann briefly introduced the concept of autopoiesis to organizational theory. Overview In their 1972 book Autopoiesis and Cognition, Chilean biologists Maturana and Varela described how they invented the word autopoiesis. They explained that, They described the "space defined by an autopoietic system" as "self-contained", a space that "cannot be described by using dimensions that define another space. When we refer to our interactions with a concrete autopoietic system, however, we project this system on the space of our manipulations and make a description of this projection." Meaning Autopoiesis was originally presented as a system description that was said to define and explain the nature of living systems. A canonical example of an autopoietic system is the biological cell. The eukaryotic cell, for example, is made of various biochemical components such as nucleic acids and proteins, and is organized into bounded structures such as the cell nucleus, various organelles, a cell membrane and cytoskeleton. These structures, based on an internal flow of molecules and energy, produce the components which, in turn, continue to maintain the organized bounded structure that gives rise to these components. An autopoietic system is to be contrasted with an allopoietic system, such as a car factory, which uses raw materials (components) to generate a car (an organized structure) which is something other than itself (the factory). However, if the system is extended from the factory to include components in the factory's
https://en.wikipedia.org/wiki/Noise-cancelling%20headphones
Noise-cancelling headphones are headphones that suppress unwanted ambient sounds using active noise control. This is distinct from passive headphones which, if they reduce ambient sounds at all, use techniques such as soundproofing. Noise cancellation makes it possible to listen to audio content without raising the volume excessively. It can also help a passenger sleep in a noisy vehicle such as an airliner. In the aviation environment, noise-cancelling headphones increase the signal-to-noise ratio significantly more than passive noise attenuating headphones or no headphones, making hearing important information such as safety announcements easier. Noise-cancelling headphones can improve listening enough to completely offset the effect of a distracting concurrent activity. Theory To cancel the lower-frequency portions of the noise, noise-cancelling headphones use active noise control or ANC. A microphone captures the targeted ambient sounds, and a small amplifier generates sound waves that are exactly out of phase with the undesired sounds. When the sound pressure of the noise wave is high, the cancelling wave is low (and vice versa). The opposite sound waves collide and are eliminated or "cancelled" (destructive interference). Most noise-cancelling headsets in the consumer market generate the noise-cancelling waveform in real time with analogue technology. In contrast, other active noise and vibration control products use soft real-time digital processing. In one experiment comparing lightweight earphones with commercial headphones and earphones, the lightweight earphones achieved better noise reduction than conventional headphones, and in-ear headphones reduced noise better than outer-ear headphones. Cancellation focuses on constant droning sounds like road noise and is less effective on short, sharp sounds like voices or breaking glass. It also is ineffective
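A minimal numerical sketch of the destructive-interference idea described above (illustrative only, not any manufacturer's ANC algorithm; the sample rate and the 120 Hz drone are assumed values):

```python
import numpy as np

# Active noise control adds an "anti-noise" wave that is equal in amplitude
# and opposite in phase to the measured ambient noise, so the two cancel.
fs = 48_000                                    # assumed sample rate, Hz
t = np.arange(0, 0.02, 1 / fs)                 # 20 ms of signal

noise = 0.5 * np.sin(2 * np.pi * 120 * t)      # low-frequency drone (e.g. engine hum)
anti_noise = -noise                            # ideal cancelling wave

print(np.max(np.abs(noise + anti_noise)))      # 0.0 -- perfect cancellation in the ideal case

# A small timing error (here: anti-noise one sample late) leaves a residual,
# which is why cancellation works best on slowly varying, low-frequency noise.
late = -0.5 * np.sin(2 * np.pi * 120 * (t - 1 / fs))
print(np.max(np.abs(noise + late)))            # small but nonzero residual
```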
https://en.wikipedia.org/wiki/M.U.L.E.
M.U.L.E. is a 1983 multiplayer video game written for the Atari 8-bit family of home computers by Ozark Softscape. Designer Danielle Bunten Berry (credited as Dan Bunten) took advantage of the four joystick ports of the Atari 400 and 800 to allow four-player simultaneous play. M.U.L.E. was one of the first five games published in 1983 by new company Electronic Arts, alongside Axis Assassin, Archon: The Light and the Dark, Worms?, and Hard Hat Mack. Primarily a turn-based strategy game, it incorporates real-time elements where players compete directly as well as aspects that simulate economics. The game was ported to the Commodore 64, Nintendo Entertainment System, and IBM PC (as a self-booting disk). Japanese versions also exist for the PC-88, Sharp X1, and MSX2 computers. Like the subsequent models of the Atari 8-bit family, none of these systems allow four players with separate joysticks. The Commodore 64 version lets four players share joysticks, with two players using the keyboard during action portions. Gameplay Set on the fictional planet Irata (Atari backwards), the game is an exercise in supply and demand economics involving competition among four players, with computer opponents automatically filling in for any missing players. Players choose the race of their colonist, which has advantages and disadvantages that can be paired to their respective strategies. To win, players not only compete against each other to amass the largest amount of wealth, but must also cooperate for the survival of the colony. Central to the game is the acquisition and use of Multiple Use Labor Elements, or M.U.L.E.s, to develop and harvest resources from the player's real estate. Depending on how it is outfitted, a M.U.L.E. can be configured to harvest Energy, Food, Smithore (from which M.U.L.E.s are constructed), and Crystite (a valuable mineral available only at the "Tournament" level). Players must balance supply and demand of these elements, buying what they need and se
https://en.wikipedia.org/wiki/Formal%20verification
In the context of hardware and software systems, formal verification is the act of proving or disproving the correctness of a system with respect to a certain formal specification or property, using formal methods of mathematics. Formal verification is a key incentive for formal specification of systems, and is at the core of formal methods. It represents an important dimension of analysis and verification in electronic design automation and is one approach to software verification. The use of formal verification enables the highest Evaluation Assurance Level (EAL7) in the framework of common criteria for computer security certification. Formal verification can be helpful in proving the correctness of systems such as: cryptographic protocols, combinational circuits, digital circuits with internal memory, and software expressed as source code in a programming language. Prominent examples of verified software systems include the CompCert verified C compiler and the seL4 high-assurance operating system kernel. The verification of these systems is done by ensuring the existence of a formal proof of a mathematical model of the system. Examples of mathematical objects used to model systems are: finite-state machines, labelled transition systems, Horn clauses, Petri nets, vector addition systems, timed automata, hybrid automata, process algebra, formal semantics of programming languages such as operational semantics, denotational semantics, axiomatic semantics and Hoare logic. Approaches One approach and formation is model checking, which consists of a systematically exhaustive exploration of the mathematical model (this is possible for finite models, but also for some infinite models where infinite sets of states can be effectively represented finitely by using abstraction or taking advantage of symmetry). Usually, this consists of exploring all states and transitions in the model, by using smart and domain-specific abstraction techniques to consider whole groups o
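As an illustration of the model-checking approach sketched above, the following is a toy explicit-state checker (an assumed example, not a real verification tool): it exhaustively explores a finite transition system and reports a counterexample path if any reachable state violates an invariant.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Breadth-first exploration of a finite-state model; returns a
    counterexample path to a bad state, or None if the invariant holds."""
    frontier = deque([initial])
    parent = {initial: None}
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            path = []
            while state is not None:            # rebuild the path back to the start
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                frontier.append(nxt)
    return None

# Toy model: a counter modulo 8 whose specification says it must never reach 5.
print(check_invariant(0, lambda s: [(s + 1) % 8], lambda s: s != 5))
# [0, 1, 2, 3, 4, 5] -- the exhaustive search finds the violating run
```

Real model checkers apply the same idea with far more sophisticated state representations and abstraction techniques.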
https://en.wikipedia.org/wiki/Operational%20semantics
Operational semantics is a category of formal programming language semantics in which certain desired properties of a program, such as correctness, safety or security, are verified by constructing proofs from logical statements about its execution and procedures, rather than by attaching mathematical meanings to its terms (denotational semantics). Operational semantics are classified in two categories: structural operational semantics (or small-step semantics) formally describe how the individual steps of a computation take place in a computer-based system; by opposition natural semantics (or big-step semantics) describe how the overall results of the executions are obtained. Other approaches to providing a formal semantics of programming languages include axiomatic semantics and denotational semantics. The operational semantics for a programming language describes how a valid program is interpreted as sequences of computational steps. These sequences then are the meaning of the program. In the context of functional programming, the final step in a terminating sequence returns the value of the program. (In general there can be many return values for a single program, because the program could be nondeterministic, and even for a deterministic program there can be many computation sequences since the semantics may not specify exactly what sequence of operations arrives at that value.) Perhaps the first formal incarnation of operational semantics was the use of the lambda calculus to define the semantics of Lisp. Abstract machines in the tradition of the SECD machine are also closely related. History The concept of operational semantics was used for the first time in defining the semantics of Algol 68. The following statement is a quote from the revised ALGOL 68 report: The meaning of a program in the strict language is explained in terms of a hypothetical computer which performs the set of actions that constitute the elaboration of that program. (Algol68, Section
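A small-step (structural) operational semantics can be sketched concretely for a toy language of integer literals and addition (the language and rule names here are invented for illustration): each call to step performs one computation step, and the resulting sequence of configurations is the meaning of the program.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Lit:                 # values: integer literals
    value: int

@dataclass(frozen=True)
class Add:                 # terms: binary addition
    left: object
    right: object

def step(expr):
    """One small step of computation, or None if expr is already a value."""
    if isinstance(expr, Lit):
        return None
    if isinstance(expr.left, Lit) and isinstance(expr.right, Lit):
        return Lit(expr.left.value + expr.right.value)     # [ADD]
    if isinstance(expr.left, Lit):
        return Add(expr.left, step(expr.right))            # [RIGHT] congruence rule
    return Add(step(expr.left), expr.right)                # [LEFT] congruence rule

def run(expr):
    trace = [expr]
    while (nxt := step(expr)) is not None:
        trace.append(nxt)
        expr = nxt
    return trace

for cfg in run(Add(Add(Lit(1), Lit(2)), Lit(3))):
    print(cfg)      # (1 + 2) + 3  ->  3 + 3  ->  6
```

A big-step (natural) semantics would instead relate each term directly to its final value in a single judgment.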
https://en.wikipedia.org/wiki/Words%20%28Unix%29
words is a standard file on Unix and Unix-like operating systems, and is simply a newline-delimited list of dictionary words. It is used, for instance, by spell-checking programs. The words file is usually stored in or . On Debian and Ubuntu, the file is provided by the package, or its provider packages , , etc. On Fedora and Arch Linux, the file is provided by the package. The package is sourced from data from the Moby Project, a public domain compilation of words. References External links Sample words file from Duke CS department Unix Unix software
https://en.wikipedia.org/wiki/Series%20and%20parallel%20circuits
Two-terminal components and electrical networks can be connected in series or parallel. The resulting electrical network will have two terminals, and itself can participate in a series or parallel topology. Whether a two-terminal "object" is an electrical component (e.g. a resistor) or an electrical network (e.g. resistors in series) is a matter of perspective. This article will use "component" to refer to a two-terminal "object" that participate in the series/parallel networks. Components connected in series are connected along a single "electrical path", and each component has the same electric current through it, equal to the current through the network. The voltage across the network is equal to the sum of the voltages across each component. Components connected in parallel are connected along multiple paths, and each component has the same voltage across it, equal to the voltage across the network. The current through the network is equal to the sum of the currents through each component. The two preceding statements are equivalent, except for exchanging the role of voltage and current. A circuit composed solely of components connected in series is known as a series circuit; likewise, one connected completely in parallel is known as a parallel circuit. Many circuits can be analyzed as a combination of series and parallel circuits, along with other configurations. In a series circuit, the current that flows through each of the components is the same, and the voltage across the circuit is the sum of the individual voltage drops across each component. In a parallel circuit, the voltage across each of the components is the same, and the total current is the sum of the currents flowing through each component. Consider a very simple circuit consisting of four light bulbs and a 12-volt automotive battery. If a wire joins the battery to one bulb, to the next bulb, to the next bulb, to the next bulb, then back to the battery in one continuous loop, the bulbs are s
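The series and parallel rules above can be written as two one-line helpers; the numbers below follow the light-bulb example, with an assumed bulb resistance of 6 ohms (not stated in the text):

```python
def series(*rs):
    """Equivalent resistance of components in series: R = R1 + R2 + ..."""
    return sum(rs)

def parallel(*rs):
    """Equivalent resistance of components in parallel: 1/R = 1/R1 + 1/R2 + ..."""
    return 1 / sum(1 / r for r in rs)

V = 12.0        # automotive battery, volts
R_bulb = 6.0    # assumed resistance of one bulb, ohms

# One continuous loop (series): one current everywhere, the voltage divides.
R_s = series(R_bulb, R_bulb, R_bulb, R_bulb)       # 24 ohms
I_s = V / R_s                                      # 0.5 A through every bulb
print(R_s, I_s, I_s * R_bulb)                      # each bulb sees 3 V

# Each bulb wired straight across the battery (parallel): one voltage, currents add.
R_p = parallel(R_bulb, R_bulb, R_bulb, R_bulb)     # 1.5 ohms
print(R_p, V / R_p)                                # 8 A total, 2 A per bulb
```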
https://en.wikipedia.org/wiki/Fresnel%20integral
The Fresnel integrals and are two transcendental functions named after Augustin-Jean Fresnel that are used in optics and are closely related to the error function (). They arise in the description of near-field Fresnel diffraction phenomena and are defined through the following integral representations: The simultaneous parametric plot of and is the Euler spiral (also known as the Cornu spiral or clothoid). Definition The Fresnel integrals admit the following power series expansions that converge for all : Some widely used tables use instead of for the argument of the integrals defining and . This changes their limits at infinity from to and the arc length for the first spiral turn from to 2 (at ). These alternative functions are usually known as normalized Fresnel integrals. Euler spiral The Euler spiral, also known as Cornu spiral or clothoid, is the curve generated by a parametric plot of against . The Cornu spiral was created by Marie Alfred Cornu as a nomogram for diffraction computations in science and engineering. From the definitions of Fresnel integrals, the infinitesimals and are thus: Thus the length of the spiral measured from the origin can be expressed as That is, the parameter is the curve length measured from the origin , and the Euler spiral has infinite length. The vector also expresses the unit tangent vector along the spiral, giving . Since is the curve length, the curvature can be expressed as Thus the rate of change of curvature with respect to the curve length is An Euler spiral has the property that its curvature at any point is proportional to the distance along the spiral, measured from the origin. This property makes it useful as a transition curve in highway and railway engineering: if a vehicle follows the spiral at unit speed, the parameter in the above derivatives also represents the time. Consequently, a vehicle following the spiral at constant speed will have a constant rate of angular acceleration.
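A self-contained numerical sketch of the Fresnel integrals, using the commonly tabulated normalization S(x) = integral from 0 to x of sin(pi t^2/2) dt and C(x) = integral from 0 to x of cos(pi t^2/2) dt (a convention assumed here, since, as noted above, tables differ):

```python
import math

# Term-by-term integration of the sine and cosine power series gives
# series for S and C that converge for every x.
def fresnel_S(x, terms=40):
    return sum((-1) ** n * (math.pi / 2) ** (2 * n + 1) * x ** (4 * n + 3)
               / (math.factorial(2 * n + 1) * (4 * n + 3))
               for n in range(terms))

def fresnel_C(x, terms=40):
    return sum((-1) ** n * (math.pi / 2) ** (2 * n) * x ** (4 * n + 1)
               / (math.factorial(2 * n) * (4 * n + 1))
               for n in range(terms))

# In this normalization both integrals approach 1/2 as x grows, and plotting
# the points (C(t), S(t)) for increasing t traces out the Euler spiral.
for x in (0.5, 1.0, 2.0, 3.0):
    print(x, fresnel_C(x), fresnel_S(x))
```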
https://en.wikipedia.org/wiki/Nomogram
A nomogram (from Greek , "law" and , "line"), also called a nomograph, alignment chart, or abac, is a graphical calculating device, a two-dimensional diagram designed to allow the approximate graphical computation of a mathematical function. The field of nomography was invented in 1884 by the French engineer Philbert Maurice d'Ocagne (1862–1938) and used extensively for many years to provide engineers with fast graphical calculations of complicated formulas to a practical precision. Nomograms use a parallel coordinate system invented by d'Ocagne rather than standard Cartesian coordinates. A nomogram consists of a set of n scales, one for each variable in an equation. Knowing the values of n-1 variables, the value of the unknown variable can be found, or by fixing the values of some variables, the relationship between the unfixed ones can be studied. The result is obtained by laying a straightedge across the known values on the scales and reading the unknown value from where it crosses the scale for that variable. The virtual or drawn line created by the straightedge is called an index line or isopleth. Nomograms flourished in many different contexts for roughly 75 years because they allowed quick and accurate computations before the age of pocket calculators. Results from a nomogram are obtained very quickly and reliably by simply drawing one or more lines. The user does not have to know how to solve algebraic equations, look up data in tables, use a slide rule, or substitute numbers into equations to obtain results. The user does not even need to know the underlying equation the nomogram represents. In addition, nomograms naturally incorporate implicit or explicit domain knowledge into their design. For example, to create larger nomograms for greater accuracy the nomographer usually includes only scale ranges that are reasonable and of interest to the problem. Many nomograms include other useful markings such as reference labels and colored regions. All of thes
https://en.wikipedia.org/wiki/Computational%20neuroscience
Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system. Computational neuroscience employs computational simulations to validate and solve mathematical models, and so can be seen as a sub-field of theoretical neuroscience; however, the two fields are often synonymous. The term mathematical neuroscience is also used sometimes, to stress the quantitative nature of the field. Computational neuroscience focuses on the description of biologically plausible neurons (and neural systems) and their physiology and dynamics, and it is therefore not directly concerned with biologically unrealistic models used in connectionism, control theory, cybernetics, quantitative psychology, machine learning, artificial neural networks, artificial intelligence and computational learning theory; although mutual inspiration exists and sometimes there is no strict limit between fields, with model abstraction in computational neuroscience depending on research scope and the granularity at which biological entities are analyzed. Models in theoretical neuroscience are aimed at capturing the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, and chemical coupling via network oscillations, columnar and topographic architecture, nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments. History The term 'computational neuroscience' was introduced by Eric L. Schwartz, who organized a conference, held in 1985 in Carmel, California, at the request of the Systems Development Foundation to provide a summary of the curr
https://en.wikipedia.org/wiki/ABAP
ABAP (Advanced Business Application Programming, originally Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "general report preparation processor") is a high-level programming language created by the German software company SAP SE. It is currently positioned, alongside Java, as the language for programming the SAP NetWeaver Application Server, which is part of the SAP NetWeaver platform for building business applications. Introduction ABAP is one of the many application-specific fourth-generation languages (4GLs) first developed in the 1980s. It was originally the report language for SAP R/2, a platform that enabled large corporations to build mainframe business applications for materials management and financial and management accounting. ABAP used to be an abbreviation of Allgemeiner Berichts-Aufbereitungs-Prozessor, German for "general report preparation processor", but was later renamed to the English Advanced Business Application Programming. ABAP was one of the first languages to include the concept of Logical Databases (LDBs), which provide a high level of abstraction from the basic database level(s). The ABAP language was originally used by developers to develop the SAP R/3 platform. It was also intended to be used by SAP customers to enhance SAP applications – customers can develop custom reports and interfaces with ABAP programming. The language was geared towards more technical customers with programming experience. ABAP remains the language for creating programs for the client–server R/3 system, which SAP first released in 1992. As computer hardware evolved through the 1990s, more and more of SAP's applications and systems were written in ABAP. By 2001, all but the most basic functions were written in ABAP. In 1999, SAP released an object-oriented extension to ABAP called ABAP Objects, along with R/3 release 4.6. SAP's current development platform NetWeaver supports both ABAP and Java.
https://en.wikipedia.org/wiki/Bruun%27s%20FFT%20algorithm
Bruun's algorithm is a fast Fourier transform (FFT) algorithm based on an unusual recursive polynomial-factorization approach, proposed for powers of two by G. Bruun in 1978 and generalized to arbitrary even composite sizes by H. Murakami in 1996. Because its operations involve only real coefficients until the last computation stage, it was initially proposed as a way to efficiently compute the discrete Fourier transform (DFT) of real data. Bruun's algorithm has not seen widespread use, however, as approaches based on the ordinary Cooley–Tukey FFT algorithm have been successfully adapted to real data with at least as much efficiency. Furthermore, there is evidence that Bruun's algorithm may be intrinsically less accurate than Cooley–Tukey in the face of finite numerical precision (Storn, 1993). Nevertheless, Bruun's algorithm illustrates an alternative algorithmic framework that can express both itself and the Cooley–Tukey algorithm, and thus provides an interesting perspective on FFTs that permits mixtures of the two algorithms and other generalizations. A polynomial approach to the DFT Recall that the DFT is defined by the formula: For convenience, let us denote the N roots of unity by ωNn (n = 0, ..., N − 1): and define the polynomial x(z) whose coefficients are xn: The DFT can then be understood as a reduction of this polynomial; that is, Xk is given by: where mod denotes the polynomial remainder operation. The key to fast algorithms like Bruun's or Cooley–Tukey comes from the fact that one can perform this set of N remainder operations in recursive stages. Recursive factorizations and FFTs In order to compute the DFT, we need to evaluate the remainder of x(z) modulo N degree-1 polynomials as described above. Evaluating these remainders one by one is equivalent to evaluating the usual DFT formula directly, and requires O(N2) operations. However, one can combine these remainders recursively to reduce the cost, using the following trick: if we want to
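A short sketch of the polynomial viewpoint described above (the sign convention for the roots of unity is chosen here to match numpy.fft.fft, since the formula itself did not survive extraction): reducing x(z) modulo the degree-1 polynomial (z - omega**k) is the same as evaluating x(z) at omega**k, so doing all N reductions one by one is just the O(N^2) DFT.

```python
import numpy as np

def dft_by_remainders(x):
    """Compute the DFT of x by reducing the polynomial x(z) = sum_n x_n z^n
    modulo each degree-1 polynomial (z - omega**k), i.e. by evaluating x(z)
    at z = omega**k, with omega = exp(-2j*pi/N)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    omega = np.exp(-2j * np.pi / N)
    # np.polyval expects the highest-degree coefficient first, hence the reversal.
    return np.array([np.polyval(x[::-1], omega ** k) for k in range(N)])

x = [1.0, 2.0, 3.0, 4.0, 0.0, -1.0, -2.0, -3.0]
print(np.allclose(dft_by_remainders(x), np.fft.fft(x)))   # True

# Bruun's and Cooley-Tukey both gain their speed by organizing these N
# remainder operations into recursive stages instead of doing them one at a time.
```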
https://en.wikipedia.org/wiki/Intentional%20radiator
An intentional radiator is any device that is deliberately designed to produce radio waves. Radio transmitters of all kinds, including the garage door opener, cordless telephone, cellular phone, wireless video sender, wireless microphone, and many others fall into this category. In the United States, intentional radiators are regulated under 47 CFR Part 15, Subpart C. See also Spurious emission Unintentional radiator (incidental radiator)
https://en.wikipedia.org/wiki/Unintentional%20radiator
In United States regulatory law, an unintentional radiator is any device that is designed to use radio frequency electrical signals within itself, or sends radio frequency signals over conducting cabling to other equipment, but is not intended to radiate radio frequency energy. An incidental radiator is a device that can generate radio frequency electrical energy even though it is not intentionally designed to do so. Unintentional and incidental radio frequency radiation can interfere with other electronic devices. In the United States, limits on radiated emissions from unintentional and incidental radiators are established by the Federal Communications Commission. Similar regulations have been promulgated by other governments. Reference is usually made in regulations to technical standards established by organizations such as ANSI, IEC and ITU. Example unintentional and incidental radiating devices A computer is a typical example of an unintentional radiator. Radio frequency signals used within the computer circuitry may be unintentionally coupled to the power cord or to an interconnecting cable, which then acts as an antenna. A radio receiver will often use an intermediate frequency which is detectable outside the radio—the concept behind at least one audience measurement concept for roadside detection of radio stations which passing motorists are listening to. Examples of incidental radiators include electric motors, transformers, dimmers, and corona from electrical powerlines. Radiated emissions from these commonly create interference on AM radio receivers and on television receivers. Regulatory overview In North America, active devices that are characterized as unintentional radiators are governed by Part 15 of the FCC regulations. In Canada, Innovation, Science and Economic Development considers them as interference-causing Equipment. Globally, most domestic regulation of unintentional radiators are based on ITU recommendations. Generally, this means the
https://en.wikipedia.org/wiki/Spurious%20emission
In radio communication, a spurious emission is any component of a radiated radio frequency signal the complete suppression of which would not impair the integrity of the modulation type or the information being transmitted. A radiated signal outside of a transmitter's assigned channel is an example of a spurious emission. Spurious emissions can include harmonic emissions, intermodulation products and frequency conversion products. See also Interference (communication) Radio spectrum pollution
https://en.wikipedia.org/wiki/Al-Kindi
Abū Yūsuf Yaʻqūb ibn ʼIsḥāq aṣ-Ṣabbāḥ al-Kindī (; ; ; ) was an Arab Muslim polymath active as a philosopher, mathematician, physician, and music theorist. Al-Kindi was the first of the Islamic peripatetic philosophers, and is hailed as the "father of Arab philosophy". Al-Kindi was born in Kufa and educated in Baghdad. He became a prominent figure in the House of Wisdom, and a number of Abbasid Caliphs appointed him to oversee the translation of Greek scientific and philosophical texts into the Arabic language. This contact with "the philosophy of the ancients" (as Hellenistic philosophy was often referred to by Muslim scholars) had a profound effect on him, as he synthesized, adapted and promoted Hellenistic and Peripatetic philosophy in the Muslim world. He subsequently wrote hundreds of original treatises of his own on a range of subjects ranging from metaphysics, ethics, logic and psychology, to medicine, pharmacology, mathematics, astronomy, astrology and optics, and further afield to more practical topics like perfumes, swords, jewels, glass, dyes, zoology, tides, mirrors, meteorology and earthquakes. In the field of mathematics, al-Kindi played an important role in introducing Hindu numerals to the Islamic world, and their further development into Arabic numerals along with al-Khwarizmi which eventually was adopted by the rest of the world. Al-Kindi was also one of the fathers of cryptography. Building on the work of al-Khalil (717–786), Al-Kindi's book entitled Manuscript on Deciphering Cryptographic Messages gave rise to the birth of cryptanalysis, was the earliest known use of statistical inference, and introduced several new methods of breaking ciphers, notably frequency analysis. He was able to create a scale that would enable doctors to gauge the effectiveness of their medication by combining his knowledge of mathematics and medicine. The central theme underpinning al-Kindi's philosophical writings is the compatibility between philosophy and other "or
https://en.wikipedia.org/wiki/Subcarrier
A subcarrier is a sideband of a radio frequency carrier wave, which is modulated to send additional information. Examples include the provision of colour in a black and white television system or the provision of stereo in a monophonic radio broadcast. There is no physical difference between a carrier and a subcarrier; the "sub" implies that it has been derived from a carrier, which has been amplitude modulated by a steady signal and has a constant frequency relation to it. FM stereo Stereo broadcasting is made possible by using a subcarrier on FM radio stations, which takes the left channel and "subtracts" the right channel from it — essentially by hooking up the right-channel wires backward (reversing polarity) and then joining left and reversed-right. The result is modulated with suppressed carrier AM, more correctly called sum and difference modulation or SDM, at 38 kHz in the FM signal, which is joined at 2% modulation with the mono left+right audio (which ranges 50 Hz ~ 15 kHz). A 19 kHz pilot tone is also added at a 9% modulation to trigger radios to decode the stereo subcarrier, making FM stereo fully compatible with mono. Once the receiver demodulates the L+R and L−R signals, it adds the two signals ([L+R] + [L−R] = 2L) to get the left channel and subtracts ([L+R] − [L−R] = 2R) to get the right channel. Rather than having a local oscillator, the 19 kHz pilot tone provides an in-phase reference signal used to reconstruct the missing carrier wave from the 38 kHz signal. For AM broadcasting, different analog (AM stereo) and digital (HD Radio) methods are used to produce stereophonic audio. Modulated subcarriers of the type used in FM broadcasting are impractical for AM broadcast due to the relatively narrow signal bandwidth allocated for a given AM signal. On standard AM broadcast radios, the entire 9 kHz to 10 kHz allocated bandwidth of the AM signal may be used for audio. Television Likewise, analog TV signals are transmitted with the black and whit
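A small sketch of the sum-and-difference arithmetic described above (the sample values are arbitrary; a real broadcast chain does this with modulated audio, not four numbers):

```python
import numpy as np

left = np.array([0.2, 0.5, -0.1, 0.8])     # illustrative left-channel samples
right = np.array([0.1, -0.3, 0.4, 0.6])    # illustrative right-channel samples

# Transmitter side: mono-compatible sum plus the difference on the 38 kHz subcarrier.
mono = left + right        # L+R, what a mono receiver reproduces
diff = left - right        # L-R, modulated onto the subcarrier

# Receiver side, after demodulating both signals:
recovered_left = (mono + diff) / 2     # (L+R) + (L-R) = 2L
recovered_right = (mono - diff) / 2    # (L+R) - (L-R) = 2R

print(np.allclose(recovered_left, left), np.allclose(recovered_right, right))  # True True
```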
https://en.wikipedia.org/wiki/ISDB
Integrated Services Digital Broadcasting (ISDB; Japanese: , Tōgō dejitaru hōsō sābisu) is a Japanese broadcasting standard for digital television (DTV) and digital radio. ISDB supersedes both the NTSC-J analog television system and the previously used MUSE Hi-vision analog HDTV system in Japan. An improved version of ISDB-T (ISDB-T International) will soon replace the NTSC, PAL-M, and PAL-N broadcast standards in South America and the Philippines. Digital Terrestrial Television Broadcasting (DTTB) services using ISDB-T started in Japan in December 2003, and since then, many countries have adopted ISDB over other digital broadcasting standards. A newer and "advanced" version of the ISDB standard (that will eventually allow up to 8K terrestrial broadcasts and 1080p mobile broadcasts via the VVC codec, including HDR and HFR) is currently under development. Countries and territories using ISDB-T Asia (officially adopted ISDB-T, started broadcasting in digital) (officially adopted ISDB-T) (officially adopted ISDB-T HD) (currently assessing digital platform) Americas (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started pre-implementation stage) (officially adopted ISDB-T International, started pre-imp
https://en.wikipedia.org/wiki/ATSC%20standards
Advanced Television Systems Committee (ATSC) standards are an American set of standards for digital television transmission over terrestrial, cable and satellite networks. It is largely a replacement for the analog NTSC standard and, like that standard, is used mostly in the United States, Mexico, Canada, South Korea and Trinidad & Tobago. Several former NTSC users, such as Japan, have not used ATSC during their digital television transition, because they adopted other systems such as ISDB developed by Japan, and DVB developed in Europe, for example. The ATSC standards were developed in the early 1990s by the Grand Alliance, a consortium of electronics and telecommunications companies that assembled to develop a specification for what is now known as HDTV. The standard is now administered by the Advanced Television Systems Committee. It includes a number of patented elements, and licensing is required for devices that use these parts of the standard. Key among these is the 8VSB modulation system used for over-the-air broadcasts. ATSC technology was primarily developed with patent contributions from LG Electronics, which holds most of the patents for the ATSC standard. ATSC includes two primary high definition video formats, 1080i and 720p. It also includes standard-definition formats, although initially only HDTV services were launched in the digital format. ATSC can carry multiple channels of information on a single stream, and it is common for there to be a single high-definition signal and several standard-definition signals carried on a single 6 MHz (former NTSC) channel allocation. Background The high-definition television standards defined by the ATSC produce widescreen 16:9 images up to 1920×1080 pixels in size, more than six times the display resolution of the earlier standard. However, many different image sizes are also supported. The reduced bandwidth requirements of lower-resolution images allow up to six standard-definition "subchannels" to be broadc
https://en.wikipedia.org/wiki/Permutation%20City
Permutation City is a 1994 science-fiction novel by Greg Egan that explores many concepts, including quantum ontology, through various philosophical aspects of artificial life and simulated reality. Sections of the story were adapted from Egan's 1992 short story "Dust", which dealt with many of the same philosophical themes. Permutation City won the John W. Campbell Award for the best science-fiction novel of the year in 1995 and was nominated for the Philip K. Dick Award the same year. The novel was also cited in a 2003 Scientific American article on multiverses by Max Tegmark. Themes and setting Permutation City asks whether there is a difference between a computer simulation of a person and a "real" person. It focuses on a model of consciousness and reality, the Dust Theory, similar to the Ultimate Ensemble Mathematical Universe hypothesis proposed by Max Tegmark. It uses the assumption that human consciousness is Turing-computable: that consciousness can be produced by a computer program. The book deals with consequences of human consciousness being amenable to mathematical manipulation, as well as some consequences of simulated realities. In this way, Egan attempts to deconstruct notions of self, memory, and mortality, and of physical reality. The Autoverse is an artificial life simulator based on a cellular automaton complex enough to represent the substratum of an artificial chemistry. It is deterministic, internally consistent and vaguely resembles real chemistry. Tiny environments, simulated in the Autoverse and filled with populations of a simple, designed lifeform, Autobacterium lamberti, are maintained by a community of enthusiasts obsessed with getting A. lamberti to evolve, something the Autoverse chemistry seems to make extremely difficult. Related explorations go on in virtual realities (VR), which make extensive use of patchwork heuristics to crudely simulate immersive and convincing physical environments, albeit at a maximal speed of seventeen t
https://en.wikipedia.org/wiki/Aspect%20ratio%20%28aeronautics%29
In aeronautics, the aspect ratio of a wing is the ratio of its span to its mean chord. It is equal to the square of the wingspan divided by the wing area. Thus, a long, narrow wing has a high aspect ratio, whereas a short, wide wing has a low aspect ratio. Aspect ratio and other features of the planform are often used to predict the aerodynamic efficiency of a wing because the lift-to-drag ratio increases with aspect ratio, improving the fuel economy in powered airplanes and the gliding angle of sailplanes. Definition The aspect ratio is the ratio of the square of the wingspan to the projected wing area , which is equal to the ratio of the wingspan to the standard mean chord : Mechanism As a useful simplification, an airplane in flight can be imagined to affect a circular cylinder of air with a diameter equal to the wingspan. A large wingspan affects a large cylinder of air, and a small wingspan affects a small cylinder of air. A small air cylinder must be pushed down with a greater power (energy change per unit time) than a large cylinder in order to produce an equal upward force (momentum change per unit time). This is because giving the same momentum change to a smaller mass of air requires giving it a greater velocity change, and a much greater energy change because energy is proportional to the square of the velocity while momentum is only linearly proportional to the velocity. The aft-leaning component of this change in velocity is proportional to the induced drag, which is the force needed to take up that power at that airspeed. The interaction between undisturbed air outside the cylinder of air, and the downward-moving cylinder of air occurs at the wingtips and can be seen as wingtip vortices. It is important to keep in mind that this is a drastic oversimplification, and an airplane wing affects a very large area around itself. In aircraft Although a long, narrow wing with a high aspect ratio has aerodynamic advantages like better lift-to-drag-ra
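The definition above in code form (the span and area figures are illustrative, not taken from any particular aircraft):

```python
def aspect_ratio(span_m: float, wing_area_m2: float) -> float:
    """AR = b**2 / S, with b the wingspan and S the projected wing area."""
    return span_m ** 2 / wing_area_m2

def standard_mean_chord(span_m: float, wing_area_m2: float) -> float:
    """SMC = S / b, so that AR = b / SMC gives the same number."""
    return wing_area_m2 / span_m

b, S = 15.0, 12.5                          # a long, narrow (glider-like) planform
print(aspect_ratio(b, S))                  # 18.0
print(b / standard_mean_chord(b, S))       # 18.0 -- the same value via b / SMC
```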
https://en.wikipedia.org/wiki/Low-power%20broadcasting
Low-power broadcasting is broadcasting by a broadcast station at a low transmitter power output to a smaller service area than "full power" stations within the same region. It is often distinguished from "micropower broadcasting" (more commonly "microbroadcasting") and broadcast translators. LPAM, LPFM and LPTV are in various levels of use across the world, varying widely based on the laws and their enforcement. Canada Radio communications in Canada are regulated by the Radio Communications and Broadcasting Regulatory Branch, a branch of Industry Canada, in conjunction with the Canadian Radio-television and Telecommunications Commission (CRTC). Interested parties must apply for both a certificate from Industry Canada and a license from CRTC in order to operate a radio station. Industry Canada manages the technicalities of spectrum space and technological requirements whereas content regulation is conducted more so by CRTC. LPAM stations are authorized to operate with less than 100 watts of power. LPFM is broken up into two classes in Canada, Low (50 watts) and Very Low (10 watts). The transmitters therefore range from 1 to 50 watts, as opposed to 1 to 100 watts in the U.S. , 500 licenses (very low and low-power FM) have been issued. These transmitters are generally only allowed in remote areas. Stations in the low-power class are subject to the same CRTC licensing requirements, and will generally follow the same call sign format, as full-power stations. Stations in the very low-power class formerly had to have CRTC licenses as well, although a series of CRTC regulation changes in the early 2000s exempted most such stations from licensing; a station in this class will usually not have a conventional call sign, but will instead be identified in a naming format consisting of a four-digit number preceded by the letters CH for a television station or VF for a radio station. The regulation of spectrum space is strict in Canada, as well having restrictions on second a
https://en.wikipedia.org/wiki/Effective%20radiated%20power
Effective radiated power (ERP), synonymous with equivalent radiated power, is an IEEE standardized definition of directional radio frequency (RF) power, such as that emitted by a radio transmitter. It is the total power in watts that would have to be radiated by a half-wave dipole antenna to give the same radiation intensity (signal strength or power flux density in watts per square meter) as the actual source antenna at a distant receiver located in the direction of the antenna's strongest beam (main lobe). ERP measures the combination of the power emitted by the transmitter and the ability of the antenna to direct that power in a given direction. It is equal to the input power to the antenna multiplied by the gain of the antenna. It is used in electronics and telecommunications, particularly in broadcasting to quantify the apparent power of a broadcasting station experienced by listeners in its reception area. An alternate parameter that measures the same thing is effective isotropic radiated power (EIRP). Effective isotropic radiated power is the hypothetical power that would have to be radiated by an isotropic antenna to give the same ("equivalent") signal strength as the actual source antenna in the direction of the antenna's strongest beam. The difference between EIRP and ERP is that ERP compares the actual antenna to a half-wave dipole antenna, while EIRP compares it to a theoretical isotropic antenna. Since a half-wave dipole antenna has a gain of 1.64 (or 2.15 dB) compared to an isotropic radiator, if ERP and EIRP are expressed in watts their relation is If they are expressed in decibels Definitions Effective radiated power and effective isotropic radiated power both measure the power density a radio transmitter and antenna (or other source of electromagnetic waves) radiates in a specific direction: in the direction of maximum signal strength (the "main lobe") of its radiation pattern. This apparent power is dependent on two factors: the total powe
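The dipole-gain relationship quoted above, written as a pair of conversion helpers (a minimal sketch: feed-line losses and antenna patterns are ignored, and the example numbers are made up):

```python
DIPOLE_GAIN = 1.64     # gain of a half-wave dipole over an isotropic radiator
DIPOLE_GAIN_DB = 2.15  # the same figure in decibels

def eirp_from_erp(erp_watts: float) -> float:
    """EIRP = 1.64 * ERP when both are expressed in watts."""
    return erp_watts * DIPOLE_GAIN

def eirp_db_from_erp_db(erp_db: float) -> float:
    """In decibels the ratio becomes an additive 2.15 dB offset."""
    return erp_db + DIPOLE_GAIN_DB

def erp_from_tx(tx_power_watts: float, gain_dbd: float) -> float:
    """ERP = power delivered to the antenna times antenna gain expressed in dBd
    (gain relative to a half-wave dipole); losses are ignored in this sketch."""
    return tx_power_watts * 10 ** (gain_dbd / 10)

print(eirp_from_erp(1000.0))     # 1640.0 W EIRP for a 1 kW ERP station
print(erp_from_tx(100.0, 6.0))   # ~398 W ERP from 100 W into a 6 dBd antenna
```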
https://en.wikipedia.org/wiki/FM%20broadcast%20band
The FM broadcast band is a range of radio frequencies used for FM broadcasting by radio stations. The range of frequencies used differs between different parts of the world. In Europe and Africa (defined as International Telecommunication Union (ITU) region 1) and in Australia and New Zealand, it spans from 87.5 to 108 megahertz (MHz) - also known as VHF Band II - while in the Americas (ITU region 2) it ranges from 88 to 108 MHz. The FM broadcast band in Japan uses 76 to 95 MHz, and in Brazil, 76 to 108 MHz. The International Radio and Television Organisation (OIRT) band in Eastern Europe is from 65.9 to 74.0 MHz, although these countries now primarily use the 87.5 to 108 MHz band, as in the case of Russia. Some other countries have already discontinued the OIRT band and have changed to the 87.5 to 108 MHz band. Narrow band Frequency Modulation was developed and demonstrated by Hanso Idzerda in 1919. Wide band Frequency modulation radio originated in the United States during the 1930s; the system was developed by the American electrical engineer Edwin Howard Armstrong. However, FM broadcasting did not become widespread, even in North America, until the 1960s. Frequency-modulated radio waves can be generated at any frequency. All the bands mentioned in this article are in the very high frequency (VHF) range, which extends from 30 to 300 MHz. CCIR bandplan Center frequencies While all countries use FM channel center frequencies ending in 0.1, 0.3, 0.5, 0.7, and 0.9 MHz, some countries also use center frequencies ending in 0.0, 0.2, 0.4, 0.6, and 0.8 MHz. A few others also use 0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85, and 0.95 MHz. An ITU conference in Geneva, Switzerland, on December 7, 1984, resolved to discontinue the use of 50 kHz channel spacings throughout Europe. Most countries have used 100 kHz or 200 kHz channel spacings for FM broadcasting since this ITU conference in 1984. Some digitally-tuned FM radios are unable to tune using 50 kHz or
https://en.wikipedia.org/wiki/Trigger%20finger
Trigger finger, also known as stenosing tenosynovitis, is a disorder characterized by catching or locking of the involved finger in full or near full flexion, typically with force. There may be tenderness in the palm of the hand near the last skin crease (distal palmar crease). The name "trigger finger" may refer to the motion of "catching" like a trigger on a gun. The ring finger and thumb are most commonly affected. The problem is generally idiopathic (no known cause). People with diabetes might be relatively prone to trigger finger. The pathophysiology is enlargement of the flexor tendon and the A1 pulley of the tendon sheath. While often referred to as a type of stenosing tenosynovitis (which implies inflammation) the pathology is mucoid degeneration. Mucoid degeneration is when fibrous tissue such as tendon has less organized collagen, more abundant extra-cellular matrix, and changes in the cells (fibrocytes) to act and look more like cartilage cells (chondroid metaplasia). Diagnosis is typically based on symptoms and signs after excluding other possible causes. Trigger digits can resolve without treatment. Treatment options that are disease modifying include steroid injections and surgery. Splinting immobilization of the finger may or may not be disease modifying. Signs and symptoms Symptoms include catching or locking of the involved finger when it is forcefully flexed. There may be tenderness in the palm of the hand near the last skin crease (distal palmar crease). Often a nodule can be felt in this area. There is some evidence that idiopathic trigger finger behaves differently in people with diabetes. Causes It is important to distinguish association and causation. The vast majority of trigger digits are idiopathic, meaning there is no known cause. However, recent publications indicate that diabetes and high blood sugar levels increases the risk of developing trigger finger. Some speculate that repetitive forceful use of a digit leads to narrowing of
https://en.wikipedia.org/wiki/Null%20fill
Null fill in radio engineering is used in radio antenna systems which are located on mountains or tall towers, to prevent too much of the signal from overshooting the nearest part of the intended coverage area. Phasing is used between antenna elements to take power away from the main lobe and electrically direct more of it at a more downward angle in the vertical plane. This requires a phased array. Changing the relative power supplied to each element also changes the radiation pattern in this manner, and often both methods are used in combination. See also Null (radio) Beam tilt
https://en.wikipedia.org/wiki/Beam%20tilt
Beam tilt is used in radio to aim the main lobe of the vertical plane radiation pattern of an antenna below (or above) the horizontal plane. The simplest way is mechanical beam tilt, where the antenna is physically mounted in such a manner as to lower the angle of the signal on one side. However, this also raises it on the other side, making it useful in only very limited situations. More common is electrical beam tilt, where the phasing between antenna elements is tweaked to make the signal go down (usually) in all directions. This is extremely useful when the antenna is at a very high point, and the edge of the signal is likely to miss the target (broadcast audience, cellphone users, etc.) entirely. With electrical tilting, front and back lobes tilt in the same direction. For example, an electrical downtilt will make both the front lobe and the back lobe tilt down. This is the property used in the above example where the signal is pointed down in all directions. On the contrary, mechanical downtilting will make the front lobe tilt down and the back lobe tilt up. In almost all practical cases, antennas are only tilted down – though tilting up is technically possible. The use of purely electrical tilt with no mechanical tilt is an attractive choice for aesthetic reasons which are very important for operators seeking acceptance of integrated antennas in visible locations. In GSM and UMTS cellular networks, mechanical tilt is almost always fixed whereas electrical tilt can be controlled using remote actuators and position sensors, thus reducing operating expenses. Remote electrical tilt is abbreviated as RET and it is part of the Antenna Interface Standards Group's open specification for the control interface of antenna devices. Occasionally, mechanical and electrical tilt will be used together in order to create greater beam tilt in one direction than the other, mainly to accommodate unusual terrain. Along with null fill, beam tilt is the essential paramet
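A numerical sketch of how a progressive phase shift between elements tilts the main lobe of a vertical array downward (the element count, half-wavelength spacing and 10-degree phase step are assumed values for illustration):

```python
import numpy as np

def array_factor(theta_deg, n_elements=8, spacing_wl=0.5, phase_step_deg=0.0):
    """Magnitude of the array factor of a uniform vertical column at elevation
    angle theta (degrees; negative means below the horizontal plane)."""
    theta = np.radians(theta_deg)
    n = np.arange(n_elements)
    phase = 2 * np.pi * spacing_wl * n * np.sin(theta) + n * np.radians(phase_step_deg)
    return abs(np.exp(1j * phase).sum())

angles = np.linspace(-30, 30, 601)
untilted = [array_factor(a) for a in angles]
tilted = [array_factor(a, phase_step_deg=10.0) for a in angles]

print(angles[np.argmax(untilted)])   # 0.0 -- main lobe aimed at the horizon
print(angles[np.argmax(tilted)])     # about -3.2 degrees -- lobe tilted down
```

The same array-factor calculation, with unequal powers fed to the elements, is what produces null fill.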
https://en.wikipedia.org/wiki/Rhizobia
Rhizobia are diazotrophic bacteria that fix nitrogen after becoming established inside the root nodules of legumes (Fabaceae). To express genes for nitrogen fixation, rhizobia require a plant host; they cannot independently fix nitrogen. In general, they are gram negative, motile, non-sporulating rods. Rhizobia are a "group of soil bacteria that infect the roots of legumes to form root nodules". Rhizobia are found in the soil and after infection, produce nodules in the legume where they fix nitrogen gas (N2) from the atmosphere turning it into a more readily useful form of nitrogen. From here, the nitrogen is exported from the nodules and used for growth in the legume. Once the legume dies, the nodule breaks down and releases the rhizobia back into the soil where they can live individually or reinfect a new legume host. History The first known species of rhizobia, Rhizobium leguminosarum, was identified in 1889, and all further species were initially placed in the Rhizobium genus. Most research has been done on crop and forage legumes such as clover, alfalfa, beans, peas, and soybeans; more research is being done on North American legumes. Taxonomy Rhizobia are a paraphyletic group that fall into two classes of Pseudomonadota—the alphaproteobacteria and betaproteobacteria. As shown below, most belong to the order Hyphomicrobiales, but several rhizobia occur in distinct bacterial orders of the Pseudomonadota. These groups include a variety of non-symbiotic bacteria. For instance, the plant pathogen Agrobacterium is a closer relative of Rhizobium than the Bradyrhizobium that nodulate soybean. Importance in agriculture Although much of the nitrogen is removed when protein-rich grain or hay is harvested, significant amounts can remain in the soil for future crops. This is especially important when nitrogen fertilizer is not used, as in organic rotation schemes or some less-industrialized countries. Nitrogen is the most commonly deficient nutrient in many soils
https://en.wikipedia.org/wiki/Rich%20client
In computer networking, a rich client (also called heavy, fat or thick client) is a computer (a "client" in client–server network architecture) that typically provides rich functionality independent of the central server. This kind of computer was originally known as just a "client" or "thick client," in contrast with "thin client", which describes a computer heavily dependent on a server's applications. A rich client may be described as having a rich user interaction. While a rich client still requires at least periodic connection to a network or central server , it is often characterised by the ability to perform many functions without a connection. In contrast, a thin client generally does as little processing as possible on the client, relying on access to the server each time input data needs to be processed or validated. Introduction The designer of a client–server application decides which parts of the task should be executed on the client, and which on the server. This decision can crucially affect the cost of clients and servers, the robustness and security of the application as a whole, and the flexibility of the design to later modification or porting. The characteristics of the user interface often force the decision on a designer. For instance, a drawing package could require download of an initial image from a server, and allow all edits to be made locally, returning the revised drawing to the server upon completion. This would require a rich client and might be characterised by a long delay to start and stop (while a whole complex drawing was transferred), but quick to edit. Conversely, a thin client could download just the visible parts of the drawing at the beginning and send each change back to the server to update the drawing. This might be characterised by a short start-up time, but a tediously slow editing process. History The original server clients were simple text display terminals including Wyse VDUs, and rich clients were genera
https://en.wikipedia.org/wiki/Ultraproduct
The ultraproduct is a mathematical construction that appears mainly in abstract algebra and mathematical logic, in particular in model theory and set theory. An ultraproduct is a quotient of the direct product of a family of structures. All factors need to have the same signature. The ultrapower is the special case of this construction in which all factors are equal. For example, ultrapowers can be used to construct new fields from given ones. The hyperreal numbers, an ultrapower of the real numbers, are a special case of this. Some striking applications of ultraproducts include very elegant proofs of the compactness theorem and the completeness theorem, Keisler's ultrapower theorem, which gives an algebraic characterization of the semantic notion of elementary equivalence, and the Robinson–Zakon presentation of the use of superstructures and their monomorphisms to construct nonstandard models of analysis, leading to the growth of the area of nonstandard analysis, which was pioneered (as an application of the compactness theorem) by Abraham Robinson. Definition The general method for getting ultraproducts uses an index set I, a structure M_i (assumed to be non-empty in this article) for each element i of I (all of the same signature), and an ultrafilter U on I. For any two elements a = (a_i) and b = (b_i) of the Cartesian product ∏ M_i, declare them to be U-equivalent, written a ~ b or a =_U b, if and only if the set of indices on which they agree is an element of U; in symbols, {i ∈ I : a_i = b_i} ∈ U, which compares components only relative to the ultrafilter U. This binary relation ~ is an equivalence relation on the Cartesian product ∏ M_i. The ultraproduct is the quotient set of ∏ M_i with respect to ~ and is therefore sometimes denoted by ∏ M_i / U or ∏_U M_i. Explicitly, if the U-equivalence class of an element a is denoted by [a], then the ultraproduct is the set of all U-equivalence classes [a] for a in ∏ M_i. Although U was assumed to be an ultrafilter, the construction above can be carried out more generally whenever U is merely a filter on I, in which case the resulting quotient set is called a reduced product. W
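In display form, the construction just described can be summarised as follows (a restatement of the definition in standard notation, not additional material from the article):

```latex
a \sim_U b \iff \{\, i \in I : a_i = b_i \,\} \in U,
\qquad
\prod_{i \in I} M_i \,\big/\, U \;=\; \bigl\{\, [a] : a \in \textstyle\prod_{i \in I} M_i \,\bigr\}.
```

Taking every factor M_i to be the field of real numbers and U a non-principal ultrafilter on the natural numbers gives the ultrapower construction of the hyperreal numbers mentioned above.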
https://en.wikipedia.org/wiki/Weber%E2%80%93Fechner%20law
The Weber–Fechner laws are two related hypotheses in the field of psychophysics, known as Weber's law and Fechner's law. Both laws relate to human perception, more specifically the relation between the actual change in a physical stimulus and the perceived change. This includes stimuli to all senses: vision, hearing, taste, touch, and smell. Weber states that, "the minimum increase of stimulus which will produce a perceptible increase of sensation is proportional to the pre-existent stimulus," while Fechner's law is an inference from Weber's law (with additional assumptions) which states that the intensity of our sensation increases as the logarithm of an increase in energy rather than as rapidly as the increase. History and formulation of the laws Both Weber's law and Fechner's law were formulated by Gustav Theodor Fechner (1801–1887). They were first published in 1860 in the work Elemente der Psychophysik (Elements of Psychophysics). This publication was the first work ever in this field, and it is where Fechner coined the term psychophysics to describe the interdisciplinary study of how humans perceive physical magnitudes. He made the claim that "...psycho-physics is an exact doctrine of the relation of function or dependence between body and soul." Weber's law Ernst Heinrich Weber (1795–1878) was one of the first persons to approach the study of the human response to a physical stimulus in a quantitative fashion. Fechner was a student of Weber and named his first law in honor of his mentor, since it was Weber who had conducted the experiments needed to formulate the law. Fechner formulated several versions of the law, all communicating the same idea. One formulation states that ΔS / S = k, where ΔS is the just-noticeable change in the stimulus, S is the magnitude of the existing stimulus, and k is a constant. What this means is that the perceived change in stimuli is proportional to the initial stimuli. Weber's law also incorporates the just-noticeable difference (JND). This is the smallest change in stimuli that can be perceived. As stated above, the JND is proportional to the initial stimul
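In symbols, the relation between the two laws can be written as follows (a standard restatement, with S the stimulus magnitude, dS the just-noticeable change, p the perceived intensity and k a constant; the integration step is the usual derivation of Fechner's law from Weber's law under the added assumption that each just-noticeable difference corresponds to an equal increment of sensation):

```latex
\text{Weber's law:}\quad \frac{\Delta S}{S} = k
\qquad\Longrightarrow\qquad
dp = k\,\frac{dS}{S}
\qquad\Longrightarrow\qquad
p = k \ln \frac{S}{S_0}\quad\text{(Fechner's law)},
```

where S_0 is the threshold stimulus below which nothing is perceived.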
https://en.wikipedia.org/wiki/Telomerase
Telomerase, also called terminal transferase, is a ribonucleoprotein that adds a species-dependent telomere repeat sequence to the 3' end of telomeres. A telomere is a region of repetitive sequences at each end of the chromosomes of most eukaryotes. Telomeres protect the end of the chromosome from DNA damage or from fusion with neighbouring chromosomes. The fruit fly Drosophila melanogaster lacks telomerase, but instead uses retrotransposons to maintain telomeres. Telomerase is a reverse transcriptase enzyme that carries its own RNA molecule (e.g., with the sequence 3′-CCCAAUCCC-5′ in Trypanosoma brucei) which is used as a template when it elongates telomeres. Telomerase is active in gametes and most cancer cells, but is normally absent in most somatic cells. History The existence of a compensatory mechanism for telomere shortening was first found by Soviet biologist Alexey Olovnikov in 1973, who also suggested the telomere hypothesis of aging and the telomere's connections to cancer. Telomerase in the ciliate Tetrahymena was discovered by Carol W. Greider and Elizabeth Blackburn in 1984. Together with Jack W. Szostak, Greider and Blackburn were awarded the 2009 Nobel Prize in Physiology or Medicine for their discovery. The role of telomeres and telomerase in cell aging and cancer was established by scientists at biotechnology company Geron with the cloning of the RNA and catalytic components of human telomerase and the development of a polymerase chain reaction (PCR) based assay for telomerase activity called the TRAP assay, which surveys telomerase activity in multiple types of cancer. The negative stain electron microscopy (EM) structures of human and Tetrahymena telomerases were characterized in 2013. Two years later, the first cryo-electron microscopy (cryo-EM) structure of telomerase holoenzyme (Tetrahymena) was determined. In 2018, the structure of human telomerase was determined through cryo-EM by UC Berkeley scientists. Human telomerase structure The
https://en.wikipedia.org/wiki/Wireless%20local%20loop
Wireless local loop (WLL) is the use of a wireless communications link as the "last mile / first mile" connection for delivering plain old telephone service (POTS) or Internet access (marketed under the term "broadband") to telecommunications customers. Various types of WLL systems and technologies exist. Other terms for this type of access include broadband wireless access (BWA), radio in the loop (RITL), fixed-radio access (FRA), fixed wireless access (FWA) and metro wireless (MW). Definition of fixed wireless service Fixed wireless terminal (FWT) units differ from conventional mobile terminal units operating within cellular networks such as GSM in that a fixed wireless terminal or desk phone will be limited to an almost permanent location with almost no roaming abilities. WLL and FWT are generic terms for radio-based telecommunications technologies and the respective devices, which can be implemented using a number of different wireless and radio technologies. Wireless local-loop services are segmented into a number of broad market and deployment groups. Services are split between licensed commonly used by carriers and telcos and unlicensed services more commonly deployed by home users and wireless ISPs (WISPs). Licensed Point-to-Point Microwave Licensed point-to-point microwave was first deployed by AT&T Long Lines in the 1960s for high-bandwidth, interstate transmission of voice, data and television. AT&T's network covered the entire U.S., carried across hundreds of microwave towers, largely transmitting at 3700–4200 MHz and 5000–6200 MHz. The network was slowly obsoleted, starting in the late 1980's, as fiber optics became the solution of choice for communications backhaul. Following the Breakup of the Bell System on January 8, 1982, licensed point-to-point microwave solutions could be sold to enterprise and government accounts for their own private use. Frequently, the argument was to bypass wired local loops in order to save money or backup weak cop
https://en.wikipedia.org/wiki/CorDECT
corDECT is a wireless local loop standard developed in India by IIT Madras and Midas Communications (www.midascomms.com) at Chennai, under the leadership of Prof Ashok Jhunjhunwala, based on the DECT digital cordless phone standard. Overview The technology is a fixed wireless option with extremely low capital costs, making it well suited to small start-ups looking to scale as well as to sparse rural areas. It is very suitable for ICT4D projects; in India, n-Logue Communications is one organization that has deployed it for this purpose. DECT stands for Digital Enhanced Cordless Telecommunications, a standard well suited to designing small-capacity wireless local loop (WLL) systems. These systems operate only under line-of-sight (LOS) conditions and are strongly affected by weather. The system is designed for rural and suburban areas where subscriber density is low or medium. The corDECT system provides simultaneous voice and Internet access. The main parts of the system are as follows. DECT Interface Unit (DIU) This is a 1,000-line exchange that provides an E1 interface to the PSTN and can serve up to 20 base stations. The base stations are connected through an ISDN link that carries both signalling and a power feed for the base stations over distances of up to 3 km. Compact Base Station (CBS) This is the radio fixed part of the DECT wireless local loop. CBSs are typically mounted at the top of a tower, and each can serve up to 50 subscribers at 0.1 erlang of traffic. Base Station Distributor (BSD) This is a traffic aggregator used to extend the range of the wireless local loop; up to four CBSs can be connected to it. Relay Base Station (RBS) This is another technique used to extend the range of the corDECT wireless local loop, up to 25 km, by means of a radio relay chain. Fixed Remote Station (FRS) This is the subscriber-end equipment of the corDECT wireless local loop; it provides a standard telephone instrument and Internet access at up to 70 kbit/s through an Ethernet port. The new generation corDECT technology is called Broadband
https://en.wikipedia.org/wiki/Stack%20%28abstract%20data%20type%29
In computer science, a stack is an abstract data type that serves as a collection of elements, with two main operations: Push, which adds an element to the collection, and Pop, which removes the most recently added element that was not yet removed. Additionally, a peek operation can, without modifying the stack, return the value of the last element added. Calling this structure a stack is by analogy to a set of physical items stacked one atop another, such as a stack of plates. The order in which elements are added to or removed from a stack is described as last in, first out, referred to by the acronym LIFO. As with a stack of physical objects, this structure makes it easy to take an item off the top of the stack, but accessing a datum deeper in the stack may require taking off multiple other items first. Considered as a linear data structure, or more abstractly a sequential collection, the push and pop operations occur only at one end of the structure, referred to as the top of the stack. This data structure makes it possible to implement a stack as a singly linked list and as a pointer to the top element. A stack may be implemented to have a bounded capacity. If the stack is full and does not contain enough space to accept another element, the stack is in a state of stack overflow. A stack is needed to implement depth-first search. History Stacks entered the computer science literature in 1946, when Alan M. Turing used the terms "bury" and "unbury" as a means of calling and returning from subroutines. Subroutines and a 2-level stack had already been implemented in Konrad Zuse's Z4 in 1945. Klaus Samelson and Friedrich L. Bauer of Technical University Munich proposed the idea of a stack called (Engl. "operational cellar") in 1955 and filed a patent in 1957. In March 1988, by which time Samelson was deceased, Bauer received the IEEE Computer Pioneer Award for the invention of the stack principle. Similar concepts were developed, independently, by Charles L
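As an illustration of the push, pop and peek operations described above, here is a minimal sketch of a bounded stack in Python (the class and method names are chosen for this example and are not part of any particular library):

```python
class Stack:
    """A simple bounded LIFO stack backed by a Python list."""

    def __init__(self, capacity=None):
        self._items = []
        self._capacity = capacity

    def push(self, item):
        if self._capacity is not None and len(self._items) >= self._capacity:
            raise OverflowError("stack overflow")  # bounded capacity exceeded
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()        # removes the most recently added element

    def peek(self):
        if not self._items:
            raise IndexError("peek at empty stack")
        return self._items[-1]          # returns the top without modifying the stack

s = Stack(capacity=3)
s.push("a"); s.push("b"); s.push("c")
print(s.peek())  # 'c'
print(s.pop())   # 'c' -- last in, first out
print(s.pop())   # 'b'
```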
https://en.wikipedia.org/wiki/IBM%20Portable%20Personal%20Computer
The IBM Portable Personal Computer 5155 model 68 is an early portable computer developed by IBM after the success of the suitcase-size Compaq Portable. It was released in February 1984 and was quickly replaced by the IBM Convertible, only roughly two years after its debut. Design The Portable was basically a PC/XT motherboard, transplanted into a Compaq-style luggable case. The system featured 256 kilobytes of memory (expandable to 640 KB), an added CGA card connected to an internal monochrome amber composite monitor, and one or two half-height -inch 360 KB floppy disk drives, manufactured by Qume. Unlike the Compaq Portable, which used a dual-mode monitor and special display card, IBM used a stock CGA card and a 9-inch amber monochrome composite monitor, which had lower resolution. It could, however, display color if connected to an external monitor or television. A separate 83-key keyboard and cable was provided, which uses a front panel mounted phone jack styled connector RJ11. The cable from the connector then went to the back of the machine, where the original XT keyboard jack was. Experts stated that IBM developed the Portable in part because its sales force needed a computer that would compete against the Compaq Portable. If less sophisticated than the Compaq, the IBM had the advantage of a lower price tag. The motherboard had eight expansion slots. The power supply was rated 114 watts and was suitable for operation on either 115 or 230 VAC. Hard disks were a very common third-party add-on as IBM did not offer them from the factory. Typically in a two-drive context, floppy drive A: ran the operating system, and drive B: would be used for application and data diskettes. Its selling point as a "portable" was that it combined the monitor into a base unit approximating a medium-sized suitcase that could be simply set on its flat side, the back panel slid away to reveal the power connector, plugged in, the keyboard folded down or detached, and booted up for
https://en.wikipedia.org/wiki/IBM%20PC%20Convertible
The IBM PC Convertible (model 5140) is a laptop computer made by IBM, first sold in April 1986. The Convertible was IBM's first laptop-style computer, following the luggable IBM Portable, and introduced the 3½-inch floppy disk format to the IBM product line. Like modern laptops, it featured power management and the ability to run from batteries. It was replaced in 1991 by the IBM PS/2 L40 SX, and in Japan by the IBM Personal System/55note, the predecessor to the ThinkPad. Predecessors IBM had been working on a laptop for some time before the Convertible. In 1983, work was underway on a laptop similar to the Tandy Model 100, codenamed "Sweetpea", but it was rejected by Don Estridge for not being PC compatible. Another attempt in 1984 produced the "P-14" prototype machine, but it failed to pass IBM's human factors tests, especially after poor public reception of the display in the competing Data General-One. Description The PC Convertible came in three models: PC Convertible, PC Convertible Model 2 and PC Convertible Model 3. The latter two were released in October 1987 and are primarily distinguished by their LCD panels. The original Convertible used a non-backlit panel which was considered difficult to read. The Model 2 lacked a backlight as well but upgraded to an improved supertwist panel, and the Model 3 included a backlight. The other hardware specifications are largely the same for all three models. The CPU is an Intel 80C88, the CMOS version of the Intel 8088 CPU. The base configuration included of RAM, expandable to , dual 3.5-inch floppy drives, and a monochrome, CGA-compatible LCD screen. It weighed just over 12 pounds and featured a built-in carrying handle, with a battery rated for 10 hours (4 hours in the backlit Model 3). The first model was introduced at a price of , the Model 2 at with 256K of RAM and with 640K, and the Model 3 at with 256K of RAM. The LCD screen displayed characters, but has a very wide aspect ratio, so text characters an
https://en.wikipedia.org/wiki/Quantitative%20marketing%20research
Quantitative marketing research is the application of quantitative research techniques to the field of marketing research. It has roots in both the positivist view of the world, and the modern marketing viewpoint that marketing is an interactive process in which both the buyer and seller reach a satisfying agreement on the "four Ps" of marketing: Product, Price, Place (location) and Promotion. As a social research method, it typically involves the construction of questionnaires and scales. People who respond (respondents) are asked to complete the survey. Marketers use the information to obtain and understand the needs of individuals in the marketplace, and to create strategies and marketing plans. Data collection The most popular quantitative marketing research method is a survey. Surveys typically contain a combination of structured questions and open questions. Survey participants respond to the same set of questions, which allows the researcher to easily compare responses by different types of respondent. Surveys can be distributed in one of four ways: telephone, mail, in-person and online (whether by mobile or desktop). Another quantitative research method is to conduct experiments into how individuals respond to different situations or scenarios. One example of this is A/B testing of a piece of marketing communications, such as a website landing page. Website visitors are shown different versions of the landing page, and marketers track which is more effective. Differences between consumer and B2B quantitative research Quantitative research is used in both consumer research and business-to-business (B2B) research. However, there are differences in how consumer researchers and B2B researchers distribute their surveys. Generally, surveys are distributed online more than in-person, by telephone or by mail. However, in B2B research, online research is not always possible, often because it is difficult to get hold of certain business decision-makers via email
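As a sketch of how the A/B test mentioned above might be analysed, the following compares the conversion rates of two landing-page variants with a two-proportion z-test. The visitor counts and significance threshold are invented for illustration; they are not drawn from the article.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical results: visitors and conversions for landing pages A and B.
n_a, conv_a = 5000, 250   # 5.0% conversion
n_b, conv_b = 5000, 310   # 6.2% conversion

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)                  # pooled rate under H0: no difference
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))    # standard error of the difference
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))              # two-sided test

print(f"z = {z:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Variant B's conversion rate differs significantly from A's.")
```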
https://en.wikipedia.org/wiki/Apache%20SpamAssassin
Apache SpamAssassin is a computer program used for e-mail spam filtering. It uses a variety of spam-detection techniques, including DNS and fuzzy checksum techniques, Bayesian filtering, external programs, blacklists and online databases. It is released under the Apache License 2.0 and is a part of the Apache Foundation since 2004. The program can be integrated with the mail server to automatically filter all mail for a site. It can also be run by individual users on their own mailbox and integrates with several mail programs. Apache SpamAssassin is highly configurable; if used as a system-wide filter it can still be configured to support per-user preferences. History Apache SpamAssassin was created by Justin Mason, who had maintained a number of patches against an earlier program named filter.plx by Mark Jeftovic, which in turn was begun in August 1997. Mason rewrote all of Jeftovic's code from scratch and uploaded the resulting codebase to SourceForge on April 20, 2001. In Summer 2004 the project became an Apache Software Foundation project and later officially renamed to Apache SpamAssassin. The SpamAssassin 3.4.2 release in September 2019 was the first in over three years, but the developers say that "The project has picked up a new set of developers and is moving forward again.". In December 2019, version 3.4.3 of SpamAssassin was released. In April, 2021, version 3.4.6 of SpamAssassin was released. It was announced that development of version 4.0.0 would become project's focus. Methods of usage Apache SpamAssassin is a Perl-based application ( in CPAN) which is usually used to filter all incoming mail for one or several users. It can be run as a standalone application or as a subprogram of another application (such as a Milter, SA-Exim, Exiscan, MailScanner, MIMEDefang, Amavis) or as a client () that communicates with a daemon (). The client/server or embedded mode of operation has performance benefits, but under certain circumstances may introduce add
https://en.wikipedia.org/wiki/Core%20War
Core War is a 1984 programming game created by D. G. Jones and A. K. Dewdney in which two or more battle programs (called "warriors") compete for control of a virtual computer. These battle programs are written in an abstract assembly language called Redcode. The standards for the language and the virtual machine were initially set by the International Core Wars Society (ICWS), but later standards were determined by community consensus. Gameplay At the beginning of a game, each battle program is loaded into memory at a random location, after which each program executes one instruction in turn. The goal of the game is to cause the processes of opposing programs to terminate (which happens if they execute an invalid instruction), leaving the victorious program in sole possession of the machine. The earliest published version of Redcode defined only eight instructions. The ICWS-86 standard increased the number to 10 while the ICWS-88 standard increased it to 11. The currently used 1994 draft standard has 16 instructions. However, Redcode supports a number of different addressing modes and (starting from the 1994 draft standard) instruction modifiers which increase the actual number of operations possible to 7168. The Redcode standard leaves the underlying instruction representation undefined and provides no means for programs to access it. Arithmetic operations may be done on the two address fields contained in each instruction, but the only operations supported on the instruction codes themselves are copying and comparing for equality. Constant instruction length and time Each Redcode instruction occupies exactly one memory slot and takes exactly one cycle to execute. The rate at which a process executes instructions, however, depends on the number of other processes in the queue, as processing time is shared equally. Circular memory The memory is addressed in units of one instruction. The memory space (or core) is of finite size, but only relative addressing is u
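The round-robin execution and circular, relatively addressed core described above can be illustrated with a toy simulator. This sketch supports only a single MOV-style copy instruction and is in no way a Redcode implementation; the names and sizes are invented for the example. The one-instruction program it loads is the classic "Imp" (MOV 0, 1), which endlessly copies itself one cell ahead.

```python
CORE_SIZE = 8000

# A tiny "core": each cell holds an (opcode, a_field, b_field) tuple.
core = [("DAT", 0, 0)] * CORE_SIZE

def load(program, start):
    for offset, instruction in enumerate(program):
        core[(start + offset) % CORE_SIZE] = instruction   # addresses wrap around

imp = [("MOV", 0, 1)]          # copy the current instruction into the next cell
load(imp, start=100)

def step(pc):
    """Execute one instruction; all addressing is relative to pc, modulo the core size."""
    op, a, b = core[pc % CORE_SIZE]
    if op == "MOV":
        core[(pc + b) % CORE_SIZE] = core[(pc + a) % CORE_SIZE]
        return (pc + 1) % CORE_SIZE    # next instruction, wrapping at the end of memory
    raise RuntimeError("process executed an invalid instruction and is terminated")

# In a real game, two or more processes would alternate turns here; a lone Imp just crawls.
pc = 100
for _ in range(5):
    pc = step(pc)
print(pc)  # 105
```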
https://en.wikipedia.org/wiki/Consistency%20model
In computer science, a consistency model specifies a contract between the programmer and a system, wherein the system guarantees that if the programmer follows the rules for operations on memory, memory will be consistent and the results of reading, writing, or updating memory will be predictable. Consistency models are used in distributed systems like distributed shared memory systems or distributed data stores (such as filesystems, databases, optimistic replication systems or web caching). Consistency is different from coherence, which occurs in systems that are cached or cache-less, and is consistency of data with respect to all processors. Coherence deals with maintaining a global order in which writes to a single location or single variable are seen by all processors. Consistency deals with the ordering of operations to multiple locations with respect to all processors. High level languages, such as C++ and Java, maintain the consistency contract by translating memory operations into low-level operations in a way that preserves memory semantics, reordering some memory instructions, and encapsulating required synchronization with library calls such as pthread_mutex_lock(). Example Assume that the following case occurs: The row X is replicated on nodes M and N The client A writes row X to node M After a period of time t, client B reads row X from node N The consistency model determines whether client B will definitely see the write performed by client A, will definitely not, or cannot depend on seeing the write. Types Consistency models define rules for the apparent order and visibility of updates, and are on a continuum with tradeoffs. There are two methods to define and categorize consistency models: issue and view. Issue The issue method describes the restrictions that define how a process can issue operations. View The view method defines the order of operations visible to processes. For example, a consistency model can define that a process is not
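The replication example above can be made concrete with a toy model of two replicas, where the write to node M reaches node N only after a propagation delay. Whether a read at node N during that delay must return the new value is exactly what the consistency model specifies. The class below is purely illustrative (names and the delay mechanism are invented) and models an eventually consistent store, in which client B may or may not see client A's write.

```python
import time

class LaggyReplica:
    """Toy replica pair: writes go to node M and reach node N only after `lag` seconds."""

    def __init__(self, lag=0.5):
        self.lag = lag
        self.m = {}              # node M's copy
        self.n = {}              # node N's copy
        self.pending = []        # (apply_at_time, key, value) not yet visible on N

    def write_to_m(self, key, value):
        self.m[key] = value
        self.pending.append((time.time() + self.lag, key, value))

    def read_from_n(self, key):
        now = time.time()
        still_pending = []
        for apply_at, k, v in self.pending:
            if apply_at <= now:
                self.n[k] = v            # propagation finally applies the write
            else:
                still_pending.append((apply_at, k, v))
        self.pending = still_pending
        return self.n.get(key)           # may be stale: weaker models permit this

store = LaggyReplica(lag=0.2)
store.write_to_m("X", "new")
print(store.read_from_n("X"))   # likely None: the write is not yet visible on N
time.sleep(0.3)
print(store.read_from_n("X"))   # "new": the replicas have converged
```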
https://en.wikipedia.org/wiki/Martingale%20%28probability%20theory%29
In probability theory, a martingale is a sequence of random variables (i.e., a stochastic process) for which, at a particular time, the conditional expectation of the next value in the sequence is equal to the present value, regardless of all prior values. History Originally, martingale referred to a class of betting strategies that was popular in 18th-century France. The simplest of these strategies was designed for a game in which the gambler wins their stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler double their bet after every loss so that the first win would recover all previous losses plus win a profit equal to the original stake. As the gambler's wealth and available time jointly approach infinity, their probability of eventually flipping heads approaches 1, which makes the martingale betting strategy seem like a sure thing. However, the exponential growth of the bets eventually bankrupts its users due to finite bankrolls. Stopped Brownian motion, which is a martingale process, can be used to model the trajectory of such games. The concept of martingale in probability theory was introduced by Paul Lévy in 1934, though he did not name it. The term "martingale" was introduced later by Jean Ville, who also extended the definition to continuous martingales. Much of the original development of the theory was done by Joseph Leo Doob among others. Part of the motivation for that work was to show the impossibility of successful betting strategies in games of chance. Definitions A basic definition of a discrete-time martingale is a discrete-time stochastic process (i.e., a sequence of random variables) X1, X2, X3, ... that satisfies, for any time n, E(|Xn|) < ∞ and E(Xn+1 | X1, ..., Xn) = Xn. That is, the conditional expected value of the next observation, given all the past observations, is equal to the most recent observation. Martingale sequences with respect to another sequence More generally, a sequence Y1, Y2, Y3 ... is said to be a martingale with resp
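Written out in display form, the defining conditions for a discrete-time martingale are (a standard restatement of the definition just given):

```latex
\mathbf{E}\bigl(\lvert X_n \rvert\bigr) < \infty,
\qquad
\mathbf{E}\bigl(X_{n+1} \mid X_1, \ldots, X_n\bigr) = X_n .
```

For example, if Xn is a gambler's fortune after n tosses of a fair coin, winning or losing one unit with equal probability each time, then E(Xn+1 | X1, ..., Xn) = Xn + (1/2)(+1) + (1/2)(−1) = Xn, so the fortune is a martingale.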
https://en.wikipedia.org/wiki/Maillard%20reaction
The Maillard reaction ( ; ) is a chemical reaction between amino acids and reducing sugars to create melanoidins, the compounds which give browned food its distinctive flavor. Seared steaks, fried dumplings, cookies and other kinds of biscuits, breads, toasted marshmallows, and many other foods undergo this reaction. It is named after French chemist Louis Camille Maillard, who first described it in 1912 while attempting to reproduce biological protein synthesis. The reaction is a form of non-enzymatic browning which typically proceeds rapidly from around . Many recipes call for an oven temperature high enough to ensure that a Maillard reaction occurs. At higher temperatures, caramelization (the browning of sugars, a distinct process) and subsequently pyrolysis (final breakdown leading to burning and the development of acrid flavors) become more pronounced. The reactive carbonyl group of the sugar reacts with the nucleophilic amino group of the amino acid and forms a complex mixture of poorly characterized molecules responsible for a range of aromas and flavors. This process is accelerated in an alkaline environment (e.g., lye applied to darken pretzels; see lye roll), as the amino groups () are deprotonated, and hence have an increased nucleophilicity. This reaction is the basis for many of the flavoring industry's recipes. At high temperatures, a probable carcinogen called acrylamide can form. This can be discouraged by heating at a lower temperature, adding asparaginase, or injecting carbon dioxide. In the cooking process, Maillard reactions can produce hundreds of different flavor compounds depending on the chemical constituents in the food, the temperature, the cooking time, and the presence of air. These compounds, in turn, often break down to form yet more flavor compounds. Flavour scientists have used the Maillard reaction over the years to make artificial flavors. History In 1912, Louis Camille Maillard published a paper describing the reaction between a
https://en.wikipedia.org/wiki/Distributed%20control%20system
A distributed control system (DCS) is a computerised control system for a process or plant usually with many control loops, in which autonomous controllers are distributed throughout the system, but there is also central operator supervisory control. This is in contrast to systems that use centralized controllers; either discrete controllers located at a central control room or within a central computer. The DCS concept increases reliability and reduces installation costs by localising control functions near the process plant, with remote monitoring and supervision. Distributed control systems first emerged in large, high value, safety critical process industries, and were attractive because the DCS manufacturer would supply both the local control level and central supervisory equipment as an integrated package, thus reducing design integration risk. Today the functionality of Supervisory control and data acquisition (SCADA) and DCS systems is very similar, but DCS tends to be used on large continuous process plants where high reliability and security is important, and the control room is not geographically remote. Many machine control systems exhibit properties similar to those of plant and process control systems. Structure The key attribute of a DCS is its reliability due to the distribution of the control processing around nodes in the system. This mitigates a single processor failure. If a processor fails, it will only affect one section of the plant process, as opposed to a failure of a central computer which would affect the whole process. This distribution of computing power local to the field Input/Output (I/O) connection racks also ensures fast controller processing times by removing possible network and central processing delays. The accompanying diagram is a general model which shows functional manufacturing levels using computerised control. Referring to the diagram: Level 0 contains the field devices such as flow and temperature sensors, and final cont
https://en.wikipedia.org/wiki/Tularemia
Tularemia, also known as rabbit fever, is an infectious disease caused by the bacterium Francisella tularensis. Symptoms may include fever, skin ulcers, and enlarged lymph nodes. Occasionally, a form that results in pneumonia or a throat infection may occur. The bacterium is typically spread by ticks, deer flies, or contact with infected animals. It may also be spread by drinking contaminated water or breathing in contaminated dust. It does not spread directly between people. Diagnosis is by blood tests or cultures of the infected site. Prevention is by using insect repellent, wearing long pants, rapidly removing ticks, and not disturbing dead animals. Treatment is typically with the antibiotic streptomycin. Gentamicin, doxycycline, or ciprofloxacin may also be used. Between the 1970s and 2015, around 200 cases were reported in the United States a year. Males are affected more often than females. It occurs most frequently in the young and the middle aged. In the United States, most cases occur in the summer. The disease is named after Tulare County, California, where the disease was discovered in 1911. A number of other animals, such as rabbits, may also be infected. Signs and symptoms Depending on the site of infection, tularemia has six characteristic clinical variants: ulceroglandular (the most common type representing 75% of all forms), glandular, oropharyngeal, pneumonic, oculoglandular, and typhoidal. The incubation period for tularemia is 1 to 14 days; most human infections become apparent after three to five days. In most susceptible mammals, the clinical signs include fever, lethargy, loss of appetite, signs of sepsis, and possibly death. Nonhuman mammals rarely develop the skin lesions seen in people. Subclinical infections are common, and animals often develop specific antibodies to the organism. Fever is moderate or very high, and tularemia bacilli can be isolated from blood cultures at this stage. The face and eyes redden and become inflamed. Infla
https://en.wikipedia.org/wiki/Visual%20Component%20Library
The Visual Component Library (VCL) is a visual component-based object-oriented framework for developing the user interface of Microsoft Windows applications. It is written in Object Pascal. History The VCL was developed by Borland for use in, and is tightly integrated with, its Delphi and C++Builder RAD tools. In 1995 Borland released Delphi, its first release of an Object Pascal IDE and language. Up until that point, Borland's Turbo Pascal for DOS and Windows was largely a procedural language, with minimal object-oriented features, and building UI frameworks with the language required using frameworks like Turbo Vision and Object Windows Library. OWL, a similar framework to MFC, required writing code to create UI objects. A key aim of the VCL combined with the Delphi language was to change the requirements of building a user interface. (For context, the Delphi variant of Pascal had a number of innovative object-oriented features, such as properties and runtime type information, inspired by Modula and Smalltalk.) At the time, much UI code work required creating classes inheriting from other classes, and customized objects were often not reusable (for example, a button that performs a specific action cannot be reused in a different application.) UI code was also complicated, forcing the programmer to understand and use the Windows API, manage GDI resources, etc. Finally, a visual user interface arguably should be designed visually, and yet most tools to do so - at the time, mainly Visual Basic - did so in terms of the designer outputting code, creating a fragile, un-manually-editable situation - a problem that still persists today with many UI frameworks, particularly C++-based ones such as Qt. The combination of the Delphi language and the VCL framework written in that language addressed these by: A streaming framework, allowing an object and subobjects to be streamed to text or binary format - TComponent, the root class of the VCL framework A form designe
https://en.wikipedia.org/wiki/Component%20Library%20for%20Cross%20Platform
Component Library for Cross Platform (CLX) (pronounced clicks) is a cross-platform visual component-based framework for developing Microsoft Windows and Linux applications. It was developed by Borland for use in its Kylix, Delphi, and C++ Builder software development environments. Its aim was to replace the popular Microsoft Foundation Classes with the Visual Component Library. CLX was based on Qt by Nokia. The API of CLX almost completely followed the VCL. It was envisioned that existing applications using the VCL would be recompiled with CLX. However, due to lacklustre performance on Windows, subtle differences from the VCL, and bugs, it did not become the expected successor to the VCL. The commercial failure of Kylix stopped further development of CLX. In terms of its object-oriented approach, CLX forms an object hierarchy in which the TObject class serves as the base class. All other classes inherit directly or indirectly from the TObject class. Today, many concepts that were defined with CLX have been implemented with the Lazarus Component Library (LCL) for the Lazarus IDE. By targeting different widgetsets, the LCL is able to support an even larger spectrum of platforms, including Mac OS X and Android.
https://en.wikipedia.org/wiki/WxWidgets
wxWidgets (formerly wxWindows) is a widget toolkit and tools library for creating graphical user interfaces (GUIs) for cross-platform applications. wxWidgets enables a program's GUI code to compile and run on several computer platforms with minimal or no code changes. A wide choice of compilers and other tools to use with wxWidgets facilitates development of sophisticated applications. wxWidgets supports a comprehensive range of popular operating systems and graphical libraries, both proprietary and free, and is widely deployed in prominent organizations (see text). The project was started under the name wxWindows in 1992 by Julian Smart at the University of Edinburgh. The project was renamed wxWidgets in 2004 in response to a trademark claim by Microsoft UK. It is free and open source software, distributed under the terms of the wxWidgets Licence, which satisfies those who wish to produce for GPL and proprietary software. Portability and deployment wxWidgets covers systems such as Microsoft Windows, Mac OS (Carbon and Cocoa), iOS (Cocoa Touch), Linux/Unix (X11, Motif, and GTK), OpenVMS, OS/2 and AmigaOS. A version for embedded systems is under development. wxWidgets is used across many industry sectors, most notably by Xerox, Advanced Micro Devices (AMD), Lockheed Martin, NASA and the Center for Naval Analyses. It is also used in the public sector and education by, for example, Dartmouth Medical School, National Human Genome Research Institute, National Center for Biotechnology Information, and many others. wxWidgets is used in many open source projects, and by individual developers. History wxWidgets (initially wxWindows; "w" is for Windows, and "x" is for X Window System) was started in 1992 by Julian Smart at the University of Edinburgh. He attained an honours degree in Computational science from the University of St Andrews in 1986, and is still a core developer. On 20 February 2004, the developers of wxWindows announced that the project was changing its
https://en.wikipedia.org/wiki/C-QUAM
C-QUAM (Compatible QUadrature Amplitude Modulation) is the method of AM stereo broadcasting used in Canada, the United States and most other countries. It was invented in 1977 by Norman Parker, Francis Hilbert, and Yoshio Sakaie, and published in an IEEE journal. Using circuitry developed by Motorola, C-QUAM uses quadrature amplitude modulation (QAM) to encode the stereo separation signal. This extra signal is then stripped down in such a way that it is compatible with the envelope detector of older receivers, hence the name C-QUAM for Compatible. A 25 Hz pilot tone is added to trigger receivers; unlike its counterpart in FM radio, this carrier is not necessary for the reconstruction of the original audio sources. Description The C-QUAM signal is composed of two distinct modulation stages: a conventional AM version and a compatible quadrature PM version. Stage 1 provides the transmitter with a summed L+R mono audio input. This input is precisely the same as conventional AM-Mono transmission methods and ensures 100% compatibility with conventional 'envelope detector' receivers. Stage 2 provides the stereo multiplexed (muxed) audio input and replaces the conventional crystal oscillator stage of otherwise AM-Mono transmitters. So as to not create interference with 'envelope detector' receivers, the stage 2 signal takes the multiplexed (muxed) audio signals and phase modulates both, using a divide-by-4 Johnson counter and two balanced modulators operating 90 degrees out of phase with each other. Stage 2 is not amplitude modulated, it is phase modulated, and is made up of both a L+R input and a L-R input. To recover the 'stereo' audio signals, a synchronous detector extracts the L-R audio from the phase modulated quadrature portion of the signal created in stage 2. The L+R audio can be extracted from either the AM (stage 1) or the PM (stage 2) modulation component. From there, the audio can be readily de-multiplexed (de-muxed) back to 'stereo', a.k.a. Left and Ri
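A simplified way to write the two stages described above is the usual textbook presentation sketched below; this is not Motorola's exact specification, and the small 25 Hz pilot added to the L−R channel is omitted for clarity. The intermediate quadrature (QUAM) signal carries L+R in phase and L−R in quadrature; C-QUAM then keeps only its phase and forces the envelope back to the mono-compatible 1 + L + R.

```latex
s_{\text{QUAM}}(t) = \bigl[1 + L(t) + R(t)\bigr]\cos\omega_c t + \bigl[L(t) - R(t)\bigr]\sin\omega_c t,
```

```latex
s_{\text{C-QUAM}}(t) = \bigl[1 + L(t) + R(t)\bigr]\cos\bigl(\omega_c t + \varphi(t)\bigr),
\qquad
\varphi(t) = \arctan\frac{L(t) - R(t)}{1 + L(t) + R(t)} .
```

In this form an envelope detector recovers the mono L+R signal unchanged, while a synchronous detector can recover L−R from the phase term, matching the receiver behaviour described above.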
https://en.wikipedia.org/wiki/Building%20%28mathematics%29
In mathematics, a building (also Tits building, named after Jacques Tits) is a combinatorial and geometric structure which simultaneously generalizes certain aspects of flag manifolds, finite projective planes, and Riemannian symmetric spaces. Buildings were initially introduced by Jacques Tits as a means to understand the structure of exceptional groups of Lie type. The more specialized theory of Bruhat–Tits buildings (named also after François Bruhat) plays a role in the study of -adic Lie groups analogous to that of the theory of symmetric spaces in the theory of Lie groups. Overview The notion of a building was invented by Jacques Tits as a means of describing simple algebraic groups over an arbitrary field. Tits demonstrated how to every such group one can associate a simplicial complex with an action of , called the spherical building of . The group imposes very strong combinatorial regularity conditions on the complexes that can arise in this fashion. By treating these conditions as axioms for a class of simplicial complexes, Tits arrived at his first definition of a building. A part of the data defining a building is a Coxeter group , which determines a highly symmetrical simplicial complex , called the Coxeter complex. A building is glued together from multiple copies of , called its apartments, in a certain regular fashion. When is a finite Coxeter group, the Coxeter complex is a topological sphere, and the corresponding buildings are said to be of spherical type. When is an affine Weyl group, the Coxeter complex is a subdivision of the affine plane and one speaks of affine, or Euclidean, buildings. An affine building of type is the same as an infinite tree without terminal vertices. Although the theory of semisimple algebraic groups provided the initial motivation for the notion of a building, not all buildings arise from a group. In particular, projective planes and generalized quadrangles form two classes of graphs studied in incidence geom
https://en.wikipedia.org/wiki/Polyglot%20%28computing%29
In computing, a polyglot is a computer program or script (or other file) written in a valid form of multiple programming languages or file formats. The name was coined by analogy to multilingualism. A polyglot file is composed by combining syntax from two or more different formats. When the file formats are to be compiled or interpreted as source code, the file can be said to be a polyglot program, though file formats and source code syntax are both fundamentally streams of bytes, and exploiting this commonality is key to the development of polyglots. Polyglot files have practical applications in compatibility, but can also present a security risk when used to bypass validation or to exploit a vulnerability. History Polyglot programs have been crafted as challenges and curios in hacker culture since at least the early 1990s. A notable early example, named simply polyglot was published on the Usenet group rec.puzzles in 1991, supporting 8 languages, though this was inspired by even earlier programs. In 2000, a polyglot program was named a winner in the International Obfuscated C Code Contest. In the 21st century, polyglot programs and files gained attention as a covert channel mechanism for propagation of malware. Construction A polyglot is composed by combining syntax from two or more different formats, leveraging various syntactic constructs that are either common between the formats, or constructs that are language specific but carrying different meaning in each language. A file is a valid polyglot if it can be successfully interpreted by multiple interpreting programs. For example, a PDF-Zip polyglot might be opened as both a valid PDF document and decompressed as a valid zip archive. To maintain validity across interpreting programs, one must ensure that constructs specific to one interpreter are not interpreted by another, and vice versa. This is often accomplished by hiding language-specific constructs in segments interpreted as comments or plain text of
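A small, commonly cited illustration of the technique is a file that is simultaneously a valid shell script and a valid Python program: the shell sees the second non-comment line as a command that re-executes the file under Python, while Python sees the very same line as a harmless concatenation of string literals. This is a generic example of the construction principle described above, not one of the contest entries mentioned in the article.

```python
#!/bin/sh
# When run as a shell script, the next line replaces the shell with a Python
# interpreter running this same file; when run as Python, it is just adjacent
# string literals concatenated into an unused expression statement.
"exec" "python3" "$0" "$@"
print("Hello from the Python half of the polyglot")
```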
https://en.wikipedia.org/wiki/Skvader
The skvader () is a Swedish fictional creature that was constructed in 1918 by the taxidermist Rudolf Granberg and is permanently displayed at the museum at Norra Berget in Sundsvall. It has the forequarters and hindlegs of a European hare (Lepus europaeus), and the back, wings and tail of a female wood grouse (Tetrao urogallus). It was later jokingly given the Latin name Tetrao lepus pseudo-hybridus rarissimus L. The term has taken on a general meaning of two disparate elements put together, often conveying a sense of a less fortunate such combination. Name The name is a combination of two words, explained by the Svenska Akademiens ordbok (Dictionary of the Swedish Academy) as being from the "prefix from (quack or chirp), and the suffix -der from (wood grouse)". Origins The skvader originates from a tall tale hunting story told by a man named Håkan Dahlmark during a dinner at a restaurant in Sundsvall in the beginning of the 20th century. To the amusement of the other guests, Dahlmark claimed that he in 1874 had shot such an animal during a hunt north of Sundsvall. On his birthday in 1907, his housekeeper jokingly presented him with a painting of the animal, made by her nephew and shortly before his death in 1912, Dahlmark donated the painting to a local museum. During an exhibition in Örnsköldsvik in 1916 the manager of the museum became acquainted with the taxidermist Rudolf Granberg. He then mentioned the hunting story and the painting and asked Granberg if he could re-construct the animal. In 1918 Granberg had completed the skvader and it has since then been a very popular exhibition item at the museum, which also has the painting on display. Similar creatures A strikingly similar creature called the "rabbit-bird" was described by Pliny the Elder in Natural History. This creature had the body of a bird with a rabbit's head and was said to have inhabited the Alps. Other similar creatures include the Bavarian wolpertinger, the Austrian raurakl, the Thur
https://en.wikipedia.org/wiki/Hemodynamics
Hemodynamics or haemodynamics are the dynamics of blood flow. The circulatory system is controlled by homeostatic mechanisms of autoregulation, just as hydraulic circuits are controlled by control systems. The hemodynamic response continuously monitors and adjusts to conditions in the body and its environment. Hemodynamics explains the physical laws that govern the flow of blood in the blood vessels. Blood flow ensures the transportation of nutrients, hormones, metabolic waste products, oxygen, and carbon dioxide throughout the body to maintain cell-level metabolism, the regulation of the pH, osmotic pressure and temperature of the whole body, and the protection from microbial and mechanical harm. Blood is a non-Newtonian fluid, and is most efficiently studied using rheology rather than hydrodynamics. Because blood vessels are not rigid tubes, classic hydrodynamics and fluid mechanics based on the use of classical viscometers are not capable of explaining haemodynamics. The study of the blood flow is called hemodynamics, and the study of the properties of the blood flow is called hemorheology. Blood Blood is a complex liquid. Blood is composed of plasma and formed elements. The plasma contains 91.5% water, 7% proteins and 1.5% other solutes. The formed elements are platelets, white blood cells, and red blood cells. The presence of these formed elements and their interaction with plasma molecules are the main reasons why blood differs so much from ideal Newtonian fluids. Viscosity of plasma Normal blood plasma behaves like a Newtonian fluid at physiological rates of shear. A typical value for the viscosity of normal human plasma at 37 °C is 1.4 mN·s/m2. The viscosity of normal plasma varies with temperature in the same way as does that of its solvent water; a 5 °C increase of temperature in the physiological range reduces plasma viscosity by about 10%. Osmotic pressure of plasma The osmotic pressure of a solution is determined by the number of particles present
https://en.wikipedia.org/wiki/Cynology
Cynology (rarely kynology, ) is the study of matters related to canines or domestic dogs. In English, it is a term sometimes used to denote a serious zoological approach to the study of dogs as well as by writers on canine subjects, dog breeders, trainers and enthusiasts who study the dog informally. Etymology Cynology is a classical compound word (from Greek , , , , 'dog'; and , -logia) referring to the study of dogs. The word is not found in major English dictionaries and it is not a recognized study in English-speaking countries. Similar words are in other languages, such German and Dutch . is also the source of the English word cynic, and is directly related to canine and hound. Usage in English The suffix '-logy' in English words refers to a study, or an academic discipline, or field of scientific study. English classical compound words of this type may confer an impression of scientific rigor on a non-scientific occupation or profession. Usage in English of the word cynology is rare, and occasionally found in the names of dog training academies, with cynologist sometimes being used as a title by some dog trainers or handlers. People who informally study the dog may refer to themselves as 'cynologists' to imply serious study or scientific work. The very rare term cynologist in English is generally found to refer to "canine specialists" such as; certified care professionals, certified show judges, breeders, breed enthusiasts, certified dog-trainers and professional dog-handlers. Usage in other languages Cynology may have other connotations or uses in languages other than English; see German , Dutch and Czech . A similar word is used to refer to dog handlers and dog trainers in Russia. A veterinary clinic in Armenia offers a 'cynologist' to assist with dog training. A magazine in the Baltic states described as 'dedicated to the development of cynology in the Baltic countries' covers dog training, dog shows, and veterinary advice (a hobbyist magazi
https://en.wikipedia.org/wiki/Control%20system
A control system manages, commands, directs, or regulates the behavior of other devices or systems using control loops. It can range from a single home heating controller using a thermostat controlling a domestic boiler to large industrial control systems which are used for controlling processes or machines. The control systems are designed via control engineering process. For continuously modulated control, a feedback controller is used to automatically control a process or operation. The control system compares the value or status of the process variable (PV) being controlled with the desired value or setpoint (SP), and applies the difference as a control signal to bring the process variable output of the plant to the same value as the setpoint. For sequential and combinational logic, software logic, such as in a programmable logic controller, is used. Open-loop and closed-loop control Feedback control systems Logic control Logic control systems for industrial and commercial machinery were historically implemented by interconnected electrical relays and cam timers using ladder logic. Today, most such systems are constructed with microcontrollers or more specialized programmable logic controllers (PLCs). The notation of ladder logic is still in use as a programming method for PLCs. Logic controllers may respond to switches and sensors and can cause the machinery to start and stop various operations through the use of actuators. Logic controllers are used to sequence mechanical operations in many applications. Examples include elevators, washing machines and other systems with interrelated operations. An automatic sequential control system may trigger a series of mechanical actuators in the correct sequence to perform a task. For example, various electric and pneumatic transducers may fold and glue a cardboard box, fill it with the product and then seal it in an automatic packaging machine. PLC software can be written in many different ways – ladder diagra
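The feedback loop described above, compare the process variable with the setpoint and apply the difference as a control signal, can be sketched in a few lines. Here a simple proportional controller drives a toy first-order heating process toward the setpoint; the gain and the process model are invented for the illustration and do not come from the article.

```python
def proportional_controller(setpoint, pv, kp=0.5):
    """Control signal proportional to the error between setpoint (SP) and process variable (PV)."""
    error = setpoint - pv
    return kp * error

# Toy plant: room temperature rises with heater output and leaks heat to a 15 degree ambient.
temperature = 15.0
setpoint = 21.0
for step in range(20):
    heater = proportional_controller(setpoint, temperature)
    temperature += 0.8 * heater - 0.1 * (temperature - 15.0)
    print(f"step {step:2d}: PV = {temperature:.2f}")
```

The run settles just below the setpoint, showing the steady-state offset that a proportional-only controller leaves and that an added integral term would remove.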
https://en.wikipedia.org/wiki/Microscope%20image%20processing
Microscope image processing is a broad term that covers the use of digital image processing techniques to process, analyze and present images obtained from a microscope. Such processing is now commonplace in a number of diverse fields such as medicine, biological research, cancer research, drug testing, metallurgy, etc. A number of manufacturers of microscopes now specifically design in features that allow the microscopes to interface to an image processing system. Image acquisition Until the early 1990s, most image acquisition in video microscopy applications was typically done with an analog video camera, often simply closed circuit TV cameras. While this required the use of a frame grabber to digitize the images, video cameras provided images at full video frame rate (25-30 frames per second) allowing live video recording and processing. While the advent of solid state detectors yielded several advantages, the real-time video camera was actually superior in many respects. Today, acquisition is usually done using a CCD camera mounted in the optical path of the microscope. The camera may be full colour or monochrome. Very often, very high resolution cameras are employed to gain as much direct information as possible. Cryogenic cooling is also common, to minimise noise. Often digital cameras used for this application provide pixel intensity data to a resolution of 12-16 bits, much higher than is used in consumer imaging products. Ironically, in recent years, much effort has been put into acquiring data at video rates, or higher (25-30 frames per second or higher). What was once easy with off-the-shelf video cameras now requires special, high speed electronics to handle the vast digital data bandwidth. Higher speed acquisition allows dynamic processes to be observed in real time, or stored for later playback and analysis. Combined with the high image resolution, this approach can generate vast quantities of raw data, which can be a challenge to deal with, even
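To give a sense of the "vast digital data bandwidth" mentioned above, a quick back-of-the-envelope calculation for a hypothetical camera is shown below; the resolution, bit depth and frame rate are invented round numbers, not figures from the article.

```python
width, height = 2048, 2048        # pixels (hypothetical sensor)
bit_depth = 16                    # bits per pixel, within the 12-16 bit range noted above
fps = 30                          # video-rate acquisition

bytes_per_frame = width * height * bit_depth // 8
rate_mb_s = bytes_per_frame * fps / 1e6
print(f"{bytes_per_frame / 1e6:.1f} MB per frame, {rate_mb_s:.0f} MB/s sustained")
# roughly 8.4 MB per frame and about 252 MB/s - hours of acquisition quickly reach terabytes
```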
https://en.wikipedia.org/wiki/Deconvolution
In mathematics, deconvolution is the operation inverse to convolution. Both operations are used in signal processing and image processing. For example, it may be possible to recover the original signal after a filter (convolution) by using a deconvolution method with a certain degree of accuracy. Due to the measurement error of the recorded signal or image, it can be demonstrated that the worse the signal-to-noise ratio (SNR), the worse the reversing of a filter will be; hence, inverting a filter is not always a good solution as the error amplifies. Deconvolution offers a solution to this problem. The foundations for deconvolution and time-series analysis were largely laid by Norbert Wiener of the Massachusetts Institute of Technology in his book Extrapolation, Interpolation, and Smoothing of Stationary Time Series (1949). The book was based on work Wiener had done during World War II but that had been classified at the time. Some of the early attempts to apply these theories were in the fields of weather forecasting and economics. Description In general, the objective of deconvolution is to find the solution f of a convolution equation of the form: f ∗ g = h. Usually, h is some recorded signal, and f is some signal that we wish to recover, but has been convolved with a filter or distortion function g, before we recorded it. Usually, h is a distorted version of f and the shape of f can't be easily recognized by the eye or simpler time-domain operations. The function g represents the impulse response of an instrument or a driving force that was applied to a physical system. If we know g, or at least know the form of g, then we can perform deterministic deconvolution. However, if we do not know g in advance, then we need to estimate it. This can be done using methods of statistical estimation, or by modelling the physical principles of the underlying system, such as the electrical circuit equations or diffusion equations. There are several deconvolution techniques, depend
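A minimal numerical illustration of solving f ∗ g = h when g is known is division in the frequency domain, with a small constant added to avoid amplifying noise where g's spectrum is tiny. This is a naive regularised inverse filter for a noise-free toy case, not one of the named methods from the literature; the signal and kernel are invented for the example.

```python
import numpy as np

# Hypothetical original signal f and known blurring kernel g.
f = np.zeros(64); f[20] = 1.0; f[35] = 0.5          # two spikes
g = np.array([0.2, 0.6, 0.2])                        # simple smoothing filter
h = np.convolve(f, g, mode="full")                   # recorded signal h = f * g

n = len(h)
F_est = np.fft.fft(h) / (np.fft.fft(g, n) + 1e-6)    # naive deconvolution: divide spectra
f_est = np.real(np.fft.ifft(F_est))[: len(f)]        # crop back to the original length

print(np.allclose(f_est, f, atol=1e-3))              # True: the spikes are recovered
```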
https://en.wikipedia.org/wiki/Shared-nothing%20architecture
A shared-nothing architecture (SN) is a distributed computing architecture in which each update request is satisfied by a single node (processor/memory/storage unit) in a computer cluster. The intent is to eliminate contention among nodes. Nodes do not share (independently access) the same memory or storage. One alternative architecture is shared everything, in which requests are satisfied by arbitrary combinations of nodes. This may introduce contention, as multiple nodes may seek to update the same data at the same time. SN eliminates single points of failure, allowing the overall system to continue operating despite failures in individual nodes and allowing individual nodes to upgrade hardware or software without a system-wide shutdown. An SN system can scale simply by adding nodes, since no central resource bottlenecks the system. In databases, a term for the part of a database on a single node is a shard. An SN system typically partitions its data among many nodes. A refinement is to replicate commonly used but infrequently modified data across many nodes, allowing more requests to be resolved on a single node. History Michael Stonebraker at the University of California, Berkeley, used the term in a 1986 database paper. Teradata delivered the first SN database system in 1983. Tandem Computers' NonStop systems, a shared-nothing implementation of hardware and software, were released to market in 1976. Tandem Computers later released NonStop SQL, a shared-nothing relational database, in 1984. Applications Shared-nothing is popular for web development. Shared-nothing architectures are prevalent for data warehousing applications, although requests that require data from multiple nodes can dramatically reduce throughput. See also References Data partitioning Distributed computing architecture
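As an illustration of the sharding idea (a sketch, not any particular product's scheme), a shared-nothing store can route every key to exactly one node, so no two nodes ever contend for the same record; the node names, key space and hash scheme below are assumptions for the example:

import hashlib

# Sketch of hash-based partitioning in a shared-nothing store.
# Node names and keys are invented for illustration.

NODES = ["node-a", "node-b", "node-c", "node-d"]

def owner(key: str) -> str:
    # Every key maps to exactly one node, so updates never contend across nodes.
    digest = hashlib.sha256(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

def route_update(key: str, value: str, stores: dict) -> None:
    # Each node keeps its own private dictionary (its shard); nothing is shared.
    stores.setdefault(owner(key), {})[key] = value

stores = {}
for k in ["user:17", "user:42", "order:9001"]:
    route_update(k, "payload", stores)
print({node: sorted(shard) for node, shard in stores.items()})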
https://en.wikipedia.org/wiki/Digital%20imaging
Digital imaging or digital image acquisition is the creation of a digital representation of the visual characteristics of an object, such as a physical scene or the interior structure of an object. The term is often assumed to imply or include the processing, compression, storage, printing and display of such images. A key advantage of a digital image, versus an analog image such as a film photograph, is the ability to digitally propagate copies of the original subject indefinitely without any loss of image quality. Digital imaging can be classified by the type of electromagnetic radiation or other waves whose variable attenuation, as they pass through or reflect off objects, conveys the information that constitutes the image. In all classes of digital imaging, the information is converted by image sensors into digital signals that are processed by a computer and output as a visible-light image. For example, the medium of visible light allows digital photography (including digital videography) with various kinds of digital cameras (including digital video cameras). X-rays allow digital X-ray imaging (digital radiography, fluoroscopy, and CT), and gamma rays allow digital gamma ray imaging (digital scintigraphy, SPECT, and PET). Sound allows ultrasonography (such as medical ultrasonography) and sonar, and radio waves allow radar. Digital imaging lends itself well to image analysis by software, as well as to image editing (including image manipulation). History Before digital imaging, the first photograph ever produced, View from the Window at Le Gras, was made in 1826 by Frenchman Joseph Nicéphore Niépce. When Joseph was 28, he discussed with his brother Claude the possibility of reproducing images with light. He began focusing on his new innovations in 1816, although he was in fact more interested in creating an engine for a boat. Joseph and his brother worked on that for quite some time, and Claude successfully promoted his innovation, moving and advancing him to
https://en.wikipedia.org/wiki/Cleanroom
A cleanroom or clean room is an engineered space that maintains a very low concentration of airborne particulates. It is well isolated, well controlled against contamination, and actively cleansed. Such rooms are commonly needed for scientific research, and in industrial production for all nanoscale processes, such as semiconductor manufacturing. A cleanroom is designed to keep everything from dust to airborne organisms and vaporised particles away from it, and so from whatever material is being handled inside it. A cleanroom can also prevent the escape of materials. This is often the primary aim in hazardous biology and nuclear work, in pharmaceutics and in virology. Cleanrooms typically come with a cleanliness level quantified by the number of particles per cubic meter at a predetermined particle size. The ambient outdoor air in a typical urban area contains 35,000,000 particles per cubic meter in the size range 0.5 μm and bigger, equivalent to an ISO 9 certified cleanroom. By comparison, an ISO 14644-1 level 1 certified cleanroom permits no particles in that size range, and just 12 particles per cubic meter of 0.3 μm and smaller. Semiconductor facilities often get by with level 7 or 5, while level 1 facilities are exceedingly rare. History The modern cleanroom was invented by American physicist Willis Whitfield. As an employee of Sandia National Laboratories, Whitfield created the initial plans for the cleanroom in 1960. Prior to Whitfield's invention, earlier cleanrooms often had problems with particles and unpredictable airflows. Whitfield designed his cleanroom with a constant, highly filtered air flow to flush out impurities. Within a few years of its invention in the 1960s, Whitfield's modern cleanroom had generated more than US$50 billion in sales worldwide. The majority of the integrated circuit manufacturing facilities in Silicon Valley were made by three companies: MicroAire, PureAire, and Key Plastics. T
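The particle limits quoted above can be reproduced approximately from the ISO 14644-1 class formula; the short sketch below is my illustration of that formula, not text from the article, and should be treated as an approximation of the standard's tables:

# Sketch: maximum allowed particle concentration per ISO 14644-1,
# C_N(D) = 10**N * (0.1 / D)**2.08  particles per cubic metre,
# where N is the ISO class and D the particle size in micrometres.

def iso_limit(iso_class: int, particle_size_um: float) -> float:
    return 10 ** iso_class * (0.1 / particle_size_um) ** 2.08

print(f"ISO 9, >=0.5 um: {iso_limit(9, 0.5):,.0f} particles/m^3")  # ~35,200,000 -- matches the urban-air figure above
print(f"ISO 5, >=0.5 um: {iso_limit(5, 0.5):,.0f} particles/m^3")  # ~3,520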
https://en.wikipedia.org/wiki/List%20comprehension
A list comprehension is a syntactic construct available in some programming languages for creating a list based on existing lists. It follows the form of the mathematical set-builder notation (set comprehension) as distinct from the use of map and filter functions. Overview Consider the following example in set-builder notation: S = {2·x | x ∈ ℕ, x² > 3}, or often S = {2·x : x ∈ ℕ, x² > 3}. This can be read, "S is the set of all numbers "2 times x" SUCH THAT x is an ELEMENT or MEMBER of the set of natural numbers (ℕ), AND x squared is greater than 3." The smallest natural number, x = 1, fails to satisfy the condition x² > 3 (the condition 1² > 3 is false) so 2·1 is not included in S. The next natural number, 2, does satisfy the condition (2² > 3) as does every other natural number. Thus x consists of 2, 3, 4, 5... Since the set consists of all numbers "2 times x" it is given by S = {4, 6, 8, 10, ...}. S is, in other words, the set of all even numbers greater than 2. In this annotated version of the example: x is the variable representing members of an input set. ℕ represents the input set, which in this example is the set of natural numbers. x² > 3 is a predicate expression acting as a filter on members of the input set. 2·x is an output expression producing members of the new set from members of the input set that satisfy the predicate expression. The braces indicate that the result is a set. The vertical bar is read as "SUCH THAT". The bar and the colon ":" are used interchangeably. Commas separate the predicates and can be read as "AND". A list comprehension has the same syntactic components to represent generation of a list in order from an input list or iterator: A variable representing members of an input list. An input list (or iterator). An optional predicate expression. And an output expression producing members of the output list from members of the input iterable that satisfy the predicate. The order of generation of members of the output list is based on the order of items in the input. In Haskell's list comprehe
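The same set-builder example translates directly into a list comprehension; the sketch below uses Python purely as an illustration, and the finite range bound is an assumption, since a program cannot enumerate all the natural numbers:

# Python rendering of S = {2*x | x in N, x**2 > 3}.
# The upper bound 20 is an arbitrary cut-off for illustration; the mathematical set is infinite.

# output expression: 2 * x
# input list:        range(1, 21)
# predicate/filter:  x ** 2 > 3
S = [2 * x for x in range(1, 21) if x ** 2 > 3]
print(S)   # [4, 6, 8, ..., 40] -- even numbers greater than 2, up to the cut-off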
https://en.wikipedia.org/wiki/Counting
Counting is the process of determining the number of elements of a finite set of objects; that is, determining the size of a set. The traditional way of counting consists of continually increasing a (mental or spoken) counter by a unit for every element of the set, in some order, while marking (or displacing) those elements to avoid visiting the same element more than once, until no unmarked elements are left; if the counter was set to one after the first object, the value after visiting the final object gives the desired number of elements. The related term enumeration refers to uniquely identifying the elements of a finite (combinatorial) set or infinite set by assigning a number to each element. Counting sometimes involves numbers other than one; for example, when counting money, counting out change, "counting by twos" (2, 4, 6, 8, 10, 12, ...), or "counting by fives" (5, 10, 15, 20, 25, ...). There is archaeological evidence suggesting that humans have been counting for at least 50,000 years. Counting was primarily used by ancient cultures to keep track of social and economic data such as the number of group members, prey animals, property, or debts (that is, accountancy). Notched bones were also found in the Border Caves in South Africa, which may suggest that the concept of counting was known to humans as far back as 44,000 BCE. The development of counting led to the development of mathematical notation, numeral systems, and writing. Forms of counting Counting can occur in a variety of forms. Counting can be verbal; that is, speaking every number out loud (or mentally) to keep track of progress. This is often used to count objects that are present already, instead of counting a variety of things over time. Counting can also be in the form of tally marks, making a mark for each number and then counting all of the marks when done tallying. This is useful when counting objects over time, such as the number of times something occurs during the course of a da
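As a small illustration of the procedure described above (a sketch of the idea, not a formal algorithm from the article; the sample data are invented), counting by marking visited elements can be written as:

# Sketch: count a finite collection by incrementing a counter once per element,
# marking each element so it is never visited twice.

def count(elements):
    visited = set()          # the "marks"
    counter = 0
    for e in elements:
        if e not in visited:
            visited.add(e)
            counter += 1     # increase the counter by one unit per newly visited element
    return counter

print(count(["goat", "goat", "sheep", "cow"]))   # 3 distinct items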
https://en.wikipedia.org/wiki/TWiki
TWiki is a Perl-based structured wiki application, typically used to run a collaboration platform, knowledge or document management system, a knowledge base, or team portal. Users can create wiki pages using the TWiki Markup Language, and developers can extend wiki application functionality with plugins. The TWiki project was founded by Peter Thoeny in 1998 as an open-source wiki-based application platform. In October 2008, the company TWiki.net, created by Thoeny, assumed full control over the TWiki project while much of the developer community forked off to join the Foswiki project. Major features Revision control - complete audit trail, also for meta data such as attachments and access control settings Fine-grained access control - restrict read/write/rename on site level, web level, page level based on user groups Extensible TWiki markup language TinyMCE based WYSIWYG editor Dynamic content generation with TWiki variables Forms and reporting - capture structured content, report on it with searches embedded in pages Built in database - users can create wiki applications using the TWiki Markup Language Skinnable user interface RSS/Atom feeds and e-mail notification Over 400 Extensions and 200 Plugins TWiki extensions TWiki has a plugin API that has spawned over 300 extensions to link into databases, create charts, tags, sort tables, write spreadsheets, create image gallery and slideshows, make drawings, write blogs, plot graphs, interface to many different authentication schemes, track Extreme Programming projects and so on. TWiki application platform TWiki as a structured wiki provides database-like manipulation of fields stored on pages, and offers a SQL-like query language to embed reports in wiki pages. Wiki applications are also called situational applications because they are created ad hoc by the users for very specific needs. Users have built TWiki applications that include call center status boards, to-do lists, inventory systems, employ
https://en.wikipedia.org/wiki/Signal
In signal processing, a signal is a function that conveys information about a phenomenon. Any quantity that can vary over space or time can be used as a signal to share messages between observers. The IEEE Transactions on Signal Processing includes audio, video, speech, image, sonar, and radar as examples of signals. A signal may also be defined as observable change in a quantity over space or time (a time series), even if it does not carry information. In nature, signals can be actions done by an organism to alert other organisms, ranging from the release of plant chemicals to warn nearby plants of a predator, to sounds or motions made by animals to alert other animals of food. Signaling occurs in all organisms even at cellular levels, with cell signaling. Signaling theory, in evolutionary biology, proposes that a substantial driver for evolution is the ability of animals to communicate with each other by developing ways of signaling. In human engineering, signals are typically provided by a sensor, and often the original form of a signal is converted to another form of energy using a transducer. For example, a microphone converts an acoustic signal to a voltage waveform, and a speaker does the reverse. Another important property of a signal is its entropy or information content. Information theory serves as the formal study of signals and their content. The information of a signal is often accompanied by noise, which primarily refers to unwanted modifications of signals, but is often extended to include unwanted signals conflicting with desired signals (crosstalk). The reduction of noise is covered in part under the heading of signal integrity. The separation of desired signals from background noise is the field of signal recovery, one branch of which is estimation theory, a probabilistic approach to suppressing random disturbances. Engineering disciplines such as electrical engineering have advanced the design, study, and implementation of systems involving t
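To make the notions of signal, noise and signal-to-noise ratio concrete, the following sketch is my illustration (the waveform, noise level and sampling rate are arbitrary assumptions): it builds a desired signal, adds random noise, and estimates the SNR:

import numpy as np

# Sketch: a 5 Hz sinusoid sampled at 1 kHz with additive Gaussian noise.
# All parameters are invented example values.

rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1e-3)                  # 1 second at 1 kHz
signal = np.sin(2 * np.pi * 5 * t)           # the quantity conveying information
noise = 0.3 * rng.standard_normal(t.size)    # unwanted modification of the signal
observed = signal + noise

snr_db = 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))
print(f"SNR ~ {snr_db:.1f} dB")              # roughly 7-8 dB for these settings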
https://en.wikipedia.org/wiki/Ring%20modulation
In electronics, ring modulation is a signal processing function, an implementation of frequency mixing, in which two signals are combined to yield an output signal. One signal, called the carrier, is typically a sine wave or another simple waveform; the other signal is typically more complicated and is called the input or the modulator signal. A ring modulator is an electronic device for ring modulation. A ring modulator may be used in music synthesizers and as an effects unit. The name derives from the fact that the analog circuit of diodes originally used to implement this technique takes the shape of a ring: a diode ring. The circuit is similar to a bridge rectifier, except that instead of the diodes facing left or right, they face clockwise or counterclockwise. Ring modulation is quite similar to amplitude modulation, with the difference that in the latter the modulator is shifted to be positive before being multiplied with the carrier, while in the former the unshifted modulator is multiplied with the carrier. This has the effect that ring modulation of two sine waves having frequencies of 1,500 Hz and 400 Hz will produce as output signal the sum of a sine wave with frequency 1,900 Hz and one with frequency 1,100 Hz. These two output frequencies are known as sidebands. If one of the input signals has significant overtones (which is the case for square waves), the output will sound quite different, since each harmonic will generate its own pair of sidebands that won't be harmonically-related. Operation Denoting the carrier signal by c(t), the modulator signal by m(t) and the output signal by y(t) (where t denotes time), ring modulation is described by the formula y(t) = c(t)·m(t). If c(t) and m(t) are sine waves with frequencies f_c and f_m, respectively, then y(t) will be the sum of two (phase-shifted) sine waves, one of frequency f_c + f_m and the other of frequency f_c − f_m. This is a consequence of the trigonometric identity sin(2π·f_c·t)·sin(2π·f_m·t) = ½·cos(2π(f_c − f_m)·t) − ½·cos(2π(f_c + f_m)·t). Alternatively, one can use the fact that multiplication in the time domain is the same as co
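A small numerical sketch (the sample rate, duration and unit amplitudes are arbitrary assumptions) multiplies the two sine waves from the 1,500 Hz / 400 Hz example and confirms that the output spectrum contains only the 1,100 Hz and 1,900 Hz sidebands:

import numpy as np

# Sketch of ring modulation: y(t) = c(t) * m(t).
# Sample rate and duration are arbitrary choices for the example.

fs = 48_000
t = np.arange(0, 1.0, 1 / fs)
carrier = np.sin(2 * np.pi * 1500 * t)
modulator = np.sin(2 * np.pi * 400 * t)
y = carrier * modulator

spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[spectrum > spectrum.max() / 2]
print(peaks)   # ~[1100., 1900.] Hz -- the two sidebands; neither 1500 Hz nor 400 Hz appears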
https://en.wikipedia.org/wiki/Black-box%20testing
Black-box testing is a method of software testing that examines the functionality of an application without peering into its internal structures or workings. This method of test can be applied virtually to every level of software testing: unit, integration, system and acceptance. It is sometimes referred to as specification-based testing. Test procedures Specific knowledge of the application's code, internal structure and programming knowledge in general is not required. The tester is aware of what the software is supposed to do but is not aware of how it does it. For instance, the tester is aware that a particular input returns a certain, invariable output but is not aware of how the software produces the output in the first place. Test cases Test cases are built around specifications and requirements, i.e., what the application is supposed to do. Test cases are generally derived from external descriptions of the software, including specifications, requirements and design parameters. Although the tests used are primarily functional in nature, non-functional tests may also be used. The test designer selects both valid and invalid inputs and determines the correct output, often with the help of a test oracle or a previous result that is known to be good, without any knowledge of the test object's internal structure. Test design techniques Typical black-box test design techniques include: Decision table testing All-pairs testing Equivalence partitioning Boundary value analysis Cause–effect graph Error guessing State transition testing Use case testing User story testing Domain analysis Syntax testing Combining technique Hacking In penetration testing, black-box testing refers to a method where an ethical hacker has no knowledge of the system being attacked. The goal of a black-box penetration test is to simulate an external hacking or cyber warfare attack. See also ABX test Acceptance testing Blind experiment Boundary testing Fuzz testing Gr
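As a brief illustration of specification-based test cases (a sketch; the module, the function under test, its stated behaviour and the chosen boundary values are all invented for this example), the tests are written only against the specification and never inspect the implementation:

# Sketch of black-box test cases for a hypothetical function classify_age(n)
# whose *specification* says: return "minor" for 0-17, "adult" for 18-64,
# "senior" for 65 and above, and raise ValueError for negative input.
# The tests use equivalence partitions and boundary values drawn from the spec;
# they never look at how classify_age is implemented.

import pytest
from ages import classify_age   # hypothetical module under test

@pytest.mark.parametrize("age,expected", [
    (0, "minor"), (17, "minor"),      # boundary values of the first partition
    (18, "adult"), (64, "adult"),     # second partition
    (65, "senior"), (120, "senior"),  # third partition
])
def test_valid_partitions(age, expected):
    assert classify_age(age) == expected

def test_invalid_input_rejected():
    with pytest.raises(ValueError):
        classify_age(-1)              # invalid equivalence class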
https://en.wikipedia.org/wiki/Range%20of%20a%20function
In mathematics, the range of a function may refer to either of two closely related concepts: The codomain of the function The image of the function Given two sets X and Y, a binary relation f between X and Y is a (total) function (from X to Y) if for every x in X there is exactly one y in Y such that f relates x to y. The sets X and Y are called domain and codomain of f, respectively. The image of f is then the subset of Y consisting of only those elements y of Y such that there is at least one x in X with f(x) = y. Terminology As the term "range" can have different meanings, it is considered a good practice to define it the first time it is used in a textbook or article. Older books, when they use the word "range", tend to use it to mean what is now called the codomain. More modern books, if they use the word "range" at all, generally use it to mean what is now called the image. To avoid any confusion, a number of modern books don't use the word "range" at all. Elaboration and example Given a function f : X → Y with domain X, the range of f, sometimes denoted ran(f) or Range(f), may refer to the codomain or target set Y (i.e., the set into which all of the output of f is constrained to fall), or to f(X), the image of the domain of f under f (i.e., the subset of Y consisting of all actual outputs of f). The image of a function is always a subset of the codomain of the function. As an example of the two different usages, consider the function f(x) = x² as it is used in real analysis (that is, as a function that inputs a real number and outputs its square). In this case, its codomain is the set of real numbers ℝ, but its image is the set of non-negative real numbers [0, +∞), since x² is never negative if x is real. For this function, if we use "range" to mean codomain, it refers to ℝ; if we use "range" to mean image, it refers to [0, +∞). In many cases, the image and the codomain can coincide. For example, consider the function f(x) = 2x, which inputs a real number and outputs its double. For this function, the codomain and the image are the same (both being
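A small sketch (the finite domain and the declared codomain are arbitrary illustrative choices) contrasting a declared codomain with the computed image for f(x) = x²:

# Sketch: codomain vs. image for f(x) = x**2 on a small finite domain.
# The domain and the declared codomain are arbitrary example sets.

domain = {-3, -2, -1, 0, 1, 2, 3}
codomain = set(range(-10, 11))          # the set the outputs are *declared* to fall in

def f(x):
    return x ** 2

image = {f(x) for x in domain}          # the outputs actually produced

print(image)                            # {0, 1, 4, 9}
print(image <= codomain)                # True: the image is always a subset of the codomain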
https://en.wikipedia.org/wiki/Molar%20concentration
Molar concentration (also called molarity, amount concentration or substance concentration) is a measure of the concentration of a chemical species, in particular, of a solute in a solution, in terms of amount of substance per unit volume of solution. In chemistry, the most commonly used unit for molarity is the number of moles per liter, having the unit symbol mol/L or mol/dm³ in SI units. A solution with a concentration of 1 mol/L is said to be 1 molar, commonly designated as 1 M. Molarity is often depicted with square brackets around the substance of interest; for example, the molarity of the hydrogen ion is depicted as [H+]. Definition Molar concentration or molarity is most commonly expressed in units of moles of solute per litre of solution. For use in broader applications, it is defined as amount of substance of solute per unit volume of solution, or per unit volume available to the species, represented by lowercase c: c = n/V = N/(N_A·V). Here, n is the amount of the solute in moles, N is the number of constituent particles present in volume V (in litres) of the solution, and N_A is the Avogadro constant, since 2019 defined as exactly 6.02214076×10²³ mol⁻¹. The ratio N/V is the number density C. In thermodynamics the use of molar concentration is often not convenient because the volume of most solutions slightly depends on temperature due to thermal expansion. This problem is usually resolved by introducing temperature correction factors, or by using a temperature-independent measure of concentration such as molality. The reciprocal quantity 1/c represents the dilution (volume) which can appear in Ostwald's law of dilution. Formality or analytical concentration If a molecular entity dissociates in solution, the concentration refers to the original chemical formula in solution, and the molar concentration is sometimes called formal concentration or formality (FA) or analytical concentration (cA). For example, if a sodium carbonate solution (Na₂CO₃) has a formal concentration of c(Na₂CO₃) = 1 mol/L, the molar concentra
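A short worked sketch (the solute, its molar mass and the amounts are assumed example values) computing molarity as c = n/V, with n obtained from the dissolved mass and the molar mass:

# Sketch: molar concentration c = n / V for a sodium chloride solution.
# Mass, molar mass and volume are invented example values.

mass_g = 5.844                # grams of NaCl dissolved
molar_mass_g_per_mol = 58.44  # molar mass of NaCl
volume_L = 0.250              # litres of solution

n_mol = mass_g / molar_mass_g_per_mol     # amount of substance, n = m / M
c_mol_per_L = n_mol / volume_L            # molar concentration, c = n / V

print(f"n = {n_mol:.3f} mol, c = {c_mol_per_L:.2f} mol/L")   # 0.100 mol, 0.40 M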
https://en.wikipedia.org/wiki/Naval%20Communication%20Station%20Harold%20E.%20Holt
Naval Communication Station Harold E. Holt is a joint Australian and United States naval communication station located on the north-west coast of Australia, north of the town of Exmouth, Western Australia. The station is operated and maintained by the Australian Department of Defence on behalf of Australia and the United States and provides very low frequency (VLF) radio transmission to United States Navy, Royal Australian Navy and allied ships and submarines in the western Pacific Ocean and eastern Indian Ocean. The frequency is 19.8 kHz. With a transmission power of 1 megawatt, it is the most powerful transmission station in the Southern Hemisphere. The town of Exmouth was built at the same time as the communications station to provide support to the base and to house dependent families of US Navy personnel. VLF transmitter masts The station features thirteen tall radio towers. The tallest tower, called Tower Zero, was for many years the tallest man-made structure in the Southern Hemisphere. Six towers are placed in a hexagon around Tower Zero, and the other six towers are placed in a larger hexagon around Tower Zero. On 3 March 2009, the Defence Materiel Organisation advertised on the AusTender website a tender to construct two new roads at the station. The tender stated that the 357 guy wires which support the 13 towers had exceeded their life expectancy and that the roads would support the installation of the VLF guy wires. History Sir Garfield Barwick, Australian Minister for External Affairs, negotiated the lease on the US Base at North West Cape in 1963 with US Ambassador William Battle. The station was commissioned as U.S. Naval Communication Station North West Cape on 16 September 1967 at a ceremony with the US Ambassador to Australia Ed Clark and the Prime Minister of Australia Harold Holt, at which peppercorn rent for the base for the first year was paid. On 20 September 1968, the station was o