Columns: source (string, 31–203 characters), text (string, 28–2k characters)
https://en.wikipedia.org/wiki/Trachtenberg%20system
The Trachtenberg system is a system of rapid mental calculation. The system consists of a number of readily memorized operations that allow one to perform arithmetic computations very quickly. It was developed by the Russian engineer Jakow Trachtenberg in order to keep his mind occupied while he was held in a Nazi concentration camp. The rest of this article presents some methods devised by Trachtenberg. Some of the algorithms Trachtenberg developed are ones for general multiplication, division and addition. Also, the Trachtenberg system includes some specialised methods for multiplying small numbers between 5 and 13 (but shown here is 2–12). The section on addition demonstrates an effective method of checking calculations that can also be applied to multiplication. General multiplication The method for general multiplication is a method to achieve multiplications with low space complexity, i.e. as few temporary results as possible to be kept in memory. This is achieved by noting that the final digit is completely determined by multiplying the last digits of the multiplicands a and b. This is held as a temporary result. To find the next-to-last digit, we need everything that influences this digit: the temporary result, the last digit of a times the next-to-last digit of b, as well as the next-to-last digit of a times the last digit of b. This calculation is performed, and we have a temporary result that is correct in the final two digits. In general, for each position n in the final result, we sum the products a_i · b_(n−i) over all i. People can learn this algorithm and thus multiply four-digit numbers in their head – writing down only the final result. They would write it out starting with the rightmost digit and finishing with the leftmost. Trachtenberg defined this algorithm with a kind of pairwise multiplication where two digits are multiplied by one digit, essentially only keeping the middle digit of the result. By performing the above algorithm with this pairwise multiplication, even fewer tempora
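As a rough illustration of the right-to-left, carry-only-in-memory scheme described above (the plain digit-convolution version, not Trachtenberg's pairwise shortcut), here is a hedged C sketch; the digit layout and function names are illustrative assumptions.

```c
#include <stdio.h>

/* Multiply two digit arrays (least-significant digit first) the way the
 * text describes: for each result position n, sum a[i]*b[n-i] plus the
 * carry, write one digit, and keep only the carry as temporary state. */
void long_multiply(const int *a, int na, const int *b, int nb, int *out) {
    int carry = 0;
    for (int n = 0; n < na + nb; n++) {
        int sum = carry;
        for (int i = 0; i < na; i++) {
            int j = n - i;
            if (j >= 0 && j < nb)
                sum += a[i] * b[j];
        }
        out[n] = sum % 10;   /* digit written down */
        carry = sum / 10;    /* the only value held in memory */
    }
}

int main(void) {
    int a[] = {4, 3, 2, 1};      /* 1234, least-significant digit first */
    int b[] = {8, 7, 6, 5};      /* 5678 */
    int out[8] = {0};
    long_multiply(a, 4, b, 4, out);
    for (int n = 7; n >= 0; n--) /* print most-significant digit first */
        printf("%d", out[n]);
    printf("\n");                /* prints 07006652, i.e. 1234 * 5678 = 7006652 */
    return 0;
}
```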
https://en.wikipedia.org/wiki/Jakob%20Steiner
Jakob Steiner (18 March 1796 – 1 April 1863) was a Swiss mathematician who worked primarily in geometry. Life Steiner was born in the village of Utzenstorf, Canton of Bern. At 18, he became a pupil of Heinrich Pestalozzi and afterwards studied at Heidelberg. Then, he went to Berlin, earning a livelihood there, as in Heidelberg, by tutoring. Here he became acquainted with A. L. Crelle, who, encouraged by his ability and by that of Niels Henrik Abel, then also staying at Berlin, founded his famous Journal (1826). After Steiner's publication (1832) of his Systematische Entwickelungen he received, through Carl Gustav Jacob Jacobi, who was then professor at Königsberg University, an honorary degree from that university; and through the influence of Jacobi and of the brothers Alexander and Wilhelm von Humboldt a new chair of geometry was founded for him at Berlin (1834). This he occupied until his death in Bern on 1 April 1863. He was described by Thomas Hirst as follows: "He is a middle-aged man, of pretty stout proportions, has a long intellectual face, with beard and moustache and a fine prominent forehead, hair dark rather inclining to turn grey. The first thing that strikes you on his face is a dash of care and anxiety, almost pain, as if arising from physical suffering—he has rheumatism. He never prepares his lectures beforehand. He thus often stumbles or fails to prove what he wishes at the moment, and at every such failure he is sure to make some characteristic remark." Mathematical contributions Steiner's mathematical work was mainly confined to geometry. This he treated synthetically, to the total exclusion of analysis, which he hated, and he is said to have considered it a disgrace to synthetic geometry if equal or higher results were obtained by analytical geometry methods. In his own field he surpassed all his contemporaries. His investigations are distinguished by their great generality, by the fertility of his resources, and by the rigour in his proofs.
https://en.wikipedia.org/wiki/Gaussian%20integral
The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function e^(−x²) over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is ∫_{−∞}^{∞} e^(−x²) dx = √π. Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809. The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently, for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator. This integral is also used in the path integral formulation, to find the propagator of the harmonic oscillator, and in statistical mechanics, to find its partition function. Although no elementary function exists for the error function, as can be proven by the Risch algorithm, the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary indefinite integral for ∫ e^(−x²) dx, but the definite integral ∫_{−∞}^{∞} e^(−x²) dx can be evaluated. The definite integral of an arbitrary Gaussian function is ∫_{−∞}^{∞} e^(−a(x+b)²) dx = √(π/a). Computation By polar coordinates A standard way to compute the Gaussian integral, the idea of which goes back to Poisson, is to make use of the property that: (∫_{−∞}^{∞} e^(−x²) dx)² = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^(−(x²+y²)) dx dy. Consider the function e^(−(x²+y²)) = e^(−r²) on the plane R², and compute its integral two ways: on the one hand, by double integration in the Cartesian coordinate system, its integral is a square: (∫_{−∞}^{∞} e^(−x²) dx)²; on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be π. Comparing these two computations yields the integral, though one should take care about the improper integrals involved. In polar coordinates the double integral becomes ∫_0^{2π} ∫_0^∞ e^(−r²) r dr dθ, where the factor of r is the Jacobian determinant which
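As a quick numerical sanity check of the value √π (not the analytic derivation itself), the following C sketch approximates the integral with a simple trapezoidal rule; the truncation interval and step count are arbitrary assumptions.

```c
#include <stdio.h>
#include <math.h>

/* Approximate the Gaussian integral over [-10, 10] with the trapezoidal
 * rule; the tails beyond |x| = 10 are negligibly small (about 1e-44). */
int main(void) {
    const double a = -10.0, b = 10.0;
    const int n = 200000;
    const double h = (b - a) / n;
    const double pi = acos(-1.0);
    double sum = 0.5 * (exp(-a * a) + exp(-b * b));
    for (int i = 1; i < n; i++) {
        double x = a + i * h;
        sum += exp(-x * x);
    }
    printf("approximation: %.10f\n", sum * h);
    printf("sqrt(pi):      %.10f\n", sqrt(pi));
    return 0;
}
```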
https://en.wikipedia.org/wiki/Lexicographic%20order
In mathematics, the lexicographic or lexicographical order (also known as lexical order, or dictionary order) is a generalization of the alphabetical order of the dictionaries to sequences of ordered symbols or, more generally, of elements of a totally ordered set. There are several variants and generalizations of the lexicographical ordering. One variant applies to sequences of different lengths by comparing the lengths of the sequences before considering their elements. Another variant, widely used in combinatorics, orders subsets of a given finite set by assigning a total order to the finite set, and converting subsets into increasing sequences, to which the lexicographical order is applied. A generalization defines an order on an n-ary Cartesian product of partially ordered sets; this order is a total order if and only if all factors of the Cartesian product are totally ordered. Definition The words in a lexicon (the set of words used in some language) have a conventional ordering, used in dictionaries and encyclopedias, that depends on the underlying ordering of the alphabet of symbols used to build the words. The lexicographical order is one way of formalizing word order given the order of the underlying symbols. The formal notion starts with a finite set A, often called the alphabet, which is totally ordered. That is, for any two symbols a and b in A that are not the same symbol, either a < b or b < a. The words of A are the finite sequences of symbols from A, including words of length 1 containing a single symbol, words of length 2 with 2 symbols, and so on, even including the empty sequence with no symbols at all. The lexicographical order on the set of all these finite words orders the words as follows: Given two different words of the same length, say a = a1a2...ak and b = b1b2...bk, the order of the two words depends on the alphabetic order of the symbols in the first place i where the two words differ (counting from the beginning of the words): a < b if and only if ai < bi in the underlying orde
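A small C sketch of this comparison rule, under the assumption that symbols are encoded as integers; the prefix case (one word an initial segment of the other), which the quoted definition has not yet reached, is handled with the usual shorter-word-first convention.

```c
#include <stdio.h>

/* Lexicographic comparison of two symbol sequences: the first position
 * where they differ decides the order; if one word is a prefix of the
 * other, the shorter word comes first. Returns -1, 0, or +1. */
int lex_compare(const int *a, int na, const int *b, int nb) {
    int n = na < nb ? na : nb;
    for (int i = 0; i < n; i++) {
        if (a[i] < b[i]) return -1;
        if (a[i] > b[i]) return 1;
    }
    if (na < nb) return -1;
    if (na > nb) return 1;
    return 0;
}

int main(void) {
    int w1[] = {2, 0, 19};      /* "cat", encoding a=0, b=1, ..., z=25 */
    int w2[] = {2, 0, 19, 18};  /* "cats" */
    printf("%d\n", lex_compare(w1, 3, w2, 4));  /* -1: "cat" < "cats" */
    return 0;
}
```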
https://en.wikipedia.org/wiki/Zero%20crossing
A zero-crossing is a point where the sign of a mathematical function changes (e.g. from positive to negative), represented by an intercept of the axis (zero value) in the graph of the function. It is a commonly used term in electronics, mathematics, acoustics, and image processing. In electronics In alternating current, the zero-crossing is the instantaneous point at which there is no voltage present. In a sine wave or other simple waveform, this normally occurs twice during each cycle. A zero-crossing detector is a device for detecting the point where the voltage crosses zero in either direction. The zero-crossing is important for systems that send digital data over AC circuits, such as modems, X10 home automation control systems, and Digital Command Control type systems for Lionel and other AC model trains. Counting zero-crossings is also a method used in speech processing to estimate the fundamental frequency of speech. In a system where an amplifier with digitally controlled gain is applied to an input signal, artifacts in the non-zero output signal occur when the gain of the amplifier is abruptly switched between its discrete gain settings. At audio frequencies, such as in modern consumer electronics like digital audio players, these effects are clearly audible, resulting in a 'zipping' sound when rapidly ramping the gain or a soft 'click' when a single gain change is made. Artifacts are disconcerting and clearly not desirable. If changes are made only at zero-crossings of the input signal, then no matter how the amplifier gain setting changes, the output also remains at zero, thereby minimizing the change. (The instantaneous change in gain will still produce distortion, but it will not produce a click.) If electrical power is to be switched, no electrical interference is generated if switched at an instant when there is no current—a zero crossing. Early light dimmers and similar devices generated interference; later versions were designed to switch at the zero crossing. In
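As a toy illustration of the zero-crossing counting mentioned for frequency estimation, this hedged C sketch counts sign changes in a sampled sine wave; the sample rate and test frequency are arbitrary assumptions.

```c
#include <stdio.h>
#include <math.h>

/* Count sign changes in one second of a sampled sine wave. Half the
 * crossing count per second is a crude fundamental-frequency estimate. */
int main(void) {
    const double fs = 8000.0;   /* sample rate, Hz */
    const double f0 = 440.0;    /* test tone, Hz */
    const int n = 8000;         /* one second of samples */
    const double pi = acos(-1.0);
    int crossings = 0;
    double prev = 0.0;
    for (int i = 1; i < n; i++) {
        double x = sin(2.0 * pi * f0 * i / fs);
        if ((prev < 0.0 && x >= 0.0) || (prev >= 0.0 && x < 0.0))
            crossings++;
        prev = x;
    }
    printf("zero crossings: %d\n", crossings);            /* about 880 */
    printf("estimated f0:   %.1f Hz\n", crossings / 2.0); /* about 440 */
    return 0;
}
```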
https://en.wikipedia.org/wiki/Riemann%E2%80%93Liouville%20integral
In mathematics, the Riemann–Liouville integral associates with a real function f another function I^α f of the same kind for each value of the parameter α > 0. The integral is a manner of generalization of the repeated antiderivative of f in the sense that for positive integer values of α, I^α f is an iterated antiderivative of f of order α. The Riemann–Liouville integral is named for Bernhard Riemann and Joseph Liouville, the latter of whom was the first to consider the possibility of fractional calculus in 1832. The operator agrees with the Euler transform, after Leonhard Euler, when applied to analytic functions. It was generalized to arbitrary dimensions by Marcel Riesz, who introduced the Riesz potential. Definition The Riemann–Liouville integral is defined by I^α f(x) = (1/Γ(α)) ∫_a^x f(t) (x − t)^(α−1) dt, where Γ is the gamma function and a is an arbitrary but fixed base point. The integral is well-defined provided f is a locally integrable function, and α is a complex number in the half-plane Re(α) > 0. The dependence on the base-point a is often suppressed, and represents a freedom in constant of integration. Clearly I^1 f is an antiderivative of f (of first order), and for positive integer values of α, I^α f is an antiderivative of order α by the Cauchy formula for repeated integration. Another notation, which emphasizes the base point, is aD_x^(−α) f(x) = (1/Γ(α)) ∫_a^x f(t) (x − t)^(α−1) dt. This also makes sense if a = −∞, with suitable restrictions on f. The fundamental relations hold: (d/dx) I^(α+1) f(x) = I^α f(x) and I^α (I^β f) = I^(α+β) f, the latter of which is a semigroup property. These properties make possible not only the definition of fractional integration, but also of fractional differentiation, by taking enough derivatives of I^α f. Properties Fix a bounded interval (a, b). The operator I^α associates to each integrable function f on (a, b) the function I^α f on (a, b) which is also integrable by Fubini's theorem. Thus I^α defines a linear operator on L¹(a, b). Fubini's theorem also shows that this operator is continuous with respect to the Banach space structure on L¹, and that the following inequality holds: ‖I^α f‖₁ ≤ (|b − a|^(Re α) / (Re α · |Γ(α)|)) ‖f‖₁. Here ‖·‖₁ denotes the norm on L¹(a, b). More generally, by Hölder's inequality, it follows th
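To make the definition concrete, here is a hedged numerical sketch in C: a midpoint-rule approximation of I^α f(x) for f(t) = t and α = 1/2, checked against the known closed form x^(1+α)/Γ(2+α). The choice of f, α, and step count are illustrative assumptions only.

```c
#include <stdio.h>
#include <math.h>

/* Midpoint-rule approximation of the Riemann-Liouville integral
 * I^alpha f(x) = 1/Gamma(alpha) * integral_a^x f(t) (x-t)^(alpha-1) dt.
 * The midpoint rule sidesteps the integrable singularity at t = x, but
 * converges slowly there, so expect agreement to a few decimals only. */
double f(double t) { return t; }

double rl_integral(double alpha, double a, double x, int n) {
    double h = (x - a) / n, sum = 0.0;
    for (int i = 0; i < n; i++) {
        double t = a + (i + 0.5) * h;            /* midpoint of subinterval */
        sum += f(t) * pow(x - t, alpha - 1.0) * h;
    }
    return sum / tgamma(alpha);
}

int main(void) {
    double x = 2.0, alpha = 0.5;
    /* Closed form for f(t) = t:  I^alpha t = x^(1+alpha) / Gamma(2+alpha) */
    printf("numeric:  %.4f\n", rl_integral(alpha, 0.0, x, 200000));
    printf("analytic: %.4f\n", pow(x, 1.0 + alpha) / tgamma(2.0 + alpha));
    return 0;
}
```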
https://en.wikipedia.org/wiki/Vedic%20Mathematics
Vedic Mathematics is a book written by the Indian monk Bharati Krishna Tirtha, and first published in 1965. It contains a list of mathematical techniques, which were falsely claimed to have been retrieved from the Vedas and to contain advanced mathematical knowledge. Krishna Tirtha failed to produce the sources, and scholars unanimously note it to be a mere compendium of tricks for increasing the speed of elementary mathematical calculations sharing no overlap with historical mathematical developments during the Vedic period. However, there has been a proliferation of publications in this area and multiple attempts to integrate the subject into mainstream education by right-wing Hindu nationalist governments. Contents The book contains metaphorical aphorisms in the form of sixteen sutras and thirteen sub-sutras, which Krishna Tirtha states allude to significant mathematical tools. The range of their asserted applications spans topics as diverse as statics and pneumatics, astronomy, and financial domains. Tirtha stated that no part of advanced mathematics lay beyond the realms of his book and propounded that studying it for a couple of hours every day for a year equated to spending about two decades in any standardized education system to become professionally trained in the discipline of mathematics. STS scholar S. G. Dani in 'Vedic Mathematics': Myth and Reality states that the book is primarily a compendium of tricks that can be applied in elementary, middle and high school arithmetic and algebra, to gain faster results. The sutras and sub-sutras are abstract literary expressions (for example, "as much less" or "one less than previous one") prone to creative interpretations; Krishna Tirtha exploited this to the extent of manipulating the same shloka to generate widely different mathematical equivalencies across a multitude of contexts. Source and relation with The Vedas According to Krishna Tirtha, the sutras and other accessory content were found after
https://en.wikipedia.org/wiki/Solution%20set
In mathematics, a solution set is the set of values that satisfy a given set of equations or inequalities. For example, for a set E of polynomials over a ring R, the solution set is the subset of R on which the polynomials all vanish (evaluate to 0), formally {x ∈ R : f(x) = 0 for all f ∈ E}. The feasible region of a constrained optimization problem is the solution set of the constraints. Examples The solution set of the single equation x = 0 is the set {0}. For any non-zero polynomial over the complex numbers in one variable, the solution set is made up of finitely many points. However, for a complex polynomial in more than one variable the solution set has no isolated points. Remarks In algebraic geometry, solution sets are called algebraic sets if there are no inequalities. Over the reals, and with inequalities, these are called semialgebraic sets. Other meanings More generally, the solution set to an arbitrary collection E of relations (Ei) (i varying in some index set I) for a collection of unknowns (xj), j varying in some index set J, supposed to take values in respective spaces (Xj), is the set S of all solutions to the relations E, where a solution is a family of values (sj) such that substituting each xj by sj in the collection E makes all relations "true". (Instead of relations depending on unknowns, one should speak more correctly of predicates, the collection E is their logical conjunction, and the solution set is the inverse image of the boolean value true by the associated boolean-valued function.) The above meaning is a special case of this one, if the set of polynomials fi is interpreted as the set of equations fi(x)=0. Examples The solution set for E = { x+y = 0 } with respect to (x, y) is S = { (a,−a) : a ∈ R }. The solution set for E = { x+y = 0 } with respect to (x) is S = { −y }. (Here, y is not "declared" as an unknown, and thus to be seen as a parameter on which the equation, and therefore the solution set, depends.) The solution set for E with respect to x ∈ R is the interval S = [0,2] (since the expression in E is undefined for negative values of
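A trivial C sketch of the idea that the solution set is simply "the points for which every relation holds", by brute-force checking a small integer grid for E = { x + y = 0 }; the grid bounds are arbitrary assumptions.

```c
#include <stdio.h>

/* Enumerate the part of the solution set of E = { x + y = 0 } that lies
 * on a small integer grid: keep exactly the points where the relation
 * evaluates to true. */
int main(void) {
    for (int x = -3; x <= 3; x++)
        for (int y = -3; y <= 3; y++)
            if (x + y == 0)
                printf("(%d, %d)\n", x, y);  /* prints the (a, -a) pairs */
    return 0;
}
```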
https://en.wikipedia.org/wiki/Memory%20address
In computing, a memory address is a reference to a specific memory location used at various levels by software and hardware. Memory addresses are fixed-length sequences of digits conventionally displayed and manipulated as unsigned integers. This numerical representation is based on features of the CPU (such as the instruction pointer and incremental address registers), as well as on the use of memory as an array, a view endorsed by various programming languages. Types Physical addresses A digital computer's main memory consists of many memory locations. Each memory location has a physical address which is a code. The CPU (or other device) can use the code to access the corresponding memory location. Generally only system software, i.e. the BIOS, operating systems, and some specialized utility programs (e.g., memory testers), address physical memory using machine code operands or processor registers, instructing the CPU to direct a hardware device, called the memory controller, to use the memory bus or system bus, or separate control, address and data busses, to execute the program's commands. The memory controller's bus consists of a number of parallel lines, each represented by a binary digit (bit). The width of the bus, and thus the number of addressable storage units, and the number of bits in each unit, varies among computers. Logical addresses A computer program uses memory addresses to execute machine code, and to store and retrieve data. In early computers logical and physical addresses corresponded, but since the introduction of virtual memory most application programs do not have knowledge of physical addresses. Rather, they address logical addresses, or virtual addresses, using the computer's memory management unit and operating system memory mapping; see below. Unit of address resolution Most modern computers are byte-addressable. Each address identifies a single byte (eight bits) of storage. Data larger than a single byte may be stored in a sequence of
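To see byte-addressability and the "memory as an array" view in action, here is a small C sketch that prints the addresses of consecutive array elements; the actual address values printed will of course differ from run to run.

```c
#include <stdio.h>
#include <stdint.h>

/* On a byte-addressable machine each char occupies one address, while a
 * 4-byte int spans four consecutive addresses. */
int main(void) {
    char bytes[4];
    int32_t words[2];
    for (int i = 0; i < 4; i++)
        printf("&bytes[%d] = %p\n", i, (void *)&bytes[i]);  /* step of 1 */
    for (int i = 0; i < 2; i++)
        printf("&words[%d] = %p\n", i, (void *)&words[i]);  /* step of 4 */
    return 0;
}
```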
https://en.wikipedia.org/wiki/Paralanguage
Paralanguage, also known as vocalics, is a component of meta-communication that may modify meaning, give nuanced meaning, or convey emotion, by using techniques such as prosody, pitch, volume, intonation, etc. It is sometimes defined as relating to nonphonemic properties only. Paralanguage may be expressed consciously or unconsciously. The study of paralanguage is known as paralinguistics and was invented by George L. Trager in the 1950s, while he was working at the Foreign Service Institute of the U.S. Department of State. His colleagues at the time included Henry Lee Smith, Charles F. Hockett (working with him on using descriptive linguistics as a model for paralanguage), Edward T. Hall developing proxemics, and Ray Birdwhistell developing kinesics. Trager published his conclusions in 1958, 1960 and 1961. His work has served as a basis for all later research, especially those investigating the relationship between paralanguage and culture (since paralanguage is learned, it differs by language and culture). A good example is the work of John J. Gumperz on language and social identity, which specifically describes paralinguistic differences between participants in intercultural interactions. The film Gumperz made for BBC in 1982, Multiracial Britain: Cross talk, does a particularly good job of demonstrating cultural differences in paralanguage and their impact on relationships. Paralinguistic information, because it is phenomenal, belongs to the external speech signal (Ferdinand de Saussure's parole) but not to the arbitrary conventional code of language (Saussure's langue). Even vocal language has some paralinguistic as well as linguistic properties that can be seen (lip reading, McGurk effect), and even felt, e.g. by the Tadoma method. Aspects of the speech signal Perspectival aspects Speech signals arrive at a listener's ears with acoustic properties that may allow listeners to identify location of the speaker (sensing distance and direction, for example). Sound localization functions in a sim
https://en.wikipedia.org/wiki/America%3A%20A%20Tribute%20to%20Heroes
America: A Tribute to Heroes was a benefit concert created by the heads of the four major American broadcast networks; Fox, ABC, NBC and CBS. Joel Gallen was selected by them to produce and run the show. Actor George Clooney organized celebrities to perform and to staff the telephone bank. It was broadcast live by the four major American television networks and all of the cable networks in the aftermath of the September 11 attacks on the World Trade Center and the Pentagon in 2001. Done in the style of a telethon, it featured a number of national and international entertainers performing to raise money for the victims and their families, particularly the New York City firefighters and New York City police officers. It aired September 21, 2001, uninterrupted and commercial-free, for which it won a Peabody Award. It was released on December 4, 2001, on compact disc and DVD. On a dark stage illuminated by hundreds of candles, twenty-one artists performed songs of mourning and hope, while various actors and other celebrities delivered short spoken messages. The musical performances took place at three studios in Los Angeles (CBS Television City), New York, and London, while the celebrity messages took place in Los Angeles. Some of the musicians, including Neil Young and Eddie Vedder, were heard working the phone banks taking pledges. Over $200 million was raised and given to the United Way's September 11 Telethon Fund. In 2004, Rolling Stone magazine selected this concert, along with the Concert for New York City, as one of the 50 moments that changed rock and roll. The show was also simulcast in Canada. Performers Bruce Springsteen: "My City of Ruins", a song he had performed at only a few New Jersey shows. Written before the September 11 attacks, it is actually about his home town Asbury Park, New Jersey; with a few phrases slightly modified, and introduced as "a prayer for our fallen brothers and sisters." It appeared on his The Rising album the following year.
https://en.wikipedia.org/wiki/Basement
A basement or cellar is one or more floors of a building that are completely or partly below the ground floor. It generally is used as a utility space for a building, where such items as the furnace, water heater, breaker panel or fuse box, car park, and air-conditioning system are located; so also are amenities such as the electrical system and cable television distribution point. In cities with high property prices, such as London, basements are often fitted out to a high standard and used as living space. In British English, the word basement is usually used for underground floors of, for example, department stores. The word is usually used with houses when the space below the ground floor is habitable, with windows and (usually) its own access. The word cellar applies to the whole underground level or to any large underground room. A subcellar or subbasement is a cellar that lies further underneath. Purpose, geography, and history A basement can be used in almost exactly the same manner as an additional above-ground floor of a house or other building. However, the use of basements depends largely on factors specific to a particular geographical area such as climate, soil, seismic activity, building technology, and real estate economics. Basements in small buildings such as single-family detached houses are rare in wet climates such as Great Britain and Ireland where flooding can be a problem, though they may be used in larger structures. However, basements are considered standard on all but the smallest new buildings in many places with temperate continental climates such as the American Midwest and the Canadian Prairies where a concrete foundation below the frost line is needed in any case, to prevent a building from shifting during the freeze-thaw cycle. Basements are much easier to construct in areas with relatively soft soils and may be avoided in places where the soil is too compact for easy excavation. Their use may be restricted in earthquake zones, be
https://en.wikipedia.org/wiki/Backward%20chaining
Backward chaining (or backward reasoning) is an inference method described colloquially as working backward from the goal. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications. In game theory, researchers apply it to (simpler) subgames to find a solution to the game, in a process called backward induction. In chess, it is called retrograde analysis, and it is used to generate endgame tablebases for computer chess. Backward chaining is implemented in logic programming by SLD resolution. Like forward chaining, it is based on the modus ponens inference rule; backward and forward chaining are the two most commonly used methods of reasoning with inference rules and logical implications. Backward chaining systems usually employ a depth-first search strategy, e.g. Prolog. How it works Backward chaining starts with a list of goals (or a hypothesis) and works backwards from the consequent to the antecedent to see if any data supports any of these consequents. An inference engine using backward chaining would search the inference rules until it finds one with a consequent (Then clause) that matches a desired goal. If the antecedent (If clause) of that rule is not known to be true, then it is added to the list of goals (for one's goal to be confirmed one must also provide data that confirms this new rule). For example, suppose a new pet, Fritz, is delivered in an opaque box along with two facts about Fritz: Fritz croaks Fritz eats flies The goal is to decide whether Fritz is green, based on a rule base containing the following four rules: If X croaks and X eats flies – Then X is a frog If X chirps and X sings – Then X is a canary If X is a frog – Then X is green If X is a canary – Then X is yellow With backward reasoning, an inference engine can determine whether Fritz is green in four steps. To start, the query is phrased as a goal assertion that is to be proven: "Fritz is green".
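A hedged C sketch of a minimal backward chainer for the Fritz example: to prove a goal, find a rule whose consequent matches it and recursively prove its antecedents, bottoming out at the known facts. The data layout and function names are illustrative assumptions, not a standard inference-engine API.

```c
#include <stdio.h>
#include <string.h>

#define MAX_ANTE 2

struct rule {
    const char *antecedents[MAX_ANTE];
    int n_antecedents;
    const char *consequent;
};

/* Known facts about Fritz and the four rules from the example above. */
static const char *facts[] = { "croaks", "eats flies" };
static const struct rule rules[] = {
    { { "croaks", "eats flies" }, 2, "is a frog"   },
    { { "chirps", "sings"      }, 2, "is a canary" },
    { { "is a frog"            }, 1, "is green"    },
    { { "is a canary"          }, 1, "is yellow"   },
};

/* Prove a goal by working backward from consequents to antecedents. */
static int prove(const char *goal) {
    for (size_t i = 0; i < sizeof facts / sizeof facts[0]; i++)
        if (strcmp(facts[i], goal) == 0)
            return 1;                       /* goal is a known fact */
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
        if (strcmp(rules[i].consequent, goal) != 0)
            continue;                       /* Then clause must match the goal */
        int ok = 1;
        for (int j = 0; j < rules[i].n_antecedents; j++)
            ok = ok && prove(rules[i].antecedents[j]);  /* new subgoals */
        if (ok)
            return 1;
    }
    return 0;
}

int main(void) {
    printf("Fritz is green: %s\n", prove("is green") ? "yes" : "no");
    return 0;
}
```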
https://en.wikipedia.org/wiki/Gmail
Gmail is a free email service provided by Google. As of 2019, it had 1.5 billion active users worldwide, making it the largest email service in the world. It provides a webmail interface, accessible through a web browser, and is also accessible through the official mobile application. Google also supports the use of third-party email clients via the POP and IMAP protocols. At its launch in 2004, Gmail provided a storage capacity of one gigabyte per user, which was significantly higher than its competitors offered at the time. Today, the service comes with 15 gigabytes of storage for free, which is divided among other Google services, such as Google Drive and Google Photos. Users in need of more storage can purchase Google One to increase this 15GB limit. Users can receive emails up to 50 megabytes in size, including attachments, while they can send emails up to 25 megabytes. In order to send larger files, users can insert files from Google Drive into the message. Gmail has a search-oriented interface and a "conversation view" similar to an Internet forum. The service is notable among website developers for its early adoption of Ajax. Google's mail servers automatically scan emails for multiple purposes, including to filter spam and malware, and to add context-sensitive advertisements next to emails. This advertising practice has been significantly criticized by privacy advocates due to concerns over unlimited data retention, ease of monitoring by third parties, users of other email providers not having agreed to the policy upon sending emails to Gmail addresses, and the potential for Google to change its policies to further decrease privacy by combining information with other Google data usage. The company has been the subject of lawsuits concerning the issues. Google has stated that email users must "necessarily expect" their emails to be subject to automated processing and claims that the service refrains from displaying ads next to potentially sensitive messages, suc
https://en.wikipedia.org/wiki/Behavioral%20modernity
Behavioral modernity is a suite of behavioral and cognitive traits that distinguishes current Homo sapiens from other anatomically modern humans, hominins, and primates. Most scholars agree that modern human behavior can be characterized by abstract thinking, planning depth, symbolic behavior (e.g., art, ornamentation), music and dance, exploitation of large game, and blade technology, among others. Underlying these behaviors and technological innovations are cognitive and cultural foundations that have been documented experimentally and ethnographically by evolutionary and cultural anthropologists. These human universal patterns include cumulative cultural adaptation, social norms, language, and extensive help and cooperation beyond close kin. Within the tradition of evolutionary anthropology and related disciplines, it has been argued that the development of these modern behavioral traits, in combination with the climatic conditions of the Last Glacial Period and Last Glacial Maximum causing population bottlenecks, contributed to the evolutionary success of Homo sapiens worldwide relative to Neanderthals, Denisovans, and other archaic humans. Debate continues as to whether anatomically modern humans were behaviorally modern as well. There are many theories on the evolution of behavioral modernity. These generally fall into two camps: cognitive and gradualist approaches. The Later Upper Paleolithic Model theorizes that modern human behavior arose through cognitive, genetic changes in Africa abruptly around 40,000–50,000 years ago around the time of the Out-of-Africa migration, prompting the movement of modern humans out of Africa and across the world. Other models focus on how modern human behavior may have arisen through gradual steps, with the archaeological signatures of such behavior appearing only through demographic or subsistence-based changes. Many cite evidence of behavioral modernity earlier (by at least about 150,000–75,000 years ago and possibly ear
https://en.wikipedia.org/wiki/Basionym
In the scientific name of organisms, basionym or basyonym means the original name on which a new name is based; the author citation of the new name should include the authors of the basionym in parentheses. The term "basionym" is used in both botany and zoology. In zoology, alternate terms such as original combination or protonym are sometimes used instead. Bacteriology uses a similar term, basonym, spelled without an i. Although "basionym" and "protonym" are often used interchangeably, they have slightly different technical definitions. A basionym is the correct spelling of the original name (according to the applicable nomenclature rules), while a protonym is the original spelling of the original name. These are typically the same, but in rare cases may differ. Use in botany The term "basionym" is used in botany only for the circumstances where a previous name exists with a useful description, and the International Code of Nomenclature for algae, fungi, and plants does not require a full description with the new name. A basionym must therefore be legitimate. Basionyms are regulated by the code's articles 6.10, 7.3, 41, and others. When a current name has a basionym, the author or authors of the basionym are included in parentheses at the start of the author citation. If a basionym is later found to be illegitimate, it becomes a replaced synonym and the current name's author citation must be changed so that the basionym authors do not appear. Combinatio nova The basionym of the name Picea abies (the Norway spruce) is Pinus abies. The species was originally named Pinus abies by Carl Linnaeus and so the author citation of the basionym is simply "L." Later on, botanist Gustav Karl Wilhelm Hermann Karsten decided this species should not be grouped in the same genus (Pinus) as the pines, so he transferred it to the genus Picea (the spruces). The new name Picea abies is combinatio nova, a new combination (abbreviated comb. nov.). With author citation, the curren
https://en.wikipedia.org/wiki/Transduction%20%28physiology%29
In physiology, transduction is the translation of an arriving stimulus into an action potential by a sensory receptor. It begins when a stimulus changes the membrane potential of a receptor cell. A receptor cell converts the energy in a stimulus into an electrical signal. Receptors are broadly split into two main categories: exteroceptors, which receive external sensory stimuli, and interoceptors, which receive internal sensory stimuli. Transduction and the senses The visual system In the visual system, sensory cells called rod and cone cells in the retina convert the physical energy of light signals into electrical impulses that travel to the brain. The light causes a conformational change in a protein called rhodopsin. This conformational change sets in motion a series of molecular events that result in a reduction of the electrochemical gradient of the photoreceptor. The decrease in the electrochemical gradient causes a reduction in the electrical signals going to the brain. Thus, in this example, more light hitting the photoreceptor results in the transduction of a signal into fewer electrical impulses, effectively communicating that stimulus to the brain. A change in neurotransmitter release is mediated through a second messenger system. In rods, this change in neurotransmitter release is graded; because of this, a change in light intensity causes the response of the rods to be much slower than expected (for a process associated with the nervous system). The auditory system In the auditory system, sound vibrations (mechanical energy) are transduced into electrical energy by hair cells in the inner ear. Sound vibrations from an object cause vibrations in air molecules, which in turn, vibrate the ear drum. The movement of the eardrum causes the bones of the middle ear (the ossicles) to vibrate. These vibrations then pass into the cochlea, the organ of hearing. Within the cochlea, the hair cells on the sensory epithelium of the organ of Corti bend and cause movement
https://en.wikipedia.org/wiki/Program%20slicing
In computer programming, program slicing is the computation of the set of program statements, the program slice, that may affect the values at some point of interest, referred to as a slicing criterion. Program slicing can be used in debugging to locate the source of errors more easily. Other applications of slicing include software maintenance, optimization, program analysis, and information flow control. Slicing techniques have seen rapid development since the original definition by Mark Weiser. At first, slicing was only static, i.e., applied on the source code with no other information than the source code. Bogdan Korel and Janusz Laski introduced dynamic slicing, which works on a specific execution of the program (for a given execution trace). Other forms of slicing exist, for instance path slicing. Static slicing Based on the original definition of Weiser, informally, a static program slice S consists of all statements in program P that may affect the value of variable v in a statement x. The slice is defined for a slicing criterion C=(x,v) where x is a statement in program P and v is a variable in x. A static slice includes all the statements that can affect the value of variable v at statement x for any possible input. Static slices are computed by backtracking dependencies between statements. More specifically, to compute the static slice for (x,v), we first find all statements that can directly affect the value of v before statement x is encountered. Recursively, for each statement y which can affect the value of v in statement x, we compute the slices for all variables z in y that affect the value of v. The union of all those slices is the static slice for (x,v). Example For example, consider the C program below. Let's compute the slice for ( write(sum), sum ). The value of sum is directly affected by the statements "sum = sum + i + w" if N>1 and "int sum = 0" if N <= 1. So, slice( write(sum), sum) is the union of three slices and the "in
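The program that this excerpt refers to was not preserved here; the following is an illustrative reconstruction in C, consistent with the statements quoted above but not necessarily identical to the original example (the write macro and the value of N are assumptions added to make it compile).

```c
#include <stdio.h>

#define N 10                          /* loop bound referenced by the slice text (assumed value) */
#define write(x) printf("%d\n", (x))  /* stand-in for the article's write() */

int main(void) {
    int i;
    int sum = 0;                  /* in the slice for (write(sum), sum) */
    int product = 1;              /* NOT in that slice */
    int w = 7;                    /* in the slice: w feeds sum */
    for (i = 1; i < N; i++) {     /* in the slice: controls the updates */
        sum = sum + i + w;        /* in the slice */
        product = product * i;    /* NOT in the slice */
    }
    write(sum);                   /* the slicing criterion statement */
    write(product);
    return 0;
}
```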
https://en.wikipedia.org/wiki/Receptor%20%28biochemistry%29
In biochemistry and pharmacology, receptors are chemical structures, composed of protein, that receive and transduce signals that may be integrated into biological systems. These signals are typically chemical messengers which bind to a receptor and produce physiological responses such as change in the electrical activity of a cell. For example, GABA, an inhibitory neurotransmitter inhibits electrical activity of neurons by binding to GABA receptors. There are three main ways the action of the receptor can be classified: relay of signal, amplification, or integration. Relaying sends the signal onward, amplification increases the effect of a single ligand, and integration allows the signal to be incorporated into another biochemical pathway. Receptor proteins can be classified by their location. Cell surface receptors also known as transmembrane receptors, include ligand-gated ion channels, G protein-coupled receptors, and enzyme-linked hormone receptors. Intracellular receptors are those found inside the cell, and include cytoplasmic receptors and nuclear receptors. A molecule that binds to a receptor is called a ligand and can be a protein, peptide (short protein), or another small molecule, such as a neurotransmitter, hormone, pharmaceutical drug, toxin, calcium ion or parts of the outside of a virus or microbe. An endogenously produced substance that binds to a particular receptor is referred to as its endogenous ligand. E.g. the endogenous ligand for the nicotinic acetylcholine receptor is acetylcholine, but it can also be activated by nicotine and blocked by curare. Receptors of a particular type are linked to specific cellular biochemical pathways that correspond to the signal. While numerous receptors are found in most cells, each receptor will only bind with ligands of a particular structure. This has been analogously compared to how locks will only accept specifically shaped keys. When a ligand binds to a corresponding receptor, it activates or inhibits th
https://en.wikipedia.org/wiki/Cell-mediated%20immunity
Cell-mediated immunity or cellular immunity is an immune response that does not involve antibodies. Rather, cell-mediated immunity is the activation of phagocytes, antigen-specific cytotoxic T-lymphocytes, and the release of various cytokines in response to an antigen. History In the late 19th century, in the Hippocratic tradition of medicine, the immune system was imagined as having two branches: humoral immunity, for which the protective function of immunization could be found in the humor (cell-free bodily fluid or serum), and cellular immunity, for which the protective function of immunization was associated with cells. CD4 cells or helper T cells provide protection against different pathogens. Naive T cells, which are immature T cells that have yet to encounter an antigen, are converted into activated effector T cells after encountering antigen-presenting cells (APCs). These APCs, such as macrophages, dendritic cells, and B cells in some circumstances, load antigenic peptides onto the major histocompatibility complex (MHC) of the cell, in turn presenting the peptide to receptors on T cells. The most important of these APCs are highly specialized dendritic cells, conceivably operating solely to ingest and present antigens. Activated effector T cells can be placed into three functioning classes, detecting peptide antigens originating from various types of pathogen: 1) cytotoxic T cells, which kill infected target cells by apoptosis without using cytokines; 2) Th1 cells, which primarily function to activate macrophages; and 3) Th2 cells, which primarily function to stimulate B cells into producing antibodies. In another view, the innate immune system and the adaptive immune system each comprise both humoral and cell-mediated components. Some cell-mediated components of the innate immune system include myeloid phagocytes, innate lymphoid cells (NK cells) and intraepithelial lymphocytes. Synopsis Cellular immunity protects the body through: T-ce
https://en.wikipedia.org/wiki/Localhost
In computer networking, localhost is a hostname that refers to the current computer used to access it. The name localhost is reserved for loopback purposes. It is used to access the network services that are running on the host via the loopback network interface. Using the loopback interface bypasses any local network interface hardware. Loopback The local loopback mechanism may be used to run a network service on a host without requiring a physical network interface, or without making the service accessible from the networks the computer may be connected to. For example, a locally installed website may be accessed from a Web browser by the URL http://localhost to display its home page. IPv4 network standards reserve the entire address block 127.0.0.0/8 (more than 16 million addresses) for loopback purposes. That means any packet sent to any of those addresses is looped back. The address 127.0.0.1 is the standard address for IPv4 loopback traffic; the rest are not supported by all operating systems. However, they can be used to set up multiple server applications on the host, all listening on the same port number. In the IPv6 addressing architecture there is only a single address assigned for loopback: ::1. The standard precludes the assignment of that address to any physical interface, as well as its use as the source or destination address in any packet sent to remote hosts. Name resolution The name localhost normally resolves to the IPv4 loopback address 127.0.0.1, and to the IPv6 loopback address ::1. This resolution is normally configured by the following lines in the operating system's hosts file: 127.0.0.1 localhost ::1 localhost The name may also be resolved by Domain Name System (DNS) servers, but there are special considerations governing the use of this name: An IPv4 or IPv6 address query for the name localhost must always resolve to the respective loopback address. Applications may resolve the name to a loopback address themselves, or pass it to the local name resolv
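A hedged, POSIX-only C sketch that resolves the name localhost with getaddrinfo and prints whatever loopback addresses the local resolver returns (typically 127.0.0.1 and/or ::1, matching the hosts-file entries shown above).

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    char buf[INET6_ADDRSTRLEN];

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;        /* ask for both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("localhost", NULL, &hints, &res) != 0) {
        fprintf(stderr, "resolution failed\n");
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        const void *addr;
        if (p->ai_family == AF_INET)
            addr = &((struct sockaddr_in *)p->ai_addr)->sin_addr;
        else
            addr = &((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
        inet_ntop(p->ai_family, addr, buf, sizeof buf);
        printf("%s\n", buf);            /* expect 127.0.0.1 and/or ::1 */
    }
    freeaddrinfo(res);
    return 0;
}
```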
https://en.wikipedia.org/wiki/Infinite%20impulse%20response
Infinite impulse response (IIR) is a property applying to many linear time-invariant systems that are distinguished by having an impulse response which does not become exactly zero past a certain point, but continues indefinitely. This is in contrast to a finite impulse response (FIR) system in which the impulse response does become exactly zero at times t > T for some finite T, thus being of finite duration. Common examples of linear time-invariant systems are most electronic and digital filters. Systems with this property are known as IIR systems or IIR filters. In practice, the impulse response, even of IIR systems, usually approaches zero and can be neglected past a certain point. However the physical systems which give rise to IIR or FIR responses are dissimilar, and therein lies the importance of the distinction. For instance, analog electronic filters composed of resistors, capacitors, and/or inductors (and perhaps linear amplifiers) are generally IIR filters. On the other hand, discrete-time filters (usually digital filters) based on a tapped delay line employing no feedback are necessarily FIR filters. The capacitors (or inductors) in the analog filter have a "memory" and their internal state never completely relaxes following an impulse (assuming the classical model of capacitors and inductors where quantum effects are ignored). But in the latter case, after an impulse has reached the end of the tapped delay line, the system has no further memory of that impulse and has returned to its initial state; its impulse response beyond that point is exactly zero. Implementation and design Although almost all analog electronic filters are IIR, digital filters may be either IIR or FIR. The presence of feedback in the topology of a discrete-time filter (such as the block diagram shown below) generally creates an IIR response. The z domain transfer function of an IIR filter contains a non-trivial denominator, describing those feedback terms. The transfer function of an
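A minimal C sketch of why feedback produces an infinite impulse response: a first-order recursive filter fed a unit impulse keeps producing non-zero output (a geometric tail) indefinitely. The coefficient value is an arbitrary illustrative choice.

```c
#include <stdio.h>

/* First-order recursive (IIR) filter y[n] = x[n] + a*y[n-1]. Driven by a
 * unit impulse, the feedback term keeps the output non-zero forever
 * (h[n] = a^n), unlike a feedback-free FIR tap line. */
int main(void) {
    const double a = 0.5;    /* feedback coefficient */
    double y = 0.0;
    for (int n = 0; n < 12; n++) {
        double x = (n == 0) ? 1.0 : 0.0;  /* unit impulse input */
        y = x + a * y;
        printf("h[%2d] = %.6f\n", n, y);  /* 1, 0.5, 0.25, ... never exactly 0 */
    }
    return 0;
}
```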
https://en.wikipedia.org/wiki/23%20enigma
The 23 enigma is a belief in the significance of the number 23. The concept of the 23 enigma has been popularized by various books, movies, and conspiracy theories, which suggest that the number 23 appears with unusual frequency in various contexts and may be a symbol of some larger, hidden significance. A topic related to the 23 enigma is eikositriophobia, which is the fear of the number 23. Origins Robert Anton Wilson cites William S. Burroughs as the first person to believe in the 23 enigma. Wilson, in a 1977 article in Fortean Times, related the following anecdote: In literature The 23 enigma can be seen in: Robert Anton Wilson and Robert Shea's 1975 book, The Illuminatus! Trilogy (therein called the "23/17 Phenomenon") Wilson's 1977 book Cosmic Trigger I: The Final Secret of the Illuminati (therein called "the Law of Fives" or "the 23 Enigma") Arthur Koestler's contribution to The Challenge of Chance: A Mass Experiment in Telepathy and Its Unexpected Outcome (1973) Principia Discordia The text titled Principia Discordia claims that "All things happen in fives, or are divisible by or are multiples of five, or are somehow directly or indirectly appropriate to 5"—this is referred to as the Law of Fives. The 23 enigma is regarded as a corollary of the Law of Fives because 2 + 3 = 5. In these works, 23 is considered lucky, unlucky, sinister, strange, sacred to the goddess Eris, or sacred to the unholy gods of the Cthulhu Mythos. The 23 enigma can be viewed as an example of apophenia, selection bias, and confirmation bias. In interviews, Wilson acknowledged the self-fulfilling nature of the 23 enigma, implying that the real value of the Law of Fives and the 23 enigma is in their demonstration of the mind's ability to perceive "truth" in nearly anything. In the Illuminatus! Trilogy, Wilson expresses the same view, saying that one can find numerological significance in anything, provided that one has "sufficient cleverness". In popular culture Music and ar
https://en.wikipedia.org/wiki/Diethyl%20malonate
Diethyl malonate, also known as DEM, is the diethyl ester of malonic acid. It occurs naturally in grapes and strawberries as a colourless liquid with an apple-like odour, and is used in perfumes. It is also used to synthesize other compounds such as barbiturates, artificial flavourings, vitamin B1, and vitamin B6. Structure and properties Malonic acid is a rather simple dicarboxylic acid, with the two carboxyl groups close together. In forming diethyl malonate from malonic acid, the hydroxyl group (−OH) on both of the carboxyl groups is replaced by an ethoxy group (−OEt; −OCH2CH3). The methylene group (−CH2−) in the middle of the malonic part of the diethyl malonate molecule is neighboured by two carbonyl groups (−C(=O)−). The hydrogen atoms on the carbon adjacent to the carbonyl group in a molecule are significantly more acidic than hydrogen atoms on a carbon adjacent to alkyl groups (up to 30 orders of magnitude). (This is known as the α position with respect to the carbonyl.) The hydrogen atoms on a carbon adjacent to two carbonyl groups are even more acidic because the carbonyl groups help stabilize the carbanion resulting from the removal of a proton from the methylene group between them. The extent of resonance stabilization of this compound's conjugate base is suggested by the three resonance forms below: Preparation Diethyl malonate is produced from the reaction of the sodium salt of chloroacetic acid with sodium cyanide, which produces the nitrile. This intermediate is then treated with ethanol in the presence of an acid catalyst: Alternatively, sodium chloroacetate undergoes carboxyesterification by treatment with carbon monoxide and ethanol: Dicobalt octacarbonyl is employed as the catalyst. Reactions Malonic ester synthesis One of the principal uses of this compound is in the malonic ester synthesis. The carbanion (2) formed by reacting diethyl malonate (1) with a suitable base can be alkylated with a suitable electrophile. This alkylated 1,3-dic
https://en.wikipedia.org/wiki/Gateway%2C%20Inc.
Gateway, Inc., previously Gateway 2000, Inc., was an American computer company originally based in Iowa and South Dakota. Founded by Ted Waitt and Mike Hammond in 1985, the company developed, manufactured, supported, and marketed a wide range of personal computers, computer monitors, servers, and computer accessories. At its peak in the year 2000, the company employed nearly 25,000 worldwide. Following a seven-year-long slump, punctuated by the acquisition of rival computer manufacturer eMachines in 2004 and massive consolidation of the company's various divisions in an attempt to curb losses and regain market share, Gateway was acquired by Taiwanese hardware and electronics corporation Acer, in October 2007 for US$710 million. History 1985–1990: Foundation Gateway was founded as the TIPC Network by Ted Waitt and Mike Hammond in September 1985. Ted Waitt was the company's principal founder; he was later joined by his older brother Norman Waitt, Jr. Before founding the company, Ted Waitt lived on his family's cattle farmhouse in Sioux City, Iowa. He had dropped out of two different colleges to work on the farm before landing a job at a computer store in Des Moines, Iowa. After nine months of experience gained on the job, Ted had the idea to start his own computer reselling company that would allow him to sell to niche customers who needed systems in between the lower- and upper-ends of the personal computer market, whose systems were either too limited in terms of speed and memory or too expensive with seldom-used higher-end features. Ted also found that educated salespeople could successfully sell computers to customers completely over the telephone, impressing on him the idea that he could eliminate overhead by having a robust remote salesforce and impressive catalog. Strapped for cash, however, Ted Waitt took out a $10,000 loan from his grandmother Mildred Smith and occupied the empty upper floor of his father's dilapidated cattle brokerage. He was joined by M
https://en.wikipedia.org/wiki/Dirichlet%20problem
In mathematics, a Dirichlet problem is the problem of finding a function which solves a specified partial differential equation (PDE) in the interior of a given region and takes prescribed values on the boundary of the region. The Dirichlet problem can be solved for many PDEs, although originally it was posed for Laplace's equation. In that case the problem can be stated as follows: Given a function f that has values everywhere on the boundary of a region in Rn, is there a unique continuous function u twice continuously differentiable in the interior and continuous on the boundary, such that u is harmonic in the interior and u = f on the boundary? This requirement is called the Dirichlet boundary condition. The main issue is to prove the existence of a solution; uniqueness can be proven using the maximum principle. History The Dirichlet problem goes back to George Green, who studied the problem on general domains with general boundary conditions in his Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, published in 1828. He reduced the problem to one of constructing what we now call Green's functions, and argued that Green's function exists for any domain. His methods were not rigorous by today's standards, but the ideas were highly influential in the subsequent developments. The next steps in the study of the Dirichlet problem were taken by Carl Friedrich Gauss, William Thomson (Lord Kelvin) and Peter Gustav Lejeune Dirichlet, after whom the problem was named, and the solution to the problem (at least for the ball) using the Poisson kernel was known to Dirichlet (judging by his 1850 paper submitted to the Prussian academy). Lord Kelvin and Dirichlet suggested a solution to the problem by a variational method based on the minimization of "Dirichlet's energy". According to Hans Freudenthal (in the Dictionary of Scientific Biography, vol. 11), Bernhard Riemann was the first mathematician who solved this va
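As a concrete illustration of solving a Dirichlet problem for Laplace's equation, here is a hedged C sketch using Jacobi relaxation on a unit-square grid; the boundary data, grid size, and tolerance are illustrative assumptions, not part of the article.

```c
#include <stdio.h>
#include <math.h>

#define N 50   /* grid points per side */

/* Dirichlet problem for Laplace's equation on the square, solved by
 * Jacobi relaxation: interior values are repeatedly replaced by the
 * average of their four neighbours while boundary values stay fixed. */
int main(void) {
    static double u[N][N], v[N][N];   /* zero-initialized */

    /* Boundary condition: u = 1 on the top edge, 0 on the other edges. */
    for (int i = 0; i < N; i++)
        u[0][i] = 1.0;

    for (int iter = 0; iter < 20000; iter++) {
        double diff = 0.0;
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++) {
                v[i][j] = 0.25 * (u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1]);
                diff = fmax(diff, fabs(v[i][j] - u[i][j]));
            }
        for (int i = 1; i < N - 1; i++)
            for (int j = 1; j < N - 1; j++)
                u[i][j] = v[i][j];
        if (diff < 1e-6)
            break;
    }
    printf("u at centre = %.4f\n", u[N/2][N/2]);  /* ~0.25 by symmetry */
    return 0;
}
```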
https://en.wikipedia.org/wiki/Stratification%20%28mathematics%29
Stratification has several usages in mathematics. In mathematical logic In mathematical logic, stratification is any consistent assignment of numbers to predicate symbols guaranteeing that a unique formal interpretation of a logical theory exists. Specifically, we say that a set of clauses of the form P ← Q1 ∧ ... ∧ Qn ∧ ¬Qn+1 ∧ ... ∧ ¬Qn+m is stratified if and only if there is a stratification assignment S that fulfills the following conditions: If a predicate P is positively derived from a predicate Q (i.e., P is the head of a rule, and Q occurs positively in the body of the same rule), then the stratification number of P must be greater than or equal to the stratification number of Q, in short S(P) ≥ S(Q). If a predicate P is derived from a negated predicate Q (i.e., P is the head of a rule, and Q occurs negatively in the body of the same rule), then the stratification number of P must be greater than the stratification number of Q, in short S(P) > S(Q). The notion of stratified negation leads to a very effective operational semantics for stratified programs in terms of the stratified least fixpoint, that is obtained by iteratively applying the fixpoint operator to each stratum of the program, from the lowest one up. Stratification is not only useful for guaranteeing unique interpretation of Horn clause theories. In a specific set theory In New Foundations (NF) and related set theories, a formula φ in the language of first-order logic with equality and membership is said to be stratified if and only if there is a function σ which sends each variable appearing in φ (considered as an item of syntax) to a natural number (this works equally well if all integers are used) in such a way that any atomic formula x ∈ y appearing in φ satisfies σ(x) + 1 = σ(y) and any atomic formula x = y appearing in φ satisfies σ(x) = σ(y). It turns out that it is sufficient to require that these conditions be satisfied only when both variables in an atomic formula are bound in the set abstract under consideration. A set abstract satisfying this weaker condition is said to be
https://en.wikipedia.org/wiki/Position-independent%20code
In computing, position-independent code (PIC) or position-independent executable (PIE) is a body of machine code that, being placed somewhere in the primary memory, executes properly regardless of its absolute address. PIC is commonly used for shared libraries, so that the same library code can be loaded at a location in each program's address space where it does not overlap with other memory in use by, for example, other shared libraries. PIC was also used on older computer systems that lacked an MMU, so that the operating system could keep applications away from each other even within the single address space of an MMU-less system. Position-independent code can be executed at any memory address without modification. This differs from absolute code, which must be loaded at a specific location to function correctly, and load-time locatable (LTL) code, in which a linker or program loader modifies a program before execution, so it can be run only from a particular memory location. Generating position-independent code is often the default behavior for compilers, but they may place restrictions on the use of some language features, such as disallowing use of absolute addresses (position-independent code has to use relative addressing). Instructions that refer directly to specific memory addresses sometimes execute faster, and replacing them with equivalent relative-addressing instructions may result in slightly slower execution, although modern processors make the difference practically negligible. History In early computers such as the IBM 701 (29 April 1952) or the UNIVAC I (31 March 1951) code was not position-independent: each program was built to load into and run from a particular address. Those early computers did not have an operating system and were not multitasking-capable. Programs were loaded into main storage (or even stored on magnetic drum for execution directly from there) and run one at a time. In such an operational context, position-independent cod
https://en.wikipedia.org/wiki/Sample%20and%20hold
In electronics, a sample and hold (also known as sample and follow) circuit is an analog device that samples (captures, takes) the voltage of a continuously varying analog signal and holds (locks, freezes) its value at a constant level for a specified minimum period of time. Sample and hold circuits and related peak detectors are the elementary analog memory devices. They are typically used in analog-to-digital converters to eliminate variations in input signal that can corrupt the conversion process. They are also used in electronic music, for instance to impart a random quality to successively-played notes. A typical sample and hold circuit stores electric charge in a capacitor and contains at least one switching device such as a FET (field effect transistor) switch and normally one operational amplifier. To sample the input signal, the switch connects the capacitor to the output of a buffer amplifier. The buffer amplifier charges or discharges the capacitor so that the voltage across the capacitor is practically equal, or proportional to, input voltage. In hold mode the switch disconnects the capacitor from the buffer. The capacitor is invariably discharged by its own leakage currents and useful load currents, which makes the circuit inherently volatile, but the loss of voltage (voltage drop) within a specified hold time remains within an acceptable error margin for all but the most demanding applications. Purpose Sample and hold circuits are used in linear systems. In some kinds of analog-to-digital converters (ADCs), the input is compared to a voltage generated internally from a digital-to-analog converter (DAC). The circuit tries a series of values and stops converting once the voltages are equal, within some defined error margin. If the input value was permitted to change during this comparison process, the resulting conversion would be inaccurate and possibly unrelated to the true input value. Such successive approximation converters will often incorporate
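The track-and-hold behaviour described above can be sketched numerically; the leakage rate and switching pattern below are arbitrary illustrative values, not taken from any particular circuit:

import math

dt = 1e-4                                  # simulation time step, seconds
leak_per_step = 1e-4                       # fractional droop per step while holding (made up)

held = 0.0
trace = []
for i in range(2000):
    t = i * dt
    vin = math.sin(2 * math.pi * 10 * t)   # 10 Hz input signal
    tracking = (i % 200) < 10              # close the switch briefly every 20 ms
    if tracking:
        held = vin                         # capacitor tracks the buffered input
    else:
        held *= 1.0 - leak_per_step        # held voltage droops slowly through leakage
    trace.append(held)

print(round(trace[10], 4), round(trace[199], 4))   # value just after sampling vs. after drooping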
https://en.wikipedia.org/wiki/SGI%20Octane
Octane series of IRIX workstations was developed and sold by SGI in the 2000s. Octane and Octane2 are two-way multiprocessing-capable workstations, originally based on the MIPS Technologies R10000 microprocessor. Newer Octanes are based on the R12000 and R14000. The Octane2 has four improvements: a revised power supply, system board, and Xbow ASIC. The Octane2 has VPro graphics and supports all the VPro cards. Later revisions of the Octane include some of the improvements introduced in the Octane2. The codenames for the Octane and Octane2 are "Racer" and "Speedracer" respectively. The Octane is the direct successor to the Indigo2, and was succeeded by the Tezro, and its immediate sibling is the O2. SGI withdrew the Octane2 from the price book on May 26, 2004, and ceased Octane2 production on June 25, 2004. Support for the Octane2 ceased in June 2009. Octane III was introduced in early 2010 after SGI's bankruptcy reorganization. It is a series of Intel based deskside systems, as a Xeon-based workstation with 1 or 2 3U EATX trays, or as cluster servers with 10 system trays configured with up to 10 Twin Blade nodes or 20 Intel ATOM MINI-ITX nodes. Hardware The Octane's system board is designated as IP30, based on SGI's Xtalk architecture. Xtalk does not use a system bus, but a Crossbow application-specific integrated circuit (ASIC), referred to as Xbow, a dynamic crossbar switch that connects the XIO ports to the hub. One of the ports is used for the processor and memory subsystem, one is available for PCI-X expansion, and four are XIO slots (packet-based high-bandwidth bus, somewhat similar to HyperTransport). This makes it very similar to a single node of the Origin 200 system. The XIO can be bridged to PCI-X, using a chip named BRIDGE. This bridging includes the system board (for the IOC3 multi-I/O chip, two ISP1040B SCSI controllers and RAD1 audio), MENET cards (four IOC3s) and the PCI cage (used for PCI cards in Octane). The Octane uses ARCS boot firmware, li
https://en.wikipedia.org/wiki/TF1
TF1 (; standing for Télévision Française 1) is a French commercial television network owned by TF1 Group, controlled by the Bouygues conglomerate. TF1's average market share of 24% makes it the most popular domestic network. TF1 is part of the TF1 Group of mass media companies, which also includes the news channel LCI. It previously owned the satellite TV provider TPS, which was sold to Canal+ Group. The network is a supporter of the Hybrid Broadcast Broadband TV (HBBTV) initiative promoting and establishing an open European standard for hybrid set-top boxes for the reception of terrestrial TV and broadband multimedia applications with a single user interface. History It was the only television channel in France for 28 years, and has changed its name numerous times since the creation of Radio-PTT Vision on 26 April 1935, making it among the oldest television stations in the world, and one of the very few prewar television stations to remain in existence to the present day. It became Radiodiffusion nationale Télévision (RN Télévision) in 1937, Fernsehsender Paris (Paris Television) during German occupation in 1943, RDF Télévision française in 1944, RTF Télévision in 1949, la Première chaîne de la RTF in 1963 following the creation of the second channel, la Première chaîne de l'ORTF in 1964 and finally, Télévision Française 1 (TF1) in 1975. Radio-PTT Vision (1935–1937) The first public demonstration of a 30-line mechanical television took place on April 14, 1931. The image rendering was an improvement upon Baird's thanks to the development of the "moving light point" system and the use of a camera with Weiller mirror drums by the engineer René Barthélemy, head of the radio laboratory of the Compagnie des Compteurs (CdC) of Montrouge. In charge of French broadcasting, the PTT administration carried out some rudimentary television experiments from December 1931 by broadcasting experimental 30 to 45 minute broadcasts at variable times from Monday to Saturday with Baird
https://en.wikipedia.org/wiki/Absorbed%20dose
Absorbed dose is a dose quantity which is the measure of the energy deposited in matter by ionizing radiation per unit mass. Absorbed dose is used in the calculation of dose uptake in living tissue in both radiation protection (reduction of harmful effects), and radiology (potential beneficial effects, for example in cancer treatment). It is also used to directly compare the effect of radiation on inanimate matter such as in radiation hardening. The SI unit of measure is the gray (Gy), which is defined as one joule of energy absorbed per kilogram of matter. The older, non-SI CGS unit, the rad, is sometimes also used, predominantly in the USA. Deterministic effects Conventionally, in radiation protection, unmodified absorbed dose is only used for indicating the immediate health effects due to high levels of acute dose. These are tissue effects, such as in acute radiation syndrome, which are also known as deterministic effects. These are effects which are certain to happen in a short time. The time between exposure and vomiting may be used as a heuristic for quantifying a dose when more precise means of testing are unavailable. Effects of acute radiation exposure Radiation therapy The measurement of absorbed dose in tissue is of fundamental importance in radiobiology as it is the measure of the amount of energy the incident radiation is imparting to the target tissue. Dose computation The absorbed dose is equal to the radiation exposure (ions or C/kg) of the radiation beam multiplied by the ionization energy of the medium to be ionized. For example, the ionization energy of dry air at 20 °C and 101.325 kPa of pressure is 33.97 J/C (33.97 eV per ion pair). Therefore, an exposure of 2.58 × 10⁻⁴ C/kg (1 roentgen) would deposit an absorbed dose of 8.76 mGy (0.00876 Gy or 0.876 rad) in dry air at those conditions. When the absorbed dose is not uniform, or when it is only applied to a portion of a body or object, an absorbed dose representative of the entire item can be calculated by taking a mass-we
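The air-exposure example above is easy to check numerically; the constant names below are illustrative, and the figures are the ones quoted in the text:

# Illustrative check of the exposure-to-dose conversion described above.
W_AIR = 33.97            # J/C, mean energy per unit charge to ionize dry air (33.97 eV per ion pair)
ROENTGEN = 2.58e-4       # C/kg, the definition of 1 roentgen

def absorbed_dose_in_air(exposure_c_per_kg):
    """Absorbed dose in gray for a given exposure in C/kg (dry air)."""
    return exposure_c_per_kg * W_AIR

dose_gy = absorbed_dose_in_air(ROENTGEN)
print(f"{dose_gy:.5f} Gy")         # ~0.00876 Gy
print(f"{dose_gy * 100:.3f} rad")  # ~0.876 rad (1 Gy = 100 rad)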
https://en.wikipedia.org/wiki/Syncytium
A syncytium (; : syncytia; from Greek: σύν syn "together" and κύτος kytos "box, i.e. cell") or symplasm is a multinucleate cell which can result from multiple cell fusions of uninuclear cells (i.e., cells with a single nucleus), in contrast to a coenocyte, which can result from multiple nuclear divisions without accompanying cytokinesis. The muscle cell that makes up animal skeletal muscle is a classic example of a syncytium cell. The term may also refer to cells interconnected by specialized membranes with gap junctions, as seen in the heart muscle cells and certain smooth muscle cells, which are synchronized electrically in an action potential. The field of embryogenesis uses the word syncytium to refer to the coenocytic blastoderm embryos of invertebrates, such as Drosophila melanogaster. Physiological examples Protists In protists, syncytia can be found in some rhizarians (e.g., chlorarachniophytes, plasmodiophorids, haplosporidians) and acellular slime moulds, dictyostelids (amoebozoans), acrasids (Excavata) and Haplozoon. Plants Some examples of plant syncytia, which result during plant development, include: Developing endosperm The non-articulated laticifers The plasmodial tapetum, and The "nucellar plasmodium" of the family Podostemaceae Fungi A syncytium is the normal cell structure for many fungi. Most fungi of Basidiomycota exist as a dikaryon in which thread-like cells of the mycelium are partially partitioned into segments each containing two differing nuclei, called a heterokaryon. Animals Nerve net The neurons which makes up the subepithelial nerve net in comb jellies (Ctenophora) are fused into a neural syncytium, consisting of a continuous plasma membrane instead of being connected through synapses. Skeletal muscle A classic example of a syncytium is the formation of skeletal muscle. Large skeletal muscle fibers form by the fusion of thousands of individual muscle cells. The multinucleated arrangement is important in pathologic states such
https://en.wikipedia.org/wiki/Dragline%20excavator
A dragline excavator is a piece of heavy equipment used in civil engineering and surface mining. Draglines fall into two broad categories: those that are based on standard, lifting cranes, and the heavy units which have to be built on-site. Most crawler cranes, with an added winch drum on the front, can act as a dragline. These units (like other cranes) are designed to be dismantled and transported over the road on flatbed trailers. Draglines used in civil engineering are almost always of this smaller, crane type. These are used for road, port construction, pond and canal dredging, and as pile driving rigs. These types are built by crane manufacturers such as Link-Belt and Hyster. The much larger type which is built on site is commonly used in strip-mining operations to remove overburden above coal and more recently for oil sands mining. The largest heavy draglines are among the largest mobile land machines ever built. The smallest and most common of the heavy type weigh around 8,000 tons while the largest built weighed around 13,000 tons. A dragline bucket system consists of a large bucket which is suspended from a boom (a large truss-like structure) with wire ropes. The bucket is maneuvered by means of a number of ropes and chains. The hoist rope, powered by large diesel or electric motors, supports the bucket and hoist-coupler assembly from the boom. The dragrope is used to draw the bucket assembly horizontally. By skillful maneuver of the hoist and the dragropes the bucket is controlled for various operations. A schematic of a large dragline bucket system is shown below. History The dragline was invented in 1904 by John W. Page (as a partner of the firm Page & Schnable Contracting) for use in digging the Chicago Canal. By 1912, Page realized that building draglines was more lucrative than contracting, so he created the Page Engineering Company to build draglines. Page built its first crude walking dragline in 1923. These used legs operated by rack and pinio
https://en.wikipedia.org/wiki/P%E2%80%93n%20junction
A p–n junction is a boundary or interface between two types of semiconductor materials, p-type and n-type, inside a single crystal of semiconductor. The "p" (positive) side contains an excess of holes, while the "n" (negative) side contains an excess of electrons in the outer shells of the electrically neutral atoms there. This allows electric current to pass through the junction only in one direction. The p- and n-type regions creating the junction are made by doping the semiconductor, for example by ion implantation, diffusion of dopants, or by epitaxy (growing a layer of crystal doped with one type of dopant on top of a layer of crystal doped with another type of dopant). p–n junctions are elementary "building blocks" of semiconductor electronic devices such as diodes, transistors, solar cells, light-emitting diodes (LEDs), and integrated circuits; they are the active sites where the electronic action of the device takes place. For example, a common type of transistor, the bipolar junction transistor (BJT), consists of two p–n junctions in series, in the form n–p–n or p–n–p; while a diode can be made from a single p-n junction. A Schottky junction is a special case of a p–n junction, where metal serves the role of the n-type semiconductor. History The invention of the p–n junction is usually attributed to American physicist Russell Ohl of Bell Laboratories in 1939. Two years later (1941), Vadim Lashkaryov reported discovery of p–n junctions in Cu2O and silver sulphide photocells and selenium rectifiers. The modern theory of p-n junctions was elucidated by William Shockley in his classic work Electrons and Holes in Semiconductors (1950). Properties The p–n junction possesses a useful property for modern semiconductor electronics. A p-doped semiconductor is relatively conductive. The same is true of an n-doped semiconductor, but the junction between them can become depleted of charge carriers, depending on the relative voltages of the two semiconductor regi
https://en.wikipedia.org/wiki/Ferranti%20Mark%201
The Ferranti Mark 1, also known as the Manchester Electronic Computer in its sales literature, and thus sometimes called the Manchester Ferranti, was produced by British electrical engineering firm Ferranti Ltd. It was the world's first commercially available electronic general-purpose stored program digital computer. Although preceded as a commercial digital computer by the BINAC and the Z4, the Z4 was electromechanical and lacked software programmability, while BINAC never operated successfully after delivery The Ferranti Mark 1 was "the tidied up and commercialised version of the Manchester Mark I". The first machine was delivered to the Victoria University of Manchester in February 1951 (publicly demonstrated in July) ahead of the UNIVAC I which was delivered to the United States Census Bureau in late December 1952, having been sold on 31 March 1951. History and specifications Based on the Manchester Mark 1, which was designed at the University of Manchester by Freddie Williams and Tom Kilburn, the machine was built by Ferranti of the United Kingdom. The main improvements over it were in the size of the primary and secondary storage, a faster multiplier, and additional instructions. The Mark 1 used a 20-bit word stored as a single line of dots of electric charges settled on the surface of a Williams tube display, each cathodic tube storing 64 lines of dots. Instructions were stored in a single word, while numbers were stored in two words. The main memory consisted of eight tubes, each storing one such page of 64 words. Other tubes stored the single 80-bit accumulator (A), the 40-bit "multiplicand/quotient register" (MQ) and eight "B-lines", or index registers, which was one of the unique features of the Mark 1 design. The accumulator could also be addressed as two 40-bit words. An extra 20-bit word per tube stored an offset value into the secondary storage. Secondary storage was provided in the form of a 512-page magnetic drum, storing two pages per track,
https://en.wikipedia.org/wiki/Tencent%20QQ
Tencent QQ (), also known as QQ, is an instant messaging software service and web portal developed by the Chinese technology company Tencent. QQ offers services that provide online social games, music, shopping, microblogging, movies, and group and voice chat software. As of March 2023, there were 597 million monthly active QQ accounts. History Tencent QQ was first released in China in February 1999 under the name of OICQ ("Open ICQ", a reference to the early IM service ICQ). After the threat of a trademark infringement lawsuit by the AOL-owned ICQ, the product's name was changed to QQ (with "Q" and "QQ" used to imply "cute"). The software inherited existing functions from ICQ, and additional features such as software skins, people's images, and emoticons. QQ was first released as a "network paging" real-time communications service. Other features were later added, such as chatrooms, games, personal avatars (similar to "Meego" in MSN), online storage, and Internet dating services. The official client runs on Microsoft Windows and a beta public version was launched for Mac OS X version 10.4.9 or newer. Formerly, two web versions, WebQQ (full version) and WebQQ Mini (Lite version), which made use of Ajax, were available. Development, support, and availability of WebQQ Mini, however, has since been discontinued. On 31 July 2008, Tencent released an official client for Linux, but this has not been made compatible with the Windows version and it is not capable of voice chat. In response to competition with other instant messengers, such as Windows Live Messenger, Tencent released Tencent Messenger, which is aimed at businesses. Membership In 2002, Tencent stopped its free membership registration, requiring all new members to pay a fee. In 2003, however, this decision was reversed due to pressure from other instant messaging services such as Windows Live Messenger and Sina UC. Tencent currently offers a premium membership scheme, where premium members enjoy features
https://en.wikipedia.org/wiki/Simulcast
Simulcast (a portmanteau of simultaneous broadcast) is the broadcasting of programmes/programs or events across more than one resolution, bitrate or medium, or more than one service on the same medium, at exactly the same time (that is, simultaneously). For example, Absolute Radio is simulcast on both AM and on satellite radio. Likewise, the BBC's Prom concerts were formerly simulcast on both BBC Radio 3 and BBC Television. Another application is the transmission of the original-language soundtrack of movies or TV series over local or Internet radio, with the television broadcast having been dubbed into a local language. Early radio simulcasts Before launching stereo radio, experiments were conducted by transmitting left and right channels on different radio channels. The earliest record found was a broadcast by the BBC in 1926 of a Halle Orchestra concert from Manchester, using the wavelengths of the regional stations and Daventry. In its earliest days, the BBC often transmitted the same programme on the "National Service" and the "Regional Network". An early use of the word "simulcast" is from 1925. Between 1990 and 1994 the BBC broadcast a channel of entertainment (Radio 5) which offered a wide range of simulcasts, taking programmes from the BBC World Service and Radio 1, 2, 3 and 4 for simultaneous broadcast. Simulcasting to provide stereo sound for TV broadcasts Before stereo TV sound transmission was possible, simulcasting on TV and radio was a method of effectively transmitting "stereo" sound to music TV broadcasts. Typically, an FM frequency in the broadcast area for viewers to tune their stereo systems to would be displayed on the screen. The band Grateful Dead and their concert "Great Canadian Train Ride" in 1970 was the first TV broadcast of a live concert with FM simulcast. In the 1970s WPXI in Pittsburgh broadcast a live Boz Scaggs performance which had the audio simultaneously broadcast on two FM radio stations to create a quadrophonic sound, the
https://en.wikipedia.org/wiki/The%20Source%20%28online%20service%29
The Source (Source Telecomputing Corporation) was an early online service, one of the first such services to be oriented toward and available to the general public. The Source described itself as follows: The Source was in operation from 1978 to 1989, when it was purchased by rival CompuServe and discontinued sometime thereafter. The Source's headquarters were located at 1616 Anderson Road, McLean, Virginia, 22102. History The Source was founded in 1978 as Digital Broadcasting Corporation by Bill von Meister, with support from Jack R. Taub, a businessman who had been very successful publishing the Scott catalogue of postage stamps. Initially the idea was to transmit email using an unused subcarrier piggy-backed onto FM radio signals. Instead, the two hit on the idea of an "information utility," using cheap overnight excess capacity in minicomputers and data networks to make online information available to dial-up subscribers. Dialcom Inc., located in Silver Spring, MD was the backbone of The Source and supplied all of the networking, computing power and software development until the sale of The Source to The Reader's Digest Association. Robert Ryan was the President and CEO of Dialcom for fifteen years and concurrently served as the founding President of The Source and remained in that role for three years and then decided to return full-time to Dialcom. Having secured publishing rights and put in place the necessary software, the system was announced at Comdex in June 1979. At a launch in New York the following month, Isaac Asimov declared it to be "the start of the information age." Prices were initially US$100 for a subscription, then $2.75 per hour off-peak. However, the project had already run up large debts, and soon began running out of money. Taub sold an 80% controlling stake to The Reader's Digest Association to keep the company afloat. Von Meister initiated legal action, and received a $1 million pay-off. He went on to found Control Video Corporation,
https://en.wikipedia.org/wiki/Complete%20partial%20order
In mathematics, the phrase complete partial order is variously used to refer to at least three similar, but distinct, classes of partially ordered sets, characterized by particular completeness properties. Complete partial orders play a central role in theoretical computer science: in denotational semantics and domain theory. Definitions A complete partial order, abbreviated cpo, can refer to any of the following concepts depending on context. A partially ordered set is a directed-complete partial order (dcpo) if each of its directed subsets has a supremum. A subset of a partial order is directed if it is non-empty and every pair of elements has an upper bound in the subset. In the literature, dcpos sometimes also appear under the label up-complete poset. A partially ordered set is a pointed directed-complete partial order if it is a dcpo with a least element. They are sometimes abbreviated cppos. A partially ordered set is a ω-complete partial order (ω-cpo) if it is a poset in which every ω-chain (x1 ≤ x2 ≤ x3 ≤ x4 ≤ ...) has a supremum that belongs to the poset. Every dcpo is an ω-cpo, since every ω-chain is a directed set, but the converse is not true. However, every ω-cpo with a basis is also a dcpo (with the same basis). An ω-cpo (dcpo) with a basis is also called a continuous ω-cpo (continuous dcpo). Note that complete partial order is never used to mean a poset in which all subsets have suprema; the terminology complete lattice is used for this concept. Requiring the existence of directed suprema can be motivated by viewing directed sets as generalized approximation sequences and suprema as limits of the respective (approximative) computations. This intuition, in the context of denotational semantics, was the motivation behind the development of domain theory. The dual notion of a directed-complete partial order is called a filtered-complete partial order. However, this concept occurs far less frequently in practice, since one usually can work o
https://en.wikipedia.org/wiki/Bell%20polynomials
In combinatorial mathematics, the Bell polynomials, named in honor of Eric Temple Bell, are used in the study of set partitions. They are related to Stirling and Bell numbers. They also occur in many applications, such as in Faà di Bruno's formula. Definitions Exponential Bell polynomials The partial or incomplete exponential Bell polynomials are a triangular array of polynomials given by Bn,k(x1, x2, ..., xn−k+1) = Σ n!/(j1! j2! ⋯ jn−k+1!) · (x1/1!)^j1 (x2/2!)^j2 ⋯ (xn−k+1/(n−k+1)!)^jn−k+1, where the sum is taken over all sequences j1, j2, j3, ..., jn−k+1 of non-negative integers such that these two conditions are satisfied: j1 + j2 + ⋯ + jn−k+1 = k and j1 + 2j2 + 3j3 + ⋯ + (n−k+1)jn−k+1 = n. The sum Bn(x1, ..., xn) = Σk=1..n Bn,k(x1, x2, ..., xn−k+1) is called the nth complete exponential Bell polynomial. Ordinary Bell polynomials Likewise, the partial ordinary Bell polynomial is defined by B̂n,k(x1, x2, ..., xn−k+1) = Σ k!/(j1! j2! ⋯ jn−k+1!) · x1^j1 x2^j2 ⋯ xn−k+1^jn−k+1, where the sum runs over all sequences j1, j2, j3, ..., jn−k+1 of non-negative integers such that j1 + j2 + ⋯ + jn−k+1 = k and j1 + 2j2 + 3j3 + ⋯ + (n−k+1)jn−k+1 = n. The ordinary Bell polynomials can be expressed in terms of the exponential Bell polynomials: B̂n,k(x1, x2, ..., xn−k+1) = (k!/n!) · Bn,k(1!·x1, 2!·x2, ..., (n−k+1)!·xn−k+1). In general, Bell polynomial refers to the exponential Bell polynomial, unless otherwise explicitly stated. Combinatorial meaning The exponential Bell polynomial encodes the information related to the ways a set can be partitioned. For example, if we consider a set {A, B, C}, it can be partitioned into two non-empty, non-overlapping subsets, which are also referred to as parts or blocks, in 3 different ways: {{A}, {B, C}} {{B}, {A, C}} {{C}, {B, A}} Thus, we can encode the information regarding these partitions as B3,2(x1, x2) = 3·x1·x2. Here, the subscripts of B3,2 tell us that we are considering the partitioning of a set with 3 elements into 2 blocks. The subscript of each xi indicates the presence of a block with i elements (or block of size i) in a given partition. So here, x2 indicates the presence of a block with two elements. Similarly, x1 indicates the presence of a block with a single element. An exponent j on xi (that is, a factor xi^j) indicates that there are j such blocks of size i in a single partition. Here, the fact that both x1 and x2 have exponent 1 indicates that there is only one such block in a given partition. The coeff
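Assuming SymPy is available, its bell function can reproduce the combinatorial example above (B3,2 counts the three partitions of a 3-element set into 2 blocks):

# Incomplete exponential Bell polynomials via SymPy's bell(n, k, symbols).
from sympy import bell, symbols

x1, x2, x3 = symbols('x1 x2 x3')

print(bell(3, 2, (x1, x2)))      # 3*x1*x2  -> three partitions into 2 blocks
print(bell(3, 1, (x1, x2, x3)))  # x3       -> one block of size 3
print(bell(3, 3, (x1,)))         # x1**3    -> three singleton blocks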
https://en.wikipedia.org/wiki/Generator%20%28computer%20programming%29
In computer science, a generator is a routine that can be used to control the iteration behaviour of a loop. All generators are also iterators. A generator is very similar to a function that returns an array, in that a generator has parameters, can be called, and generates a sequence of values. However, instead of building an array containing all the values and returning them all at once, a generator yields the values one at a time, which requires less memory and allows the caller to get started processing the first few values immediately. In short, a generator looks like a function but behaves like an iterator. Generators can be implemented in terms of more expressive control flow constructs, such as coroutines or first-class continuations. Generators, also known as semicoroutines, are a special case of (and weaker than) coroutines, in that they always yield control back to the caller (when passing a value back), rather than specifying a coroutine to jump to; see comparison of coroutines with generators. Uses Generators are usually invoked inside loops. The first time that a generator invocation is reached in a loop, an iterator object is created that encapsulates the state of the generator routine at its beginning, with arguments bound to the corresponding parameters. The generator's body is then executed in the context of that iterator until a special yield action is encountered; at that time, the value provided with the yield action is used as the value of the invocation expression. The next time the same generator invocation is reached in a subsequent iteration, the execution of the generator's body is resumed after the yield action, until yet another yield action is encountered. In addition to the yield action, execution of the generator body can also be terminated by a finish action, at which time the innermost loop enclosing the generator invocation is terminated. In more complicated situations, a generator may be used manually outside of a loop to c
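Python's yield keyword implements exactly this behaviour; in the sketch below the generator produces squares one at a time, and nothing is computed until the caller asks for the next value:

# A minimal Python generator: values are produced lazily, one per request,
# instead of building a full list up front.
def squares(limit):
    n = 0
    while n < limit:
        yield n * n      # execution pauses here until the next value is requested
        n += 1

# A loop drives the generator implicitly.
for value in squares(5):
    print(value)         # 0, 1, 4, 9, 16

# The generator can also be driven manually, outside of a loop.
gen = squares(3)
print(next(gen))         # 0
print(next(gen))         # 1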
https://en.wikipedia.org/wiki/Systems%20development%20life%20cycle
In systems engineering, information systems and software engineering, the systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation. Overview A systems development life cycle is composed of distinct work phases that are used by systems engineers and systems developers to deliver information systems. Like anything that is manufactured on an assembly line, an SDLC aims to produce high-quality systems that meet or exceed expectations, based on requirements, by delivering systems within scheduled time frames and cost estimates. Computer systems are complex and often link components with varying origins. Various SDLC methodologies have been created, such as waterfall, spiral, agile, rapid prototyping, incremental, and synchronize and stabilize. SDLC methodologies fit within a flexibility spectrum ranging from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes that allow for rapid changes. Iterative methodologies, such as Rational Unified Process and dynamic systems development method, focus on stabilizing project scope and iteratively expanding or improving products. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide larger projects and limit risks to successful and predictable results. Anamorphic development is guided by project scope and adaptive iterations. In project management a project can include both a project life cycle (PLC) and an SDLC, during which somewhat different activities occur. According to Taylor (2004
https://en.wikipedia.org/wiki/General%20protection%20fault
A general protection fault (GPF) in the x86 instruction set architectures (ISAs) is a fault (a type of interrupt) initiated by ISA-defined protection mechanisms in response to an access violation caused by some running code, either in the kernel or a user program. The mechanism is first described in Intel manuals and datasheets for the Intel 80286 CPU, which was introduced in 1983; it is also described in section 9.8.13 in the Intel 80386 programmer's reference manual from 1986. A general protection fault is implemented as an interrupt (vector number 13 (0Dh)). Some operating systems may also classify some exceptions not related to access violations, such as illegal opcode exceptions, as general protection faults, even though they have nothing to do with memory protection. If a CPU detects a protection violation, it stops executing the code and sends a GPF interrupt. In most cases, the operating system removes the failing process from the execution queue, signals the user, and continues executing other processes. If, however, the operating system fails to catch the general protection fault, i.e. another protection violation occurs before the operating system returns from the previous GPF interrupt, the CPU signals a double fault, stopping the operating system. If yet another failure (triple fault) occurs, the CPU is unable to recover; since 80286, the CPU enters a special halt state called "Shutdown", which can only be exited through a hardware reset. The IBM PC AT, the first PC-compatible system to contain an 80286, has hardware that detects the Shutdown state and automatically resets the CPU when it occurs. All descendants of the PC AT do the same, so in a PC, a triple fault causes an immediate system reset. Specific behavior In Microsoft Windows, the general protection fault presents with varied language, depending on product version: In Unix and Linux, the errors are reported separately (e.g. segmentation fault for memory errors). Memory errors In memory err
https://en.wikipedia.org/wiki/Hilbert%20transform
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function u(t) of a real variable and produces another function of a real variable, H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function 1/(πt) (see the definition below). The Hilbert transform has a particularly simple representation in the frequency domain: It imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see the discussion of the Fourier transform below). The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions. Definition The Hilbert transform of u can be thought of as the convolution of u(t) with the function h(t) = 1/(πt), known as the Cauchy kernel. Because 1/t is not integrable across t = 0, the integral defining the convolution does not always converge. Instead, the Hilbert transform is defined using the Cauchy principal value (denoted here by p.v.). Explicitly, the Hilbert transform of a function (or signal) u(t) is given by H(u)(t) = (1/π) p.v. ∫ u(τ)/(t − τ) dτ, with the integral taken over all real τ, provided this integral exists as a principal value. This is precisely the convolution of u with the tempered distribution p.v. 1/(πt). Alternatively, by changing variables, the principal-value integral can be written explicitly as H(u)(t) = (1/π) lim(ε→0) ∫ from ε to ∞ of (u(t − τ) − u(t + τ))/τ dτ. When the Hilbert transform is applied twice in succession to a function u, the result is H(H(u)) = −u, provided the integrals defining both iterations converge in a suitable sense. In particular, the inverse transform is −H. This fact can most easily be seen by considering the effect of the Hilbert transform on the Fourier transform of u(t) (see below). For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real part and the imaginary part of the boundary values. That is, if a function is analytic in the upp
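Assuming NumPy and SciPy are available, the ±90° phase-shift property can be checked numerically; scipy.signal.hilbert returns the analytic signal u + iH(u), so its imaginary part is the Hilbert transform, and the transform of a cosine comes out as the corresponding sine:

import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 1, 1000, endpoint=False)
u = np.cos(2 * np.pi * 5 * t)          # 5 Hz cosine, an exact number of periods
Hu = np.imag(hilbert(u))               # Hilbert transform, numerically ~ sin(2*pi*5*t)

print(np.allclose(Hu, np.sin(2 * np.pi * 5 * t), atol=1e-2))  # True: cosine shifted by -90 degrees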
https://en.wikipedia.org/wiki/YEnc
yEnc is a binary-to-text encoding scheme for transferring binary files in messages on Usenet or via e-mail. It reduces the overhead over previous US-ASCII-based encoding methods by using an 8-bit encoding method. yEnc's overhead is often (if each byte value appears approximately with the same frequency on average) as little as 1–2%, compared to 33–40% overhead for 6-bit encoding methods like uuencode and Base64. yEnc was initially developed by Jürgen Helbing, and its first release was early 2001. By 2003 yEnc became the de facto standard encoding system for binary files on Usenet. The name yEncode is a wordplay on "Why encode?", since the idea is to only encode characters if it is absolutely required to adhere to the message format standard. How yEnc works Usenet and email message bodies were intended to contain only ASCII characters ( or ). Most competing encodings represent binary files by converting them into printable ASCII characters, because the range of printable ASCII characters is supported by most operating systems. However, since this reduces the available character set considerably, there is significant overhead (wasted bandwidth) over 8bit-byte networks. For example, in uuencode and Base64, three bytes of data are encoded into four printable ASCII characters, which equals four bytes, a 33% overhead (not including the overhead from headers). yEnc uses one character (one byte) to represent one byte of the file, with a few exceptions. yEnc assumes that binary data mostly can be transmitted through Usenet and email. Therefore, 252 of the 256 possible bytes are passed through unencoded as a single byte, whether that result is a printable ASCII character or not. Only NUL, LF, CR, and = are escaped. LF and CR are escaped because the RFCs that define Internet messages still require that carriage returns and line feeds have special meaning in a mail message. = is the escape character, so it itself is escaped. NUL is also escaped because of problems handling n
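A minimal sketch of the per-byte rule described above (the standard yEnc scheme also shifts every byte by 42 modulo 256 before checking for the critical characters; the =ybegin/=yend headers, line wrapping and CRCs are omitted here):

CRITICAL = {0x00, 0x0A, 0x0D, 0x3D}    # NUL, LF, CR, '='

def yenc_encode(data: bytes) -> bytes:
    out = bytearray()
    for b in data:
        c = (b + 42) % 256             # shift every byte by 42
        if c in CRITICAL:
            out.append(0x3D)           # escape character '='
            c = (c + 64) % 256         # critical bytes get a further +64
        out.append(c)
    return bytes(out)

def yenc_decode(data: bytes) -> bytes:
    out = bytearray()
    it = iter(data)
    for c in it:
        if c == 0x3D:                  # an escaped byte follows
            c = (next(it) - 64) % 256
        out.append((c - 42) % 256)
    return bytes(out)

blob = bytes(range(256))
assert yenc_decode(yenc_encode(blob)) == blob   # round trip over all byte values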
https://en.wikipedia.org/wiki/Circular%20motion
In physics, circular motion is a movement of an object along the circumference of a circle or rotation along a circular arc. It can be uniform, with a constant rate of rotation and constant tangential speed, or non-uniform with a changing rate of rotation. The rotation around a fixed axis of a three-dimensional body involves the circular motion of its parts. The equations of motion describe the movement of the center of mass of a body, which remains at a constant distance from the axis of rotation. In circular motion, the distance between the body and a fixed point on its surface remains the same, i.e., the body is assumed rigid. Examples of circular motion include: special satellite orbits around the Earth (circular orbits), a ceiling fan's blades rotating around a hub, a stone that is tied to a rope and is being swung in circles, a car turning through a curve in a race track, an electron moving perpendicular to a uniform magnetic field, and a gear turning inside a mechanism. Since the object's velocity vector is constantly changing direction, the moving object is undergoing acceleration by a centripetal force in the direction of the center of rotation. Without this acceleration, the object would move in a straight line, according to Newton's laws of motion. Uniform circular motion In physics, uniform circular motion describes the motion of a body traversing a circular path at a constant speed. Since the body describes circular motion, its distance from the axis of rotation remains constant at all times. Though the body's speed is constant, its velocity is not constant: velocity, a vector quantity, depends on both the body's speed and its direction of travel. This changing velocity indicates the presence of an acceleration; this centripetal acceleration is of constant magnitude and directed at all times toward the axis of rotation. This acceleration is, in turn, produced by a centripetal force which is also constant in magnitude and directed toward the axis of
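For uniform circular motion the centripetal acceleration has magnitude a = v²/r = ω²r; a quick numerical check with arbitrary example values:

import math

r = 2.0                       # radius in metres (example value)
v = 3.0                       # constant speed in m/s (example value)
omega = v / r                 # angular velocity in rad/s

a = v**2 / r
print(a, omega**2 * r)        # 4.5 4.5 -- the two forms of the centripetal acceleration agree

T = 2 * math.pi * r / v       # period of one revolution
print(T)                      # ~4.19 s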
https://en.wikipedia.org/wiki/Feed%20forward%20%28control%29
A feed forward (sometimes written feedforward) is an element or pathway within a control system that passes a controlling signal from a source in its external environment to a load elsewhere in its external environment. This is often a command signal from an external operator. In mechanical engineering, a feedforward control system is a control system that uses sensors to detect disturbances affecting the machine and then applies an additional input to minimize the effect of the disturbance. This requires a mathematical model of the machine so that the effect of disturbances can be properly predicted. A control system which has only feed-forward behavior responds to its control signal in a pre-defined way without responding to the way the load reacts; it is in contrast with a system that also has feedback, which adjusts the input to take account of how it affects the load, and how the load itself may vary unpredictably; the load is considered to belong to the external environment of the system. In a feed-forward system, the control variable adjustment is not error-based. Instead it is based on knowledge about the process in the form of a mathematical model of the process and knowledge about, or measurements of, the process disturbances. Some prerequisites are needed for control scheme to be reliable by pure feed-forward without feedback: the external command or controlling signal must be available, and the effect of the output of the system on the load should be known (that usually means that the load must be predictably unchanging with time). Sometimes pure feed-forward control without feedback is called 'ballistic', because once a control signal has been sent, it cannot be further adjusted; any corrective adjustment must be by way of a new control signal. In contrast, 'cruise control' adjusts the output in response to the load that it encounters, by a feedback mechanism. These systems could relate to control theory, physiology, or computing. Overview With f
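A toy feed-forward controller, assuming a crude linear process model invented for this sketch: the control output is computed from the setpoint and the measured disturbance alone, never from the resulting error, so it is only as good as the model:

def feedforward_control(setpoint, disturbance, process_gain, disturbance_gain):
    """Control signal chosen so that, according to the model, the output hits the setpoint."""
    # Assumed model: output = process_gain * u + disturbance_gain * disturbance
    return (setpoint - disturbance_gain * disturbance) / process_gain

u = feedforward_control(setpoint=50.0, disturbance=10.0,
                        process_gain=2.0, disturbance_gain=1.0)
print(u)                     # 20.0
print(2.0 * u + 1.0 * 10.0)  # 50.0 -- on target, but only while the model holds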
https://en.wikipedia.org/wiki/Abstraction%20layer
In computing, an abstraction layer or abstraction level is a way of hiding the working details of a subsystem. Examples of software models that use layers of abstraction include the OSI model for network protocols, OpenGL, and other graphics libraries, which allow the separation of concerns to facilitate interoperability and platform independence. Another example is Media Transfer Protocol. In computer science, an abstraction layer is a generalization of a conceptual model or algorithm, away from any specific implementation. These generalizations arise from broad similarities that are best encapsulated by models that express similarities present in various specific implementations. The simplification provided by a good abstraction layer allows for easy reuse by distilling a useful concept or design pattern so that situations, where it may be accurately applied, can be quickly recognized. A layer is considered to be on top of another if it depends on it. Every layer can exist without the layers above it, and requires the layers below it to function. Frequently abstraction layers can be composed into a hierarchy of abstraction levels. The OSI model comprises seven abstraction layers. Each layer of the model encapsulates and addresses a different part of the needs of digital communications, thereby reducing the complexity of the associated engineering solutions. A famous aphorism of David Wheeler is, "All problems in computer science can be solved by another level of indirection." This is often deliberately misquoted with "abstraction" substituted for "indirection." It is also sometimes misattributed to Butler Lampson. Kevlin Henney's corollary to this is, "...except for the problem of too many layers of indirection." Computer architecture In a computer architecture, a computer system is usually represented as consisting of several abstraction levels such as: software programmable logic hardware Programmable logic is often considered part of the hardware, while
https://en.wikipedia.org/wiki/Keith%20Devlin
Keith James Devlin (born 16 March 1947) is a British mathematician and popular science writer. Since 1987 he has lived in the United States. He has dual British-American citizenship. Biography He was born and grew up in England, in Kingston upon Hull. There he attended a local primary school followed by Greatfield High School in Hull. In the last school year he was appointed head boy. Devlin earned a BSc (special) in mathematics at King's College London in 1968, and a PhD in mathematics at the University of Bristol in 1971 under the supervision of Frederick Rowbottom. Career Later he got a position as a scientific assistant in mathematics at the University of Oslo, Norway, from August till December 1972. In 1974 he became a scientific assistant in mathematics at the University of Heidelberg, Germany. In 1976 he was an assistant professor of mathematics at the University of Toronto, Canada. From 1977 till 1987 he served as a lecturer, then reader, in mathematics at the University of Lancaster, England. From 1987 to 1989 he was a visiting professor of mathematics at Stanford University in California. From 1989 to 1993 he was the Carter Professor of Mathematics and Chair of Department at Colby College in Maine. From 1993 to 2000 he was Dean of Science and a professor of mathematics at St. Mary's College of California. From 2001 until he retired he was a senior researcher at Stanford University. He is co-founder and executive director of Stanford University's Human-Sciences and Technologies Advanced Research Institute (2006), a co-founder of Stanford Media X university-industry research partnership program, and a senior researcher in the Center for the Study of Language and Information (CSLI). He is a commentator on National Public Radio's Weekend Edition Saturday, where he is known as "The Math Guy." His current research is mainly focused on the use of different media to teach mathematics to different audiences. He is also co-founder and president of the company B
https://en.wikipedia.org/wiki/Isolation%20transformer
An isolation transformer is a transformer used to transfer electrical power from a source of alternating current (AC) power to some equipment or device while isolating the powered device from the power source, usually for safety reasons or to reduce transients and harmonics. Isolation transformers provide galvanic isolation; no conductive path is present between source and load. This isolation is used to protect against electric shock, to suppress electrical noise in sensitive devices, or to transfer power between two circuits which must not be connected. A transformer sold for isolation is often built with special insulation between primary and secondary, and is specified to withstand a high voltage between windings. Isolation transformers block transmission of the DC component in signals from one circuit to the other, but allow AC components in signals to pass. Transformers that have a ratio of 1 to 1 between the primary and secondary windings are often used to protect secondary circuits and individuals from electrical shocks between energized conductors and earth ground. Suitably designed isolation transformers block interference caused by ground loops. Isolation transformers with electrostatic shields are used for power supplies for sensitive equipment such as computers, medical devices, or laboratory instruments. Some specifications require that Isolation transformers be a part of the lightning protection on the AC circuits. Terminology Sometimes the term is used to emphasize that a device is not an autotransformer whose primary and secondary circuits are connected. Power transformers with specified insulation between primary and secondary are not usually described only as "isolation transformers" unless this is their primary function. Only transformers whose primary purpose is to isolate circuits are routinely described as isolation transformers. Operation Isolation transformers are designed with attention to capacitive coupling between the two winding
https://en.wikipedia.org/wiki/Instruction%20cycle
The instruction cycle (also known as the fetch–decode–execute cycle, or simply the fetch-execute cycle) is the cycle that the central processing unit (CPU) follows from boot-up until the computer has shut down in order to process instructions. It is composed of three main stages: the fetch stage, the decode stage, and the execute stage. In simpler CPUs, the instruction cycle is executed sequentially, each instruction being processed before the next one is started. In most modern CPUs, the instruction cycles are instead executed concurrently, and often in parallel, through an instruction pipeline: the next instruction starts being processed before the previous instruction has finished, which is possible because the cycle is broken up into separate steps. Role of components The program counter (PC) is a special register that holds the memory address of the next instruction to be executed. During the fetch stage, the address stored in the PC is copied into the memory address register (MAR) and then the PC is incremented in order to "point" to the memory address of the next instruction to be executed. The CPU then takes the instruction at the memory address described by the MAR and copies it into the memory data register (MDR). The MDR also acts as a two-way register that holds data fetched from memory or data waiting to be stored in memory (it is also known as the memory buffer register (MBR) because of this). Eventually, the instruction in the MDR is copied into the current instruction register (CIR) which acts as a temporary holding ground for the instruction that has just been fetched from memory. During the decode stage, the control unit (CU) will decode the instruction in the CIR. The CU then sends signals to other components within the CPU, such as the arithmetic logic unit (ALU) and the floating point unit (FPU). The ALU performs arithmetic operations such as addition and subtraction and also multiplication via repeated addition and division via repeated sub
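The sequential fetch-decode-execute loop can be sketched for a made-up accumulator machine (the instruction set below is purely illustrative):

# Toy fetch-decode-execute loop; each instruction is an (opcode, operand) pair.
memory = [
    ("LOAD", 7),     # put 7 in the accumulator
    ("ADD", 5),      # accumulator += 5
    ("STORE", 10),   # memory[10] = accumulator
    ("HALT", 0),
] + [0] * 12         # the rest of memory holds data

pc = 0               # program counter
acc = 0              # accumulator
running = True

while running:
    instruction = memory[pc]        # fetch (a real CPU goes through MAR/MDR)
    pc += 1                         # point at the next instruction
    opcode, operand = instruction   # decode
    if opcode == "LOAD":            # execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        running = False

print(acc, memory[10])              # 12 12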
https://en.wikipedia.org/wiki/Casting%20out%20nines
Casting out nines is any of three arithmetical procedures: Adding the decimal digits of a positive whole number, while optionally ignoring any 9s or digits which sum to a multiple of 9. The result of this procedure is a number which is smaller than the original whenever the original has more than one digit, leaves the same remainder as the original after division by nine, and may be obtained from the original by subtracting a multiple of 9 from it. The name of the procedure derives from this latter property. Repeated application of this procedure to the results obtained from previous applications until a single-digit number is obtained. This single-digit number is called the "digital root" of the original. If a number is divisible by 9, its digital root is 9. Otherwise, its digital root is the remainder it leaves after being divided by 9. A sanity test in which the above-mentioned procedures are used to check for errors in arithmetical calculations. The test is carried out by applying the same sequence of arithmetical operations to the digital roots of the operands as are applied to the operands themselves. If no mistakes are made in the calculations, the digital roots of the two resultants will be the same. If they are different, therefore, one or more mistakes must have been made in the calculations. Digit sums To "cast out nines" from a single number, its decimal digits can be simply added together to obtain its so-called digit sum. The digit sum of 2946, for example is 2 + 9 + 4 + 6 = 21. Since 21 = 2946 − 325 × 9, the effect of taking the digit sum of 2946 is to "cast out" 325 lots of 9 from it. If the digit 9 is ignored when summing the digits, the effect is to "cast out" one more 9 to give the result 12. More generally, when casting out nines by summing digits, any set of digits which add up to 9, or a multiple of 9, can be ignored. In the number 3264, for example, the digits 3 and 6 sum to 9. Ignoring these two digits, therefore, and su
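A small sketch of the digital-root check, reusing the numbers 2946 and 3264 from the text: the digital root of a product must equal the digital root of the product of the digital roots, otherwise a mistake has been made somewhere:

def digital_root(n: int) -> int:
    """Repeatedly sum decimal digits until a single digit remains."""
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

a, b = 2946, 3264
product = a * b

check = digital_root(digital_root(a) * digital_root(b))
print(digital_root(a), digital_root(b))      # 3 6
print(digital_root(product), check)          # 9 9 -> consistent, no error detected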
https://en.wikipedia.org/wiki/Cheminformatics
Cheminformatics (also known as chemoinformatics) refers to the use of physical chemistry theory with computer and information science techniques—so called "in silico" techniques—in application to a range of descriptive and prescriptive problems in the field of chemistry, including in its applications to biology and related molecular fields. Such in silico techniques are used, for example, by pharmaceutical companies and in academic settings to aid and inform the process of drug discovery, for instance in the design of well-defined combinatorial libraries of synthetic compounds, or to assist in structure-based drug design. The methods can also be used in chemical and allied industries, and such fields as environmental science and pharmacology, where chemical processes are involved or studied. History Cheminformatics has been an active field in various guises since the 1970s and earlier, with activity in academic departments and commercial pharmaceutical research and development departments. The term chemoinformatics was defined in its application to drug discovery by F.K. Brown in 1998:Chemoinformatics is the mixing of those information resources to transform data into information and information into knowledge for the intended purpose of making better decisions faster in the area of drug lead identification and optimization. Since then, both terms, cheminformatics and chemoinformatics, have been used, although, lexicographically, cheminformatics appears to be more frequently used, despite academics in Europe declaring for the variant chemoinformatics in 2006. In 2009, a prominent Springer journal in the field was founded by transatlantic executive editors named the Journal of Cheminformatics. Background Cheminformatics combines the scientific working fields of chemistry, computer science, and information science—for example in the areas of topology, chemical graph theory, information retrieval and data mining in the chemical space. Cheminformatics can also be ap
https://en.wikipedia.org/wiki/Core%20rope%20memory
Core rope memory is a form of read-only memory (ROM) for computers, first used in the 1960s by early NASA Mars space probes and then in the Apollo Guidance Computer (AGC) and programmed by the Massachusetts Institute of Technology (MIT) Instrumentation Lab and built by Raytheon. Software written by MIT programmers was woven into core rope memory by female workers in factories. Some programmers nicknamed the finished product LOL memory, for Little Old Lady memory. Memory density By the standards of the time, a relatively large amount of data could be stored in a small installed volume of core rope memory: 72 kilobytes per cubic foot, or roughly 2.5 megabytes per cubic meter. This was about 18 times the amount of magnetic-core memory (within two cubic feet). References External links "Computer for Apollo" NASA/MIT film from 1965 which demonstrates how rope memory was manufactured. Visual Introduction to the Apollo Guidance Computer, part 3: Manufacturing the Apollo Guidance Computer. – By Raytheon; hosted by the Library of the California Institute of Technology's History of Recent Science & Technology site (originally hosted by the Dibner Institute) Computers in Spaceflight: The NASA Experience – By James Tomayko (Chapter 2, Part 5, "The Apollo guidance computer: Hardware") Brent Hilpert's Core Rope & Woven-Wire Memory Systems page has a detailed explanation of pulse-transformer and switching-core techniques. SV3ORA's Core rope memory: A practical guide of how to build your own gives a description, schematics and photos of a simple core rope memory board using the pulse transformer technique, including a demonstration of operation. Software woven into wire: Core rope and the Apollo Guidance Computer, extensive blog post by computer restoration expert Ken Shirriff Australian 'ropes' demonstrated at MIT Letter from Ramon L. Alonso to Gordon Rose, dated 10 December 1963: "We are finding the Australian ideas on ‘ropes' to be very fruitful indeed, and we are
https://en.wikipedia.org/wiki/Parametric%20equation
In mathematics, a parametric equation defines a group of quantities as functions of one or more independent variables called parameters. Parametric equations are commonly used to express the coordinates of the points that make up a geometric object such as a curve or surface, called a parametric curve and parametric surface, respectively. In such cases, the equations are collectively called a parametric representation, or parametric system, or parameterization (alternatively spelled as parametrisation) of the object. For example, the equations x = cos t, y = sin t form a parametric representation of the unit circle, where t is the parameter: A point (x, y) is on the unit circle if and only if there is a value of t such that these two equations generate that point. Sometimes the parametric equations for the individual scalar output variables are combined into a single parametric equation in vectors: (x, y) = (cos t, sin t). Parametric representations are generally nonunique (see the "Examples in two dimensions" section below), so the same quantities may be expressed by a number of different parameterizations. In addition to curves and surfaces, parametric equations can describe manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.). Parametric equations are commonly used in kinematics, where the trajectory of an object is represented by equations depending on time as the parameter. Because of this application, a single parameter is often labeled t; however, parameters can represent other physical quantities (such as geometric variables) or can be selected arbitrarily for convenience. Parameterizations are non-unique; more than one set of parametric equations can specify the same curve. Applications
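The unit-circle example above, written out as a small sketch: every point returned by the parametrization satisfies x² + y² = 1:

import math

def unit_circle(t):
    """Parametric representation of the unit circle: x = cos(t), y = sin(t)."""
    return (math.cos(t), math.sin(t))

for t in (0.0, math.pi / 2, math.pi):
    x, y = unit_circle(t)
    print(round(x, 6), round(y, 6), round(x**2 + y**2, 6))   # last column is always 1.0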
https://en.wikipedia.org/wiki/142857
The number 142,857 is a Kaprekar number. 142857, the six repeating digits of 1/7 (0.142857142857...), is the best-known cyclic number in base 10. If it is multiplied by 2, 3, 4, 5, or 6, the answer will be a cyclic permutation of itself, and will correspond to the repeating digits of 2/7, 3/7, 4/7, 5/7, or 6/7 respectively. Calculation 1 × 142,857 = 142,857 2 × 142,857 = 285,714 3 × 142,857 = 428,571 4 × 142,857 = 571,428 5 × 142,857 = 714,285 6 × 142,857 = 857,142 7 × 142,857 = 999,999 If multiplying by an integer greater than 7, there is a simple process to get to a cyclic permutation of 142857. By adding the rightmost six digits (ones through hundred thousands) to the remaining digits and repeating this process until only six digits are left, it will result in a cyclic permutation of 142857: 142857 × 8 = 1142856; 1 + 142856 = 142857. 142857 × 815 = 116428455; 116 + 428455 = 428571. 142857² = 142857 × 142857 = 20408122449; 20408 + 122449 = 142857. Multiplying by a multiple of 7 will result in 999999 through this process: 142857 × 7⁴ = 342999657; 342 + 999657 = 999999. If you square the last three digits and subtract the square of the first three digits, you also get back a cyclic permutation of the number: 857² = 734449, 142² = 20164, 734449 − 20164 = 714285. It is the repeating part in the decimal expansion of the rational number 1/7 = 0.142857142857.... Thus, multiples of 1/7 are simply repeated copies of the corresponding multiples of 142857: Connection to the enneagram The 142857 number sequence is used in the enneagram figure, a symbol of the Gurdjieff Work used to explain and visualize the dynamics of the interaction between the two great laws of the Universe (according to G. I. Gurdjieff), the Law of Three and the Law of Seven. The movement of the numbers of 142857 divided by 1/7, 2/7, etc., and the subsequent movement of the enneagram, are portrayed in Gurdjieff's sacred dances known as the movements. Other properties The 142857 number sequence is also found in several decimals in which the d
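The cyclic behaviour and the six-digit folding procedure described above are easy to verify computationally; the following Python sketch is an illustrative check, not part of the article.

```python
# Verify that 142857 * k (k = 1..6) is a cyclic permutation of 142857,
# and demonstrate the "fold the rightmost six digits into the rest" procedure.
N = 142857
CYCLE = "142857142857"  # every rotation of 142857 appears here as a substring

for k in range(1, 7):
    assert str(N * k) in CYCLE   # cyclic permutation of the original digits

def fold(n: int) -> int:
    """Repeatedly add the rightmost six digits to the remaining digits."""
    while n >= 10**6:
        n = n // 10**6 + n % 10**6
    return n

assert fold(N * 8) == 142857
assert fold(N * 815) == 428571
assert fold(N * N) == 142857
assert fold(N * 7**4) == 999999   # multiples of 7 fold to 999999
```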
https://en.wikipedia.org/wiki/Finite%20field%20arithmetic
In mathematics, finite field arithmetic is arithmetic in a finite field (a field containing a finite number of elements) contrary to arithmetic in a field with an infinite number of elements, like the field of rational numbers. There are infinitely many different finite fields. Their number of elements is necessarily of the form pn where p is a prime number and n is a positive integer, and two finite fields of the same size are isomorphic. The prime p is called the characteristic of the field, and the positive integer n is called the dimension of the field over its prime field. Finite fields are used in a variety of applications, including in classical coding theory in linear block codes such as BCH codes and Reed–Solomon error correction, in cryptography algorithms such as the Rijndael (AES) encryption algorithm, in tournament scheduling, and in the design of experiments. Effective polynomial representation The finite field with pn elements is denoted GF(pn) and is also called the Galois field of order pn, in honor of the founder of finite field theory, Évariste Galois. GF(p), where p is a prime number, is simply the ring of integers modulo p. That is, one can perform operations (addition, subtraction, multiplication) using the usual operation on integers, followed by reduction modulo p. For instance, in GF(5), is reduced to 2 modulo 5. Division is multiplication by the inverse modulo p, which may be computed using the extended Euclidean algorithm. A particular case is GF(2), where addition is exclusive OR (XOR) and multiplication is AND. Since the only invertible element is 1, division is the identity function. Elements of GF(pn) may be represented as polynomials of degree strictly less than n over GF(p). Operations are then performed modulo R where R is an irreducible polynomial of degree n over GF(p), for instance using polynomial long division. The addition of two polynomials P and Q is done as usual; multiplication may be done as follows: compute as usu
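As a concrete illustration of the two cases described above, the following Python sketch (illustrative only) performs arithmetic in GF(7) by reducing modulo 7, and multiplies two elements of GF(2⁸) as polynomials over GF(2) reduced modulo the irreducible polynomial x⁸ + x⁴ + x³ + x + 1 used by Rijndael/AES; the specific byte values chosen are just an example.

```python
# GF(p): ordinary integer arithmetic followed by reduction mod p.
p = 7
a, b = 5, 4
print((a + b) % p)           # addition in GF(7) -> 2
print((a * b) % p)           # multiplication in GF(7) -> 6
print(pow(b, p - 2, p))      # multiplicative inverse of 4 via Fermat's little theorem -> 2

# GF(2^8): elements are degree-<8 polynomials over GF(2), stored as bytes.
# Addition is XOR; multiplication is carry-less multiplication followed by
# reduction modulo the irreducible polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).
def gf256_mul(x: int, y: int, modulus: int = 0x11B) -> int:
    result = 0
    while y:
        if y & 1:
            result ^= x          # "add" the current shifted copy (XOR)
        y >>= 1
        x <<= 1
        if x & 0x100:            # degree reached 8: reduce modulo the polynomial
            x ^= modulus
    return result

print(hex(gf256_mul(0x53, 0xCA)))  # 0x1: these two bytes are multiplicative inverses in AES's field
```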
https://en.wikipedia.org/wiki/Environmental%20Audio%20Extensions
The Environmental Audio Extensions (or EAX) are a number of digital signal processing presets for audio, present in Creative Technology Sound Blaster sound cards starting with the Sound Blaster Live and the Creative NOMAD/Creative ZEN product lines. Due to the release of Windows Vista in 2007, which deprecated the DirectSound3D API that EAX was based on, Creative discouraged EAX implementation in favour of its OpenAL-based EFX equivalent – though at that point relatively few games used the API. Technology EAX is a library of extensions to Microsoft's DirectSound3D, itself an extension to DirectSound introduced with DirectX 3 in 1996 with the intention to standardize 3D audio for Microsoft Windows, adding environmental audio presets to DS3D's audio positioning. Ergo, the aim of EAX has nothing to do with 3D audio positioning, this is usually done by a sound library like DirectSound3D or OpenAL. Rather, EAX can be seen as a library of sound effects written and compiled to be executed on a DSP instead of the CPU, often called "hardware-accelerated". The aim of EAX was to create more ambiance within video games by more accurately simulating a real-world audio environment. Up to EAX 2.0, the technology was based around the effects engine aboard the E-mu 10K1 on Creative Technology's and the Maestro2 on ESS1968 chipset driven sound cards. The hardware accelerated effects engine is an E-mu FX8010 DSP integrated into the Creative Technology's audio chip and was historically used to enhance MIDI output by adding effects (such as reverb and chorus) to the sampled instruments on 'wavetable' sample-based synthesis cards (which is often confused with the "wavetable synthesis" developed by Wolfgang Palm of PPG and Michael Mcnabb in the late-1970s, however not related). A similar effects DSP was also present on Creative's cards back to the AWE 32. However, the EMU10K1's DSP was faster and more flexible and was able to produce not only MIDI output but also other outputs, includi
https://en.wikipedia.org/wiki/Durham%20tube
Durham tubes are used in microbiology to detect production of gas by microorganisms. They are simply smaller test tubes inserted upside down in another test tube so that they are freely movable. The culture medium to be tested is then added to the larger tube and sterilized, which also eliminates the initial air gap produced when the tube is inserted upside down. The culture medium typically contains a single substance to be tested with the organism, such as to determine whether an organism can ferment a particular carbohydrate. After inoculation and incubation, any gas that is produced will form a visible gas bubble inside the small tube. Litmus solution can also be added to the culture medium to give a visual representation of pH changes that occur during the production of gas. The method was first reported in 1898 by British microbiologist Herbert Durham. One limitation of the Durham tube is that it does not allow for precise determination of the type of gas produced within the inner tube, or measurement of the quantity of gas produced. However, Durham argued that quantitative measurements are of limited value because the culture solution will absorb some of the gas in unknown, variable proportions. Additionally, using Durham tubes to provide evidence of fermentation may fail to detect slow- or weakly-fermenting organisms if the resultant carbon dioxide diffuses back into the solution as quickly as it is formed, so a negative Durham tube test is not physiologically decisive. References Microbiology equipment
https://en.wikipedia.org/wiki/Bacteriological%20water%20analysis
Bacteriological water analysis is a method of analysing water to estimate the numbers of bacteria present and, if needed, to find out what sort of bacteria they are. It represents one aspect of water quality. It is a microbiological analytical procedure which uses samples of water and from these samples determines the concentration of bacteria. It is then possible to draw inferences about the suitability of the water for use from these concentrations. This process is used, for example, to routinely confirm that water is safe for human consumption or that bathing and recreational waters are safe to use. The interpretation and the action trigger levels for different waters vary depending on the use made of the water. Whilst very stringent levels apply to drinking water, more relaxed levels apply to marine bathing waters, where much lower volumes of water are expected to be ingested by users. Approach The common feature of all these routine screening procedures is that the primary analysis is for indicator organisms rather than the pathogens that might cause concern. Indicator organisms are bacteria such as non-specific coliforms, Escherichia coli and Pseudomonas aeruginosa that are very commonly found in the human or animal gut and which, if detected, may suggest the presence of sewage. Indicator organisms are used because even when a person is infected with a more pathogenic bacteria, they will still be excreting many millions times more indicator organisms than pathogens. It is therefore reasonable to surmise that if indicator organism levels are low, then pathogen levels will be very much lower or absent. Judgements as to suitability of water for use are based on very extensive precedents and relate to the probability of any sample population of bacteria being able to be infective at a reasonable statistical level of confidence. Analysis is usually performed using culture, biochemical and sometimes optical methods. When indicator organisms levels exceed pre-set
https://en.wikipedia.org/wiki/Port%20scanner
A port scanner is an application designed to probe a server or host for open ports. Such an application may be used by administrators to verify security policies of their networks and by attackers to identify network services running on a host and exploit vulnerabilities. A port scan or portscan is a process that sends client requests to a range of server port addresses on a host, with the goal of finding an active port; this is not a nefarious process in and of itself. The majority of uses of a port scan are not attacks, but rather simple probes to determine services available on a remote machine. To portsweep is to scan multiple hosts for a specific listening port. The latter is typically used to search for a specific service, for example, an SQL-based computer worm may portsweep looking for hosts listening on TCP port 1433. TCP/IP basics The design and operation of the Internet is based on the Internet Protocol Suite, commonly also called TCP/IP. In this system, network services are referenced using two components: a host address and a port number. There are 65535 distinct and usable port numbers, numbered 1..65535. (Port zero is not a usable port number.) Most services use one, or at most a limited range of, port numbers. Some port scanners scan only the most common port numbers, or ports most commonly associated with vulnerable services, on a given host. The result of a scan on a port is usually generalized into one of three categories: Open or Accepted: The host sent a reply indicating that a service is listening on the port. Closed or Denied or Not Listening: The host sent a reply indicating that connections will be denied to the port. Filtered, Dropped or Blocked: There was no reply from the host. Open ports present two vulnerabilities of which administrators must be wary: Security and stability concerns associated with the program responsible for delivering the service - Open ports. Security and stability concerns associated with the operating sy
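The probing described above can be sketched in a few lines of Python using ordinary TCP connection attempts; this is an illustrative example only (the host and port range shown are placeholders), and scanning hosts without authorization may be unlawful.

```python
import socket

def connect_scan(host: str, ports: range, timeout: float = 0.5) -> None:
    """Attempt a full TCP connection to each port and classify the result."""
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
            print(f"{port}: open")          # handshake completed -> a service is listening
        except ConnectionRefusedError:
            print(f"{port}: closed")        # host replied that the connection is denied
        except (socket.timeout, OSError):
            print(f"{port}: filtered")      # no reply (or unreachable) -> dropped/blocked
        finally:
            s.close()

# Hypothetical usage against a host you are authorized to test:
# connect_scan("127.0.0.1", range(20, 1025))
```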
https://en.wikipedia.org/wiki/2.5D
2.5D (two-and-a-half dimensional) perspective refers to gameplay or movement in a video game or virtual reality environment that is restricted to a two-dimensional (2D) plane with little or no access to a third dimension in a space that otherwise appears to be three-dimensional and is often simulated and rendered in a 3D digital environment. This is similar but different from pseudo-3D perspective (sometimes called three-quarter view when the environment is portrayed from an angled top-down perspective), which refers to 2D graphical projections and similar techniques used to cause images or scenes to simulate the appearance of being three-dimensional (3D) when in fact they are not. By contrast, games, spaces or perspectives that are simulated and rendered in 3D and used in 3D level design are said to be true 3D, and 2D rendered games made to appear as 2D without approximating a 3D image are said to be true 2D. Common in video games, 2.5D projections have also been useful in geographic visualization (GVIS) to help understand visual-cognitive spatial representations or 3D visualization. The terms three-quarter perspective and three-quarter view trace their origins to the three-quarter profile in portraiture and facial recognition, which depicts a person's face that is partway between a frontal view and a side view. Computer graphics Axonometric and oblique projection In axonometric projection and oblique projection, two forms of parallel projection, the viewpoint is rotated slightly to reveal other facets of the environment than what are visible in a top-down perspective or side view, thereby producing a three-dimensional effect. An object is "considered to be in an inclined position resulting in foreshortening of all three axes", and the image is a "representation on a single plane (as a drawing surface) of a three-dimensional object placed at an angle to the plane of projection." Lines perpendicular to the plane become points, lines parallel to the plane have
https://en.wikipedia.org/wiki/Speedometer
A speedometer or speed meter is a gauge that measures and displays the instantaneous speed of a vehicle. Now universally fitted to motor vehicles, they started to be available as options in the early 20th century, and as standard equipment from about 1910 onwards. Other vehicles may use devices analogous to the speedometer with different means of sensing speed, eg. boats use a pit log, while aircraft use an airspeed indicator. Charles Babbage is credited with creating an early type of a speedometer, which was usually fitted to locomotives. The electric speedometer was invented by the Croatian Josip Belušić in 1888 and was originally called a velocimeter. Operation The speedometer was originally patented by Josip Belušić (Giuseppe Bellussich) in 1888. He presented his invention at the 1889 Exposition Universelle in Paris. His invention had a pointer and a magnet, using electricity to work. German inventor Otto Schultze patented his version (which, like Belušić's, ran on eddy currents) on 7 October 1902. Mechanical Many speedometers use a rotating flexible cable driven by gearing linked to the vehicle's transmission. The early Volkswagen Beetle and many motorcycles, however, use a cable driven from a front wheel. Some early mechanical speedometers operated on the governor principle where a rotating weight acting against a spring moved further out as the speed increased, similar to the governor used on steam engines. This movement was transferred to the pointer to indicate speed. This was followed by the Chronometric speedometer where the distance traveled was measured over a precise interval of time (Some Smiths speedometers used 3/4 of a second) measured by an escapement. This was transferred to the speedometer pointer. The chronometric speedometer is tolerant of vibration and was used in motorcycles up to the 1970s. The electric speedometer was invented by the Croatian Josip Belušić in 1888, and was originally called a velocimeter. When the vehicle is in m
https://en.wikipedia.org/wiki/Congruence%20of%20squares
In number theory, a congruence of squares is a congruence commonly used in integer factorization algorithms. Derivation Given a positive integer n, Fermat's factorization method relies on finding numbers x and y satisfying the equality x² − y² = n. We can then factor n = x² − y² = (x + y)(x − y). This algorithm is slow in practice because we need to search many such numbers, and only a few satisfy the equation. However, n may also be factored if we can satisfy the weaker congruence of squares condition: x² ≡ y² (mod n), with x ≢ ±y (mod n). From here we easily deduce x² − y² ≡ 0 (mod n). This means that n divides the product (x + y)(x − y). Thus (x + y) and (x − y) each contain factors of n, but those factors can be trivial. In this case we need to find another x and y. Computing the greatest common divisors of (x + y, n) and of (x − y, n) will give us these factors; this can be done quickly using the Euclidean algorithm. Congruences of squares are extremely useful in integer factorization algorithms and are extensively used in, for example, the quadratic sieve, general number field sieve, continued fraction factorization, and Dixon's factorization. Conversely, because finding square roots modulo a composite number turns out to be probabilistic polynomial-time equivalent to factoring that number, any integer factorization algorithm can be used efficiently to identify a congruence of squares. Further generalizations It is also possible to use factor bases to help find congruences of squares more quickly. Instead of looking for x² ≡ y² (mod n) from the outset, we find many relations x² ≡ y (mod n) where the y have small prime factors, and try to multiply a few of these together to get a square on the right-hand side. Examples Factorize 35 We take n = 35 and find that 6² = 36 ≡ 1 = 1² (mod 35). We thus factor as gcd(6 − 1, 35) · gcd(6 + 1, 35) = 5 · 7 = 35. Factorize 1649 Using n = 1649, as an example of finding a congruence of squares built up from the products of non-squares (see Dixon's factorization method), first we obtain several congruences; of these, two have only small primes as factors and a combination of these has an even
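A minimal Python sketch of the idea (illustrative only, using a brute-force search rather than a sieve) reproduces the n = 35 example above: it looks for x² ≡ y² (mod n) with x ≢ ±y (mod n) and then extracts factors with the Euclidean algorithm.

```python
from math import gcd, isqrt

def factor_by_congruence(n: int):
    """Brute-force search for a congruence of squares x^2 = y^2 (mod n)
    with x != +/-y (mod n), then split n via gcd(x - y, n)."""
    for x in range(isqrt(n) + 1, n):
        y2 = x * x % n
        y = isqrt(y2)
        if y * y == y2 and x % n not in (y % n, (-y) % n):
            d = gcd(x - y, n)
            if 1 < d < n:
                return d, n // d
    return None

print(factor_by_congruence(35))    # (5, 7), coming from 6^2 = 36 = 1^2 (mod 35)
```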
https://en.wikipedia.org/wiki/Binary%20decision%20diagram
In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, operations are performed directly on the compressed representation, i.e. without decompression. Similar data structures include negation normal form (NNF), Zhegalkin polynomials, and propositional directed acyclic graphs (PDAG). Definition A Boolean function can be represented as a rooted, directed, acyclic graph, which consists of several (decision) nodes and two terminal nodes. The two terminal nodes are labeled 0 (FALSE) and 1 (TRUE). Each (decision) node is labeled by a Boolean variable and has two child nodes called low child and high child. The edge from node to a low (or high) child represents an assignment of the value FALSE (or TRUE, respectively) to variable . Such a BDD is called 'ordered' if different variables appear in the same order on all paths from the root. A BDD is said to be 'reduced' if the following two rules have been applied to its graph: Merge any isomorphic subgraphs. Eliminate any node whose two children are isomorphic. In popular usage, the term BDD almost always refers to Reduced Ordered Binary Decision Diagram (ROBDD in the literature, used when the ordering and reduction aspects need to be emphasized). The advantage of an ROBDD is that it is canonical (unique) for a particular function and variable order. This property makes it useful in functional equivalence checking and other operations like functional technology mapping. A path from the root node to the 1-terminal represents a (possibly partial) variable assignment for which the represented Boolean function is true. As the path descends to a low (or high) child from a node, then that node's variable is assigned to 0 (respectively 1). Example The left figure below shows a binary decision tree
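The two reduction rules above can be sketched compactly in Python: a "unique table" shares structurally identical subgraphs, and nodes whose two children coincide are dropped. This is an illustrative toy (names and representation are assumptions), not a full ROBDD package.

```python
# Toy reduced-BDD construction: terminals are the Booleans False and True,
# decision nodes are tuples (variable, low_child, high_child).
unique = {}   # unique table: shares structurally identical nodes (reduction rule 1)

def mk(var, low, high):
    """Return a reduced node for 'if var is FALSE go to low, else go to high'."""
    if low == high:                      # reduction rule 2: redundant test, drop the node
        return low
    return unique.setdefault((var, low, high), (var, low, high))

# Ordered BDD for f(x1, x2) = x1 AND x2, with variable order x1 < x2:
x2_node = mk("x2", False, True)
root = mk("x1", False, x2_node)
print(root)                              # ('x1', False, ('x2', False, True))
```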
https://en.wikipedia.org/wiki/Quadtree
A quadtree is a tree data structure in which each internal node has exactly four children. Quadtrees are the two-dimensional analog of octrees and are most often used to partition a two-dimensional space by recursively subdividing it into four quadrants or regions. The data associated with a leaf cell varies by application, but the leaf cell represents a "unit of interesting spatial information". The subdivided regions may be square or rectangular, or may have arbitrary shapes. This data structure was named a quadtree by Raphael Finkel and J.L. Bentley in 1974. A similar partitioning is also known as a Q-tree. All forms of quadtrees share some common features: They decompose space into adaptable cells. Each cell (or bucket) has a maximum capacity. When maximum capacity is reached, the bucket splits. The tree directory follows the spatial decomposition of the quadtree. A tree-pyramid (T-pyramid) is a "complete" tree; every node of the T-pyramid has four child nodes except leaf nodes; all leaves are on the same level, the level that corresponds to individual pixels in the image. The data in a tree-pyramid can be stored compactly in an array as an implicit data structure similar to the way a complete binary tree can be stored compactly in an array. Types Quadtrees may be classified according to the type of data they represent, including areas, points, lines and curves. Quadtrees may also be classified by whether the shape of the tree is independent of the order in which data is processed. The following are common types of quadtrees. Region quadtree The region quadtree represents a partition of space in two dimensions by decomposing the region into four equal quadrants, subquadrants, and so on with each leaf node containing data corresponding to a specific subregion. Each node in the tree either has exactly four children, or has no children (a leaf node). The height of quadtrees that follow this decomposition strategy (i.e. subdividing subquadrants as long as t
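A point-quadtree flavour of this idea can be sketched in Python: each cell holds up to a fixed number of points and splits into four quadrants once its capacity is exceeded. The class name and the capacity value are illustrative assumptions, not taken from the article.

```python
class Quadtree:
    """Toy point quadtree: a square cell that splits into 4 children when full."""
    CAPACITY = 4

    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # lower-left corner and side length
        self.points = []
        self.children = None                      # four sub-cells once split

    def insert(self, px, py):
        if self.children is None:
            self.points.append((px, py))
            if len(self.points) > self.CAPACITY:  # bucket full -> split
                self._split()
            return
        self._child_for(px, py).insert(px, py)

    def _split(self):
        h = self.size / 2
        self.children = [Quadtree(self.x + dx, self.y + dy, h)
                         for dx in (0, h) for dy in (0, h)]
        pts, self.points = self.points, []
        for px, py in pts:                        # redistribute existing points
            self._child_for(px, py).insert(px, py)

    def _child_for(self, px, py):
        h = self.size / 2
        index = 2 * (px >= self.x + h) + (py >= self.y + h)
        return self.children[index]

qt = Quadtree(0, 0, 100)
for p in [(10, 10), (80, 20), (55, 70), (30, 90), (60, 60)]:
    qt.insert(*p)
```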
https://en.wikipedia.org/wiki/Magnitude%20%28mathematics%29
In mathematics, the magnitude or size of a mathematical object is a property which determines whether the object is larger or smaller than other objects of the same kind. More formally, an object's magnitude is the displayed result of an ordering (or ranking) of the class of objects to which it belongs. In physics, magnitude can be defined as quantity or distance. History The Greeks distinguished between several types of magnitude, including: Positive fractions Line segments (ordered by length) Plane figures (ordered by area) Solids (ordered by volume) Angles (ordered by angular magnitude) They proved that the first two could not be the same, or even isomorphic systems of magnitude. They did not consider negative magnitudes to be meaningful, and magnitude is still primarily used in contexts in which zero is either the smallest size or less than all possible sizes. Numbers The magnitude of any number is usually called its absolute value or modulus, denoted by . Real numbers The absolute value of a real number r is defined by: Absolute value may also be thought of as the number's distance from zero on the real number line. For example, the absolute value of both 70 and −70 is 70. Complex numbers A complex number z may be viewed as the position of a point P in a 2-dimensional space, called the complex plane. The absolute value (or modulus) of z may be thought of as the distance of P from the origin of that space. The formula for the absolute value of is similar to that for the Euclidean norm of a vector in a 2-dimensional Euclidean space: where the real numbers a and b are the real part and the imaginary part of z, respectively. For instance, the modulus of is . Alternatively, the magnitude of a complex number z may be defined as the square root of the product of itself and its complex conjugate, , where for any complex number , its complex conjugate is . (where ). Vector spaces Euclidean vector space A Euclidean vector represents the p
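In Python the same notion is exposed directly by the built-in abs(), which returns the absolute value for real numbers and the modulus √(a² + b²) for complex numbers; the snippet below is an illustrative check of the definitions above.

```python
import math

print(abs(70), abs(-70))        # 70 70 : distance from zero on the real number line

z = -3 + 4j                     # real part a = -3, imaginary part b = 4
print(abs(z))                   # 5.0 : sqrt(a^2 + b^2)
print(math.sqrt((z * z.conjugate()).real))   # 5.0 : sqrt(z times its complex conjugate)
```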
https://en.wikipedia.org/wiki/Banach%E2%80%93Alaoglu%20theorem
In functional analysis and related branches of mathematics, the Banach–Alaoglu theorem (also known as Alaoglu's theorem) states that the closed unit ball of the dual space of a normed vector space is compact in the weak* topology. A common proof identifies the unit ball with the weak-* topology as a closed subset of a product of compact sets with the product topology. As a consequence of Tychonoff's theorem, this product, and hence the unit ball within, is compact. This theorem has applications in physics when one describes the set of states of an algebra of observables, namely that any state can be written as a convex linear combination of so-called pure states. History According to Lawrence Narici and Edward Beckenstein, the Alaoglu theorem is a “very important result—maybe most important fact about the weak-* topology—[that] echos throughout functional analysis.” In 1912, Helly proved that the unit ball of the continuous dual space of is countably weak-* compact. In 1932, Stefan Banach proved that the closed unit ball in the continuous dual space of any separable normed space is sequentially weak-* compact (Banach only considered sequential compactness). The proof for the general case was published in 1940 by the mathematician Leonidas Alaoglu. According to Pietsch [2007], there are at least twelve mathematicians who can lay claim to this theorem or an important predecessor to it. The Bourbaki–Alaoglu theorem is a generalization of the original theorem by Bourbaki to dual topologies on locally convex spaces. This theorem is also called the Banach–Alaoglu theorem or the weak-* compactness theorem and it is commonly called simply the Alaoglu theorem. Statement If is a vector space over the field then will denote the algebraic dual space of and these two spaces are henceforth associated with the bilinear defined by where the triple forms a dual system called the . If is a topological vector space (TVS) then its continuous dual space will
https://en.wikipedia.org/wiki/Center%20frequency
In electrical engineering and telecommunications, the center frequency of a filter or channel is a measure of a central frequency between the upper and lower cutoff frequencies. It is usually defined as either the arithmetic mean or the geometric mean of the lower cutoff frequency and the upper cutoff frequency of a band-pass system or a band-stop system. Typically, the geometric mean is used in systems based on certain transformations of lowpass filter designs, where the frequency response is constructed to be symmetric on a logarithmic frequency scale. The geometric center frequency corresponds to a mapping of the DC response of the prototype lowpass filter, which is a resonant frequency sometimes equal to the peak frequency of such systems, for example as in a Butterworth filter. The arithmetic definition is used in more general situations, such as in describing passband telecommunication systems, where filters are not necessarily symmetric but are treated on a linear frequency scale for applications such as frequency-division multiplexing. References External links Calculations and comparisons between the geometric mean and the arithmetic mean Electrical engineering Telecommunication theory Frequency-domain analysis
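The two definitions can be compared numerically; the following Python snippet (with hypothetical band edges) computes both the arithmetic and the geometric center frequency of a passband.

```python
import math

f_low, f_high = 300.0, 3400.0                  # hypothetical band edges in Hz

arithmetic_center = (f_low + f_high) / 2       # 1850.0 Hz
geometric_center = math.sqrt(f_low * f_high)   # ~1009.95 Hz

print(arithmetic_center, geometric_center)
```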
https://en.wikipedia.org/wiki/Simple%20algebra%20%28universal%20algebra%29
In universal algebra, an abstract algebra A is called simple if and only if it has no nontrivial congruence relations, or equivalently, if every homomorphism with domain A is either injective or constant. As congruences on rings are characterized by their ideals, this notion is a straightforward generalization of the notion from ring theory: a ring is simple in the sense that it has no nontrivial ideals if and only if it is simple in the sense of universal algebra. The same remark applies with respect to groups and normal subgroups; hence the universal notion is also a generalization of a simple group (it is a matter of convention whether a one-element algebra should be or should not be considered simple, hence only in this special case the notions might not match). A theorem by Roberto Magari in 1969 asserts that every variety contains a simple algebra. See also simple group simple ring central simple algebra References Algebras Ring theory
https://en.wikipedia.org/wiki/Roofing%20filter
A roofing filter is a type of filter used in a HF radio receiver that limits the passband in the early stages of the receiver electronics. It blocks strong signals outside the receive channel which can overload following amplifier and mixer stages. Purpose The roofing filter is usually found after the first receiver mixer (which normally contains an amplifier) to limit the first intermediate frequency (IF) stage's passband. It prevents overloading later amplifier stages, which would cause nonlinearity ("distortion") or clipping ("buzz") even if the overload occurred on frequencies whose signal is not heard directly. Roofing filters are usually crystal or ceramic filter types, with a passband for general purpose shortwave radio reception of about 6–20 kHz (for AM–NFM). The receiver's bandwidth is not determined by the roofing filter passband, but instead by a follow-on crystal filter, mechanical filter, or DSP filter, all of which allow a much tighter filtering curve than a typical roofing filter. For more demanding uses like listening to weak CW or SSB signals, a roofing filter is required that gives a smaller passband appropriate to the mode of the received signal. It is often used at a high first IF stage above 40 MHz, with passband widths of 250 Hz, 500 Hz (for CW), or 1.8 kHz (for SSB). These narrow filters require that the receiver uses a first IF well below VHF range, perhaps 9 or 11 MHz. See also Bandpass filter – category that includes roofing filters Preselector – an external device that serves a similar function References Radio electronics Radio technology Receiver (radio) Wireless tuning and filtering
https://en.wikipedia.org/wiki/Nurture
Nurture is usually defined as the process of caring for an organism as it grows, usually a human. It is often used in debates as the opposite of "nature", whereby nurture means the process of replicating learned cultural information from one mind to another, and nature means the replication of genetic, non-learned behavior. Nurture is important in the nature versus nurture debate, as some people see either nature or nurture as the primary origin of most of humanity's behaviours. There are many agents of socialization that are responsible, in some respects, for the outcome of a child's personality, behaviour, thoughts, social and emotional skills, feelings, and mental priorities. References Notes Ecology Virtue Psychology Nature
https://en.wikipedia.org/wiki/John%20Innes%20Centre
The John Innes Centre (JIC), located in Norwich, Norfolk, England, is an independent centre for research and training in plant and microbial science founded in 1910. It is a registered charity (No 223852) grant-aided by the Biotechnology and Biological Sciences Research Council (BBSRC), the European Research Council (ERC) and the Bill and Melinda Gates Foundation and is a member of the Norwich Research Park. In 2017, the John Innes Centre was awarded a gold Athena SWAN Charter award for equality in the workplace. History The John Innes Horticultural Institution was founded in 1910 at Merton Park, Surrey (now London Borough of Merton), with funds bequeathed by John Innes, a merchant and philanthropist. The Institution occupied Innes's former estate at Merton Park, Surrey until 1945 when it moved to Bayfordbury, Hertfordshire. It moved to its present site in 1967. In 1910, William Bateson became the first director of the John Innes Horticultural Institution and moved with his family to Merton Park. John Innes compost was developed by the institution in the 1930s, who donated the recipe to the "Dig for Victory" war effort. The John Innes Centre has never sold John Innes compost. During the 1980s, the administration of the John Innes Institute was combined with that of the Plant Breeding Institute (formerly at Trumpington, Cambridgeshire) and the Nitrogen Fixation Laboratory. In 1994, following the relocation of the operations of other two organisations to the Norwich site, the three were merged as the John Innes Centre. As of 2011 the institute was divided into six departments: Biological Chemistry, Cell & Developmental Biology, Computational & Systems Biology, Crop Genetics, Metabolic Biology and Molecular Microbiology. The John Innes Centre has a tradition of training PhD students and post-docs. PhD degrees obtained via the John Innes Centre are awarded by the University of East Anglia. The John Innes Centre has a contingent of postdoctoral researchers, many of
https://en.wikipedia.org/wiki/Service%20life
A product's service life is its period of use in service. Several related terms describe more precisely a product's life, from the point of manufacture, storage, and distribution, and eventual use. Service life has been defined as "a product's total life in use from the point of sale to the point of discard" and distinguished from replacement life, "the period after which the initial purchaser returns to the shop for a replacement". Determining a product's expected service life as part of business policy (product life cycle management) involves using tools and calculations from maintainability and reliability analysis. Service life represents a commitment made by the item's manufacturer and is usually specified as a median. It is the time that any manufactured item can be expected to be "serviceable" or supported by its manufacturer. Service life is not to be confused with shelf life, which deals with storage time, or with technical life, which is the maximum period during which it can physically function. Service life also differs from predicted life, in terms of mean time before failure (MTBF) or maintenance-free operating period (MFOP). Predicted life is useful such that a manufacturer may estimate, by hypothetical modeling and calculation, a general rule for which it will honor warranty claims, or planning for mission fulfillment. The difference between service life and predicted life is most clear when considering mission time and reliability in comparison to MTBF and service life. For example, a missile system can have a mission time of less than one minute, service life of 20 years, active MTBF of 20 minutes, dormant MTBF of 50 years, and reliability of 99.9999%. Consumers will have different expectations about service life and longevity based upon factors such as use, cost, and quality. Product strategy Manufacturers will commit to very conservative service life, usually 2 to 5 years for most commercial and consumer products (for example computer periph
https://en.wikipedia.org/wiki/Key%20exchange
Key exchange (also key establishment) is a method in cryptography by which cryptographic keys are exchanged between two parties, allowing use of a cryptographic algorithm. If the sender and receiver wish to exchange encrypted messages, each must be equipped to encrypt messages to be sent and decrypt messages received. The nature of the equipping they require depends on the encryption technique they might use. If they use a code, both will require a copy of the same codebook. If they use a cipher, they will need appropriate keys. If the cipher is a symmetric key cipher, both will need a copy of the same key. If it is an asymmetric key cipher with the public/private key property, both will need the other's public key. Channel of exchange Key exchange is done either in-band or out-of-band. The key exchange problem The key exchange problem describes ways to exchange whatever keys or other information are needed for establishing a secure communication channel so that no one else can obtain a copy. Historically, before the invention of public-key cryptography (asymmetrical cryptography), symmetric-key cryptography utilized a single key to encrypt and decrypt messages. For two parties to communicate confidentially, they must first exchange the secret key so that each party is able to encrypt messages before sending, and decrypt received ones. This process is known as the key exchange. The overarching problem with symmetrical cryptography, or single-key cryptography, is that it requires a secret key to be communicated through trusted couriers, diplomatic bags, or any other secure communication channel. If two parties cannot establish a secure initial key exchange, they won't be able to communicate securely without the risk of messages being intercepted and decrypted by a third party who acquired the key during the initial key exchange. Public-key cryptography uses a two-key system, consisting of the public and the private keys, where messages are encrypted with one key
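One classic answer to the key-exchange problem, the Diffie–Hellman method, can be sketched with toy numbers in Python; the parameters below are far too small to be secure and are purely illustrative, not drawn from the article.

```python
# Toy Diffie-Hellman exchange (illustrative only; real systems use large,
# carefully chosen parameters and authenticated exchanges).
p, g = 23, 5                 # public modulus and generator

a = 6                        # Alice's private key
b = 15                       # Bob's private key

A = pow(g, a, p)             # Alice sends g^a mod p = 8
B = pow(g, b, p)             # Bob sends   g^b mod p = 19

shared_alice = pow(B, a, p)  # (g^b)^a mod p
shared_bob = pow(A, b, p)    # (g^a)^b mod p
assert shared_alice == shared_bob == 2   # both sides derive the same secret
```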
https://en.wikipedia.org/wiki/Audio%20description
Audio description, (AD) also referred to as a video description, described video, or more precisely visual description, is a form of narration used to provide information surrounding key visual elements in a media work (such as a film or television program, or theatrical performance) for the benefit of blind and visually impaired consumers. These narrations are typically placed during natural pauses in the audio, and sometimes overlap dialogue if deemed necessary. Occasionally when a film briefly has subtitled dialogue in a different language, such as Greedo's confrontation with Han Solo in the 1977 film Star Wars: A New Hope, the narrator will read out the dialogue in character. In museums or visual art exhibitions, audio described tours (or universally designed tours that include description or the augmentation of existing recorded programs on audio- or videotape), are used to provide access to visitors who are blind or have low vision. Docents or tour guides can be trained to employ audio description in their presentations. In film and television, description is typically delivered via a secondary audio track. In North America, Second audio program (SAP) is typically used to deliver audio description by television broadcasters. To promote accessibility, some countries (such as Canada and the United States) have implemented requirements for broadcasters to air specific quotas of programming containing audio description. History The transition to "talkies" in the late 1920s resulted in a push to make the cinema accessible to the visually impaired. The New York Times documented the "first talking picture ever shown especially for the blind"—a 1929 screening of Bulldog Drummond attended by members of the New York Association for the Blind and New York League for the Hard of Hearing, which offered a live description for the visually-impaired portion of the audience. In the 1940s and 1950s, Radio Nacional de España aired live audio simulcasts of films from cinemas
https://en.wikipedia.org/wiki/Future%20value
Future value is the value of an asset at a specific date. It measures the nominal future sum of money that a given sum of money is "worth" at a specified time in the future assuming a certain interest rate, or more generally, rate of return; it is the present value multiplied by the accumulation function. The value does not include corrections for inflation or other factors that affect the true value of money in the future. This is used in time value of money calculations. Overview Money value fluctuates over time: $100 today has a different value than $100 in five years. This is because one can invest $100 today in an interest-bearing bank account or any other investment, and that money will grow/shrink due to the rate of return. Also, if $100 today allows the purchase of an item, it is possible that $100 will not be enough to purchase the same item in five years, because of inflation (increase in purchase price). An investor who has some money has two options: to spend it right now or to invest it. The financial compensation for saving it (and not spending it) is that the money value will accrue through the interest that they will receive from a borrower (the bank account on which they have the money deposited). Therefore, to evaluate the real worthiness of an amount of money today after a given period of time, economic agents compound the amount of money at a given interest rate. Most actuarial calculations use the risk-free interest rate, which corresponds to the minimum guaranteed rate provided by a bank's savings account, for example. If one wants to compare their change in purchasing power, then they should use the real interest rate (nominal interest rate minus inflation rate). The operation of evaluating a present value into the future value is called capitalization (how much will $100 today be worth in 5 years?). The reverse operation, which consists in evaluating the present value of a future amount of money, is called discounting (how much $100 that will be r
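The capitalization operation described above follows the compound-interest accumulation function FV = PV·(1 + r)ⁿ; the Python sketch below (with an illustrative 5% rate) answers the question posed in the text.

```python
def future_value(present_value: float, rate: float, periods: int) -> float:
    """Future value under periodic compounding: FV = PV * (1 + r)^n."""
    return present_value * (1 + rate) ** periods

# "How much will $100 today be worth in 5 years?" at a hypothetical 5% annual rate:
print(round(future_value(100, 0.05, 5), 2))   # 127.63

# The reverse operation (discounting) divides by the same accumulation factor:
print(round(127.63 / (1 + 0.05) ** 5, 2))     # ~100.0
```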
https://en.wikipedia.org/wiki/Loop%20invariant
In computer science, a loop invariant is a property of a program loop that is true before (and after) each iteration. It is a logical assertion, sometimes checked with a code assertion. Knowing its invariant(s) is essential in understanding the effect of a loop. In formal program verification, particularly the Floyd-Hoare approach, loop invariants are expressed by formal predicate logic and used to prove properties of loops and by extension algorithms that employ loops (usually correctness properties). The loop invariants will be true on entry into a loop and following each iteration, so that on exit from the loop both the loop invariants and the loop termination condition can be guaranteed. From a programming methodology viewpoint, the loop invariant can be viewed as a more abstract specification of the loop, which characterizes the deeper purpose of the loop beyond the details of this implementation. A survey article covers fundamental algorithms from many areas of computer science (searching, sorting, optimization, arithmetic etc.), characterizing each of them from the viewpoint of its invariant. Because of the similarity of loops and recursive programs, proving partial correctness of loops with invariants is very similar to proving the correctness of recursive programs via induction. In fact, the loop invariant is often the same as the inductive hypothesis to be proved for a recursive program equivalent to a given loop. Informal example The following C subroutine max() returns the maximum value in its argument array a[], provided its length n is at least 1. Comments are provided at lines 3, 6, 9, 11, and 13. Each comment makes an assertion about the values of one or more variables at that stage of the function. The highlighted assertions within the loop body, at the beginning and end of the loop (lines 6 and 11), are exactly the same. They thus describe an invariant property of the loop. When line 13 is reached, this invariant still holds, and it is known
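The C routine referred to above is not reproduced in this excerpt; the following Python analogue (an illustrative reconstruction, not the article's code) makes the invariant explicit with an assertion that holds before and after every iteration.

```python
def max_value(a):
    """Return the maximum of a non-empty sequence, with the loop invariant asserted."""
    assert len(a) >= 1
    m = a[0]
    i = 1
    while i < len(a):
        # Loop invariant on entry: m is the maximum of a[0:i]
        assert m == max(a[:i])
        if a[i] > m:
            m = a[i]
        i += 1
        # The invariant is re-established at the end of each iteration
        assert m == max(a[:i])
    # On exit i == len(a), so the invariant gives: m == max(a)
    return m

print(max_value([3, 1, 4, 1, 5, 9, 2, 6]))   # 9
```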
https://en.wikipedia.org/wiki/Theory%20of%20Games%20and%20Economic%20Behavior
Theory of Games and Economic Behavior, published in 1944 by Princeton University Press, is a book by mathematician John von Neumann and economist Oskar Morgenstern which is considered the groundbreaking text that created the interdisciplinary research field of game theory. In the introduction of its 60th anniversary commemorative edition from the Princeton University Press, the book is described as "the classic work upon which modern-day game theory is based." Overview The book is based partly on earlier research by von Neumann, published in 1928 under the German title "Zur Theorie der Gesellschaftsspiele" ("On the Theory of Board Games"). The derivation of expected utility from its axioms appeared in an appendix to the Second Edition (1947). Von Neumann and Morgenstern used objective probabilities, supposing that all the agents had the same probability distribution, as a convenience. However, Neumann and Morgenstern mentioned that a theory of subjective probability could be provided, and this task was completed by Jimmie Savage in 1954 and Johann Pfanzagl in 1967. Savage extended von Neumann and Morgenstern's axioms of rational preferences to endogenize probability and make it subjective. He then used Bayes' theorem to update these subject probabilities in light of new information, thus linking rational choice and inference. See also Commemorative edition of the book Theory of Games and Economic Behavior References External links Theory of Games and Economic Behavior, full text at archive.org (public domain) 1944 non-fiction books Economics books Books about game theory Political science books Sociology books 1944 in economics John von Neumann Princeton University Press books Collaborative non-fiction books
https://en.wikipedia.org/wiki/Frequency%20counter
A frequency counter is an electronic instrument, or component of one, that is used for measuring frequency. Frequency counters usually measure the number of cycles of oscillation or pulses per second in a periodic electronic signal. Such an instrument is sometimes called a cymometer, particularly one of Chinese manufacture. Operating principle Most frequency counters work by using a counter, which accumulates the number of events occurring within a specific period of time. After a preset period known as the gate time (1 second, for example), the value in the counter is transferred to a display, and the counter is reset to zero. If the event being measured repeats itself with sufficient stability and the frequency is considerably lower than that of the clock oscillator being used, the resolution of the measurement can be greatly improved by measuring the time required for an entire number of cycles, rather than counting the number of entire cycles observed for a pre-set duration (often referred to as the reciprocal technique). The internal oscillator, which provides the time signals, is called the timebase, and must be calibrated very accurately. If the event to be counted is already in electronic form, simple interfacing with the instrument is all that is required. More complex signals may need some conditioning to make them suitable for counting. Most general-purpose frequency counters will include some form of amplifier, filtering, and shaping circuitry at the input. DSP technology, sensitivity control and hysteresis are other techniques to improve performance. Other types of periodic events that are not inherently electronic in nature will need to be converted using some form of transducer. For example, a mechanical event could be arranged to interrupt a light beam, and the counter made to count the resulting pulses. Frequency counters designed for radio frequencies (RF) are also common and operate on the same principles as lower frequency counters. Often, th
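The difference between direct counting over a fixed gate time and the reciprocal technique can be illustrated numerically in Python; all values below are hypothetical.

```python
# Direct counting: count input cycles during a fixed gate time.
gate_time = 1.0                 # seconds
counted_cycles = 50             # cycles seen during the gate (hypothetical ~50 Hz input)
f_direct = counted_cycles / gate_time
# Resolution is +/- 1 count, i.e. +/- 1 Hz here -- coarse for low frequencies.

# Reciprocal technique: time a whole number of input cycles against a fast timebase.
timebase = 10e6                 # 10 MHz reference clock (hypothetical)
cycles_timed = 50
timebase_ticks = 10_000_123     # timebase ticks counted while those 50 cycles elapsed
f_reciprocal = cycles_timed / (timebase_ticks / timebase)

print(f_direct, f_reciprocal)   # 50.0 vs. ~49.99938 -- far finer resolution
```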
https://en.wikipedia.org/wiki/Transition-minimized%20differential%20signaling
Transition-minimized differential signaling (TMDS) is a technology for transmitting high-speed serial data used by the DVI and HDMI video interfaces, as well as by other digital communication interfaces. The transmitter incorporates an advanced coding algorithm which reduces electromagnetic interference over copper cables and enables robust clock recovery at the receiver to achieve high skew tolerance for driving longer cables as well as shorter low-cost cables. Coding The method is a form of 8b/10b encoding but using a code-set that differs from the original IBM form. A two-stage process converts an input of 8 bits into a 10 bit code with particular desirable properties. In the first stage, the first bit is untransformed and each subsequent bit is either XOR or XNOR transformed against the previous bit. The encoder chooses between XOR and XNOR by determining which will result in the fewest transitions; the ninth bit encodes which operation was used. In the second stage, the first eight bits are optionally inverted to even out the balance of ones and zeros and therefore the sustained average DC level; the tenth bit encodes whether this inversion took place. The 10-bit TMDS symbol can represent either an 8-bit data value during normal data transmission, or 2 bits of control signals during screen blanking. Of the 1,024 possible combinations of the 10 transmitted bits: 460 combinations are used to represent an 8-bit data value, as most of the 256 possible values have two encoded variants (some values have only one), 4 combinations are used to represent 2 bits of control signals (C0 and C1 in the table below); unlike the data symbols these have such properties that they can be reliably recognized even if sync is lost and are therefore also used for synchronizing the decoder, 2 combinations are used as a guard band before HDMI data, 558 remaining combinations are reserved and forbidden. Control data is encoded using the values in the table below. Control data c
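The first encoding stage described above (choosing XOR or XNOR to minimize transitions and recording the choice in a ninth bit) can be sketched in Python; this illustrative snippet omits the second, DC-balancing stage that appends the tenth bit.

```python
def tmds_stage1(byte: int) -> list[int]:
    """First TMDS encoding stage: transition-minimize an 8-bit value into 9 bits.

    Returns bits q[0..8]; q[8] records whether XOR (1) or XNOR (0) was used.
    The second stage, which conditionally inverts the first eight bits and
    appends a tenth bit for DC balance, is not shown here.
    """
    d = [(byte >> i) & 1 for i in range(8)]       # d[0] is the least significant bit
    ones = sum(d)
    use_xnor = ones > 4 or (ones == 4 and d[0] == 0)

    q = [d[0]]                                    # first bit passes through untransformed
    for i in range(1, 8):
        if use_xnor:
            q.append(1 - (q[i - 1] ^ d[i]))       # XNOR against the previous output bit
        else:
            q.append(q[i - 1] ^ d[i])             # XOR against the previous output bit
    q.append(0 if use_xnor else 1)                # ninth bit encodes which operation was used
    return q

print(tmds_stage1(0b10110100))
```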
https://en.wikipedia.org/wiki/Programmable%20logic%20array
A programmable logic array (PLA) is a kind of programmable logic device used to implement combinational logic circuits. The PLA has a set of programmable AND gate planes, which link to a set of programmable OR gate planes, which can then be conditionally complemented to produce an output. It has 2N AND gates for N input variables, and for M outputs from PLA, there should be M OR gates, each with programmable inputs from all of the AND gates. This layout allows for many logic functions to be synthesized in the sum of products canonical forms. PLAs differ from programmable array logic devices (PALs and GALs) in that both the AND and OR gate planes are programmable.[PAL has programmable AND gates but fixed OR gates] History In 1970, Texas Instruments developed a mask-programmable IC based on the IBM read-only associative memory or ROAM. This device, the TMS2000, was programmed by altering the metal layer during the production of the IC. The TMS2000 had up to 17 inputs and 18 outputs with 8 JK flip-flops for memory. TI coined the term Programmable Logic Array for this device. Implementation procedure Preparation in SOP (sum of products) form. Obtain the minimum SOP form to reduce the number of product terms to a minimum. Decide the input connection of the AND matrix for generating the required product term. Then decide the input connections of OR matrix to generate the sum terms. Decide the connections of invert matrix. Program the PLA. PLA block diagram: Advantages over read-only memory The desired outputs for each combination of inputs could be programmed into a read-only memory, with the inputs being driven by the address bus and the outputs being read out as data. However, that would require a separate memory location for every possible combination of inputs, including combinations that are never supposed to occur, and also duplicating data for "don't care" conditions (for example, logic like "if input A is 1, then, as far as output X is concerned, w
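The sum-of-products structure described above can be modelled in a few lines of Python: each programmed row of the AND plane is a product term over the inputs or their complements, and each OR-plane column sums a chosen subset of those terms. The particular terms programmed below are illustrative.

```python
# Model a tiny PLA with inputs (a, b): the AND plane holds product terms,
# the OR plane selects which product terms feed each output.
def pla(inputs, and_plane, or_plane):
    values = dict(inputs)
    # Evaluate every programmed product term (one row of the AND plane).
    products = [all(values[name] if positive else not values[name]
                    for name, positive in term)
                for term in and_plane]
    # Each output ORs together its selected product terms.
    return [any(products[i] for i in selected) for selected in or_plane]

# Product terms: a.b and a'.b (illustrative programming of the AND plane)
AND_PLANE = [[("a", True), ("b", True)],
             [("a", False), ("b", True)]]
# Output 0 = a.b + a'.b (which simplifies to b); Output 1 = a.b
OR_PLANE = [[0, 1], [0]]

print(pla({"a": 0, "b": 1}, AND_PLANE, OR_PLANE))   # [True, False]
print(pla({"a": 1, "b": 1}, AND_PLANE, OR_PLANE))   # [True, True]
```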
https://en.wikipedia.org/wiki/Programmable%20Array%20Logic
Programmable Array Logic (PAL) is a family of programmable logic device semiconductors used to implement logic functions in digital circuits introduced by Monolithic Memories, Inc. (MMI) in March 1978. MMI obtained a registered trademark on the term PAL for use in "Programmable Semiconductor Logic Circuits". The trademark is currently held by Lattice Semiconductor. PAL devices consisted of a small PROM (programmable read-only memory) core and additional output logic used to implement particular desired logic functions with few components. Using specialized machines, PAL devices were "field-programmable". PALs were available in several variants: "One-time programmable" (OTP) devices could not be updated and reused after initial programming (MMI also offered a similar family called HAL, or "hard array logic", which were like PAL devices except that they were mask-programmed at the factory.). UV erasable versions (e.g.: PALCxxxxx e.g.: PALC22V10) had a quartz window over the chip die and could be erased for re-use with an ultraviolet light source just like an EPROM. Later versions (PALCExxx e.g.: PALCE22V10) were flash erasable devices. In most applications, electrically-erasable GALs are now deployed as pin-compatible direct replacements for one-time programmable PALs. History Before PALs were introduced, designers of digital logic circuits would use small-scale integration (SSI) components, such as those in the 7400 series TTL (transistor-transistor logic) family; the 7400 family included a variety of logic building blocks, such as gates (NOT, NAND, NOR, AND, OR), multiplexers (MUXes) and demultiplexers (DEMUXes), flip flops (D-type, JK, etc.) and others. One PAL device would typically replace dozens of such "discrete" logic packages, so the SSI business declined as the PAL business took off. PALs were used advantageously in many products, such as minicomputers, as documented in Tracy Kidder's best-selling book The Soul of a New Machine. PALs were not the
https://en.wikipedia.org/wiki/Photobiology
Photobiology is the scientific study of the beneficial and harmful interactions of light (technically, non-ionizing radiation) in living organisms. The field includes the study of photophysics, photochemistry, photosynthesis, photomorphogenesis, visual processing, circadian rhythms, photomovement, bioluminescence, and ultraviolet radiation effects. The division between ionizing radiation and non-ionizing radiation is typically considered to be a photon energy greater than 10 eV, which approximately corresponds to both the first ionization energy of oxygen, and the ionization energy of hydrogen at about 14 eV. When photons come into contact with molecules, these molecules can absorb the energy in photons and become excited. Then they can react with molecules around them and stimulate "photochemical" and "photophysical" changes of molecular structures. Photophysics This area of Photobiology focuses on the physical interactions of light and matter. When molecules absorb photons that matches their energy requirements they promote a valence electron from a ground state to an excited state and they become a lot more reactive. This is an extremely fast process, but very important for different processes. Photochemistry This area of Photobiology studies the reactivity of a molecule when it absorbs energy that comes from light. It also studies what happens with this energy, it could be given off as heat or fluorescence so the molecule goes back to ground state. There are 3 basic laws of photochemistry: 1) First Law of Photochemistry: This law explains that in order for photochemistry to happen, light has to be absorbed. 2) Second Law of Photochemistry: This law explains that only one molecule will be activated by each photon that is absorbed. 3) Bunsen-Roscoe Law of Reciprosity: This law explains that the energy in the final products of a photochemical reaction will be directly proportional to the total energy that was initially absorbed by the system. Plant Photo
https://en.wikipedia.org/wiki/Magnetomotive%20force
In physics, the magnetomotive force (abbreviated mmf or MMF, symbol F) is a quantity appearing in the equation for the magnetic flux in a magnetic circuit, Hopkinson's law. It is the property of certain substances or phenomena that give rise to magnetic fields: F = ΦR, where Φ is the magnetic flux and R is the reluctance of the circuit. It can be seen that the magnetomotive force plays a role in this equation analogous to the voltage V in Ohm's law, V = IR, since it is the cause of magnetic flux in a magnetic circuit: F = NI, where N is the number of turns in the coil and I is the electric current through the circuit. F = ΦR, where Φ is the magnetic flux and R is the magnetic reluctance. F = HL, where H is the magnetizing force (the strength of the magnetizing field) and L is the mean length of a solenoid or the circumference of a toroid. Units The SI unit of mmf is the ampere, the same as the unit of current (analogously the units of emf and voltage are both the volt). Informally, and frequently, this unit is stated as the ampere-turn to avoid confusion with current. This was the unit name in the MKS system. Occasionally, the cgs system unit of the gilbert may also be encountered. History The term magnetomotive force was coined by Henry Augustus Rowland in 1880. Rowland intended this to indicate a direct analogy with electromotive force. The idea of a magnetic analogy to electromotive force can be found much earlier in the work of Michael Faraday (1791–1867) and it is hinted at by James Clerk Maxwell (1831–1879). However, Rowland coined the term and was the first to make explicit an Ohm's law for magnetic circuits in 1873. Ohm's law for magnetic circuits is sometimes referred to as Hopkinson's law rather than Rowland's law as some authors attribute the law to John Hopkinson instead of Rowland. According to a review of magnetic circuit analysis methods this is an incorrect attribution originating from an 1885 paper by Hopkinson. Furthermore, Hopkinson actually cites Rowland's 1873 paper in th
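A short numeric illustration of F = NI and of Hopkinson's law follows; all values (turns, current, reluctance) are hypothetical.

```python
N = 500            # turns of the coil (hypothetical)
I = 0.2            # current in amperes (hypothetical)
F = N * I          # magnetomotive force in ampere-turns
print(F)           # 100.0

R = 4e5            # reluctance of the magnetic circuit, in 1/henry (hypothetical)
flux = F / R       # Hopkinson's law: F = flux * R
print(flux)        # 2.5e-04 weber
```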
https://en.wikipedia.org/wiki/Monad%20%28functional%20programming%29
In functional programming, a monad is a structure that combines program fragments (functions) and wraps their return values in a type with additional computation. In addition to defining a wrapping monadic type, monads define two operators: one to wrap a value in the monad type, and another to compose together functions that output values of the monad type (these are known as monadic functions). General-purpose languages use monads to reduce boilerplate code needed for common operations (such as dealing with undefined values or fallible functions, or encapsulating bookkeeping code). Functional languages use monads to turn complicated sequences of functions into succinct pipelines that abstract away control flow, and side-effects. Both the concept of a monad and the term originally come from category theory, where a monad is defined as a functor with additional structure. Research beginning in the late 1980s and early 1990s established that monads could bring seemingly disparate computer-science problems under a unified, functional model. Category theory also provides a few formal requirements, known as the monad laws, which should be satisfied by any monad and can be used to verify monadic code. Since monads make semantics explicit for a kind of computation, they can also be used to implement convenient language features. Some languages, such as Haskell, even offer pre-built definitions in their core libraries for the general monad structure and common instances. Overview "For a monad m, a value of type m a represents having access to a value of type a within the context of the monad." —C. A. McCann More exactly, a monad can be used where unrestricted access to a value is inappropriate for reasons specific to the scenario. In the case of the Maybe monad, it is because the value may not exist. In the case of the IO monad, it is because the value may not be known yet, such as when the monad represents user input that will only be provided after a prompt is displa
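To make the two monad operators concrete, here is a rough Python analogue of the Maybe monad mentioned above: `unit` wraps a value and `bind` composes functions that may fail, short-circuiting on failure. The names and the use of `None` to stand for "no value" are illustrative conventions chosen for this sketch, not the definitions used by any particular language or library.

```python
# A rough sketch of the Maybe monad: bind() threads a possibly-missing value
# through a pipeline without explicit "did the previous step fail?" checks.
from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def unit(value: A) -> Optional[A]:
    """Wrap a plain value into the 'Maybe' context."""
    return value

def bind(m: Optional[A], f: Callable[[A], Optional[B]]) -> Optional[B]:
    """Apply a monadic function to a 'Maybe' value, short-circuiting on None."""
    return None if m is None else f(m)

def parse_int(s: str) -> Optional[int]:
    return int(s) if s.strip().lstrip("-").isdigit() else None

def reciprocal(n: int) -> Optional[float]:
    return None if n == 0 else 1.0 / n

print(bind(bind(unit("4"), parse_int), reciprocal))   # 0.25
print(bind(bind(unit("0"), parse_int), reciprocal))   # None (division would fail)
print(bind(bind(unit("x"), parse_int), reciprocal))   # None (parse failed earlier)
```

The last two calls show the boilerplate reduction the text describes: neither `reciprocal` nor the caller has to test whether the earlier step succeeded.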
https://en.wikipedia.org/wiki/Image%20%28mathematics%29
In mathematics, the image of a function is the set of all output values it may produce. More generally, evaluating a given function f at each element of a given subset A of its domain produces a set, called the "image of A under (or through) f". Similarly, the inverse image (or preimage) of a given subset B of the codomain of f is the set of all elements of the domain that map to the members of B. Image and inverse image may also be defined for general binary relations, not just functions. Definition The word "image" is used in three related ways. In these definitions, f : X → Y is a function from the set X to the set Y. Image of an element If x is a member of X, then the image of x under f, denoted f(x), is the value of f when applied to x. f(x) is alternatively known as the output of f for argument x. Given y, the function f is said to "take the value y" or "take y as a value" if there exists some x in the function's domain such that f(x) = y. Similarly, given a set S, f is said to "take a value in S" if there exists some x in the function's domain such that f(x) ∈ S. However, "f takes all values in S" and "f is valued in S" mean that f(x) ∈ S for every point x in f's domain. Image of a subset Throughout, let f : X → Y be a function. The image under f of a subset A of X is the set of all f(a) for a ∈ A. It is denoted by f[A], or by f(A) when there is no risk of confusion. Using set-builder notation, this definition can be written as f[A] = {f(a) : a ∈ A}. This induces a function from P(X) to P(Y), where P(X) denotes the power set of a set X, that is the set of all subsets of X. See below for more. Image of a function The image of a function is the image of its entire domain, also known as the range of the function. This last usage should be avoided because the word "range" is also commonly used to mean the codomain of f. Generalization to binary relations If R is an arbitrary binary relation on X × Y, then the set of all y such that x R y for some x is called the image, or the range, of R. Dually, the set of all x such that x R y for some y is called the domain of R. Inverse image Let f be a function from X to Y. The preimage or inverse image of a set B ⊆ Y under f, denoted by f⁻¹[B], is the subset of X defined by f⁻¹[B] = {x ∈ X : f(x) ∈ B}. Other notations include f⁻¹(B) and f⁻¹B. The inverse image of a singleton set, denoted by f⁻¹[{y}] or by
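A small illustrative example (not from the article) of image and preimage for a function between finite sets, here f(x) = x mod 3 on X = {0, ..., 9}:

```python
# Image and preimage of subsets under a function on finite sets.
def image(f, subset):
    """The image f(A) = { f(x) : x in A }."""
    return {f(x) for x in subset}

def preimage(f, domain, subset):
    """The preimage f^-1(B) = { x in domain : f(x) in B }."""
    return {x for x in domain if f(x) in subset}

X = set(range(10))
f = lambda x: x % 3

print(image(f, X))             # {0, 1, 2} -- the image (range) of f
print(image(f, {1, 4, 7}))     # {1}
print(preimage(f, X, {0}))     # {0, 3, 6, 9}
print(preimage(f, X, {5}))     # set() -- preimage of a set disjoint from the image
```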
https://en.wikipedia.org/wiki/Gene%20prediction
In computational biology, gene prediction or gene finding refers to the process of identifying the regions of genomic DNA that encode genes. This includes protein-coding genes as well as RNA genes, but may also include prediction of other functional elements such as regulatory regions. Gene finding is one of the first and most important steps in understanding the genome of a species once it has been sequenced. In its earliest days, "gene finding" was based on painstaking experimentation on living cells and organisms. Statistical analysis of the rates of homologous recombination of several different genes could determine their order on a certain chromosome, and information from many such experiments could be combined to create a genetic map specifying the rough location of known genes relative to each other. Today, with comprehensive genome sequence and powerful computational resources at the disposal of the research community, gene finding has been redefined as a largely computational problem. Determining that a sequence is functional should be distinguished from determining the function of the gene or its product. Predicting the function of a gene and confirming that the gene prediction is accurate still demands in vivo experimentation through gene knockout and other assays, although frontiers of bioinformatics research are making it increasingly possible to predict the function of a gene based on its sequence alone. Gene prediction is one of the key steps in genome annotation, following sequence assembly, the filtering of non-coding regions and repeat masking. Gene prediction is closely related to the so-called 'target search problem' investigating how DNA-binding proteins (transcription factors) locate specific binding sites within the genome. Many aspects of structural gene prediction are based on current understanding of underlying biochemical processes in the cell such as gene transcription, translation, protein–protein interactions and regulation process
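As a toy illustration of the simplest ab initio idea behind computational gene finding, the sketch below scans a DNA string for open reading frames (a start codon ATG followed in-frame by a stop codon). This is only a teaching example written for this text; real gene predictors use far richer statistical models, splice-site detection, and evidence from homology.

```python
# Toy ORF scanner: forward strand only, no splicing, no scoring.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq: str, min_codons: int = 3):
    """Yield (start, end, orf) for ATG...stop stretches in each forward reading frame."""
    seq = seq.upper()
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                for j in range(i + 3, len(seq) - 2, 3):
                    if seq[j:j + 3] in STOPS:
                        if (j - i) // 3 >= min_codons:
                            yield (i, j + 3, seq[i:j + 3])
                        break
            i += 3
    # Reverse-strand scanning, splice sites, etc. are deliberately omitted.

dna = "CCATGGCTGCTTAGGGATGAAATTTCCCTGATT"
for start, end, orf in find_orfs(dna):
    print(start, end, orf)
```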
https://en.wikipedia.org/wiki/Sinclair%20QDOS
QDOS is the multitasking operating system found on the Sinclair QL personal computer and its clones. It was designed by Tony Tebby whilst working at Sinclair Research, as an in-house alternative to 68K/OS, which was later cancelled by Sinclair, but released by its original authors, GST Computer Systems. Its name is not regarded as an acronym and is sometimes written as Qdos in official literature (see also the identically pronounced word kudos). QDOS was implemented in Motorola 68000 assembly language, and on the QL, resided in 48 KB of ROM, consisting of either three 16 KB EPROM chips or one 32 KB and one 16 KB ROM chip. These ROMs also held the SuperBASIC interpreter, an advanced variant of the BASIC programming language with structured programming additions. This also acted as the QDOS command-line interpreter. Facilities provided by QDOS included management of processes (or "jobs" in QDOS terminology), memory allocation, and an extensible "redirectable I/O system", providing a generic framework for filesystems and device drivers. Very basic screen window functionality was also provided. This, and several other features, were never fully implemented in the released versions of QDOS, but were improved in later extensions to the operating system produced by Tebby's own company, QJUMP. Rewritten, enhanced versions of QDOS were also developed, including Laurence Reeves' Minerva and Tebby's SMS2 and SMSQ/E. The last is the most modern variant and is still being improved. Versions QDOS versions were identified by numerical version numbers. However, the QL firmware ROMs as a whole (including SuperBASIC) were given two- or three-letter alphabetic identifiers (returned by the SuperBASIC function VER$). The following versions of QDOS were released (dates are estimated first customer shipments): 0.08: the last pre-production version. 1.00: corresponded to the FB version QL ROMs, released in April 1984. 1.01: corresponded to the PM version ROMs. This was faster and had improved
https://en.wikipedia.org/wiki/Data%20center
A data center (American English) or data centre (Commonwealth English) is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems. Since IT operations are crucial for business continuity, it generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., air conditioning, fire suppression), and various security devices. A large data center is an industrial-scale operation using as much electricity as a small town. Estimated global data center electricity consumption in 2022 was 240-340 TWh, or roughly 1-1.3% of global electricity demand. This excludes energy used for cryptocurrency mining, which was estimated to be around 110 TWh in 2022, or another 0.4% of global electricity demand. Data centers can vary widely in terms of size, power requirements, redundancy, and overall structure. Four common categories used to segment types of data centers are onsite data centers, colocation facilities, hyperscale data centers, and edge data centers. History Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised. During the boom of the microcomputer industry, and especia
https://en.wikipedia.org/wiki/Antarctic%20realm
The Antarctic realm is one of eight terrestrial biogeographic realms. The ecosystem includes Antarctica and several island groups in the southern Atlantic and Indian oceans. The continent of Antarctica is so cold that it has supported only 2 vascular plants for millions of years, and its flora presently consists of around 250 lichens, 100 mosses, 25-30 liverworts, and around 700 terrestrial and aquatic algal species, which live on the areas of exposed rock and soil around the shore of the continent. Antarctica's two flowering plant species, the Antarctic hair grass (Deschampsia antarctica) and Antarctic pearlwort (Colobanthus quitensis), are found on the northern and western parts of the Antarctic Peninsula. Antarctica is also home to a diversity of animal life, including penguins, seals, and whales. Several Antarctic and sub-Antarctic island groups are considered part of the Antarctic realm, including Bouvet Island, the Crozet Islands, Heard Island, the Kerguelen Islands, the McDonald Islands, the Prince Edward Islands, the South Georgia Group, the South Orkney Islands, the South Sandwich Islands, and the South Shetland Islands. These islands have a somewhat milder climate than Antarctica proper, and support a greater diversity of tundra plants, although they are all too windy and cold to support trees. Antarctic krill is the keystone species of the ecosystem of the Southern Ocean, and is an important food organism for whales, seals, leopard seals, fur seals, crabeater seals, squid, icefish, penguins, albatrosses and many other birds. The ocean there is so full of phytoplankton because water rises from the depths to the light-flooded surface, bringing nutrients from all oceans back to the photic zone. On August 20, 2014, scientists confirmed the existence of microorganisms living below the ice of Antarctica. History Millions of years ago, Antarctica was warmer and wetter, and supported the Antarctic flora, including forests of podocarps and southern beech. An
https://en.wikipedia.org/wiki/Ecoinformatics
Ecoinformatics, or ecological informatics, is the science of information in ecology and environmental science. It integrates environmental and information sciences to define entities and natural processes with language common to both humans and computers. However, this is a rapidly developing area in ecology and there are alternative perspectives on what constitutes ecoinformatics. A few definitions have been circulating, mostly centered on the creation of tools to access and analyze natural system data. However, the scope and aims of ecoinformatics are certainly broader than the development of metadata standards to be used in documenting datasets. Ecoinformatics aims to facilitate environmental research and management by developing ways to access and integrate databases of environmental information, and by developing new algorithms that enable different environmental datasets to be combined to test ecological hypotheses. Ecoinformatics is related to the concept of ecosystem services. Ecoinformatics characterizes the semantics of natural system knowledge. For this reason, much of today's ecoinformatics research relates to the branch of computer science known as knowledge representation, and active ecoinformatics projects are developing links to activities such as the Semantic Web. Current initiatives to effectively manage, share, and reuse ecological data are indicative of the increasing importance of fields like ecoinformatics in developing the foundations for effectively managing ecological information. Examples of these initiatives are National Science Foundation Datanet projects, DataONE, Data Conservancy, and Artificial Intelligence for Environment & Sustainability.
https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20relativity
This is a list of mathematical topics in relativity, by Wikipedia page. Special relativity Foundational issues principle of relativity speed of light faster-than-light biquaternion conjugate diameters four-vector four-acceleration four-force four-gradient four-momentum four-velocity hyperbolic orthogonality hyperboloid model light-like Lorentz covariance Lorentz group Lorentz transformation Lorentz–FitzGerald contraction hypothesis Minkowski diagram Minkowski space Poincaré group proper length proper time rapidity relativistic wave equations relativistic mass split-complex number unit hyperbola world line General relativity black holes no-hair theorem Hawking radiation Hawking temperature Black hole entropy charged black hole rotating black hole micro black hole Schwarzschild black hole Schwarzschild metric Schwarzschild radius Reissner–Nordström black hole Immirzi parameter closed timelike curve cosmic censorship hypothesis chronology protection conjecture Einstein–Cartan theory Einstein's field equation geodesic gravitational redshift Penrose–Hawking singularity theorems Pseudo-Riemannian manifold stress–energy tensor worm hole Cosmology anti-de Sitter space Ashtekar variables Batalin–Vilkovisky formalism Big Bang Cauchy horizon cosmic inflation cosmic microwave background cosmic variance cosmological constant dark energy dark matter de Sitter space Friedmann–Lemaître–Robertson–Walker metric horizon problem large-scale structure of the cosmos Randall–Sundrum model warped geometry Weyl curvature hypothesis Relativity Mathematics
https://en.wikipedia.org/wiki/Reuleaux%20triangle
A Reuleaux triangle is a curved triangle with constant width, the simplest and best known curve of constant width other than the circle. It is formed from the intersection of three circular disks, each having its center on the boundary of the other two. Constant width means that the separation of every two parallel supporting lines is the same, independent of their orientation. Because its width is constant, the Reuleaux triangle is one answer to the question "Other than a circle, what shape can a manhole cover be made so that it cannot fall down through the hole?" They are named after Franz Reuleaux, a 19th-century German engineer who pioneered the study of machines for translating one type of motion into another, and who used Reuleaux triangles in his designs. However, these shapes were known before his time, for instance by the designers of Gothic church windows, by Leonardo da Vinci, who used it for a map projection, and by Leonhard Euler in his study of constant-width shapes. Other applications of the Reuleaux triangle include giving the shape to guitar picks, fire hydrant nuts, pencils, and drill bits for drilling filleted square holes, as well as in graphic design in the shapes of some signs and corporate logos. Among constant-width shapes with a given width, the Reuleaux triangle has the minimum area and the sharpest (smallest) possible angle (120°) at its corners. By several numerical measures it is the farthest from being centrally symmetric. It provides the largest constant-width shape avoiding the points of an integer lattice, and is closely related to the shape of the quadrilateral maximizing the ratio of perimeter to diameter. It can perform a complete rotation within a square while at all times touching all four sides of the square, and has the smallest possible area of shapes with this property. However, although it covers most of the square in this rotation process, it fails to cover a small fraction of the square's area, near its corners. Becaus
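To make the "minimum area" claim above concrete, the standard area formula for a Reuleaux triangle of width w, (π − √3)w²/2, can be compared with the disc of the same width (diameter w). The formula is assumed here rather than quoted from the article.

```python
import math

# Area of a Reuleaux triangle vs. a disc of the same constant width.
def reuleaux_area(width: float) -> float:
    return (math.pi - math.sqrt(3)) * width ** 2 / 2

def disc_area(width: float) -> float:
    return math.pi * (width / 2) ** 2

w = 1.0
print(reuleaux_area(w))   # ~0.7048 -- the smallest area among width-1 constant-width shapes
print(disc_area(w))       # ~0.7854 -- the circle of the same width is larger
```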
https://en.wikipedia.org/wiki/Barbier%27s%20theorem
In geometry, Barbier's theorem states that every curve of constant width has perimeter π times its width, regardless of its precise shape. This theorem was first published by Joseph-Émile Barbier in 1860. Examples The most familiar examples of curves of constant width are the circle and the Reuleaux triangle. For a circle, the width is the same as the diameter; a circle of width w has perimeter πw. A Reuleaux triangle of width w consists of three arcs of circles of radius w. Each of these arcs has central angle π/3, so the perimeter of the Reuleaux triangle of width w is equal to half the perimeter of a circle of radius w and therefore is equal to πw. A similar analysis of other simple examples such as Reuleaux polygons gives the same answer. Proofs One proof of the theorem uses the properties of Minkowski sums. If K is a body of constant width w, then the Minkowski sum of K and its 180° rotation is a disk with radius w and perimeter 2πw. However, the Minkowski sum acts linearly on the perimeters of convex bodies, so the perimeter of K must be half the perimeter of this disk, which is πw as the theorem states. Alternatively, the theorem follows immediately from the Crofton formula in integral geometry according to which the length of any curve equals the measure of the set of lines that cross the curve, multiplied by their numbers of crossings. Any two curves that have the same constant width are crossed by sets of lines with the same measure, and therefore they have the same length. Historically, Crofton derived his formula later than, and independently of, Barbier's theorem. An elementary probabilistic proof of the theorem can be found at Buffon's noodle. Higher dimensions The analogue of Barbier's theorem for surfaces of constant width is false. In particular, the unit sphere has surface area 4π ≈ 12.566, while the surface of revolution of a Reuleaux triangle with the same constant width has surface area 8π − 4π²/3 ≈ 11.973. Instead, Barbier's theorem generalizes to bodies of constant br
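A quick numeric sanity check of the theorem (written for this text, not taken from the article): build the boundary of a Reuleaux triangle of width w from its three circular arcs, sample it finely, and compare the summed polyline length with πw.

```python
import math

def reuleaux_triangle_points(width: float, samples_per_arc: int = 2000):
    """Sample the boundary of a Reuleaux triangle of the given width, in order."""
    # Equilateral triangle of side `width`; each boundary arc is centred on one vertex.
    vertices = [
        (0.0, 0.0),
        (width, 0.0),
        (width / 2, width * math.sqrt(3) / 2),
    ]
    pts = []
    for k in range(3):
        cx, cy = vertices[k]                  # centre of this arc
        ax, ay = vertices[(k + 1) % 3]        # arc starts at the "next" vertex
        start = math.atan2(ay - cy, ax - cx)  # and sweeps an angle of pi/3
        for i in range(samples_per_arc):
            t = start + (math.pi / 3) * i / samples_per_arc
            pts.append((cx + width * math.cos(t), cy + width * math.sin(t)))
    return pts

def polyline_length(pts):
    return sum(math.dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

w = 1.0
print(polyline_length(reuleaux_triangle_points(w)))  # ~3.14159..., i.e. pi * w
print(math.pi * w)
```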
https://en.wikipedia.org/wiki/Hopf%20fibration
In the mathematical field of differential topology, the Hopf fibration (also known as the Hopf bundle or Hopf map) describes a 3-sphere (a hypersphere in four-dimensional space) in terms of circles and an ordinary sphere. Discovered by Heinz Hopf in 1931, it is an influential early example of a fiber bundle. Technically, Hopf found a many-to-one continuous function (or "map") from the 3-sphere onto the 2-sphere such that each distinct point of the 2-sphere is mapped from a distinct great circle of the 3-sphere. Thus the 3-sphere is composed of fibers, where each fiber is a circle — one for each point of the 2-sphere. This fiber bundle structure is denoted S¹ ↪ S³ → S², meaning that the fiber space S¹ (a circle) is embedded in the total space S³ (the 3-sphere), and p (Hopf's map) projects S³ onto the base space S² (the ordinary 2-sphere). The Hopf fibration, like any fiber bundle, has the important property that it is locally a product space. However it is not a trivial fiber bundle, i.e., S³ is not globally a product of S² and S¹ although locally it is indistinguishable from it. This has many implications: for example the existence of this bundle shows that the higher homotopy groups of spheres are not trivial in general. It also provides a basic example of a principal bundle, by identifying the fiber with the circle group. Stereographic projection of the Hopf fibration induces a remarkable structure on R³, in which all of 3-dimensional space, except for the z-axis, is filled with nested tori made of linking Villarceau circles. Here each fiber projects to a circle in space (one of which is a line, thought of as a "circle through infinity"). Each torus is the stereographic projection of the inverse image of a circle of latitude of the 2-sphere. (Topologically, a torus is the product of two circles.) These tori are illustrated in the images at right. When R³ is compressed to the boundary of a ball, some geometric structure is lost although the topological structure is retained (see Topology and geometr
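The sketch below uses one standard coordinate formula for the Hopf map (assumed here, not quoted from the article): a point (z0, z1) of the 3-sphere in C² is sent to (2·Re(z0·conj(z1)), 2·Im(z0·conj(z1)), |z0|² − |z1|²) on the 2-sphere. It checks numerically that two points on the same fiber (related by a common unit-complex phase) land on the same point of the 2-sphere.

```python
import cmath
import math

def hopf(z0: complex, z1: complex):
    """One standard form of the Hopf map S^3 -> S^2, for |z0|^2 + |z1|^2 = 1."""
    w = z0 * z1.conjugate()
    return (2 * w.real, 2 * w.imag, abs(z0) ** 2 - abs(z1) ** 2)

# A point of S^3 and another point on the same fiber (same circle):
a, b = complex(0.6, 0.0), complex(0.48, 0.64)   # |a|^2 + |b|^2 = 1
phase = cmath.exp(1j * 1.234)                    # any unit complex number
p1, p2 = hopf(a, b), hopf(a * phase, b * phase)

print(p1)               # a point on the unit 2-sphere
print(p2)               # the same point: the whole circle maps to it
print(math.hypot(*p1))  # 1.0 up to rounding error, so the image lies on S^2
```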
https://en.wikipedia.org/wiki/Presence%20information
In computer and telecommunications networks, presence information is a status indicator that conveys the ability and willingness of a potential communication partner—for example a user—to communicate. A user's client provides presence information (presence state) via a network connection to a presence service; this information is stored in what constitutes the user's personal availability record (called a presentity) and can be made available for distribution to other users (called watchers) to convey the user's availability for communication. Presence information has wide application in many communication services and is one of the innovations driving the popularity of instant messaging and recent implementations of voice over IP clients. Presence state A user client may publish a presence state to indicate its current communication status. This published state informs others that wish to contact the user of his or her availability and willingness to communicate. The most common use of presence today is to display an indicator icon on instant messaging clients, typically from a choice of graphic symbols with easy-to-convey meanings, and a list of corresponding text descriptions of each of the states. Even when technically not the same, the "on-hook" or "off-hook" state of a called telephone is an analogy, as long as the caller receives a distinctive tone indicating unavailability or availability. Common states describing the user's availability are "free for chat", "busy", "away", "do not disturb", and "out to lunch". Such states exist in many variations across different modern instant messaging clients. Current standards support a rich choice of additional presence attributes that can be used for presence information, such as user mood, location, or free text status. The analogy with the free/busy tone on the PSTN is inexact, as the "on-hook" telephone status reflects the ability of the network to reach the recipient after the requester has initiated the conversation. The requester must commit to the connection metho
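The toy sketch below models the presentity/watcher relationship described above: clients publish a presence state to a service, and subscribed watchers are notified of changes. The class and state names are purely illustrative conventions for this sketch, not part of any presence standard.

```python
# Toy model of a presence service: presentities store states, watchers get notified.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

STATES = {"available", "busy", "away", "do not disturb", "out to lunch"}

@dataclass
class PresenceService:
    presentities: Dict[str, str] = field(default_factory=dict)
    watchers: Dict[str, List[Callable[[str, str], None]]] = field(default_factory=dict)

    def publish(self, user: str, state: str) -> None:
        if state not in STATES:
            raise ValueError(f"unknown presence state: {state}")
        self.presentities[user] = state
        for notify in self.watchers.get(user, []):
            notify(user, state)

    def subscribe(self, user: str, notify: Callable[[str, str], None]) -> None:
        self.watchers.setdefault(user, []).append(notify)

service = PresenceService()
service.subscribe("alice", lambda user, state: print(f"{user} is now {state}"))
service.publish("alice", "busy")          # -> alice is now busy
service.publish("alice", "available")     # -> alice is now available
```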
https://en.wikipedia.org/wiki/Write-only%20memory%20%28joke%29
Write-only memory (WOM), the opposite of read-only memory (ROM), began as a humorous reference to a memory device that could be written to but not read, as there seemed to be no practical use for a memory circuit from which data could not be retrieved. However, it was eventually recognized that write-only describes certain functionalities in microprocessor systems. The concept is still often used as a joke or euphemism for a failed memory device. The first use of the term is generally attributed to Signetics, whose write-only memory literature, created in 1972 as an in-house practical joke, is frequently referenced within the electronics industry, is a staple of software engineering lexicons, and is included in "best hoaxes" collections. Signetics A "Write-Only Memory" datasheet was created "as a lark" by Signetics engineer John G "Jack" Curtis, inspired by a fictitious and humorous vacuum tube datasheet from the 1940s. Considered "an icebreaker", it was deliberately included in the Signetics catalog. Roy L Twitty, a Signetics PR representative, released a tongue-in-cheek press release touting WOM on April 1, 1973. Instead of the more conventional characteristic curves, the 25120 "fully encoded, 9046×N, Random Access, write-only-memory" data sheet included meaningless diagrams of "bit capacity vs. Temp.", "Iff vs. Vff", "Number of pins remaining vs. number of socket insertions", and "AQL vs. selling price". The fictional device required a 6.3 VAC Vff (vacuum tube filament) supply, a +10 Vcc (double the Vcc of standard TTL logic of the day), and Vdd of 0±2% volt (i.e. ground). It was specified to run between 0 and −70°C. Apple In 1982, Apple published their official Apple IIe Reference Manual (part number A2L2005), which included two references to write-only memory: On page 233: bit bucket: The final resting place of all information; see write-only memory. On page 250: write-only memory: A form of computer memory into which information can be stored but never, ever r
https://en.wikipedia.org/wiki/Implicit%20function%20theorem
In multivariable calculus, the implicit function theorem is a tool that allows relations to be converted to functions of several real variables. It does so by representing the relation as the graph of a function. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function. More precisely, given a system of equations f_i(x_1, ..., x_n, y_1, ..., y_m) = 0, i = 1, ..., m (often abbreviated into F(x, y) = 0), the theorem states that, under a mild condition on the partial derivatives (with respect to each y_j) at a point, the m variables y_j are differentiable functions of the x_i in some neighborhood of the point. As these functions can generally not be expressed in closed form, they are implicitly defined by the equations, and this motivated the name of the theorem. In other words, under a mild condition on the partial derivatives, the set of zeros of a system of equations is locally the graph of a function. History Augustin-Louis Cauchy (1789–1857) is credited with the first rigorous form of the implicit function theorem. Ulisse Dini (1845–1918) generalized the real-variable version of the implicit function theorem to the context of functions of any number of real variables. First example If we define the function f(x, y) = x² + y², then the equation f(x, y) = 1 cuts out the unit circle as the level set {(x, y) : f(x, y) = 1}. There is no way to represent the unit circle as the graph of a function of one variable y = g(x) because for each choice of x ∈ (−1, 1), there are two choices of y, namely ±√(1 − x²). However, it is possible to represent part of the circle as the graph of a function of one variable. If we let g₁(x) = √(1 − x²) for −1 < x < 1, then the graph of y = g₁(x) provides the upper half of the circle. Similarly, if g₂(x) = −√(1 − x²), then the graph of y = g₂(x) gives the lower half of the circle. The purpose of the implicit function theorem is to tell us that functions like g₁ and g₂ almost always exist, even in situations where we cannot write down explicit f
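A numeric sketch of the circle example above (written for this text): near a point where ∂f/∂y ≠ 0, the equation f(x, y) = x² + y² − 1 = 0 can be solved for y as a function of x by a few Newton steps in the y variable alone; near (1, 0), where ∂f/∂y = 0, no single such function exists on both sides, just as the theorem's hypothesis suggests.

```python
import math

def f(x, y):
    return x * x + y * y - 1.0

def df_dy(x, y):
    return 2.0 * y

def implicit_y(x, y_guess, steps=20):
    """Solve f(x, y) = 0 for y near y_guess, using Newton's method in y only."""
    y = y_guess
    for _ in range(steps):
        y -= f(x, y) / df_dy(x, y)
    return y

# Around the point (0, 1) the theorem applies, and we recover the upper branch g1:
print(implicit_y(0.6, y_guess=1.0))   # ~0.8
print(math.sqrt(1 - 0.6 ** 2))        # 0.8, for comparison
```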