https://en.wikipedia.org/wiki/Alternating%20series%20test
In mathematical analysis, the alternating series test is the method used to show that an alternating series is convergent when its terms (1) decrease in absolute value, and (2) approach zero in the limit. The test was used by Gottfried Leibniz and is sometimes known as Leibniz's test, Leibniz's rule, or the Leibniz criterion. The test is only sufficient, not necessary, so some convergent alternating series may fail the first part of the test. Formal Statement Alternating series test A series of the form \sum_{n=0}^{\infty} (-1)^{n} a_n = a_0 - a_1 + a_2 - a_3 + \cdots, where either all a_n are positive or all a_n are negative, is called an alternating series. The alternating series test guarantees that an alternating series converges if the following two conditions are met: |a_n| decreases monotonically, i.e., |a_{n+1}| \le |a_n|, and \lim_{n \to \infty} a_n = 0. Alternating series estimation theorem Moreover, let L denote the sum of the series; then the partial sum S_k = \sum_{n=0}^{k} (-1)^{n} a_n approximates L with error bounded by the next omitted term: |S_k - L| \le |S_{k+1} - S_k| = |a_{k+1}|. Proof Suppose we are given a series of the form \sum_{n=1}^{\infty} (-1)^{n-1} a_n, where \lim_{n \to \infty} a_n = 0 and a_n \ge a_{n+1} for all natural numbers n. (The case of \sum_{n=1}^{\infty} (-1)^{n} a_n follows by taking the negative.) Proof of the alternating series test We will prove that both the partial sums S_{2m+1} with an odd number of terms, and S_{2m} with an even number of terms, converge to the same number L. Thus the usual partial sum S_k also converges to L. The odd partial sums decrease monotonically: S_{2(m+1)+1} = S_{2m+1} - a_{2m+2} + a_{2m+3} \le S_{2m+1}, while the even partial sums increase monotonically: S_{2(m+1)} = S_{2m} + a_{2m+1} - a_{2m+2} \ge S_{2m}, both because a_n decreases monotonically with n. Moreover, since the a_n are positive, S_{2m+1} - S_{2m} = a_{2m+1} \ge 0. Thus we can collect these facts to form the following suggestive inequality: a_1 - a_2 = S_2 \le S_{2m} \le S_{2m+1} \le S_1 = a_1. Now, note that a_1 - a_2 is a lower bound of the monotonically decreasing sequence S_{2m+1}; the monotone convergence theorem then implies that this sequence converges as m approaches infinity. Similarly, the sequence of even partial sums converges too. Finally, they must converge to the same number because \lim_{m \to \infty} (S_{2m+1} - S_{2m}) = \lim_{m \to \infty} a_{2m+1} = 0. Call the limit L; then the monotone convergence theorem also tells us extra information that S_{2m} \le L \le S_{2m+1} for any m. This means the partial sums of an alternating series also "alternate" above and below the final limit.
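A minimal numerical sketch of the estimation theorem above, applied to the alternating harmonic series 1 − 1/2 + 1/3 − ⋯ = ln 2 (an illustration added here, not taken from the article):

```python
# Check the alternating series error bound |S_k - L| <= a_(k+1)
# on the alternating harmonic series, whose sum is ln(2).
import math

def partial_sum(k: int) -> float:
    """S_k = sum_{n=1}^{k} (-1)^(n-1) / n."""
    return sum((-1) ** (n - 1) / n for n in range(1, k + 1))

L = math.log(2)  # the sum of the series
for k in (10, 100, 1000):
    error = abs(partial_sum(k) - L)
    bound = 1 / (k + 1)          # the next omitted term a_(k+1)
    assert error <= bound
    print(f"k={k:5d}  |S_k - L| = {error:.3e}  <=  a_(k+1) = {bound:.3e}")
```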
https://en.wikipedia.org/wiki/Digit%20sum
In mathematics, the digit sum of a natural number in a given number base is the sum of all its digits. For example, the digit sum of the decimal number would be Definition Let be a natural number. We define the digit sum for base , to be the following: where is one less than the number of digits in the number in base , and is the value of each digit of the number. For example, in base 10, the digit sum of 84001 is For any two bases and for sufficiently large natural numbers The sum of the base 10 digits of the integers 0, 1, 2, ... is given by in the On-Line Encyclopedia of Integer Sequences. use the generating function of this integer sequence (and of the analogous sequence for binary digit sums) to derive several rapidly converging series with rational and transcendental sums. Extension to negative integers The digit sum can be extended to the negative integers by use of a signed-digit representation to represent each integer. Applications The concept of a decimal digit sum is closely related to, but not the same as, the digital root, which is the result of repeatedly applying the digit sum operation until the remaining value is only a single digit. The decimal digital root of any non-zero integer will be a number in the range 1 to 9, whereas the digit sum can take any value. Digit sums and digital roots can be used for quick divisibility tests: a natural number is divisible by 3 or 9 if and only if its digit sum (or digital root) is divisible by 3 or 9, respectively. For divisibility by 9, this test is called the rule of nines and is the basis of the casting out nines technique for checking calculations. Digit sums are also a common ingredient in checksum algorithms to check the arithmetic operations of early computers. Earlier, in an era of hand calculation, suggested using sums of 50 digits taken from mathematical tables of logarithms as a form of random number generation; if one assumes that each digit is random, then by the central limit
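A short sketch of the digit-sum definition above in code (illustrative only; the helper name and checks are mine, not from the article):

```python
def digit_sum(n: int, base: int = 10) -> int:
    """Sum of the base-`base` digits of a natural number n."""
    if n < 0 or base < 2:
        raise ValueError("n must be a natural number and base must be >= 2")
    total = 0
    while n > 0:
        n, digit = divmod(n, base)
        total += digit
    return total

assert digit_sum(84001) == 8 + 4 + 0 + 0 + 1 == 13
assert digit_sum(0b10110, base=2) == 3            # binary digit sum of 22
assert digit_sum(84001) % 9 == 84001 % 9          # basis of casting out nines
```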
https://en.wikipedia.org/wiki/Polydisc
In the theory of functions of several complex variables, a branch of mathematics, a polydisc is a Cartesian product of discs. More specifically, if we denote by the open disc of center z and radius r in the complex plane, then an open polydisc is a set of the form It can be equivalently written as One should not confuse the polydisc with the open ball in Cn, which is defined as Here, the norm is the Euclidean distance in Cn. When , open balls and open polydiscs are not biholomorphically equivalent, that is, there is no biholomorphic mapping between the two. This was proven by Poincaré in 1907 by showing that their automorphism groups have different dimensions as Lie groups. When the term bidisc is sometimes used. A polydisc is an example of logarithmically convex Reinhardt domain.
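The defining formulas appear to have been dropped from the excerpt above; the standard definitions are (reconstructed, not copied from the article):

```latex
\begin{align*}
D(z,r) &= \{\, w \in \mathbb{C} : \lvert w - z \rvert < r \,\}
  && \text{(open disc of centre } z \text{ and radius } r\text{)}\\
D^{n}(\mathbf z,\mathbf r) &= D(z_1,r_1) \times \cdots \times D(z_n,r_n)
  = \{\, \mathbf w \in \mathbb{C}^{n} : \lvert w_k - z_k \rvert < r_k,\ 1 \le k \le n \,\}
  && \text{(open polydisc)}\\
B(\mathbf z, r) &= \{\, \mathbf w \in \mathbb{C}^{n} : \lVert \mathbf z - \mathbf w \rVert_{2} < r \,\}
  && \text{(open ball in } \mathbb{C}^{n}\text{)}
\end{align*}
```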
https://en.wikipedia.org/wiki/Tripod%20%28web%20hosting%29
Tripod.com is a web hosting service owned by Lycos. Originally aiming its services to college students and young adults, it was one of several sites trying to build online communities during the 1990s. As such, Tripod formed part of the first wave of user-generated content. Free webpages are no longer available and have been replaced by paid services. Services Tripod offers web hosting with two paid plans, "personal" and "professional", which differ in features and storage space, but are both powered by the web authoring system "Lycos Publish". This tool has completely replaced the former offering of more general web hosting and removed free plans altogether. Tripod offered free and paid web hosting services, including 20 megabytes of storage space and the ability to run Common Gateway Interface (CGI) scripts in Perl. In addition to basic hosting, Tripod also offered a blogging tool, a photo album manager, and the Trellix site builder for WYSIWYG page editing. Tripod's for-pay services included additional disk space, a shopping cart, domain names, web and POP/IMAP email. History Tripod originated in 1992 with two Williams College classmates, Bo Peabody and Brett Hershey, along with Dick Sabot, an economics professor at the school. The company was headquartered in Williamstown, Massachusetts, with Peabody as CEO. Although it would eventually focus on the Internet, Tripod also published a magazine, Tools for Life, that was distributed with textbooks, and offered a discount card for students. Website launch The domain name Tripod.com was created on September 29, 1994 and the site officially launched in 1995 after operating in "sneak-preview mode" for a period. Billed as a "hip Web site and pay service for and by college students", it offered how-to advice on practical issues that might concern young people when first living away from home. It planned to charge a minimal fee and make money primarily on commissions from partners who would sell products on the site. O
https://en.wikipedia.org/wiki/E6B
The E6B flight computer is a form of circular slide rule used in aviation. It is an instance of an analog calculating device still being used the 21st century. They are mostly used in flight training, because these flight computers have been replaced with electronic planning tools or software and websites that make these calculations for the pilots. These flight computers are used during flight planning (on the ground before takeoff) to aid in calculating fuel burn, wind correction, time en route, and other items. In the air, the flight computer can be used to calculate ground speed, estimated fuel burn and updated estimated time of arrival. The back is designed for wind vector solutions, i.e., determining how much the wind is affecting one's speed and course. They are frequently referred to by the nickname "whiz wheel". Construction Flight computers are usually made out of aluminum, plastic or cardboard, or combinations of these materials. One side is used for wind triangle calculations using a rotating scale and a sliding panel. The other side is a circular slide rule. Extra marks and windows facilitate calculations specifically needed in aviation. Electronic versions are also produced, resembling calculators, rather than manual slide rules. Aviation remains one of the few places that the slide rule is still in widespread use. Manual E6Bs/CRP-1s remain popular with some users and in some environments rather than the electronic ones because they are lighter, smaller, less prone to break, easy to use one-handed, quicker and do not require electrical power. In flight training for a private pilot or instrument rating, mechanical flight computers are still often used to teach the fundamental computations. This is in part also due to the complex nature of some trigonometric calculations which would be comparably difficult to perform on a conventional scientific calculator. The graphic nature of the flight computer also helps in catching many errors which in part ex
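The wind-side calculation mentioned above can be written out with the usual wind-triangle relations; the sketch below uses the standard textbook formulas (not taken from this article), with illustrative variable names:

```python
# Standard wind-triangle computation (the kind of problem the wind side of an
# E6B solves): given true airspeed, true course and wind, find the wind
# correction angle (WCA), heading and ground speed.
import math

def wind_triangle(tas: float, course_deg: float, wind_from_deg: float, wind_speed: float):
    angle = math.radians(wind_from_deg - course_deg)        # wind angle off the nose
    wca = math.asin(wind_speed * math.sin(angle) / tas)     # crab angle into the wind
    ground_speed = tas * math.cos(wca) - wind_speed * math.cos(angle)
    heading = (course_deg + math.degrees(wca)) % 360
    return math.degrees(wca), heading, ground_speed

# 100 kt TAS, course 000, wind from 090 at 20 kt -> crab about 11.5 deg right, GS about 98 kt
print(wind_triangle(100, 0, 90, 20))
```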
https://en.wikipedia.org/wiki/Radiation%20implosion
Radiation implosion is the compression of a target by the use of high levels of electromagnetic radiation. The major use for this technology is in fusion bombs and inertial confinement fusion research. History Radiation implosion was first developed by Klaus Fuchs and John von Neumann in the United States, as part of their work on the original "Classical Super" hydrogen bomb design. Their work resulted in a secret patent filed in 1946, and later given to the USSR by Fuchs as part of his nuclear espionage. However, their scheme was not the same as used in the final hydrogen bomb design, and neither the American nor the Soviet programs were able to make use of it directly in developing the hydrogen bomb (its value would become apparent only after the fact). A modified version of the Fuchs-von Neumann scheme was incorporated into the "George" shot of Operation Greenhouse. In 1951, Stanislaw Ulam had the idea to use hydrodynamic shock of a fission weapon to compress more fissionable material to incredible densities in order to make megaton-range, two-stage fission bombs. He then realized that this approach might be useful for starting a thermonuclear reaction. He presented the idea to Edward Teller, who realized that radiation compression would be both faster and more efficient than mechanical shock. This combination of ideas, along with a fission "sparkplug" embedded inside of the fusion fuel, became what is known as the Teller–Ulam design for the hydrogen bomb. Fission bomb radiation source Most of the energy released by a fission bomb is in the form of x-rays. The spectrum is approximately that of a black body at a temperature of 50,000,000 kelvins (a little more than three times the temperature of the Sun's core). The amplitude can be modeled as a trapezoidal pulse with a one microsecond rise time, one microsecond plateau, and one microsecond fall time. For a 30 kiloton fission bomb, the total x-ray output would be 100 terajoules (more than 70% of the total yie
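As a quick consistency check of the figures just quoted (added here, using the standard conversion 1 kt TNT ≈ 4.184 TJ):

```latex
30\ \text{kt} \times 4.184\ \tfrac{\text{TJ}}{\text{kt}} \approx 125.5\ \text{TJ},
\qquad
\frac{100\ \text{TJ}}{125.5\ \text{TJ}} \approx 0.80 > 70\%.
```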
https://en.wikipedia.org/wiki/Atrophic%20gastritis
Atrophic gastritis is a process of chronic inflammation of the gastric mucosa of the stomach, leading to a loss of gastric glandular cells and their eventual replacement by intestinal and fibrous tissues. As a result, the stomach's secretion of essential substances such as hydrochloric acid, pepsin, and intrinsic factor is impaired, leading to digestive problems. The most common are vitamin B12 deficiency possibly leading to pernicious anemia; and malabsorption of iron, leading to iron deficiency anaemia. It can be caused by persistent infection with Helicobacter pylori, or can be autoimmune in origin. Those with autoimmune atrophic gastritis (Type A gastritis) are statistically more likely to develop gastric carcinoma, Hashimoto's thyroiditis, and achlorhydria. Type A gastritis primarily affects the fundus (body) of the stomach and is more common with pernicious anemia. Type B gastritis primarily affects the antrum, and is more common with H. pylori infection. Signs and symptoms Some people with atrophic gastritis may be asymptomatic. Symptomatic patients are mostly females and signs of atrophic gastritis are those associated with iron deficiency: fatigue, restless legs syndrome, brittle nails, hair loss, impaired immune function, and impaired wound healing. And other symptoms, such as delayed gastric emptying (80%), reflux symptoms (25%), peripheral neuropathy (25% cases), autonomic abnormalities, and memory loss, are less common and occur in 1%–2% of cases. Psychiatric disorders are also reported, such as mania, depression, obsessive compulsive disorder, psychosis and cognitive impairment. Although autoimmune atrophic gastritis impairs iron and vitamin B12 absorption, iron deficiency is detected at a younger age than pernicious anemia. Associated conditions People with atrophic gastritis are also at increased risk for the development of gastric adenocarcinoma. Causes Recent research has shown that autoimmune metaplastic atrophic gastritis (AMAG) is a result
https://en.wikipedia.org/wiki/Edward%20F.%20Moore
Edward Forrest Moore (November 23, 1925 in Baltimore, Maryland – June 14, 2003 in Madison, Wisconsin) was an American professor of mathematics and computer science, the inventor of the Moore finite state machine, and an early pioneer of artificial life. Biography Moore received a B.S. in chemistry from the Virginia Polytechnic Institute in Blacksburg, Virginia in 1947 and a Ph.D. in Mathematics from Brown University in Providence, Rhode Island in June 1950. He worked at the University of Illinois at Urbana–Champaign from 1950 to 1952 and was a visiting professor at MIT and visiting lecturer at Harvard University simultaneously in 1961-1962. He worked at Bell Labs from 1952 to 1966. After that, he was a professor at the University of Wisconsin–Madison from 1966 until he retired in 1985. He married Elinor Constance Martin and they had three children. Scientific work He was the first to use the type of finite state machine (FSM) that is commonly used today, the Moore FSM. With Claude Shannon he did seminal work on computability theory and built reliable circuits using less reliable relays. He also spent a great deal of his later years on a fruitless effort to solve the Four Color Theorem. With John Myhill, Moore proved the Garden of Eden theorem characterizing the cellular automaton rules that have patterns with no predecessor. He is also the namesake of the Moore neighborhood for cellular automata, used by Conway's Game of Life, and was the first to publish on the firing squad synchronization problem in cellular automata. In a 1956 article in Scientific American, he proposed "Artificial Living Plants," which would be floating factories that could create copies of themselves. They could be programmed to perform some function (extracting fresh water, harvesting minerals from seawater) for an investment that would be relatively small compared to the huge returns from the exponentially growing numbers of factories. Moore also asked which regular graphs can have t
https://en.wikipedia.org/wiki/Self-replicating%20machine
A self-replicating machine is a type of autonomous robot that is capable of reproducing itself autonomously using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept of self-replicating machines has been advanced and examined by Homer Jacobson, Edward F. Moore, Freeman Dyson, John von Neumann, Konrad Zuse and in more recent times by K. Eric Drexler in his book on nanotechnology, Engines of Creation (coining the term clanking replicator for such machines) and by Robert Freitas and Ralph Merkle in their review Kinematic Self-Replicating Machines which provided the first comprehensive analysis of the entire replicator design space. The future development of such technology is an integral part of several plans involving the mining of moons and asteroid belts for ore and other materials, the creation of lunar factories, and even the construction of solar power satellites in space. The von Neumann probe is one theoretical example of such a machine. Von Neumann also worked on what he called the universal constructor, a self-replicating machine that would be able to evolve and which he formalized in a cellular automata environment. Notably, Von Neumann's Self-Reproducing Automata scheme posited that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and how it is separately translated and replicated in the cell. A self-replicating machine is an artificial self-replicating system that relies on conventional large-scale technology and automation. Although suggested earlier than in the late 1940's by Von Neumann, no self-replicating machine has been seen until today. Certain idiosyncratic terms are occasionally found in the literature. For example, the term clanking replicator was once used by Drexler to distinguish macros
https://en.wikipedia.org/wiki/Tommaso%20Toffoli
Tommaso Toffoli () is an Italian-American professor of electrical and computer engineering at Boston University where he joined the faculty in 1995. He has worked on cellular automata and the theory of artificial life (with Edward Fredkin and others), and is known for the invention of the Toffoli gate. Early life and career He was born in June, 1943 in Montereale Valcellina, in northeastern Italy, to Francesco and Valentina (Saveri) Toffoli and was raised in Rome. He received his laurea in physics (equivalent to a Master's degree) from the University of Rome La Sapienza in 1967. Toffoli moved to the United States in 1969. In 1976 he received a Ph.D. in computer and communication science from the University of Michigan, then in 1978 he joined the faculty of Massachusetts Institute of Technology as a principal research scientist, where he remained until 1995, when he joined the faculty of Boston University. Books Cellular Automata Machines: A New Environment for Modeling, MIT Press (1987), with Norman Margolus. . See also Billiard-ball computer Block cellular automaton CAM-6 Computronium Critters (cellular automaton) Programmable matter Reversible cellular automaton
https://en.wikipedia.org/wiki/Discrete%20modelling
Discrete modelling is the discrete analogue of continuous modelling. In discrete modelling, formulae are fit to discrete data—data that could potentially take on only a countable set of values, such as the integers, and which are not infinitely divisible. A common method in this form of modelling is to use recurrence relations. Applied mathematics
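As a toy illustration of the approach described above (fitting a recurrence relation to discrete data), the sketch below fits a first-order linear recurrence by least squares and uses it to extrapolate; the data and names are made up for the example:

```python
# Fit x_(n+1) = a * x_n + b to discrete observations by least squares,
# then extrapolate with the fitted recurrence. Illustrative sketch only.
data = [3, 7, 15, 31, 63, 127]        # hypothetical integer-valued observations

xs, ys = data[:-1], data[1:]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x
print(f"fitted recurrence: x_(n+1) = {a:.2f} * x_n + {b:.2f}")   # here: 2.00 * x_n + 1.00

x = data[-1]
for _ in range(3):                     # extrapolate the next three terms
    x = a * x + b
    print(round(x))                    # 255, 511, 1023
```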
https://en.wikipedia.org/wiki/KvLQT1
Kv7.1 (KvLQT1) is a potassium channel protein whose primary subunit in humans is encoded by the KCNQ1 gene. Kv7.1 is a voltage and lipid-gated potassium channel present in the cell membranes of cardiac tissue and in inner ear neurons among other tissues. In the cardiac cells, Kv7.1 mediates the IKs (or slow delayed rectifying K+) current that contributes to the repolarization of the cell, terminating the cardiac action potential and thereby the heart's contraction. It is a member of the KCNQ family of potassium channels. Structure KvLQT1 is made of six membrane-spanning domains S1-S6, two intracellular domains, and a pore loop. The KvLQT1 channel is made of four KCNQ1 subunits, which form the actual ion channel. Function This gene encodes a protein for a voltage-gated potassium channel required for the repolarization phase of the cardiac action potential. The gene product can form heteromultimers with two other potassium channel proteins, KCNE1 and KCNE3. The gene is located in a region of chromosome 11 that contains a large number of contiguous genes that are abnormally imprinted in cancer and the Beckwith-Wiedemann syndrome. Two alternative transcripts encoding distinct isoforms have been described. Clinical significance Mutations in the gene can lead to a defective protein and several forms of inherited arrhythmias as Long QT syndrome which is a prolongation of the QT interval of heart repolarization, Short QT syndrome, and Familial Atrial Fibrillation. KvLQT1 are also expressed in the pancreas, and KvLQT1 Long QT syndrome patients has been shown to have hyperinsulinemic hypoglycaemia following an oral glucose load. Currents arising from Kv7.1 in over-expression systems have never been recapitulated in native tissues - Kv7.1 is always found in native tissues with a modulatory subunit. In cardiac tissue, these subunits comprise KCNE1 and yotiao. Though physiologically irrelevant, homotetrameric Kv7.1 channels also display a unique form of C-type ina
https://en.wikipedia.org/wiki/Astrology%20and%20the%20classical%20elements
Astrology has used the concept of classical elements from antiquity up until the present. In Western astrology and Sidereal astrology four elements are used: Fire, Earth, Air, and Water. Western astrology In Western tropical astrology, there are 12 astrological signs. Each of the four elements is associated with three signs of the Zodiac, which are always located exactly 120 degrees away from each other along the ecliptic and said to be in trine with one another. Most modern astrologers use the four classical elements extensively, (also known as triplicities), and indeed it is still viewed as a critical part of interpreting the astrological chart. Beginning with the first sign Aries which is a Fire sign, the next in line Taurus is Earth, then to Gemini which is Air, and finally to Cancer which is Water. This cycle continues on twice more and ends with the twelfth and final astrological sign, Pisces. The elemental rulerships for the twelve astrological signs of the zodiac (according to Marcus Manilius) are summarised as follows: Fire — 1 – Aries; 5 – Leo; 9 – Sagittarius – hot, dry, ardent Earth — 2 – Taurus; 6 – Virgo; 10 – Capricorn – heavy, cold, dry Air — 3 – Gemini; 7 – Libra; 11 – Aquarius – light, hot, wet Water — 4 – Cancer; 8 – Scorpio; 12 – Pisces – cold, wet, soft Elements of the zodiac Triplicity rulerships In traditional astrology, each triplicity has several planetary rulers, which change with conditions of sect – that is, whether the chart is a day chart or a night chart. Triplicity rulerships are an important essential dignity – one of the several factors used by traditional astrologers to weigh strength, effectiveness, and integrity of each planet in a chart. Triplicity rulerships (using the "Dorothean system") are as follows: "Participating" rulers were not used by Ptolemy, as well as some subsequent astrologers in later traditions who followed his approach. Triplicities by season In ancient astrology, triplicities were more of a sea
https://en.wikipedia.org/wiki/Iodine%E2%80%93starch%20test
The iodine–starch test is a chemical reaction that is used to test for the presence of starch or for iodine. The combination of starch and iodine is intensely blue-black. The interaction between starch and the triiodide anion () is the basis for iodometry. History and principles The iodine–starch test was first described by J. J. Colin and H. F. Gaultier de Claubry, and independently by F. Stromeyer, in 1814. The triiodide anion instantly produces an intense blue-black colour upon contact with starch. The intensity of the colour decreases with increasing temperature and with the presence of water-miscible organic solvents such as ethanol. The test cannot be performed at very low pH due to the hydrolysis of the starch under these conditions. It is thought that the iodine–iodide mixture combines with the starch to form an infinite polyiodide homopolymer. This was rationalized through single crystal X-ray crystallography and comparative Raman spectroscopy. Starch as an indicator Starch is often used in chemistry as an indicator for redox titrations where triiodide is present. Starch forms a very dark blue-black complex with triiodide. However, the complex is not formed if only iodine or only iodide (I−) is present. The colour of the starch complex is so deep, that it can be detected visually when the concentration of the iodine is as low as 20 µM at 20 °C. During iodine titrations, concentrated iodine solutions must be reacted with some titrant, often thiosulfate, in order to remove most of the iodine before the starch is added. This is due to the insolubility of the starch–triiodide complex which may prevent some of the iodine reacting with the titrant. Close to the endpoint, the starch is added, and the titration process is resumed taking into account the amount of thiosulfate added before adding the starch. The color change can be used to detect moisture or perspiration, as in the Minor test or starch–iodine test. See also Lugol's iodine Counterfeit banknote d
https://en.wikipedia.org/wiki/Jellium
Jellium, also known as the uniform electron gas (UEG) or homogeneous electron gas (HEG), is a quantum mechanical model of interacting electrons in a solid where the positive charges (i.e. atomic nuclei) are assumed to be uniformly distributed in space; the electron density is a uniform quantity as well in space. This model allows one to focus on the effects in solids that occur due to the quantum nature of electrons and their mutual repulsive interactions (due to like charge) without explicit introduction of the atomic lattice and structure making up a real material. Jellium is often used in solid-state physics as a simple model of delocalized electrons in a metal, where it can qualitatively reproduce features of real metals such as screening, plasmons, Wigner crystallization and Friedel oscillations. At zero temperature, the properties of jellium depend solely upon the constant electronic density. This property lends it to a treatment within density functional theory; the formalism itself provides the basis for the local-density approximation to the exchange-correlation energy density functional. The term jellium was coined by Conyers Herring in 1952, alluding to the "positive jelly" background, and the typical metallic behavior it displays. Hamiltonian The jellium model treats the electron-electron coupling rigorously. The artificial and structureless background charge interacts electrostatically with itself and the electrons. The jellium Hamiltonian for N electrons confined within a volume of space Ω, and with electronic density ρ(r) and (constant) background charge density n(R) = N/Ω is where Hel is the electronic Hamiltonian consisting of the kinetic and electron-electron repulsion terms: Hback is the Hamiltonian of the positive background charge interacting electrostatically with itself: Hel-back is the electron-background interaction Hamiltonian, again an electrostatic interaction: Hback is a constant and, in the limit of an infinite volume, divergent
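The displayed Hamiltonian seems to have been lost in extraction; in Gaussian units a standard way to write it is as follows (reconstructed, so conventions may differ slightly from the article):

```latex
\begin{align*}
\hat H &= \hat H_{\mathrm{el}} + \hat H_{\mathrm{back}} + \hat H_{\mathrm{el\text{-}back}},\\
\hat H_{\mathrm{el}} &= \sum_{i=1}^{N} \frac{\hat{\mathbf p}_i^{2}}{2m}
   + \sum_{i<j} \frac{e^{2}}{\lvert \mathbf r_i - \mathbf r_j \rvert},\\
\hat H_{\mathrm{back}} &= \frac{e^{2}}{2} \int_{\Omega}\!\!\int_{\Omega}
   \mathrm d\mathbf R\, \mathrm d\mathbf R'\,
   \frac{n(\mathbf R)\, n(\mathbf R')}{\lvert \mathbf R - \mathbf R' \rvert},\\
\hat H_{\mathrm{el\text{-}back}} &= -\, e^{2} \sum_{i=1}^{N} \int_{\Omega}
   \mathrm d\mathbf R\, \frac{n(\mathbf R)}{\lvert \mathbf r_i - \mathbf R \rvert},
   \qquad n(\mathbf R) = \frac{N}{\Omega}.
\end{align*}
```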
https://en.wikipedia.org/wiki/Dirk%20Meyer
Derrick R. "Dirk" Meyer (born November 24, 1961) is a former Chief Executive Officer of Advanced Micro Devices, serving in the position from July 18, 2008 to January 10, 2011. Education He received a bachelor's degree in computer engineering from the University of Illinois at Urbana-Champaign and a master's degree in business administration from Boston University Graduate School of Management. Career He was a co-architect of the Alpha 21064 and Alpha 21264 microprocessors during his employment at DEC and also worked at Intel in its microprocessor design group. Meyer joined AMD in 1996, where he personally led the team that designed and developed the Athlon processor. Dirk Meyer was formerly president and chief executive officer of AMD. At one time, he was the chief operating officer. In this role, he shared leadership and management of AMD with former Chief Executive Officer and Chairman of AMD Hector Ruiz. Prior to this role, Meyer served as president and chief operating officer of AMD's Microprocessor Solutions Sector, where he had overall responsibility for AMD's microprocessor business, including product development, manufacturing, operations and product marketing. On July 18, 2008, Dirk Meyer replaced Hector Ruiz as the CEO of AMD. Meyer focused the company on the PC and data center server market. In 2010, AMD announced that more notebooks than ever were reaching the market that year. Meyer justified this focus by stating that a growing mobile and consumer electronics market does not shrink the traditional market; he wanted to address those newer markets later, once AMD was prepared to dedicate sufficient resources to address them well. This strategy led to the Bulldozer architecture for the traditional PC market and Bobcat for the netbook and tablet market then dominated almost exclusively by Intel, both reaching the market in 2011 and 2012. At the beginning of 2011, Meyer was removed as CEO after discussions with the AMD Board. His departure is widely believed to
https://en.wikipedia.org/wiki/Swiss%20coordinate%20system
The Swiss coordinate system (or Swiss grid) is a geographic coordinate system used in Switzerland and Liechtenstein for maps and surveying by the Swiss Federal Office of Topography (Swisstopo). A first coordinate system was introduced in 1903 under the name LV03 (Landesvermessung 1903, German for “land survey 1903”), based on the Mercator projection and the Bessel ellipsoid. With the advent of GPS technology, a new coordinate system was introduced in 1995 under the name LV95 (Landesvermessung 1995, German for “land survey 1995”) after a 7-year measurement campaign. LV is translated as MN in English. LV03 Introduced in 1903, this first geographic coordinate system rested upon the two dominant methodological pillars of geodesy and cartography at the time: the Bessel ellipsoid and the Mercator projection. Its measurements used the Bessel ellipsoid as an approximation of the Earth's shape, and its maps used the Mercator projection as a projection technique. Although not ideal, these approximations still offered a high level of precision in the case of Switzerland, due to the small size of its territory (41,285 km2 with max. 350km lengthways and 220km from North to South). The fundamental reference point of the LV03 coordinate system was the old observatory of Bern, nowadays the location of the Institute of Exact Sciences of Bern University, in downtown Bern (Sidlerstrasse 5 - 46°57'3.9" N, 7°26'19.1" E). The coordinates of this reference point were arbitrarily fixed at 600'000 m E / 200'000 m N – with the East coordinate (E) noted before the North coordinate (N), unlike in the traditional latitude / longitude coordinate system. In selecting the values of these reference coordinates, the intention of the Swiss Federal Office of Topography was to guarantee that every point of the Swiss territory be identified by positive coordinates. The reference coordinates thus needed to be large enough to allow for positive coordinates to be allocated to the southernmost and west
https://en.wikipedia.org/wiki/Direct%20comparison%20test
In mathematics, the comparison test, sometimes called the direct comparison test to distinguish it from similar related tests (especially the limit comparison test), provides a way of deducing the convergence or divergence of an infinite series or an improper integral. In both cases, the test works by comparing the given series or integral to one whose convergence properties are known. For series In calculus, the comparison test for series typically consists of a pair of statements about infinite series \sum a_n and \sum b_n with non-negative (real-valued) terms: If the infinite series \sum b_n converges and 0 \le a_n \le b_n for all sufficiently large n (that is, for all n > N for some fixed value N), then the infinite series \sum a_n also converges. If the infinite series \sum b_n diverges and 0 \le b_n \le a_n for all sufficiently large n, then the infinite series \sum a_n also diverges. Note that the series having larger terms is sometimes said to dominate (or eventually dominate) the series with smaller terms. Alternatively, the test may be stated in terms of absolute convergence, in which case it also applies to series with complex terms: If the infinite series \sum b_n is absolutely convergent and |a_n| \le |b_n| for all sufficiently large n, then the infinite series \sum a_n is also absolutely convergent. If the infinite series \sum b_n is not absolutely convergent and |b_n| \le |a_n| for all sufficiently large n, then the infinite series \sum a_n is also not absolutely convergent. Note that in this last statement, the series \sum a_n could still be conditionally convergent; for real-valued series, this could happen if the a_n are not all nonnegative. The second pair of statements are equivalent to the first in the case of real-valued series because \sum c_n converges absolutely if and only if \sum |c_n|, a series with nonnegative terms, converges. Proof The proofs of all the statements given above are similar. Here is a proof of the third statement. Let \sum a_n and \sum b_n be infinite series such that \sum b_n converges absolutely (thus \sum |b_n| converges), and without loss of generality assume that |a_n| \le |b_n| for all positive integers n. Consider the partial sums S_n = |a_1| + |a_2| + \cdots + |a_n| and T_n = |b_1| + |b_2| + \cdots + |b_n| of the two series of absolute values.
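A short worked instance of the first statement (added for illustration):

```latex
0 \le \frac{1}{n^{2}+n} \le \frac{1}{n^{2}} \quad \text{for all } n \ge 1,
\qquad
\sum_{n=1}^{\infty} \frac{1}{n^{2}} < \infty
\;\Longrightarrow\;
\sum_{n=1}^{\infty} \frac{1}{n^{2}+n} \text{ converges (in fact it telescopes to } 1\text{).}
```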
https://en.wikipedia.org/wiki/Solution%20concept
In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called "solutions", and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium. Many solution concepts, for many games, will result in more than one solution. This puts any one of the solutions in doubt, so a game theorist may apply a refinement to narrow down the solutions. Each successive solution concept presented in the following improves on its predecessor by eliminating implausible equilibria in richer games. Formal definition Let be the class of all games and, for each game , let be the set of strategy profiles of . A solution concept is an element of the direct product i.e., a function such that for all Rationalizability and iterated dominance In this solution concept, players are assumed to be rational and so strictly dominated strategies are eliminated from the set of strategies that might feasibly be played. A strategy is strictly dominated when there is some other strategy available to the player that always has a higher payoff, regardless of the strategies that the other players choose. (Strictly dominated strategies are also important in minimax game-tree search.) For example, in the (single period) prisoners' dilemma (shown below), cooperate is strictly dominated by defect for both players because either player is always better off playing defect, regardless of what his opponent does. Nash equilibrium A Nash equilibrium is a strategy profile (a strategy profile specifies a strategy for every player, e.g. in the above prisoners' dilemma game (cooperate, defect) specifies that prisoner 1 plays cooperate and prisoner 2 plays defect) in which every strategy is a best response to every other strategy played. A strategy by a player is a best response to another player's strategy if there is no othe
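The symbols in the formal definition above appear to have been dropped; one common way of writing it (reconstructed, so the notation is mine) is: let Γ be the class of all games and, for each game G ∈ Γ, let S_G be its set of strategy profiles. Then a solution concept is

```latex
F \in \prod_{G \in \Gamma} 2^{S_G},
\qquad \text{i.e.} \qquad
F : \Gamma \to \bigcup_{G \in \Gamma} 2^{S_G}
\quad \text{with} \quad
F(G) \subseteq S_G \ \text{for all } G \in \Gamma .
```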
https://en.wikipedia.org/wiki/Bat-Signal
The Bat-Signal is a distress signal device appearing in American comic books published by DC Comics, as a means to summon the superhero, Batman. It is a specially modified searchlight with a stylized emblem of a bat affixed to the light, allowing it to project a large bat symbol onto cloudy night skies over Gotham City. The signal is used by the Gotham City Police Department as a method of contacting and summoning Batman in the event his help is needed, but also as a weapon of psychological intimidation to the numerous criminals of Gotham City. It doubles as the primary logo for the Batman series of comic books, TV shows, and films. To celebrate Batman's 80th anniversary, DC Comics and Warner Bros. lit the Bat-Signal in thirteen cities on September 21, 2019, starting in Melbourne and ending in Los Angeles. Origins The Bat-Signal first appeared in Detective Comics #60 (February 1942). The signal has several different origins in comics featuring post-Crisis continuity. It is introduced as a new tool after Batman's first encounter with the Joker in the 2005 series Batman: The Man Who Laughs, and also during the 1990 "Prey" storyline in Legends of the Dark Knight. In the 2006 series Batman and the Mad Monk, Commissioner James Gordon initially uses a pager to contact Batman, but during a meeting with the superhero, Gordon throws it away, saying he prefers a more public means of contacting him. After Batman departs, Gordon looks out at the city and considers the exceptional view from his current position, hinting at the future creation of the Signal. In the 1989 Batman film, Batman gives the signal to the Gotham police force, enabling them to call him when the city was in danger. In 2005's Batman Begins, then-lieutenant James Gordon installs the Bat-signal on the roof of the police department himself. The film suggests Gordon was inspired to create the signal after Batman left mobster Carmine Falcone chained across a spotlight after a confrontation at the docks, F
https://en.wikipedia.org/wiki/Mojiky%C5%8D
(), also known by its full name , is a character encoding scheme. The , which published the character set, also published computer software and TrueType fonts to accompany it. The Mojikyō Institute, chaired by , originally had its character set and related software and data redistributed on CD-ROMs sold in Kinokuniya stores. Conceptualized in 1996, the first version of the CD-ROM was released in July 1997. For a time, the Mojikyō Institute also offered a web subscription, termed " WEB" (), which had more up-to-date characters. , Mojikyō encoded 174,975 characters. Among those, 150,366 characters (86%) then belonged to the extended Chinese–Japanese–Korean–Vietnamese (CJKV) family. Many of Mojikyō's characters are considered obsolete or obscure, and are not encoded by any other character set, including the most widely used international text encoding standard, Unicode. Originally a paid proprietary software product, as of 2015, the Mojikyō Institute began to upload its latest releases to Internet Archive as freeware, as a memorial to honor one of its developers, , who died that year. On December 15, 2018, version 4.0 was released. The next day, Ishikawa announced that without Furuya this would be the final release of Mojikyō. Premise The encoding was created to provide a complete index of Chinese, Korean, and Japanese characters. It also encodes a large number of characters in ancient scripts, such as the oracle bone script, the seal script, and Sanskrit (Siddhaṃ). For many characters, it is the only character encoding to encode them, and its data is often used as a starting point for Unicode proposals. However, has much looser standards than Unicode for encoding, which leads to have many encoded glyphs of dubious, or even unintentionally fictional, origin. As such, while many non-Unicode characters are suitable for addition to Unicode, not all can become Unicode characters, due to the differing standards of evidence required by each. Composition The font
https://en.wikipedia.org/wiki/Intensional%20logic
Intensional logic is an approach to predicate logic that extends first-order logic, which has quantifiers that range over the individuals of a universe (extensions), by additional quantifiers that range over terms that may have such individuals as their value (intensions). The distinction between intensional and extensional entities is parallel to the distinction between sense and reference. Overview Logic is the study of proof and deduction as manifested in language (abstracting from any underlying psychological or biological processes). Logic is not a closed, completed science, and presumably, it will never stop developing: the logical analysis can penetrate into varying depths of the language (sentences regarded as atomic, or splitting them to predicates applied to individual terms, or even revealing such fine logical structures like modal, temporal, dynamic, epistemic ones). In order to achieve its special goal, logic was forced to develop its own formal tools, most notably its own grammar, detached from simply making direct use of the underlying natural language. Functors (also known as function words) belong to the most important categories in logical grammar (along with basic categories like sentence and individual name): a functor can be regarded as an "incomplete" expression with argument places to fill in. If we fill them in with appropriate subexpressions, then the resulting entirely completed expression can be regarded as a result, an output. Thus, a functor acts like a function sign, taking on input expressions, resulting in a new, output expression. Semantics links expressions of language to the outside world. Also logical semantics has developed its own structure. Semantic values can be attributed to expressions in basic categories: the reference of an individual name (the "designated" object named by that) is called its extension; and as for sentences, their truth value is their extension. As for functors, some of them are simpler than others:
https://en.wikipedia.org/wiki/Solution%20in%20radicals
A solution in radicals or algebraic solution is a closed-form expression, and more specifically a closed-form algebraic expression, that is the solution of a polynomial equation, and relies only on addition, subtraction, multiplication, division, raising to integer powers, and the extraction of th roots (square roots, cube roots, and other integer roots). A well-known example is the solution of the quadratic equation There exist more complicated algebraic solutions for cubic equations and quartic equations. The Abel–Ruffini theorem, and, more generally Galois theory, state that some quintic equations, such as do not have any algebraic solution. The same is true for every higher degree. However, for any degree there are some polynomial equations that have algebraic solutions; for example, the equation can be solved as The eight other solutions are nonreal complex numbers, which are also algebraic and have the form where is a fifth root of unity, which can be expressed with two nested square roots. See also for various other examples in degree 5. Évariste Galois introduced a criterion allowing one to decide which equations are solvable in radicals. See Radical extension for the precise formulation of his result. Algebraic solutions form a subset of closed-form expressions, because the latter permit transcendental functions (non-algebraic functions) such as the exponential function, the logarithmic function, and the trigonometric functions and their inverses. See also Solvable quintics Solvable sextics Solvable septics
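The quadratic example referred to above is presumably the familiar formula, reconstructed here for reference (a standard example of a quintic with no such radical solution is x^5 − x + 1 = 0):

```latex
ax^{2} + bx + c = 0 \ (a \neq 0)
\qquad \Longrightarrow \qquad
x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}.
```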
https://en.wikipedia.org/wiki/Organic%20field-effect%20transistor
An organic field-effect transistor (OFET) is a field-effect transistor using an organic semiconductor in its channel. OFETs can be prepared either by vacuum evaporation of small molecules, by solution-casting of polymers or small molecules, or by mechanical transfer of a peeled single-crystalline organic layer onto a substrate. These devices have been developed to realize low-cost, large-area electronic products and biodegradable electronics. OFETs have been fabricated with various device geometries. The most commonly used device geometry is bottom gate with top drain and source electrodes, because this geometry is similar to the thin-film silicon transistor (TFT) using thermally grown SiO2 as gate dielectric. Organic polymers, such as poly(methyl-methacrylate) (PMMA), can also be used as dielectric. One of the benefits of OFETs, especially compared with inorganic TFTs, is their unprecedented physical flexibility, which leads to biocompatible applications, for instance in the future health care industry of personalized biomedicines and bioelectronics. In May 2007, Sony reported the first full-color, video-rate, flexible, all plastic display, in which both the thin-film transistors and the light-emitting pixels were made of organic materials. History of OFETs The concept of a field-effect transistor (FET) was first proposed by Julius Edgar Lilienfeld, who received a patent for his idea in 1930. He proposed that a field-effect transistor behaves as a capacitor with a conducting channel between a source and a drain electrode. Applied voltage on the gate electrode controls the amount of charge carriers flowing through the system. The first insulated-gate field-effect transistor was designed and prepared by Mohamed Atalla and Dawon Kahng at Bell Labs using a metal–oxide–semiconductor: the MOSFET (metal–oxide–semiconductor field-effect transistor). It was invented in 1959, and presented in 1960. Also known as the MOS transistor, the MOSFET is the most widely manufact
https://en.wikipedia.org/wiki/NSA%20Suite%20B%20Cryptography
NSA Suite B Cryptography was a set of cryptographic algorithms promulgated by the National Security Agency as part of its Cryptographic Modernization Program. It was to serve as an interoperable cryptographic base for both unclassified information and most classified information. Suite B was announced on 16 February 2005. A corresponding set of unpublished algorithms, Suite A, is "used in applications where Suite B may not be appropriate. Both Suite A and Suite B can be used to protect foreign releasable information, US-Only information, and Sensitive Compartmented Information (SCI)." In 2018, NSA replaced Suite B with the Commercial National Security Algorithm Suite (CNSA). Suite B's components were: Advanced Encryption Standard (AES) with key sizes of 128 and 256 bits. For traffic flow, AES should be used with either the Counter Mode (CTR) for low bandwidth traffic or the Galois/Counter Mode (GCM) mode of operation for high bandwidth traffic (see Block cipher modes of operation) symmetric encryption Elliptic Curve Digital Signature Algorithm (ECDSA) digital signatures Elliptic Curve Diffie–Hellman (ECDH) key agreement Secure Hash Algorithm 2 (SHA-256 and SHA-384) message digest General information NIST, Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography, Special Publication 800-56A Suite B Cryptography Standards , Suite B Certificate and Certificate Revocation List (CRL) Profile , Suite B Cryptographic Suites for Secure Shell (SSH) , Suite B Cryptographic Suites for IPsec , Suite B Profile for Transport Layer Security (TLS) These RFC have been downgraded to historic references per . History In December 2006, NSA submitted an Internet Draft on implementing Suite B as part of IPsec. This draft had been accepted for publication by IETF as RFC 4869, later made obsolete by RFC 6379. Certicom Corporation of Ontario, Canada, which was purchased by BlackBerry Limited in 2009, holds some elliptic curve patents, wh
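As a concrete illustration of one Suite B primitive (AES-256 in GCM mode), the sketch below uses the third-party Python cryptography package; it is a usage example, not an NSA reference implementation:

```python
# AES-256-GCM encrypt/decrypt round trip using the "cryptography" package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES with a 256-bit key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; never reuse with the same key
plaintext = b"high-bandwidth traffic sample"
associated_data = b"header"                 # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
assert aesgcm.decrypt(nonce, ciphertext, associated_data) == plaintext
```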
https://en.wikipedia.org/wiki/Friedreich%27s%20ataxia
Friedreich's ataxia (FRDA or FA) is an autosomal-recessive genetic disease that causes difficulty walking, a loss of coordination in the arms and legs, and impaired speech that worsens over time. Symptoms generally start between 5 and 20 years of age. Many develop hypertrophic cardiomyopathy and require a mobility aid such as a cane, walker, or wheelchair in their teens. As the disease progresses, some affected people lose their sight and hearing. Other complications may include scoliosis and diabetes mellitus. The condition is caused by mutations in the FXN gene on chromosome 9, which makes a protein called frataxin. In FRDA, cells produce less frataxin. Degeneration of nerve tissue in the spinal cord causes the ataxia; particularly affected are the sensory neurons essential for directing muscle movement of the arms and legs through connections with the cerebellum. The spinal cord becomes thinner, and nerve cells lose some myelin sheath. In February 2023, the first approval of a treatment for FA was granted by the FDA. Approval in the EU is pending. There are several additional therapies in trial. FRDA shortens life expectancy due to heart disease, but some people can live into their 60s or older. FRDA affects one in 50,000 people in the United States and is the most common inherited ataxia. Rates are highest in people of Western European descent. The condition is named after German physician Nikolaus Friedreich, who first described it in the 1860s. Symptoms Symptoms typically start between the ages of 5 and 15, but in late-onset FRDA, they may occur after age 25 years. The symptoms are broad, but consistently involve gait and limb ataxia, dysarthria and loss of lower limb reflexes. Classical symptoms There is some variability in symptom frequency, onset and progression. All individuals with FRDA develop neurological symptoms, including dysarthria and loss of lower limb reflexes, and more than 90% present with ataxia. Cardiac issues are very common with
https://en.wikipedia.org/wiki/Source%20counts
The source counts distribution of radio-sources from a radio-astronomical survey is the cumulative distribution of the number of sources (N) brighter than a given flux density (S). As it is usually plotted on a log-log scale its distribution is known as the log N – log S plot. It is one of several cosmological tests that were conceived in the 1930s to check the viability of and compare new cosmological models. Early work to catalogue radio sources aimed to determine the source count distribution as a discriminating test of different cosmological models. For example, a uniform distribution of radio sources at low redshift, such as might be found in a 'steady-state Euclidean universe,' would produce a slope of −1.5 in the cumulative distribution of log(N) versus log(S). Data from the early Cambridge 2C survey (published 1955) apparently implied a (log(N), log(S)) slope of nearly −3.0. This appeared to invalidate the steady state theory of Fred Hoyle, Hermann Bondi and Thomas Gold. Unfortunately many of these weaker sources were subsequently found to be due to 'confusion' (the blending of several weak sources in the side-lobes of the interferometer, producing a stronger response). By contrast, analysis from the contemporaneous Mills Cross data (by Slee and Mills) were consistent with an index of −1.5. Later and more accurate surveys from Cambridge, 3C, 3CR, and 4C, also showed source count slopes steeper than −1.5, though by a smaller margin than 2C. This convinced some cosmologists that the steady state theory was wrong, although residual problems with confusion provided some defense for Hoyle and his colleagues. The immediate interest in testing the steady-state theory through source-counts was reduced by the discovery of the 3K microwave background radiation in the mid 1960s, which essentially confirmed the Big-Bang model. Later radio survey data have shown a complex picture — the 3C and 4C claims appear to hold up, while at fainter levels the source counts
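The −1.5 slope quoted above for a uniform, static Euclidean population follows from a standard argument (added here for context): sources of luminosity L and uniform number density n_0 are detectable above flux density S out to a distance r_max, so

```latex
r_{\max}(S) = \sqrt{\frac{L}{4\pi S}},
\qquad
N(>S) = n_{0}\,\tfrac{4}{3}\pi\, r_{\max}^{3} \;\propto\; S^{-3/2},
\qquad
\frac{\mathrm d \log N}{\mathrm d \log S} = -1.5 .
```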
https://en.wikipedia.org/wiki/Presbycusis
Presbycusis (also spelled presbyacusis, from Greek πρέσβυς presbys "old" + ἄκουσις akousis "hearing"), or age-related hearing loss, is the cumulative effect of aging on hearing. It is a progressive and irreversible bilateral symmetrical age-related sensorineural hearing loss resulting from degeneration of the cochlea or associated structures of the inner ear or auditory nerves. The hearing loss is most marked at higher frequencies. Hearing loss that accumulates with age but is caused by factors other than normal aging (nosocusis and sociocusis) is not presbycusis, although differentiating the individual effects of distinct causes of hearing loss can be difficult. The cause of presbycusis is a combination of genetics, cumulative environmental exposures and pathophysiological changes related to aging. At present there are no preventive measures known; treatment is by hearing aid or surgical implant. Presbycusis is the most common cause of hearing loss, affecting one out of three persons by age 65, and one out of two by age 75. Presbycusis is the second most common illness next to arthritis in aged people. Many vertebrates such as fish, birds and amphibians do not experience presbycusis in old age as they are able to regenerate their cochlear sensory cells, whereas mammals including humans have genetically lost this regenerative ability. Presentation Primary symptoms: sounds or speech becoming dull, muffled or attenuated need for increased volume on television, radio, music and other audio sources difficulty using the telephone loss of directionality of sound difficulty understanding speech, especially women and children difficulty in speech discrimination against background noise (cocktail party effect) Secondary symptoms: hyperacusis, heightened sensitivity to certain volumes and frequencies of sound, resulting from "recruitment" tinnitus, ringing, buzzing, hissing or other sounds in the ear when no external sound is present Usually occurs after age 50, but dete
https://en.wikipedia.org/wiki/Information%20set%20%28game%20theory%29
In game theory, an information set is the basis for decision making in a game, reflecting the actions available to the players and what each player knows when choosing among them. For a particular player, an information set is a set of decision vertices which, given what that player has observed, are indistinguishable to them at the current point in the game. The information set is an important concept in games of imperfect information. For a better idea of decision vertices, refer to Figure 1. If the game has perfect information, every information set contains only one member, namely the point actually reached at that stage of the game, since each player knows the exact mix of chance moves and player strategies up to the current point in the game. Otherwise, it is the case that some players cannot be sure what the game state is; for instance, not knowing what exactly happened in the past or what should be done right now. Information sets are used in extensive form games and are often depicted in game trees. Game trees show the path from the start of a game and the subsequent paths that can be made depending on each player's next move. In games of imperfect information there is hidden information: each player does not have complete knowledge of the opponent's information, such as cards that do not appear in a poker game. Therefore, when constructing a game tree, it is difficult to determine precisely which node has been reached on the basis of known information alone, since certain information about the opponent is missing; a player can only be sure of being at one of a range of possible nodes. This set of indistinguishable nodes in a particular player's game tree is known as the 'information set'. Information sets can be easily depicted in game trees to display each player's possible moves, typically using dotted lines, circles or even by just label
https://en.wikipedia.org/wiki/Complete%20information
In economics and game theory, complete information is an economic situation or game in which knowledge about other market participants or players is available to all participants. The utility functions (including risk aversion), payoffs, strategies and "types" of players are thus common knowledge. Complete information is the concept that each player in the game is aware of the sequence, strategies, and payoffs throughout gameplay. Given this information, the players have the ability to plan accordingly based on the information to maximize their own strategies and utility at the end of the game. Inversely, in a game with incomplete information, players do not possess full information about their opponents. Some players possess private information, a fact that the others should take into account when forming expectations about how those players will behave. A typical example is an auction: each player knows his own utility function (valuation for the item), but does not know the utility function of the other players. Applications Games of incomplete information arise frequently in social science. For instance, John Harsanyi was motivated by consideration of arms control negotiations, where the players may be uncertain both of the capabilities of their opponents and of their desires and beliefs. It is often assumed that the players have some statistical information about the other players, e.g. in an auction, each player knows that the valuations of the other players are drawn from some probability distribution. In this case, the game is called a Bayesian game. In games that have a varying degree of complete information and game type, there are different methods available to the player to solve the game based on this information. In games with static, complete information, the approach to solve is to use Nash equilibrium to find viable strategies. In dynamic games with complete information, backward induction is the solution concept, which eliminates non-credible
https://en.wikipedia.org/wiki/Marie-Jean-L%C3%A9on%2C%20Marquis%20d%27Hervey%20de%20Saint%20Denys
Marie-Jean-Léon Lecoq, Baron d'Hervey de Juchereau, Marquis d'Hervey de Saint-Denys (6 May 1822 – 2 November 1892), son of Pierre Marin Alexandre Le Coq or Lecoq, Baron d'Hervey (1780-1858), and Marie Louise Josephine Mélanie Juchereau de Saint-Denys (1789-1844), was born on 6 May 1822. D'Hervey was a French sinologist also known for his research on dreams. Contributions to Sinology Hervey de Saint Denys made an intense study of Chinese, and in 1851 D'Hervey published his Recherches sur l'agriculture et l'horticulture des Chinois (Transl: Research on the agriculture and horticulture of the Chinese), in which he dealt with the plants and animals that might potentially be acclimatized to and introduced in Western countries. He also translated Chinese texts and some Chinese stories, not of classical interest, but valuable for the light they throw on Chinese culture and customs. He was a man of letters too: for example, he translated some Spanish-language works and wrote a history of the Spanish drama. D'Hervey also created a literary translation theory, paraphrased by Joshua A. Fogel, the author of a book review on De l'un au multiple: Traductions du chinois vers les langues européennes, as "empowering the translator to use his own creative talents to embellish wherever necessary—not a completely free hand, but some leeway to avoid the pitfall of becoming too leaden." By adoption by his uncle Amédée Louis Vincent Juchereau (1782-1858) he became Marquis de Saint-Denys. At the Paris Exhibition of 1867, Hervey de Saint Denys acted as commissioner for the Chinese exhibits and was decorated with the Légion d'honneur. On 11 June 1868 the Marquis married the 19-year-old Austrian orphan Louise de Ward. In 1874 he succeeded Stanislas Julien in the chair of Chinese at the Collège de France, and in 1878 he was elected a member of the Académie des Inscriptions et Belles-Lettres. D'Hervey died in his hotel in Paris on 2 November 1892. Contributions to Oneirology
https://en.wikipedia.org/wiki/Mittag-Leffler%20function
In mathematics, the Mittag-Leffler function E_{α,β} is a special function, a complex function which depends on two complex parameters α and β. It may be defined by the following series when the real part of α is strictly positive: E_{α,β}(z) = Σ_{k=0}^{∞} z^k / Γ(αk + β), where Γ is the gamma function. When β = 1, it is abbreviated as E_α(z) = E_{α,1}(z). For α = 0, the series above equals the Taylor expansion of the geometric series and consequently E_{0,1}(z) = 1/(1 − z) for |z| < 1. In the case α and β are real and positive, the series converges for all values of the argument z, so the Mittag-Leffler function is an entire function. This function is named after Gösta Mittag-Leffler. This class of functions is important in the theory of the fractional calculus. For α > 0, the Mittag-Leffler function E_α is an entire function of order 1/α, and is in some sense the simplest entire function of its order. The Mittag-Leffler function satisfies the recurrence property (Theorem 5.1 of ) from which the following Poincaré asymptotic expansion holds: for and real such that then for all , we can show the following asymptotic expansions (Section 6. of ): -as : , -and as : , where we used the notation . Special cases For α = 0, 1/2, 1, 2 we find (Section 2 of ): Error function: E_{1/2}(z) = exp(z²) erfc(−z). The sum of a geometric progression: E_0(z) = 1/(1 − z). Exponential function: E_1(z) = exp(z). Hyperbolic cosine: E_2(z) = cosh(√z). For , we have For , the integral gives, respectively: , , . Mittag-Leffler's integral representation The integral representation of the Mittag-Leffler function is (Section 6 of ) where the contour starts and ends at and circles around the singularities and branch points of the integrand. Related to the Laplace transform and Mittag-Leffler summation is the expression (Eq (7.5) of with ) Applications of Mittag-Leffler function One of the applications of the Mittag-Leffler function is in modeling fractional order viscoelastic materials. Experimental investigations into the time-dependent relaxation behavior of viscoelastic materials are characterized by a very fast decrease of the stress at the beginning of the relaxation process and an extr
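Assuming the standard defining series E_{α,β}(z) = Σ_{k≥0} z^k / Γ(αk + β), a direct truncation gives a quick way to evaluate the function numerically for small arguments. This is only a sketch; robust evaluation for large |z| needs the asymptotic expansions or the integral representation mentioned above.

```python
# Hedged sketch: evaluate the two-parameter Mittag-Leffler function
# E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta)
# by truncating the defining series. Adequate only for small |z|.
from math import gamma, exp, isclose

def mittag_leffler(z, alpha, beta=1.0, terms=100):
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

# Sanity check: E_{1,1}(z) should reduce to exp(z).
print(isclose(mittag_leffler(2.0, 1.0), exp(2.0), rel_tol=1e-9))  # True
```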
https://en.wikipedia.org/wiki/Cadmium%20poisoning
Cadmium is a naturally occurring toxic metal with common exposure in industrial workplaces, plant soils, and from smoking. Due to its low permissible exposure in humans, overexposure may occur even in situations where trace quantities of cadmium are found. Cadmium is used extensively in electroplating, although the nature of the operation does not generally lead to overexposure. Cadmium is also found in some industrial paints and may represent a hazard when sprayed. Operations involving removal of cadmium paints by scraping or blasting may pose a significant hazard. The primary use of cadmium is in the manufacturing of NiCd rechargeable batteries. The primary source for cadmium is as a byproduct of refining zinc metal. Exposures to cadmium are addressed in specific standards for the general industry, shipyard employment, the construction industry, and the agricultural industry. Signs and symptoms Acute Acute exposure to cadmium fumes may cause flu-like symptoms including chills, fever, and muscle ache sometimes referred to as "the cadmium blues." Symptoms may resolve after a week if there is no respiratory damage. More severe exposures can cause tracheobronchitis, pneumonitis, and pulmonary edema. Symptoms of inflammation may start hours after the exposure and include cough, dryness and irritation of the nose and throat, headache, dizziness, weakness, fever, chills, and chest pain. Chronic Complications of cadmium poisoning include cough, anemia, and kidney failure (possibly leading to death). Cadmium exposure increases one's chances of developing cancer. Similar to zinc, long-term exposure to cadmium fumes can cause lifelong anosmia. Bone and joints One of the main effects of cadmium poisoning is weak and brittle bones. The bones become soft (osteomalacia), lose bone mineral density (osteoporosis), and become weaker. This results in joint and back pain, and increases the risk of fractures. Spinal and leg pain is common, and a waddling gait often develops d
https://en.wikipedia.org/wiki/Weierstrass%20functions
In mathematics, the Weierstrass functions are special functions of a complex variable that are auxiliary to the Weierstrass elliptic function. They are named for Karl Weierstrass. The relation between the sigma, zeta, and functions is analogous to that between the sine, cotangent, and squared cosecant functions: the logarithmic derivative of the sine is the cotangent, whose derivative is negative the squared cosecant. Weierstrass sigma function The Weierstrass sigma function associated to a two-dimensional lattice is defined to be the product where denotes or are a fundamental pair of periods. Through careful manipulation of the Weierstrass factorization theorem as it relates also to the sine function, another potentially more manageable infinite product definition is for any with and where we have used the notation (see zeta function below). Weierstrass zeta function The Weierstrass zeta function is defined by the sum The Weierstrass zeta function is the logarithmic derivative of the sigma-function. The zeta function can be rewritten as: where is the Eisenstein series of weight 2k + 2. The derivative of the zeta function is , where is the Weierstrass elliptic function. The Weierstrass zeta function should not be confused with the Riemann zeta function in number theory. Weierstrass eta function The Weierstrass eta function is defined to be and any w in the lattice This is well-defined, i.e. only depends on the lattice vector w. The Weierstrass eta function should not be confused with either the Dedekind eta function or the Dirichlet eta function. Weierstrass ℘-function The Weierstrass p-function is related to the zeta function by The Weierstrass ℘-function is an even elliptic function of order N=2 with a double pole at each lattice point and no other poles. Degenerate case Consider the situation where one period is real, which we can scale to be and the other is taken to the limit of so that the functions are only singly-periodic.
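As a rough numerical illustration (not part of the article's text), the Weierstrass ℘-function can be approximated by truncating its standard defining lattice sum ℘(z) = 1/z² + Σ_{w≠0} [1/(z−w)² − 1/w²]; the periods and the truncation bound below are arbitrary choices, and convergence of the raw sum is slow.

```python
# Hedged sketch: the Weierstrass p-function by truncating its defining
# lattice sum. Periods omega1, omega2 and the cutoff N are illustrative only.
def wp(z, omega1=1.0, omega2=1j, N=40):
    total = 1.0 / z**2
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue  # skip the origin of the lattice
            w = m * omega1 + n * omega2
            total += 1.0 / (z - w) ** 2 - 1.0 / w**2
    return total

# The p-function is even: these two values should agree up to truncation error.
print(wp(0.3 + 0.2j), wp(-0.3 - 0.2j))
```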
https://en.wikipedia.org/wiki/Index%20%28economics%29
In statistics, economics, and finance, an index is a statistical measure of change in a representative group of individual data points. These data may be derived from any number of sources, including company performance, prices, productivity, and employment. Economic indices track economic health from different perspectives. Examples include the consumer price index, which measures changes in retail prices paid by consumers, and the cost-of-living index (COLI), which measures the relative cost of living over time. Influential global financial indices such as the Global Dow and the NASDAQ Composite track the performance of selected large and powerful companies in order to evaluate and predict economic trends. The Dow Jones Industrial Average and the S&P 500 primarily track U.S. markets, though some legacy international companies are included. The consumer price index tracks the variation in prices for different consumer goods and services over time in a constant geographical location and is integral to calculations used to adjust salaries, bond interest rates, and tax thresholds for inflation. The GDP deflator index, which is used to convert nominal GDP into real GDP, measures the level of prices of all new, domestically produced, final goods and services in an economy. Market performance indices include the labour market index/job index and proprietary stock market index investment instruments offered by brokerage houses. Some indices display market variations. For example, the Economist provides a Big Mac Index that expresses the adjusted cost of a globally ubiquitous Big Mac as a percentage over or under the cost of a Big Mac in the U.S. in USD. Such indices can be used to help forecast currency values. Index numbers An index number is an economic data figure reflecting price or quantity compared with a standard or base value. The base usually equals 100 and the index number is usually expressed as 100 times the ratio to the base value. For example, if a commodity costs twice as much in 1970
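A minimal numerical sketch of the index-number arithmetic just described, with invented prices:

```python
# Hedged sketch: an index number is 100 times the ratio of the current value
# to the base-period value. Prices below are invented for illustration.
base_price = 2.50      # price of the commodity in the base year
current_price = 5.00   # price in the year being compared

index_number = 100 * current_price / base_price
print(index_number)    # 200.0 -> the commodity costs twice its base-year price
```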
https://en.wikipedia.org/wiki/Structured%20cabling
In telecommunications, structured cabling is building or campus cabling infrastructure that consists of a number of standardized smaller elements (hence structured) called subsystems. Structured cabling components include twisted pair and optical cabling, patch panels and patch cables. Overview Structured cabling is the design and installation of a cabling system that will support multiple hardware uses and be suitable for today's needs and those of the future. With a correctly installed system, current and future requirements can be met, and hardware that is added in the future will be supported Structured cabling design and installation is governed by a set of standards that specify wiring data centers, offices, and apartment buildings for data or voice communications using various kinds of cable, most commonly category 5e (Cat 5e), category 6 (Cat 6), and fiber optic cabling and modular connectors. These standards define how to lay the cabling in various topologies in order to meet the needs of the customer, typically using a central patch panel (which is normally 19-inch rack-mounted), from where each modular connection can be used as needed. Each outlet is then patched into a network switch (normally also rack-mounted) for network use or into an IP or PBX (private branch exchange) telephone system patch panel. Lines patched as data ports into a network switch require simple straight-through patch cables at each end to connect a computer. Voice patches to PBXs in most countries require an adapter at the remote end to translate the configuration on 8P8C modular connectors into the local standard telephone wall socket. No adapter is needed in North America as the 6P2C and 6P4C plugs most commonly used with RJ11 and RJ14 telephone connections are physically and electrically compatible with the larger 8P8C socket. RJ25 and RJ61 connections are physically but not electrically compatible, and cannot be used. In the United Kingdom, an adapter must be present at
https://en.wikipedia.org/wiki/Joseph%20Oesterl%C3%A9
Joseph Oesterlé (born 1954) is a French mathematician who, along with David Masser, formulated the abc conjecture which has been called "the most important unsolved problem in diophantine analysis". He is a member of Bourbaki.
https://en.wikipedia.org/wiki/Steric%20factor
The steric factor, usually denoted ρ, is a quantity used in collision theory. Also called the probability factor, the steric factor is defined as the ratio between the experimental value of the rate constant and the one predicted by collision theory. It can also be defined as the ratio between the pre-exponential factor and the collision frequency, and it is most often less than unity. Physically, the steric factor can be interpreted as the ratio of the cross section for reactive collisions to the total collision cross section. Usually, the more complex the reactant molecules, the lower the steric factors. Nevertheless, some reactions exhibit steric factors greater than unity: the harpoon reactions, which involve atoms that exchange electrons, producing ions. The deviation from unity can have different causes: the molecules are not spherical, so different geometries are possible; not all the kinetic energy is delivered into the right spot; the presence of a solvent (when applied to solutions); and so on. When collision theory is applied to reactions in solution, the solvent cage has an effect on the reactant molecules, as several collisions can take place in a single encounter, which leads to predicted preexponential factors being too large. ρ values greater than unity can be attributed to favorable entropic contributions. Usually there is no simple way to accurately estimate steric factors without performing trajectory or scattering calculations. The pre-exponential factor itself is also commonly known as the frequency factor.
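A minimal numerical sketch of the definition above, with invented values for the pre-exponential factor and the collision-theory prediction:

```python
# Hedged sketch: the steric factor as the ratio of the experimentally fitted
# pre-exponential factor to the collision-theory frequency factor.
# Both numbers below are invented for illustration.
A_experimental = 2.0e9      # pre-exponential factor, L mol^-1 s^-1 (hypothetical)
Z_collision    = 8.0e11     # collision-theory prediction, same units (hypothetical)

rho = A_experimental / Z_collision
print(rho)   # 0.0025 -> well below unity, as is typical for complex reactants
```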
https://en.wikipedia.org/wiki/Salt
In common usage, salt is a mineral composed primarily of sodium chloride (NaCl). When used in food, especially at table in ground form in dispensers, it is more formally called table salt. In the form of a natural crystalline mineral, salt is also known as rock salt or halite. Salt is essential for life in general, and saltiness is one of the basic human tastes. Salt is one of the oldest and most ubiquitous food seasonings, and is known to uniformly improve the taste perception of food, including otherwise unpalatable food. Salting, brining, and pickling are also ancient and important methods of food preservation. Some of the earliest evidence of salt processing dates to around 6000 BC, when people living in the area of present-day Romania boiled spring water to extract salts; a salt works in China dates to approximately the same period. Salt was also prized by the ancient Hebrews, Greeks, Romans, Byzantines, Hittites, Egyptians, and Indians. Salt became an important article of trade and was transported by boat across the Mediterranean Sea, along specially built salt roads, and across the Sahara on camel caravans. The scarcity and universal need for salt have led nations to go to war over it and use it to raise tax revenues. Salt is used in religious ceremonies and has other cultural and traditional significance. Salt is processed from salt mines, and by the evaporation of seawater (sea salt) and mineral-rich spring water in shallow pools. The greatest single use for salt (sodium chloride) is as a feedstock for the production of chemicals. It is used to produce caustic soda and chlorine; it is also used in the manufacturing processes of polyvinyl chloride, plastics, paper pulp and many other products. Of the annual global production of around three hundred million tonnes of salt, only a small percentage is used for human consumption. Other uses include water conditioning processes, de-icing highways, and agricultural use. Edible salt is sold in forms such as sea s
https://en.wikipedia.org/wiki/Reverse%20proxy
In computer networks, a reverse proxy is an application that sits in front of back-end applications and forwards client (e.g. browser) requests to those applications. Reverse proxies help increase scalability, performance, resilience and security. The resources returned to the client appear as if they originated from the web server itself. Large websites and content delivery networks use reverse proxies, together with other techniques, to balance the load between internal servers. Reverse proxies can keep a cache of static content, which further reduces the load on these internal servers and the internal network. It is also common for reverse proxies to add features such as compression or TLS encryption to the communication channel between the client and the reverse proxy. Reverse proxies are typically owned or managed by the web service, and they are accessed by clients from the public Internet. In contrast, a forward proxy is typically managed by a client (or their company) who is restricted to a private, internal network, except that the client can ask the forward proxy to retrieve resources from the public Internet on behalf of the client. Reverse proxy servers are implemented in popular open-source web servers such as Apache, Nginx, and Caddy. This software can inspect HTTP headers, which, for example, allows it to present a single IP address to the Internet while relaying requests to different internal servers based on the URL of the HTTP request. Dedicated reverse proxy servers such as the open source software HAProxy and Squid are used by some of the biggest websites on the Internet. Uses Reverse proxies can hide the existence and characteristics of origin servers. This can make it more difficult to determine the actual location of the origin server / website and, for instance, more challenging to initiate legal action such as takedowns or block access to the website, as the IP address of the website may not be immediately apparent. Additionally, th
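As an illustrative sketch only (not how any particular production proxy is implemented), a few lines of Python's standard library are enough to show the basic idea of forwarding client requests to a backend. The backend address and listening port are hypothetical, and error handling, header forwarding, streaming, and non-GET methods are deliberately omitted.

```python
# Hedged sketch of a toy single-threaded reverse proxy: accept a client
# request and fetch the same path from a hypothetical backend application.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKEND = "http://127.0.0.1:8080"   # hypothetical origin server

class ReverseProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Real proxies also forward headers (Host, X-Forwarded-For, ...),
        # handle errors, and support all HTTP methods; this sketch does not.
        with urlopen(BACKEND + self.path) as resp:
            body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "application/octet-stream"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    # Clients talk to port 8000; the backend stays hidden behind the proxy.
    HTTPServer(("0.0.0.0", 8000), ReverseProxyHandler).serve_forever()
```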
https://en.wikipedia.org/wiki/Infinity%20symbol
The infinity symbol () is a mathematical symbol representing the concept of infinity. This symbol is also called a lemniscate, after the lemniscate curves of a similar shape studied in algebraic geometry, or "lazy eight", in the terminology of livestock branding. This symbol was first used mathematically by John Wallis in the 17th century, although it has a longer history of other uses. In mathematics, it often refers to infinite processes (potential infinity) rather than infinite values (actual infinity). It has other related technical meanings, such as the use of long-lasting paper in bookbinding, and has been used for its symbolic value of the infinite in modern mysticism and literature. It is a common element of graphic design, for instance in corporate logos as well as in older designs such as the Métis flag. Both the infinity symbol itself and several variations of the symbol are available in various character encodings. History The lemniscate has been a common decorative motif since ancient times; for instance it is commonly seen on Viking Age combs. The English mathematician John Wallis is credited with introducing the infinity symbol with its mathematical meaning in 1655, in his De sectionibus conicis. Wallis did not explain his choice of this symbol. It has been conjectured to be a variant form of a Roman numeral, but which Roman numeral is unclear. One theory proposes that the infinity symbol was based on the numeral for 100 million, which resembled the same symbol enclosed within a rectangular frame. Another proposes instead that it was based on the notation CIↃ used to represent 1,000. Instead of a Roman numeral, it may alternatively be derived from a variant the lower-case form of omega, the last letter in the Greek alphabet. Perhaps in some cases because of typographic limitations, other symbols resembling the infinity sign have been used for the same meaning. Leonhard Euler used an open letterform more closely resembling a reflected and sidewa
https://en.wikipedia.org/wiki/Bart%20Kosko
Bart Andrew Kosko (born February 7, 1960) is a writer and professor of electrical engineering and law at the University of Southern California (USC). He is a researcher and popularizer of fuzzy logic, neural networks, and noise, and the author of several trade books and textbooks on these and related subjects of machine intelligence. He was awarded the 2022 Donald O. Hebb Award for neural learning by the International Neural Network Society. Personal background Kosko holds bachelor's degrees in philosophy and in economics from USC (1982), a master's degree in applied mathematics from UC San Diego (1983), a PhD in electrical engineering from UC Irvine (1987) under Allen Stubberud, and a J.D. from Concord Law School. He is an attorney licensed in California and federal court, and worked part-time as a law clerk for the Los Angeles District Attorney's Office. Kosko is a political and religious skeptic. He is a contributing editor of the libertarian periodical Liberty, where he has published essays on "Palestinian vouchers". Writing Kosko's most popular book to date was the international best-seller Fuzzy Thinking, about man and machines thinking in shades of gray, and his most recent book was Noise. He has also published short fiction and the cyber-thriller novel Nanotime, about a possible World War III that takes place in two days of the year 2030. The novel's title coins the term "nanotime" to describe the time speed-up that occurs when fast computer chips, rather than slow brains, house minds. Kosko has a minimalist prose style, not even using commas in his book Noise. Research Kosko's technical contributions have been in three main areas: fuzzy logic, neural networks, and noise. In fuzzy logic, he introduced fuzzy cognitive maps, fuzzy subsethood, additive fuzzy systems, fuzzy approximation theorems, optimal fuzzy rules, fuzzy associative memories, various neural-based adaptive fuzzy systems, ratio measures of fuzziness, the shape of fuzzy sets, the conditi
https://en.wikipedia.org/wiki/Fundamental%20pair%20of%20periods
In mathematics, a fundamental pair of periods is an ordered pair of complex numbers that defines a lattice in the complex plane. This type of lattice is the underlying object with which elliptic functions and modular forms are defined. Definition A fundamental pair of periods is a pair of complex numbers such that their ratio is not real. If considered as vectors in , the two are not collinear. The lattice generated by and is This lattice is also sometimes denoted as to make clear that it depends on and It is also sometimes denoted by or or simply by The two generators and are called the lattice basis. The parallelogram with vertices is called the fundamental parallelogram. While a fundamental pair generates a lattice, a lattice does not have any unique fundamental pair; in fact, an infinite number of fundamental pairs correspond to the same lattice. Algebraic properties A number of properties, listed below, can be seen. Equivalence Two pairs of complex numbers and are called equivalent if they generate the same lattice: that is, if No interior points The fundamental parallelogram contains no further lattice points in its interior or boundary. Conversely, any pair of lattice points with this property constitute a fundamental pair, and furthermore, they generate the same lattice. Modular symmetry Two pairs and are equivalent if and only if there exists a matrix with integer entries and and determinant such that that is, so that This matrix belongs to the modular group This equivalence of lattices can be thought of as underlying many of the properties of elliptic functions (especially the Weierstrass elliptic function) and modular forms. Topological properties The abelian group maps the complex plane into the fundamental parallelogram. That is, every point can be written as for integers with a point in the fundamental parallelogram. Since this mapping identifies opposite sides of the parallelogram as being the same, the f
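A small numerical sketch (with arbitrarily chosen periods) of the modular-symmetry statement: transforming a basis by an integer matrix of determinant ±1 yields another basis of the same lattice, which can be checked, for example, by comparing the areas of the two fundamental parallelograms.

```python
# Hedged sketch: acting on a fundamental pair (w1, w2) by an integer matrix
# of determinant +-1 gives another basis of the same lattice. The periods and
# the matrix below are arbitrary illustrative choices.
w1, w2 = 1.0 + 0.0j, 0.5 + 1.2j          # ratio is not real, so this is a basis

a, b, c, d = 2, 1, 1, 1                  # integer matrix, det = 2*1 - 1*1 = 1
u1 = a * w1 + b * w2
u2 = c * w1 + d * w2

# Both pairs generate the same lattice; the fundamental-parallelogram areas
# |Im(conj(w1) * w2)| therefore agree (a determinant of -1 would also work).
area_old = abs((w1.conjugate() * w2).imag)
area_new = abs((u1.conjugate() * u2).imag)
print(area_old, area_new)                # equal up to floating-point error
```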
https://en.wikipedia.org/wiki/Stackelberg%20competition
The Stackelberg leadership model is a strategic game in economics in which the leader firm moves first and then the follower firms move sequentially (hence, it is sometimes described as the "leader-follower game"). It is named after the German economist Heinrich Freiherr von Stackelberg who published Marktform und Gleichgewicht [Market Structure and Equilibrium] in 1934, which described the model. In game theory terms, the players of this game are a leader and a follower and they compete on quantity. The Stackelberg leader is sometimes referred to as the Market Leader. There are some further constraints upon the sustaining of a Stackelberg equilibrium. The leader must know ex ante that the follower observes its action. The follower must have no means of committing to a future non-Stackelberg leader's action and the leader must know this. Indeed, if the 'follower' could commit to a Stackelberg leader action and the 'leader' knew this, the leader's best response would be to play a Stackelberg follower action. Firms may engage in Stackelberg competition if one has some sort of advantage enabling it to move first. More generally, the leader must have commitment power. Moving observably first is the most obvious means of commitment: once the leader has made its move, it cannot undo it—it is committed to that action. Moving first may be possible if the leader was the incumbent monopoly of the industry and the follower is a new entrant. Holding excess capacity is another means of commitment. Subgame perfect Nash equilibrium The Stackelberg model can be solved to find the subgame perfect Nash equilibrium or equilibria (SPNE), i.e. the strategy profile that serves best each player, given the strategies of the other player and that entails every player playing in a Nash equilibrium in every subgame. In very general terms, let the price function for the (duopoly) industry be ; price is simply a function of total (industry) output, so is where the subscript represents t
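A hedged sketch of the backward-induction logic for the textbook special case of linear inverse demand P = a − b(q1 + q2) and a common constant marginal cost c; the parameter values are invented, and the closed-form solution below holds only for this linear case.

```python
# Hedged sketch: backward induction in a Stackelberg duopoly with linear
# inverse demand P = a - b*(q1 + q2) and a common constant marginal cost c.
a, b, c = 100.0, 1.0, 10.0   # illustrative parameters only

def follower_best_response(q1):
    # Follower maximizes (a - b*(q1 + q2) - c) * q2  =>  q2 = (a - c - b*q1) / (2b)
    return (a - c - b * q1) / (2 * b)

def leader_profit(q1):
    q2 = follower_best_response(q1)
    price = a - b * (q1 + q2)
    return (price - c) * q1

# Substituting the follower's reaction into the leader's profit and maximizing
# gives the closed-form leader optimum q1* = (a - c) / (2b) for this case.
q1_star = (a - c) / (2 * b)
q2_star = follower_best_response(q1_star)
print(q1_star, q2_star, leader_profit(q1_star))   # 45.0 22.5 1012.5
```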
https://en.wikipedia.org/wiki/Pinwheel%20%28cryptography%29
In cryptography, a pinwheel was a device for producing a short pseudorandom sequence of bits (determined by the machine's initial settings), as a component in a cipher machine. A pinwheel consisted of a rotating wheel with a certain number of positions on its periphery. Each position had a "pin", "cam" or "lug" which could be either "set" or "unset". As the wheel rotated, each of these pins would in turn affect other parts of the machine, producing a series of "on" or "off" pulses which would repeat after one full rotation of the wheel. If the machine contained more than one wheel, usually their periods would be relatively prime to maximize the combined period. Pinwheels might be turned through a purely mechanical action (as in the M-209) or electromechanically (as in the Lorenz SZ 40/42). Development The Swedish engineer Boris Caesar Wilhelm Hagelin is credited with having invented the first pinwheel device in 1925. He developed the machine while employed by Emanuel Nobel to oversee the Nobel interests in Aktiebolaget Cryptograph; Emanuel Nobel was a nephew of Alfred Nobel, the founder of the Nobel Prize. The device was later introduced in France and Hagelin was awarded the French order of merit, the Légion d'honneur, for his work. One of the earliest cipher machines that Hagelin developed was the C-38, which was later improved into the more portable Hagelin M-209. The M-209 is composed of a set of pinwheels and a rotating cage. Other cipher machines which used pinwheels include the C-52, the CD-57 and the Siemens and Halske T52. Pinwheels can be viewed as a predecessor to the electronic linear-feedback shift register (LFSR), used in later cryptosystems. See also Rotor machine
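A toy simulation (not a model of any specific machine) of the pinwheel idea: each wheel is a fixed ring of set/unset pins stepped once per output bit, the wheel lengths are pairwise coprime, and the wheels' active pins are combined, here simply by XOR, so the keystream repeats only after the least common multiple of the periods.

```python
# Hedged sketch of the pinwheel principle. Pin patterns are arbitrary, and
# real machines combined wheel outputs in machine-specific ways; XOR is used
# here only for illustration.
from math import lcm

wheels = [
    [1, 0, 1],                # period 3
    [0, 1, 1, 0, 1],          # period 5
    [1, 1, 0, 0, 1, 0, 1],    # period 7
]

def keystream(wheels, length):
    bits = []
    for step in range(length):
        bit = 0
        for wheel in wheels:
            bit ^= wheel[step % len(wheel)]   # active pin of each wheel
        bits.append(bit)
    return bits

print(lcm(*(len(w) for w in wheels)))   # 105: combined period of this toy machine
print(keystream(wheels, 12))
```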
https://en.wikipedia.org/wiki/Fecal%20impaction
A fecal impaction or an impacted bowel is a solid, immobile bulk of feces that can develop in the rectum as a result of chronic constipation (a related term is fecal loading which refers to a large volume of stool in the rectum of any consistency). Fecal impaction is a common result of neurogenic bowel dysfunction and causes immense discomfort and pain. Its treatment includes laxatives, enemas, and pulsed irrigation evacuation (PIE) as well as digital removal. It is not a condition that resolves without direct treatment. Signs and symptoms Symptoms of a fecal impaction include the following: Chronic constipation Fecal incontinence-- paradoxical overflow diarrhea (encopresis) as a result of liquid stool passing around the obstruction Abdominal pain and bloating Loss of appetite Complications may include necrosis and ulcers of the rectal tissue, which if untreated can cause death. Causes There are many possible causes; these include a long period of physical inactivity, failure to consume adequate dietary fiber, dehydration, and deliberate retention of fecal matter. Medications such as fentanyl, buprenorphine, methadone, codeine, oxycodone, hydrocodone, morphine, and hydromorphone as well as certain sedatives that reduce intestinal movement may cause fecal matter to become too large, hard and/or dry to expel. Specific conditions, such as irritable bowel syndrome, certain neurological disorders, paralytic ileus, gastroparesis, diabetes, enlarged prostate gland, distended colon, an ingested foreign object, inflammatory bowel diseases such as Crohn's disease and colitis, and autoimmune diseases such as amyloidosis, celiac disease, lupus, and scleroderma can cause a fecal impaction. Hypothyroidism can also cause chronic constipation because of sluggish, slower, or weaker colon contractions. Iron supplements or increased blood calcium levels are also potential causes. Spinal cord injury is a common cause of constipation, due to ileus. Prevention Reducing opiat
https://en.wikipedia.org/wiki/Smith%E2%80%93Waterman%20algorithm
The Smith–Waterman algorithm performs local sequence alignment; that is, for determining similar regions between two strings of nucleic acid sequences or protein sequences. Instead of looking at the entire sequence, the Smith–Waterman algorithm compares segments of all possible lengths and optimizes the similarity measure. The algorithm was first proposed by Temple F. Smith and Michael S. Waterman in 1981. Like the Needleman–Wunsch algorithm, of which it is a variation, Smith–Waterman is a dynamic programming algorithm. As such, it has the desirable property that it is guaranteed to find the optimal local alignment with respect to the scoring system being used (which includes the substitution matrix and the gap-scoring scheme). The main difference to the Needleman–Wunsch algorithm is that negative scoring matrix cells are set to zero, which renders the (thus positively scoring) local alignments visible. Traceback procedure starts at the highest scoring matrix cell and proceeds until a cell with score zero is encountered, yielding the highest scoring local alignment. Because of its cubic time complexity, it often cannot be practically applied to large-scale problems and is replaced in favor of computationally more efficient alternatives such as (Gotoh, 1982), (Altschul and Erickson, 1986), and (Myers and Miller, 1988). History In 1970, Saul B. Needleman and Christian D. Wunsch proposed a heuristic homology algorithm for sequence alignment, also referred to as the Needleman–Wunsch algorithm. It is a global alignment algorithm that requires calculation steps ( and are the lengths of the two sequences being aligned). It uses the iterative calculation of a matrix for the purpose of showing global alignment. In the following decade, Sankoff, Reichert, Beyer and others formulated alternative heuristic algorithms for analyzing gene sequences. Sellers introduced a system for measuring sequence distances. In 1976, Waterman et al. added the concept of gaps into the origina
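A compact sketch of the Smith–Waterman recurrence and traceback with a simple match/mismatch score and a linear gap penalty; the scoring values are illustrative rather than those of the original 1981 paper.

```python
# Hedged sketch: basic Smith-Waterman local alignment. Cells are floored at
# zero, and traceback starts from the highest-scoring cell and stops at zero.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)

    # Traceback from the highest-scoring cell until a zero cell is reached.
    i, j = best_pos
    aligned_a, aligned_b = "", ""
    while i > 0 and j > 0 and H[i][j] > 0:
        if H[i][j] == H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            aligned_a, aligned_b = a[i-1] + aligned_a, b[j-1] + aligned_b
            i, j = i - 1, j - 1
        elif H[i][j] == H[i-1][j] + gap:
            aligned_a, aligned_b = a[i-1] + aligned_a, "-" + aligned_b
            i -= 1
        else:
            aligned_a, aligned_b = "-" + aligned_a, b[j-1] + aligned_b
            j -= 1
    return best, aligned_a, aligned_b

print(smith_waterman("GGTTGACTA", "TGTTACGG"))
```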
https://en.wikipedia.org/wiki/Rapid%20plant%20movement
Rapid plant movement encompasses movement in plant structures occurring over a very short period, usually under one second. For example, the Venus flytrap closes its trap in about 100 milliseconds. The traps of Utricularia are much faster, closing in about 0.5 milliseconds. The dogwood bunchberry's flower opens its petals and fires pollen in less than 0.5 milliseconds. The record is currently held by the white mulberry tree, with flower movement taking 25 microseconds, as pollen is catapulted from the stamens at velocities in excess of half the speed of sound—near the theoretical physical limits for movements in plants. These rapid plant movements differ from the more common, but much slower "growth-movements" of plants, called tropisms. Tropisms encompass movements that lead to physical, permanent alterations of the plant while rapid plant movements are usually reversible or occur over a shorter span of time. A variety of mechanisms are employed by plants in order to achieve these fast movements. Extremely fast movements such as the explosive spore dispersal techniques of Sphagnum mosses may involve increasing internal pressure via dehydration, causing a sudden propulsion of spores up or through the rapid opening of the "flower" opening triggered by insect pollination. Fast movement can also be demonstrated in predatory plants, where the mechanical stimulation of insect movement creates an electrical action potential and a release of elastic energy within the plant tissues. This release can be seen in the closing of a Venus flytrap, the curling of sundew leaves, and in the trapdoor action and suction of bladderworts. Slower movement, such as the folding of Mimosa pudica leaves, may depend on reversible, but drastic or uneven changes in water pressure in the plant tissues This process is controlled by the fluctuation of ions in and out of the cell, and the osmotic response of water to the ion flux. In 1880 Charles Darwin published The Power of Movement in Plants,
https://en.wikipedia.org/wiki/Kroll%20process
The Kroll process is a pyrometallurgical industrial process used to produce metallic titanium from titanium tetrachloride. The Kroll process replaced the Hunter process for almost all commercial production. Process In the Kroll process, the TiCl4 is reduced by liquid magnesium to give titanium metal: TiCl4 + 2 Mg → Ti + 2 MgCl2 (at about 825 °C). The reduction is conducted at 800–850 °C in a stainless steel retort. Complications result from partial reduction of the TiCl4, giving the lower chlorides TiCl2 and TiCl3. The MgCl2 can be further refined back to magnesium. The resulting porous metallic titanium sponge is purified by leaching or vacuum distillation. The sponge is crushed and pressed before it is melted in a consumable carbon electrode vacuum arc furnace. The melted ingot is allowed to solidify under vacuum. It is often remelted to remove inclusions and ensure uniformity. These melting steps add to the cost of the product. Titanium is about six times as expensive as stainless steel. In the earlier Hunter process, which ceased to be commercial in the 1990s, the TiCl4 from the chloride process is reduced to the metal by sodium. History and subsequent developments The Kroll process was invented in 1940 by William J. Kroll in Luxembourg. After moving to the United States, Kroll further developed the method for the production of zirconium. Many methods had been applied to the production of titanium metal, beginning with a report in 1887 by Nilsen and Pettersen using sodium, which was optimized into the commercial Hunter process. In the 1920s van Arkel had described the thermal decomposition of titanium tetraiodide to give highly pure titanium. Titanium tetrachloride was found to reduce with hydrogen at high temperatures to give hydrides that can be thermally processed to the pure metal. With this background, Kroll developed both new reductants and new apparatus for the reduction of titanium tetrachloride. Its high reactivity toward trace amounts of water a
https://en.wikipedia.org/wiki/Semiconductor%20Research%20Corporation
Semiconductor Research Corporation (SRC), commonly known as SRC, is a high-technology research consortium active in the semiconductor industry. It is a leading semiconductor research consortium. Todd Younkin is the incumbent president and chief executive officer of the company. The consortium comprises more than twenty-five companies and government agencies with more than a hundred universities under contract performing research. History SRC was founded in 1982 as a consortium to fund research by semiconductor companies. In the past, it has funded university research projects in hardware and software co-design, new architectures, circuit design, transistors, memories, interconnects, and materials and has sponsored over 15,000 PhD students. Research SRC has funded research in areas such as automotive, advanced memory technologies, logic and processing, advanced packaging, edge intelligence, and communications. Programs Joint University Microelectronics Program It is a long-term research program, in collaboration with DARPA and industry, that focuses on energy-efficient electronics, including actuation and sensing, signal processing, computing, and intelligent storage. Global Research Collaboration Program It is an industry-led international research program with eight sub-topics including artificial intelligence hardware; analog mixed-signal circuits; computer-aided design and test; environment safety and health; hardware security; logic and memory devices; nanomanufacturing materials and processes; and packaging. Undergraduate Research Program Undergraduate Research Program (URP) is a one-year academic program for students. The program combines an undergraduate research curriculum with industry experience. The program helps sustain a high retention rate of students who are interested in semiconductor research. It gives networking and job opportunities through an annual event. Recognition In 2005, SRC received the National Medal of Technology and Innov
https://en.wikipedia.org/wiki/Coast%20Guard%20One
Coast Guard One is the call sign of any United States Coast Guard aircraft carrying the president of the United States. Similarly, any Coast Guard aircraft carrying the vice president is designated Coast Guard Two. , there has never been a Coast Guard One flight. Coast Guard Two was activated on September 25, 2009, when then-Vice President Joe Biden took a flight on CG 6019, an HH-60 Jayhawk helicopter, over the recently flooded Atlanta area. Other executive travel The Commandant of the Coast Guard often travels aboard a Gulfstream C-37A aircraft whose standard callsign is "Coast Guard Zero One". The aircraft is stationed at Coast Guard Station Washington, D.C. See also Transportation of the president of the United States
https://en.wikipedia.org/wiki/FM%20broadcasting
FM broadcasting is a method of radio broadcasting that uses frequency modulation (FM) of the radio broadcast carrier wave. Invented in 1933 by American engineer Edwin Armstrong, wide-band FM is used worldwide to transmit high-fidelity sound over broadcast radio. FM broadcasting offers higher fidelity—more accurate reproduction of the original program sound—than other broadcasting techniques, such as AM broadcasting. It is also less susceptible to common forms of interference, having less static and popping sounds than are often heard on AM. Therefore, FM is used for most broadcasts of music and general audio (in the audio spectrum). FM radio stations use the very high frequency range of radio frequencies. Broadcast bands Throughout the world, the FM broadcast band falls within the VHF part of the radio spectrum. Usually 87.5 to 108.0 MHz is used, or some portion of it, with few exceptions: In the former Soviet republics, and some former Eastern Bloc countries, the older 65.8–74 MHz band is also used. Assigned frequencies are at intervals of 30 kHz. This band, sometimes referred to as the OIRT band, is slowly phased out. Where the OIRT band is used, the 87.5–108.0 MHz band is referred to as the CCIR band. In Japan, the band 76–95 MHz is used. In Brazil, until the late 2010s, FM broadcast stations only used the 88-108 MHz Band, but with the phasing out of analog television, the 76-88 MHz band (old band channels 5 and 6 in VHF television) are allocated for old local MW stations who have moved to FM in agreement with ANATEL. The frequency of an FM broadcast station (more strictly its assigned nominal center frequency) is usually a multiple of 100 kHz. In most of South Korea, the Americas, the Philippines, and the Caribbean, only odd multiples are used. Some other countries follow this plan because of the import of vehicles, principally from the United States, with radios that can only tune to these frequencies. In some parts of Europe, Greenland, and Africa, only
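A one-line sketch of the channel plan described for the Americas, where center frequencies are odd multiples of 100 kHz (i.e. 200 kHz apart) from 88.1 to 107.9 MHz:

```python
# Hedged sketch: enumerate the FM channel centers used in the Americas,
# odd multiples of 100 kHz spaced 200 kHz apart across 88.1-107.9 MHz.
channels_mhz = [round(88.1 + 0.2 * k, 1) for k in range(100)]
print(len(channels_mhz), channels_mhz[0], channels_mhz[-1])   # 100 88.1 107.9
```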
https://en.wikipedia.org/wiki/History%20of%20ecology
Ecology is a new science and considered as an important branch of biological science, having only become prominent during the second half of the 20th century. Ecological thought is derivative of established currents in philosophy, particularly from ethics and politics. Its history stems all the way back to the 4th century. One of the first ecologists whose writings survive may have been Aristotle or perhaps his student, Theophrastus, both of whom had interest in many species of animals and plants. Theophrastus described interrelationships between animals and their environment as early as the 4th century BC. Ecology developed substantially in the 18th and 19th century. It began with Carl Linnaeus and his work with the economy of nature. Soon after came Alexander von Humboldt and his work with botanical geography. Alexander von Humboldt and Karl Möbius then contributed with the notion of biocoenosis. Eugenius Warming's work with ecological plant geography led to the founding of ecology as a discipline. Charles Darwin's work also contributed to the science of ecology, and Darwin is often attributed with progressing the discipline more than anyone else in its young history. Ecological thought expanded even more in the early 20th century. Major contributions included: Eduard Suess’ and Vladimir Vernadsky's work with the biosphere, Arthur Tansley's ecosystem, Charles Elton's Animal Ecology, and Henry Cowles ecological succession. Ecology influenced the social sciences and humanities. Human ecology began in the early 20th century and it recognized humans as an ecological factor. Later James Lovelock advanced views on earth as a macro-organism with the Gaia hypothesis. Conservation stemmed from the science of ecology. Important figures and movements include Shelford and the ESA, National Environmental Policy act, George Perkins Marsh, Theodore Roosevelt, Stephen A. Forbes, and post-Dust Bowl conservation. Later in the 20th century world governments collaborated on man’s e
https://en.wikipedia.org/wiki/Postulates%20of%20special%20relativity
In physics, Albert Einstein derived the theory of special relativity in 1905 from principles now called the postulates of special relativity. Einstein's formulation is said to only require two postulates, though his derivation implies a few more assumptions. The idea that special relativity depended only on two postulates, both of which seemed to follow from the theory and experiment of the day, was one of the most compelling arguments for the correctness of the theory (Einstein 1912: "This theory is correct to the extent to which the two principles upon which it is based are correct. Since these seem to be correct to a great extent, ...") Postulates of special relativity 1. First postulate (principle of relativity) The laws of physics take the same form in all inertial frames of reference. 2. Second postulate (invariance of c) As measured in any inertial frame of reference, light is always propagated in empty space with a definite velocity c that is independent of the state of motion of the emitting body. Or: the speed of light in free space has the same value c in all inertial frames of reference. The two-postulate basis for special relativity is the one historically used by Einstein, and it is sometimes the starting point today. As Einstein himself later acknowledged, the derivation of the Lorentz transformation tacitly makes use of some additional assumptions, including spatial homogeneity, isotropy, and memorylessness. Also Hermann Minkowski implicitly used both postulates when he introduced the Minkowski space formulation, even though he showed that c can be seen as a space-time constant, and the identification with the speed of light is derived from optics. Alternative derivations of special relativity Historically, Hendrik Lorentz and Henri Poincaré (1892–1905) derived the Lorentz transformation from Maxwell's equations, which served to explain the negative result of all aether drift measurements. By that the luminiferous aether becomes unde
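For reference, the transformation that the two postulates (together with the homogeneity, isotropy, and memorylessness assumptions mentioned above) single out is the standard Lorentz boost; written for relative motion with speed v along the x-axis it reads:

```latex
% Standard Lorentz boost along the x-axis, with \gamma = 1/\sqrt{1 - v^2/c^2}.
\begin{aligned}
t' &= \gamma \left( t - \frac{v x}{c^{2}} \right), \\
x' &= \gamma \left( x - v t \right), \\
y' &= y, \qquad z' = z.
\end{aligned}
```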
https://en.wikipedia.org/wiki/Experimenter%27s%20regress
In science, experimenter's regress refers to a loop of dependence between theory and evidence. In order to judge whether evidence is erroneous we must rely on theory-based expectations, and to judge the value of competing theories we rely on evidence. Cognitive bias affects experiments, and experiments determine which theory is valid. This issue is particularly important in new fields of science where there is no community consensus regarding the relative values of various competing theories, and where sources of experimental error are not well known. In a true scientific process, no consensus exists and no consensus can exist, as the process is conducted scientifically in the pursuit of knowledge. If any party involved in the process stands to personally lose or gain from the result, the process will be flawed and unscientific. In a true scientific process, a theory is formed after a scientist - amateur or professional - has observed a phenomenon and has asked "why?" as a result. The theory is the answer the scientist creates using logic and reason to explain the phenomenon. The scientist then focuses on how to conduct experiments to test the theory incrementally, and the theory is either proven to be true or false through repeatable and legitimate experimentation. Legitimate scientific experiments conducted by the person who formulated the theory seek to prove the theory false rather than prove it true, specifically to counter the effects of bias. If experimenter's regress acts as a positive feedback system, it can be a source of pathological science. An experimenter's strong belief in a new theory produces confirmation bias, and any biased evidence they obtain then strengthens their belief in that particular theory. Neither individual researchers nor entire scientific communities are immune to this effect: see N-rays and polywater. Experimenter's regress is a typical relativistic phenomenon in the Empirical Programme of Relativism (EPOR). EPOR is very much co
https://en.wikipedia.org/wiki/Isogamy
Isogamy is a form of sexual reproduction that involves gametes of the same morphology (indistinguishable in shape and size), found in most unicellular eukaryotes. Because both gametes look alike, they generally cannot be classified as male or female. Instead, organisms undergoing isogamy are said to have different mating types, most commonly noted as "+" and "−" strains. Etymology The literal meaning of isogamy is "equal marriage" which refers to equal contribution of resources by both gametes to a zygote. The term isogamous was first used in the year 1887. Characteristics of isogamous species Isogamous species often have two mating types. Some isogamous species have more than two mating types, but the number is usually lower than ten. In some extremely rare cases a species can have thousands of mating types. In all cases, fertilization occurs when gametes of two different mating types fuse to form a zygote. Evolution It is generally accepted that isogamy is an ancestral state for anisogamy and that isogamy was the first stage in the evolution of sexual reproduction. Isogamous reproduction evolved independently in several lineages of plants and animals to anisogamous species with gametes of male and female types and subsequently to oogamous species in which the female gamete is much larger than the male and has no ability to move. This pattern may have been driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction. Isogamy is the norm in unicellular eukaryote species, although it is possible that isogamy is evolutionarily stable in multicellular species. Occurrence Almost all unicellular eukaryotes are isogamous. Among multicellular organisms, isogamy is restricted to fungi such as baker's yeast and algae. Many species of green algae are isogamous. It is typical in the genera Ulva, Hydrodictyon, Tetraspora, Zygnema, Spirogyra, Ulothrix, and Chlamydomonas. Many fungi are isogamous. See also
https://en.wikipedia.org/wiki/Mass%20General%20Brigham
Mass General Brigham is a not-for-profit, integrated health care system that is a national leader in medical research, teaching, and patient care. It is the largest hospital-based research enterprise in the United States, with annual funding of more than $2 billion. The system's annual revenue was nearly $18 billion in 2022. It is also an educational institution, founded by Brigham and Women's Hospital and Massachusetts General Hospital. The system provides clinical care through two academic hospitals, three specialty hospitals, seven community hospitals, home care services, a health insurance plan, and a robust network of specialty practices, urgent care facilities, and outpatient clinics/surgical centers. It is the largest private employer in Massachusetts. In 2023, the system reported that from 2017–2021 its overall economic impact was $53.4 billion – more than the annual state budget. History Mass General Brigham was founded by the renowned academic medical centers (AMCs) which give it its name: Massachusetts General Hospital (colloquially referred to as "Mass General") and Brigham and Women's Hospital ("the Brigham"). Both hospitals were founded in the early 1800s, are based in Boston, and are among the nation's leading academic medical centers – including serving as major teaching hospitals of Harvard Medical School. Many of the AMC's doctors not only teach at Harvard, but are also often among the most respected leaders in their fields. For most of their history, Mass General and the Brigham have been fierce rivals. But in 1994, fueled by economic and political pressure to cut costs on patient care and health care education, the two hospitals merged to create a new parent corporation: Partners Healthcare. The two entities continued to operate largely independently, and remained competitors in multiple areas, until 2019. In 2015, Partners became one of the first health care systems in the nation to launch an electronic health record (EHR) system, allowing do
https://en.wikipedia.org/wiki/WJTC
WJTC (channel 44) is an independent television station licensed to Pensacola, Florida, United States, serving northwest Florida and southwest Alabama. It is owned by Deerfield Media alongside Mobile, Alabama–licensed NBC affiliate WPMI-TV (channel 15); Deerfield maintains a local marketing agreement (LMA) with Sinclair Broadcast Group, owner of Pensacola-licensed ABC affiliate WEAR-TV (channel 3) and Fort Walton Beach–licensed MyNetworkTV affiliate WFGX (channel 35), for the provision of certain services. WJTC and WPMI-TV share studios on Azalea Road (near I-10) in Mobile; master control and some internal operations are based at the shared facilities of WEAR-TV and WFGX on Mobile Highway (US 90) in unincorporated Escambia County, Florida (with a Pensacola mailing address). WJTC's transmitter is located near Robertsdale, Alabama. History WJTC signed on the air on December 24, 1984 as an independent station; the station was founded by its original owners, the Mercury Broadcasting Company. The station—which was branded as "C44"—maintained a general entertainment programming format consisting of old movies, cartoons, westerns, dramas, and a few classic sitcoms. In 1991, Clear Channel Communications, then-owner of WPMI, entered into a local marketing agreement with WJTC. Mercury Broadcasting retained the license, but Clear Channel owned local rights to programming broadcast on the station. WJTC affiliated with the United Paramount Network (UPN) upon its debut on January 16, 1995. Clear Channel acquired WJTC outright in 2001, creating a duopoly with WPMI. On January 24, 2006, CBS Corporation and Time Warner announced the shutdown of both UPN and The WB effective that fall. A new "fifth" network—"The CW" (named after its corporate parents), to be jointly owned by both companies, would launch in their place, with a lineup primarily featuring the most popular programs from both networks. On February 22, 2006, News Corporation announced that it would launch a network of i
https://en.wikipedia.org/wiki/Paned%20window%20%28computing%29
A paned window is a window in a graphical user interface that has multiple parts, layers, or sections. Examples of this include a code browser in a typical integrated development environment; a file browser with multiple panels; a tiling window manager; or a web page that contains multiple frames. Simple console applications use an edit pane for accepting input and an output pane for displaying output. The term task pane is used by Microsoft to identify any area cordoned off from the main screen area of an application and used for a specific function, such as changing the displayed font in a word processor. Three-pane interface A three-pane interface is a category of graphical user interface in which the screen or window is divided into three panes displaying information. This information typically falls into a hierarchical relationship of master-detail with an embedded inspector window. Microsoft's Outlook Express email client popularized a mailboxes / mailbox contents / email text layout that became the norm until web-based user interfaces rose in popularity during the mid-2000s. Even today, many webmail scripts emulate this interface style.
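A minimal sketch of a two-pane layout using Tk's PanedWindow widget; the pane contents and labels are arbitrary illustrations, not tied to any particular application.

```python
# Hedged sketch: a two-pane window built with Tk's PanedWindow widget,
# roughly mirroring a "navigation pane plus content pane" layout.
import tkinter as tk

root = tk.Tk()
root.title("Paned window sketch")

paned = tk.PanedWindow(root, orient=tk.HORIZONTAL, sashwidth=6)
paned.pack(fill=tk.BOTH, expand=True)

navigation = tk.Listbox(paned)          # left pane: a master list
for item in ("Inbox", "Sent", "Drafts"):
    navigation.insert(tk.END, item)

content = tk.Text(paned)                # right pane: detail view
content.insert("1.0", "Message text would be shown here.")

paned.add(navigation)
paned.add(content)

root.mainloop()
```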
https://en.wikipedia.org/wiki/Tests%20of%20special%20relativity
Special relativity is a physical theory that plays a fundamental role in the description of all physical phenomena, as long as gravitation is not significant. Many experiments played (and still play) an important role in its development and justification. The strength of the theory lies in its unique ability to correctly predict to high precision the outcome of an extremely diverse range of experiments. Repeats of many of those experiments are still being conducted with steadily increased precision, with modern experiments focusing on effects such as at the Planck scale and in the neutrino sector. Their results are consistent with the predictions of special relativity. Collections of various tests were given by Jakob Laub, Zhang, Mattingly, Clifford Will, and Roberts/Schleif. Special relativity is restricted to flat spacetime, i.e., to all phenomena without significant influence of gravitation. The latter lies in the domain of general relativity and the corresponding tests of general relativity must be considered. Experiments paving the way to relativity The predominant theory of light in the 19th century was that of the luminiferous aether, a stationary medium in which light propagates in a manner analogous to the way sound propagates through air. By analogy, it follows that the speed of light is constant in all directions in the aether and is independent of the velocity of the source. Thus an observer moving relative to the aether must measure some sort of "aether wind" even as an observer moving relative to air measures an apparent wind. First-order experiments Beginning with the work of François Arago (1810), a series of optical experiments had been conducted, which should have given a positive result for magnitudes of first order in (i.e., of ) and which thus should have demonstrated the relative motion of the aether. Yet the results were negative. An explanation was provided by Augustin Fresnel (1818) with the introduction of an auxiliary hypothesis, th
https://en.wikipedia.org/wiki/Concanavalin%20A
Concanavalin A (ConA) is a lectin (carbohydrate-binding protein) originally extracted from the jack-bean (Canavalia ensiformis). It is a member of the legume lectin family. It binds specifically to certain structures found in various sugars, glycoproteins, and glycolipids, mainly internal and nonreducing terminal α-D-mannosyl and α-D-glucosyl groups. Its physiological function in plants, however, is still unknown. ConA is a plant mitogen, and is known for its ability to stimulate mouse T-cell subsets giving rise to four functionally distinct T cell populations, including precursors to regulatory T cells; a subset of human suppressor T-cells is also sensitive to ConA. ConA was the first lectin to be available on a commercial basis, and is widely used in biology and biochemistry to characterize glycoproteins and other sugar-containing entities on the surface of various cells. It is also used to purify glycosylated macromolecules in lectin affinity chromatography, as well as to study immune regulation by various immune cells. Structure and properties Like most lectins, ConA is a homotetramer: each sub-unit (26.5 kDa, 235 amino acids, heavily glycated) binds a metal ion (usually Mn2+) and a Ca2+ ion. It has D2 symmetry. Its tertiary structure has been elucidated, and the molecular basis of its interactions with metals, as well as its affinity for the sugars mannose and glucose, is well known. ConA binds specifically α-D-mannosyl and α-D-glucosyl residues (two hexoses differing only in the alcohol on carbon 2) in the terminal position of branched structures from B-Glycans (rich in α-mannose, or hybrid and bi-antennary glycan complexes). It has 4 binding sites, corresponding to the 4 sub-units. The molecular weight is 104-112 kDa and the isoelectric point (pI) is in the range of 4.5-5.5. ConA can also initiate cell division (mitogenesis), primarily acting on T-lymphocytes, by stimulating their energy metabolism within seconds of exposure. Maturation process ConA a
https://en.wikipedia.org/wiki/Thoughtworks
Thoughtworks is a publicly-traded, global technology company with 49 offices in 18 countries. It provides software design and delivery, and tools and consulting services. The company is closely associated with the movement for agile software development, and has contributed to open source products. Thoughtworks' business includes Digital Product Development Services, Digital Experience and Distributed Agile software development. History 1980s to 1990s In the late 1980s, Roy Singham founded Singham Business Services as a management consulting company servicing the equipment leasing industry in a Chicago basement. According to Singham, after two-to-three years, Singham started recruiting additional staff and came up with the name Thoughtworks in 1990. The company was incorporated under the new name in 1993 and focused on building software applications. Over time, Thoughtworks' technology shifted from C++ and Forte 4GL in the mid-1990s to include Java in the late 1990s. 1990s–2010s Martin Fowler joined the company in 1999 and became its chief scientist in 2000. In 2001, Thoughtworks agreed to settle a lawsuit by Microsoft for $480,000 for deploying unlicensed copies of office productivity software to employees. Also in 2001, Fowler, Jim Highsmith, and other key software figures authored the Agile Manifesto. The company began using agile techniques while working on a leasing project. Thoughtworks' technical expertise expanded with the .NET Framework in 2002, C# in 2004, Ruby and the Rails platform in 2006. In 2002, Thoughtworks chief scientist Martin Fowler wrote "Patterns of Enterprise Application Architecture" with contributions by ThoughtWorkers David Rice and Matthew Foemmel, as well as outside contributors Edward Hieatt, Robert Mee, and Randy Stafford. Thoughtworks Studios was launched as its product division in 2006 and shut down in 2020. The division created, supported and sold agile project management and software development and deployment tools includi
https://en.wikipedia.org/wiki/Phoenix%20Labs%20%28software%29
Phoenix Labs (formerly Methlabs) was a software development community founded by Tim Leonard and Ken McClelland and best known for PeerGuardian, an open-source software program optimized for use as a personal firewall on file sharing networks. History The group was originally organized to work on PeerGuardian, a program Leonard created in 2003 in response to the shutdown of the Audiogalaxy file sharing site. The name Methlabs came from Leonard's handle, 'method'. In late 2004, the group added Blocklist.org, a website designed to allow users to interactively manage and block the IP addresses of certain organisations and companies. Phoenix Labs was formed in September 2005 following a dispute between members of the Methlabs team. According to a Phoenix Labs press release, a "series of threats and incidents" forced most of the group out of the methlabs.org website. The press release attributed this action to a team member with responsibility for finances and server control. Based on information from other participants, internet news sources identified this person as using the nickname "Cerberius", but Cerberius reportedly disputed their charges of domain hijacking and described the situation as a "revolt". Method and a few members of the group decided they could not get the methlabs.com domain back, and took suggestions for a new name on their IRC channel. A user named 'bizzyb0t' came up with the name "phoenixlabs.org" to represent the group being reborn, "rising from the ashes". Products PeerGuardian and PeerGuardian 2 are free and open source programs capable of blocking incoming and outgoing connections based on IP blocklists. Their purpose is to block several organizations, including the RIAA and MPAA, while using file sharing networks such as FastTrack and BitTorrent. The system is also capable of blocking advertising, spyware, government and educational ranges, depending upon user preferences. Other programs developed by Phoenix Labs are XS, a file-sharing hub and client
https://en.wikipedia.org/wiki/PeerGuardian
PeerGuardian is a free and open source program developed by Phoenix Labs (software). It is capable of blocking incoming and outgoing connections based on IP blacklists. The aim of its use was to block peers on the same torrent download from any visibility of your own peer connection using IP lists. The system is also capable of blocking custom ranges, depending upon user preferences. The Windows version of this program has been discontinued in favor of other applications (Phoenix Labs encourage current PeerGuardian users to migrate to PeerBlock which is based on PeerGuardian 2). History Development on PeerGuardian started in late 2002, led by programmer Tim Leonard. The first public version was released in 2003, at a time when the music industry started to sue individual file sharing users (a change from its previous stance that it would not target consumers with copyright infringement lawsuits). Version 1 The original PeerGuardian (1.0) was programmed in Visual Basic and quickly became popular among P2P users despite blocking only the common TCP protocol and being known for high RAM and CPU usage when connected to P2P networks. By December 2003, it had been downloaded 1 million times. The original version was released for free and the source code was made available under an open source license. Due to Version 1.0 only blocking TCP ports PeerGuardian.net then shifted to bluetack.co.uk where Protowall, The blocklist Manager, B.I.M.S and the Hosts Manager were developed. Version 2 After 7 months of development, in February 2005 Version 2 of PeerGuardian was released as a beta. The development of version 2.0 was led by Cory Nelson, and aimed to resolve many of the shortcomings of Version 1. Version 2 enabled support for more protocols (TCP, UDP, ICMP, etc.), multiple block lists, and automatic updates. The installation procedure was also simplified, no longer requiring a system restart and driver installation. Speed and resource inefficiencies were fixed by re-
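A minimal sketch (my own illustration, not PeerGuardian's actual code) of the IP-blocklist filtering idea described in the entry above, using Python's standard ipaddress module: the remote address of a connection is checked against a set of blocked ranges before the connection is allowed. The blocklist entries here are hypothetical documentation ranges.

```python
# Minimal sketch (my own, not PeerGuardian's code) of IP-blocklist filtering.
import ipaddress

# Hypothetical blocklist entries; real lists contain many thousands of ranges.
BLOCKLIST = [ipaddress.ip_network(cidr) for cidr in (
    "203.0.113.0/24",      # documentation range standing in for a blocked organisation
    "198.51.100.0/25",
)]

def is_blocked(remote_ip: str) -> bool:
    """Return True if the remote address falls inside any blocked range."""
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in BLOCKLIST)

for ip in ("203.0.113.7", "192.0.2.1"):
    print(ip, "blocked" if is_blocked(ip) else "allowed")
```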
https://en.wikipedia.org/wiki/Signedness
In computing, signedness is a property of data types representing numbers in computer programs. A numeric variable is signed if it can represent both positive and negative numbers, and unsigned if it can only represent non-negative numbers (zero or positive numbers). As signed numbers can represent negative numbers, they lose a range of positive numbers that can only be represented with unsigned numbers of the same size (in bits), because roughly half the possible values are non-positive values, whereas the respective unsigned type can dedicate all the possible values to the positive number range. For example, a two's complement signed 16-bit integer can hold the values −32768 to 32767 inclusively, while an unsigned 16-bit integer can hold the values 0 to 65535. For this sign representation method, the leftmost bit (most significant bit) denotes whether the value is negative (0 for positive or zero, 1 for negative). In programming languages For most architectures, there is no signed–unsigned type distinction in the machine language. Nevertheless, arithmetic instructions usually set different CPU flags such as the carry flag for unsigned arithmetic and the overflow flag for signed. Those values can be taken into account by subsequent branch or arithmetic commands. The C programming language, along with its derivatives, implements signedness for all integer data types, as well as for "character". For integers, the unsigned modifier defines the type to be unsigned. The default integer signedness is signed, but can be set explicitly with the signed modifier. By contrast, the C standard declares char, signed char, and unsigned char to be three distinct types, but specifies that all three must have the same size and alignment. Further, char must have the same numeric range as either signed char or unsigned char, but the choice of which depends on the platform. Integer literals can be made unsigned with the U suffix. For example, gives −1, but gives 4,294,967,295 for 32-bit code. Compilers often issue a warning when comparisons are made b
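The 16-bit ranges quoted in the entry above can be demonstrated by reinterpreting a single bit pattern both ways; the following small Python sketch (my own illustration, not from the article) uses the standard struct module to read the same sixteen bits once as unsigned and once as two's complement signed.

```python
# Illustration (not from the article): reinterpreting the same 16-bit pattern
# as signed vs. unsigned, using Python's struct module.
import struct

bits = 0xFFFF  # all sixteen bits set

# Pack the pattern as an unsigned 16-bit integer ("H"), then unpack it
# once as unsigned ("H") and once as signed two's complement ("h").
raw = struct.pack("<H", bits)
(as_unsigned,) = struct.unpack("<H", raw)
(as_signed,) = struct.unpack("<h", raw)

print(as_unsigned)  # 65535 -> top of the unsigned 16-bit range (0..65535)
print(as_signed)    # -1    -> the same bits read as signed (-32768..32767)

# The most significant bit acts as the sign bit in the signed reading:
print(bits >> 15)   # 1, so the two's complement interpretation is negative
```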
https://en.wikipedia.org/wiki/Vi%C3%A8te%27s%20formula
In mathematics, Viète's formula is the following infinite product of nested radicals representing twice the reciprocal of the mathematical constant : It can also be represented as: The formula is named after François Viète, who published it in 1593. As the first formula of European mathematics to represent an infinite process, it can be given a rigorous meaning as a limit expression, and marks the beginning of mathematical analysis. It has linear convergence, and can be used for calculations of , but other methods before and since have led to greater accuracy. It has also been used in calculations of the behavior of systems of springs and masses, and as a motivating example for the concept of statistical independence. The formula can be derived as a telescoping product of either the areas or perimeters of nested polygons converging to a circle. Alternatively, repeated use of the half-angle formula from trigonometry leads to a generalized formula, discovered by Leonhard Euler, that has Viète's formula as a special case. Many similar formulas involving nested roots or infinite products are now known. Significance François Viète (1540–1603) was a French lawyer, privy councillor to two French kings, and amateur mathematician. He published this formula in 1593 in his work Variorum de rebus mathematicis responsorum, liber VIII. At this time, methods for approximating to (in principle) arbitrary accuracy had long been known. Viète's own method can be interpreted as a variation of an idea of Archimedes of approximating the circumference of a circle by the perimeter of a many-sided polygon, used by Archimedes to find the approximation By publishing his method as a mathematical formula, Viète formulated the first instance of an infinite product known in mathematics, and the first example of an explicit formula for the exact value of . As the first representation in European mathematics of a number as the result of an infinite process rather than of a finite calculation,
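The displayed product did not survive extraction; Viète's formula expresses 2/π as the infinite product (√2/2)·(√(2+√2)/2)·(√(2+√(2+√2))/2)⋯. The short Python sketch below (my own illustration; the function and variable names are mine) iterates the nested radicals and shows the linear convergence mentioned above, with each additional factor contributing roughly a fixed number of correct digits.

```python
# A minimal sketch (not from the article) of Viète's product
#   2/pi = (sqrt(2)/2) * (sqrt(2+sqrt(2))/2) * (sqrt(2+sqrt(2+sqrt(2)))/2) * ...
import math

def viete_pi(n_terms: int) -> float:
    """Approximate pi using the first n_terms factors of Viète's product."""
    product = 1.0
    radical = 0.0
    for _ in range(n_terms):
        radical = math.sqrt(2.0 + radical)   # nested radical sqrt(2 + previous)
        product *= radical / 2.0             # accumulate the partial product for 2/pi
    return 2.0 / product

for n in (5, 10, 20):
    approx = viete_pi(n)
    print(f"{n:2d} terms: {approx:.15f}  (error {abs(approx - math.pi):.2e})")
```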
https://en.wikipedia.org/wiki/Thomas%20Tooke
Thomas Tooke (28 February 1774 – 26 February 1858) was an English economist known for writing on money and economic statistics. After Tooke's death the Statistical Society endowed the Tooke Chair of economics at King's College London, and a Tooke Prize. In business, he served several terms between 1840 and 1852 as governor of the Royal Exchange Corporation. Likewise, he served for several terms as chairman of the St Katharine's Docks company. He was also an early director of the London and Birmingham Railway. Life Born at Kronstadt on 29 February 1774, he was the eldest son of William Tooke, at that time chaplain to the British factory there. Thomas began his professional life at the age of fifteen in a house of business at St Petersburg, and subsequently became a partner in the London firms of Stephen Thornton & Co., and Astell, Tooke, & Thornton. He took no serious part in discussion of economic questions until 1819, when he gave evidence before committees of both Houses of Parliament on the resumption of cash payments by the Bank of England. Tooke was one of the earliest supporters of the free trade movement which assumed the form in the petition of the merchants of the City of London presented to the House of Commons by Alexander Baring, on 8 May 1820. This document was drawn up by Tooke; and the circumstances which led to its preparation are described in the sixth volume of his History of Prices. Lord Liverpool's government, especially through William Huskisson after 1828, moved in the direction sought. It was to support the principles of the merchants' petition that Tooke, with David Ricardo, Robert Malthus, James Mill, and others, founded the Political Economy Club in April 1821. From the beginning Tooke took part in its discussions, and continued to attend its meetings to the end of his life. Out of controversy over paper money emerged the Bank Charter Act 1844, the main object of which was to prevent the over-issue of notes. Tooke was opposed to the pr
https://en.wikipedia.org/wiki/Laser-induced%20breakdown%20spectroscopy
Laser-induced breakdown spectroscopy (LIBS) is a type of atomic emission spectroscopy which uses a highly energetic laser pulse as the excitation source. The laser is focused to form a plasma, which atomizes and excites samples. The formation of the plasma only begins when the focused laser achieves a certain threshold for optical breakdown, which generally depends on the environment and the target material. 2000s developments From 2000 to 2010, the U.S. Army Research Laboratory (ARL) researched potential extensions to LIBS technology, which focused on hazardous material detection. Applications investigated at ARL included the standoff detection of explosive residues and other hazardous materials, plastic landmine discrimination, and material characterization of various metal alloys and polymers. Results presented by ARL suggest that LIBS may be able to discriminate between energetic and non-energetic materials. Research Broadband high-resolution spectrometers were developed in 2000 and commercialized in 2003. Designed for material analysis, the spectrometer allowed the LIBS system to be sensitive to chemical elements in low concentration. ARL LIBS applications studied from 2000 to 2010 included: Tested for detection of Halon alternative agents Tested a field-portable LIBS system for the detection of lead in soil and paint Studied the spectral emission of aluminum and aluminum oxides from bulk aluminum in different bath gases Performed kinetic modeling of LIBS plumes Demonstrated the detection and discrimination of geological materials, plastic landmines, explosives, and chemical and biological warfare agent surrogates ARL LIBS prototypes studied during this period included: Laboratory LIBS setup Commercial LIBS system Man-portable LIBS device Standoff LIBS system developed for 100+ m detection and discriminate on of explosive residues. 2010s developments LIBS is one of several analytical techniques that can be deployed in the field as oppose
https://en.wikipedia.org/wiki/5040%20%28number%29
5040 is a factorial (7!), a superior highly composite number, abundant number, colossally abundant number and the number of permutations of 4 items out of 10 choices (10 × 9 × 8 × 7 = 5040). It is also one less than a square, making (7, 71) a Brown number pair. Philosophy Plato mentions in his Laws that 5040 is a convenient number to use for dividing many things (including both the citizens and the land of a city-state or polis) into lesser parts, making it an ideal number for the number of citizens (heads of families) making up a polis. He remarks that this number can be divided by all the (natural) numbers from 1 to 12 with the single exception of 11 (however, it is not the smallest number to have this property; 2520 is). He rectifies this "defect" by suggesting that two families could be subtracted from the citizen body to produce the number 5038, which is divisible by 11. Plato also took notice of the fact that 5040 can be divided by 12 twice over. Indeed, Plato's repeated insistence on the use of 5040 for various state purposes is so evident that Benjamin Jowett, in the introduction to his translation of Laws, wrote, "Plato, writing under Pythagorean influences, seems really to have supposed that the well-being of the city depended almost as much on the number 5040 as on justice and moderation." Jean-Pierre Kahane has suggested that Plato's use of the number 5040 marks the first appearance of the concept of a highly composite number, a number with more divisors than any smaller number. Number theoretical If is the divisor function and is the Euler–Mascheroni constant, then 5040 is the largest of 27 known numbers for which this inequality holds: . This is somewhat unusual, since in the limit we have: Guy Robin showed in 1984 that the inequality fails for all larger numbers if and only if the Riemann hypothesis is true. Interesting notes 5040 has exactly 60 divisors, counting itself and 1. 5040 is the largest factorial (7! = 5040) that is also a high
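The inequality left blank in the extract above is, in standard statements of Robin's theorem, σ(n) ≥ e^γ · n · ln ln n. Independently of that, several of the elementary claims about 5040 are easy to verify; the following Python sketch (my own check, not from the article) confirms the factorial, the Brown pair, the divisibility pattern Plato noted, and the divisor count.

```python
# Quick verification (my own sketch, not from the article) of some claims about 5040.
import math

n = 5040
assert n == math.factorial(7)                     # 7! = 5040
assert n + 1 == 71 ** 2                           # one less than a square: (7!, 71) Brown pair

# Divisible by every number from 1 to 12 except 11 (Plato's observation)
not_dividing = [k for k in range(1, 13) if n % k != 0]
assert not_dividing == [11]

# Exactly 60 divisors, counting 1 and 5040 itself
divisors = [d for d in range(1, n + 1) if n % d == 0]
assert len(divisors) == 60

# 2520 is the smallest number with the same 1..12-except-11 divisibility property
smallest = next(m for m in range(1, n + 1)
                if all(m % k == 0 for k in range(1, 13) if k != 11))
assert smallest == 2520
print("all checks passed")
```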
https://en.wikipedia.org/wiki/Forward-confirmed%20reverse%20DNS
Forward-confirmed reverse DNS (FCrDNS), also known as full-circle reverse DNS, double-reverse DNS, or iprev, is a networking parameter configuration in which a given IP address has both forward (name-to-address) and reverse (address-to-name) Domain Name System (DNS) entries that match each other. This is the standard configuration expected by the Internet standards supporting many DNS-reliant protocols. David Barr published an opinion in RFC 1912 (Informational) recommending it as best practice for DNS administrators, but there are no formal requirements for it codified within the DNS standard itself. An FCrDNS verification can create a weak form of authentication that there is a valid relationship between the owner of a domain name and the owner of the network that has been given an IP address. While weak, this authentication is strong enough that it can be used for whitelisting purposes, because spammers and phishers cannot usually bypass this verification when they use zombie computers for email spoofing. That is, the reverse DNS might verify, but it will usually be part of a domain other than the claimed domain name. Using an ISP's mail server as a relay may solve the reverse DNS problem, because the requirement is only that the forward and reverse lookups for the sending relay match; they do not have to be related to the from-field or sending domain of the messages it relays. Other methods for establishing a relation between an IP address and a domain in email are the Sender Policy Framework (SPF) and the MX record. ISPs that will not or cannot configure reverse DNS will generate problems for hosts on their networks, by virtue of being unable to support applications or protocols that require the reverse DNS to agree with the corresponding A (or AAAA) record. ISPs that cannot or will not provide reverse DNS ultimately will be limiting the ability of their client base to use Internet services they provide effectively and securely. Applications Most e-mail mail transfer a
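A minimal sketch (my own illustration, not from the article) of the forward-confirmed reverse DNS check using Python's standard socket module: look up the PTR name for an address, resolve that name back to addresses, and confirm the original address appears among them. Real mail servers typically perform the equivalent lookups through their resolver libraries and may also handle IPv6/AAAA records, which this sketch omits.

```python
# Minimal FCrDNS sketch (my own, not from the article): reverse-resolve an IP,
# then forward-resolve the returned name and confirm the original IP is listed.
import socket

def forward_confirmed_reverse_dns(ip: str) -> bool:
    try:
        hostname, _aliases, _addrs = socket.gethostbyaddr(ip)   # PTR lookup
    except socket.herror:
        return False                                            # no reverse entry at all
    try:
        _name, _aliases, forward_ips = socket.gethostbyname_ex(hostname)  # A lookup
    except socket.gaierror:
        return False
    return ip in forward_ips

# Example usage; any publicly resolvable IP will do (requires network access).
print(forward_confirmed_reverse_dns("8.8.8.8"))
```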
https://en.wikipedia.org/wiki/Energy%20density
In physics, energy density is the amount of energy stored in a given system or region of space per unit volume. It is sometimes confused with energy per unit mass which is properly called specific energy or . Often only the useful or extractable energy is measured, which is to say that inaccessible energy (such as rest mass energy) is ignored. In cosmological and other general relativistic contexts, however, the energy densities considered are those that correspond to the elements of the stress-energy tensor and therefore do include mass energy as well as energy densities associated with pressure. Energy per unit volume has the same physical units as pressure and in many situations is synonymous. For example, the energy density of a magnetic field may be expressed as and behaves like a physical pressure. Likewise, the energy required to compress a gas to a certain volume may be determined by multiplying the difference between the gas pressure and the external pressure by the change in volume. A pressure gradient describes the potential to perform work on the surroundings by converting internal energy to work until equilibrium is reached. Overview There are different types of energy stored in materials, and it takes a particular type of reaction to release each type of energy. In order of the typical magnitude of the energy released, these types of reactions are: nuclear, chemical, electrochemical, and electrical. Nuclear reactions take place in stars and nuclear power plants, both of which derive energy from the binding energy of nuclei. Chemical reactions are used by organisms to derive energy from food and by automobiles to derive energy from gasoline. Liquid hydrocarbons (fuels such as gasoline, diesel and kerosene) are today the densest way known to economically store and transport chemical energy at a large scale (1 kg of diesel fuel burns with the oxygen contained in ≈15 kg of air). Electrochemical reactions are used by most mobile devices such as laptop
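The magnetic-field expression alluded to above is usually written u = B²/(2μ₀), which indeed carries the units of pressure (1 J/m³ = 1 Pa). A small Python sketch (my own numbers, not from the article) evaluating it for a few field strengths:

```python
# Sketch (my own, not from the article): energy density of a magnetic field,
# u = B^2 / (2 * mu_0), expressed in J/m^3, which is numerically the same as Pa.
import math

MU_0 = 4e-7 * math.pi   # vacuum permeability in T*m/A (approximate classical value)

def magnetic_energy_density(b_tesla: float) -> float:
    """Energy per unit volume stored in a magnetic field of strength B."""
    return b_tesla ** 2 / (2.0 * MU_0)

for b in (0.5, 1.0, 10.0):   # roughly: strong permanent magnet up to an MRI-class field
    u = magnetic_energy_density(b)
    print(f"B = {b:5.1f} T  ->  u = {u:,.0f} J/m^3 (numerically equal in Pa)")
```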
https://en.wikipedia.org/wiki/Amorphea
Amorphea is a taxonomic supergroup that includes the basal Amoebozoa and Obazoa. That latter contains the Opisthokonta, which includes the Fungi, Animals and the Choanomonada, or Choanoflagellates. The taxonomic affinities of the members of this clade were originally described and proposed by Thomas Cavalier-Smith in 2002. The International Society of Protistologists, the recognised body for taxonomy of protozoa, recommended in 2012 that the term Unikont be changed to Amorphea because the name "Unikont" is based on a hypothesized synapomorphy that the ISOP authors and other scientists later rejected. It includes amoebozoa, opisthokonts, and Apusomonada. Taxonomic revisions within this group Thomas Cavalier-Smith proposed two new phyla: Sulcozoa, which consists of the subphyla Apusozoa (Apusomonadida and Breviatea), and Varisulca, which includes the subphyla Diphyllatea, Discocelida, Mantamonadidae, Planomonadida and Rigifilida. Further work by Cavalier-Smith showed that Sulcozoa is paraphyletic. Apusozoa also appears to be paraphyletic. Varisulca has been redefined to include planomonads, Mantamonas and Collodictyon. A new taxon has been created - Glissodiscea - for the planomonads and Mantamonas. Again, the validity of this revised taxonomy awaits confirmation. Amoebozoa seems to be monophyletic with two major branches: Conosa and Lobosa. Conosa is divided into the aerobic infraphylum Semiconosia (Mycetozoa and Variosea) and secondarily anaerobic Archamoebae. Lobosa consists entirely of non-flagellated lobose amoebae and has been divided into two classes: Discosea, which have flattened cells, and Tubulinea, which has predominantly tube-shaped pseudopodia. Clade The group includes eukaryotic cells that, for the most part, have a single emergent flagellum, or are amoebae with no flagella. The unikonts include opisthokonts (animals, fungi, and related forms) and Amoebozoa. By contrast, other well-known eukaryotic groups, which more often have two emergent flage
https://en.wikipedia.org/wiki/Voltage-gated%20calcium%20channel
Voltage-gated calcium channels (VGCCs), also known as voltage-dependent calcium channels (VDCCs), are a group of voltage-gated ion channels found in the membrane of excitable cells (e.g., muscle, glial cells, neurons, etc.) with a permeability to the calcium ion Ca2+. These channels are slightly permeable to sodium ions, so they are also called Ca2+–Na+ channels, but their permeability to calcium is about 1000-fold greater than to sodium under normal physiological conditions. At physiologic or resting membrane potential, VGCCs are normally closed. They are activated (i.e.: opened) at depolarized membrane potentials and this is the source of the "voltage-gated" epithet. The concentration of calcium (Ca2+ ions) is normally several thousand times higher outside the cell than inside. Activation of particular VGCCs allows a Ca2+ influx into the cell, which, depending on the cell type, results in activation of calcium-sensitive potassium channels, muscular contraction, excitation of neurons, up-regulation of gene expression, or release of hormones or neurotransmitters. VGCCs have been immunolocalized in the zona glomerulosa of normal and hyperplastic human adrenal, as well as in aldosterone-producing adenomas (APA), and in the latter T-type VGCCs correlated with plasma aldosterone levels of patients. Excessive activation of VGCCs is a major component of excitotoxicity, as severely elevated levels of intracellular calcium activates enzymes which, at high enough levels, can degrade essential cellular structures. Structure Voltage-gated calcium channels are formed as a complex of several different subunits: α1, α2δ, β1-4, and γ. The α1 subunit forms the ion-conducting pore while the associated subunits have several functions including modulation of gating. Channel subunits There are several different kinds of high-voltage-gated calcium channels (HVGCCs). They are structurally homologous among varying types; they are all similar, but not structurally identical. In the la
https://en.wikipedia.org/wiki/Xfire
Xfire was a proprietary freeware instant messaging service for gamers that also served as a game server browser with various other features. It was available for Microsoft Windows. Xfire was originally developed by Ultimate Arena based in Menlo Park, California. On January 3, 2014, it had over 24 million registered users. Xfire's partnership with Livestream allowed users to broadcast live video streams of their current game to an audience. The Xfire website also maintained a "Top Ten" games list, ranking games by the number of hours Xfire users spend playing each game every day. World of Warcraft had been the most played game for many years, but was surpassed by League of Legends on June 20, 2011. Social.xfire.com was a community site for Xfire users, allowing them to upload screenshots, photos and videos and to make contacts. Xfire hosted events every month, which included debates, game tournaments, machinima contests, and chat sessions with Xfire or game developers. Xfire's web based social media was discontinued on June 12, 2015, and the messaging function was shut down on June 27, 2015. The last of Xfire's services were shut down on April 30, 2016. History Xfire, Inc. was founded in 2002 by Dennis "Thresh" Fong, Mike Cassidy, Max Woon, and David Lawee. The company was formerly known as Ultimate Arena, but changed its name to Xfire when its desktop client Xfire became more popular and successful than its gaming website. The first version of the Xfire desktop client was code-named Scoville, which was first developed in 2003 by Garrett Blythe, Chris Kirmse, Mike Judge, and others. The services ability to track game play hours and quickly launch web games, compared to other services at the time quickly gained it popularity. On April 25, 2006, Xfire was acquired by Viacom in a US$102 million deal. In September 2006, Sony was misinterpreted to have announced that Xfire would be used for the PlayStation 3. The confusion came when one PlayStation 3 game, Untold
https://en.wikipedia.org/wiki/Hypermetabolism
Hypermetabolism is defined as an elevated resting energy expenditure (REE) > 110% of predicted REE. Hypermetabolism is accompanied by a variety of internal and external symptoms, most notably extreme weight loss, and can also be a symptom in itself. This state of increased metabolic activity can signal underlying issues, especially hyperthyroidism. Patients with fatal familial insomnia, a strictly hereditary disorder, also present with hypermetabolism; however, this universally fatal disorder is exceedingly rare, with only a few known cases worldwide. The drastic impact of the hypermetabolic state on patient nutritional requirements is often understated or overlooked as well. Signs and symptoms Symptoms may last for days, weeks, or months until the disorder is healed. The most apparent sign of hypermetabolism is an abnormally high intake of calories followed by continuous weight loss. Internal symptoms of hypermetabolism include: peripheral insulin resistance, elevated catabolism of protein, carbohydrates and triglycerides, and a negative nitrogen balance in the body. Outward symptoms of hypermetabolism may include: Weight loss Anemia Fatigue Elevated heart rate Irregular heartbeat Insomnia Dysautonomia Shortness of breath Muscle weakness Excessive sweating Pathophysiology During the acute phase, the liver redirects protein synthesis, causing up-regulation of certain proteins and down-regulation of others. Measuring the serum level of proteins that are up- and down-regulated during the acute phase can reveal extremely important information about the patient's nutritional state. The most important up-regulated protein is C-reactive protein, which can rapidly increase 20- to 1,000-fold during the acute phase. Hypermetabolism also causes expedited catabolism of carbohydrates, proteins, and triglycerides in order to meet the increased metabolic demands. Diagnosis Quantitation by indirect calorimetry, as opposed to the Harris-Benedict e
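The quantitative criterion in the opening sentence (measured REE above 110% of the predicted value) is straightforward to express; the sketch below (my own illustration with hypothetical numbers) takes the predicted REE as a given input rather than hard-coding any particular reference equation such as the Harris-Benedict equation mentioned under Diagnosis.

```python
# Sketch (my own, not from the article): the ">110% of predicted REE" criterion.
def is_hypermetabolic(measured_ree_kcal: float, predicted_ree_kcal: float) -> bool:
    """True if measured resting energy expenditure exceeds 110% of the predicted value."""
    return measured_ree_kcal > 1.10 * predicted_ree_kcal

# Hypothetical numbers: predicted REE from a reference equation, measured REE
# from indirect calorimetry.
predicted = 1500.0
for measured in (1550.0, 1700.0):
    ratio = measured / predicted
    print(f"measured {measured:.0f} kcal/day = {ratio:.0%} of predicted ->",
          "hypermetabolic" if is_hypermetabolic(measured, predicted) else "within normal range")
```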
https://en.wikipedia.org/wiki/Ajax%20%28programming%29
Ajax (also AJAX ; short for "asynchronous JavaScript and XML") is a set of web development techniques that uses various web technologies on the client-side to create asynchronous web applications. With Ajax, web applications can send and retrieve data from a server asynchronously (in the background) without interfering with the display and behaviour of the existing page. By decoupling the data interchange layer from the presentation layer, Ajax allows web pages and, by extension, web applications, to change content dynamically without the need to reload the entire page. In practice, modern implementations commonly utilize JSON instead of XML. Ajax is not a technology, but rather a programming concept. HTML and CSS can be used in combination to mark up and style information. The webpage can be modified by JavaScript to dynamically display—and allow the user to interact with the new information. The built-in XMLHttpRequest object is used to execute Ajax on webpages, allowing websites to load content onto the screen without refreshing the page. Ajax is not a new technology, nor is it a new language. Instead, it is existing technologies used in a new way. History In the early-to-mid 1990s, most Websites were based on complete HTML pages. Each user action required a complete new page to be loaded from the server. This process was inefficient, as reflected by the user experience: all page content disappeared, then the new page appeared. Each time the browser reloaded a page because of a partial change, all the content had to be re-sent, even though only some of the information had changed. This placed additional load on the server and made bandwidth a limiting factor in performance. In 1996, the iframe tag was introduced by Internet Explorer; like the object element, it can load or fetch content asynchronously. In 1998, the Microsoft Outlook Web Access team developed the concept behind the XMLHttpRequest scripting object. It appeared as XMLHTTP in the second version o
https://en.wikipedia.org/wiki/Lauryl%20tryptose%20broth
Lauryl tryptose broth (LTB) is a selective growth medium (broth) for coliforms. Lauryl tryptose broth is used for the most probable number test of coliforms in waters, effluent or sewage. It acts as a confirmation test for lactose fermentation with gas production. Sodium lauryl sulfate inhibits organisms other than coliforms. Formula in grams/litre (g/L): Tryptose: 20.0, Lactose: 5.0, Sodium chloride: 5.0, Dipotassium phosphate: 2.75, Potassium dihydrogen phosphate: 2.75, Sodium dodecyl sulfate: 0.1; pH 6.8 ± 0.2. Samples positive for gas production are transferred to brilliant green lactose bile broth (BGLB) to detect the ability to grow in the presence of bile and produce gas at 95 °F (35 °C) for 48 hours. The absence of gas production in 48 hours is considered a negative test for coliforms. Gas production serves as the indicator in both the presumptive and the confirmatory test. Fecal coliforms are distinguished from coliforms by growth in EC broth at 113.9 °F (45.5 °C) for 24 hours.
https://en.wikipedia.org/wiki/PowerBuilder
PowerBuilder is an integrated development environment owned by SAP since the acquisition of Sybase in 2010. On July 5, 2016, SAP and Appeon entered into an agreement whereby Appeon, an independent company, would be responsible for developing, selling, and supporting PowerBuilder. Over the years, PowerBuilder has been updated with new standards. In 2010, a major upgrade of PowerBuilder was released to provide support for the Microsoft .NET Framework. In 2014, support was added for OData, dockable windows, and 64-bit native applications. In 2019 support was added for rapidly creating RESTful Web APIs and non-visual .NET assemblies using the C# language and the .NET Core framework. And PowerScript client app development was revamped with new UI technologies and cloud architecture. Appeon has been releasing new features every 6-12 month cycles, which per the product roadmap focus on four key focus areas: sustaining core features, modernizing application UI, improving developer productivity, and incorporating more Cloud technology. Features PowerBuilder has a native data-handling object called a DataWindow, which can be used to create, edit, and display data from a database. This object gives the programmer a number of tools for specifying and controlling user interface appearance and behavior, and also provides simplified access to database content and JSON or XML from Web services. To some extent, the DataWindow frees the programmer from considering the differences between Database Management Systems from different vendors. DataWindow can display data using multiple presentation styles and can connect to various data sources. Usage PowerBuilder is used primarily for building business CRUD applications. Although new software products are rarely built with PowerBuilder, many client-server ERP products and line-of-business applications built in the late 1980s to early 2000s with PowerBuilder still provide core database functions for large enterprises in governmen
https://en.wikipedia.org/wiki/Third-party%20reproduction
Third-party reproduction or donor-assisted reproduction is any human reproduction in which DNA or gestation is provided by a third party or donor other than the one or two parents who will raise the resulting child. This goes beyond the traditional father–mother model, and the third party's involvement is limited to the reproductive process and does not extend into the raising of the child. Third-party reproduction is used by couples unable to reproduce by traditional means, by same-sex couples, and by men and women without a partner. Where donor gametes are provided by a donor, the donor will be a biological parent of the resulting child, but in third party reproduction, he or she will not be the caring parent. Categories One can distinguish several categories, some of which may be combined: Sperm donation. A donor provides sperm in order to father a child for a third-party female. Egg donation. A donor provides ova to a woman or couple in order for the egg to be fertilized and implanted in the recipient woman. Spindle transfer. A third party's mitochondrial DNA is transferred to the future mother's ovum. This is used to prevent mitochondrial disease. Embryo donation with embryos which were originally created for a genetic mother's assisted pregnancy. Once the genetic mother has completed her own treatment, she may donate unused embryos for use by a third party. or where embryos are specifically created for donation using donor eggs and donor sperm. Embryo adoption. Embryos created during a donor's assisted pregnancy are adopted to be implanted in a third party recipient. Surrogacy. An embryo is gestated in a third party's uterus (traditional surrogacy) or a woman is inseminated in order to gestate a child for a third party (straight surrogacy). Gestation Gestation is typically initiated by artificial insemination in the case of sperm donation and by embryo transfer after in vitro fertilisation (IVF) in the case of egg donation, embryo donation, and sur
https://en.wikipedia.org/wiki/Allergen%20immunotherapy
Allergen immunotherapy, also known as desensitization or hypo-sensitization, is a medical treatment for environmental allergies, such as insect bites, and asthma. Immunotherapy involves exposing people to larger and larger amounts of allergens in an attempt to change the immune system's response. Meta-analyses have found that injections of allergens under the skin are effective in the treatment in allergic rhinitis in children and in asthma. The benefits may last for years after treatment is stopped. It is generally safe and effective for allergic rhinitis, allergic conjunctivitis, allergic forms of asthma, and stinging insects. The evidence also supports the use of sublingual immunotherapy against rhinitis and asthma, but it is less strong. In this form the allergen is given under the tongue and people often prefer it to injections. Immunotherapy is not recommended as a stand-alone treatment for asthma. Side effects during sublingual immunotherapy treatment are usually local and mild and can often be eliminated by adjusting the dosage. Anaphylaxis during sublingual immunotherapy treatment has occurred on rare occasions. Potential side effects related to subcutaneous immunotherapy treatment for asthma and allergic rhinoconjunctivitis include mild or moderate skin or respiratory reactions. Severe side effects such as anaphylaxis during subcutaneous immunotherapy treatment are relatively uncommon. Discovered by Leonard Noon and John Freeman in 1911, allergen immunotherapy is the only medicine known to tackle not only the symptoms but also the causes of respiratory allergies. A detailed diagnosis is necessary to identify the allergens involved. Types Subcutaneous Subcutaneous immunotherapy (SCIT), also known as allergy shots, is the historical route of administration and consists of injections of allergen extract, which must be performed by a medical professional. Subcutaneous immunotherapy protocols generally involve weekly injections during a build-up phase, f
https://en.wikipedia.org/wiki/Fran%C3%A7ois%20de%20Loys
Louis François Fernand Hector de Loys (1892–1935) was a Swiss oil geologist. He is remembered today for the claim that he discovered a previously unknown primate, De Loys's ape, during a 1920 oil survey expedition in Venezuela. The animal he photographed has long been established with considerable confidence to be a spider monkey, and the identification as a new species is generally regarded as a hoax. De Loys's ape Between 1917 and 1920, de Loys and his men were searching for oil around the River Tarra and Río Catatumbo at the Venezuela–Colombia border in South America (Bernard Heuvelmans, 1959). This mountainous region, the Sierra de Perijá, was heavily forested and at that time was inhabited by the 'dangerous' Motilone Indians. One day, while de Loys and his crew were resting near the Tarra River deep in the jungle, two monkeys suddenly stepped out of the woods, screaming and shaking branches. They were holding onto bushes, walked upright, then broke off several branches, waving them like weapons. When the monkeys threw their own excrement at the terrified de Loys and his exhausted companions, the men grabbed their guns and fired at the more aggressive-looking male, but killed the female. The male stepped aside, though wounded, and disappeared in the forest. Since de Loys and his people had never seen such large monkeys, he wanted to preserve the carcass. When de Loys finally returned home with the only remaining evidence, a picture which he had placed into his travel notebook, he basically forgot about his encounter with the unknown monkeys. Years later, his friend – French anthropologist George Montandon – flipped the pages of de Loys' notebook and discovered the photo. Although Professor Montandon was familiar with most of the monkeys discovered to that date, he had never seen one like that in de Loys' picture. Montandon speculated that the large monkey in the picture was a very human-like creature. It had no tail. Its size, according to de Loys, was
https://en.wikipedia.org/wiki/Law%20of%20total%20cumulance
In probability theory and mathematical statistics, the law of total cumulance is a generalization to cumulants of the law of total probability, the law of total expectation, and the law of total variance. It has applications in the analysis of time series. It was introduced by David Brillinger. It is most transparent when stated in its most general form, for joint cumulants, rather than for cumulants of a specified order for just one random variable. In general, we have where κ(X1, ..., Xn) is the joint cumulant of n random variables X1, ..., Xn, and the sum is over all partitions of the set { 1, ..., n } of indices, and "B ∈ ;" means B runs through the whole list of "blocks" of the partition , and κ(Xi : i ∈ B | Y) is a conditional cumulant given the value of the random variable Y. It is therefore a random variable in its own right—a function of the random variable Y. Examples The special case of just one random variable and n = 2 or 3 Only in case n = either 2 or 3 is the nth cumulant the same as the nth central moment. The case n = 2 is well-known (see law of total variance). Below is the case n = 3. The notation μ3 means the third central moment. General 4th-order joint cumulants For general 4th-order cumulants, the rule gives a sum of 15 terms, as follows: Cumulants of compound Poisson random variables Suppose Y has a Poisson distribution with expected value λ, and X is the sum of Y copies of W that are independent of each other and of Y. All of the cumulants of the Poisson distribution are equal to each other, and so in this case are equal to λ. Also recall that if random variables W1, ..., Wm are independent, then the nth cumulant is additive: We will find the 4th cumulant of X. We have: We recognize the last sum as the sum over all partitions of the set { 1, 2, 3, 4 }, of the product over all blocks of the partition, of cumulants of W of order equal to the size of the block. That is precisely the 4th raw moment of W (see cumulan
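The displayed equation did not survive extraction; from the description that follows it (a sum over all partitions π of the index set, with B running over the blocks of π, of cumulants of conditional cumulants given Y), the law is usually written as below, with the n = 2 case reducing to the familiar law of total variance.

```latex
% General form described in the text: sum over partitions \pi of {1,...,n},
% with B running over the blocks of \pi, of joint cumulants of conditional cumulants.
\kappa(X_1,\dots,X_n)
  = \sum_{\pi}
    \kappa\bigl(\,\kappa(X_i : i \in B \mid Y)\;:\;B \in \pi\,\bigr)

% Special case n = 2 (the law of total variance):
\kappa(X,X) = \operatorname{Var}(X)
            = \operatorname{E}\!\bigl[\operatorname{Var}(X \mid Y)\bigr]
            + \operatorname{Var}\!\bigl(\operatorname{E}[X \mid Y]\bigr)
```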
https://en.wikipedia.org/wiki/Gimenez%20stain
The Gimenez staining technique uses biological stains to detect and identify bacterial infections in tissue samples. Although largely superseded by techniques like Giemsa staining, the Gimenez technique may be valuable for detecting certain slow-growing or fastidious bacteria. Basic fuchsin stain in aqueous solution with phenol and ethanol colours many bacteria (both Gram-positive and Gram-negative) red, magenta, or pink. A malachite green counterstain gives a blue-green background cast to the surrounding tissue. See also Histology List of common staining protocols Microscopy
https://en.wikipedia.org/wiki/Guidon%20%28United%20States%29
In the United States Army, Navy, Air Force, Marine Corps and Coast Guard, a guidon is a military standard or flag that company/battery/troop or platoon-sized detachments carry to signify their unit designation and branch/corps affiliation or the title of the individual who carries it. A basic guidon can be rectangular, but sometimes has a triangular portion removed from the fly (known as "swallow-tailed"). Significance The significance and importance of the guidon is that it represents the unit and its commanding officer. When the commander is in service, his or her guidon is displayed for everyone to see. When the commander leaves for the day, the guidon is taken down. It is an honor to be the guidon carrier for a unit, known as a "guidon bearer" or "guide". He or she stands in front of the unit alongside of the commander (or the commander's representative) and is the rallying point for troops to fall into formation when the order is given. In drill and ceremonies, the guidon bearers and commander are always in front of the formation. The guidon is a great source of pride for the unit, and several military traditions have developed around it, stemming back from ancient times. Any sort of disgrace toward the guidon is considered a dishonor of the unit as a whole, and punishment is typical. For example, should the guidon bearer drop the guidon, they must fall with it and perform punishment, often in the form of push-ups. Other units may attempt to steal the guidon to demoralize or antagonize the unit. Veteran soldiers know not to give up the guidon to anyone outside their unit, but new recruits may be tempted into relinquishing it by a superior, especially during a unit run. By branch Army As described in Chapter 6 of Army Regulation 840-10, guidons are swallow-tailed marker flags in branch-of-service colors, measuring at the hoist by at the fly, with the swallow-tail end forked . Previously guidons were made of wool bunting, and if serviceable these older ve
https://en.wikipedia.org/wiki/Procerus%20muscle
The procerus muscle (or pyramidalis nasi) is a small pyramidal slip of muscle deep to the superior orbital nerve, artery and vein. Procerus is Latin, meaning tall or extended. Structure The procerus muscle arises by tendinous fibers from the fascia covering the lower part of the nasal bone and upper part of the lateral nasal cartilage. It is inserted into the skin over the lower part of the forehead between the two eyebrows on either side of the midline, its fibers merging with those of the frontalis muscle. Nerve supply The procerus muscle is supplied by the temporal branch of the facial nerve (VII). It may also be supplied by other branches of the facial nerve, which can be varied. Function The procerus muscle helps to pull that part of the skin between the eyebrows downwards, which assists in flaring the nostrils. It can also contribute to an expression of anger. Procerus is supplied by temporal and lower zygomatic branches from the facial nerve. A supply from its buccal branch has also been described. Its contraction can produce transverse wrinkles. Clinical significance Procerus sign Dystonia of the procerus muscle is involved in the procerus sign, which is indicative of progressive supranuclear palsy (PSP). Denervation The procerus muscle may be denervated to reduce furrow lines around the glabella caused by frowning. This may be for cosmetic purposes. Surgery can be used to transect the temporal branch of the facial nerve, although other branches of the facial nerve may also need to be cut.
https://en.wikipedia.org/wiki/Modal%20testing
Modal testing is the form of vibration testing of an object whereby the natural (modal) frequencies, modal masses, modal damping ratios and mode shapes of the object under test are determined. A modal test consists of an acquisition phase and an analysis phase. The complete process is often referred to as a Modal Analysis or Experimental Modal Analysis. There are several ways to do modal testing but impact hammer testing and shaker (vibration tester) testing are commonplace. In both cases energy is supplied to the system with a known frequency content. Where structural resonances occur there will be an amplification of the response, clearly seen in the response spectra. Using the response spectra and force spectra, a transfer function can be obtained. The transfer function (or frequency response function (FRF)) is often curve fitted to estimate the modal parameters; however, there are many methods of modal parameter estimation and it is the topic of much research. Impact Hammer Modal Testing An ideal impact to a structure is a perfect impulse, which has an infinitely small duration, causing a constant amplitude in the frequency domain; this would result in all modes of vibration being excited with equal energy. The impact hammer test is designed to replicate this; however, in reality a hammer strike cannot last for an infinitely small duration, but has a known contact time. The duration of the contact time directly influences the frequency content of the force, with a larger contact time causing a smaller range of bandwidth. A load cell is attached to the end of the hammer to record the force. Impact hammer testing is ideal for small lightweight structures. However, as the size of the structure increases, issues can occur due to a poor signal-to-noise ratio, which is common on large civil engineering structures. Shaker Modal Testing A shaker is a device that excites the object or structure according to its amplified input signal. Several input signa
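A compact sketch (my own illustration, not from the article) of the acquisition-to-FRF step described above: given one sampled force pulse and the measured response, dividing the response spectrum by the force spectrum gives an estimate of the transfer function H(f). Practical modal tests average several impacts and use H1/H2 estimators and coherence checks, which are omitted here; the signals below are synthetic stand-ins for measured data.

```python
# Sketch (my own, not from the article): estimating a frequency response function
# H(f) = X(f) / F(f) from one hammer impact and the measured response.
import numpy as np

fs = 2048.0                               # sampling rate in Hz (hypothetical)
t = np.arange(0, 2.0, 1.0 / fs)

# Synthetic stand-ins for measured data: a 2 ms half-sine hammer pulse and the
# decaying response of a single mode near 50 Hz.
force = np.where(t < 0.002, np.sin(np.pi * t / 0.002), 0.0)
response = np.exp(-4.0 * t) * np.sin(2 * np.pi * 50.0 * t)

F = np.fft.rfft(force)                    # force spectrum
X = np.fft.rfft(response)                 # response spectrum
H = X / (F + 1e-12)                       # FRF estimate; tiny epsilon avoids divide warnings
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

# Look for the resonance in the band where the pulse actually injects energy
# (well below the pulse's first spectral null).
band = freqs < 500.0
peak = freqs[band][np.argmax(np.abs(H[band]))]
print(f"estimated resonance near {peak:.1f} Hz")
```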
https://en.wikipedia.org/wiki/L%C3%A9on%20Brillouin
Léon Nicolas Brillouin (; August 7, 1889 – October 4, 1969) was a French physicist. He made contributions to quantum mechanics, radio wave propagation in the atmosphere, solid-state physics, and information theory. Early life Brillouin was born in Sèvres, near Paris, France. His father, Marcel Brillouin, grandfather, Éleuthère Mascart, and great-grandfather, Charles Briot, were physicists as well. Education From 1908 to 1912, Brillouin studied physics at the École Normale Supérieure, in Paris. From 1911 he studied under Jean Perrin until he left for the Ludwig Maximilian University of Munich (LMU), in 1912. At LMU, he studied theoretical physics with Arnold Sommerfeld. Just a few months before Brillouin's arrival at LMU, Max von Laue had conducted his experiment showing X-ray diffraction in a crystal lattice. In 1913, he went back to France to study at the University of Paris and it was in this year that Niels Bohr submitted his first paper on the Bohr model of the hydrogen atom. From 1914 until 1919, during World War I, he served in the military, developing the valve amplifier with G. A. Beauvais. At the conclusion of the war, he returned to the University of Paris to continue his studies with Paul Langevin, and was awarded his Docteur ès science in 1920. Brillouin's thesis jury was composed of Langevin, Marie Curie, and Jean Perrin and his thesis topic was on the quantum theory of solids. In his thesis, he proposed an equation of state based on the atomic vibrations (phonons) that propagate through it. He also studied the propagation of monochromatic light waves and their interaction with acoustic waves, i.e., scattering of light with a frequency change, which became known as Brillouin scattering. Career After receipt of his doctorate, Brillouin became the scientific secretary of the reorganized Journal de Physique et le Radium. In 1932, he became associate director of the physics laboratories at the Collège de France. In 1926, Gregor Wentzel, Hendrik Kra
https://en.wikipedia.org/wiki/Index%20of%20genetics%20articles
Genetics (from Ancient Greek γενετικός genetikos, “genitive”, and that from γένεσις genesis, “origin”), a discipline of biology, is the science of heredity and variation in living organisms. Articles (arranged alphabetically) related to genetics include: # A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
https://en.wikipedia.org/wiki/Planetary%20protection
Planetary protection is a guiding principle in the design of an interplanetary mission, aiming to prevent biological contamination of both the target celestial body and the Earth in the case of sample-return missions. Planetary protection reflects both the unknown nature of the space environment and the desire of the scientific community to preserve the pristine nature of celestial bodies until they can be studied in detail. There are two types of interplanetary contamination. Forward contamination is the transfer of viable organisms from Earth to another celestial body. Back contamination is the transfer of extraterrestrial organisms, if they exist, back to the Earth's biosphere. History The potential problem of lunar and planetary contamination was first raised at the International Astronautical Federation VIIth Congress in Rome in 1956. In 1958 the U.S. National Academy of Sciences (NAS) passed a resolution stating, “The National Academy of Sciences of the United States of America urges that scientists plan lunar and planetary studies with great care and deep concern so that initial operations do not compromise and make impossible forever after critical scientific experiments.” This led to creation of the ad hoc Committee on Contamination by Extraterrestrial Exploration (CETEX), which met for a year and recommended that interplanetary spacecraft be sterilized, and stated, “The need for sterilization is only temporary. Mars and possibly Venus need to remain uncontaminated only until study by manned ships becomes possible”. In 1959, planetary protection was transferred to the newly formed Committee on Space Research (COSPAR). COSPAR in 1964 issued Resolution 26 affirming that: In 1967, the US, USSR, and UK ratified the United Nations Outer Space Treaty. The legal basis for planetary protection lies in Article IX of this treaty: This treaty has since been signed and ratified by 104 nation-states. Another 24 have signed but not ratified. All the current spa
https://en.wikipedia.org/wiki/Rybczynski%20theorem
The Rybczynski theorem was developed in 1955 by the Polish-born English economist Tadeusz Rybczynski (1923–1998). It states that at constant relative goods prices, a rise in the endowment of one factor will lead to a more than proportional expansion of the output in the sector which uses that factor intensively, and an absolute decline of the output of the other good. In the context of the Heckscher–Ohlin model of international trade, open trade between two regions often leads to changes in relative factor supplies between the regions. This can lead to an adjustment in the quantities and types of outputs between the two regions. The Rybczynski theorem explains the outcome from an increase in one of these factor's supply as well as the effect on the output of a good which depends on an opposing factor. Eventually, across both countries, market forces would return the system toward equality of production in regard to input prices such as wages (the state of factor price equalization). Relationship between endowments and outputs The Rybczynski theorem displays how changes in an endowment affects the outputs of the goods when full employment is sustained. The theorem is useful in analyzing the effects of capital investment, immigration and emigration within the context of a Heckscher-Ohlin model. Consider the diagram below, depicting a labour constraint in red and a capital constraint in blue. Suppose production occurs initially on the production possibility frontier (PPF) at point A. Suppose there is an increase in the labour endowment. This will cause an outward shift in the labour constraint. The PPF and thus production will shift to point B. Production of clothing, the labour-intensive good, will rise from C1 to C2. Production of cars, the capital-intensive good, will fall from S1 to S2. If the endowment of capital rose the capital constraint would shift out causing an increase in car production and a decrease in clothing production. Since the labour constraint
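A numerical sketch (my own illustration with hypothetical unit input requirements, not from the article) of the full-employment conditions behind the diagram: with fixed per-unit labour and capital requirements for clothing (labour-intensive) and cars (capital-intensive), a 10% rise in the labour endowment raises clothing output by more than 10% and lowers car output in absolute terms, as the theorem states.

```python
# Numerical sketch (hypothetical coefficients, not from the article) of the
# Rybczynski theorem: solve the full-employment conditions
#   a_LC*C + a_LS*S = L   (labour)
#   a_KC*C + a_KS*S = K   (capital)
# before and after a 10% rise in the labour endowment L.
import numpy as np

A = np.array([[2.0, 1.0],    # labour needed per unit of clothing (C) and cars (S)
              [1.0, 3.0]])   # capital needed per unit of clothing and cars

def outputs(L, K):
    return np.linalg.solve(A, np.array([L, K]))   # (clothing, cars)

L0, K0 = 100.0, 150.0
c0, s0 = outputs(L0, K0)
c1, s1 = outputs(1.10 * L0, K0)      # labour endowment grows by 10%

print(f"clothing: {c0:.1f} -> {c1:.1f}  ({(c1 / c0 - 1):+.1%})")  # rises by more than 10%
print(f"cars:     {s0:.1f} -> {s1:.1f}  ({(s1 / s0 - 1):+.1%})")  # falls in absolute terms
```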
https://en.wikipedia.org/wiki/Light%20beam
A light beam or beam of light is a directional projection of light energy radiating from a light source. Sunlight forms a light beam (a sunbeam) when filtered through media such as clouds, foliage, or windows. To artificially produce a light beam, a lamp and a parabolic reflector is used in many lighting devices such as spotlights, car headlights, PAR Cans, and LED housings. Light from certain types of laser has the smallest possible beam divergence. Visible light beams From the side, a beam of light is only visible if part of the light is scattered by objects: tiny particles like dust, water droplets (mist, fog, rain), hail, snow, or smoke, or larger objects such as birds. If there are many objects in the light path, then it appears as a continuous beam, but if there are only a few objects, then the light is visible as a few individual bright points. In any case, this scattering of light from a beam, and the resultant visibility of a light beam from the side, is known as the Tyndall effect. Visibility from the side as side effect Flashlight (UK 'Torch'), beam directed by hand Headlight, forward beam; the lamp is mounted in a vehicle, or on the forehead of a person, e.g. built into a helmet Lighthouse, beam sweeping around horizontally Searchlight, beam directed at something Visibility from the side as purpose For the purpose of visibility of light beams from the side, sometimes a haze machine or fog machine is used. The difference between the two is that the fog itself is also a visual effect. Laser lighting display- Laser beams are often used for visual effects, often in combination with music. Searchlights are often used in advertising, for instance by automobile dealers; the beam of light is visible over a large area, and (at least in theory) interested persons can find the dealer or store by following the beam to its source. This also used to be done for movie premieres; the waving searchlight beams are still to be seen as a design element in the
https://en.wikipedia.org/wiki/138%20%28number%29
138 (one hundred [and] thirty-eight) is the natural number following 137 and preceding 139. In mathematics 138 is a sphenic number, and the smallest product of three primes such that in base 10, the third prime is a concatenation of the other two: . It is also a one-step palindrome in decimal (138 + 831 = 969). 138 has eight total divisors that generate an arithmetic mean of 36, which is the eighth triangular number. While the sum of the digits of 138 is 12, the product of its digits is 24. 138 is an Ulam number, the thirty-first abundant number, and a primitive (square-free) congruent number. It is the third 47-gonal number. As an interprime, 138 lies between the eleventh pair of twin primes (137, 139), respectively the 33rd and 34th prime numbers. It is the sum of two consecutive primes (67 + 71), and the sum of four consecutive primes (29 + 31 + 37 + 41). There are a total of 44 numbers that are relatively prime with 138 (and up to), while 22 is its reduced totient. 138 is the denominator of the twenty-second Bernoulli number (whose respective numerator, is 854513). A magic sum of 138 is generated inside four magic circles that features the first thirty-three non-zero integers, with a 9 in the center (first constructed by Yang Hui). The simplest Catalan solid, the triakis tetrahedron, produces 138 stellations (depending on rules chosen), 44 of which are fully symmetric and 94 of which are enantiomorphs. Using two radii to divide a circle according to the golden ratio yields sectors of approximately 138 degrees (the golden angle), and 222 degrees. In science The Saros number of the solar eclipse series which began on June 6, 1472, and will end on July 11, 2716. The duration of Saros series 138 is 1244 years, and it contains 70 solar eclipses 138 Tolosa is a brightly colored, stony main belt asteroid The New General Catalogue object NGC 138, a spiral galaxy in the constellation Pisces 138P/Shoemaker-Levy is a periodic comet in the Solar Sy
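Several of the arithmetic claims above are easy to check directly; the following Python sketch (my own verification, not from the article) confirms the sphenic factorisation and concatenation property, the one-step palindrome, the eight divisors with mean 36, and the totient of 44.

```python
# Verification sketch (my own, not from the article) of some claims about 138.
import math

n = 138
assert 2 * 3 * 23 == n                       # sphenic: product of three distinct primes
assert str(23) == str(2) + str(3)            # the third prime concatenates the other two

assert n + int(str(n)[::-1]) == 969          # 138 + 831 = 969 ...
assert str(969) == str(969)[::-1]            # ... which is a palindrome (one-step palindrome)

divisors = [d for d in range(1, n + 1) if n % d == 0]
assert len(divisors) == 8                    # 1, 2, 3, 6, 23, 46, 69, 138
assert sum(divisors) // len(divisors) == 36  # their arithmetic mean is 36

assert sum(1 for k in range(1, n + 1) if math.gcd(k, n) == 1) == 44  # Euler's totient
print("all checks passed")
```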
https://en.wikipedia.org/wiki/Screen-door%20effect
The screen-door effect (SDE) is a visual artifact of displays, where the fine lines separating pixels (or subpixels) become visible in the displayed image. This can be seen in digital projector images and regular displays under magnification or at close range, but increases in display resolutions have made this much less significant. More recently, the screen door effect has been an issue with virtual reality headsets and other head-mounted displays, because these are viewed at a much closer distance and stretch a single display across a much wider field of view. SDE in projectors In LCD and DLP projectors, SDE can be seen because the projected image is much larger than the display panel producing it, so the optics enlarge the fine lines between pixels, which are much smaller than the pixels themselves, to the point where they become visible. This results in an image that appears as if viewed through a fine screen or mesh such as those used on anti-insect screen doors. The screen door effect was noticed on the first digital projector: an LCD projector made in 1984 by Gene Dolgoff. To eliminate this artifact, Dolgoff invented depixelization, which used various optical methods to eliminate the visibility of the spaces between the pixels. The dominant method made use of a microlens array, wherein each micro-lens projected a slightly magnified image of the pixel behind it, filling in the previously visible spaces between pixels. In addition, when making a projector with a single, full-color LCD panel, an additional appearance of pixelation was visible due to the noticeability of green pixels (appearing bright) adjacent to red and blue pixels (appearing dark), forming a noticeable repeating light and dark pattern. Use of a micro-lens array at a slightly greater distance created new pixel images, with each "new" pixel being a summation of six neighboring sub-pixels (made up of two full color pixels, one above the other). Since there were as many micro-lenses as there were original pi
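Whether the inter-pixel gaps are visible is essentially a question of angular size compared with visual acuity (roughly one arcminute). The sketch below is a simplified geometric model; the pixel pitch, fill factor, and effective viewing distance are made-up numbers, and real headset optics are reduced to a single effective distance.

```python
import math

def gap_angular_size_arcmin(pixel_pitch_mm: float, fill_factor: float,
                            viewing_distance_mm: float) -> float:
    """Angular size, in arcminutes, of the unlit gap between adjacent pixels.

    fill_factor is the fraction of the pixel pitch that actually emits light;
    the remainder is the "screen door" line between pixels.
    """
    gap_mm = pixel_pitch_mm * (1.0 - fill_factor)
    return math.degrees(math.atan(gap_mm / viewing_distance_mm)) * 60

# Hypothetical panel: 0.05 mm pixel pitch, 80% fill factor, viewed through
# optics at an effective distance of about 40 mm. Gaps approaching one
# arcminute or more start to become noticeable.
print(round(gap_angular_size_arcmin(0.05, 0.80, 40.0), 2))   # ~0.86 arcmin
```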
https://en.wikipedia.org/wiki/HyperZ
HyperZ is the brand for a set of processing techniques developed by ATI Technologies and later Advanced Micro Devices and implemented in their Radeon GPUs. HyperZ was announced in November 2000 and was still available in the TeraScale-based Radeon HD 2000 Series and in current Graphics Core Next-based graphics products. On the Radeon R100-based cores, Radeon DDR through 7500, where HyperZ debuted, ATI claimed a 20% improvement in overall rendering efficiency. They stated that with HyperZ, Radeon could be said to offer 1.5 gigatexels per second fillrate performance instead of the card's apparent theoretical rate of 1.2 gigatexels. In testing it was shown that HyperZ did indeed offer a tangible performance improvement that allowed the Radeon, despite its lower theoretical fillrate, to keep up with the less efficient GeForce 2 GTS. Functionality HyperZ consists of three mechanisms: Z compression The Z-buffer is stored in a lossless compressed format to minimize Z-buffer bandwidth as Z reads and writes take place. The compression scheme ATI used on Radeon 8500 operated 20% more effectively than on the original Radeon and Radeon 7500. Fast Z clear Rather than writing zeros throughout the entire Z-buffer, and thus using the bandwidth of another full Z-buffer write, Fast Z Clear tags entire blocks of the Z-buffer as cleared, so that only these block flags need to be written rather than every individual Z value. On Radeon 8500, ATI claimed that this process could clear the Z-buffer up to approximately 64 times faster than on a card without Fast Z Clear. Hierarchical Z-buffer This feature allows the pixel being rendered to be checked against the Z-buffer before it actually arrives in the rendering pipelines. This allows useless pixels to be thrown out early (early Z reject), before the Radeon has to render them. Versions of HyperZ With each new microarchitecture, ATI has revised and improved the technology. HyperZ – R100 HyperZ II – R200 (8500-9250) HyperZ III – R300 in
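Two of the three mechanisms can be illustrated with a toy software model of a tiled depth buffer. This is only a conceptual sketch of Fast Z Clear (per-tile "cleared" flags) and hierarchical early-Z rejection (a per-tile maximum depth); it is not how ATI's hardware implements them, and the tile size and depth convention (1.0 = far plane) are assumptions made for the example.

```python
BLOCK = 8    # 8x8-pixel tiles (illustrative size)
FAR = 1.0    # depth convention for this sketch: 1.0 is the far plane, smaller is closer

class ToyZBuffer:
    def __init__(self, width, height):
        self.w, self.h = width, height
        self.z = [[FAR] * width for _ in range(height)]
        bw = (width + BLOCK - 1) // BLOCK
        bh = (height + BLOCK - 1) // BLOCK
        self.cleared = [[True] * bw for _ in range(bh)]    # Fast Z Clear flags
        self.block_max = [[FAR] * bw for _ in range(bh)]   # hierarchical Z data

    def fast_clear(self):
        # Fast Z Clear: flag every tile instead of rewriting every depth value.
        for by in range(len(self.cleared)):
            for bx in range(len(self.cleared[0])):
                self.cleared[by][bx] = True
                self.block_max[by][bx] = FAR

    def _tile_pixels(self, bx, by):
        for yy in range(by * BLOCK, min((by + 1) * BLOCK, self.h)):
            for xx in range(bx * BLOCK, min((bx + 1) * BLOCK, self.w)):
                yield xx, yy

    def test_and_write(self, x, y, depth):
        """Less-than depth test; returns True if the fragment survives."""
        bx, by = x // BLOCK, y // BLOCK
        # Hierarchical Z: reject fragments behind everything in the tile
        # without reading the per-pixel buffer (early Z reject).
        if depth >= self.block_max[by][bx]:
            return False
        if self.cleared[by][bx]:
            # The tile was only flagged as cleared; reset it lazily on first use.
            for xx, yy in self._tile_pixels(bx, by):
                self.z[yy][xx] = FAR
            self.cleared[by][bx] = False
        if depth < self.z[y][x]:
            self.z[y][x] = depth
            # Keep the tile's max depth current (recomputed naively here for clarity).
            self.block_max[by][bx] = max(self.z[yy][xx]
                                         for xx, yy in self._tile_pixels(bx, by))
            return True
        return False

zbuf = ToyZBuffer(16, 16)
zbuf.fast_clear()
print(zbuf.test_and_write(3, 3, 0.25))   # True: closer than the cleared far plane
print(zbuf.test_and_write(3, 3, 0.90))   # False: fails the per-pixel depth test
print(zbuf.test_and_write(5, 5, 1.00))   # False: rejected early, without a per-pixel read
```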
https://en.wikipedia.org/wiki/Hackenbush
Hackenbush is a two-player game invented by mathematician John Horton Conway. It may be played on any configuration of colored line segments connected to one another by their endpoints and to a "ground" line. Gameplay The game starts with the players drawing a "ground" line (conventionally, but not necessarily, a horizontal line at the bottom of the paper or other playing area) and several line segments such that each line segment is connected to the ground, either directly at an endpoint, or indirectly, via a chain of other segments connected by endpoints. Any number of segments may meet at a point and thus there may be multiple paths to ground. On their turn, a player "cuts" (erases) any line segment of their choice. Every line segment no longer connected to the ground by any path "falls" (i.e., gets erased). According to the normal play convention of combinatorial game theory, the first player who is unable to move loses. Hackenbush boards can consist of finitely many (in the case of a "finite board") or infinitely many (in the case of an "infinite board") line segments. The existence of an infinite number of line segments does not violate the game theory assumption that the game can be finished in a finite amount of time, provided that there are only finitely many line segments directly "touching" the ground. On an infinite board, depending on its layout, the game can continue forever if there are infinitely many segments touching the ground. Variants In the original folklore version of Hackenbush, any player is allowed to cut any edge: as this is an impartial game, it is comparatively straightforward to give a complete analysis using the Sprague–Grundy theorem. Thus the versions of Hackenbush of interest in combinatorial game theory are more complex partisan games, meaning that the options (moves) available to one player would not necessarily be the ones available to the other player if it were their turn to move given the same position.
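The "cut and fall" rule can be expressed compactly as a graph-reachability check. The sketch below uses an ad hoc representation (segments as pairs of vertex labels, with a distinguished "ground" vertex); it models a single move only, not a full game analysis.

```python
from collections import defaultdict

def cut(segments, segment):
    """Remove `segment`, then drop every segment with no remaining path to ground."""
    remaining = [s for s in segments if s != segment]
    adj = defaultdict(set)
    for a, b in remaining:
        adj[a].add(b)
        adj[b].add(a)
    # Find all vertices still reachable from the ground line.
    reachable, stack = set(), ["ground"]
    while stack:
        v = stack.pop()
        if v not in reachable:
            reachable.add(v)
            stack.extend(adj[v])
    # A segment survives only if both its endpoints are still connected to ground.
    return [(a, b) for a, b in remaining if a in reachable and b in reachable]

# A small position: a two-segment "stalk" plus a separate single segment.
position = [("ground", "a"), ("a", "b"), ("ground", "c")]
print(cut(position, ("ground", "a")))   # [('ground', 'c')]: the rest of the stalk falls
```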
https://en.wikipedia.org/wiki/Word%20%28computer%20architecture%29
In computing, a word is the natural unit of data used by a particular processor design. A word is a fixed-sized datum handled as a unit by the instruction set or the hardware of the processor. The number of bits or digits in a word (the word size, word width, or word length) is an important characteristic of any specific processor design or computer architecture. The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word-sized and the largest datum that can be transferred to and from the working memory in a single operation is a word in many (not all) architectures. The largest possible address size, used to designate a location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used). Documentation for older computers with fixed word size commonly states memory sizes in words rather than bytes or characters. The documentation sometimes uses metric prefixes correctly, sometimes with rounding, e.g., 65 kilowords (KW) meaning 65,536 words, and sometimes uses them incorrectly, with kilowords (KW) meaning 1024 words (2^10) and megawords (MW) meaning 1,048,576 words (2^20). With standardization on 8-bit bytes and byte addressability, stating memory sizes in bytes, kilobytes, and megabytes with powers of 1024 rather than 1000 has become the norm, although there is some use of the IEC binary prefixes. Several of the earliest computers (and a few modern ones as well) use binary-coded decimal rather than plain binary, typically having a word size of 10 or 12 decimal digits, and some early decimal computers have no fixed word length at all. Early binary systems tended to use word lengths that were some multiple of 6 bits, with the 36-bit word being especially common on mainframe computers. The introduction of ASCII led to the move to systems with word lengths that were a multiple of 8-bit
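The prefix ambiguity described above is easy to see numerically. The snippet below is only illustrative; the 36-bit word size is one of the historical sizes mentioned in the text.

```python
n_words = 65536
print(n_words / 1000)   # 65.536 -> quoted as "65 kilowords" with a (rounded) metric prefix
print(n_words / 1024)   # 64.0   -> quoted as "64 KW" with kilo misused for 2**10

# The byte equivalent depends on the machine's word size; for a 36-bit word:
print(n_words * 36 / 8, "bytes")   # 294912.0 bytes
```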
https://en.wikipedia.org/wiki/Federated%20Naming%20Service
In computing, the Federated Naming Service (FNS) or XFN (X/Open Federated Naming) is a system for uniting various name services under a single interface for the basic naming operations. It is produced by X/Open and included in various Unix operating systems, primarily Solaris versions 2.5 to 9. The purpose of XFN and FNS is to allow applications to use widely heterogeneous naming services (such as NIS, DNS and so on) via a single interface, to avoid duplication of programming effort. Unlike the similar LDAP, neither XFN nor FNS was ever popular or widely used. FNS was last included in Solaris 9 and was not included with Solaris 10. External links and references Overview of FNS (Solaris 9 man page) Overview of the XFN interface (Solaris 9 man page) X/Open Federated Naming - specification for uniform naming interfaces between multiple naming systems (Elizabeth A. Martin, Hewlett-Packard Journal, December 1995) Federated Naming Service Programming Guide (Sun Microsystems 816–1470–10, September 2002) Sun Microsystems software Identity management Solaris software
https://en.wikipedia.org/wiki/Quality%20of%20results
Quality of Results (QoR) is a term used in evaluating technological processes. It is generally represented as a vector of components, with the special case of a one-dimensional value serving as a synthetic measure. History The term was coined by the Electronic Design Automation (EDA) industry in the late 1980s. QoR was meant to be an indicator of the performance of integrated circuits (chips), and initially measured the area and speed of a chip. As the industry evolved, new chip parameters were considered for coverage by the QoR, illustrating new areas of focus for chip designers (for example power dissipation, power efficiency, routing overhead, etc.). Because of the broad scope of quality assessment, QoR eventually evolved into a generic vector representation comprising a number of different values, where the meaning of each vector value was explicitly specified in the QoR analysis document. Currently the term is gaining popularity in other sectors of technology, with each sector using its own appropriate components. Current trends in EDA Originally, the QoR was used to specify absolute values such as chip area, power dissipation, speed, etc. (for example, a QoR could be specified as a {100 MHz, 1 W, 1 mm²} vector), and could only be used for comparing the different achievements of a single design specification. The current trend among designers is to include normalized values in the QoR vector, such that they will remain meaningful for a longer period of time (as technologies change), and/or across broad classes of design. For example, one often uses – as a QoR component – a number representing the ratio between the area required by a combinational logic block and the area required by a simple logic gate, this number often being referred to as "relative density of combinational logic". In this case, a relative density of five will generally be accepted as a good quality of result – relative density of combinational logic component – while a relative density of fifty will
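As a concrete illustration of a QoR vector containing both absolute and normalized components, consider the sketch below. The field names and all figures are illustrative assumptions, not an EDA-standard format; the relative-density helper follows the definition given in the text above.

```python
from dataclasses import dataclass

@dataclass
class QoR:
    clock_mhz: float          # absolute: achieved clock speed
    power_w: float            # absolute: power dissipation
    area_mm2: float           # absolute: chip area
    relative_density: float   # normalized: combinational block area / simple gate area

def relative_density(block_area_um2: float, simple_gate_area_um2: float) -> float:
    """Ratio of a combinational logic block's area to that of a simple gate."""
    return block_area_um2 / simple_gate_area_um2

# Hypothetical numbers: a 40 um^2 combinational block in a library whose
# simplest gate occupies 8 um^2 gives a relative density of 5, described
# above as a generally acceptable value.
qor = QoR(clock_mhz=100.0, power_w=1.0, area_mm2=1.0,
          relative_density=relative_density(40.0, 8.0))
print(qor)
```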
https://en.wikipedia.org/wiki/Life%20table
In actuarial science and demography, a life table (also called a mortality table or actuarial table) is a table which shows, for each age, what the probability is that a person of that age will die before their next birthday ("probability of death"). In other words, it represents the survivorship of people from a certain population. They can also be explained as a long-term mathematical way to measure a population's longevity. Tables have been created by demographers including John Graunt, Reed and Merrell, Keyfitz, and Greville. There are two types of life tables used in actuarial science. The period life table represents mortality rates during a specific time period for a certain population. A cohort life table, often referred to as a generation life table, is used to represent the overall mortality rates of a certain population's entire lifetime; all members of the cohort must have been born during the same specific time interval. A cohort life table is more frequently used because it can predict expected changes in the mortality rates of a population in the future. This type of table also analyzes patterns in mortality rates that can be observed over time. Both of these types of life tables are created based on an actual population from the present, as well as an educated prediction of the experience of a population in the near future. To observe a cohort's true average life expectancy directly, roughly 100 years would need to pass, and by then the data would be of little use because healthcare continually advances. Other life tables in historical demography may be based on historical records (although these often undercount infants and understate infant mortality), on comparison with other regions that have better records, and on mathematical adjustments for varying mortality levels and life expectancies at birth. From this starting point, a number of inferences can be derived. The probability of surviving any particular year of age The remaining life expectancy for peo
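The basic life table columns can be computed directly from per-age probabilities of death. The sketch below uses the standard recurrences with the simplifying assumption that deaths occur at mid-year; the q values are invented purely for illustration.

```python
def life_table(qx, radix=100_000):
    """Given q[x] = probability of dying between ages x and x+1, return the
    survivorship column l[x] and the remaining life expectancy e[x]."""
    lx = [float(radix)]
    for q in qx:
        lx.append(lx[-1] * (1 - q))                  # survivors to the next age
    # Person-years lived in each age interval, assuming deaths at mid-year.
    Lx = [(lx[i] + lx[i + 1]) / 2 for i in range(len(qx))]
    ex = [sum(Lx[x:]) / lx[x] for x in range(len(qx))]
    return lx, ex

# A toy five-age table with artificially high mortality; q at the last age is 1.
qx = [0.01, 0.02, 0.05, 0.30, 1.00]
lx, ex = life_table(qx)
print([round(v) for v in lx])        # [100000, 99000, 97020, 92169, 64518, 0]
print([round(v, 2) for v in ex])     # remaining life expectancy at each age
```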
https://en.wikipedia.org/wiki/Evolution%20and%20the%20Theory%20of%20Games
Evolution and the Theory of Games is a book by the British evolutionary biologist John Maynard Smith on evolutionary game theory. The book was initially published in December 1982 by Cambridge University Press. Overview In the book, John Maynard Smith summarises work on evolutionary game theory that had developed in the 1970s, to which he made several important contributions. The book is also noted for being well written and not overly mathematically challenging. Its main contribution is the introduction of the evolutionarily stable strategy (ESS) concept, which states that for a set of behaviours to be conserved over evolutionary time, they must be the most profitable course of action when common, so that no alternative behaviour can invade. So, for instance, suppose that in a population of frogs, males fight to the death over breeding ponds. This would be an ESS if any one cowardly frog that does not fight to the death always fares worse (in fitness terms, of course). A more likely scenario is one where fighting to the death is not an ESS because a frog might arise that will stop fighting if it realises that it is going to lose. This frog would then reap the benefits of fighting, but not the ultimate cost. Hence, fighting to the death would easily be invaded by a mutation that causes this sort of "informed fighting." Much complexity can be built from this, and Maynard Smith excels at explaining it in clear prose and with simple mathematics. Reception See also Evolutionary biology
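The ESS condition itself is simple enough to state in code. The check below implements the standard pure-strategy ESS definition; the payoff numbers are invented to mirror the frog example and are not taken from the book.

```python
def is_ess(payoff, s):
    """payoff[a][b] = payoff to a player using strategy a against an opponent using b.

    s is an ESS if, for every other strategy t, either E(s,s) > E(t,s), or
    E(s,s) == E(t,s) and E(s,t) > E(t,t).
    """
    for t in payoff:
        if t == s:
            continue
        if payoff[t][s] > payoff[s][s]:
            return False                    # t does strictly better against s
        if payoff[t][s] == payoff[s][s] and payoff[t][t] >= payoff[s][t]:
            return False                    # t drifts in and is not driven out
    return True

# "fight" = fight to the death; "assess" = give up once losing becomes clear.
payoff = {
    "fight":  {"fight": -2, "assess": 4},
    "assess": {"fight":  1, "assess": 2},
}
print(is_ess(payoff, "fight"))    # False: assessors invade a population of fighters
print(is_ess(payoff, "assess"))   # False: with these payoffs fighters also invade assessors,
                                  # so (as in the Hawk-Dove game) the ESS would be a mixed strategy
```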